text
stringlengths
454
608k
url
stringlengths
17
896
dump
stringclasses
91 values
source
stringclasses
1 value
word_count
int64
101
114k
flesch_reading_ease
float64
50
104
Extracts from Torgeir Bakken's posts Torgeir does not like the idea of keeping up a web page. (I wouldn't either, he does too much). I keep collecting his code from posts in various Microsoft forums since he manages to wrap up some useful problem solutions, and I've finally decided to post the ones I use frequently here so I don't need to do a complete scan of my script collection or heavy Google browsing when I need to use one. The free Microsoft Software Services (SUS), Win2k and WinXP only: With SUS, there are a couple of downsides though... 1) If the users local account on the client computer is a non-admin (a.k.a. not members of the Administrators group), you cannot prevent SUS rebooting the client PC when the patches have been installed. After a reboot notice and a 5 minutes pause, the computer just reboots. As a non-admin, there is no way to supress it. 2) The client can only check for updates once a day, if the client misses the update check "time window" (a.k.a. it was offline/powered off), it will "reschedule" the check to next day. 3) The SUS service needs to run on a Windows 2000 Server (non DC) with IIS installed. There is a separate newsgroup for SUS as well: microsoft.public.softwareupdatesvcs --------------------------------------------------------------- Set oShell = CreateObject("WScript.Shell") sCmd = "C:\q323759.exe /q:a /r:n" oShell.Run sCmd, 1, True Note that the user running the logon script needs to be a local administrator. If you want to do a remote installation that is unattended (no user input) and single file install file based, you can use PsExec.exe in the free PsTools suite from Sysinternals for this (will work on Win NT 4.0, Win2k and WinXP). To install e.g. q323759.exe remotely, do like this: psexec.exe \\some_computer -c C:\q323759.exe /q:a /r:n The file "behind" -c must exist on the computer running PsExec. PsExec will copy the specified program to the remote system for execution and run it with the additional switches. After the install, the file q323759.exe will be deleted from the remote computer. If not domain, username and password can be supplied: psexec.exe \\some_computer -u user -p pwd -c C:\q323759.exe /q:a /r:n See here for a complete VBScript example that installs SW on a list of computers in a text file using PsExec: From: Torgeir Bakken (Torgeir.Bakken-spam@hydro.com) Subject: Re: Auto-Installing WMI Newsgroups: microsoft.public.win32.programmer.wmi Date: 2002-08-12 20:04:56 PST [Local copy of this script - AKA] ' Script that adds a padlock icon in Quick Launch Tray ' that locks the workstation (Win2k/WinXP) Set oShell = CreateObject("WScript.Shell") sCurrUsrPath = oShell.ExpandEnvironmentStrings("%UserProfile%") Set oShortCut = oShell.CreateShortcut(sCurrUsrPath _ & "\Application Data\Microsoft\Internet Explorer\" _ & "Quick Launch\Lock Workstation2.lnk") oShortCut.TargetPath = "rundll32.exe" oShortCut.Arguments = "user32.dll,LockWorkStation" oShortCut.IconLocation = "shell32.dll,47" oShortCut.Save Set oOS = GetObject("winmgmts:").InstancesOf("Win32_OperatingSystem") For Each obj in oOS ' if WinXP, use SWbemDateTime for date/time formatting! sLastBoot = ConvWbemTime(obj.LastBootUpTime) sNow = ConvWbemTime(obj.LocalDateTime) ' uptime = DateDiff("s",CDate(sLastBoot),CDate(sNow)) uptime = DateDiff("s",obj.LastBootUpTime,obj.LocalDateTime) Next Function ConvWbemTime(IntervalFormat) Dim sYear, sMonth, sDay, sHour, sMinutes, sSeconds sYear = mid(IntervalFormat, 1, 4) sMonth = mid(IntervalFormat, 5, 2) sDay = mid(IntervalFormat, 7, 2) sHour = mid(IntervalFormat, 9, 2) sMinutes = mid(IntervalFormat, 11, 2) sSeconds = mid(IntervalFormat, 13, 2) ' Returning format yyyy-mm-dd hh:mm:ss ConvWbemTime = sYear & "-" & sMonth & "-" & sDay & " " _ & sHour & ":" & sMinutes & ":" & sSeconds End Function wscript.echo CpuID Function CpuID() ' Obtaining CPU identification using WMI ' Script author: Torgeir Bakken ' Function returns a number of type integer: ' 0 ==> Unknown CPU ' 1 ==> Pentium or Pentium MMX ' 2 ==> Pentium Pro/II or Celeron ' 3 ==> Pentium III ' 4 ==> Pentium 4 ' based on this table: ' Family Model Type ' 5 < 4 Pentium ' 5 >= 4 Pentium MMX ' 6 < 3 Pentium Pro ' 6 >= 3 < 5 Pentium II ' 6 == 5 Pentium II or Celeron ' 6 == 6 Celeron ' 6 >= 7 Pentium III '15 >= 0 Pentium 4 ' Family 5 and 6 identification based on information from here: ' Dim oWMI, oCpu, sCpuDescr, aCpuDescr, i, iFamily, iModel, iVersion Set oWMI = GetObject("winmgmts:") For Each oCpu in oWMI.InstancesOf("Win32_Processor") sCpuDescr = oCpu.Description Next aCpuDescr = Split(sCpuDescr) For i = 0 to Ubound(aCpuDescr) If LCase(aCpuDescr(i)) = "family" Then iFamily = CInt(aCpuDescr(i+1)) End If If LCase(aCpuDescr(i)) = "model" Then iModel = CInt(aCpuDescr(i+1)) End If Next iVersion = (iFamily * 100) + iModel Select Case True Case iFamily = 5 ' Pentium or Pentium MMX CpuID = 1 Case iVersion < 607 ' Pentium Pro/II or Celeron CpuID = 2 Case iFamily = 6 ' Pentium III CpuID = 3 Case iFamily = 15 ' Pentium 4 CpuID = 4 Case Else ' Unknown CPU CpuID = 0 End Select End Function WScript.Echo InstalledApplications(".") Function InstalledApplications(node) Const HKLM = &H80000002 'HKEY_LOCAL_MACHINE Set oRegistry = GetObject("winmgmts://" _ & node & "/root/default:StdRegProv") sBaseKey = _ "SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\" iRC = oRegistry.EnumKey(HKLM, sBaseKey, arSubKeys) For Each sKey In arSubKeys iRC = oRegistry.GetStringValue( _ HKLM, sBaseKey & sKey, "DisplayName", sValue) If iRC <> 0 Then oRegistry.GetStringValue _ HKLM, sBaseKey & sKey, "QuietDisplayName", sValue End If If sValue <> "" Then InstalledApplications = _ InstalledApplications & sValue & vbCrLf End If Next End Function Set oNicStatus = _ GetObject("winmgmts:root\wmi").InstancesOf("MSNdis_MediaConnectStatus") For Each oNic In oNicStatus If oNic.NdisMediaConnectStatus = 0 Then sNicStatus = "connected" Elseif oNic.NdisMediaConnectStatus = 1 Then sNicStatus = "disconnected" Else sNicStatus = "unknown" End If WScript.Echo "Status of " & oNic.InstanceName & ": " & sNicStatus Next From a Google post. Note: this works great for modifying a variable for all Terminal Server sessions as well. The example below creates a variable named %RouterStatus% with the value of UP in the SYSTEM environment on a server named Mort. - AKA SetWmiEnvVar "MORT", "<SYSTEM>", "RouterStatus", "UP" Sub SetWmiEnvVar( Host, sContext, sVarName, sValue) ' Sets an environment variable on an arbitrary host via WMI ' NOTE: This will of course not affect currently running ' processes. ' For a systemwide value, use <SYSTEM> as context ' for default user, use <DEFAULT> ' otherwis,e use the specific user's name ' Get the class object itself Dim EnvClass, EnvVarInst Set EnvClass = GetObject("WinMgmts://" & Host _ & "/root/cimv2:Win32_Environment") ' Make a new instance of that class Set EnvVarInst = EnvClass.SpawnInstance_ ' File in the key props and props of interest on that instance EnvVarInst.UserName = sContext EnvVarInst.Name = sVarName EnvVarInst.VariableValue = sValue ' Write the new instance in to WMI EnvVarInst.Put_ End Sub You can call another VBScript using the Run method. The downside is that they cannot share code and variables (variables can of course be "transferred" using command line parameters or saving then in registry/in a file). Here is an example on how to run another script and pass it a parameter on the command line: Set WSHShell = CreateObject("WScript.Shell") WSHShell.Run "wscript c:\Test.vbs param1", , True In the Test.vbs file, you access the command line parameters with the WScript.Arguments object: Set oArgs = WScript.Arguments For i = 0 to oArgs.Count - 1 WScript.Echo oArgs(i) If you want to share code and/or variables, there are several other ways of doing this. The different methods all have their pros and cons. Here is a quick overview: 1) Read the second script into a textstream and run ExecuteGlobal or Execute on it. Pros: Can share all variables, functions and subs. Easy to implement. Cons: If you have an error in the included script, you will not get the line number where the error arose. Potential for namespace collisions (variable/procedure names) since all script elements share the same namespace. 2) Use a Windows Script Host File (.WSF) file and include scripts with script tag. Pros: Can share all variables, functions and subs. Easy to run both VBScript and JavaScript code together. If you have an error in a included script, you will get the line number where the error arose. Easy to implement. Cons: Potential for namespace collisions (variable/procedure names) since all script elements share the same namespace. 3) Use a Windows Script Component (.WSC, a COM component written in script) Pros You will get a COM interface to your script library (that also can be used by other programs) Method/property drop down list and syntax help in editors that supports typelibs No namespace collisions (variable/procedure names) since the script elements does not share the same namespace. Ideal for development environment with more than one developer Cons The most "complicated" solution *********************************************************** Here are some more details for each method: ___________________________________________________________ 1) Read the second script into a textstream and run ExecuteGlobal or Execute on it. Load the 2. script into the 1. script at runtime (on the fly) and execute it with ExecuteGlobal. They can then also share variables, functions etc. The sub below does this loading: Sub Include (sInstFile) Dim oFSO, f, s Set oFSO = CreateObject("Scripting.FileSystemObject") Set f = oFSO.OpenTextFile (sInstFile) s = f.ReadAll f.Close ExecuteGlobal s End Sub ___________________________________________________________ 2) Use a Windows Script Host File (.WSF) file and include scripts with script tag. Using Windows Script Files (.wsf) An example: <?xml version="1.0" ?> <package> <job> ' this will run "file1.vbs" <script language="VBScript" src="file1.vbs" /> ' this will run "file2.vbs" <script language="VBScript" src="file2.vbs" /> <script language="VBScript"> ' ' Some script code here if you want ... ' ' You can use variables, subs and functions ' from the included files. ' </script> </job> </package> ___________________________________________________________ 3) Use a Windows Script Component (.WSC). Use a Windows Script Component (.WSC) to get a COM interface for your VBScript "library". You can e.g. use Windows Script Component Wizard to create one. Take a look here for more info: Look up the Windows Script Components chapter in the docs.: WSH 5.6 documentation download: A simple WSC file example: From: Michael Harris \(MVP\) (mikhar-spam@mvps.org) Subject: Re: WSC and ProgID Versions Newsgroups: microsoft.public.scripting.scriptlets Date: 2001-10-01 07:04:22 PST If you want to create a WSC with Self-Registering Typelib, take a look at this article: From: Torgeir Bakken (Torgeir.Bakken-spam@hydro.com) Subject: Re: Creating a Self-Registering Typelib for a WSC Newsgroups: microsoft.public.scripting.wsh Date: 2002-06-09 09:40:06 PST More about WSF/WSC in the thread below: Subject: Implimenting a common.wsf file Also, if you go the WSC route, be sure not to have this line in the WSC file if you create a self-registering typelib: implements id="Behavior" type="Behavior" If you have it, the Write method for Scriptlet.TypeLib will fail. -- torgeir Microsoft MVP Scripting and WMI, Porsgrunn Norway Administration scripting examples and a ONLINE version of the 1328 page Scripting Guide:
http://www.mvps.org/scripting/people/bakkenalia.htm
crawl-001
refinedweb
1,784
57.67
Opened 6 months ago Closed 6 months ago Last modified 6 months ago #3860 closed Bug (No Bug) Declared var not detected Description #include <WinAPIInternals.au3> MsgBox(0,"",$WINVER) AU3Check 3.3.15.4 returns "$WINVER: possibly used before declaration." for this code, prod doesn't. Attachments (0) Change History (8) comment:1 Changed 6 months ago by Jos comment:2 follow-up: ↓ 4 Changed 6 months ago by anonymous Jos, you didn't use the correct token. $__WINVER comment:3 Changed 6 months ago by anonymous the variable is removed on the newest beta build comment:4 in reply to: ↑ 2 Changed 6 months ago by Jos comment:5 Changed 6 months ago by KaFu It’s indeed $__WINVER Tracs validator seems to eat the underscores, sorry for not realizing. Currently traveling, so mit sure about the latest prod version, but should be latest one. comment:6 Changed 6 months ago by KaFu Ah, var removed in Beta? Have to check that. comment:7 Changed 6 months ago by Jos - Component changed from Au3Check to Standard UDFs - Resolution set to No Bug - Status changed from new to closed Ok understand. So looks like the INTERNAL variable is removed from Beta, so you need to set it yourself using _WinAPI_GetVersion() Jos comment:8 Changed 6 months ago by KaFu Yep, confirmed, the INTERNAL variable was removed from Beta, so NO BUG. Guidelines for posting comments: - You cannot re-open a ticket but you may still leave a comment if you have additional information to add. - In-depth discussions should take place on the forum. For more information see the full version of the ticket guidelines here. AU3Check 3.3.14.5 returns the exact same error for me... which version are you running with production? Jos
https://www.autoitscript.com/trac/autoit/ticket/3860
CC-MAIN-2022-33
refinedweb
296
62.58
t = int(input())a,b=0,0prevp1,prevp2 = 0,0while(t>0): l = list(map(int,input().split())) t = int(input()) a,b=0,0 prevp1,prevp2 = 0,0 while(t>0): l = list(map(int,input().split()))> Got it to run using set+unordered_map, but now I wanted to try out the simpler while loop. #include <bits/stdc++.h> using namespace std; The solution would be right if we only had to print the id numbers, We also have to print the total no. before printing the id's. So you have to store each value in array then print ... The problems from have been moved to. The problems which were supposed to accept multiple solutions now have new checkers, and some wrong data has been fixed. So everything should hopefully be fine now. If you notice any issues please comment in the respective problem page. Notes:); ios_base::sync_with_stdio(false); cin.tie(NULL); To understand more about what they do, you can read Fast I/O for Competitive Programming Credits to Vinayak Sangar for the above. #include <iostream>#include <algorithm> using namespace std; #include <algorithm> The complexity of Count Sort is O(n). To what complexity can this solution be brought down to ? My code is O(nlogn). However, some people got O(n) without sorting. How is that possible ? My code is available at [C++] PYRAMID.
https://www.commonlounge.com/community/e448c00bce994d329e411139a57f9ce9/71a3e21ebfd6483193a3ba5fdc89f4e9
CC-MAIN-2019-47
refinedweb
228
69.18
ClothingList implements Clothing, List? Not! No language is perfect, but as languages go, Java comes as close as any I know. So what should one do when one likes a language a lot? Criticize it! One of Java's deficits is its lack of disambiguation of declared methods whose names conflict. Imagine you have a class, ClothingList that implements two interfaces, say List and Clothing. It's going to be a list of your closet contents, with the additional stipulation that the articles all fit you. (Yes, it's a kinda lame example. It's after midnight.) Both interfaces declare a method "size()". List.size() is of course intended to tell you the number of items in the list, while Clothing.size() is intended to tell you how big the articles are (medium, thank you). What we really want is a way to implement both methods and then distinguish between them. Java doesn't do this. It assumes that if two interfaces both declare a method with the same signature, then clearly they must be intended to do the same thing. And should they declare a different return type, then Java will not let you implement both interfaces at all. So this: public class ClothingList implements Clothing, List { public String size() { return null; } } interface Clothing { String size(); } interface List { int size(); } isn't even legal. This leave us with an unpleasant decision to make. If we control the interface Clothing, then we can just change names. If we don't, then we really can't use that interface at all! We could do some crazy stuff with delegates, but calling outside methods that expect a Clothing item is going to be awkward. Been there. Done that. Didn't like it. The point is that a language should support us in what we want to do. We shouldn't have to adjust our design to fit the language.
http://www.drdobbs.com/jvm/clothinglist-implements-clothing-list-no/228701488
CC-MAIN-2015-40
refinedweb
317
74.9
08 October 2012 05:25 [Source: ICIS news] SINGAPORE (ICIS)--?xml:namespace> The plants were restarted during the week of 30 September to 7 October as scheduled and have currently resumed normal operating capacity, the source added. The derivative plants include a 300,000 tonne/year linear low density polyethylene (LLDPE), a 300,000 tonne/year high density polyethylene (HDPE) and a 450,000 tonne/year polypropylene (PP) plants. The derivative plants and naphtha cracker were shut on 21 and 22 August respectively for regular maintenance. Prices are expected to be affected, as the plants’ restart will increase the supply of both the PE and PP in the market, sources said. However, the price direction is unclear, they added. SSTPC is a
http://www.icis.com/Articles/2012/10/08/9601722/sinopec-sabic-tianjin-petrochemical-restarts-cracker-pe-pp.html
CC-MAIN-2014-35
refinedweb
122
55.13
a list of the classes provided for compatibility with Qt3, see \l{Qt3 Support. compatclasses The compatclasses argument generates a list in alphabetical order of the support classes. It is normally used only to generate the Qt3 Support Classes page this way: / *! \page compatclasses.html \title Qt3 Support Classes \ingroup classlists \brief Enable porting of code from Qt 3 to Qt 4. These are the classes that Qt provides for compatibility with Qt 3. Most of these are provided by the Qt3Support module. \generatelist compatclasses * / A support class is identified in the \class comment with the \compat complete list of licenses in the documentation. Each license is identified using the \legalese command. This command is used to generate the Qt license information page this way: / *! \page licenses.html \title Other Licenses Used in Qt \ingroup licensing \brief Information about other licenses used for Qt components and third-party code. Qt contains some code that is not provided under the \l{GNU General Public License (GPL)}, \l{GNU Lesser General Public License (LGPL)} or the \l{Qt Commercial Edition}{Qt Commercial License Agreement}, but rather under specific licenses from the original authors. Some pieces of code were developed by The Qt Company and others originated from third parties. This page lists the licenses used, names the authors, and links to the places where it is used. The Qt Company gratefully acknowledges these and other contributions to Qt. We recommend that programs that use Qt also acknowledge these contributions, and quote these license statements in an appendix to the documentation. See also: \l{Licenses for Fonts Used in Qt for Embedded Linux} \generatelist legalese * /. / *! The Qt 3 support library is provided to keep old source code working. In addition to the \c Qt3Support classes, Qt 4 provides compatibility functions when it's possible for an old API to cohabit with the new one. \if !defined(QT3_SUPPORT) \if defined(QT3_SUPPORTWARNINGS) The compiler emits a warning when a compatibility function is called. (This works only with GCC 3.2+ and MSVC 7.) \else To use the Qt 3 support library, you need to have the line QT += qt3support in your .pro file (qmake automatically define the QT3_SUPPORT symbol, turning on compatibility function support). You can also define the symbol manually (for example, if you don't want to link against the \c Qt3Support library), or you can define \c QT3_SUPPORT_WARNINGS instead, telling the compiler to emit a warning when a compatibility function is called. (This works only with GCC 3.2+ and MSVC 7.) \endif \endif * / If the QT3_SUPPORT. If QT3_SUPPORT is not defined but QT3_SUPPORT_WARNINGS. The compiler emits a warning when a compatibility function is called. (This works only with GCC 3.2+ and MSVC 7.) If none of the symbols are defined, the comment will be rendered as The Qt 3 support library is provided to keep old source code working. In addition to the Qt3Supportclasses, Qt 4 provides compatibility functions when it's possible for an old API to cohabit with the new one. To use the Qt 3 support library, you need to have the line QT += qt3support in your .pro file (qmake automatically define the QT3_SUPPORT symbol, turning on compatibility function support). You can also define the symbol manually (e.g., if you don't want to link against the Qt3Supportlibrary), or you can define QT3_SUPPORT_WARNINGSinstead, telling the compiler to emit a warning when a compatibility function is called. (This works only with GCC 3.2+ and MSVC 7.) See also \if, \endif, defines and falsehoods. ="2016"/> .
http://doc.qt.io/qt-5/12-0-qdoc-commands-miscellaneous.html
CC-MAIN-2017-04
refinedweb
588
57.06
Credit: Doug Fort You want to know whether daylight saving time is in effect in your local time zone today. It's a natural temptation to check time.daylight for this purpose, but that doesn't work. Instead you need: import time def is_dst( ): return bool(time.localtime( ).tm_isdst) In my location (as in most others nowadays), time.daylight is always 1 because time.daylight means that this time zone has daylight saving time (DST) at some time during the year, whether or not DST is in effect today. The very last item in the pseudo-tuple you get by calling time.localtime, on the other hand, is 1 only when DST is currently in effect, otherwise it's 0which, in my experience, is exactly the information one usually needs to check. This recipe wraps this check into a function, calling built-in type bool to ensure the result is an elegant true or False rather than a rougher 1 or 0optional refinements, but nice ones, I think. You could alternatively access the relevant item as time.localtime( )[-1], but using attribute-access syntax with the tm_isdst attribute name is more readable. Library Reference and Python in a Nutshell about module time.
http://flylib.com/books/en/2.9.1.84/1/
CC-MAIN-2017-51
refinedweb
202
65.22
Feature #8365 Make variables objects Description While refactoring a wiki article about Ruby, I found this anonymous proposal: . integer new anintvar 5 char new acharvar anintvar Variable types and the reference object they point to can be 2 different types. But the variable decides how to do the casting not the object referenced. "anintvar" and "acharvar" reference the same object." I am pasting this anonymous proposal here in original full length, before I rewrite it to a more concise form in that wiki page. History #1 Updated by Yukihiro Matsumoto about 2 years ago - Status changed from Open to Feedback - Assignee set to Yukihiro Matsumoto I am sorry I don't understand the proposal, nothing more than vague idea. Proposals should be concrete and be able to implement. Could you describe both the model and the syntax for the proposal? Matz. #2 Updated by Boris Stitnicky about 2 years ago Here is my understanding of the original author's idea, which, I hope, will not turn out to be dismally and irremediably flawed. Its mind-boggling quality pleasantly intrigues me, as is the case with many extant Ruby features. I was made receptive to the original author's idea, as I just recently implemented emancipated constant magic (#7149). I don't see such a big difference between constant assignment hooks and variable assignment hooks – Ruby constants, I understood, are just squeamish variables after all. Firstly, the syntax part of the original proposal ("integer new anintvar 5" and such) is, imho, crap. But the idea of Variable class titillates me. With variables, I see two separate issues: - Variable contents (what is assigned to it) - Membership in (speaking in my own terminology) a "club" of variables. Concentrating on 2: Each variable is created a member of some club. For a constant, this club is the namespace, to which the constant belongs. For an instance variable, an instance. For a class variable, a class. For a global constant ... hm ... a club of global constants. For a local variable ... hm ... hm ... a binding? (I really don't know much about how local variables are organized.) The hierarchy of variable classes, as I feel it, would be: - class Variable - class Constant < Variable - class InstanceVariable < Variable - class LocalVariable < Variabls - class GlobalConstant would be either < Variable, or < Constant - class ClassVariable would be probably < Variable, less likely < InstanceVariable So this class hierarchy part is a bit vague. I'm not really sure about it. But when I write foo = 42 this is what would happen behind the scenes: LocalVariable.new( :foo, club_of_local_variables ).set( 42 ) 42 would thus be assigned to LocalVariable instance :foo, which would be reachable from the regional club of local variables (binding?). One would be able to create variable subclasses: # This is a "typed" variable that compulsorily applies Array() to its value class LocalArrayVariable < LocalVariable def set value super( Array( value ) ) end end And hooks, that would cause stuff to happen when specially named variables are assigned to: class << LocalVariable alias old_new new def new( symbol, club ) if symbol.to_s.starts_with? "array_" then LocalArrayVariable.new( symbol.to_s[6..-1].to_sym, club ) # strips away "array_" else old_new( symbol, club ) end end end Maybe this "alias old_new new" is not the best possible hook practice, but nevertheless: array_foo = 42 would cause "foo" to be assigned value [42] foo #=> [42] With this, esoteric behavior of sigils ($, @) could be made exoteric, if not, heaven forbid, user modifiable. One could make variables first module Foo; end variable = Constant.new :Bar, Foo and actually assign them later variable.set( "Bar" ) Foo::Bar #=> "Bar" The variable object could tell us its attributes variable.name #=> :Bar variable.club #=> :Foo # Of course, :club keyword is a joke, I don't know any better name hand us its value variable.get #=> "Bar" Whereas variable = "Bar" would simply assign "Bar" string to "variable" variable. Who knows, perhaps what is today known as Binding, could just become a collection of Variable instances? One also wonders, what would the relationships between variables and their clubs be. Could they abandon their club (and be garbage collected as unreachable)? Could the variables transfer to other clubs? Or could clubs fire them for bad performace? Could they change name? There is potentialy a host of problems, which I as a C-illiterate user, do not fully realize... #3 Updated by Yukihiro Matsumoto about 2 years ago Interesting, but this would make Ruby VERY SLOW. I am not sure whether it's worth the performance degration. Matz. #4 Updated by Boris Stitnicky 8 months ago I have noticed newly added Binding#local_variable_get and Binding#local_variable_set. I wanted to express appreciation. Also available in: Atom PDF
https://bugs.ruby-lang.org/issues/8365
CC-MAIN-2015-22
refinedweb
773
56.05
this is my first time posting on here. I am using c++. My assignment is to read in data from a .txt file into a 10 by 10 array. Then I am supposed to add each row and put their total on the end. Then each column needs added and the totals in a new row at the bottom. I also need the sums sum in the corner. Her is an example of what the output has to look like: Example of output screen. Here is the code I currently have: #include <iostream> #include <iomanip> using namespace std; int main() { cout << "Kaitlin Stevers" << endl; cout << "Exercise 9A - Arrays" << endl; cout << "October 31, 2016" <<endl; const int ROWS = 10; const int COLS = 10; float numbers[10][10]; ifstream inputFile; inputFile.open("Ex9data.txt"); int countRows = -1; int countCols = -1; while(++countRows < ROWS) { countCols = -1; while(++countCols < COLS) { inputFile >> numbers[countRows][countCols]; } } inputFile.close(); } The term summing means to add up numbers. Summing the rows means to add all the numbers in the row. // Pick a row to sum const unsigned int row_to_sum = 3; // Declare and initialize a summation variable. float sum = 0.0f; // Add all the columns in the given row. for (unsigned int i = 0; i < COLS; ++i) { sum += numbers[row_to_sum][i]; } To sum up the a column, keep the column index constant and iterate the rows. const unsigned int column_to_sum = 8; sum = 0.0f; for (unsigned int j = 0; j < ROWS; ++j) { sum += numbers[j][column_to_sum]; }
https://codedump.io/share/KfTdZMa5Hr0M/1/reading-data-from-a-txt-file-into-an-array-and-then-summing-the-rows-and-columns
CC-MAIN-2018-26
refinedweb
247
75.4
PyQt Recently, FreeCAD has switched internally to use PySide instead of PyQt. That change was mainly done because of the licenses, PySide having an LGPL license which is more compatible with FreeeCAD. Other than that, PySide works exactly the same way as PyQt, and in FreeCAD you can usually use any of them, as you prefer. If you choose to use PySide, just replace all "PyQt" in the example code below with "PySide". Differences Between PySide and PyQt PyQt is a python module that allows python applications to create, access and modify Qt applications. You can use it for example to create your own Qt programs in python, or to access and modify the interface of a running qt application, like FreeCAD. By using the PyQt module from inside FreeCAD, you have therefore full control over its interface. You can for example: - Add your own panels, widgets and toolbars - Add or hide elements to existing panels - Change, redirect or add connections between all those elements PyQt has an extensive API documentation, and there are many tutorials on the net to teach you how it works. If you want to work on the FreeCAD interface, the very first thing to do is create a reference to the FreeCAD main window: import sys from PyQt4 import QtGui app = QtGui.qApp mw = app.activeWindow() Then, you can for example browse through all the widgets of the interface: for child in mw.children(): print 'widget name = ', child.objectName(), ', widget type = ', child The widgets in a Qt interface are usually nested into "containers" widgets, so the children of our main window can themselves contain other children. Depending on the widget type, there are a lot of things you can do. Check the API documentation to see what is possible. Adding a new widget, for example a dockWidget (which can be placed in one of FreeCAD's side panels) is easy: myWidget = QtGui.QDockWidget() mw.addDockWidget(QtCore.Qt.RightDockWidgetArea,myWidget) You could then add stuff directly to your widget: myWidget.setObjectName("my Nice New Widget") myWidget.resize(QtCore.QSize(300,100)) # sets size of the widget label = QtGui.QLabel("Hello World", myWidget) # creates a label label.setGeometry(QtCore.QRect(50,50,200,24)) # sets its size label.setObjectName("myLabel") # sets its name, so it can be found by name But a preferred method is to create a UI object which will do all of the setup of your widget at once. The big advantage is that such an UI object can be created graphically with the Qt Designer program. A typical object generated by Qt Designer is like this: class myWidget_Ui(object): def setupUi(self, myWidget): myWidget.setObjectName("my Nice New Widget") myWidget.resize(QtCore.QSize(300,100))) To use it, you just need to apply it to your freshly created widget like
http://www.freecadweb.org/wiki/index.php?title=PyQt
CC-MAIN-2014-49
refinedweb
467
54.73
STATISTICAL FUNCTIONS IN NUMPY In this tutorial, we are going to learn about the statistical functions in Numpy and work with some of them like finding min, max values. Calculating Mode and mean values and finding standard deviation and variance. What are Statistical Functions? There are a number of statistical function in Numpy to find values related to statistics like minimum, maximum etc by going through each element of the given array. Let’s look at some examples to learn more about them. Let’s discover how to use the minimum and maximum function in numpy: Min and Max Function in Numpy import numpy as np arr = np.array([[3,5,6],[7,8,9]]) print("The array is: ") print(arr) print("\n") print("Min value is: ") print(np.min(arr)) print("\n") print("Max value is: ") print(np.max(arr)) Output: The array is: [[3 5 6] [7 8 9]] Min value is: 3 Max value is: 9 Finding the Median in Numpy Median function in numpy is used for finding the median of the array. Let’s look at an example to find out about it: import numpy as np arr = np.array([[10,20,30],[40,50,60],[70,80,90]]) print("The array is: ") print(arr) print("\n") print("Median function applied: ") print(np.median(arr)) Output: The array is: [[10 20 30] [40 50 60] [70 80 90]] Median function applied: 50.0 Finding the Mean in Numpy The mean function in numpy is used for calculating the mean of the elements present in the array. You can calculate the mean by using the axis number as well but it only depends on a special case, normally if you want to find out the mean of the whole array then you should use the simple np.mean() function. For example: import numpy as np arr = np.array([[10,20,30],[40,50,60],[70,80,99]]) print("The array is: ") print(arr) print("\n") print("Mean function applied: ") print(np.mean(arr)) print("Mean function applied on axis 1: ") print(np.mean(arr, axis=1)) Output: The array is: [[10 20 30] [40 50 60] [70 80 99]] Mean function applied: 51.0 Mean function applied on axis 1: [20. 50. 83.] Finding Standard Deviation in Numpy You can find out the standard deviation of an array using the std() function, for example: import numpy as np arr = np.array([10,20,30]) print("The standard deviation of the array is: ") print(np.std(arr)) Output: The standard deviation of the array is: 8.16496580927726 Finding Variance in Numpy As you may or may not know, that variance is the mean (average) of squared deviations, and in order to calculate the variance in numpy we use the var() function. import numpy as np arr = np.array([10,20,30]) print("The variance of the array is: ") print(np.var(arr)) Output: The variance of the array is: 66.66666666666667
https://python-tricks.com/statistical-functions-in-numpy/
CC-MAIN-2021-39
refinedweb
494
63.9
Handling Missing Numeric Data You always need to keep track of where you’ve got missing data and what to do about it. Not only is it the right thing to do from a “build a scalable model” approach, but sklearn will often throw its hands up in frustration if you don’t tell it what to do when it encounters the dreaded np.nan value. The Data Let’s load the iris dataset import numpy as np from sklearn.datasets import load_iris data = load_iris() X = data['data'] y = data['target'] Which is all non-null (np.isnan(X)).any() False And then clumsily make the middle 50 rows all null values. X[50:-50] = np.nan sum(np.isnan(X)) array([50, 50, 50, 50]) And while pandas might be clever enough to toss out NULL values import pandas as pd pd.DataFrame(X).mean() 0 5.797 1 3.196 2 3.508 3 1.135 dtype: float64 numpy isn’t X[:].mean() nan And by extension, neither is sklearn, which is all parked on top of the underlying numpy arrays. from sklearn.linear_model import LinearRegression try: model = LinearRegression() model.fit(X, y) model.predict(X) except: print("Doesn't work") Doesn't work Modelling with Null Values Thankfully, sklearn has a helpful Imputer class to handle this hiccup for us. from sklearn.preprocessing import Imputer imputer = Imputer() imputer.fit_transform(X).mean() 3.4090000000000003 By default, this will fill missing values with the mean of the column. imputer.fit(X) Imputer(axis=0, copy=True, missing_values='NaN', strategy='mean', verbose=0) But if we wanted to use, say, median, it’d be as wasy as passing that into the strategy argument at instantiation. imputer = Imputer(strategy='median') imputer.fit_transform(X).mean() 3.3601666666666667
https://napsterinblue.github.io/notes/machine_learning/preprocessing/imputation/
CC-MAIN-2021-04
refinedweb
295
58.69
Hey I’m new to Arduino but I am really enjoying it so far! I am trying to get this to work: I installed Python 2.7 along with Pyserial 2.5 just to follow the instructions exactly. I am running Windows 7 64 bit on a PC. When I run the servo.py module I usually get this error or it comes up after I try to import the servo: import servo Traceback (most recent call last): File “<pyshell#0>”, line 1, in import servo File “C:\Python27\servo.py”, line 29, in ser = serial.Serial(usbport, 9600, timeout=1) File “C:\Python27\lib\site-packages\serial\serialwin32.py”, line 30, in init SerialBase.init(self, *args, **kwargs) File “C:\Python27\lib\site-packages\serial\serialutil.py”, line 260, in init self.open() File “C:\Python27\lib\site-packages\serial\serialwin32.py”, line 56, in open raise SerialException(“could not open port %s: %s” % (self.portstr, ctypes.WinError())) SerialException: could not open port com8: [Error 5] Access is denied. I have read that I need to give python administrator access but I’m not sure how to do that.
https://forum.arduino.cc/t/python-pyserial-on-windows-7-pc-com-port-problems/83683
CC-MAIN-2022-33
refinedweb
190
61.93
xxfi_close SYNOPSIS #include <libmilter/mfapi.h> sfsistat (*xxfi_close)( SMFICTX *ctx ); The current connection is being closed. DESCRIPTION Called When xxfi_close is always called once at the end of each connection. Default Behavior Do nothing; return SMFIS_CONTINUE. ARGUMENTS Argument Description ctx Opaque context structure. NOTES xxfi_close may be called "out-of-order", i.e. before even the xxfi, xxfi_close is called. It can therefore be the only callback ever used for a given connection, and developers should anticipate this possibility when crafting their xxfi_close code. In particular, it is incorrect to assume the private context pointer will be something other than NULL in this callback. xxfi_close is called on close even if the previous mail transaction was aborted. xxfi_close is responsible for freeing any resources allocated on a per-connection basis. Since the connection is already closing, the return value is currently ignored. Copyright (c) 2000, 2003, 2004 Proofpoint, Inc. and its suppliers. All rights reserved. By using this file, you agree to the terms and conditions set forth in the LICENSE.
https://www.mirbsd.org/htman/i386/manDOCS/milter/xxfi_close.html
CC-MAIN-2016-07
refinedweb
171
52.56
Advanced Namespace Tools blog 13 January 2017 Setting up Werc+Pegasus Again Joy of Repeating Previous Werc Work 9gridchan.org was one of the first few sites to use Uriel's werc for content. Back then it was plan9port only, and even after it was available on Plan 9, it took me awhile to switch over. It wasn't until 2015 that I converted all my webstuff to run on Plan 9 exclusively. The biggest challenge was setting up the Pegasus web server to use werc. I have no particular reason for not using rc-httpd, I will probably start doing so at some point, but for the moment, I'm using a mix of original plan 9 httpd, pegasus, and tcp80 to do my web-serving. I'm working through a series of updates and improvements to my infrastructure, so I wanted to get another node running basically the same setup as doc.9gridchan.org does. I remember it being tricky to get going, and even with a working setup, recreating the work was fairly tricky. Here is how it went: Setting up Pegasus At the time of this writing, the original Pegasus website at is down. I hope it comes back up soon, but at least archive.org has a lot of the content mirrored. Pegasus has a fairly complicated setup process, so it was necessary to consult the online documentation frequently. As I write this post, archive.org is having server issues, so >< and @@ to all that. Anyway... Start out by downloading, untarring, and mking the stuff in pegasus-2.8e.tar. This is the pegasus server itself, and the mon utility. I wanted to be able to serve https with a certificate chain, which required a small patch to pegasus, borrowing the code from the original plan9 httpd: --- httpdorig.c Fri Jan 13 07:18:29 2017 +++ httpd.c Fri Jan 13 07:21:32 2017 @@ -93,6 +93,7 @@ static char *webmnt; // to be "/usr/web/mnt" static int binding; // 0: main document, 1: virtual host doc, 2: user's doc static char *certificate; +PEMChain *certchain; static uchar *cert; static int certlen; static int ntossed; @@ -168,6 +169,11 @@ if(cert == nil) sysfatal("reading certificate: %r"); break; + case 'd': + certchain = readcertchain(EARGF(usage())); + if (certchain == nil) + sysfatal("reading certificate chain: %r"); + break; case 'm': mntok = 1; break; @@ -479,6 +485,8 @@ memset(&conn, 0, sizeof(conn)); conn.cert = cert; conn.certlen = certlen; + if (certchain != nil) + conn.chain = certchain; data = tlsServer(data, &conn); if(data < 0){// added by Kenar syslog(0, HTTPLOG, "error: tlsServer certificate=%s: %r", certificate); After pegasus and mon were built, it was time to set up /usr/web the way pegasus expects. The example directory in pegasus has a framework provided. mkdir /usr/web dircp pegasus-2.8e/example/usr/web /usr/web There are several configuration files that go in other places in the system, some which replace provided defaults. Make any needed backup copies and then: cp pegasus-2.8e/example/lib/namespace.httpd /lib/namespace.httpd cp pegasus-2.8e/example/sys/lib/httpd.conf /sys/lib/httpd.conf cp pegasus-2.8e/example/sys/lib/httpd.rewrite /sys/lib/httpd.rewrite touch /sys/log/blacklist /sys/log/http chmod 666 /sys/log/blacklist /sys/log/http Pegasus also expects to have the domain of incoming http requests in /lib/ndb/local. This could potentially cause conflicts with your existing auth setup. If you don't have a domain, I imagine it works to set the dom= to the ip. On the server I was just configuring, I have this entry in /lib/ndb/local: sys=tip ip=108.61.229.146 dom=tip.9gridchan.org Without this, pegasus will answer every request with the message: "No dom". This doesn't seem to be mentioned in the pegasus documentation, fortunately examining the source code let me figure it out. I added the following entries to /sys/lib/httpd.rewrite, which dont even make sense to me because I have nothing but an unused index.html in the tip subdirectory, but they seem to be important for making things work: */usr/web/tip / */usr/web/tip The content pegasus serves is taken from /usr/web/doc - so I added an index.html file there with the content "FOOMP" and tested the webserver according to the recommended invocation via mon: cpu% b=/amd64/bin cpu% $b/mon -du web $b/httpd -uM Note that this is after the compiled pegasus was placed in /amd64/bin/httpd - the original plan9 httpd is at ip/httpd/httpd so there is no conflict. With this running, when I went to I saw "FOOMP" as desired. Setting up Werc This part involved a bit of vague, unhelpful information; sorry. I remembed that when I was setting up doc.9gridchan.org I had to make a couple modifications to the werc source code. I didn't remember what they were, so I just tarred up the werc directory and copied it over to the other server so I would import them automatically. Werc has been updated since I did the setup, so my particular changes might not be relevant or needed to the latest werc distribution. I will try to get a diff of them anyway, and also try out the newest werc and see what happens - at some point. Right now I just want to finish documenting what I did. [Later note: it seems like the only change is right at the beginning of werc.rc - I added "cd bin" immediately after the shell bang.] There is some kind of weirdness with pegasus/werc involving the working directory of the invocation. I end up using two copies of the werc tree, which is probably a silly and inefficient way to do things. I untarred my werc in /usr/web/doc so its files were under /usr/web/doc/werc, then I made copies of the werc directories into /usr/web/doc/apps, /usr/web/doc/bin, /usr/web/doc/etc, /usr/web/doc/lib, /usr/web/doc/pub, /usr/web/doc/sites, /usr/web/doc/tpl and /usr/web/doc/_werc. In /usr/web/doc/sites (not /usr/web/doc/werc/sites) I put a copy of the doc.9gridchan.org directory, and then made tip.9gridchan.org another copy of it. The invocation of cgi in pegasus is controlled by /usr/web/etc/handler so I created it to look like this, matching the subdirectories of the tip.9gridchan.org dir: /them* - + /doc/werc/bin/werc.rc /blog* - + /doc/werc/bin/werc.rc /antfarm* - + /doc/werc/bin/werc.rc /old* - + /doc/werc/bin/werc.rc /guide* - + /doc/werc/bin/werc.rc With this done, things were almost working - I got an error of "markdown: invalid execution header" or something like that, which was because the default markdown formatter included in the version of werc that I had was a 386 plan 9 executable and I was running on amd64 on this node. Fortunately, an awk alternative was available (and I believe is now the default) so I just had to edit the /usr/web/doc/etc/initrc file used by werc to change the default by uncommenting the md2html.awk entry and commenting out the original markdown: formatter=(md2html.awk) #formatter=(fltr_cache markdown.pl) #formatter=(markdown) With this done, things were working ok, and all that remained was to restart pegasus and have it serve https using a cert I got from letsencrypt, as explained in another blog post: cpu% k=/usr/glenda/stash/factotumkey cpu% c=/usr/glenda/stash/tip.9gridchan.org.crt cpu% $b/mon -du web -r $k $b/httpd -uM -p443 -c $c -d $c And now and friends are online. I was using the 'tip' server for testing this out - the actual public 9gridchan websites using https now are and and..
http://doc.9gridchan.org/blog/170113.werc.pegasus
CC-MAIN-2017-22
refinedweb
1,312
54.83
1573790681 Upcoming new JavaScript features You should know if you use JavaScript everyday Since ECMAScript2015 (also called ES6) was released, JavaScript has changed and improved widely. This is excellent news for all of the JavaScript developers. Furthermore, a new ECMAScript version has released every year. You likely didn’t notice what features were added in the latest ECMAScript, which was released in June 2019. I will briefly show you the new features added in the latest version and talk about the new features for the future version. The features that I will show you are NOT yet decided to be in the next version. All of what I will talk about in this post are currently in stage 3. Check out this repo if you want to get more details. A method that creates a new array with all sub-array elements concatenated into it recursively up to the specified depth. const array = [1, 2, [3, 4]]; array.flat(); // [1, 2, 3, 4]; This is very useful, especially when you want to flatten your nested array. If the depth of your array is deeper than one depth, calling flat once can’t entirely flatten your array. flat takes a parameter for depth, which refers to how many depths you want it to go into to flatten the array. // Crazy example const crazyArray = [1, 2, [3, 4], [[5], [6, [7,8]]]]; crazyArray.flat(Infinity); // [1, 2, 3, 4, 5, 6, 7, 8]; // The parameter must be the number type The deeper you want to search the array, the more computing time will be required to flatten it. Note that IEs and Edge do not support this feature. A method first maps each element using a mapping function, then flattens the result into a new array. const arr = ["it's Sunny in", "", "California"]; arr.flatMap(x => x.split(" ")); // ["it's","Sunny","in", "", "California"] The difference between flat and flatMap is that you can put a custom function in flatMap to manipulate each value. Additionally, unlike flat, flatMap flattens one depth array, only. The return value should be an array type. This would be very useful when you should do something before flattening the array. There were more features added to ES10. if you want to know more about them. In stage 3, there are a few interesting features suggested. I will introduce some of them to you briefly. When you assigned a big number to a variable, weren’t you confused on how big that number is or if you wrote it right? This proposal allows you to put an underscore between numbers so you can count it easier. let budget = 1_000_000_000_000; // What is the value of `budget`? It's 1 trillion! // // Let's confirm: console.log(budget === 10 ** 12); // true It will be up to each developer whether to use this feature once it’s released, but one thing’s for sure, this feature would reduce your headaches for counting how big a number is! Top-level _await_enables modules to act as big async functions: With top-level _await_, ECMAScript Modules (ESM) can _await_resources, causing other modules who _import_them to wait before they start evaluating their body. The motivation of this feature was that when you import a module which has async function, the output of the async function is undefined. // awaiting.mjs import { process } from "./some-module.mjs"; const dynamic = import(computedModuleSpecifier); const data = fetch(url); export const output = process((await dynamic).default, await data); There are two files. output could be undefined if it’s called before the Promises tasks are done. // usage.mjs import { output } from "./awaiting.mjs"; export function outputPlusValue(value) { return output + value } console.log(outputPlusValue(100)); setTimeout(() => console.log(outputPlusValue(100), 1000); usage.mjs will not execute any of the statements in it until the awaits in awaiting.mjs have had their Promises resolved. This would be one of the most useful features amongst proposals in stage 3. We often wrote this kind of code. const obj = { name: 'James' }; const name = obj.name || 'Jane'; // James If obj.name is falsy, then return ‘Jane’, so undefined won’t be returned. But the problem is, an empty string( ‘’) is also considered falsy in this case. Then we should rewrite it again like this below. const name = (obj.name && obj.name !== '') || 'Jane'; It is a pain in the neck to write the code like that every time. This proposal allows you to check null and undefined only. This proposal goes with Nullish Coalescing for JavaScript, especially in TypeScript. TypeScript has announced that they will include Nullish Coalescing for JavaScript and this proposal in their next released version, 3.7.0. const city = country && country.city; // undefined if city doesn't exist Look at the example code. To get city, which is in country object, we should check if country exists and if city exists in country. With Optional Chaining, this code can be refactored like this. const city = country?.city; // undefined if city doesn't exist This feature seems very handy and useful for this situation. import { fetch } from '../yourFetch.js'; (async () => { const res = await fetch(); // res && res.data && res.data.cities || undefined const cities = res?.data?.cities; })(); _Promise.any_accepts an iterable of promises and returns a promise that is fulfilled by the first given promise to be fulfilled, or rejected with an array of rejection reasons if all of the given promises are rejected. With async- await, try { const first = await Promise.any(promises); // Any of the promises was fulfilled. } catch (error) { // All of the promises were rejected. } With Promise pattern, Promise.any(promises).then( (first) => { // Any of the promises was fulfilled. }, (error) => { // All of the promises were rejected. } ); Since there were Promise all, allSettled, and race, there wasn’t any. So this feature is simple but powerful for a needed situation. However, this proposal wasn’t tested yet, so this proposal might have a longer time to be accepted in a future version of ECMAScript. There are so many interesting proposals in stage 3. I can’t wait to meet them in ES11 or ES12. Of course, I won’t need all of them, but some of them would definitely make my codes more elegant. Thank you ! If you liked this post, share it with all of your programming buddies! #JavaScript #Nodejs #Coding #Web Development #Programming 1574080038 nice 1582191172 Great
https://morioh.com/p/90a0bdfc554c
CC-MAIN-2021-49
refinedweb
1,054
67.45
This recipe suggests an idiom for property creation that avoids cluttering the class space with get/set/del methods that will not be used directly. Discussion Using the standard. Following this idiom is, of course, unnecessary as the standard method works just fine. And, in fact, using this idiom for creating properties like bar (above), will seem excessive. Still, it does provide a clean method for restricting the avenues of access to your classes by their users - if there are no get/set/del methods available, then users of your classes will have to use your property, which was your intention. NOTE: see comment "Just so users are aware" below. Nice use of an enclosing namespace. A variation. property() accepts the keyword arguments fget, fset, fdel, and doc. A variation would be to use those names and return locals() rather an explicit tuple. For example, Returning locals() does seem to violate "explicit is better than implicit" but it simplifies the return value a little for read-only properties. Here's the read-only property example: Just so users are aware. Using locals() is a clean solution, however, it must be used with care. The nested method names must be as David has prescribed, and you may not introduce any other names into the local scope, e.g. def bar(): bar = property(**bar()) will give the following error: TypeError: 'x' is an invalid keyword argument for this function My idiom does not suffer from this restriction, however, you are required to return the get/set/del/doc in the correct order, and to not return anything other than those items. A possible compromise would be to return an explicit dictionary rather than a tuple or locals(), e.g. requires python 2.3+ return dict(fget=fget, doc=doc) using a metaclass? If one has a lot of getter/setter methods, using a metaclass approach might be easier in the long run. Maybe something like the following. I have to admit though, that the code for the meta class is much more complicated than the original recipe. The solution you've provided allows for the automated creation of simple properties. But that is not the issue being addressed here. While the examples provided above show only simple properties, the intended use for this idiom is to encapsulate the get/set/del methods of more complex properties, such as the following: suppose we're inside a class, and we want to make a read-only property 'average', that returns the average fitness of all individuals in a population def average(): average = property(**average()) The idea is not about avoiding having to create the get/set/del methods yourself; rather it is about avoiding leaving those methods accessible to users of your class by removing those methods from the class space (where they are normally defined). For simple property creation alone, I would refer people to my own makeproperty recipe, or to your recipe using metaclasses. For a more related metaclass solution to this problem I would refer people to the following attempt: You are right, of course. I guess I got carried away a bit and solved a problem that was not asked for :-) Nice, indeed. It seems you can use 'del' This appears to work: Yes, you can certainly use del to explicitly remove the get/set/del methods from the class' namespace (note: the parenthesis are not required). This idiom does that implicitly for you. There are a couple of other things it does for you: It groups the methods together visually through indentation, offsetting them from other method definitions, and tying them more directly to the property definition. It allows for a consistent and repeatable naming convention. The names of the get/set/del methods are, or can be, identical for every property. There's no need to have get_foo, set_foo, get_bar, set_bar for properties foo and bar when you can just use fget, fset for both. Whether these provide any real benefit can be debated. Personally, I like grouping related things together, and seperating them out from unrelated things. And I like having every property definition look nearly identical, by being able to re-use the names fget, fset, fdel for each one. The fact that these auxiliary methods are also automatically removed from the class' namespace, reducing namespace pollution and unintended avenues for access, is a nice side benefit. I like the idea to use enclosed namespace. Here is another solution: Now property definition is very simple: Unnecessary self argument can be avoided by automatially aplying staticmethod decorator in metaclass. Possible idiom after PEP318. There is ongoing discussion on python-dev about accepting PEP318 - Decorators for Functions, Methods and Classes - for inclusion into Python 2.4. If accepted, some small changes to the property descriptor would allow this idiom to be re-expressed as follows: This is better, but still not ideal: it's still a function definition masquerading as a property definition; and it requires you to return locals() to gain access to the get/set/del methods. Perhaps, one day, the language will grow a property block, e.g. As an aside: Ruby's approach is interesting, though not applicable Please check out my recipe: Easy Property Creation in Python Only 7 lines of code, easy to use, easy to understand, easy to customize, make the code much netter and will save you a lot of typing.
http://code.activestate.com/recipes/205183/
crawl-002
refinedweb
901
51.18
In today’s world, just having a website is not enough. The website needs to have a clean UI and it needs to be intuitive. And most importantly, it needs to have some sort of interactive element. Interactivity keeps users glued to your site for longer periods of time. As a result, it increases the chances that users will become customers. Also, longer interaction time leads to a lower bounce rate and a higher ranking on search engines. One of the most common and basic forms of interaction happens when a user scrolls on your website. But wouldn’t it be quite boring if the user keeps scrolling through your long static page? In this tutorial, we will have a look at three basic animations that you can implement on scroll. Parallax, fade, and slide animations are the most popular animations devs use to make scrolling more fun. Let’s see how we can build them for our sites. Before we move further, here are the end results: Project Setup Prerequisites We will be using Angular 11 to create our project. And we'll use VS Code as our IDE. In order to build the animations, we are going to use the fabulous Green Sock Animation Platform (gsap). It’s one of the best JavaScript animation libraries out there. Create the project Create an Angular project by entering the command below. Make sure to enable routing when it asks you. ng new animations --style css code animations This will create a new project named animations with the style format as CSS. Next, it will open the project in VS Code. Now let’s install gsap. In your VS Code terminal, enter the command below: npm install --save gsap @types/gsap This will install the gsap library and the typing files via @types/gsap. Lastly, let’s create three components. Enter the commands below: ng g c parallax ng g c fade ng g c slide How to set up the routes Let’s create three separate routes: /parallax, /fade, and /scroll. Open your app-routing.module.ts and add the routes as below: How to Create Parallax Animation Since we have now set up the project, let’s start with parallax animation. When you create page animations, you typically use sections. So, open your parallax.component.html file and paste in the code below: Let’s add some styling to these sections. Since we are going to use sections in all three components, we will add styling to the common styles.css file. Open your styles.css file and paste in the CSS below: In the above code, we are making the height and width of the section equal to the viewport’s height and width. Second, we are aligning the content in the center of the section. Lastly, we are setting the font style for how we want to display the text. Since the bg class used in parallax.component.html is specific to parallax, we will define its properties in parallax.component.css. Open that file and paste in the CSS below: In order to set the parallax animation, we need to add some TypeScript code. So, open your parallax.component.ts file and add the code below in your ngOnInit function: I have added inline comments to help you understand the code. Message me if you need further explanation. Finally, add the imports below at the top of your TS file so that you don’t get any compile-time errors: import { gsap } from 'gsap'; import { ScrollTrigger } from 'gsap/all'; That’s it! You can now visit to see the beautiful animation. How to Create Fade Animation For the fade animation, open the fade.component.html file and paste in the HTML code below: In the fade.component.css, paste in the CSS below: We are going to display only one section at a time. So we will hide all sections except the first one. Also, since we are not moving the sections along with the scroll, we'll mark its position as fixed. Let’s add the animation code to make the other sections visible on scroll. Open the fade.component.ts file and paste in the following code: I have added inline comments so as to make the code self-explanatory. For any clarifications, please let me know. Visit to see the smooth fading animation as you scroll. How to Create Slide Animation This is typically the easiest of the lot to understand and implement. Open your slide.component.html file and paste in the code below. It’s similar to fade.component.html, except the class is removed from the first section. We don't need to add any CSS. Next, open the slide.component.ts file and add the code below: Again, I have added the inline comments for a better understanding of the code. For any queries, just reach out to me. Conclusion Animations add a lot of value to your site, and they help keep your users engaged. As with all things, don't go overboard and use animations in moderation. Don’t clutter or mess up the website with heavy images and funky animations. Keep It Simple & Keep It Subtle (KIS & KIS). In this tutorial, we saw how to add simple parallax, fade, and slide animations for page sections. Lastly, a great thanks to Lorem Picsum for providing such great photos. If you liked this article, you might also like the below articles: - Lazy Loading Modules in Angular - .NET 5: How to Authenticate & Authorise API's correctly - Learn TDD with Integration Tests in .NET 5.0 Note: You can find the whole project on GitHub.
https://www.freecodecamp.org/news/beautiful-page-transitions-in-angular/
CC-MAIN-2021-25
refinedweb
945
75.2
Waitress -------- Waitress. Usage ----- Here's normal usage of the server: .. code-block:: python from waitress import serve serve(wsgiapp, listen='*:8080') This will run waitress on port 8080 on all available IP addresses, both IPv4 and IPv6. .. code-block:: python from waitress import serve serve(wsgiapp, host='0.0.0.0', port=8080) This will run waitress on port 8080 on all available IPv4 addresses. If you want to serve your application on all IP addresses, on port 8080, you can omit the ``host`` and ``port`` arguments and just call ``serve`` with the WSGI app as a single argument: .. code-block:: python from waitress import serve serve(wsgiapp) Press Ctrl-C (or Ctrl-Break on Windows) to exit the server. The default is to bind to any IPv4 address on port 8080: .. code-block:: python from waitress import serve serve(wsgiapp) If you want to serve your application through a UNIX domain socket (to serve a downstream HTTP server/proxy, e.g. nginx, lighttpd, etc.), call ``serve`` with the ``unix_socket`` argument: .. code-block:: python from waitress import serve serve(wsgiapp, unix_socket='/path/to/unix.sock') Needless to say, this configuration won't work on Windows. Exceptions generated by your application will be shown on the console by default. See :ref:`logging` to change this. There's an entry point for :term:`PasteDeploy` (``egg:waitress#main``) that lets you use Waitress's WSGI gateway from a configuration file, e.g.: .. code-block:: ini [server:main] use = egg:waitress#main listen = 127.0.0.1:8080 Using ``host`` and ``port`` is also supported: .. code-block:: ini [server:main] host = 127.0.0.1 port = 8080 The :term:`PasteDeploy` syntax for UNIX domain sockets is analagous: .. code-block:: ini [server:main] use = egg:waitress#main unix_socket = /path/to/unix.sock You can find more settings to tweak (arguments to ``waitress.serve`` or equivalent settings in PasteDeploy) in :ref:`arguments`. Additionally, there is a command line runner called ``waitress-serve``, which can be used in development and in situations where the likes of :term:`PasteDeploy` is not necessary: .. code-block:: bash # :ref:`runner`. .. _logging: Logging ------- `: .. code-block:: python import logging logger = logging.getLogger('waitress') logger.setLevel(logging.INFO) Within a PasteDeploy configuration file, you can use the normal Python ``logging`` module ``.ini`` file format to change similar Waitress logging options. For example: .. code-block:: ini [logger_waitress] level = INFO Using Behind a Reverse Proxy ----------------------------: 1. You can pass a ``url_scheme`` configuration variable to the ``waitress.serve`` function. 2. You can configure the proxy reverse server to pass a header, ``X_FORWARDED_PROTO``, whose value will be set for that request as the ``wsgi.url_scheme`` environment value. Note that you must also conigure ``waitress.serve`` by passing the IP address of that proxy as its ``trusted_proxy``. 3. You can use Paste's ``PrefixMiddleware`` in conjunction with configuration settings on the reverse proxy server. Using ``url_scheme`` to set ``wsgi.url_scheme`` ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ You can have the Waitress server use the ``https`` url scheme by default.: .. code-block:: python from waitress import serve serve(wsgiapp, listen='0.0.0.0:8080', url_scheme='https') This works if all URLs generated by your application should use the ``https`` scheme. Passing the ``X_FORWARDED_PROTO`` header to set `. Using ``url_prefix`` to influence ``SCRIPT_NAME`` and ``PATH_INFO`` ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ You can have the Waitress server use a particular url prefix by default for all URLs generated by downstream applications that take ``SCRIPT_NAME`` into account.: .. code-block:: python. Using Paste's ``PrefixMiddleware`` to set ``wsgi.url_scheme`` ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~: .. code-block:: python :term:`PasteDeploy` configuration file too, if your web framework uses PasteDeploy-style configuration: .. code-block:: ini [app:myapp] use = egg:mypackage#myapp [filter:paste_prefix] use = egg:PasteDeploy#prefix [pipeline:main] pipeline = paste_prefix myapp [server:main] use = egg:waitress#main listen = 127.0.0.1:8080 Note that you can also set ``PATH_INFO`` and ``SCRIPT_NAME`` using PrefixMiddleware too (its original purpose, really) instead of using Waitress' ``url_prefix`` adjustment. See the PasteDeploy docs for more information. Extended Documentation ---------------------- .. toctree:: :maxdepth: 1 design.rst differences.rst api.rst arguments.rst filewrapper.rst runner.rst glossary.rst Change History -------------- .. include:: ../CHANGES.txt .. include:: ../HISTORY.txt Known Issues ------------ - Does not support SSL natively. Support and Development ----------------------- The `Pylons Project web site
http://docs.pylonsproject.org/projects/waitress/en/latest/_sources/index.txt
CC-MAIN-2017-09
refinedweb
693
50.94
As I imagine you discovered, port scanning can be brutally slow, yet, in most cases, is not processing intensive. Thus, we can use threading to drastically improve our speed. There are thousands of possible ports. If it is taking 5-15 seconds per port to scan, then you might have a long wait ahead of you without the use of threading. Threading can be a complex topic, but it can be broken down and conceptualized as a methodology where we can tell the computer to do another task if the processor is experiencing idle time. In the case of port scanning, we're spending a lot of time just waiting on the response from the server. While we're waiting, why not do something else? That's what threading is for. If you want to learn more about threading, I have a threading tutorial here. So now we mesh the threading tutorial code with our port scanning code: import threading from queue import Queue import time import socket # a print_lock is what is used to prevent "double" modification of shared variables. # this is used so while one thread is using a variable, others cannot access # it. Once done, the thread releases the print_lock. # to use it, you want to specify a print_lock per thing you wish to print_lock. print_lock = threading.Lock() target = 'hackthissite.org' #ip = socket.gethostbyname(target) def portscan(port): s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) try: con = s.connect((target,port)) with print_lock: print('port',port) con.close() except: pass # The threader thread pulls an worker from the queue and processes it def threader(): while True: # gets an worker from the queue worker = q.get() # Run the example job with the avail worker in queue (thread) portscan(worker) # completed with the job q.task_done() # Create the queue and threader q = Queue() # how many threads are we going to allow for for x in range(30): t = threading.Thread(target=threader) # classifying as a daemon, so they will die when the main dies t.daemon = True # begins, must come after daemon definition t.start() start = time.time() # 100 jobs assigned. for worker in range(1,100): q.put(worker) # wait until the thread terminates. q.join() Since this tutorial was a pretty clean meshing of two tutorials, I have just included the lightly-commented code. If you're curious, you can check the embedded video, as things are still explained fairly step-by-step there.
https://pythonprogramming.net/python-threaded-port-scanner/
CC-MAIN-2022-40
refinedweb
407
67.04
Hi, In this project we’re going to control an Arduino with Voice commands with a Simple android App that I’ve create with MIT App Inventor. Watch the video below Resources for this project: - How To Use App Inventor With an Arduino - How to control outlets with 433Mhz Transmitter - Review of the HC-05 Bluetooth Module Parts Required - Arduino UNO – read Best Arduino Starter Kits - 1x Smartphone - 1x Bluetooth Module (for example HC-06 – Read my review here) - 1x 433Mhz Receiver and Transmitter - 2x Remote Controlled Sockets with a Remote Control (Controlled by 433Mhz Frequency) - 1x Breadboard - Jumper Cables You can use the preceding links or go directly to MakerAdvisor.com/tools to find all the parts for your projects at the best price! Receiver Circuit Click here to Download the RCSwitch Library. Install it an Re-open the Arduino IDE. Then open the example “ReceiveDemo_Advanced”. Upload the code and open the serial monitor. Start pressing the buttons from the Remote you’re going to use and save them. Final Circuit Upload and install the source code below - Arduino Sketch - Install the RC Switch Library - Voice_Control.apk - Voice_Control.aia (to edit the android app) Note: If you want to edit my app this is what you need to do. Download Voice_Control.aia and upload it to MIT App Inventor. Tips: -. - If the HC-05 Bluetooth Module asks for a password, It’s ’1234′. - Before Testing my “BlueLED” app, test if you’ve made all the connections correctly. How you can do that? Simply enter numbers (’1′, ’0′) into your serial monitor and your LED should be turning on and off. I hope you found this useful! Do you have any questions? Leave a comment down below! Thanks for reading. If you like this post probably you might like my next ones, so please support me by subscribing my blog and my Facebook Page. P.S. Click here to see how to use MIT App Inventor with Arduino P.P.S. Click here to learn how to use the 433Mhz Transmitter/Receiver circuit to control outlets. 24 thoughts on “Control your Arduino with Voice Commands [Android App]” Hi Rui! Do you know where I can find the Remote Controlled Sockets for 220V with Round Holes/pins (Like in Israel…)? I see in your picture you have such round holes – unlike the Amazom link you specified… Thanks! Hi Itzik, I’ve posted those sockets with Amazon Link because I know that a most of readers live in countries with those sockets. You can find on ebay for example or in a local store. Those sockets below should work just fine: Thanks for asking, Rui it’s amazing think you Thank you boubaker! It only works sending ‘1’ and ‘0’ , because i modify the program to send more ‘byte’ numbers, in this case 50 (char 2) and 51 (char 3) , but arduino only reads 1 and 0 , what can be the problem? I want to turn on and off a lamp (1 , 0) and the number 2 and 3 to ‘OPEN’ and ‘CLOSE’ a door, but dont works 🙁 In other programs like BlueTerm arduino works perfectly to 0 – 9 chars, but in this one i cant. Check out my other project where I’m controlling more than 2 buttons. It’s probably a problem with your android app. you made some sort of mistake I guess hi Rui santos i use bluetooth module 0755 with arduino uno i get the code from your website but it can not communication arduino and bluetooth .this problem from bluetooth module or arduino? Hi, can send me the link for your bluetooth module please? hii how to arduino-control DC motor via bluetooth Shield? Hi, This code should work almost exaclty the same, since the bluetooth modules work via serial communication. You might need to make a few changes to my Arduino code, but the android app should work just fine. I assume that your bluetooth shield as a couple of arduino code examples, take a look at them and compare with my code. Thanks for asking, Rui Hi!, I have one problem… I am able to decode the rf signal from my remote. But if I would like to send the rf signal to my sockets, nothing happens. I tested the transmitter. He is working. I tried so much, but I dont know why it isnt working. I does have this sockets : amazon.de/mumbi-Funksteckdosen-Set-Funksteckdose-Fernbedienung/dp/B002UJKW7K/ref=lh_ni_t?ie=UTF8&psc=1&smid=A3JWKAKR8XB7XF And this rf transmitter and receiver: amazon.de/receiver-Superregeneration-Wireless-Transmitter-Burglar/dp/B00ATZV5EQ/ref=lh_ni_t?ie=UTF8&psc=1&smid=A2DD36NWW1ABNW If you are receiving the rf signal from your remote it’s a good. Now you need to transmit the exact signal you’re receiving. Have you watched my youtube video? Yes. I watched your video. Icopied the decimalcode and did paste it in the code. The rf transmitter works but my socket is still just working with the remote. I checked it with the “send demo code” of the rc switch library. So what is false? I did open One of my sockets, and tested the socket receiver. He is working. Is the output current to low for the relay? Hi! I cant find a failure! Ok I actually does send this code to my arduino wich is connected to my 433 MHz Transmitter. –> #include <RCSwitch.h> RCSwitch mySwitch = RCSwitch(); void setup() { Serial.begin(9600); mySwitch.enableTransmit(10); } void loop() { mySwitch.send(5308693, 24); delay(5000); mySwitch.send(5308692, 24); delay(5000); } And this is my Received data which I received before… –> Button#1-ON: Decimal: 5308693 (24Bit) Binary: 010100010000000100010101 Tri-State: FF0F000F0FFF PulseLength: 329 microseconds Protocol: 1 Raw data: 10228,376,932,1100,348,372,936,1096,344,372,936,376,944,376,936,1092,348,372,940,372,948,368,936,376,944,372,940,372,948,372,936,1096,344,376,940,368,948,372,940,1096,348,368,940,1092,348,372,936,1096,344, Button#1-OFF: Decimal: 5308692 (24Bit) Binary: 010100010000000100010100 Tri-State: FF0F000F0FF0 PulseLength: 329 microseconds Protocol: 1 Raw data: 10224,380,932,1100,340,376,936,1096,344,376,932,376,944,376,936,1096,344,376,936,372,948,372,936,376,944,372,936,376,944,376,936,1096,344,376,936,372,948,372,936,1100,344,372,940,1092,348,368,940,372,948, hi, Rui santos I have a problem that when i click on connect and choose the bluetooth address this massage appear to me : “Error 507: Unable to connect. Is the device turned on? The amazon link for 2x Remote Controlled Sockets with a Remote Control (Controlled by 433Mhz Frequency) is broken. PLEASE ADAPT IT WITH TASKER APLICATION AND MAKE THE TUTORIAL AGAIN!! thanks a lot for explanation it works perfect but I wonder is there a way to explain how you make the android app steps ? beacuse I wanted to modify your code to use more than one device and different voice commands thanks HI rui. I made the reciever circuit and can run the program fine. The problem is when I open the serial monitor and press buttons on my remote nothing shows. Hi Mark, Which remote are you talking about? Hello there. i was wondering if this needs an internet for the speech recognition to work? or is there a way to do this with no internet connection like offline. thanks mate for the sources. Hi Rui, Can you tell me which part of the code process the voice ‘ON’ and turn variable ‘state’ into 1? And also for the ‘OFF’ state? I can’t really understand the code This is another great tutorial. You do a very good job explaining how to develop the project. Thank you David for the continuous support!
https://randomnerdtutorials.com/control-your-arduino-with-voice-commands/
CC-MAIN-2021-31
refinedweb
1,309
73.98
Ill get to this later. 3D Printed components and other hardware to make a PCB routing machine, which inherently can do other things. Ill get to this later. So I was fiddling around and figured making one of those breadboard deals would be a good test so I mocked one up in my CAD program and gave it a go. Turned out pretty good, I dont want to turn on the air compressor and wake my room mate up, so its still got the obvious pcb dust in the nooks and crannies, but electrically it would be fine for a small circuit. The only thing I dont really like about it is the screw hole cutouts, for some reason the Y carriage has some left and right play that only really showed up on the screw holes so I'll need to look at that. Heres a timelapse that for some reason youtube wont let go above 240p and some after pictures. So I started revamping this project back in August 2018 to make it better. I saw a good deal for a spindle on Amazon and redesigned most of the 3d printed parts. The changes to the parts were to mate the spindle to the machine and increasing the rigidity of the machine. With that I wanted to add an LCD to display the different states of the machine and the XYZ location of the spindle. So to add that new functionality I changed the controller out to an Arduino Mega (well a chinese cheap knock off because why spend $40 when I can spend $10). Back in the early days of this project before I decided to get the machine to talk in pseudo G-Code (it will accept the g-code spit out by flatcam no problem, but it doesnt follow all the silly parsing rules that I hate) I had developed an Auto Level method where, much like most other solutions, it maps the surface of the work peice and from there interoplated points between those mapped points. Well I scrapped that once I moved onto the pseudo g-code version. With the new revamped software, I included an M code that will map the surface of the work piece and a few more M codes that control it. I currently just finished doing some testing of the new machine and circuit and am very happy with the results. I induced an uneven surface with a sophisticated setup of pushing a screw driver under the board on one side to raise the right side of the board. I then mapped the surface and cutout a little "circuit". My next task is to create the 2 circuit boards, one for the Keypad and LCD, and one for the stepper drivers. The keypad and lcd will have its own fancy enclosure designed and printed out, as will the controller housing for the arduino and stepper drivers. Below are pictures of the current machine, controller circuitry, the CAD design of the circuit to be etched, and the etched "circuit" with a resistor inserted for scale. aight aight, i modified a syntax coloring editor i maed a wile bak to hilite dem gcodes, and den i made a wae to make dat program send da code dats in da editor to dee controller to mayke it do wut u want. da filez be uploaded, along wit da new firmware 4 dee ardweeno. nah im sayin. #getRekt. As a test, I wrote my name in DesignSparks whateverthefuckprogramitscalled, then exported it to a gerber file, then imported that to FlatCAM, which exported G Code, which I imported to this DCNC Terminal program, when then exported it via a serial connection to the controller, which then exported wood off this block of wood to make this. The feed rate was definitely too high on the top one, so I lowered the feedrate, and gave it another go (the one I highlighted the lettters with sharpie) and turned out aight. So G Code is dumb, luckily FlatCAM outputs G Code that makes sense to me. Instead of having a whole line of commands, that need to be interpretted in a different order then the order recieved, it just sends one command at a time which is easy to interpret on an Arduino, dont need to worry about loading the entire line, and then picking through and running stuff in different orders. The firmware is real simple, theres a 128 byte line buffer that reads in until it gets a carriage return or line feed, once it hits that, it goes to an interpret routine which runs through and if it sees what I call a word (X,Y,Z,F,P) it will read in a real number, and store that in its associated variable. If it sees a G, it will read in an integer, then read for words, and then execute that command. If it sees an M, then it currently just ignores it and continues interpreting the line. If it sees an error, then it will update an error flag variable, and then output "error: <description here>", to let the operator know whats up. If it doesnt see an error, after interpreting it will output "ready" telling the sending program it can send the next line. I implemented an actual bresenham line algorithm using longs instead of floats to make the routine quicker and easier to implement fairly accurate feed rates. The only real issue is when the Z axis is the dominant axis since it has 400 steps per mm versus the X/Y which only has 80 steps per mm. With that, if the Z axis has to step 400 times, and the X axis has to step 400 times, the X axis would need a longer minimum delay time between steps then the Z axis or else the motor will stall, words suck and I dont want to put in the effort to make my brain thoughts into them. So far it supports G0, G1, G4, G20, G21, G90, G91, G93, G94. It'll read the M commands and just ignores them, but the only M commands I find that flatcam outputs is like spindle on and shit like that, and since Im using a manually operated dremel, thats no issue. For developing, I've made all the locals static to get a better idea of how much RAM I'm using, which is only 429 bytes out of 2048, which leaves plenty on the stack for parameters and return addresses. And its only using 30% of the program memory, so I still got plenty of space for adding other shit to it. Ima make the program that sends g code files to the controller a little fancier and then do some test cuts. I'm pretty happy with it so far. EDIT: I did a quick tape-a-pen-to-the-dremel-carriage-and-tape-a-notepad-to-the-y-carriage test, and it looks pretty good. I stopped it during the drill hole phase and sent it back to 0,0,0, hence the line through it all. versus the flatcam expected so I made another small change to make use of some scrap rod i had to reinforce the x axis, its that top spar in the new image. I got some particle board to get a real flat surface for the top layer, drilled some holes, cut some holes, did a light sanding and put some sealant on it, and did the same for the bottom layer. Its got a way cleaner look now, the wires now get tucked underneath through some holes in the top layer and run towards the controller. I finally fixed the controller and the keypad to the fixture with screws instead of tape to make it easier to move around. Im thinking instead of make a new version of my CAD program, I'll just make a program that will interpret gerber files and send the appropriate commands to the machine. That way I'll be able to use whatever CAD program I want (that exports gerber files), but whatevs, i dunno. So I was tweaking parts in sketchup, developed a way to tension the belts for x and y axis (z is lead screws so dont need that there) and printed them out and it works pretty good. After that I redesigned all the parts making them beefier, which makes everything stiffer, which should make it more gooder over all. I havent touched the software, but whatevs. So Ive been working on that auto level thing, which turns out is pretty awesome because I can use V bits now, I can etch faster due to milling at a precise depth instead of milling at a range of depths due to the board flexed/uneven/whatevs. And the bits dont break like its going out of style. Heres pictures of board, and the unevenness I introduced to test the auto leveling. Its quite extreme and I dont forsee anything like this happening. I made it do 12 samples in the x axis and 12 in the y axis. I could have improved the results by increasing the sampling to 25, 20 (which will fit in the RAM just fine on the arduino, but its pushing it based on all my other globals and locals and other items pushed on the stack during its calling procedures, maybe I should make most of the locals static so I can get an accurate picture on compile since nothing is being called recursively?) Sorry the lighting sucks, but whatevs. As you can see, the bottom right hand corner where the change in surface depth was most extreme (change in depth versus change in lateral/horizantal direction) it didnt like so well, but as I said, I can increase the number of sample points to correct this. But with how extreme this example was, I shouldnt need to. Heres a time lapse video of it doing its thing, you can watch the Z axis coupling as a reference to see it adjust its height for the contours. It's a shitty video, for some reason my ipad wouldnt let me upload at Hi Def saying I had to be on wifi (which it was, its not a 4g or 3d ipad), but you can still see the auto level in action. Code wise it was pretty simple, I've done some interpolating in code in the past and I just applied that to this. For building the sample points, I just had 2 for loops going through the points and recording them void buildLevelTable(double maxDepth){ if (maxDepth>5) maxDepth=5;//since we're using a uchar for height data, its limits are -5.1 to 5. autoZeroZAxis(10); for (int y=0;y<LEVELH;y++){ for (int x=0;x<LEVELW;x++){ gLevelTable[x][y]=0; gotoZ(gTravelHeight,gZTravelSpeed); gotoXY((gLevelWidth/LEVELW)*x,(gLevelHeight/LEVELH)*y,gXYTravelSpeed); for (double deep=fmax(-5,gTravelHeight);deep<=maxDepth;deep+=0.02){ gotoZ(deep,gZTravelSpeed*5); if (!digitalRead(LEVELPIN) && !digitalRead(LEVELPIN) && !digitalRead(LEVELPIN)){ gLevelTable[x][y]=(double)(deep*25.0f); break; } } } } gotoZ(gTravelHeight,gZTravelSpeed); gotoXY(0,0,gXYTravelSpeed); } And for the interpolation to get the z offset at the current X/Y double zOffset(){ double x=((double)gCurStepX)/80.0; double y=((double)gCurStepY)/80.0; double bw=gLevelWidth/(LEVELW-1); double bh=gLevelHeight/(LEVELH-1); int tablex=floor(x/bw); int tabley=floor(y/bh); x-=((int)((double)tablex*bw)); y-=((int)((double)tabley*bh)); if (tablex>=LEVELW-1 || tabley>=LEVELH-1) return gTravelHeight; if (tablex<0 || tabley<0) return -10; double left=(gLevelTable[tablex][tabley+1]/25.0f-gLevelTable[tablex][tabley]/25.0f)/bh*y+gLevelTable[tablex][tabley]/25.0f; double right=(gLevelTable[tablex+1][tabley+1]/25.0f-gLevelTable[tablex+1][tabley]/25.0f)/bh*y+gLevelTable[tablex+1][tabley]/25.0f; return (right-left)/bw*x+left; } Dont mind the poor code. So, I had a free pin on the arduino, and I'd hate to see it go to waste, so its going to become the sense pin for auto leveling. I got a wire hooked up to ground, and the other wire hooked up to the free pin (D8). They both have aligator clips, so you clip one to the board, and one to the end mill, and then run the auto leveling routine. It probes the entire board (with a width and height you specify) at intervals you decide (so for my board, its 100mm x 70mm, and I have it probe 10 in X axis, and 7 in the Y axis) and gets the Z offset compared to home. It puts all that into a table and becomes accessible during x/y movements. Every step of the xy axis in my gotoXY routine it says "Hey, im at a new position, lets check to see what my new Z height needs to be to get that route depth i want" so it does a bunch of math and corrects the Z position to give you that cutting depth you want. Its a work in progress and I've gotten some nice test cuts, but Ive also found some bugs and need to decide how I want to actually have it implemented in code. right now the codes a mess, and very awkward. I have it implemented in my gotoXY routine, with a parameter asking if you want to make auto leveling corrections. I'm thinking I'll make a gotoXYAutoLevel routine that will do all this, and it will leave the gotoZ and gotoXY commands pure and free of this mess. or should it be gotoXYZAutoLevel? That'd probably be easier. I dont know. I'm going to find out. #hashtagssuck i made a lil video were i go through what i got so far and talk a lilttle about what i want to do in the future with it. So the design I posted a few days ago, was all kinds of wrong. Well, not all kinds, just one kind. I forgot to flip the image of the stepper driver when doing my design, so the pin locations were not correct. I would have had to of solder the headers into the bottom of the board, and then plugged them in upside down. Well, I didnt want to do that, so I redesigned it and in the process added a feature for a fan, or some other 12v <1A accessory you want. Then I milled out that new board, soldered everything into place and then tested it out. The x and y axis's worked just fine, but the Z was all dicked up. Wasnt doing nothing. Then I realized I shouldve tested that pin mapping to the arduino on a bread board before I milled that board. I was trying to use pin's A6 and A7 as digital output pins, but little did I know, you cant. You can only read from those pins. So to fix that and not have to etch a new board, I figured since there only input, i'll drill two new holes in those lines and put a jumper to 2 unused digital IO pins on the other side of the arduino (thank god I thought semi ahead and made traces from unused pins to drilled holes to easily fix problems like these). So I did that, and it works fine now. super awesome. I'll post pictures and shit once I get a new housing printed out for the new desing. كيف حالك أنا معجب بك وعملك يمكن أن تساعد في بناء هذا المشروع الجميل؟ لدي جميع المكونات المطلوبة ولكن خطة الاتصال مفقودة أو توصيل الأسلاك مع جميع الأجزاء هل تسمح لي الرسم التوضيحي لتوصيل شكرا
https://hackaday.io/project/12090-3d-printed-pcb-mill
CC-MAIN-2019-51
refinedweb
2,637
72.8
The most popular chess game programming problem! Even if you haven’t played chess lets make this easy and simple to understand. This Knight’s tour problem defines that if it is possible to travel all the possible blocks of chess from the starting position of the chessboard. To be clear a Knight can move only in eight specific directions/blocks from its current location. Now let’s get playing with the problem starting with some pre-requisites. Backtracking is basically recursion with undoing the recursive step if a solution for the given path does not exist. Recursion The most common programming paradigm is called recursion. In this algorithm we recur back again and again i.e. we call onto the same function possibly with different parameter values to recur over our problem. It mainly consists of a base condition to terminate the calls and a main recursive call part to recurring over the problem. Let us see a quick example. Traversing a matrix of size 4 x 4 recursively: Code implementation: #include <bits/stdc++.h> using namespace std; //simple travelling that works for square matrix void recur_travel(int mat[4][4],int row,int col,int n){ if(row == n && col == n) return; // base condition mat[row][col] = 1; //marking visited cells recur_travel(mat,row+1,col+1,n); //resursion travels to row+1, col+1 //if the matrix is not square this will fail and end up in memory error as //the stack will keep on building and never have the base condition //p.s. just for explaination } int main() { // your code goes here int mat[4][4]; memset(mat,0,sizeof(mat)); recur_travel(mat,0,0,4); //calling recur function //print the path matrix for(int i = 0;i<4;i++) { for(int j = 0;j<4;j++) cout<<mat[i][j]<<" "; cout<<endl; } return 0; } Recursion always uses a recursive stack to make the function calls so evidently recursion will always have an O(n) space complexity to start with. Now when we have revised our recursion prerequisite let us dive into the problem and understand what’s happening in it. Knight tour problem is the classic backtracking problem which asks if the Knight can travel all the cells in the chessboard starting at the left top cell position. Backtracking It refers to the algorithm that works right after the recursive step i.e. if the following recursive step results in a false then it retraces back and rectifies the changes made by the following recursive function. Here in this problem of Knight tour, the knight can move the specified blocks/cells mentioned in the given row, column array: row[8] = {2, 1, -1, -2, -2, -1, 1, 2 }; column[8] = {1, 2, 2, 1, -1, -2, -2, -1 }; Few valid steps are shown from cell I,j. Algorithm Steps: - Travel the matrix to only valid cells marked in the row, column array as coordinates. - If the step is valid then mark the current move in that cell of the matrix. - If the current cell is not valid or like it does not help in a complete traversal of the matrix then recur back and change the current value of the matrix which was updated before the recursive step. - Recur until move equals 8 x 8 = 64 – chess board general size. Code Implementation: #include <bits/stdc++.h> #define N 8 using namespace std; int mat[N][N]; //simple travelling that works for square matrix int row[N] = {2, 1, -1, -2, -2, -1, 1, 2 }; int col[N] = {1, 2, 2, 1, -1, -2, -2, -1 }; bool isValid(int r,int c){ return (r>=0 && c>=0 && r<N && c<N && mat[r][c] == -1); } bool knight_tour(int r,int c,int move){ if(move == N*N) return true; // base condition int move_x, move_y; for(int k = 0;k<N;k++){ move_x = r + row[k]; move_y = c + col[k]; if(isValid(move_x,move_y)){ mat[move_x][move_y] = move + 1; //storing the move number in matrix if(knight_tour(move_x,move_y,move + 1) == 1)return true; else mat[move_x][move_y] = -1;//backtracking } } return false; } int main() { // your code goes here memset(mat,-1,sizeof(mat)); mat[0][0] = 1; if(knight_tour(0,0,1)){//calling recur function //print the path matrix for(int i = 0;i<N;i++) { for(int j = 0;j<N;j++) cout<<mat[i][j]<<" "; cout<<endl; } } else cout<<"Not possible\n"; return 0; } Time Complexity: O(8^(NN)), as there are NN cells in the board/matrix. Space Complexity: O(N*N), size of the board and general recursive calls are not considered in this calculation. Now let us see one more variant of Knight Tour probability problem: In this problem everything remains the same, the knight has to travel the matrix with valid moves, only here we have to find the probability of the knight remaining on the board after K moves. This can easily be solved by counting for each cell how many times the knight has moved to the given cell and sum them up and at last, dividing the answer by N as for each cell of a N x N matrix the probability of valid move if the move to stay there only i.e. 1/N. Note: Better to divide by the float value of N for exact answer and probability. Code Implementation: #include <bits/stdc++.h> using namespace std; double knightProbability(int N, int K, int sr, int sc) { vector<vector<double>> dp(N, vector<double>(N)); //direction of valid moves in the matrix vector<int> dr = {2, 2, 1, 1, -1, -1, -2, -2}; vector<int> dc = {1, -1, 2, -2, 2, -2, 1, -1}; dp[sr][sc] = 1; //marking the starting cell for (; K > 0; K--) { vector<vector<double>> dp2(N, vector<double>(N)); for (int r = 0; r < N; r++) { for (int c = 0; c < N; c++) { for (int k = 0; k < 8; k++) { int cr = r + dr[k]; int cc = c + dc[k]; if (0 <= cr && cr < N && 0 <= cc && cc < N) { dp2[cr][cc] += dp[r][c] / 8.0; //storing the probability in each cell and summing them up. Time Complexity: O(NNK), where K is the number of moves that can be moved by the knight. Space Complexity: O(N*N), where N is the size of the matrix/board. Backtracking hence becomes a very powerful tool when we have to move through all the available possibility by recurring over the subproblems and if we could not reach a solution we trace back and correct it. Although the space complexity is quite high along with the time complexity but traversing through all possible options will require a definite amount of complexity than general algorithms. Hope the use of the algorithm will be easier now with the application and hope this article helps an aspiring software developer and programmer. By Aniruddha Guin
https://www.codingninjas.com/blog/2020/11/23/backtracking-the-knights-tour-problem/
CC-MAIN-2021-17
refinedweb
1,150
53.34
- Key: DDSXTY13-4 - Status: open - Source: Real-Time Innovations ( Gerardo Pardo-Castellote) - Summary: The XML type representation (Section 7.3.3) has an associated XSD that defines the legal XML documents that define DDS Types. In the XML type representation builtin annotations on members appear as "attributes" on the element that describes the member. For example see example in section 7.3.2.5.1.2 (Members): <struct name="structMemberDecl"> <member name="my_key_field" type="int32" key="true" optional="false"/> </struct> These builtin annotations (key, optional), have default values when they are not present. This is defined in the IDL4+ specification. For example when the annotation "key" is not present, the (default) value is "false" when annotation "optional" is not present, the (default) value is "false". There is also a "default" value when the annotation is present with no parameters, but this is not allowed in the XSD. According to the XSD syntax, e.g. see The "default" value specified for an attribute is the value interpreted when the attribute is not present. Therefore we have two options: - Have the XSD specify default values for these attributes to match the "IDL4+ defaults when the attribute is not present - Have the XSD specify no default value. Currently this is not done correctly for some annotations. For example the XSD for structure members has wrong defaults for all the attributes: <xs:complexType <xs:complexContent> <xs:extension <xs:attribute <xs:attribute <xs:attribute <xs:attribute <xs:attribute </xs:extension> </xs:complexContent> </xs:complexType> However it seems like the best solution is to remove the specification of a default value from the XSD. The problem is that when the default is specified the XSD parsers will fill the default value even if the attribute is not specified and it becomes impossible for the application that uses the parser to know if the attribute was there or not in the first place. This would make it impossible to transform between IDL and XML and back to IDL and get the same result back because all annotations would appear present to the XML parser even if they were not entered buy the user. Therefore we recommend removing the specification of 'default" value for all XML attributes that correspond to builtin annotations. This should be done both in Annex A and the machine readable dds-xtypes_type_definition_nonamespace.xsd - Reported: DDS-XTypes 1.2 — Mon, 24 Apr 2017 16:16 GMT - Updated: Mon, 24 Apr 2017 16:16 GMT DDSXTY13 — XSD for XML type representation should not specify default values for attributes representing annotations - Key: DDSXTY13-4 - OMG Task Force: DDS-XTYPES 1.3 RTF
http://issues.omg.org/issues/DDSXTY13-4
CC-MAIN-2017-39
refinedweb
435
50.16
Currying - An Introduction With Function Declarations & Expressions Stephen Charles Weiss Originally published at stephencharlesweiss.com on ・3 min read For a long time, I hated seeing functions like this: const someFn = a => b => a + b;. I thought this was just “code golf” (the idea of reducing a function to its shortest incarnation) without concern for how it would be received by the reader of the code later. This can definitely be true and I’m still generally opposed to golf for the sake of itself. But, what I missed was that writing functions in this way, that is - using currying, can actually be really helpful. Currying enables a greater control over what a function is doing (by reducing each function’s scope) by leveraging the composability of functions. Let’s start with an example of addition with function declarations. Later, we’ll move into ES6 and function expressions. The function’s purpose is trivial, but it was this example that helped me see how currying worked! function addOne(a) { return a + 1 } function addNums(a, b) { return a + b } function addNumsCurried(a) { return function addBy(b) { return a + b } } If we know we always want to add by one, addOne is perfectly reasonable. If we are okay always setting two variables, we can use addBy. addByCurried seems to be fundamentally different, but it actually allows us to determine what we want to add by separately from our base. So, we could have the following const addByTwo = addNumsCurried(2) const addByThree = addNumsCurried(3) console.log( `The typeof addByTwo and addByThree --> `, typeof addByTwo, typeof addByThree ) // The typeof addByTwo and addByThree --> function function It’s important to note that at the point of assignment, addByTwo and addByThree are functions. This is great because it means that we invoke them! We can see this by hopping back into our console and testing it: console.log(addByTwo) // // ƒ addBy(b) { // return a + b; // } Specifically, they are the function addBy which takes a single parameter. addByTwo(3) // 5 addByThree(3) // 6 Okay, now let’s transition to function expressions and ES6 (for ease of comparison, I’m assuming we’re in a totally new global scope, so we won’t have any name collision issues or previously assigned const variables): const addOne = a => a + 1 const addNums = (a, b) => a + b const addNumsCurried = a => b => a + b Wait, what? AddNumsCurried takes advantage of two syntactic sugar features that arrow functions provide: - If there is only one parameter, parentheses ( ()) are optional - If the return statement is only one line, there’s an implicit return and braces ( {}) are not necessary That means addNumsCurried could alternatively be written as: const addNumsCurriedAlt = (a) => { return (b) => { return { a + b } } } This looks pretty similar to how we had it with function declarations. That’s the point! What if we take it one step further and use our new adding prowess to the elements of an array? const addOneToEachBasic = ar => ar.map(num => num + 1) const addOneToEachCompartmentalized = ar => ar.map(num => addOne(num)) const addOneCurried = ar => ar.map(addOne) Personally, the difference between addOneToEachComparatmentalized and addOneCurried is when the light bulb when off! I’d run into this issue a ton with .reduce where I wanted to separate my reducer and define it separately, but I always ran into trouble! It wasn’t until I saw these two side by side producing the same results that I got a better understanding of what was happening. Let’s throw in a wrinkle: Our array is full of numbers, but they can be represented as strings or numbers (but always one or the other). To check we can use a ternary to check the type. We’ll assign the anonymous function to the variable ensureNum. // add type checking to make sure everything is a number const ensureNum = val => (typeof val == 'string' ? Number(val) : val) We want to do that before we add: const addOneToEachWithType = ar => ar.map(ensureNum).map(num => num + 1) const addOneToEachWithTypeAndCurry = ar => ar.map(ensureNum).map(addOne) Last step: Let’s now say we want to not just add by one, but any number. We can use our same currying techniques from function declarations to write the function expression in the following way. const addByToEachWithType = (ar, by) => ar.map(ensureNum).map(addNumsCurried(by)) H/t to Jacob Blakely and his great write up on currying - which served as both the inspiration for this exercise and my guide. Seriously, though. What is a progressive web app? “unofficial” community logo for PWA I’ve been familiar with the “progressive w... Thanks for writing this. I learned about currying in school, but in the years since I've mostly written in the prevailing "objects plus side effects" style. In my experience, even though languages like JS and Ruby have first-class functions, one mostly writes methods there, and pure functions aren't the norm, so you don't get a ton of opportunities to make use of currying. But I'm trying now to revisit FP, immutable data, etc., in my free time (largely because of this talk). I like the const addOneCurried = ar => ar.map(addOne)example. Another nice thing trick is (assuming map is a function rather than a method) to curry mapwith your array, so then you can do things like: curriedMap(compose(increment, square, ...)). The gold standard, IMO, is when the language automatically does partial application, but it's pretty good if you can get a curried version of a non-curried function on the spot, e.g. curry(suchWow)in JS with the Ramda library. I don't like baking it into the function definition, since you lose the ability to call it with all of the arguments at once. (Clojure takes a neat approach -- it doesn't do automatic partial evaluation, but you can curry with eg. (partial such_wow)and there's a macro called ->>which enables a pointfree style.) FWIW I really like Professor Frisby's Mostly Adequate Guide... as an introduction to currying & pointfree style in JS. Chapters 3-5 are the relevant ones. There are some live in-browser exercises too.
https://dev.to/stephencweiss/currying-an-introduction-with-function-declarations-expressions-481m
CC-MAIN-2019-35
refinedweb
1,015
54.42
Directional Wind Meter Using SDP3x Motivation Imagine you are model pilot and for a perfect start of your model aircraft you need the actual weather conditions, especially the speed and direction of the wind. So you can reduce the risk to crash at the start and you can decide for the ideal orientation into the wind for a take off. There are anemometers like in figure 1 which give an indication about the windspeed and by turning it manually, the direction can roughly estimated. But this gives just a snippet of the actual conditions. A more robust and reliable system would be preferable. Other devices on the market like vane style anemometer, see figure 2,. Figure 1: Handheld Anemometer (Anemometer (c)Tpdwkouaa, CC BY-NC-SA 4.0) Figure 2: Vane Sytle anemometer (NOAA) Approach measurement principle is quite simple, see figure 3 for details. In a tube with two openings, one opening faces towards the wind and sees the total pressure, the other opening sees the static pressure of the ambient (yellow marking in the picture). If the wind now blows into the front opening, a differential pressure (DP) is generated proportional to the velocity (Pt - Ps). This has to be scaled by the density of air rho. This leads to the final formula v^2 = 2*(Pt - Ps)/rho. The relation of differential pressure versus velocity is shown in figure 5. A single pitot tube measures only the wind speed from one direction. To get an angle of attack, it is necessary to combine two sensors or more to an array.. The SDP3x output needs to be set to “differential pressure temperature compensation” (please refer to SDP3x datasheet for more details). Then the following conversion factor (first two terms) preceeds the pitot tube true airspeed (TAS) formula: whereas ρ(p0,T0 ) is the air density (1.1289 kg/m^3) at calibration pressure p0 (966 mbar) and calibration temperature T0 (298.15 K), while ρ is the current absolute air pressure (e.g. measured with a barometric absolute pressure sensor) and T the current differential pressure sensor temperature. To obtain Indicated airspeed (IAS) use the formula: whereas ρsealevel is the air density at sea level in the International Standard Atmosphere (1.225 kg/m^3), ρ0 is the calibration pressure (966 mbar), while ρ is the current absolute air pressure (e.g. measured with a barometric absolute pressure sensor) and dp_sensor is the output of Sensirion’s SDP3x sensor. Figure 3: How a pitot tube works Figure 4: Comparison of SDP technology versus membrane sensor Figures 5: Differential pressure over velocity Implementation To get the wind speed and the specific direction at least two sensors are needed, placed orthogonal to each other. To get a better resolution, we decided to use three SDP3x sensors (S1, S2, S3) with a 60° rotation llike shown in figure 6. For those sensors a PCB was made, on which the sensors were mounted. The mechanical part is made of 3D printed material. A picture of the setup is shown in figure 7. The device was tested while mounted on a step motor in front of a self built wind channel with reference sensor. As shown in figure 8, the result looks similar to the theoretical graph in figure 5. An overview of different set-ups used for reference measuring the system can be found in figures 9a to 9c. Figure 6: Schematic of sensor array Figure 7: Picture of the wind meter Figure 8: Reference measurement SDP3x against mass flow meter Figure 9a: 3 SDP3x sensors in 60° rotation in a wind meter set-up on a stepper motor (side view) Figure 9b: 3 SDP3x sensors in 60° rotation in a wind meter set-up on a stepper motor (front view) Figure 9c: 3 SDP3x sensors in 60° rotation in a larger wind meter set-up on a stepper motor The measuring results are shown in the following plots figure 10a with a wind speed of 2.8 m/s and figure 10b with a wind speed of 7.8 m/s. The lines show each of the three SDP sensors. The dotted lines are the fit of a sinusoidal function. Figure 10a: Measurement result for rotation at 2.8 m/s Figure 10b: Measurement result for rotation at 7.8 m/s Code to Calculate Wind Speed and Direction Calculate Wind Direction import numpy as np def calculate_omega(dp1, dp2, dp3): # parameter for sinus curve, estimated on one measurement at 7.2 m/s b=0.64 s1 = dp1+dp2 s2 = dp2+dp3 g1 = s2/s1 g2 = 1/g1 # |g1|==|g2| for omega = 3b/2 if np.abs(g1)<(3*b/2): # lookup based on g1 magn = 2*np.sqrt(1-g1+g1**2) w = np.pi/4+np.arctan((2*g1-1)/np.sqrt(3))-(np.sign(s1)-1)*np.pi/2 else: # lookup based on g2 w = np.pi/2-np.arctan((2*g2-1)/np.sqrt(3))-(np.sign(s2)-1)*np.pi/2 return w Calculate Wind Speed def calculate_amp(dp1,dp2, dp3): rho = 1.2 # scale factor sqrt(2) estimated by measurements # A in differential pressure A = np.sqrt(2)*(np.abs(dp1) + np.abs(dp2) + np.abs(dp3)) # A in m/s A = np.sqrt(2 * rho * A) return A
https://developer.sensirion.com/archive/applications/directional-wind-meter-using-sdp3x/
CC-MAIN-2022-21
refinedweb
885
54.63
I was able to run OpenCV with ROS. I have run basic things (like loading images, getting video stream ect) but I am trying to run SURF or SIFT and I get this error: Linking CXX executable /home/jim/ROS/catkin_ws/devel/lib/opencv_ros/open CMakeFiles/open.dir/src/open.cpp.o: In function `main': open.cpp:(.text+0x15b): undefined reference to `cv::SURF::SURF(double, int, int, bool, bool)' CMakeFiles/open.dir/src/open.cpp.o: In function `cv::SURF::~SURF()': open.cpp:(.text._ZN2cv4SURFD1Ev[_ZN2cv4SURFD1Ev]+0xe): undefined reference to `vtable for cv::SURF' open.cpp:(.text._ZN2cv4SURFD1Ev[_ZN2cv4SURFD1Ev]+0x26): undefined reference to `vtable for cv::SURF' open.cpp:(.text._ZN2cv4SURFD1Ev[_ZN2cv4SURFD1Ev]+0x2e): undefined reference to `vtable for cv::SURF' open.cpp:(.text._ZN2cv4SURFD1Ev[_ZN2cv4SURFD1Ev]+0x3b): undefined reference to `VTT for cv::SURF' collect2: error: ld returned 1 exit status make[2]: *** [/home/jim/ROS/catkin_ws/devel/lib/opencv_ros/open] Error 1 make[1]: *** [opencv_ros/CMakeFiles/open.dir/all] Error 2 make: *** [all] Error 2 Invoking "make" failed The project name is "opencv_ros" and the code is in "open.cpp". I have include these headers: #include <stdio.h> #include <iostream> #include "opencv2/core/core.hpp" #include "opencv2/features2d/features2d.hpp" #include "opencv2/nonfree/features2d.hpp" #include "opencv2/highgui/highgui.hpp" #include "opencv2/nonfree/nonfree.hpp" the thing is that when I run the same code in my c++ IDE (code::blocks) it runs just fine. I even found the header file and saw that it's in the correct place (according to the include I gave). I am running ubuntu 14.04 64bit on Indigo with OpenCV 2.4.9 . Why cant it find SURF in ROS?
https://answers.ros.org/questions/191380/revisions/
CC-MAIN-2019-47
refinedweb
278
53.78
I'm trying to create a route using Network Analyst. Just at the beginning stages of setting this up for a school project. I have a feature class called RoadsClipped, and successfully set up a Network Dataset called Roads_ND using it. Now I'm trying to customize the routing cost using evaluators. I want to use attributes in RoadsClipped, but anything I've tried to access those attributes results in a "Network element evaluator error". I've tried both VBScript and Python. Some simple examples are below. I've tried literally hundreds of variations and nothing work unless I forego trying to look at the attribute entirely. These are just simple tests to see if I can get at the attributes in RoadsClipped, and not the real logic I'll eventually use. Python: def SetCost(value): a=Edge.AttributeValueByName(value) c=0 if l!=0: c=100/a return c value=SetCost("SPEED_LIM") VBScript: a=0 a=Edge.AttributeValueByName("SPEED_LIM") value = a Any suggestions? More information. I'm using ArcMap Desktop 10.5.1 on Windows 7. RoadsClipped has about 2000 records.
https://community.esri.com/thread/214314-network-analyst-and-script-evaluators-accessing-attributes-get-network-element-evaluator-error
CC-MAIN-2019-22
refinedweb
182
61.43
Introduction: Previously we learned XML parsing in Xamarin.Forms. And this article demonstrates how to consume a RESTful web service and how to parse the JSON response from a Xamarin.Forms application. Previously we learned XML parsing in Xamarin.Forms. And this article demonstrates how to consume a RESTful web service and how to parse the JSON response from a Xamarin.Forms application. Requirements: - This article source code is prepared by using Visual Studio 2017 Enterprise. And it is better to install latest visual studio updates from here. - This article is prepared on a Windows 10 machine. - This sample project is Xamarin.Forms PCL project. - This sample app is targeted for Android, iOS & Windows 10 UWP. And tested for Android & UWP only. Hope this sample source code would work for iOS as well. Description: This article can explain you below topics: 1. How to create Xamarin.Forms PCL project with Visual studio 2017? 2. How to check network status from Xamarin.Forms app? 3. How to consuming webservice from Xamarin.Forms? 4. How to parse JSON string? 5. How to bind JSON response to ListView? Let's learn how to use Visual Studio 2017 to create Xamarin.Forms project. 1. How to create Xamarin.Forms PCL project with Visual studio 2017? Before to consume webservice, first we need to create the new project. - Launch Visual Studio 2017/2015. - On the File menu, select New > Project. - The New Project dialog appears. The left pane of the dialog lets you select the type of templates to display. In the left pane, expand Installed > Templates > Visual C# > Cross-Platform. The dialog's center pane displays a list of project templates for Xamarin cross platform apps. - In the center pane, select the Cross Platform App (Xamarin.Forms or Native) template. In the Name text box, type "RestDemo". Click OK to create the project. - And in next dialog, select Blank App=>Xamarin.Forms=>PCL.The selected App template creates a minimal mobile app that compiles and runs but contains no user interface controls or data. You add controls to the app over the course of this tutorial. - Next dialog will ask for you to confirm that your UWP app support min & target versions. For this sample, I target the app with minimum version 10.0.10240 like below: 2. How to check network status from Xamarin.Forms app? Before call webservice, first we need to check internet connectivity of a device, which can be either mobile data or Wi-Fi. In Xamarin.Forms, we are creating cross platform apps, so the different platforms have different implementations. So to check the internet connection in Xamarin.Forms app, we need to follow the steps given below. Step 1: Go to solution explorer and right click on your solution=>Manage NuGet Packages for solution. Now search for Xam.Plugin.Connectivity NuGet package. On the right side, make sure select all platform projects and install it. Step 2: In Android platform, you have to allow the user permission to check internet connectivity. For this, use the steps given below. Right click on RestDemo.Android Project and select Properties => Android Manifest option. Select ACCESS_NETWORK_STATE and INTERNET permission under Required permissions. Step 3: Create a class name "NetworkCheck.cs", and here I placed it in the Model folder. After creating a class, add below method to find network status. - namespace RestDemo.Model - { - public class NetworkCheck - { - public static bool IsInternet() - { - if (CrossConnectivity.Current.IsConnected) - { - return true; - } - else - { - // write your code if there is no Internet available - return false; - } - } - } - } 3. How to consuming webservice from Xamarin.Forms? We can consume webservice in Xamarin using HttpClient. But it is not directly available, and so we need to add "Microsoft.Net.Http" from Nuget. Step 1: Go to solution explorer and right click on your solution=>Manage NuGet Packages for a solution => search for Microsoft.Net.Http NuGet Package=>on the right side, make sure select all platform projects and install it. Note: To add "Microsoft.Net.Http", you must install "Microsoft.Bcl.Build" from Nuget. Otherwise, you will get an error like "Could not install package 'Microsoft.Bcl.Build 1.0.14'. You are trying to install this package into a project that targets 'Xamarin.iOS,Version=v1.0', but the package does not contain any assembly references or content files that are compatible with that framework." Step 2: Now it is time to use HttpClient for consuming webservice and before that we need to check the network connection. Please note that In below code you need to replace your URL, or you can also find the demo webservice url from the source code given below about this article. - public async void GetJSON() - { - // Check network status - if (NetworkCheck.IsInternet()) - { - var client = new System.Net.Http.HttpClient(); - var response = await client.GetAsync("REPLACE YOUR JSON URL"); - string contactsJson = await response.Content.ReadAsStringAsync(); //Getting response - } - } 4. How to parse JSON response string in Xamarin.Forms? Generally, we will get a response from webservice in the form of XML/JSON. And we need to parse them to show them on mobile app UI. Let's assume, in the above code we will get below sample JSON response which is having a list of contacts. - { - "contacts": [{ - "id": "c200", - "name": "Ravi Tamada", - "address": "xx-xx-xxxx,x - street, x - country", - "gender": "male", - "phone": { - "mobile": "+91 0000000000", - "home": "00 000000", - "office": "00 000000" - } - - }] - } So to parse above JSON, we need to follow steps below: Step 1: First we need to generate the C#.net class for JSON response string. So I am using for simply building a C# class from a JSON string. And it's very important is to make the class members as similar to JSON objects otherwise you will never parse JSON properly. Finally, I generate below the C# root class name is "ContactList" - using System; - using System.Collections.Generic; - using System.Linq; - using System.Text; - using System.Threading.Tasks; - - namespace RestDemo.Model - { - public class Phone - { - public string mobile { get; set; } - public string home { get; set; } - public string office { get; set; } - } - - public class Contact - { - public string id { get; set; } - public string name { get; set; } - public string email { get; set; } - public string address { get; set; } - public string gender { get; set; } - public Phone phone { get; set; } - } - - public class ContactList - { - public List<Contact> contacts { get; set; } - } - } Step 2: In Xamarin, we need to add "Newtonsoft.Json" Nuget package to parse JSON string. So to add Newtonsoft.Json, go to solution explorer and right click on your solution=>select Manage NuGet Packages for a solution => search for Newtonsoft.Json NuGet Package=>on the right side, make sure select all platform projects and install it. Step 3: And finally, write below code to parse above JSON response. - public async void GetJSON() - { - //Check network status - if (NetworkCheck.IsInternet()) - { - - var client = new System.Net.Http.HttpClient(); - var response = await client.GetAsync("REPLACE YOUR JSON URL"); - string contactsJson = await response.Content.ReadAsStringAsync(); - ContactList ObjContactList = new ContactList(); - if (contactsJson != "") - { - //Converting JSON Array Objects into generic list - ObjContactList = JsonConvert.DeserializeObject<ContactList>(contacts); - } - //Binding listview with server response - listviewConacts.ItemsSource = ObjContactList.contacts; - } - else - { - await DisplayAlert("JSONParsing", "No network is available.", "Ok"); - } - //Hide loader after server response - ProgressLoader.IsVisible = false; - } 5. How to bind JSON response to ListView? Generally, it is very common scenario is showing a list of items in ListView from the server response. Let's assume we have below JSON response from the server via webservice. - { - "contacts": [{ - "id": "c200", - "name": "Ravi Tamada", - "address": "xx-xx-xxxx,x - street, x - country", - "gender": "male", - "phone": { - "mobile": "+91 0000000000", - "home": "00 000000", - "office": "00 000000" - } - }, - { - "id": "c201", - "name": "Johnny Depp", - "address": "xx-xx-xxxx,x - street, x - country", - "gender": "male", - "phone": { - "mobile": "+91 0000000000", - "home": "00 000000", - "office": "00 000000" - } - }, - { - "id": "c202", - "name": "Leonardo Dicaprio", - "address": "xx-xx-xxxx,x - street, x - country", - "gender": "male", - "phone": { - "mobile": "+91 0000000000", - "home": "00 000000", - "office": "00 000000" - } - - } - ] - } See there is 3 different items in above JSON response and if we want to plan for showing them in ListView first we need to add below Xaml code in your ContentPage(JsonParsingPage.xaml). - <?xml version="1.0" encoding="utf-8" ?> - <ContentPage xmlns="" - xmlns:x="" - x: - <Grid> - <Grid> - <Grid.RowDefinitions> - <RowDefinition Height="Auto"/> - <RowDefinition Height="*"/> - </Grid.RowDefinitions> - <Label Grid. - <ListView x: - <ListView.ItemTemplate> - <DataTemplate> - <ViewCell> - <Grid HorizontalOptions="FillAndExpand" Padding="10"> - <Grid.RowDefinitions> - <RowDefinition Height="Auto"/> - <RowDefinition Height="Auto"/> - <RowDefinition Height="Auto"/> - <RowDefinition Height="Auto"/> - </Grid.RowDefinitions> - <Label Text="{Binding name}" HorizontalOptions="StartAndExpand" Grid. - <Label Text="{Binding email}" HorizontalOptions="StartAndExpand" Grid. - <Label Text="{Binding phone.mobile}" HorizontalOptions="StartAndExpand" Grid. - - <BoxView HeightRequest="2" Margin="0,10,10,0" BackgroundColor="Gray" Grid. - </Grid> - </ViewCell> - - </DataTemplate> - </ListView.ItemTemplate> - </ListView> - </Grid> - <ActivityIndicator x: - </Grid> - </ContentPage> See in above code there is a ListView which was binded with relevant properties (name, email, mobile) which was mentioned in our class name called Contact.cs. Finally, we need to bind our ListView with list like below:(Please find which was already mentioned in the 4th section of GetXML method at 17th line) - //Binding listview with server response - listviewConacts.ItemsSource = ObjContactList.contacts; Demo Screens from Android: Demo Screens from Windows10 UWP: You can directly work on below sample source code that is having the functionality of JSON parsing. :) Have a nice day by Subramanyam Raju :) Wonderful bloggers like yourself who would positively reply encouraged me to be more open and engaging in commenting. So know it's helpful.. Hadoop Training in Chennai Base SAS Training in Chennai I really appreciate information shared above. It’s of great help. If someone want to learn Online (Virtual) instructor lead live training in QLIKSENSE, kindly contact us MaxMunus Offer World Class Virtual Instructor led training on QLIKS This is extremely great information for these blog!! And Very good work. It is very interesting to learn from to easy understood. Thank you for giving information. Please let us know and more information get post to link. Oracle Training in Chennai This is extremely helpful info!! Very good work. Everything is very interesting to learn and easy to understood. Thank you for giving information. Good Work. Keep it up! Android Runtime Permissions Thank you for sharing such a nice and interesting blog with us. Online Public School. I have seen that all will say the same thing repeatedly. But in your blog, I had a chance to get some useful and unique information. I would like to suggest your blog in my dude circle. GD Goenka Rohini CCNA Training in Chennai Cognos Training in Chennai Very Interesting information shared than other blogs Thanks for Sharing and Keep updating us Nice information regarding JSON parsing My sincere thanks for sharing this post Please Continue to share this post Dot Net Training in Chennai really nice blog has been shared by you. before i read this blog i didn't have any knowledge about this but now got some knowledge. so keep on sharing such kind of an interesting blogs. dot net training in chennai TIA. hi you are doing a great job here.as am working SAP type project from past years and now i have to deal with PCL. So in PCL calling webAPI is a bit tricky can u res-post the code with below changes.that would be very helpful to me. what if we want to send some token with headers and parameter and what if we want send image bytes to server.how can we do that. Here there is no timeout option available or any kind of exception is not handled ServicePointManager.ServerCertificateValidationCallback is also not Interior Decorators in Chennai Very Interesting information shared than other blogs. Thanks for Sharing and Keep updating us. Xamarin development India It is amazing and wonderful to visit your site.Thanks for sharing this information,this is useful to me... Android Training in Chennai Ios Training in Chennai nice blog has been shared by you.before i read this blog i didn't have any knowledge about this but now i got some knowledge. so keep on sharing such kind of an interesting blog. software testing training in chennai This comment has been removed by the author. Thanks for sharing such an informative blog. You described it nicely. Quality writing. xamarin mobile application development for android Really Good blog post.provided a helpful information.I hope that you will post more updates like this. Digital marketing company in Chennai Very helpful suggestions that help in the optimizing topic,Thanks for your sharing. หนังฝรั่งแอคชั่น Thank you for taking the time and sharing this information with us. It was indeed very helpful and insightful while being straight forward and to the point. mcdonaldsgutscheine.net/ | startlr.com/ | saludlimpia.com/ I get error as "your app has entered a break state" after trying to run this line , var response =await client.GetAsync(""); what am i missing ? I get the same error at the same line of code. Awe-inspiring bequest! Your site is actually attention-grabbing. Personally i think love for this. USA mobile app developers hi there very nice article keep it up App Mobile Development in the App Store with More Feature Thanks for sharing such an informative blog. You described it nicely. Quality writing. Web Development Company in Navi Mumbai If you need your ex-girlfriend or ex-boyfriend to come crawling back to you on their knees (no matter why you broke up) you need to watch this video right away... (VIDEO) Win your ex back with TEXT messages? Interesting and Useful Information Thankyou so much PMP Chennai Gud Nice Information.ms dynamics crm online training Thanks for this post :) Can you show us, how did you create the webservice and where schould i save the api in my server? Thanks!! Very useful!! :D Thanks for sharing this article.... Interior Designers in Chennai Top Interior Decorators in Chennai Thanks for sharing this type of Windows App Development Tutorial... Interior Designers in Chennai Interior Decorators in Chennai Interior Design in Chennai Where to find xamarin developers - Inwizards is a Xamarin Development Company offers you to hire xamarin developers for xamarin mobile app development services. Xamarin services India xamarin development company this is very nice post and thanks for sharing this wonderful information to us.i really would like to appreciate to your attention for us. Base SAS Training in chennai Thank u for your information and get more new update your page For More Details:Find Girls, Search Bride,Check Your Life Partner,SecondMarriageMatrimony. nice blog has been shared by you.before i read this blog i didn't have any knowledge about this but now i got some knowledge. so keep on sharing such kind of an interesting blog. xamarin development company hire xamarin developer india I Here i had read the content you had posted. It is much interesting so please keep update like this.The mobile tracking is a simple work. Thanks for sharing... Mobile Number Tracker | Phone Number Tracker With Location | Mobile Location Tracker | How To Track A Phone Number | Phone Number Tracker | Phone Number Search Thank you very much, congratulations for making a good practice, I based on your project to use it with a weather ApiRest, when I finish it I share it. Glad to hear that :) Hey Subbu, Nice tutorial but i am getting some error like Package 'Micrsoft.Net.Http' was restored using the .Netframework version =v4.6.1 instead of the target framework of '.NetStandard , version=v2.0' Are you sure you made this application in VS2017 as we dont get the option for PCL in VS2017. we get the option for .Net Standard instead of PCL. do you have any option to convert this .NetStandard to .NetFramework i tried but then the xamarin forms are becoming obsolete.
http://bsubramanyamraju.blogspot.com/2017/04/xamarinforms-consuming-rest-webserivce_17.html
CC-MAIN-2018-17
refinedweb
2,612
59.5
Aa codes From PyMOLWiki Just a quick little script to allow you to convert from 1-to-3 letter codes and 3-to-1 letter codes in PyMOL. Copy the code below and drop it into your .pymolrc file. Then, each time you load PyMOL, "one_letter" and "three_letter" will be defined. The Code Simple # one_letter["SER"] will now return "S"'} # three_letter["S"] will now return "SER" three_letter = dict([[v,k] for k,v in one_letter.items()]) three_letter ={'V':'VAL', 'I':'ILE', 'L':'LEU', 'E':'GLU', 'Q':'GLN', \ 'D':'ASP', 'N':'ASN', 'H':'HIS', 'W':'TRP', 'F':'PHE', 'Y':'TYR', \ 'R':'ARG', 'K':'LYS', 'S':'SER', 'T':'THR', 'M':'MET', 'A':'ALA', \ 'G':'GLY', 'P':'PRO', 'C':'CYS'} Simple and Clever Here's another way to accomplish this # The real convenience in there is that you can easily construct any # kind of hash by just adding a matching list, and zipping. aa1 = list("ACDEFGHIKLMNPQRSTVWY") aa3 = "ALA CYS ASP GLU PHE GLY HIS ILE LYS LEU MET ASN PRO GLN ARG SER THR VAL TRP TYR".split() aa123 = dict(zip(aa1,aa3)) aa321 = dict(zip(aa3,aa1)) # Then to extract a sequence, I tend to go for a construction like: sequence = [ aa321[i.resn] for i in cmd.get_model(selection + " and n. ca").atom ] Using BioPython If you have BioPython you can use the following, which includes also many three-letter codes of modified amino acids: from Bio.PDB import to_one_letter_code as one_letter Using PyMOL from pymol.exporting import _resn_to_aa as one_letter Example Usage # we used to have to do the following to get the amino acid name from pymol import stored stored.aa = "" cmd.iterate("myselection", "stored.aa=resn") # now we can just call three_letter[string.split(cmd.get_fastastr("myselection"),'\n')[1]]
http://www.pymolwiki.org/index.php/Aa_codes
CC-MAIN-2014-42
refinedweb
291
61.87
Hi ! I've got a handler chain. In the client application I send a bean as a parameter in the Call.invoke method. Then the request message go through the chain to the final receiver. I would like to know how I can get the bean wich is in the parameter list in a handler. I know I can use SAAJ to find xml fragment in the SOAP message .. But would it be possible to get it in a "Object" form as I use it in the final web-service ? Thx Franck -----Original Message----- From: Justin Schoeman [mailto:justin@expertron.co.za] Sent: vendredi 17 février 2006 11:08 To: axis-user@ws.apache.org Subject: Re: Consuming Axis2 webservice with c# client? The WSDL and XSD files are attached. I had to edit the WSDL from the published version by filling in the soapAction name for the operations that we use. I am not sure of the significance of this field, and why it was left out in the original service, but without it, the generated clients get EPR not found errors. Modifying this wouldn't cause this problem though, would it? The WSDL files come from a standards body, and I am not sure if they were generated from any specific application. Thanks! Justin Anne Thomas Manes wrote: > WSDL? > > On 2/17/06, *Justin Schoeman* <justin@expertron.co.za > <mailto:justin@expertron.co.za>> wrote: > > Further information, we just managed to get the debug messages out, and > got the extended error: > > Unhandled Exception: System.InvalidOperationException: There is an error > in XML document (1, 877). ---> System.InvalidOperationException : The > specified type is abstract: name='DeviceID', > namespace='', at > <clientID xmlns=''>. > > However, if you look at the generated xml, the full field is: > clientID xmlns=" > <>" > > > so the very next attribute sets the explicit type. Surely this is an > acceptable response? > > Thanks, > > Justin > > > Justin Schoeman wrote: > > Hi all, > > > > I seem to remember a discussion on this a while ago, but cannot > seem to > > find it now. > > > > I am trying to use an Axis2 web service (generated from WSDL) from a > > Visual C# client (also generated from the WSDL. Everything works > fine > > until the client starts parsing the response XML, when it gives > an error > > 'There is an error in XML document(1,877)'. Position 877 in the > response > > xml is the first character name of the first element withing the > complex > > return type. The start of the xml is included below. If > anybody has > > any ideas, please let me know! > > > > Thanks! > > > > Justin > > > > XML Response: > > <?xml version=' 1.0' encoding='utf-8'?> > > <soapenv:Envelope > > xmlns: > xmlns: > > <soapenv:Header> > > <wsa:Action > > mlns:wsa=" > <>">ConfirmCustomerRequest</wsa:Action> > > > > <wsa:ReplyTo > > xmlns:wsa=" > "><wsa:Address></wsa:Address></wsa:ReplyTo> > > > > <wsa:From > > > xmlns:<wsa:Address> ></wsa:Address></wsa:From> > > > > <wsa:MessageID > > xmlns:wsa=" > <>">A7E4A85F20AA66B0C4114018114390618</wsa:MessageID> > > > > </soapenv:Header> > > <soapenv:Body> > > <confirmCustomerResp > > > > <clientID xmlns="" > > > > <serverID xmlns="" > > > > <terminalID > > > > <reqMsgID xmlns="" > > > > <respDateTime > > > >2006-02-17T > 14:59:03.910+02:00</respDateTime> > > > > > <confirmCustResult> > > <custVendDetail address="here" contactNo="0123456789" name="Mr JF > > Schoeman" accNo="12345-67890" /></confirmCustResult> > > </confirmCustomerResp> > > </soapenv:Body></soapenv:Envelope> > >
http://mail-archives.apache.org/mod_mbox/axis-java-user/200602.mbox/%3C299BB85A71A9DC48AC939F76B32B734C157945@auberge.seryx.local%3E
CC-MAIN-2018-43
refinedweb
509
57.16
A question on twitter has moved forward doing an example of serialization and deserialization. This is gonna be a pretty quick example and blog post as this is being done so that I can provide a code sample as my answer to the asker! I'm going to use JSON as the example serialization/deserialization. The basic idea of how to deserialize is... not. Seriously, don't deserialize. Why do we need to extract the data into fields in some object? To use them? pfft, we can extract it later, when it's actually required. One of the cool things I've found working with microobejcts is the delayed execution. We don't actually perform any data manipulation (or retrieval in some cases) until and unless that data is needed. Let's use Hacker News as an example. I'm using this as it'll be an excellent comparrison to see how my view on deserialization should happen. Let's compare to what I did... quite a while ago. Doing many things I won't do now. More, doing many things I don't need to do now. Avoiding these practices allows my code to be much simpler and much easier to maintain. My HackerNewsReader was last touched in in April 2017. Writing now it's July 2018. Well over a year ago - so many different practices in place now. Amazing growth for me. I'm not going to do Java, I'm aiming for quick on this, I don't want to fight tools I haven't spun up in a while. :) Deserialization First thing we'll look at is how to deserialize the JSON. This is also an example for data from a database, or the DataBag back from the ORM system - looking at you Entity Framework. I focus on JSON as it's slightly more complex example. ... OK, looking at the HackerNews API... I'm not going to use it. It's lacking some of the 'interrogation' ability that I want to demonstrate. I'm going to create a fake user json... which I thought I had ... in some project... somewhere... OH, I know I did... but I blew it all away when I screwed up the example. Fine - new fancy pants JSON for my simple example { "name":"Quinn Fyzxs", "birthday":"19061209" "contact_info":[ { "type":"phone", "value":"5551234567" } ] } There - simple json. Not sure if it'll serve my purpose, but let's figure it out. I get to cheat a bit and use some of my MicroObjectsCommon classes for interacting with JSON, found in the json namespace as a foundation. I'm stripping it to hell and back though - a lot we don't need quite yet. hmmm.... Nope. Screw it. I'm pulling the whole thing in. I'm going for quick, not "MINIMAL" - so pulling in my MicroObjectsLib and FluentTypes. These are my foundation of microobjects that allows me to develop quickly. Feel free to use them, or build your own. I advocate having the source to always be able to make them exactly what you need. Anyway... Now I have my JsonObjectBookEnd, I can begin. I have the above JSON, what do I want to do with it? The simplest for this example, and why I built custom, is to ask questions about the age. The core of what we do is to use our JsonObjectBookEnd to control the interaction with the RAW DATA. Raw data is a code smell. If you see it, something's probably not as Object Oriented as it can be. Our skeleton of a User class looks like public class User : IUser { private readonly IJsonObjectBookEnd _source; public User(Text json) : this(new JsonObjectBookEnd(json)) { } private User(IJsonObjectBookEnd source) => _source = source; } Let's ask a question about the age of the user... IsMinor? I'm going to shortcut the evolution of the code and jump straight to what I would show - If I had a lot of methods on User for the age, our User class would know a lot about how to answer age questions. Why? The User knows nothing of Age, it just knows the users birthday. Let's keep objects as ignorant as possible. "I know what I want, you know how to do it". Let's keep it like that. I'd normally implement IsMinor, CanVote, CanBuyAlcohol on User then move them into an Age object. Gonna skip the second two and go straight to Age with isMinor. ... I apologize for all the 'await's. These support classes were developed in a a very async environment and continue to be used in such. I haven't created non-async/await versions. Now I have an Age class that looks like public sealed class Age : IAge { private readonly TimeInstant _birthday; public Age(Text birthday) : this(new YearMonthDayTimeInstant(birthday)) { } private Age(TimeInstant birthday) => _birthday = birthday; public Bool IsMinor() { return Bool.False; } } and my User is public sealed class User : IUser { private readonly IJsonObjectBookEnd _source; private static readonly Text BirthdayKey = new TextOf("birthday"); public User(Text json) : this(new JsonObjectBookEnd(json)) { } private User(IJsonObjectBookEnd source) => _source = source; public async Task<IAge> Age() => new Age(await _source.TextValue(BirthdayKey)); } I'm not bothering with the actual date manipulations, that'll come in later examples... mostly because I don't have my Time foundation classes in the repo yet... Depending on how much I want to drive certain things, I can ask more advanced questions. Right now, not gonna push on that. Showing the deserialization practice. We can ask questions about the age. Sweet. What about the other info? We follow the same pattern. I want to do something with the name: I need a Name object, which the User knows how to construct. Remember: I know what I want, you know how to do it. public sealed class User : IUser { private static readonly Text BirthdayKey = new TextOf("birthday"); private static readonly Text NameKey = new TextOf("name"); private readonly IJsonObjectBookEnd _source; public User(Text json) : this(new JsonObjectBookEnd(json)) { } private User(IJsonObjectBookEnd source) => _source = source; public async Task<IAge> Age() => new Age(await _source.TextValue(BirthdayKey)); public async Task<IName> Name() => new Name(await _source.TextValue(NameKey)); } public sealed class Name : IName { private readonly Text _origin; public Name(Text origin) => _origin = origin; } We can now ask questions of the name object... like... uhh... Normally this just gets the "into/writer" technique to display it... That's enough OK. For a quick post, I think we're demonstrating what I do to deserialize - I create objects representing the concepts of the payload. I can then ask these objects questions. I don't need databags of the data. I'm not currently taking advantage of the system and some auto-parsing - but with my practice of avoiding the asymetric marriage to my JSON library, I'm not seeing a negative impact to the code.
https://quinngil.com/2018/09/09/quick-microobjects-and-json/
CC-MAIN-2018-47
refinedweb
1,140
66.54
Details - Type: New Feature - Status: Closed - Priority: Major - Resolution: Won't Fix - Affects Version/s: maxent-3.0.0-sourceforge - Fix Version/s: None - Component/s: Machine Learning - Labels:None Description [As requested, brought over from Sourceforge.] Conceptually, EventStream is just an Iterator<Event>. You would get better interoperability with other Java libraries if EventStream were declared as such. If you didn't care about backwards compatibility, I'd say just get rid of EventStream entirely and use Iterator<Event> everywhere instead. If you care about backwards compatibility, you could at least declare AbstractEventStream as implementing Iterator<Event> - it declares all of hasNext(), next() and remove(). I believe that shouldn't break anything, and should make all the current EventStream implementations into Iterator<Event>s. Why do I want this? Because, when using OpenNLP maxent from Scala, if a RealValueFileEventStream were an Iterator<Event>, I could write: for (event <- stream){ ... } But since it's not, I instead have to wrap it in an Iterator: val events = new Iterator[Event] { def hasNext = stream.hasNext def next = stream.next } for (event <- events) { ... } Or write the while loop version: while (stream.hasNext){ val event = stream.next ... } Activity - All - Work Log - History - Activity - Transitions Ahh, the curse of checked exceptions bites again. I'm not sure how ObjectStream helps - could you explain? Looks like ObjectStream isn't an Iterator either. I see two relatively simple solutions: (1) Give up on checked exceptions and declare the interface with runtime exceptions instead: public class OpenNLPIOException extends java.lang.RuntimeException{...} public interface EventStream extends Iterator<Event>{ public Event next() throws OpenNLPIOException; public boolean hasNext() throws OpenNLPIOException; ... } You still get all errors reported if something goes wrong, and people can still catch and handle those errors if they want to. Of course, Java will no longer force you to handle them at compile time. And you wouldn't be using a standard exception type. (2) Add a method AbstractEventStream.asIterator() that looks something like: public Iterator<Event> asIterator() { return new Iterator<Event>() { public boolean hasNext() { try { return this.hasNext(); catch (IOException e) } public Event next() ... } Here, we're no longer using the standard Java Iterator interface, but this would allow you to keep EventStream as it is, but still make it easy for users to get an Iterator from most EventStreams if they wanted one. ObjectStream has a read method which can retrieve an object from the stream. It also declares an exception which can be thrown in case of an error. In our experience over at the tools project it turned out that such a read method style interface seems to be easier to use and implement in our cases. When we made the decision we had a long discussion if we prefer an iterator like interface a stream like interface and we decided to take the stream like interface. It is also a question of taste of course. Since both interface styles can do the same. Lets say we want to read every line in a file. The line should be represented as a string. With the stream interface its something like this. In the read method we read until the end of the first line, and return it. It is repeated if it is called again, when the underlying stream is exhausted null is returned. User expects read method to maybe block shortly if underlying system is slow. Iterator like interface. User calls hasNext() to determine if there is one more item. Implementation must now read the first line inside hasNext() and then return the result from next() which seems less clear und less intuitive to me. User does not really know which of the two methods will block. InputStreams/Readers must be adapted to this interface, but are naturally easy to use with the stream style interface. In the end we had the feeling that stream style interface is a bit simpler and more intuitive to use. I guess I don't understand what's unintuitive about the Iterator interface - your description of what to do for hasNext() and next() sound perfectly sensible to me. I can believe that the implementation might be more difficult for Iterator than ObjectStream, but lets put the implementation aside for a bit (either one is definitely possible) and talk just about how they would be used. I have three arguments why you should be using Iterator instead of (or at least in addition to) ObjectStream. (1) Iterator is a Java standard mechanism for iterating over a collection. People who have learned Iterators from other Java code can quickly understand your code because they already know how Iterators work. ObjectStream is a custom interface to OpenNLP and thus few people will have seen it before, thus it will take longer to learn. (2) Using an object with ObjectStream is no easier than using an object with Iterator: // using ObjectStream Event event = stream.read(); while (event != null) // using Iterator Event event; while (iterator.hasNext()) It's basically the same amount of code either way. A little more using ObjectStream, but it could be a little less if you use an assignment in the while loop condition. An added benefit of using the Java standard Iterator interface is that you get much better interop with other JVM languages. For example, if you use the Iterator interface, you can write the following Scala code: for (event <- iterator){ ... } If your class implements ObjectStream instead of Iterator, then you have to go back to the while loop. (3) If you use Iterator, then you get Java collections and the enhanced for loop for free. For example, you could easily declare a RealValueFileEventIterable like this: public class RealValueFileEventIterable implements Iterable<Event> { private File file; public RealValueFileEventIterable(File file) public Iterator<Event> iterator(){ return new RealValueFileEventStream(this.file); } } That's all you need, and then even in Java, people could write: for (Event event : new RealValueFileEventIterable(...)){ ... } I guess may main argument is that using Iterator makes your code easier to use for Java programmers who are familiar with Java standard Interfaces. I think making OpenNLP easy to use for general Java programmers is a noble goal. Here's two discussions of the issues around Iterator and checked exceptions: In both cases, the suggestion is basically the same as my OpenNLPIOException solution - wrap the checked exceptions with well named RuntimeExceptions and declare those as being thrown. We still need to take the exception handling into account. Using a plain Iterator the way you described it would mean not to use checked exceptions (like you already agreed). In the ObjectStream the exception handling is done at the end of the stream and does not look ugly at all, so why not use checked exceptions ? The whole point I think is that our users are usually implementing the ObjectStream, but not using it to actually read the data them self. In OpenNLP the data indexers use the stream to pull in the data they need. So our users usually do not do something like you did in (3). In the end I still believe that it is easier to implement these streams when the underlying data source is an InputStream or Reader. And ObjectStream has also a reset method which is used for example by the Parser to read the data in again, we would need this method on an Iterator like interface too. And the remove method does not really makes sense, and would not be used. > We still need to take the exception handling into account Ok, rewritten with exception handling: // using ObjectStream try { Event event = stream.read(); while (event != null) } catch (IOException e){ ... } // using Iterator Event event; try { while (iterator.hasNext()) { event = iterator.next(); ... } } catch (OpenNLPIOException e) { ... } Looks about the same to me. But maybe you had something else in mind? If you could write the code that you're concerned about, that would help me understand better. > our users are usually implementing the ObjectStream, > but not using it to actually read the data them self Perhaps it would help to see the use case where I wanted this? Here's the code: val testStream = new RealValueFileEventStream(testPath) val events = // wrapper for testStream that makes it into a real Iterator<Event> for (event <- events) Basically, I take each event from a test file, and ask the model to predict an outcome. Is this an atypical use case? > I still believe that it is easier to implement these streams > when the underlying data source is an InputStream or Reader. Yes, I agreed to this earlier. But as a user, I care less about how the streams are implemented, and more about the API I have to use when calling them. Or are you suggesting that most users are going to be implementing custom ObjectStreams? If that's the case, you could easily define ObjectStream as an abstract class that implements Iterator<T> e.g.: public abstract class ObjectStream<T> implements Iterator<T> { public abstract T read(); ... private T next = this.read(); public boolean hasNext() public T next(){ T result = this.next; this.next = this.read(); return result; } } That way, if you think it's easier to implement the ObjectStream API, you just subclass from ObjectStream and you get an Iterator for free. > And ObjectStream has also a reset method ... > we would need this method on an Iterator like interface too Well, the standard Java solution to this would be to declare it as an Iterable<T> instead. Then code like: Event event = stream.read(); while (event != null) stream.reset() while (event != null) { ... event = stream.read(); } would instead be written as: for (Event event : iterable){ ... } for (Event event: iterable) { ... } Basically, anything that can be reset is an Iterable, and any single pass through it is an Iterator. I also agree that remove() isn't necessary but the Iterable interface declares it as optional, so I don't think that's a problem. Just throw an UnsupportedOperationException like it says to in the Iterator javadocs. (Also, note that your AbstractEventStream already has a remove() method that does exactly this.) Let us continue the discussion on the dev list, here is my answer: Ok, here's a documentation patch summarizing the two main objections to Iterator. The discussion continued on the linked mailing list thread and resulted in a documentation of the design decision in the ObjectStream interface. The design will not be changed. One of the big issues we have currently with the EventStream is that it cannot throw an exception when something goes wrong. Since it is a requirement to read the data from disk, there is always a potential for failures. For me it seems not possible to implement this nicely with an Iterator. In the opennlp tools project we introduced the ObjectStream to circumvent these issues.
https://issues.apache.org/jira/browse/OPENNLP-99?attachmentSortBy=dateTime
CC-MAIN-2016-40
refinedweb
1,790
55.95
WPF is built on top of a very powerful and capable graphics framework. Much of the core graphics in WPF supports smooth scaling and with proper UI design, an application can be made more or less resolution independent. But there is one area where this doesn't work - and that is when working with bitmap images. When scaling bitmap images, they become blurry (when scaling up) or blurry and crowded (when scaling down). It would be nice if the image control could select the most suitable image from a list of images depending on its current rendering size. Like in this picture below: Fortunately, this can be easily added on top of the existing Image control. Image When WPF is told to render a bitmap image, regardless of whether it comes from the hard drive or from a network stream, it will simply load the first (or only) frame within the image and render it. This means that even if you have, say an .ico or .tiff file containing the same image in three different resolutions (24x24, 64x64 and 128x128) and put it on an Image control and then set its Height and Width to 128x128 there is no guarantee that WPF will pick the 128x128 image. And even if it did by chance pick the correct one, if you resize your image control to make it smaller, it would just scale down the larger image and you'd end up with a blurry picture. Height Width What we want is a control that loads all the available frames of the image and then, at runtime, picks the correct one to render based on its current size. Image loading in WPF is abstracted away into something called an ImageSource. This is an abstract base class for anything that can provide image data to be rendered by the UI. The most common way to render an image in WPF is to create an Image object in XAML and specify its Source to point at the file to load. WPF will then create the appropriate ImageSource implementation to load the image. ImageSource Source <Image Source="SomeFile.png" Width="128" Height="128" /> The framework itself comes with a dozen implementations of ImageSource, some of which can be seen in figure 1 above. Each of these deal with different types of image data, ranging from simple bitmap data from file to runtime rendering of UI Elements. There are also ImageSources that can render Direct3D surfaces or WPF vector drawings. Multi Frame Images are images that contains more than one image. The most common example are windows icon files (*.ico) which typically contain the same image in multiple sizes and qualities. Other examples of file formats that can contain multiple frames are TIFF and GIF. Although in the case of GIF files, it's mostly used to create animations. Each frame has its own set of metadata, which effectively means that each frame can have different resolution and different pixel depth (bits per pixel, or bpp). You could for example have a file called foo.ico containing the following frames: 0: 16x16 8bpp 1: 16x16 16bpp 2: 16x16 32bpp 3: 32x32 8bpp 4: 32x32 16bpp 5: 32x32 32bpp 6: 128x128 32bpp Below is a screenshot of the image that is included in the sample project when viewed in an editor that understands multiple frames: We start by creating a class called MultiSizeImage that inherits from Image. We also setup an event handler to get called whenever the Source property is set or modified. public class MultiSizeImage : Image { static MultiSizeImage() { // Tell WPF to inform us whenever the Source dependency property is changed SourceProperty.OverrideMetadata(typeof(MultiSizeImage), new FrameworkPropertyMetadata(HandleSourceChanged)); } private static void HandleSourceChanged( DependencyObject sender, DependencyPropertyChangedEventArgs e) { MultiSizeImage img = (MultiSizeImage)sender; // Tell the instance to load all frames in the new image source //img.UpdateAvailableFrames() } // ... } So now we have the initial version of our control that can be used as a drop in replacement for the standard WPF Image control. At this point however, it would make little sense since it doesn't add anything beyond what the standard control does. Let's fix that. To load the actual frames from an image we use the BitmapDecoder class. An instance of this class is available on all BitmapFrame objects, which incidentally, is the object that normally gets created by WPF if you specify your source from XAML. If for some reason the current source is some other type of ImageSource, we'll just skip loading frames and always render the actual source. When that happens, our control will behave just like a normal WPF Image control. BitmapDecoder BitmapFrame From the decoder's list of frames, we'll run a LINQ query to sort them by resolution and quality. To make it easier to handle the different sizes, we'll project the width and hight into a single integer so that we can easily compare two sizes. Otherwise we'd run into the question of whether e.g. 100x100 is larger or smaller than 50x200. In the name of simplicity, we'll assume all frames in the image have the same or similar aspect ratio. private void UpdateAvailableFrames() { _availableFrames.Clear(); var bmFrame = this.Source as BitmapFrame; // We may have some other type of ImageSource // that doesn't have a notion of frames or decoder if (bmFrame == null) return; var decoder = bmFrame.Decoder; if (decoder != null && decoder.Frames != null) { // This will result in an IEnumerable<bitmapframe> // with one frame per size, ordered by their size var framesInSizeOrder = from frame in decoder.Frames let frameSize = frame.PixelHeight * frame.PixelWidth group frame by frameSize into g orderby g.Key select g.OrderByDescending(GetFramePixelDepth).First(); _availableFrames.AddRange(framesInSizeOrder); } } What the above code does is that it sorts all the frames by their size (and size is defined as width * height) and then picks the highest quality frame of that size. Remember from the description above that there can be multiple frames with the same size, and if that happens we only want the one with the highest quality - that is, the one with the highest bits per pixels. For more details about LINQ and it's syntax, see MSDN Introduction to LINQ. width * height The end result is that the variable _availableFrames will contain a list of BitmapSource instances that are sorted by size, from small to large. And there will only be one frame of each size. _availableFrames BitmapSource Now that we have a list of frames already sorted, its trivial to pick the one to draw. We'll override the OnRender method which will get called by WPF whenever it determines that the image needs to be drawn. Since both the Width and Height dependency properties are defined to affect the rendering pipeline of the element, we know that OnRender will get called whenever our size changes. We can therefore put our frame picking algorithm inside the OnRender method. OnRender protected override void OnRender(DrawingContext dc) { if (Source == null) { base.OnRender(dc); return; } ImageSource src = Source; var ourSize = RenderSize.Width * RenderSize.Height; foreach (var frame in _availableFrames) { src = frame; if (frame.PixelWidth * frame.PixelHeight >= ourSize) // We found the correct frame break; } dc.DrawImage(src, new Rect(new Point(0, 0), RenderSize)); } The above code will loop through the sorted list of frames and look for the first frame that has the same or larger size than the current rendering size of the Image control. It will then draw that image using the standard DrawImage method. In the case where we don't have any frames at all, for example if the ImageSource is a drawing, or if it's an image without multiple frames, OnRender will just render the original Source object. The sample contains the MultiSizeImage control which you can copy to your own projects. It's only a single file (MultiSizeImage.cs) - no XAML files is needed. You can then use it from XAML like a normal Image control: MultiSizeImage MultiSizeImage.cs <local:MultiSizeImage Source="SomeFile.png" .... /> To test the control, you can load the included project into Visual Studio 2010 and hit F5. Ultimately, I think something like this should be included in the framework. The Image control could easily support this functionality out of the box - either by automatically picking the appropriate frame, or by some property that lets the user specify which size and/or frame to use. This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL) UpdateAvailableFrames General News Suggestion Question Bug Answer Joke Rant Admin Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages.
http://www.codeproject.com/Articles/410625/Fully-Scalable-WPF-Image-Control?PageFlow=FixedWidth
CC-MAIN-2014-49
refinedweb
1,443
61.36
Announcing F# 4.7 Phillip We’re excited to announce general availability of F# 4.7 in conjunction with the .NET Core 3.0 release! In this post, I’ll show you how to get started, explain everything in F# 4.7 and give you a sneak peek at what we’re doing for the next version of F#. F# 4.7 is another incremental release of F# with a focus on infrastructural changes to the compiler and core library and some relaxations on previously onerous syntax requirements. F# 4 get an appropriate .NET Core installed by default. Once you have installed either .NET Core or Visual Studio 2019, you can use F# 4.7 with Visual Studio, Visual Studio for Mac, or Visual Studio Code with Ionide. FSharp.Core now targets .NET Standard 2.0 Starting with FSharp.Core 4.7.0 and F# 4.7, we’re officially dropping support for .NET Standard 1.6. Now that FSharp.Core targets .NET Standard 2.0, you can enjoy a few new goodies on .NET Core: - Simpler dependencies, especially if using a tool like Paket FromConverterand ToConverterstatic methods on FSharpFunc<'T, 'TResult> - Implicit conversions between FSharpFunc<'T, 'TResult>and Converter<'T, 'TResult> - The FuncConvert.ToFSharpFunc<'T>method - Access to the MatchFailureExceptiontype - The WebExtensionsnamespace for working with older web APIs in an F#-friendly way Additionally, the FSharp.Core API surface area has expanded to better support parallel and sequential asynchronous computations: - Async.Parallel has an optional maxDegreesOfParallelismparameter so you can tune the degree of parallelism used Async.Sequentialto allow sequential processing of async computations Thanks to Fraser Waters for contributing the new FSharp.Core additions. Support for LangVersion F# 4.7 introduces the ability to tune your effective language version with your compiler. We’re incredibly excited about this feature, because it allows us to deliver preview features alongside released features for any given compiler release. If you’re interested in trying out preview features and giving feedback early, it’s very easy to get started. Just set the following property in your project file: Once you save the project file, the compiler will now give you access to all preview features that shipped with that compiler. When using F# in preview versions of .NET Core and/or Visual Studio, the language version will be set to preview by default. The lowest-supported language version is F# 4.6. We do not plan on retrofitting language version support for F# 4.5 and lower. Implicit yields In the spirit of making things easier, F# 4.7 introduces implicit yields for lists, arrays, sequences, and any Computation Expression that defines the Yield, Combine, Delay, and Zero members. A longstanding issue with learning F# has been the need to always specify the yield keyword in F# sequence expressions. Now you can delete all the yield keywords, since they’re implicit! This makes F# sequence expressions align with list and array expressions. But that’s not all! Prior to F# 4.7, even with lists and arrays, if you wanted to conditionally generate values it was a requirement to specify yield everywhere, even if you only had one place you did it. All the yield keywords can now be removed: This feature was inspired by Fable programs that use F# list expressions as HTML templating DSLs. Of course, if you prefer writing yield, you still can, with the same rules as before. Syntax relaxations There are two major relaxations for F# syntax added in F# 4.7. Both should make F# code easier to write, especially for beginners. No more required double underscore Prior to F# 4.7, if you wanted to specify member declarations and you didn’t want to name the ‘this’ identifier on F# objects, you had to use a double underscore. Now, you can only specify a single underscore, which previous language versions would reject: This same rule has been relaxed for C-style for loops where the indexer is not meaningful: Thanks to Gustavo Leon for contributing this feature. Indentation relaxations for parameters passed to constructors and static methods Another annoyance with previous F# compilers was a requirement to indent parameters to constructors or static methods. This was due to an old rule in the compiler where the first parameter determined the level of indentation required for the rest of the parameters. This is now relaxed: Preview features As I mentioned previously, F# 4.7 introduces the concept of an effective language version for the compiler. In the spirit of shipping previews as early as possible, we’ve included two new preview features: nameof and opening of static classes. Nameof The nameof function has been of the most-requested feature to add to F#. It’s very convenient when you want to log the names of things (like parameters or classes) and have the name change as you’d expect if you refactor those symbols to use different names over time. We’re still not 100% resolute on the design of it, but the core functionality is good enough that we’d love people to try it out and give us feedback. Here’s a little taste of what you can do with it: You can also contribute to its design by proposing changes to the corresponding RFC. Open static classes Much like nameof, opening of static classes has been requested a lot. Not only does it allow better usage of C# APIs that assume the ability to open static classes, it can also improve F# DSLs. However, we’re also not 100% resolute on its overall design. Here’s a little taste of what it’s like: You can also contribute to its design by proposing changes to the corresponding RFC. F# Interactive for .NET Core Preview Starting with F# 4.7 and .NET Core 3, you can now use F# interactive (FSI) from .NET Core! Just open a command line and type dotnet fsi to get started. The FSI experience for .NET Core is now a very, very stable preview.. This package management support is still only available in nightly builds of the compiler. It will become available for general usage in forthcoming support for Jupyter Notebooks via the .NET Kernel and in the first preview of .NET 5. Updates to F# tools for Visual Studio The Visual Studio 2019 update 16.3 release corresponds with F# 4.7 and .NET Core 3. In this release, we’ve made tooltips a bit nicer and fixed some longstanding issues in the compiler and tools that affect your experience in Visual Studio. We also spent a lot of time doing more infrastructural work to make the F# integration with Roslyn significantly more stable than it was in the past. Record definition tooltips use a more canonical formatting: Anonymous Records also do the same: And record value output in FSI also uses a more canonical form: Properties with explicit get/set modifiers will also reflect those modifiers in tooltips: Looking back at the past year or so of F# evolution The past year (plus a few months) has seen a lot of additions to the F# language and tools. We’ve shipped: - F# 4.5, F# 4.6, and now F# 4.7 with 14 new language features between the three of them - 6 updates to the Visual Studio tools for F# - Massive performance improvements to F# tooling for larger codebases - 2 preview features for the next version of F# - A revamped versioning scheme for FSharp.Core - A new home for F# OSS development under the .NET Foundation It’s been quite a rush, and while the sheer number of updates and fundamental shifts to F#, we’re planning on ramping up these efforts! Looking head towards F# 5 and .NET 5 As .NET undergoes a monumental shift towards .NET 5, F# will also feature a bit of a shift.. Some of the concrete things we’ll focus on is making F# a first-class language for Jupyter Notebooks via the .NET Kernel. We’ll also emphasize language features that make it easier to work with collections of data. I like to think of these things as being “in addition to” everything F# is focused on so far: first-class .NET support, excellent tooling, wonderful features that make general purpose F# programming great, and now an influx of work aligned with “analytical” programming. We’re incredibly excited about the work ahead of us, and we hope you’ll also contribute in the way you see best. Cheers, and happy hacking! Just tried dotnet fsi in .NET SDK 3.0 on my Mac with OSX 10.14.6. Immediately stepped onto a presumably fixed issue – (“dotnet fsi complaining about missing Mono.Posix assembly”). Lovely! Thank you and the community for all the great work. Awesome! Maybe for .net 5, target Python devs? Whatever they do with Python, make it simpler/easier/more intuitive to do in F#, with greater tooling support. How about Python interop? Hey Gulshanur, Python interop isn’t on the immediate roadmap (such a thing would likely be a .NET runtime concept, not a language one). But we are focusing on some of the things that make Python nice for analytical work and using that as an influence for work in F#. If you use pythonnet, the docs show how to use it with c# dynamic keyword. In f# you can use the FSharp.Interop.Dynamic library. “` open Python.Runtime open FSharp.Interop.Dynamic open FSharp.Interop.Dynamic.Operators do use __ = Py.GIL() let np = Py.Import(“math”) np?cos(np?pi ?*? 2) |> printfn “%O” let sin = np?sin sin 5 |> printfn “%O” np?cos(5) ?+? sin(5) |> printfn “%O” “` Can you expand on this sentence? We’ll also emphasize language features that make it easier to work with collections of data. Where can I read language proposals, for example. Thanks! Hey Daniel, You can find all suggestions here: And all RFCs here: Something that’s currently in-progress is making slices more consistent across built-in collection types and supporting “from the end” slicing. Will there be support for WPF, WinForms and ASP.NET Core web applications? Hey Eugue, There are no current plans for templates or designer support for WPF or WinForms. But you can certainly call the APIs in F# code. F# is already fully supported for ASP.NET Core, though we don’t surface templates with Razor files because those take a dependency on being in a C# project right now. I’m coming back to F# after being away for a few years. I took a refresher on some of the fundamentals and came across one of those nasty F#’ish issues that makes me crazy. I’m trying to chart a line and can’t seem to find any Charting tools compatible with 3.0. Am I too early to the game to try an do some plotting? Or (more likely) am I not being resourceful enough in finding a compatible library? Any advice would be so helpful. I will give you my first and second born just to show my gratitude. 🙂 Hey James You could do worse than try XPlot (), which has both Plotly and GoogleCharts APIs. Both versions support NET Standard 2.0 as far as I’m aware so you should be good to go! Looking ahead towards F# 5 and .NET 5 At the Xilinx Developer Forum today MSFT announced a new family of Azure VMs that will include 1-4 Xilinx Alveo U250 FPGAs. Xilinx also announced Vitis, a comprehensive open development framework that is designed for software developers to create, port, and consume hardware-accelerated logic. My understanding is that most of Azure’s FPGA nodes use Intel (Altera) FPGA hardware. Azure will no doubt also support Intel’s forthcoming Agilex family of Xeon CPU/FPGA semiconductors with on-die integration. Nevertheless, Xilinx is the major FPGA hardware company. Lombiq’s Hastlayer effort (which even implements posits!) is interesting (and supports Xilinx FPGAs). But a more substantial effort to enable .NET 5 to consume Xilinx FPGA functionality through Vitis would be very welcome! Woo! #r nuget! Can someone please explain me how to get this thing working on Ubuntu? I installed dotnetcore 3.0 as indicated but fsharpc still tells me “F# Compiler for F# 4.0 (Open Source Edition)” (most likely from a previous installation I had done with “sudo apt install fsharp”). The above text refers that after installing dotnetcore 3.0, F# 4.7 will be available on Visual Studio. But I don’t have Visual Studio and I’m not even remotely interested in installing Visual Studio — in fact, I don’t even understand why you’re bundling both things together. Wasn’t the point of the new move of Microsoft to decouple both products? It crossed my mind I could go to the “Releases” page of the GitHub’s F# project but alas, it’s again only talking about “Visual Studio” releases. What’s this? I’m confused.. Hello Jorge, fsharpcis the name of the F# compiler packaged in Mono, not .NET Core. If you use dotnet buildit will call the .NET Core compiler for F#, which is what you should use. This is not working on Visual Studio for mac. The implicit yields result in ‘ignored’ warnings. Switched to the preview branch to update VS, still no luck. When trying VS code, I can use the new yield syntax, but then I get ‘Unable to find the file ‘System.ComponentModel.Composition’. So, again, with F# on Mac no luck (never seemed to work in the past either). Hello, VS for Mac does not support F# 4.7 yet. Can you share a repro for the error you are seeing? The new syntax does not depend on nor does any part of the Ionide extension for F#. I suspect you are using something that depends on a version of MEF that doesn’t work with .NET Core. Have you tried creating a new .NET Core project and using VSCode for that? Thanks for taking the time to reply! I think implicit yields are great, so I was eager to use them. The libraries the problem occurs are: They both use only these dependencies: nuget MathNet.Numerics.FSharp nuget FSharp.Formatting nuget FSharp.Formatting.CommandTool nuget Unquote nuget Expecto nuget Expecto.BenchmarkDotNet nuget Expecto.FsCheck nuget Expecto.Hopac I can also not imagine which of the libs require MEF? Are there any plans to add language support for high kinded types, similar to Haskell and Scala? This may require a CLR change. You can track the feature here (though it’s a proposal of C#):
https://devblogs.microsoft.com/dotnet/announcing-f-4-7/
CC-MAIN-2020-16
refinedweb
2,445
66.94
SYSTRACE(9) BSD Kernel Manual SYSTRACE(9) systrace_redirect, systrace_fork, systrace_exit - enforce policies for system calls #include <dev/systrace.h> int systrace_redirect(int code, struct proc *p, void *args, register_t *retval); void systrace_fork(struct proc *oldproc, struct proc *p); void systrace_exit(struct proc *p); These functions are used to enforce policy on the system calls as described in systrace(1). systrace_redirect() should be used to perform a system call number code with arguments args for the process p. The result is then put into the retval pointer. A typical code sequence would be: #include "systrace.h" ... #if NSYSTRACE > 0 if (ISSET(p->p_flag, P_SYSTRACE)) error = systrace_redirect(code, p, args, rval); else #endif error = (*callp->sy_call)(p, args, rval); systrace_fork() is called from the fork1(9) function to inherit policy for the child process. systrace_exit() is called during the death cycle of the process to detach the policy from the exiting process. A subsystem for enforcing system call policies is implemented in sys/dev/systrace.c. systrace(1), systrace(4), syscall(9) The systrace_redirect section manual page appeared in OpenBSD 3.4. MirOS BSD #10-current July 21,.
http://www.mirbsd.org/htman/sparc/man9/systrace_fork.htm
CC-MAIN-2013-48
refinedweb
186
54.93
August 15, 2009 - Okay, I lied. Heh. This isn't really an update. More of an announcement and a chance to touch base and let you know I'm still working on the site. See my first website (The one that this site started out on as just a mini-shrine.) just celebrated it's 10th anniversary. So please check out the new artwork I did for the Asylum Network! In Dueling Hearts news, I've got a lot of new stuff on the way . . . I just haven't gotten any of it ready yet. *grins sheepishly* Be on the lookout for updated anime & manga lists, new fanfiction from yours truly, plus a newly re-edited Of Dragonballs and Duel Monsters, some new book covers, more doujin in the reference section, updated links and maybe some new wallpaper! Urgh . . . so . . much . . . work . . to do . . *collapses* April 18, 2009 - All right, all right. I know. I'm waaaay later than promised with an update. I've been awful busy, plus I was prepping the new graphics for the site. You like? Oh, I finally got confirmation on the final Season of GX, it was NOT aired in the United States. 4kids chose to start 5Ds instead of the final season probably due to how dark it was in comparison to previous seasons. This of course sucks for the people who just watch the dubs, but for the fansub watcher's like me, it's no problem. Check it out! We've got a new affiliate! ^__^ I was hugely flattered when I saw that they wanted to affiliate with my site. I've never had a business want to affiliate with me before, host their ads sure, but never just affiliate. *grins hugely* In other news: there's lots of new stuff and as you can see, there are several new sections coming! There's even a new sister site in process, so keep your eyes open for Asylum Annex! I'm spreading my insanity out in new directions! Yeah! November 06, 2008 - Sorry guys, not really an update. I wanted to get the Halloween stuff down before I buried my self in my NaNoWriMo project. Ah yes, hard on the heels of completing my story for the GTAC challenge at Little Dragon, I roll right into an even more insane project: write a 50,000 word novel by November 30 . . . I AM a complete nutcase. @_@ Unfortunately this means that you won't be seeing anything new here until December. But with luck and a little time maybe I'll have a nice Christmas or New Years theme up. Or maybe I'll just do something for the winter. Hmmm. . . decisions, decisions. Anyway, wish me luck! This is going to be one of my most ambitious projects in quite a long time! >_< October 16, 2008 - Happy Halloween to everyone and Happy 5th birthday to Dueling Hearts! Wow. It's hard to believe I've been working on this site for so long. @_@ I've made some special graphics for the holiday this year. Nice ne? The beautiful and creepy shots of Yugi I used come from the Yu-Gi-Oh R manga. (I'm still hoping for a U.S. release.) The splash picture, site banner, and the two avatars were all created by me. ^_^v New stuff! There's a new sister site and a potential affiliate. The 5Ds Japan list has been updated and there's a new section for 5Ds U.S.! The Gx manga list has been updated too. Moved and updated the DVD covers: We've got pics of the US season box sets and more of the import DVD covers. Made a little update to the books, there's new wallpaper, and more full-length tracks in the music section! New fanart from me and Misasi. And finally, my doujinshi collection has expanded yet again. ^_^; August 08, 2008 - Here we are with a lovely update and icons that kinda match! At least the colors do anyway. Both of these were made by Hales. Sorry about the long delay in updates, but I've been hugely busy these past few months. Believe me when I say that it doesn't have to be your wedding to suck away all your time and energy. But my baby sister is married and her husband has left for boot camp, so life is returning to semi-normality. Or at least as normal as things ever get in my home. Look we finally got some of the bios done! *does happy dance* The entire section for the OVA is up! Man was that an adventure. @_@ As you can see the 5Ds section is up and the Gx summaries have been updated. There's new wallpaper and music, more cover scans in several sections, some new fanart from yours truly, and the adopted section has been updated. Plus there's two more new sections up, the episode downloads and the doujinshi guide! ^__^ March 31, 2008 - Back with our first update for 2008! First off, we've got three new affiliates! Whoo hoo! There are now over 300 Yami Yugi pics! o_O We've also got a crap-ton of new cover scans. *grins* Guess who's book collection got a boost. *whistles innocently* Now for those of you wondering what the hell Yu-Gi-Oh 5Ds is, it's the new series that will be replacing Yu-Gi-Oh Gx, which ended with episode 180 at the end of this month in Japan. For more info on the new series, go to Janime. Kaiba's gallery is finally back with a lot of new goodies. There are roughly 150 pics in there, 50 or so are brand new to the site. Also, Yami's gallery has been updated and there are now galleries for Asuka and Manjoume, as well as major updates to Sho & Judai's image galleries. Overhauled the cover galleries, so not only is their new stuff, they look completely different too. There's new music in the downloads, got the album-cuts for Gx's season 4 opening and ending. And I finally got the episode list for Capsule Monsters. December 24, 2007 - Merry Christmas everyone! We're back again and just in time for the holidays! ^_^ The manga list for Yu-Gi-Oh is finally finished! Yes, the whole series, by volume is now listed. R may be finished too, but I'm not sure if I have the last two volumes lined up right, so that may change. There's more DVD covers and some great V-Jump covers too. I've also updated both game lists and will update the covers next time. There's new fanfiction from yours truly, if you're a fan of YxYY yaoi, then you'll love my latest creation. ^_~* You'll also find new official wallpaper and icons made by me! BTW: The two icons you see here were created by me from actual Yu-Gi-Oh manga. The images were clipped from a Christmas short done by the master himself: Kazuki Takahashi. August 11, 2007 - Wow it's been a while since I've updated. I've had a lot on my plate these last few months. Besides surviving work and the holidays, I've been prepping for the arrival of my niece (who was born on Feb 13), and preparing for my graduation from college this year. (Which I successfully completed May 15) There was a hell of a work involved and I'm also interviewing for a job to start my new career. To make up for the lack of updating, we've got a pretty big one this time around. On to the new stuff! I've finally completed the Episode lists for the both the Japanese and English versions of YGO. I'm still working on the English Gx, but the Japanese list has been updated. All the manga lists have been updated, as have the DVD galleries. There's new Jump covers, game lists, game covers, a bunch of new wallpapers, new banners, new English music, new fanart and fanfiction, new and updated image galleries, and a whole bunch of other new stuff. So look around and tell me what you think! November 30, 2006 - Well, as you've already noticed, I'm trying out a new layout. You see, recently I came to realize that the old layout was a bit too complicated to browse. I mean, it was nicely organized and all, but it just took too many clicks to get people where they wanted to be. So I've dumped the content page and several of the divider pages. All the relevant link pages are over here. *points to the left* I'm also playing with avatars a bit. You see, I've collected so many of the darn things I'll never use them all in my blog and LJ. So yami and I will be using them here in the new updates. Credit will be given to the creators IF I know who made them. If you see one uncredited and know who made it, please let me know so I can fix it. As far as new stuff goes . . . well, there's just waaaaay too much new stuff to list it all here. Since almost every section got updated, it'll just be easier to list where I didn't update. Bios, fanfiction, and cosplay are the only sections WITHOUT new stuff. October 25, 2006 - Happy Halloween, minna-san! *bounces off* *shakes head* She's had waaay too much sugar. Anyway, there are more scans in the cover galleries, and the manga lists have been updated. We've got the Gx manga list up too! ^_^ ~_~ New fanart and fanfiction by me along with the re-edited version of ODDM. August 26, 2006 - Mini-update. Just updated the fanlistings on the main page. August 16, 2006 - See told ya so. Yami! *shrugs* *sighs* ~_~ Anyway, in Anime and Manga the Japanese episode list for seasons 1 & 2 of Gx are up, the manga lists for YGO and YGO-R have been updated, and there are new Japanese Jump, and Gx covers. Re-made and replace a bunch of the 88x31 icons- and made a bunch of blinkies too! Next up, the fanlistings! August 15, 2006 - Hey, check it out! We've got two new affiliates! ^_^ There's also new wallpaper in Miscelleaneous, a new author in fanfiction, a new artist in fanart, plus new art from Mistress Paco, and new art and fanfiction from me! Don't forget we've updated the GX episode list, and both manga lists! And Look for more updates over the next couple of days, we've got lots more where this stuff came from! ^_^ April 27, 2006 - As promised, we've made a massive update to the fanart section. Three new artists, plus new art from Crazy-Dog, Mistress Paco, and me. Crazy-Dog also gave us a couple new icons, which you'll find down in "Misc". There are new link banners in several sizes too. Now we've got a few Gx ones, as well as a few featuring our own artwork. There've been some new additions to the Image gallery. Moved mutliple galleries to the main site, moved and updated Ryou, Yami Bakura, Group, Malik, Isis, Yugi & Anzu, BEWD, & Black Magician galleries. Both the Bakurae's galleries and the Group gallery got an overhaul, plus we've finally put up the Yami & Kaiba section. There's also a ton of new stuff in "Anime and Manga". We've put up new game covers and lots of dvd covers, as well as updated the U.S. episode list and added the one for Gx. Next time we'll be updating more of the image galleries and the fanfiction. April 09, 2006 - Little update this round. Updated the fanlists and moved the guestbook. We're trying to stop all the spam its been getting. >_< If this doesn't work, we'll have to remove it. We've also re-coded the music section(to make it look nicer) and replaced the TV cut of "Overlap" with the album version. ^_^ We'll be updating the fanart section and possibly the image galleries next time. Which should be sometime this week, if all goes well. And we don't rip our stitches again. o_O Jan 11, 2006 - Happy belated birthday! *looks over at hit counter* Damn! You guys really do like us! ^___^ ^___^ Well, in recognition of your affection and in celebration of our birthday . . . We have put together one hell of a site update. There's new stuff everywhere! Updated the cover galleries and the English episode guide. There's also new fanart from yours truly and Mistress Paco, new fanfiction from myself and new authors Galandolel and Lee Pascoe! Four new official wallpapers- Which we've moved off of Fortunecity and onto our server. There's a bunch of new moving gifs too. You may have also noticed the induction of the Gx characters into our site. We've been watching the show on Cartoon Network and have been enjoying it quite a bit. ^_^ True, it's not as good as the original, but it's still fun to watch. And Sho is absolutely adorable!! *cuddles the shy bishonen* Dec 03, 2005 - Partial site update. Image galleries are back with some new pics of Yugi and Yami Yugi, as well as the addition of some of the Gx crew to the group. Yuki & Sho have joined our little family! We'll have more to add later today or tomorrow. ^_^v Sep 29, 2005 - Micro-mini update. Fixed the Janime links, now they'll go to her site again. It's not her or our fault. Yea, her bastard web-host sold her URL just so he could make more money. Seriously, that's what he told her. Greedy bastard. He only did it to her because of how popular her URL is. So everybody help out and change to the new URL, ASAP! Sep 10, 2005 - Put up quiz, cosplay, and sitely/FAQ sections in Misc. Also, there's new fanfiction from myself, new author Makoto, and a new artist Mistress Paco. Nine new official wallpapers, more moving gifs, more info on the manga guide, more images in both cover galleries, AND we updated and overhauled the entire links section! Jun 09, 2005 - In honor of school ending, summer, a recent trip to A-kon, and all things YGO . . . . We present to you the UBER-UPDATE! Yes, the uber-update. Wherein nearly all current sections have been updated and two NEW sections have been added. And they ain't small either. The only section not updated was the music, and the only section not up is the bios. I'm still debating their format and style. We'd list all the changes, but really, it'll just be faster for ya to browse around and see all the new stuff for yourself. Ja! Mar 27, 2005 - HUGE gallery update this round. We've added in TWELVE new image galleries! Malik, Yami no Malik, BEWD, and a whole bunch more! Also moved some of the other galleries for organization purposes. If something isn't working, PLEASE let us know! Fanfiction has been updated! New fiction from me and Serenity Wintirs, and a new author: Call me Gary. We're also pleased to announce that the music section is up! Japanese tunage for everyone to enjoy! Not sure if you've noticed, but we've been tweaking the graphics around here too. It's nothing major, but it does make things just a little bit nicer. ^_^ Jan 27, 2005 - The cause is down for an overhaul. Hopefully will be back up next week. No promises though. BTW: My birthday is Feb 6th, I'm turning 26. @_@ Presents are welcome! Jan 07, 2005 - New art from Miss Uery and myself. We've also changed the site look for the winter! Snowflakes and everything. ^_^ Links, and adoptees have been updated as well. We added image galleries for Honda, Mai, & Otogi and re-arranged Ryo & Bakura's. And we made some new banners and scrapped some of the old ones. Dec 25, 2004 - Holy crap we broke 10,000!! *faints* Yami, wake up! *pokes yami Rose* We gotta do the updates! *groans and sits up* You start Hikari, I'm still a bit stunned. Okay! Fanfics, yay! O_O You've been in the Pixie Sticks again, haven't you? *grins* How'd ya know? *sweatdrops* Lucky guess. Anyway, like Hikari said, we've updated the fanfics. There's new stuff from me and Serenity Wintirs. Go check it out! *bounces across the room* -_- So we've also updated our bio and have added a new section. Not sure if anyone will actually READ it, but that's okay. *bounces back in* We're gonna update the fanart next time! Oct 28, 2004 - All right. Why the hell wasn't anyone complaining about the broken links? Hmm? *taps foot impatiently* *sigh* Don't mind her minna. We've fixed all the broken links in the image galleries-- Checked every freakin' picture... *sweatdrops* Anyway, if there was one you saw that wouldn't load, or one you couldn't even see the thumbnail for, it's all there now. Oct 16, 2004 - Today we are one year old! *does happy dance* And boy have we accomplished a lot this year. We accomplished a lot today too. Eh. *shrugs* At any rate, on with the updates! As you can tell, we've changed most of the section tags. Our picture selection more than tripled from when we did this the first time, so we figured now was a good time to make new ones. Fanfiction and fanart have been updated! Finally we have our fics up as well as some new authors! Whoo hoo! The cause has been updated with new banners and a new page for members! If you've already got one of the banners on your site, BTW: What'dja think of the new intro pic? *grins evilly* YAMI!! What? Before anyone has a coronary, that's the grown-up Yugi from YGO-GX. In that series, he's my age! *insert evil laughter here* *blushes and shakes head* Sep 29, 2004 - We're baaack! *grins evilly* *giggles* Lots of new goodies this time: There's a bunch of new banners and buttons for the site, and a quite a few new banners for the cause. Yami got a whole bunch of new pictures off ebay. HIKARI!!! =P *sighs* Good news! We overhauled the image galleries! And we've finally put up the Kaiba and Seth galleries. Our little sister was gonna strangle us if we didn't. O_O There's been a total overhaul of the Yugi/Yami gallery. Now they each have their own pages, plus there's a page of Yugi & Yami pics and Pharaoh Atem! We've also updated the links sections. Next time look for updates in both fanfiction and fanart. May 04, 2004 - OMG!!!! We broke 5,000!!!!!!!!!!!!!! *does happy dance* Arigato gozaimasu minna! All that and we've started a fanlist! Go here: to check it out! Also, we've updated both mine and Tobias' art pages! Go look! Go on, shoo! *waves readers away* We'll be working on the image galleries for next time. ^_^ Mar 18, 2004 - Okay. So it's been a week. Hey, I've got midterms! Anyway, I finally got the banners for both the site and cause uploaded so if you don't have one or want a new one, check `em out! Mar 11, 2004 - Scratch the cause and banners. Damn Anzwers won't let me upload for some reason. I'll try it tomorrow after I've had some sleep. -_- Mar 11, 2004 - Whoa. Our hit count has more than tripled since our last update. O_O O_O Hey, the fanart is up! Piccies from me and two others. Wai!! *does happy dance* Now that's disturbing. =P We gots four new banners for the cause and one new site banner. For space reasons, I've moved them to another server. Also, I changed the pic on the main page. You like? Nov 19, 2003 - Nov 10, 2003 - Same day, more sleep. *grins* The Cause has begun! Support your local bishonen! Wee! I made all but one of the banners today. *blinks* What? I got motivated. Also, I got the Ryou/Bakura gallery up. ^_^ Nov 10, 2003 - Lookie, we got the fanfiction section open! But there's only one author! *sobs* There, there, Rose-chan. *pats Rose's shoulder* We'll get more eventually. Even if we have to beg shamelessly for it! HIKARI!!!!!! What? *sighs* Nevermind. Made some new buttons. These'll stay up until I get the new art drawn. Nov 04, 2003 - Wow, people are actually coming here! Lots of people! Well, I wouldn't say lots, but still, way more than expected. Especially considering how old the site is. Yeah, didn't it take the Asylum two months to get this many hits? *glares at Hikari* Urusai gaki. There are thousands more DBZ sites out there, even back then. Plus I've learned a lot of tricks on how to effectively get attention without being an ass about it. ~_~ Coulda fooled me. Brat. =P Anyway, updated Jonouchi and Anzu's galleries. They seriously needed it. Not without some trouble though. Ya, Fortunecity was being a pain in the ass again. I was beginning to think I'd never get the stuff up! Currently, I'm working on the Ryou/Bakura and Malik/Marik galleries. There's also a few more clique links up. What can I say, I'm a shameless fanatic. I'll say. *flips off Hikari* YAMI! *blinks innocently* What? You know what. *grins* Whatever. Oh! There's a guestbook now for ya'll ta sign. Oct 28, 2003 - We've got links! Aieee!!! Calm down fluffy. =P *blows raspberry at Rose* Now would be a good time to mention that normal updates won't be so close together. Once everything is up and going it'll probably settle into a bi-monthly kinda thing, like the Asylum gets. It all depends on the traffic. We also got some of our banners up. Nothing fancy, but it's a start. I'll get some more done up later. ^_^v Oct 27, 2003 - Joined some webrings . . . . and we're back to the gray again. My dad says it looks more professional. I kinda agree. *shrugs* Anyway, this is the color I had in the first place. S'easier on the eyes too. Oct 23, 2003 - Finished putting up the galleries I originally made for the Asylum. Some of them are kinda small, but keep in mind, I didn't have a new webpage in mind when I made them! There will be more added. *grins* I've got tons of new pictures and I'm just itchin' ta get them up. Oct 22, 2003 - New hit counter. Stupid fast counter kept counting every page view. This one just gets unique hits. Oct 22, 2003 - Changed the background to purple. As if you couldn't tell. *rolls eyes and pointedly ignores Rose* Our mom said it looked better than the gray. ^_^ Yeah, we're 24 and still get advice from our mom, gotta problem with that?! Yami, you're scaring people. *glares at her hikari* What did I say about that? Well, you ARE and you're acting like Bakura. *crosses her arms and scowls* Oct 21, 2003 - We got the submission rules and the about me sections working. *smiles* So if you wanna send anything in, read the rules to find out how! Rose still can't decide if she likes the purple or the gray background better. Oct 20, 2003 - Okay, now you can actually SEE the update page. *sighs and rolls eyes* Stupid html editors. For reasons unknown to man, the damn things keep screwing with my image maps. >_< Oct 19, 2003 - Wooo hoo! We've got an update page! Not that there's much to report. Kinda cranky aintcha', Yami? It's past 3am and all I've gotten accomplished is an update page. Of COURSE I'm cranky . . . and DON'T CALL ME YAMI!! Gomen ne, Rose-chan. Anyway that's not all we got done. There's some clique links up too. Yeah, but I couldn't get a Yami Yugi fan one! *pouts* All the links I found were bad! I got a 404 error on the one and a Tripod error on the other! *pauses to spit and glare* Damn Tripod. Don't mind her minna. She had a falling out with Tripod a few years back. It's a loooong story. HIKARI!!!!!!!! Oops. Gotta run! BACK
http://yugioh.db-asylum.com/updates.htm
CC-MAIN-2019-13
refinedweb
4,187
85.49
I've been handling a lot of personal problems since November 2008 so this version is not as chock-full of new features as previous versions. The two big items are task reminders and FTP support (via the File menu). Other smaller features include:and TVM_DELETEITEMto know exactly when items are added and removed. SetWindowPos(NULL, 0, 0, 0, 0, SWP_FRAMECHANGED | SWP_NOMOVE | SWP_NOSIZE | SWP_NOZORDER)to force Windows to recalculate the non-client area of the control. WM_NCCALCSIZEwhen it does, and offset the left border by the required gutter width. WM_NCPAINTfor painting the numbers.derived class using class wizard, and wire up all the controls just as you normally would. #include "runtimedlg.h"and change all instances of CDialogto CRuntimeDlg. CRuntimeDlg::AddRCControls(...)passing the control definitions as a string. CRuntimeDlgtakes care of the rest including, if required, auto-sizing the dialog to suit the control layout.. * CSubclassWnd was originally written by Paul DiLascia for MSJ magazine. The version I use has been heavily extended to suit my specific needs. The classes that depend on it here need this extended version. This work is licensed under a Creative Commons Attribution-ShareAlike 2.5 License. General News Question Answer Joke Rant Admin
http://www.codeproject.com/KB/applications/todolist2.aspx
crawl-002
refinedweb
199
50.53
Rails Shell View The Rails Shell view is a good example of why I prefer RadRails over other existing IDEs. Someone asked in the Aptana forums if RadRails could incorporate a nice feature he saw in a different IDE. Chris Williams, the lead developer of RadRails, asked him to enter the desired features into the Aptana Issues Tracker, where you can report any bugs or ask for any functionalities you would like to see in RadRails. Some months later, in RadRails 1.0, the Rails Shell view was already available. From the Rails Shell view, you can have the best of two worlds: the power of the command line and the ease of use of a development IDE. You can basically run railsrelated commands from a command shell with content-assist. The commands you can execute from this view are: rails, gem, rake, script/about, script/console, script/destroy, script/generate, script/plugin, script/ runner, and script/server. If you are not familiar with using these tools from the command line, maybe you will not use the Rails Shell view very often. But if you are familiar with these commands, you might find yourself more comfortable with the Rails Shell than with the GUI views. As an additional advantage, from the Rails Shell view you can pass any extra parameters to these commands. You could, for example, manually install a plugin or a gem not listed in the Plugins or RubyGems views. This view doesn’t have a tab of its own, but it’s accessed as a part of the Console view. Go to the Console view and open the drop-down list by the Open Console icon. From the list, select the Open a Rails Shell option. By default, the Shell will be active for the currently selected project in the Ruby Explorer. As usual you will see the current project in a label at the top of the view, and you can use the Change Active Project icon to select a new project. Alternatively, you can directly type switch or cd in the Rails Shell view and a list of the available projects will appear for you to select one. Note that only the open Rails projects will be available. If you click Ctrl + Space (or Command + Space in Mac), the list of available commands will appear. Now you can either select the command directly from the list or start typing so the list will filter the matching commands. Every time you type a command, a new content-assist window will display with the suggested options for that command. For example, if you type rake and then Space, you will see a list of the available tasks for your project. Be careful not to hit Enter after typing rake without any arguments. By default, the rake command without any arguments will try to run the tests for your project. It’s worth noting that some commands accept more options or fl ags than the ones appearing as suggestions. Even if they don’t display in the list, if you pass those extra fl ags they will have the expected effect. You could for example execute a Rake task with trace (debug) information by typing: rake –trace stats You can use this view also for starting a rails server via the script/server command. If you want to start the server in debug mode, you can use the command: debug script/server In the professional version of RadRails, there is also a profile option for starting your server under profile mode. If you try to run the profile mode in the Community edition you will get an error message. Finally, in the Rails Shell view it’s possible to use the keyboard arrows to move through the command history. RegExp View It’s not in every project that you need to write complex Regular Expressions, but when you have to do it, it’s always nice to have a way for testing and refining them. In Ruby you can always open a console and evaluate your expression against a given string, but with complex expressions it can take a while to see why the string is not matching properly. RadRails incorporates the RegExp view, which will help us test regular expressions and execute them step by step to see where the pattern is not matching your data. The Regular Expression box is for entering the regular expression, and the Text to match against box below is for entering the data to match it against. Once you write your expression, you can use the icons on this view’s toolbar to evaluate it. The icon most to the right is labeled Validate RegExp. When you select this icon, the circle at the top of the view will change colour to Green or Red. Red means either your expression was wrong, or it was right but no matches were found. Green means the expression was matched correctly against the text and there were some results. To actually see the matches of the expression, you can use the forward, backward, and reset icons. When you click on forward, RadRails will break down your regular expression into its subpatterns and will display how every part is matching with your text. Every time you click on forward you will see the matches for the next part of your expression. This is a very interesting way of testing your patterns, since often a part of the expression is right but it doesn’t match your text because of a subpattern. With this tool you can see how the different parts are being evaluated. For example, in the preceding screenshot I wrote a simplistic expression for finding email addresses in a text. The expression is not complete, but for this example’s purposes it will do. Basically I’m telling RadRails to find strings composed of several characters that are not spaces or ‘@’, then the ‘@’ symbol, then again some characters except blanks, dots or the at symbol, and finally a dot and again some characters. The pattern for this could be: [^\s@]*?@[^\s\.@]*?\.[^\s@]* And as the Match Text I’m using a combination of both valid email address and non-matching text: javier.ramirez.gomara@gmail.com w test@testmail.com wrong@test wrong@wrong test@testmail.com If you execute the expression by using the forward icon, you will see how the first part of the expression will match the first part of the three legal email addresses in that text, then the next part will match the three at symbols, and so on. If you prefer to see the whole matches instead of the partial results, you can use a trick. Surround the whole expression with parentheses and then you can use either the Validate RegExp or the forward icon. In any case you will get the full matches for your expression. In our example, it would look like this: ([^\s@]*?@[^\s\.@]*?\.[^\s@]*) When I’m writing regular expressions, I like to go step by step until I have it right and then use the parentheses to make sure the whole strings are being matched properly. Finally, the two checkboxes you can see in this view are used for telling your expression to be case-insensitive or to match expressions even if the strings are divided by a line break. Problems View When writing your code, if you have a syntax error you quickly fix it because Eclipse will warn you, and because if you don’t, then your script will not run. If we are not talking about errors but warnings, even if Eclipse warns us, we usually are much more tolerant and we tend not to pay enough attention, thinking we can always fix that later. By the time, and if, we want to clean up our code, we will most probably have lost track of where the warnings were happening. By using Eclipse Problems view, we can see a list of all the places in our code where we have unresolved warnings. The list of warnings is configurable, as we will see in the next chapter, and can include warnings about deprecations, empty blocks, variable names, common possible errors, unused variables, and so on. If you open the Problems view, you will see a list of the current warnings for your project. If you cannot see any warnings, open one of your controllers and just define an empty method that receives some variable you will never use. A definition like this would do the trick: def empty_method(a) end After you save the file, you will see two warnings in your Problems view, one about having an empty method definition, and another one about an unused variable. If you double-click on a line of this view, the corresponding file will be opened in the editor and the cursor will be placed at the line containing the warning. As you can see, the list gives you information about in which file the problem is happening, the name of the container file, the relative path in the workspace, and the line number. You can sort the list on any of these columns by clicking on the column name. Thus, you can group together errors with the same description, in the same project, or starting with the same path. There are some extra options in this view. If you click on the view’s menu icon (as shown in the above screenshot) you will see these additional options. If you want to sort the lines by more than one column, by description and then by path, for example you can use the sorting option. The Group By option of this menu will not have any effect here as it makes sense only for developing Java projects with Eclipse. Use the Preferences option if you want to limit the total number of warnings rendered. Finall y, the Configure Filters option, also available as an icon on this view’s toolbar, will allow you to filter warnings only for the current project, the current selection, or the whole workspace (the Window Working Set). By default you will be presented with warnings for all the opened projects in your workspace. Tasks View When y ou are working on a project, you sometimes leave incomplete methods or things to fix later. As you know, it’s a very good practice to write a comment in the code so you will not forget you have to finish or fix that code in the future. A common coding convention is writing a comment with the word TODO (or XXX) for incomplete functionality and with the word FIXME to mark a piece of code that must be fixed. RadRails supports the use of these coding conventions, and offers the Tasks view in which you can see the list of annotations in your code and in which you can also set to-do items manually. The menu of this view is very similar to that of the Problems view, the main difference being that in this case you can filter out results by using the priority column too. First of all, we are going to set some annotations in our code, so we will see some entries in this view. Open the file books_controller.rb and at any place in the code write the following comments: #XXX needs to iterate over the results. finish later #TODO automatically generated method #FIXME breaks on nil After saving the controller, you should see three entries in the Problems view. Each line displays the type of annotation as well as the associated comment. You will notice that at the left of the FIXME annotation there is a red exclamation mark. This is because the priority is automatically set to High on FIXME and normal on TODO and XXX annotations. Just a quick note here. Since Rails 2.0, there are some Rake tasks available for searching for annotations in your code. The recognized annotations in these tasks are FIXME, TODO, and OPTIMIZE (also supported by RadRails). Annotations with XXX syntax are a common convention amongst software developers, but since they are not supported by the Rails tasks it could be advisable not to use them. Of course nothing will break if you do, but by using only the annotations Rails understands, you can be sure people using a different IDE (or even no IDE at all) can take advantage of your code annotations. You can add your own annotations or change the priority of the existing ones from the Window | Preferences dialog. We will see how to do this in the next chapter. Apart from using this view for code annotations, you can manually add to-do items. You can right-click on the content area of this view and select Add Task or you can use the Add Task icon of the toolbar. A dialog will appear for you to enter the description of the task, the priority you want to assign, and a checkbox indicating whether it is completed or not. If you create a task with High priority, it will display the red exclamation icon, and if you assign low priority, it will display a blue down-arrow. Once a task is set manually, you can click on the task description or in its priority in the Tasks view and edit the information directly on the list. You can also check the task as completed at any time you want. When you add a task manually it is not associated to any resource. If for any reason you want to add a task and you want to assign it to a source code file, you can open the file you want in the editor and then select the Edit menu and then the Add Task option. You will see the same dialog as before, but the Resource, Folder, and Line fields will be filled. You can also add an associated task to a resource by right-clicking on the left margin of the editor and selecting Add Task from the context menu. Test::Unit View There is always an excuse not to write your tests: the deadline is close and you don’t have the time, the requirements are not clear so it’s difficult to write the test code, generating data for testing is not always easy. When working with Rails, some of those excuses lose strength, since the framework facilitates the preparation of the test database, the generation of the testing suites, and the execution of the tests themselves. Because of that, the proportion of developers writing applications in Rails who systematically write tests is larger than in other development environments. Actually there are many developers who write the tests even before than the code itself. Of course this practice is not particular to Rails, but the framework makes easy to adopt it. Since testing is such a relevant part of the Rails philosophy, it’s only natural that RadRails provides a specialized view to help us test our application. This view is called Test::Unit view and it appears as a tab by the Ruby Navigator in the default Rails perspective. Whenever a test suite is run from within RadRails, the Test::Unit view will display the results of the execution, allowing you to examine the details. There are several ways of running your tests from RadRails. The easiest way is to navigate to a unit test file (under the test/unit directory of your project), right-click on the name of the unit test file you want to execute, select Run As, and then Test::Unit Test. Before running your test, make sure your test environment is configured properly (database configuration, fixtures, and unit test code). After the test is run, the Test::Unit view will present the results. Near the top of the view you will see a bar either in green or red color. Green means all the tests passed correctly, and red means there were failures or errors. A failure is reported when one of the tests didn’t pass, and an error is reported if an exception was launched when trying to run a test. Above the bar, you can see how many tests were run, how many errors you got, and how many failures. Below the bar there are two tabs. In the first one you will find only the list of tests with errors or failures, and the second tab labeled Hierarchy will display the list of all the executed tests, both successful and not. If you doubleclick on any of the items, the corresponding file with the definition of the test will be opened in the editor area. When you select an item with errors or failures, the Failure Trace pane of the Test::Unit view will display the details of the failure, or the Stack Trace in the case of an exception. In the toolbar of this view there is an icon for relaunching the last performed test, another one for locking the scroll (in case launching big test suites producing a large output), and the menu for this view, in which you can change the layout. So far we have launched a single unit test suite, but as you know in Rails you can also use functional and integration tests. For running all the unit tests in your project, or for running the integration or functional tests, you have to use an icon located on the toolbar of the Workbench. If you hit the icon labeled Run All Tests all the Unit, Functional, and Integration tests for your project will be run. If you want to launch only one test type, click on the small arrow by that icon so you will get a submenu allowing you to run the different test suites. No matter in which way you launch your tests, the output will be displayed at the Test::Unit view in the same way as we saw before. You can also choose to execute your tests automatically. In this case, the tests will be run either every time you save a file or at a given interval of time defined by you. If you want to run your tests automatically you have to configure RadRails for that. Go to the Window menu, select Preferences… and then navigate to the Rails | Autotest dialog. The first thing you have to tell Autotest is how it will be launched: after saving a file in the editor or at a regular interval. The two options are mutually compatible, so you could mark both of them if you want. After choosing when the Autotest will be run, you have to instruct RadRails about which tests to launch. By default it will try to launch only the associated unit tests for the model, controller, or plugin file you are saving. Modifying any other files will not launch Autotest, which is one of the reasons why you could possibly want to launch the test suite at regular intervals. You can tell Autotest to run not only the associated unit tests, but also all the unit tests for your project, all your integration tests, or all your functional ones. Once you set the options as you think is better for you, just click on Apply and then on OK to close the preferences dialog. If you chose to Autotest after saving the editor’s contents, you can try opening a model or controller, making a small change (maybe a blank character or a comment) and saving. At the bottom of your eclipse workbench you should see the status line informing you about the Autotest progress. When the test finishes, you can see the results as usual in the Test::Unit view. There is an additional feature of Autotest. The icon on the toolbar workbench close to the Run All Tests one labeled Manually Run Autotest Suite will remain in green if all the tests were passed, or will change to a white cross over a gray background in the case of errors or failures. In order to get your attention, for some seconds this icon will display an animation of a small yellow blinking cross. In the figure below you can see the four possible images for this icon. When editing the contents of a model, controller, or plugin you can use this icon no matter whether it’s displaying in green or gray and force running of the associated tests (as configured at the Autotest preferences) even without saving the file or waiting for the established interval. Summary This chapter explained how you can use RadRails for dealing with many of the development tasks you would otherwise have to run from the command line in a much more convenient way. Except for exceptional occasions when you need to pass uncommon arguments to the command-line tools, you can manage all your Rails development processes from within the IDE by using the built-in views. If you find yourself going frequently to the command line to launch some processes, we also learned how you can call external tools from Eclipse, so you can have everything just a click away. By using RadRails views, you can manage documentation, servers, the Rails console, plugins and gems, Rake tasks, code generators, code annotations, warnings, to-do tasks, regular expressions, and test suites. Speak Your Mind
http://www.javabeat.net/develop-ruby-on-rails-applications-fast-using-radrails-1-0-community-edition/4/
CC-MAIN-2015-14
refinedweb
3,569
65.96
Livecoding Recap: A New, More Versatile React Pattern Livecoding Recap: A New, More Versatile React Pattern A dev shows a React and D3 library he's been working on. The goal is to build a React and D3 library that never gets in your way. Read on for more! Join the DZone community and get the full member experience.Join For Free I'm working on a React and D3 library that I've been thinking about for two years. My goal is to build simple-react-d3, a React and D3 library that never gets in your way. Most libraries give you composable and reusable charting components that are easy to use and quick to get started with. They're great for simple charts. But more often than not, the more custom you want your dataviz to be, the more control you need. You begin to fight your library. VX comes closest to the ideal get-out-of-your-way library, and even with VX, when I recommended it to a friend, it took all of 10 minutes for him to hit the wall. "Wtf, how do I do this? The library is fighting me!" The best way to get started was to generalize the D3Blackbox pattern I developed for React+D3. It's the easiest and quickest way to render a random piece of D3 code in your React project. Here's an example: import D3blackbox from "./D3blackbox"; import * as d3 from "d3"; const Axis = D3blackbox(function() { const scale = d3 .scaleLinear() .domain([0, 10]) .range([0, 200]); const axis = d3.axisBottom(scale); d3.select(this.refs.anchor).call(axis); }); export default Axis; Take some D3 code, wrap it in a function, pass it to a HOC (higher order component), and you're done. The HOC renders an anchor element, and your render function uses D3 to take over and manipulate the DOM. This approach doesn't give you all the benefits of React, and it doesn't scale very well. It's meant for simple components, quick hack jobs, or when you have a lot of existing D3 code you want to use. It is the easiest and quickest way to translate any random D3 example to React. Turning that into a library was pretty easy: $ nwb new react-component simple-react-d3 <copy pasta> Boom. Library. But what if you're the kind of person who doesn't like HOCs? Maybe you prefer render props or function-as-children? A React Component That Supports All Reuse Patterns "Can we make this HOC work as not a HOC too? What if it supported all popular React patterns for reuse?" I came up with this: It's a function, SVGBlackbox, that takes a function as an argument and acts like a HOC. Pass in a func, get a component back. Just like the D3Blackbox example above. You can also use it directly as <SVGBlackbox />. If you do that, you can either pass a render prop, or a function-as-children. Both get a reference to the anchor element as their sole argument. Now you can use the blackbox approach any way you like. const Axis = SVGBlackbox( function ( ) { const scale = d3 .scaleLinear ( ) .domain ( [ 0 , 10 ] ) .range ( [ 0 , 200 ] ) ; const axis = d3.axisBottom (scale) ; d3.select ( this.refs.anchor ).call (axis) ; } ) ; class Demo extends Component { render( ) { return ( <div> <h1>simple-react-d3 demo</h1> <svg width= "300" height= "200" > <Axis x= { 10 } y= { 10 } /> <SVGBlackbox x= { 10 } y= { 50 } > {anchor => { const scale = d3 .scaleLinear ( ) .domain ( [ 0 , 10 ] ) .range ( [ 0 , 200 ] ) ; const axis = d3.axisBottom (scale) ; d3.select (anchor).call (axis) ; } } </SVGBlackbox> </svg> </div> ) ; } } That renders two axes. One above the other. Neat. Unfortunately, the internet told me this is a terrible pattern, and I should feel bad. The window.requestAnimationFrame part can lead to all sorts of problems and likely clashes with the future we're getting in React 16.3. However, Sophie Alpert had some good suggestions: You need the rAF call to be triggered in componentDidMount (maybe without the rAF). Otherwise it might be called before the node even exists (or while it is in the middle of an update). - Sophie Alpert (@sophiebits) March 26, 2018 Make it a class? You can also return a child component that is a class. It's also possible to do this logic in a ref callback but that's a little obtuse. - Sophie Alpert (@sophiebits) March 26, 2018 The idea of shoving render prop stuff into the ref callback smells like black magic. It's so crazy that it might just work. The Ultimate Reusable Component Another livecoding session later, we did it: the ultimate reusable component. You can use it as a HOC, with render-props, or function-as-children. Less than 2 minutes to grab a random D3 example from the internet and render it in a React app. Still blackbox, but works really well. Wrapped and embedded at 14:23 It took 5 minutes because we had to update code from D3v4 to D3v5 and add some prop passing to SVGBlackbox to make it easier to use. You can see full SVGBlackbox code on GitHub. Here's how the interesting part works: When used as a HOC, it takes your argument as the render function and passes it into the usual D3Blackbox HOC. Wires up invocation as required and hands control over to you. When used as a component, the argument is a props object. Take out children and render, store the rest as props. Take x and y from that. Then return a <g> element moved into (x, y) position and given all the other props. This can be handy. Now the tricky part: a ref callback that invokes your render function. That's right, you can hand over control of the anchor element in the ref callback. This works but trips up on React's caveat about ref callbacks sometimes.. Hence the callback is wrapped in a conditional to check that anchor is defined. Still feels a little dirty, but much better than the requestAnimationFrame approach. Shouldn't mess with async stuff in React 16.3 either, I think. Next Step? Something similar for the full feature integration where D3 calculates your props and React renders your dataviz. You can see the first part of that towards the end of the 2nd video above. It's all coming together! Published at DZone with permission of Swizec Teller , DZone MVB. See the original article here. Opinions expressed by DZone contributors are their own. {{ parent.title || parent.header.title}} {{ parent.tldr }} {{ parent.linkDescription }}{{ parent.urlSource.name }}
https://dzone.com/articles/livecoding-recap-a-new-more-versatile-react-patter?fromrel=true
CC-MAIN-2019-18
refinedweb
1,097
66.94
This series of post outlines the development of Android plugins for Unity3D. I had been asked to develop Android plugins for Unity3D every few days or months, each time following similar paths which I think would be useful to those who will do the same job. And here is how. Clone the source code in GitHub here ! Tools you need: - Android Studio - Unity3D - Android Devices for testing - Some patience to go through this post Let’s get started. The 1st example is extremely simple. We define a function DoSthInAndroid() that dump some information using Java Log.i() function, and get this function called in Unity3D. Though simple enough, it outlines all the required procedures, and are very useful to understand the entire workflow. - Start Android Studio, create an Android project (AndroidAddin) with an Empty Activity. - In Android Studio, create another library project (AndroidLib) by clicking the menu File > New > New Module - Select “Android Libray” in the wizard - Build the project, make sure everything works so far. In the AndroidLib module, add a class called “Helper” In the Helper class add a function DoSthInAndroid() package com.androidaddin.androidlib; import android.util.Log; public class Helper { public static void DoSthInAndroid() { Log.i(“Unity”, “Hi, Sth is done in Android”); } } To make sure this work, in the main app, add below testing function: @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); Helper.DoSthInAndroid(); // Testing } And when you run the Android App, you can see the output in Logcat window: … 15:59:57.276 12121-12121/com.androidaddin.androidaddin I/Unity: Hi, Sth is done in Android If you explicitly build the AndroidLib project only, you will find below output: The aar file is the compiled android archive or the binary distribution of an Android Library Project. Rename the *-debug.aar as zip file, unzip the file, you will see a file called “classes.jar” which contains only java code files (classes): Grab the classes.jar for later use in Unity3D. Open Unity3D, Create a project, then create a folder in Assets: Assets\Plugins\Android Add the classes.jar to the above Assets\Plugins\Android folder Create a new MonoBehaviour script TestAndroidPlugin.cs, and attach it to the camera: public class TestAndroidPlugin : MonoBehaviour { void Start () { Debug.Log("Android function is to be called"); var ajc = new AndroidJavaClass("com.androidaddin.androidaddin.Helper"); //(1) ajc.CallStatic("DoSthInAndroid"); //(2) Debug.Log("Android function is called"); } } The AndroidJavaClass offers access to Java class object, and the CallStatic fires the java object’s method. Simple? Save the scene, add the scene in the build setting dialog, remember to change the BundleID. Build the Unity app and run it on your device, you will see in the Logcat window what you expect. Yeah! We have stepped the most important stride in developing Android plugins for Unity3D. There are more cool stuff than calling a log() function in Unity3D, we can do quite a lot such as adding Android UI (button, image, textview, progress bar etc.) to Unity scene, which will be discussed in a separate post! To summarize: - You need define something in Android java which is to be called in Unity3D C#; - You need compile the java class into a jar or aar file (shall talk about this in directly using aar in another post); - You need put the compiled jar/aar in a special folder (Assets\Plugins\Android), well not always! See my others posts in this blog! - You need call the java function with the aid of AndroidJavaClass or AndroidJavaObject class which is defined in Unity3D. UPDATE: In Unity3D 5.0+, you no longer need unpack the aar file and manually get jar file. UPDATE: In Unity3D 5.0+, you no longer need put the file in Plugins\Android folder, anywhere in the Asset folder will do!. You might feel cumbersome in renaming the aar –> zip –> copy files to Unity3D –> Build app etc. Yes, this is indeed tedious! In the following posts, I shall continue to propose solutions to automate this process, and everything is fine! Here is the 2nd part of this tutorial series: Step-by-Step guide for developing Android Plugin for Unity3D (II) Enjoy and happy new year! See you in 2016! Clone the source code in GitHub here ! cuongkimh4 April 28, 2016 at 11:39 am I follow your step to (In the AndroidLib module, add a class called “Helper”). But i can’t create C class, just only can create C++ class so i can’t continue. Can u fix my problem? Hamza Lazâar May 5, 2016 at 8:24 pm Hi, Thanks for this first post. I’m waiting for part 2 though as this part is relatively trivial. There aren’t enough resources on the internet about how to make a Unity plugin for Android without extending main UnityActivity. I’m looking for best practices, tips or advice from someone else’s experience. The idea is that you need to handle the lifecycle of a 2nd Activity (call start, process intents, etc.) but I did not find the best way to return to UnityActivty…I tried calling finish on the other Activity…I want to avoid all the static methods and properties also if that’s possible. I’m also interested in your automated process of copy/pasting files to Unity folders from Android Studio? Regards, Hamza xinyustudio May 5, 2016 at 8:33 pm Hi Hamza, sorry for the delay, I will try to update this asap. bhavani July 25, 2016 at 2:16 pm Hi. Is this possible to integrate android studio layouts to unity?. If possible please help me with any links. Kashif Tasneem August 8, 2016 at 1:28 pm Hello. If I need context, what should I do? xinyustudio August 8, 2016 at 1:50 pm AndroidJavaClass jc = new AndroidJavaClass(“com.unity3d.player.UnityPlayer”); AndroidJavaObject jo = jc.GetStatic(“currentActivity”); jo will be the context you need! Kashif Tasneem August 8, 2016 at 5:54 pm Thanks. Brian October 29, 2016 at 4:29 am Sorry, I’m really new to this, but I am getting an error: Cannot resolve symbol ‘Helper’ I’m assuming this is because I need to tell the main project that ‘Helper’ exists, but how do I do this? With an import? game maker November 4, 2016 at 9:29 pm that’s work. im so happy omg. thankssssssss Vladut November 15, 2016 at 6:53 pm Hi Brian, You can’t acces Helper class because it is not added on the dependencies. You need to open build.gradle (Module: app) and add this line compile project(“:AndroidLib”) where AndroidLib is the name of the module created. After this sync and you should be able to import the class from the module, and of course use it. Hope it helps! Shrinath Kopare May 25, 2018 at 10:11 pm Thank you so much…it was really helpful. Niall February 10, 2020 at 5:30 am I opened a new “empty activity”, and New module is greyed out. Has something changed in recent versions of Android Studio?
https://xinyustudio.wordpress.com/2015/12/31/step-by-step-guide-for-developing-android-plugin-for-unity3d-i/
CC-MAIN-2020-29
refinedweb
1,180
65.73
″ is just a name. if you look around in the next few releases i suspect you’ll see that gnome has added much more functionality since “2.0″ than goes into new “major releases” of almost anything else. 66 Comments Thanks man, you forgot Swfdec :p As one of the Pimlico authors, we thank you. :) Excellent post with many pointers! Thanks! Good to know that there more people inside community who are excited about our short-term and midterm future. :-) dconf wiki page name should be a wikiword dude! Why not DConf? Of course, we rock. nice list except for vala - I think D has a greater potential here. pitivi would be great too, when it’s stable :-) for now, it’s definitely not reliable enough. Not so sure about tracker. I don’t like its developers’ behaviour (spamming the mailing lists, badmouthing beagle everywhere without proofs), and if I was an Ubuntu developer I would not trust a piece of software which such an unprofessional release schedule (0.6 was delayed for months, bugs weren’t fixed because the release was supposed to happen soon and it never did) and such unprofessional management (feature promised and never really implemented, bad bindings and bad and always changing API). They tried to propose tracker for gnome 2.18 and still the new version can’t do many things that “other indexers” have been doing for years. Oh, well, I forgot to mention that I don’t trust people who abuse of magniloquent buzzwords: “extensible metadata rich service”, “truly unique, useful and really innovative”, “semantic web RDF-like metadata database”. PS. looking for the buzzwords I found this:. Well, good luck Ubuntu, if tracker will respect his future roadmap items as it did with the last ones, you’re screwed. @Giorgi0: Classic propaganda! I only ever criticised beagle when their devs criticised tracker first! I have never started the flames wrt beagle. I only ever mentioned tracker as a solution for speeding up desktop files on one post on ddl - its totally false to suggest we were spamming anywhere! Software is hard - things get delayed and time is needed to make things stable. that is the nature of software development. It would be more unprofessional to release really unstable software Bad API - always changing - totally false. Pls get you fact rights before badmouthing tracker. And if you are a beagle fanboy then you have nothing to worry about as we will support XESAM so you can use your favourite indexer. @Giorgio, Remember, any sufficiently advanced technology is indistinguishable from magic. I don’t need indexers, I learned how to classify my files in the 90s and in 16 years I never lost anything. Jamie, you can say what you want but the facts are out there for those who want to check them, in the Gnome mailing lists, in the blog entries, in the news sites comment. @Girogio: yes the facts are out there and check the dates: Nat flamed tracker first And then I responded and then Joe did As you can see I did not start it! You forgot to talk about Conduit, which seems a very promising technology. My greatest hopes lie in the combo pimlico/modest/empathy/conduit… Big thank you to every developers involved in Gnome ! My experience with the Beagle from Feisty’s repositories is that I actually notice when it is indexing while I am trying to work at the same time. It is irritating. Tracker I have never noticed, it seems to do something better. Perhaps the Beagle just goes haywire on certain files and tracker doesn’t. “They tried to propose tracker for gnome 2.18 and still the new version can’t do many things that “other indexers” have been doing for years.” Yeah. Tracker needs still at least Exaile/Rhythbox support, and then Epiphany and then Gmail and Google Docs because that’s where everything is nowadays anyways. Oh, it seems they all suck on that area :) Jamie, what is criticized with your release management is not the fact that you got your software stable before releasing it. But you should have done minor releases meanwhile, in order to correct the most reported bugs. Nice post desrt. I’m coming to the same conclusion these days. Half a year ago there was indeed some stagnation. I guess all the heroes where working on their cool stuff that they only started talking about half a year ago :) Jamie, I don’t think I ever badmouthed Tracker. I’ve had issues with it, and I’ve tried to bring them up as objectively and productively as I could. And any mistakes I’ve made on the list I’ve apologized for. Wait, so the big news in Gnome are… - a rewritten configuration system - another html renderer that does the same than the previous except it’s different internally - gfvs, a rewrite of gnome-vfs - vala, a hack to fix the problems of chosing C as the base language for the platform (the description of vala pretty much tells this) - tinymail, an implementation of email software - rewrite of GDM to add funtionality that is not use by the vast majority of users Except for telepathy, tracker and gbus, I’m not seeing any major improvement here… Diego: Better base libraries means better applications, too. dconf or gvfs may not be as whizz-bang cool as Telepathy or Beagle, but if they manage to fix some of the problems of what they’re replacing (and both gconf and gnome-vfs have a lot of issues) then they’ll benefit every application in more substantial ways than Telepathy probably could. @Diego: Thanks for writing that comment, so I didn’t have to :-) Also, kinda sad that Pyro didn’t make the list. Diego, webkit can do something the gecko can not: separate plugins such as totem/flashplayer from the main process. It can really improve stability, and gecko is unable to do that because it practically has no architecture at all.. Also, gecko is harder to customize for making a custom browser or widgets that behave in certain ways, forcing the developers to do many silly things. bluez, gstreamer, pulseaudio, ohm! so many appealing stuffs, yeah! @Giorgio: I don’t see why you need an indexer then ;) @Diego: what should they do to make an impression on you, stop releasing software for two years and start publishing only magniloquent press-releases while extending the roadmaps further and further? Disclaimer: Any similarity to actual operating systems or desktop environments is purely coincidental. Pure awesome would be a gconf replacement using Tracker. @Stoffe: LOL, taunt of the year! But what exactly can Vala do that Gtkmm can’t, other than getting you to learn a language that’s completely useless for anything but Gnome apps? I must say that recent development on Gnome, even if they are rewrite of something that needed one, or just fancy feature nobody use, show that Gnome has a future. It has a future because people innovate, or attempts to (not making a judgment), people develop for it, developer chose it. #26 No, what they would need to impress me is just to avoid the “we do not need to do major improvements and break the compatibility with a gnome 3.0 because gnome is already good enought” mindset. Except for the “online gnome desktop” thing from havoc, I’ve not seen anything that dares to break the current asumptions (I don’t see people asking ej: “it’s not an usability bug the windows-inherited idea of systray icons+taskbar?”) Diego: I consider that a good thing, because I think the current paradigm actually works pretty well for what we ask of it. I’ve seen tons of different attempts at reinventing the computer interface, and I haven’t seen any that could be as efficient and worthwhile as what we have now. Hi, the Vala language for GNOME is interesting, but the Properties Sample looks very obscure to me “But what exactly can Vala do that Gtkmm can’t, other than getting you to learn a language that’s completely useless for anything but Gnome apps?” You mean C#? As for what Vala *can* do, it gives you C#-like syntax with C-like performance. And it’s tailored to the GObject model, removing a lot of the overhead of translating from GObject to some other language’s native object model. What it isn’t doing is preventing you from using Gtkmm, though. One of the worst posts I’ve ever read. Uninformative to say the least. The links more often than not mean nothing. “dconf - hopefully the future of configuration in the gnome desktop.” - Why? Honestly, I don’t know. “gtk+/glib awesome - every new release brings exciting new features and moves us closer to removing our dependency on those crufty old libraries that nobody seems to care to have around anymore.” - What features? Which old libraries? What are you talking about? And, oh, will the gtk widgets stop to look utterly cartoonish and to waste screen real estate? “tinymail and modest - i can’t wait to read my email using this stuff. if it’s half as good as that pvanhoof guy keeps saying it will be then i’ll be quite happy indeed.” - What’s good about these two mail clients? What do they do that Thunderbird, Kmail etc. can’t do/do worse? And Vala? Building an entire new programming language just because building GNOME on C was an horrible decision from the start? What’s wrong in using C# / Python bindings? @Stoffe: Agreed. The overlap between DConf/GConf and a subset of Tracker’s abilities is substantial. If we’ve got a persistent object store, why implement a separate key store with fewer capabilities? In fact, it seems to me that application settings better fit Tracker’s model than the registry model. Just as artist, album, rating and tags are valid audio file metadata, settings and properties are perfectly valid application metadata and could be easily represented as such with a modified application settings namespace in the Tracker ontology. Gnome lack of a decent graphic toolkit for animation. it’s to late to implement one now the best option is to use QT/KDE toolkit for those things. Troione: Try clutter. A lot of people seem to like it. I can’t believe some of the comments here, truly ridiculous… Ignoring the merit of each named example, I’d say given the amount of work going on improving and rewriting core applications and libraries, as well as the work going into new solutions, it makes it pretty hard to argue against the point of this post; that Gnome has an exciting future ahead of it. Particularly disappointing is a certain developer’s encouragement of a clearly facetious comment, just because they’re bitter their own project didn’t make the list… I fully agree with the sentiment of this post. While perhaps I don’t agree that everything in this list is going to be a major player in the future of Gnome, they’re all interesting and point to exciting times ahead of us. How about some constructive discussion instead of the sniping and baseless arguments that seem to make up the majority of the latest comments? ++ Cwiis What features? Which old libraries? What are you talking about? And, oh, will the gtk widgets stop to look utterly cartoonish and to waste screen real estate? Cartoonish? Waste screen real estate? Ah-ehm, if you change the subject from GTK to QT make it more evident. Gnome is working on a bunch of crap the average Gnome user could care less about. KDE is coming up with a brand new paradigm - new interactions, new graphics, new widgets, new everything. Yeah, Gnome’s future is just shining. Call me when that piece of crap Nautilus is re-written. Hi there. Gnome is great and all but man, you need to learn how to use the shift key. Ubuntu User: What is new in KDE 4.0? Off the top of my head, the only actually new application I can think of is Dolphin, which doesn’t exactly reinvent the desktop. What makes KDE 4 so awesome is the new technologies - Phonon, Solid, Plasma, Nepomuk et al. - none of which are individually visible to the end user, but which make writing things that are really great for the end user possible. This is what GNOME is doing, too. @Ubuntu User: Whee, new graphics! You sure your handle isnt really ‘vista user’ where pretty graphics and frivolous fluff are more important than core system stability, expandability, etc etc? @Anonymous: lol, didn’t you know that desrt only has one finger? (I wont say which one! :) “telepathy - a project that needs no introduction. this is just a fantastic idea and it will make gnome kick ass in ways that we probably haven’t even realised yet. tubes!!” Yeah maybe not for you but what about those of us who don’t know WTF this is? The fact that this blog entry has been linked from OSNews doesn’t make it a press release, you dorks. To get more information follow the damn links or use google. And boo-hoo, they are reworking some internals to solve problems like the show-stopper that is gnome-vfs currently instead of adding a ton of visual junk to the screen. How horrible, I wonder how you can survive without masturbating over cluttered translucid UIs while using Gnome. Oh right, you don’t, you are just trolling. boo-ga-ga =). - 9 Trackbacks [...] what a day, desrt’s post on cool new gnome stuff, Alex’s crackful-but-very-cool glick, and Alp and George’s Webkit work, and just now [...] [...] sviluppatore di Gnome fa intendere questo almeno, desrt(ryan lortie) in questo interessantissimo post mette a fuoco le carateristiche che si stanno sviluppando intorno al nostro amato [...] [...] excited. these are the projects that are actively improving the future of the gnome desktop.”read more | digg story These icons link to social bookmarking sites where readers can share and discover new [...] [...] Read more at desrt blog [...] [...] (nogen har taget et initiativ) - At leve som selvstændig mac-supporter - Gnomes fremtid - fra 4 til 30GB idisk - i et hug! Nu er der igen retfærdighed til NOT - Der er ikke nogen dansk [...] [...] [...] [...] gute Wünsche für die kommenden Versionen! Wer seine Euphorie nocht verstärken möchte, kann diesen Blogeintrag von Jemandem lesen, der vor einem halben Jahr Gnomes Entwicklungskonzept noch mit bösen Worten [...] [...] applicazioni user-friendly d’uso comune.Il [...] [...] I think i was very wrong.” Ryan Lortie [...]
http://blogs.gnome.org/desrt/2007/08/07/im-excited-about-the-future-of-gnome/
crawl-002
refinedweb
2,448
71.55
mahoneyPython Development Techdegree Student 2,536 Points Honestly super lost on this one. Time Delta Hell! Not sure where to go after defining "minutes". I will for sure have to review the "Dates and Times in Python" videos... not getting it import datetime def minutes(datetime.datetime(), datetime.datetime()) 1 Answer Chris FreemanTreehouse Moderator 67,736 Points Hey matthew mahoney, let’s see if we can break it down. The challenge asks: - Write a function named minutes that takes two datetimes Using date1 and date2 as the parameters, the signature would look like: def minutes(date1, date2): - use timedelta.total_seconds() to get the number of seconds between them A timedelta object is automatically returned as the result of subtracting two datetime objects. Using the .total_seconds() method on this result will give the number of seconds the two datetime objects differ. - return the number of rounded minutes Straightforward math here converting from seconds to minutes, then round off to an integer. - hint: The first will always be older and the second newer. You'll need to subtract the first from the second. datetime objects relate to posix time (the member of seconds since 1/1/1970. Older times (farther in the past) are lower numbers since they’re closer to 1/1/1970. Post back if you need more help. Good luck!!! matthew mahoneyPython Development Techdegree Student 2,536 Points matthew mahoneyPython Development Techdegree Student 2,536 Points Thanks! Brilliant help!
https://teamtreehouse.com/community/honestly-super-lost-on-this-one-time-delta-hell
CC-MAIN-2021-49
refinedweb
240
65.73
It'll be no secret to anyone reading this that there has been a lot of discussion recently about a CPAN equivalent for Python, sparked by Guido's "People want CPAN" post to the python-distutils-devel list. I read a good chunk of the thread back in November and have been meaning to add my blurb since. Take it with a grain of salt, though. My knowledge of the language is weak, my knowledge of the community is weaker. I'm from the Perl crowd[1] and would simply like to share my late comer's view of how the CPAN came about.. Everything else is built on this flexible foundation and has grown over time. The CPAN specifically does NOT have an official web service or any kind of development platform. Apart from the directory structure of the CPAN, the only other key ingredient was the "Perl Authors Upload SErver" (PAUSE) that handles credentials of the authors, permissions for namespaces, and serves as a single entry-point for uploads to the CPAN. PAUSE scans incoming distributions for meta information and generates in index of modules (namespaces/classes) and distributions that is itself distributed via the simple CPAN mechanism. Let me repeat: Everything else is just sugar on top. Specifically, everything else is sugar provided by *third parties*. Andreas König wrote and still maintains PAUSE and the CPAN.pm client[3]. Later, Jos Boumans set out to write the CPANPLUS client. Graham Barr wrote and still maintains the search.cpan.org website. Randy Kobes wrote the similar but equally non-official kobesearch.cpan.org. Many other significant pieces of the infrastructure[4] have been written and are maintained by other people who did cooperate with each other but never had to. By virtue of the simple design and the (somewhat limited) published meta-data in form of the simple module index, everyone had the opportunity to write tools that interface with it. There was no need to "get it all right" from the get-go. Things evolved and we now have a best-of-breed. The various services by various people are loosely intertwined[5]. There is also very little regulation on what is uploaded to CPAN, but curiously little abuse. I think that is because the majority of people who are willing to share their work with others free of charge aren't the type who'd want to crap on other people's front yard. If I was to set up something like CPAN today, I'd put in somewhat more design, simply because we have learned from the past fifteen years of operation. I'd specifically put in some more work on namespace and distribution-level meta data[6]. But I'd make very sure to keep it all relatively simple. By no means would I want to run any sort of elaborate web service beyond the authentication and authorization that the PAUSE provides. This eliminates some of the reliance on a very, very strong single party to provide initial implementation, hosting and maintenance for what is a very significant piece of software. My firm belief is that the second most important factor to the success of the CPAN is people. There are some individuals who have managed herculean amounts of work and have shown incredible dedication over years. But it's the combination of a lot of people's work that is more than its sum. I'd say the core of the CPAN toolchain gang is not that large. Depending on where you draw the line, there are maybe 10-50 of them. Not all, but many of these people have known each other personally for years from attending the many YAPCs and Perl workshops. Reading the "People want CPAN" thread, it seemed to me that folks are fighting each other quite a bit and not always only on technical grounds. On the other hand, it's hard to imagine fighting with somebody as friendly and welcoming as, for example, Andreas König. Meeting in person helps this tremendously. Discover the common goals and agree on the means over a beer. I'm certain that is happening in the Python world. But looking at the prices of, say, attending PyCon, I wonder whether such an event can have the same spirit and community building effect as a YAPC or even the smaller workshops[7]. So if anyone was to accept a single concrete suggestion from me, it'd be: "Enable the right heads to get together over a beer." It is the bonds between individuals that can make it all work well. The success of the CPAN is due to cultural aspects at least as much as due to its design and technical merits. Please keep in mind that this is only my personal opinion. Thanks for reading. I hope this can add some perspective. Best regards, Steffen Mueller PS: Please keep me in CC of relevant discussion. I'm not subscribed to this list. [1] So why do I think I have anything to contribute to this discussion? I'm a regular contributor to the CPAN and a very modest contributor to the perl core distribution. I maintain over a hundred Perl modules, am one of the bunch of PAUSE admins who try to keep the chaos sane, and have been involved in Perl & CPAN toolchain maintenance to some degree. [2] You could argue that having a CTAN to take inspiration from helped, too, but I'm too young to know the exact stage of development of the CTAN in 1995. [3] Recently, the tireless David Golden has been doing a lot maintenance, too. [4] As another example, I'd like to point out the CPAN testers as one of the most important bits of the CPAN infrastructure. The setup is as simple as that of CPAN itself, but to some degree, it hasn't aged as well. It its core, the CPAN testers is just a mailing list to which anybody can send test reports. This is done by volunteers who spend their and their computers' time on testing CPAN distributions, but anybody can easily set up their CPAN client to send test reports while installing modules. The wealth of reports (I think we hit 6 million recently) has become a strain on the software that runs the mailing list archive. People are working on modernization of the setup. The example demonstrates how decentralization works well. There is no *official* cluster of computers that automatically tests new submissions. It's all volunteers who test on their hardware (usually automatically). [5] Does that ring a bell? Sounds a little "web 2.0" with less polish to me. From fifteen years ago. [6] There is a working group for better meta information in CPAN distributions, initiated and carried forward by David Golden. Most of the discussion is done. There is a consensus on a large fraction of the proposals. A draft specification is forthcoming. Implementation in the CPAN toolchain will likely follow the draft shortly and will be part of perl 5 release 14. (Release 12 is already feature-frozen.) [7] It's not that long ago that I was a student. $225 student rate for attending a conference? Wouldn't have been able to afford it. The standard individual rate is also ~3.5 times as high for PyCon as it is for YAPC. I think that's a harmful barrier to entry.
https://mail.python.org/pipermail/distutils-sig/2009-December/014977.html
CC-MAIN-2018-05
refinedweb
1,239
72.76
Writing my first SAM project. For some time I have been trying to find the way to setup the ADC. But I had a hard time to find the right way to do it. Googling results in many hits that I think is outdated. I have a D20 explained board so I tried to create example projects but still it wasn't easy. Finally I found the definition of the adc_config struct, down in the adc_feature.h. I was browsing around everythere si I can't even say how I ended up there. So, to avoid similar problem next time, what is the normal/best way in the documentation/help files to find those structures and how to use them? I have understand that just as XMEGA such structures are the standard methos for storing settings for peripherals. Hej! There is not really a single right way. The only definitely wrong way is if you end up recreating the register definitions yourself. If you create a plain project ("GCC C executable project") you get these from, e.g., ...Atmel\Studio\7.0\Packs\Atmel\SAMD20_DFP\1.2.91\samd20\include\instance\adc.h (register definitions) ...Atmel\Studio\7.0\Packs\Atmel\SAMD20_DFP\1.2.91\samd20\include\component\adc.h (register structure mapping) In a ASF 3 project (which is what you get if you make an example like "Quick Start for the SAM ADC driver...") you have the same files included in the project under src\ASF\sam0\utils\cmsis. It is certainly possible to write code just based on these cmsis files and the data sheet (basically making your own drivers). But if you have a ASF 3 example or when starting from scratch with a ASF project you will find API documentation linked from both ASF Wizard and ASF Explorer. The newer framework ASF 4 (atmel start) at has docs linked from the Help menu. Finally there is never any real need to blindly browse looking for definitions, select a symbol and do Alt+G (or goto definition from the right click menu) or Ctrl+Shift+F (find in files). /Lars Top - Log in or register to post comments I'm with Lajon. Use the search facilities of the IDE you use. Any decent IDE has search facilities to search through all project files. and can also follow the execution path from function calls into function bodies etc. Every now and then, spend some extra time in learning how to use your IDE effectively. Some very usefull features are not always immediately obvious. Don't be afraid to read the manual. Doing magic with a USD 7 Logic Analyser: Bunch of old projects with AVR's: Top - Log in or register to post comments It's just that in the Atmel Start Example Projects I have created none of them have used this structure. I was looking for the best or normal method for setting up the ADC for my needs and I think that the structure is what I prefer. For example the "ADC and Power Optimization Solution 1 ADC conversion" goes directly to the registers. By the way, my first post in years and someone from Linköping answers! My favorites: 1. My oscilloscope, Yokogawa DLM2024. 2. My soldering iron, Weller WD2M, WMRP+WMRT. 3. JTAGICE3 debugger. Top - Log in or register to post comments Thanks but I do not think this is an IDE issue. I know how to find my way through the definitions using alt-G and so on but I couldn't find how to set up the ADC. If I just add the ADC driver in an empty project there is no example included and I had problems finding and help of usage of the mentioned structure in the help files. If I open an example project I not seen any who uses that structure. Working directly on the registers is an option but I am used to the XMEGA structure principle and I have understood that the same is standard in CMSIS as well. My favorites: 1. My oscilloscope, Yokogawa DLM2024. 2. My soldering iron, Weller WD2M, WMRP+WMRT. 3. JTAGICE3 debugger. Top - Log in or register to post comments OK, I think this is worth another comment for other beginners. I have a code that compiles now. That's good! I used the adc_config struct which can be found in adc_feature.h How to use it can half-way be figured out by just reading through all settings. What I didn't know and failed to find anywhere was that I should start with adc_get_config_defaults(&config_adc). I suppose it must be somewhere in Atmels documentation but that and everything else of how to use it was something I found in this forum. There are basically two places in the documentation I tried to use but failed: 1. Modules/SAM Analog-to-Digital Converter (ADC) Driver. 2. Related Pages/Quickstart guide for SAM ADC driver. 1st refer to a struct adc_module type which consist of a *const module_inst. That in turn leads to adc_setup which leads to adc_init. There, finally I find the adc_config struct but there it stops. I can't find anything about it in the ASF documentation. There isn't even any match if I search for it. 2nd refers to specific functions for each and every of the ADC settings. Maybe I am looking in the wrong place but for me this was very complicated and I am worried that I will have the same type of problem for each driver I want to use. The resulting code that I used is very simple and easy to understand so no problem there. If it works, I haven't tested it yet. This is how my ADC setup looks like: #include <adc.h> struct adc_config config_adc; adc_get_config_defaults(&config_adc); config_adc.clock_prescaler = ADC_CLOCK_PRESCALER_DIV8; config_adc.positive_input = ADC_POSITIVE_INPUT_PIN13; config_adc.reference = ADC_REFERENCE_INTVCC0; config_adc.resolution = ADC_RESOLUTION_12BIT; My favorites: 1. My oscilloscope, Yokogawa DLM2024. 2. My soldering iron, Weller WD2M, WMRP+WMRT. 3. JTAGICE3 debugger. Top - Log in or register to post comments
https://www.avrfreaks.net/forum/finding-my-way-adcconfig-struct
CC-MAIN-2019-26
refinedweb
1,009
65.93
Arduino Projects | Raspberry Pi | Electronic Circuits | AVR | PIC | 8051 | Electronic Projects The scrolling display boards are the most attractive among all kind of display boards. They are widely used in advertisements, used in public transport vehicles, used as information boards in railway station, airport etc. They are commonly made of LEDs or LCD screen and are usually connected to a computer or a simple microcontroller which can send the data to the screen. The microcontroller can send data to the screen using its serial port. The data could be saved in the microcontroller itself or it is received from a PC. The serial communication port is the one of the most effective communication method available with a microcontroller. The serial port of the microcontroller provides the easiest way by which the user PC and the microcontroller can write their data in the same medium and both can read each other’s data. A normal LCD module found in the embedded system devices can be made as a scrolling display. It can respond to built-in scrolling commands which make the LCD scrolling possible. It is possible to connect the serial port of the PC with the LCD module through the Arduino board. In such a system the user can send the data from the PC to the Arduino’s serial port using software running in the PC, and can view the same data in the LCD module connected to the Arduino board and that in a scrolling manner if the necessary statements are written in the code.. This project uses function for accessing LCD module available from the library <LiquidCrystal.h> as explained in the previous project on how to interface the 4 bit LCD with Arduino board. The serial communication functions are also used in this project which is already discussed in a previous project on how to receive and send serial data using Arduino. Apart from those functions this project make use of two more functions for the LCD namely lcd.write() and lcd.autoscroll() the details of them are discussed in the following section. lcd.write() Thelcd.write() also a function which is used to print the data to the LCD screen like the lcd.print() function does. Unlike the lcd.print() function the lcd.write() function directly write the value to the LCD screen without formatting it as ASCII. This function is analogues to the Serial.write() function discussed in the project how to receive and send serial data using Arduino. lcd.autoscroll() This function is called after initializing the LCD module using the lcd.begin() function and after setting the cursor position using the lcd.setCursor() function. This function make the scrolling possible by shifting the current displayed data to either side of the LCD as each characters are written into the LCD screen. THE CODE The code initializes the LCD library as per the connection of the LCD with the Arduino board using the function LiquidCrystallcd(). The function lcd.begin() is used to initialize the LCD module in four bit mode with two line display. The lcd.setCursor() function is used to set the cursor at the 16th position of the second line of the 16*2 LCD from where the scrolling is supposed to start. The serial port is initialized with 9600 baud rate using the function Serial.begin(). The function Serial.available() is used in the code to check whether a serial data is available to read or not. The Serial.read() function is used to read the serial data coming from the PC which is then stored to a variable and is displayed on the LCD using the function lcd.write(). // include the library code: #include <LiquidCrystal.h> // initialize the library with the numbers of the interface pins LiquidCrystallcd(12, 11, 5, 4, 3, 2); // give the pin a name: int led = 6; // incoming byte charinByte; void setup() { // initialize the led pin as an output. pinMode(led, OUTPUT); // set up the LCD's number of columns and rows: lcd.begin(16, 2); // initialize the serial communications: Serial.begin(9600); // set the cursor to (16,1): lcd.setCursor(16,1); // set the display to automatically scroll: lcd.autoscroll(); } void loop() // if we get a valid byte, read analog ins: if(Serial.available()) { // get incoming byte: inByte = Serial.read(); // send the same character to the LCD lcd.write(inByte); // glow the LED digitalWrite(led, HIGH); delay(200); } else digitalWrite(led, LOW); Whenever a key is pressed in the keyboard the code receives the data byte and sends the same byte to display it in the LCD screen after shifting the current display to one side.
https://www.engineersgarage.com/embedded/arduino/how-to-make-lcd-scrolling-display-using-arduino
CC-MAIN-2019-04
refinedweb
772
63.49
I'm making my first foray into a sound engine and I'm using SlimDX in order to allow me access to DirectX in C#. I dug around and found a couple of tutorials which got me started and when playing a file from a file, everything is fine. However I also want to look at playing a file from a memory stream (loaded with the contents of a file, stored in a byte array) as I don't want to have to access the hard drive for playing sounds in quick succession. In doing so I appear to have stumbled across an issue with sound corruption. I'm able to reproduce it 100% of the time on both my home machine and my work machine (both Windows 7) but it only happens with certain files and in a certain order. In this particular case, see the attached file, sounds.zip which contains two files, klaxon2.wav (sounds like the red alert sound from star trek) and warn2.wav (the word warning repeated three times). The easiest way to describe the issue is to demonstrate the code I'm using to load and play, and the test procedure. I've boiled down the code from my main project to a fairly simple test class: public class SoundPlayer { private XAudio2 m_device = new XAudio2(); private SourceVoice m_sourceVoice = null; public SoundPlayer() { MasteringVoice masteringVoice = new MasteringVoice(m_device); } public void Load(string fileName) { WaveStream waveStream = new WaveStream(fileName); Load2(string fileName) { byte[] fileData = File.ReadAllBytes(fileName); MemoryStream ms = new MemoryStream(fileData); WaveStream waveStream = new WaveStream(ms); Play() { Thread t = new Thread(PlayMethod); t.Start(); } private void PlayMethod() { m_sourceVoice.Start(); while (m_sourceVoice.State.BuffersQueued > 0) { Thread.Sleep(10); } m_sourceVoice.Dispose(); m_device.Dispose(); } } So the idea is load the sound clip into a class and call play. It will play itself in a thread and finish. Please keep in mind that this is a very bare bones example from what's in my main project. Things like proper thread management and object clean-up were not a priority, just reproducing the issue in order to post it here for help (though I'm certainly open to pointers and advice... first timer with sounds after all). Anyway, stick this class in a project somewhere and, using the attached sounds, use the following code to run it: SoundPlayer p1 = new SoundPlayer(); SoundPlayer p2 = new SoundPlayer(); SoundPlayer p3 = new SoundPlayer(); p1.Load2(@"..\..\klaxon2.wav"); p2.Load2(@"..\..\klaxon2.wav"); p3.Load2(@"..\..\warn2.wav"); p1.Play();(NOTE: Replace the path with whatever directory you unzipped the files in.) What seems to happen is that when p1 is played, it plays a slowed down version of warn2 until warn2 is finished, then plays the rest of klaxon2 until klaxon2 is finished. If I were to use the Load method instead of Load2, it would work fine. If I were to not load two copies of klaxon2, it would work fine. If I were to load two copies of klaxon2 but no warn2, it would work fine. In my tests I've found a few other examples of the problem but for the most part, it's an issue that's hard to reproduce. When you do find a way to reproduce it you can always reproduce it. If that makes any sense... bleh. As you can see, the way in which I'm loading the stream isn't terribly complex, so I'm not sure what I could be doing wrong. The file data is copied out of the file and put into a new memory stream every time so there shouldn't be any corruption. Can anybody see anything wrong with what I'm doing? Note that I'm certainly not set on this method of playback either... if there was an alternative, I'm interested. I'm also curious as to whether or not it's possible there is an issue with SlimDX, and how one might go about reporting the issue if this is the case. Again, any help is appreciated as this one has me banging my head on the keyboard. Thank you!!
http://www.gamedev.net/topic/618436-corruption-when-playing-sound-via-memory-stream-xaudio2/
CC-MAIN-2015-40
refinedweb
684
70.53
Merge Sort Today, I will describe a new sorting algorithm: Merge Sort, a divide-and-conquer algorithm invented by John von Neumann in 1945. The general idea of merge sort is: when you solve a problem, if you can split the problem into two halves, solve each of them, one at a time (which is most likely easier because the problem in each half gets smaller.) And most importantly, if we have the answer for both halves, there is a simple and easy way to MERGE these two results into the result of the big problem. First, we notice that when we have 2 sorted arrays, we can easily merge these two sorted arrays into one big sorted array. Here is a simple procedure: We simply just need to compare the first element at each of the 2 arrays. We will remove the smaller element and put it into a separate array. Continue until both arrays are empty. Code Here is the pseudo-code for merging 2 sorted arrays into one: def merge_2_arr(a1,a2): final = [] l1 = len(a1) l2 = len(a2) h1 = 0 h2 = 0 while h1 < l1 and h2 < l2: if a1[h1] < a2[h2]: final.append(a1[h1]) h1 += 1 else: final.append(a2[h2]) h2 += 1 if h1 < l1: for i in range(h1,l1): final.append(a1[i]) elif h2 < l2: for i in range(h2,l2): final.append(a2[i]) return final Now we know how to merge 2 sorted arrays into a big, sorted one. The idea of Merge Sort is to split the array into 2 halves, sort each half and merge them into one sorted array. The question is how to sort each half. We notice that we have the same sorting problem. The only difference is that the sorting size is half of the original. But we still do not know how to sort a small array. The only exception is when the size of the array is 1. It contains a single number and, of course, sorted. Ah-ha! We can continue to divide the array until the array’s size is 1. Here’s the code: def merge_sort(a): if len(a) == 1: return a middle = int(len(a)/2) a1 = merge_sort(a[0:middle]) a2 = merge_sort(a[middle:]) final = merge_2_arr(a1,a2) return final Complexity Analysis We want to find how many operations this algorithm takes to sort an array of length n. We will cover this in the next write-up. See you soon!!!
https://blue-dolphin.medium.com/merge-sort-e34b160be3ee?source=post_internal_links---------2----------------------------
CC-MAIN-2022-33
refinedweb
418
71.75
Popularity 5.5 Declining Activity 0.0 Stable 25 4 20 Monthly Downloads: 2,225 Programming language: Elixir License: Apache License 2.0 Tags: Networking download alternatives and similar packages Based on the "Networking" category ejabberd10.0 9.2 download VS ejabberdRobust, ubiquitous and massively scalable Jabber/XMPP Instant Messaging platform. socket9.4 0.8 download VS socketSocket wrapping for Elixir. ExIrc7.6 0.0 download VS ExIrcIRC client adapter for Elixir projects. sshkit7.2 4.4 download VS sshkitAn Elixir toolkit for performing tasks on one or more servers, built on top of Erlang’s SSH application. sshex7.2 2.0 download VS sshexSimple SSH helpers for Elixir. slacker6.8 0.0 download VS slackerA bot library for the Slack chat service. hedwig6.7 0.0 download VS hedwigXMPP Client/Bot Framework for Elixir. reagent6.5 0.0 download VS reagentreagent is a socket acceptor pool for Elixir. kaguya5.9 1.0 download VS kaguyaA small, powerful, and modular IRC bot. yocingo5.2 0.0 download VS yocingoCreate your own Telegram Bot. SftpEx5.1 0.0 download VS SftpExElixir library for streaming data through SFTP ExPcap4.7 0.0 download VS ExPcapPCAP parser written in Elixir. wifi4.5 0.0 download VS wifiVarious utility functions for working with the local Wifi network in Elixir. chatty4.4 0.0 download VS chattyA basic IRC client that is most useful for writing a bot. eio3.7 0.0 download VS eioElixir server of engine.io. chatter3.6 0.0 download VS chatterSecure message broadcasting based on a mixture of UDP multicast and TCP. Guri3.1 0.0 download VS GuriAutomate tasks using chat messages. tunnerl2.8 0.0 download VS tunnerlSOCKS4 and SOCKS5 proxy server. hades2.2 3.8 download VS hadesA wrapper for NMAP written in Elixir. torex1.8 1.1 download VS torexSimple Tor connection library. asn1.7 0.0 download VS asnCan be used to map from IP to AS to ASN. pool1.5 0.0 download VS poolSocket acceptor pool for Elixir. mac1.4 0.0 download VS macCan be used to find a vendor of a MAC given in hexadecimal string (according to IEEE). wpa_supplicant1.0 0.0 L4 download download or a related project? Popular Comparisons README download Simply downloads remote file and stores it in the filesystem. Download.from(url, options) Features - Small RAM consumption - Ability to limit downloaded file size - Uses httpoison Installation def deps do [{:download, "~> x.x.x"}] end Into mix.exs def application do [applications: [:download]] end
https://elixir.libhunt.com/download-alternatives
CC-MAIN-2020-34
refinedweb
417
59.8
There are many good reasons to spawn other programs in Linux to do your bidding, but most programming languages don’t give you nearly as much control of the process as in C. In this post I will cover some of the most common ways to create new processes and manage them in C on a Linux system. Grab a Fork The classic fork() function is the most popular way to create a new process. It works by duplicating the process that calls it. It may sound complicated but it’s a fairly simple system. #include <unistd.h> pid_t fork(void); #include <unistd.h> #include <stdio.h> int main(int argc, char *argv[]) { pid_t fork_pid = fork(); if (fork_pid == 0) { printf("Hello from the child!\n"); } else { printf("Hello from the parent!\n"); } return 0; } $ ./forktest Hello from the parent! Hello from the child! When a process calls fork(), Linux will duplicate that current process. The value returned by fork() will be 0 in the child process. In the parent process fork() will return the PID (Process ID) of the new child process or -1 if some error occurs. Replacing a Running Process After creating a new process, it’s common to replace that child process with an entirely different program. The exec() family of functions can handle this for us. The execl() is the simplest method. #include <unistd.h> int execl(const char *filename, const char *arg, ...); All you need to provide is the location of the file to load, and the arguments you’d like to provide to it. Just like for any normal process, the first process is the process name. #include <unistd.h> int main(int argc, char *argv[]) { execl("/bin/bash", "/yes/its/bash", "-c", "echo $0 && uptime", NULL); return 0; } $ ./execltest /yes/its/bash 10:11:12 up 1:07, 2 users, load average: 0.07, 0.10, 0.18 The other functions in the exec() family have various options to control arguments and environment variables for the new process that takes over. Playing with the Kids After you have wee ‘lil child processes, you’ll probably want to make sure they are doing as they are told. After you’ve sent the child process off to do its chores, you can use the wait() function to see what it returns with after it’s done. #include <sys/wait.h> pid_t wait(int *status); When you call wait(), it will block the parent process until any of it’s children processes change state. The status pointer can be used if you’re interested in knowing what kind of state change has occurred, to determine if the program exited normally, or if it was terminated by a control signal. The return value of wait() is the child PID that has changed states. #include <unistd.h> #include <stdio.h> #include <sys/wait.h> int main(int argc, char *argv[]) { pid_t child = fork(); if (child) { wait(NULL); printf("child process terminated\n"); } else { execl("/bin/bash", "/THECHILD", NULL); } return 0; } $ ./waitkids $ echo $0 /THECHILD $ exit exit child process terminated Check the man page for wait for more details on the various options available when waiting. Attack of the Clones The fork(), exec() and wait() families of functions are portable across POSIX compliant systems. If more fine grained control over the process creation is desired, we’ll need to use the Linux specific clone() function. I won’t cover the clone() in this post, as that’s probably more suited for a dedicated post. Regardless, I suggest perusing the man page for it to get an idea of what capabilities it offers. That covers the basics of process creation using C on Linux. In the next post I plan on covering some of the methods available to communicate between running processes. If you found this helpful or informative, or have any feedback, please leave a note in the comments!
https://www.suchprogramming.com/2018/05/
CC-MAIN-2018-51
refinedweb
650
73.17
Adding a Border to the Image using NumPy In this article, we will learn how to add/draw a border to the image. We use OpenCV to read the image. The rest of the part is handled by NumPy to code from scratch. We rely on it for the matrix operations that can be achieved with so much ease. There are two aspects in which we can start to think: - If the image is read in grayscale, we can simply keep the default color as black. This is because the length of the shape of the image matrix would be 2. Therefore we cannot add a color border whose color value would be of size 3 and thus it cannot be mapped easily. - If the image is read in RGB, we can have a choice to pick the color for the border. This is because the length of the shape of the image matrix would be 3. Hence we can add a color border whose color value would be of size 3 which can be mapped easily. Credits of Cover Image - Photo by Kanan Khasmammadov on Unsplash Before proceeding further, we need to make sure we have enough colors (based on the user’s choice). I have extracted the possible color values from this website. The code of the same can be seen below. The result is stored in a JSON file named color_names_data.json. import requests import json from bs4 import BeautifulSoup def extract_table(url): res = requests.get(url=url) con = res.text soup = BeautifulSoup(con, features='lxml') con_table = soup.find('table', attrs={'class' : 'color-list'}) headings = [th.get_text().lower() for th in con_table.find("tr").find_all("th")] table_rows = [headings] for row in con_table.find_all("tr")[1:]: each_row = [td.get_text().lower() for td in row.find_all("td")] table_rows.append(each_row) return table_rows col_url = "" color_rows_ = extract_table(url=col_url) color_dict = {} for co in color_rows_[1:]: color_dict[co[0]] = { 'r' : int(co[2]), 'g' : int(co[3]), 'b' : int(co[4]), 'hex' : co[1] } with open(file='color_names_data.json', mode='w') as col_json: json.dump(obj=color_dict, fp=col_json, indent=2) It is necessary to grab R, G, and B values in order for the mapping after the separation of pixels. We follow split and merge methods using NumPy. The structure of the color data can be seen below. { "air force blue": { "r": 93, "g": 138, "b": 168, "hex": "#5d8aa8" }, "alizarin crimson": { "r": 227, "g": 38, "b": 54, "hex": "#e32636" }, "almond": { "r": 239, "g": 222, "b": 205, "hex": "#efdecd" }, ... } Time to Code The packages that we mainly use are: - NumPy - Matplotlib - OpenCV adding/drawing borders around the image, the important arguments can be named as below: image_file→ Image file location or image name if the image is stored in the same directory. bt→ Border thickness color_name→ By default it takes 0 (black color). Otherwise, any color name can be taken. We use the method copyMakeBorder() available in the library OpenCV that is used to create a new bordered image with a specified thickness. In the code, we make sure to convert the color name into values from the color data we collected earlier. The below function works for both RGB image and grayscale image as expected. def add_border(image_file, bt=5, with_plot=False, gray_scale=False, color_name=0): image_src = read_this(image_file=image_file, gray_scale=gray_scale) if gray_scale: color_name = 0 value = [color_name for i in range(3)] else: color_name = str(color_name).strip().lower() with open(file='color_names_data.json', mode='r') as col_json: color_db = json.load(fp=col_json) colors_list = list(color_db.keys()) if color_name not in colors_list: value = [0, 0, 0] else: value = [color_db[color_name][i] for i in 'rgb'] image_b = cv2.copyMakeBorder(image_src, bt, bt, bt, bt, cv2.BORDER_CONSTANT, value=value) — add_border(image_file='lena_original.png', with_plot=True, color_name='green') add_border(image_file='lena_original.png', with_plot=True, gray_scale=True, color_name='pink') We can clearly see the borders are been added/drawn. For the grayscale image, though we have mentioned pink, a black border is drawn. Code Implementation from Scratch When we talk about the border, it is basically a constant pixel value of one color around the entire image. It is important to take note of the thickness of the border to be able to see. Considering the thickness we should append a constant value around the image that matches the thickness level. In order to do so, we can use the pad() method available in the library NumPy. This method appends a constant value that matches the level of the pad_width argument which is mentioned. Example import numpy as np m = [[1, 2, 3], [4, 5, 6], [7, 8, 9]] m = np.array(m) m array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) >>> pm = np.pad(array=m, pad_width=1, mode='constant', constant_values=12) pm array([[12, 12, 12, 12, 12], [12, 1, 2, 3, 12], [12, 4, 5, 6, 12], [12, 7, 8, 9, 12], [12, 12, 12, 12, 12]]) >>> pmm = np.pad(array=m, pad_width=2, mode='constant', constant_values=24) pmm array([[24, 24, 24, 24, 24, 24, 24], [24, 24, 24, 24, 24, 24, 24], [24, 24, 1, 2, 3, 24, 24], [24, 24, 4, 5, 6, 24, 24], [24, 24, 7, 8, 9, 24, 24], [24, 24, 24, 24, 24, 24, 24], [24, 24, 24, 24, 24, 24, 24]]) >>> More examples can be found in the documentation. We just need to change the constant_values argument by taking the actual color value. - For a grayscale image, we simply pad the image matrix with black color i.e., 0. - For the RGB image, we are to grab the color value from the data collected, split the image into 3 matrices, and pad each matrix with each color value. The below function would explain the flow more clearly. The below function would explain the flow more clearly. def draw_border(image_file, bt=5, with_plot=False, gray_scale=False, color_name=0): image_src = read_this(image_file=image_file, gray_scale=gray_scale) if gray_scale: color_name = 0 image_b = np.pad(array=image_src, pad_width=bt, mode='constant', constant_values=color_name) cmap_val = 'gray' else: color_name = str(color_name).strip().lower() with open(file='color_names_data.json', mode='r') as col_json: color_db = json.load(fp=col_json) colors_list = list(color_db.keys()) if color_name not in colors_list: r_cons, g_cons, b_cons = [0, 0, 0] else: r_cons, g_cons, b_cons = [color_db[color_name][i] for i in 'rgb'] r_, g_, b_ = image_src[:, :, 0], image_src[:, :, 1], image_src[:, :, 2] rb = np.pad(array=r_, pad_width=bt, mode='constant', constant_values=r_cons) gb = np.pad(array=g_, pad_width=bt, mode='constant', constant_values=g_cons) bb = np.pad(array=b_, pad_width=bt, mode='constant', constant_values=b_cons) image_b = np.dstack(tup=(rb, gb, bb)) cmap_val = None if with_plot: — draw_border(image_file='lena_original.png', with_plot=True, color_name='cyan') draw_border(image_file='lena_original.png', bt=10, with_plot=True, gray_scale=True) Yay! We did it. We coded the entire thing including color choice completely from scratch except for the part where we read the image file. We relied mostly on NumPy as it is very fast in computing matrix operations (We could have done it with for loops if we wanted our code to execute very slow). Personally, this was a great learning for me. I am starting to think about how difficult and fun that would be for the people who actually work on open source libraries. You should definitely check out my other articles on the same subject in my profile. If you liked it, you can buy coffee for me from here.
https://msameeruddin.hashnode.dev/adding-a-border-to-the-image-using-numpy?guid=none&deviceId=5cc3c497-5647-4db2-85aa-5b2b8fe8fd37
CC-MAIN-2021-10
refinedweb
1,229
57.47
bc(1) bc(1) NAME bc - arbitrary-precision arithmetic language SYNOPSIS [file...] DESCRIPTION is an interactive processor for a language that resembles C but pro‐ vides unlimited-precision arithmetic. It takes input from any files given, then reads the standard input. Options recognizes the following command-line options: Compile only. is actually a preprocessor for which invokes automati‐ cally (see dc(1)). Specifying prevents invoking dc, and sends the dc input to standard output. causes an arbitrary-precision math library to be predefined. As a side effect, the scale factor is set. Program Syntax L a single letter in the range through E expression; S statement; R relational expression. Comments Comments are enclosed in and Names Names include: simple variables: L array elements: L [ E ] stacks: L Other Operands Other operands include: Arbitrarily long numbers with optional sign and decimal point. ( E ) sqrt ( E ) length ( E ) number of significant decimal digits scale ( E ) number of digits right of dec‐ imal point L ( E , ... , E ) Strings of ASCII characters enclosed in quotes (""). Arithmetic Operators: Arithmetic operators yield an E as a result and include: is remainder (not mod, see below); is power). (prefix and append; apply to names) Relational Operators Relational operators yield an R when used as op Statements E { S ; ... ; S } if ( R ) S while ( R ) S for ( E ; R ; E ) S null statement break quit Function Definitions define L ( L ,..., L ) { auto L, ... , L S; ... S return ( E ) } Functions in −l Math Library Functions in the math library include: s(x) sine c(x) cosine e(x) exponential l(x) log a(x) arctangent j(n,x) Bessel function All function arguments are passed by value. Trigonomet‐ ric angles are in radians where 2 pi radians = 360 degrees. The value of a statement that is an expression is printed unless the main operator is an assignment. No operators are defined for strings, but the string is printed if it appears in a context where an expression result would be printed. Either semicolons or new-lines can separate statements. Assignment to scale influences the number of digits to be retained on arithmetic operations in the manner of dc(1). Assignments to or set the input and output number radix respectively, again as defined by dc(1). The same letter can be used simultaneously as an array, a function, and a simple variable. All variables are global to the program. "Auto" variables are pushed down during function calls. When using arrays as function arguments or defining them as automatic variables, empty square brackets must follow the array name. The operator yields the remainder at the current scale, not the integer modulus. Thus, at scale 1, is .1 (one tenth), not 1. This is because (at scale 1) is 2.3 with .1 as the remainder. EXAMPLES Define a function to compute an approximate value of the exponential function: Print approximate values of the exponential function of the first ten integers. WARNINGS There are currently no (AND) or (OR) comparisons. The statement must have all three expressions. is interpreted when read, not when executed. parser is not robust in the face of input errors. Some simple expression such as 2+2 helps get it back into phase. The assignment operators: and are obsolete. Any occurences of these operators cause a syntax error with the exception of which is interpreted as followed by a unary minus. Neither entire arrays nor functions can be passed as function parameters. FILES desk calculator executable program mathematical library SEE ALSO bs(1), dc(1). tutorial in STANDARDS CONFORMANCE bc(1)[top]
http://www.polarhome.com/service/man/?qf=bc&tf=2&of=HP-UX&sf=1
CC-MAIN-2020-29
refinedweb
595
57.98
In simple English, array means collection. In C# also, an array is a collection of similar types of data. For example, an array of int is a collection of integers, an array of double is a collection of doubles, etc. Suppose we need to store the marks of 50 students in a class and calculate the average marks. Declaring 50 separate variables will do the job but no programmer would like to do so. And here comes the array in action. C# Declaration and Initialization of Array The syntax for declaring an array is: type[] arrayName = new type[array_size]; For example, take an array of 6 integers n. We will declare it as: int[] n = new int[6]; [ ] is used to denote an array. int[] n means that n is an array of integers. n is the name of the array. new keyword is used to allocate space in the memory of the computer. new int[6] means that the compiler will allocate space for 6 integers in the memory. Here, 6 is the size of the array. We can also declare and initialize an array at the same time. int[] n = new int[]{2, 3, 15, 8, 48, 13}; Even this will work: int[] n = {2, 3, 15, 8, 48, 13}; In these cases, we are declaring and assigning values to the array at the same time. Hence, no need to specify the array size because the compiler already knows the elements of the array and thus the size of it. Following is the pictorial view of the array. 0, 1, 2, 3, 4 and 5 are the indices of the elements of the array. It is like these are the identities of 6 different elements of the array. Index starts from 0. So, the first element of an array has an index of 0. We access the elements of an array by writing arrayName[index]. Thus in the above example, n[0] is 2, n[1] is 3, n[2] is 15, etc. So, we have made an array of integers and then accessed its elements using arrayName[index]. Now, we can use these separate elements as different variables. Thus, we didn't have to declare 6 different variables. By writing int[] n= {2, 4, 8}; , we can declare and initialize the array at the same time. But when we declare an array like int[] n = new int[3];, we need to assign the values to it separately. Because new int[3] will definitely allocate the space of 3 integers in the memory but we haven't assigned any values to them yet. We already know that we can access the elements of an array as different variables using their indices ( arrayName[index]). So, we can easily assign values to each of the element of the array as we do with variables. n[0] = 2;, n[1] = 4; and n[2] = 8; will assign the vaules to the first, second and the third element of the array n respectively. Just like a variable, an array can be of other data types also. double[] d[] = { 1.1, 1.4, 1.5}; Here, d is an array of doubles. First, let's see the example to calculate the average of the marks of 3 students. In the example, marks[0], marks[1] and marks[2] represent the marks of the first, second and third student respectively. using System; class Test { static void Main(string[] args) { int[] marks = new int[3]; //array of 3 integers double average; //variable to store average marks Console.WriteLine("Enter marks of first student"); marks[0] = Convert.ToInt32(Console.ReadLine()); //setting value of marks[0] Console.WriteLine("Enter marks of second student"); marks[1] = Convert.ToInt32(Console.ReadLine()); Console.WriteLine("Enter marks of third student"); marks[2] = Convert.ToInt32(Console.ReadLine()); average = (marks[0] + marks[1] + marks[2])/3.0; Console.WriteLine($"Average marks : {average}"); } } Here, you have seen a working example of an array. We treated the elements of the array in a similar way as we had treated normal variables. We can get the length (size) of any array using the Length property. For example, for an array n, int[] n = new int[3], n.Length will be 3. Let's look at one more example. using System; class Test { static void Main(string[] args) { int[] n = new int[10]; /* initializing elements of array n */ for(int i=0; i<n.Length; i++) { Console.WriteLine($"Enter value of n[{i}]"); n[i] = Convert.ToInt32(Console.ReadLine()); } for(int i=0; i<n.Length; i++) { Console.WriteLine($"n[{i}] = {n[i]}"); } } } The above code was just to make you familiar with using loops with an array because you will be doing this many times later. The code is simple, i starts from 0 because the index of an array starts from 0 and goes up to 9 (for 10 elements, index of the array will also go upto 9). So, i goes up to 9 and not 10 ( i<n.Length will be i<10). So in the above code, n[i] will be n[0], n[1], n[2], ...., and n[9]. There are two for loops in the above example. In the first for loop, we are taking the values n[i] = Convert.ToInt32(Console.ReadLine());, the user will be asked to enter the value of n[0]. Similary in the second iteration, the value of i will be 1 and n[i] will be n[1]. So n[i] = Convert.ToInt32(Console.ReadLine()); will be used to input the value from the user for n[1] and so on. i will go up to 9, and so indices of the array (0, 1, 2, ..., and 9). C# Passing Array to Method In C#, we can pass an element of an array or the full array as an argument to a method. Passing an element of an array is the same as passing a normal variable. Let's see an example of doing so. using System; class Test { static void Display(int a) { Console.WriteLine(a); } static void Main(string[] args) { int[] n = {20, 30, 23, 4, 5, 2, 41, 8}; Display(n[2]); } } So, it is exactly like passing a simple variable to a method because an element of an array is just like a simple variable. C# Passing Entire Array to Method We can also pass a whole array to a method. We know that a function accepts data of particular data type when we mention the data type in the parameter of the function. For example, Display(int a) will take an integer, Abc(int a, double b) will take one integer and one double, etc. Similarly, to make a function accept arrays, we will use [] with its data type. For example, Display(int[] a) will take an array of integers. Now, we can simply pass an array as an argument to this function. Let's take an example. using System; class Test { static double Average(double[] a) //Average is taking an array of doubles { double avg, sum = 0; for(int i=0; i<a.Length; i++) { sum+= a[i]; } avg = sum/8; return avg; } static void Main(string[] args) { double[] n = {20.6, 30.8, 5.1, 67.2, 23, 2.9, 4, 8}; double b = Average(n); //passing array to method Average Console.WriteLine($"Average of numbers = {b}"); } } Average(double[] a) → It is the method that is taking an array of doubles. And the rest of the body of the method is performing accordingly. double b = Average(n) → We are passing the array n as the argument to the function Average. C# foreach Loop There is one more loop available in C# - foreach loop. It is used to iterate over an array and makes iterating over arrays easier. Let's see an example of this. using System; class Test { static void Main(string[] args) { int[] ar = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}; foreach(int m in ar) { Console.WriteLine(m); } } } This is very simple. Here, the variable m will go to every element of the array ar and will take its value. So, in the first iteration, m is the 1st element of the array ar i.e. 1. In the second iteration, it is the 2nd element i.e. 2 and so on. Just focus on the syntax of this for loop, the rest of the part is very easy. Let's take an example of finding the largest number of an array. using System; class Test { static void Main(string[] args) { int[] a = {23, 434, 234, 543, 350, -23, 133, 54, 3, -34, -124}; int max = a[0]; //setting first element of array as maximum number foreach(int i in a) { if(i > max) //if current element of array is greater than the current maximum value max = i; //changing the value of variable max } Console.WriteLine(max); } } In this example, we are first setting the first element of the array a as the maximum value - max = a[0]. After this, we are iterating over each element of the array and comparing it with the variable max. If the value is greater than the value of max, we are changing the value of max to that element of the array. Ultimatley, max will have the maximum value of all elements of the array. C# Returning Array from Method We can also return an array from a method like we return any other data type. To return an integer from a function Abc, we write static int Abc(). Similarly, to return an array from a function Def, we would write static int[] Def(). Let's look at an example. using System; class Test { static int[] NaturalNumber(int n) //method for get first n natrual numbers { int[] a = new int[n]; for(int i=0; i<a.Length; i++) { a[i] = i+1; //first natural number will be 1, second will be 2 and so on. } return a; //returning array } //function to display array static void Display(int[] a) { foreach(int i in a) Console.Write($"{i}\t"); Console.Write("\n"); } static void Main(string[] args) { int[] a = NaturalNumber(10); Display(a); } } In this example, the function NaturalNumber is returning an array - static int[] NaturalNumber(int n). Firstly, we are making an array inside this function - int[] a = new int[n]; and then setting its elements to store natural number starting from 1 - a[i] = i+1;. So, a[0] will store 1, a[1] will store 2 and so on. At last, we are returning this array - return a;. C# Params Suppose you are writing a code and you need to make a method to find the sum of a few integers but the number of integers is not fixed. We can handle these kinds of situations using the params keyword. params is used to pass a variable number of arguments to a method. All the passed arguments will be stored in an array and then we can access that array inside the method. Let's take an example where the function Abc is going to take a variable number of arguments, we would define the function as - Abc(params int[] a). Now, all the passed integers to the function will be stored in the array a and we can easily access the array a to access all the passed arguments. Let's take an example. using System; class Test { static int FindSum(params int[] a) { int sum = 0; foreach(int i in a) { sum = sum+i; } return sum; } static void Main(string[] args) { int a = FindSum(1, 2, 3, 4); int b = FindSum(1, 2, 3, 4, 5, 6, 7, 8, 9, 10); Console.WriteLine(a); Console.WriteLine(b); } } In the above example, the function FindSum is taking a variable number of arguments - (params int[] a) and we are accessing all those numbers as elements of the array a inside the function. int a = FindSum(1, 2, 3, 4); int b = FindSum(1, 2, 3, 4, 5, 6, 7, 8, 9, 10); You can see that different number of arguments in these two lines. Methods in Array There are many useful predefined methods available for us to use for an array. All these methods are inside Array class, so we wil access these methods using Array.MethodName(). Some of the important methods are: C# IndexOf IndexOf is used to get the index of any element of an array. Let's take an example. using System; class Test { static void Main(string[] args) { int[] a = {1, 2, 3, 4, 5}; Console.WriteLine(Array.IndexOf(a, 3)); } } C# Reverse Reverse reverses the elements of an array. So, the first element of an array will become the last and last will become the first. using System; class Test { static void Main(string[] args) { int[] a = {1, 2, 3, 4, 5}; Array.Reverse(a); foreach(int i in a) Console.WriteLine(i); } } C# Sort It is used to arrange elements of the array in an acesnding order. using System; class Test { static void Main(string[] args) { int[] a = {4, 1, 5, 3, 2}; Array.Sort(a); foreach(int i in a) Console.WriteLine(i); } } There are many other methods also. You can see the full list on offical documentation. C# Reference Type and Value Type Data types like int, double, etc. are value type. It means that a value is stored in the mermory of a computer. That's why we need to pass these data types by their references when we need to change their values inside a method. On the other hand, arrays, strings, classes and delegates are reference type. It means that there reference is stored in the memory and we can normally pass it to a method and any changes made in a method will be also reflected outside it. Let's take an example. using System; class Test { static void Change(int[] a) { a[0] = 1; } static void Main(string[] args) { int[] a = {4, 2, 3, 4, 5}; Change(a); foreach(var i in a) Console.WriteLine(i); } } You can see that the change made to the array inside the method is also reflecting outside it.
https://www.codesdope.com/course/c-sharp-arrays/
CC-MAIN-2022-40
refinedweb
2,364
73.27
Hey, Scripting Guy! Any time I’m connected to a wireless network I can open the Wireless Network Connection Status dialog box and see a series of bars – 1 through 5 – that indicates the signal strength. How can I get that same information using a script?-- RK Hey, RK. Before we answer today’s question we need to make one slight clarification. For the past week or so, the Scripting Guy who writes this column has noted that, thanks to the NCAA men’s basketball tournament, this is his favorite time of the year. As it turns out, he was mistaken. Instead, thanks to the Girl Scouts of America, this is his least favorite time of the year. Not that the Scripting Guy who writes this column has anything against the Girl Scouts, mind you; after all, the Girl Scouts of America is a fine, upstanding organization that does a lot of good things for both its members and for the community. It’s just that this is the time of year when the Girl Scouts hold their annual cookie sale. And that can only mean one thing: every penny that the Scripting Guy who writes this column has (along with a number of pennies that he doesn’t have) is going to end up being spent on Girl Scout cookies he doesn’t want, doesn’t need, and doesn’t even take with him. (The Girl Scouts give you the option of donating the cookies to troops serving overseas.) The Scripting Guy who writes this column is always a sucker for kids selling things, but in years past it was a little easier; that’s because the Girl Scouts only sold their cookies door-to-door. That meant that, sooner or later, a Girl Scout would show up on your doorstep, you’d buy a bunch of cookies, and then you’d be home free for the rest of the year. But now the Girl Scouts don’t bother with door-to-door sales; instead, they stake out the entrances to grocery stores, drug stores, shopping centers and other places that you simply can’t avoid. (Editor’s Note: We’re not sure where the Scripting Guy who writes this column has been for most of his life, because the Scripting Editor distinctly remembers standing outside a bank for several hours selling Girl Scout cookies, and that was…several…years ago.) And, truth be told, they get the Scripting Guy who writes this column every single time. On one fateful occasion this Scripting Guy entered the local grocery store through door A, buying a bunch of cookies along the way. Without even thinking he then exited the store through door B, where a different pair of Girl Scouts were selling cookies. And yes, he bought cookies from them, too. Talk about getting you coming or going. But don’t blame the Scripting Guy who writes this column; instead, this is all the Girl Scouts’ fault. After all, somewhere in this world there’s a 17-year-old Girl Scout who spends her time smoking cigarettes, listening to her iPod, and text-messaging her friends. The Scripting Guy who writes this column might be able to walk past someone like that without giving her a second thought. But you don’t see girls like that selling Girl Scout cookies. No, the Girl Scouts who sell the cookies are all tiny little 7- and 8-year-olds, with huge brown eyes and tremulous little voices, and they’re just so cute and so shy when they ask if you’d like to buy some cookies that – well, you can figure out the rest for yourself. It’s not fair! Worst of all, the Scripting Guy who writes this column needs to stop at the store on his way home tonight. (Yes, the smart thing to do would be to stock up on food and other supplies ahead of time, and thus avoid the stores during the cookie sale. But it’s been so long since the Scripting Guy who writes this column has done the smart thing that he no longer even knows how to go about doing something like that.) That means that this Scripting Guy is going to need plenty of money tonight, and the only way for him to make some money is to come up with a script that can report the strength of a wireless network connection. You know, something along these lines: strComputer = "." Set objWMIService = GetObject("winmgmts:\\" & strComputer & "\root\wmi") Set colItems = objWMIService.ExecQuery("Select * From MSNdis_80211_ReceivedSignalStrength") For Each objItem in colItems intStrength = objItem.NDIS80211ReceivedSignalStrength Wscript.Echo objItem.InstanceName & " -- " & strBars Note: Unfortunately, this script does not work on Windows Vista. How do you get this information on Windows Vista? Well, we’re not sure about that. But for those of you who haven’t upgraded yet, read on for an explanation of the pre-Windows Vista solution. Let’s see if we can figure out how this script works. (After all, if we stall long enough maybe the Girl Scouts will go home and go to bed before we get to the store.) We begin by connecting to the WMI service on the local computer (although we could also run this script against a remote computer). Note, however, that with this script we don’t connect to the root\cimv2 namespace; instead, we make a connection to the root\wmi namespace: Set objWMIService = GetObject("winmgmts:\\" & strComputer & "\root\wmi") After making the connection we then use this line of code to retrieve all the instances of the delightfully-named MSNdis_80211_ReceivedSignalStrength class: Set colItems = objWMIService.ExecQuery("Select * From MSNdis_80211_ReceivedSignalStrength") That’s going to give us back a collection of all our wireless connections and their signal strengths. Once that’s done our next step is to set up a For Each loop to loop through all the items in that collection; inside that loop we use this line of code to assign the value of the NDIS80211ReceivedSignalStrength property to a variable named intStrength: intStrength = objItem.NDIS80211ReceivedSignalStrength This is where things start to get a tiny bit weird. As it turns out, the NDIS80211ReceivedSignalStrength property is going to be a negative value (e.g., -55). To be perfectly honest, we ran into a lot of problems when we tried to find out exactly what that value means and how it is derived. However, we did find a couple of references that provided at least a rough approximation of how these values translate to the signal strength bars found in the Wireless Network Connection Status dialog box. Based on those suggested values (and your values might vary slightly), we decided to next use the following If-Then block to assign a value to a variable named strBars: It should be fairly clear what we’re doing here: we’re simply taking action based on the value of the variable intStrength. For example, suppose intStrength is greater than -57. (And, of course, -55 is greater than -57; don’t let those negative numbers throw you off.) If intStrength is greater than -57 we assign the value 5 bars to the variable strBars and then exit the code block. If intStrength isn’t greater than -57 we then check to see if the value happens to be greater than -68. If it is, then strBars is assigned the value 4 bars and we then exit the code block. Etc., etc. When we’re done we echo back the value of both the InstanceName property and the variable strBars, then loop around and repeat the process with the next wireless connection. The one drawback to this script is that you’re likely to end up with multiple reports for a single wireless connection. For example: Broadcom 802.11a/b/g WLAN -- 5 Bars Broadcom 802.11a/b/g WLAN - Virtual Machine Network Services Driver -- 5 Bars Broadcom 802.11a/b/g WLAN - Packet Scheduler Miniport -- 5 Bars That’s due to the fact that WMI considers both physical network adapters and virtual network adapters to be one and the same. If you wanted to, you could write code that limits the returned data to “real” network adapters (perhaps by first getting a list of IP-enabled adapters from the Win32_NetworkAdapterConfiguration class). We didn’t see any need to go to all that trouble, but if anyone would like to see an example of how you might go about doing that just let us know. As for the Girl Scouts of America, the Scripting Guy who writes this column has decided that enough is enough: come this time next year he’s just going to have his paycheck direct-deposited to the Girl Scouts. That should save everyone a lot of time and a lot of trouble. And, for once at least, his money will actually be put to good use. I need to know the name SSID which a machine is conected by wifi for later run other script if the name SSID is correct. I have this script that work well for windows xp but not for windows 7 64 bits. do you know why doesn't work under windows 7? Private Sub GetWMI(WMIArray, WMIQuery) Set WMIClass = GetObject("winmgmts:{impersonationLevel=impersonate}!\\.\root\wmi") If not(WMIClass is nothing) Then Set WMIArray = WMIClass.ExecQuery(WMIQuery) end if End Sub Function SSID() Call GetWMI(objMSNdis_80211_ServiceSetIdentifierSet, "Select * from MSNdis_80211_ServiceSetIdentifier Where active=true") For Each objMSNdis_80211_ServiceSetIdentifier in objMSNdis_80211_ServiceSetIdentifierSet ID = "" For i = 0 to objMSNdis_80211_ServiceSetIdentifier.Ndis80211SsId(0) ID = ID & chr(objMSNdis_80211_ServiceSetIdentifier.Ndis80211SsId(i + 4)) SSID = ID End Function wscript.echo SSID()
http://blogs.technet.com/b/heyscriptingguy/archive/2007/03/22/how-can-i-determine-the-signal-strength-of-a-wireless-connection.aspx
CC-MAIN-2014-52
refinedweb
1,596
67.49
Python has a reputation for looking like magic, and that's probably due in part to the many forms a function can take: lambdas, decorators, closures, and more. A well-placed function call can do amazing things, without ever writing a single class! You might say Functions Are Magic. Functions Revisited We already touched on functions in Data Typing and Immutability. If you haven't read that article yet, I'd recommend going back and taking a look now. Let's look at a quick example of a function, just to make sure we're on the same page. def cheer(volume=None): if volume is None: print("yay.") elif volume == "Louder!": print("yay!") elif volume == "LOUDER!": print("*deep breath*") print("yay!") cheer() # prints "yay." cheer("Louder!") # prints "yay!" cheer("LOUDER!") # prints "*deep breath* ...yay!" Nothing surprising here. The function cheer() accepts a single parameter volume. If we don't pass an argument for volume, it will default to the value None instead. What Is Functional Programming?. Functional programming is almost the opposite of this. We organize around functions, which we pass our data through. There are a few rules we have to follow: Functions should take only input, and produce only output. Functions should not have side effects; they should not modify anything external to themselves. Functions should (ideally) always produce the same output for the same inputs. There should be no state internal to the function that will break this pattern. Now, before you go and rewrite all your Python code to be pure functional, STOP! Don't forget that one of the beautiful things about Python is that it's a multi-paradigm language. You don't have to choose one paradigm and stick to it; you can mix and match the best of each in your code. In fact, we've already been doing this! Iterators and generators are both borrowed from functional programming, and they work quite well alongside objects. Feel free to incorporate lambdas, decorators, and closures too. It's all about choosing the best tool for the job. In practice, we rarely can avoid side-effects all together. In applying the concept of functional programming to your Python code, you should focus more on being mindful and deliberate about side-effects, rather than just to avoid them altogether. Limit them to those situations where there is no better way to solve the problem. There's no hard-and-fast rule to rely on here; you will need to develop your own discernment. Recursion When a function calls itself, that's called recursion. This can be helpful when we need to repeat the entire logic of a function, but a loop is unsuitable (or just feels too cluttered). NOTE: My example below is simplified to highlight recusion itself. This wouldn't actually be a situation where recursion would be the best approach; recusion is better when you need to repeatedly call complicated language on different pieces of data, such as when you're traversing a tree structure. import random random.seed() class Villain: def __init__(self): self.defeated = False def confront(self): # Roll of the dice. if random.randint(0,10) == 10: self.defeated = True def defeat(villain): villain.confront() if villain.defeated: print("Yay!") return True else: print("Keep trying...") return defeat(villain) starlight = Villain() victory = defeat(starlight) if victory: print("YAY!") The stuff related to random may look new. It's not really related to this topic, but in brief, we can generate random integers by first seeding the random number generator at the start of our program ( random.seed()), and then calling random.randint(min, max), where min and max define the inclusive range of possible values. The important part of the logic here is the defeat() function. As long as the villain is not defeated, the function calls itself, passing the villain variable. This will happen until one of the function calls returns a value. In this case, the value is returned up the recursive call stack, eventually getting stored in victory. No matter how long it takes *, we'll eventually defeat that villain. Beware Recursing Infinitely Recursion can be a powerful tool, but it can also present a problem: what if we have no way to stop? def mirror_pool(lookers): reflections = [] for looker in lookers: reflections.append(looker) lookers.append(reflections) print(f"We have {len(lookers) - 1} duplicates.") return mirror_pool(lookers) duplicates = mirror_pool(["Pinkie Pie"]) Clearly, this is going to run forever! Some languages don't provide a clean way to handle this — the function will just recurse infinitely until something crashes. Python stops this madness a little more gracefully. As soon as it reaches a set recursion depth (usually 997-1000 times), it stops the entire program and raises an error: RecursionError: maximum recursion depth exceeded while calling a Python object Like all errors, we can catch this before things get out of hand: try: duplicates = mirror_pool(["Pinkie Pie"]) except RecursionError: print("Time to watch paint dry.") Thankfully, because of how I wrote this code, I don't actually need to do anything special to clean up the 997 duplicates. The recursive function never returned, so duplicates remains undefined in this case. We might want to control recursion another way, however, so we don't have to use a try-except to prevent disaster. Within our recursive function, we can keep track of how many times it's been called by adding a calls parameter, and aborting as soon as it gets too big. def mirror_pool(lookers, calls=0): calls += 1 reflections = [] for looker in lookers: reflections.append(looker) lookers.append(reflections) print(f"We have {len(lookers) - 1} duplicates.") if calls < 20: lookers = mirror_pool(lookers, calls) return lookers duplicates = mirror_pool(["Pinkie Pie"]) print(f"Grand total: {len(duplicates)} Pinkie Pies!") We still have to figure out how to get rid of 20 duplicates without losing the original, but at least the program didn't crash. NOTE: You can override the maximum recursion level with sys.setrecursionlimit(n), where n is the maximum you want. Nested Functions From time to time, we may have a piece of logic which we want to reuse within a function, but we don't want to clutter up our code by making yet another function besides. def use_elements(target): elements = ["Honesty", "Kindness", "Laughter", "Generosity", "Loyalty", "Magic"] def use(element, target): print(f"Using Element of {element} on {target}.") for element in elements: use(element, target) use_elements("Nightmare Moon") Of course, the trouble with an example this simple is that the usefulness is not immediately apparent. Nested functions become helpful when we have a large chunk of logic that we want to abstract down to a function for reuability, but we don't want to define outside of our principal function. If the use() function were considerably more complicated, and perhaps called from more than just the loop, this design would be justified. Still, the simplicity of the example shows the underlying concept. It also brings up another difficulty. You will notice that we're passing target to the inner function, use(), each time we call it, and that feels rather pointless. Couldn't we just use the target variable that's already in local scope? In fact, we could. def use_elements(target): elements = ["Honesty", "Kindness", "Laughter", "Generosity", "Loyalty", "Magic"] def use(element): print(f"Using Element of {element} on {target}.") for element in elements: use(element) use_elements("Nightmare Moon") Yet as soon as we try to modify that variable, we run into trouble: def use_elements(target): elements = ["Honesty", "Kindness", "Laughter", "Generosity", "Loyalty", "Magic"] def use(element): print(f"Using Element of {element} on {target}.") target = "Luna" for element in elements: use(element) print(target) use_elements("Nightmare Moon") Running that code raises an error: UnboundLocalError: local variable 'target' referenced before assignment Clearly, it is no longer seeing our local variable target. This is because assigning to a name, by default, shadows any existing names in enclosing scopes. So, the line target == "Luna" is trying to create a new variable limited to the scope of use(), and that hides (shadows) the variable target in the enclosing scope of use_elements(). Python sees this and assumes that, since we're defining target in the function use(), all references to that variable relate to that local name. That's not what we want! The nonlocal keyword allows us to tell the inner function that we're working with the variable target from the enclosing local scope. def use_elements(target): elements = ["Honesty", "Kindness", "Laughter", "Generosity", "Loyalty", "Magic"] def use(element): nonlocal target print(f"Using Element of {element} on {target}.") target = "Luna" for element in elements: use(element) print(target) use_elements("Nightmare Moon") Now, when all is said and done, we see the value Luna printed out. Our work here is done! NOTE: If you're wanting to allow a function to be able to modify a variable that was defined the global scope (outside of all functions), use the global keyword instead of nonlocal. Closures Building on the idea of the nested function, and recalling that a function is treated no differently than any other object, we can create a function that actually builds and returns another function, called a closure. def harvester(pony): total_trees = 0 def applebucking(trees): nonlocal pony, total_trees total_trees += trees print(f"{pony} harvested from {total_trees} trees so far.") return applebucking apple_jack = harvester("Apple Jack") big_mac = harvester("Big Macintosh") apple_bloom = harvester("Apple Bloom") north_orchard = 120 west_orchard = 80 # watch out for fruit bats east_orchard = 135 south_orchard = 95 near_house = 20 apple_jack(west_orchard) big_mac(east_orchard) apple_bloom(near_house) big_mac(north_orchard) apple_jack(south_orchard) In this example, applebucking() would be the closure, because it closes over the nonlocal variables pony and total_trees. Even after the outer function terminates, the closure retains references to these variables. The closure is returned from the harvester() function, and can be stored in a variable like any other object. It is the fact it "closes over" a nonlocal variable that makes it a closure per se; otherwise, it'd just be a function. In this example, I'm using the closure to effectively create objects with state. In other words, each harvester remembers how many trees he or she has harvested from. This particular usage isn't strictly compliant with functional programming, but it's quite useful if you don't want to create an entire class just to store one function's state! apple_jack, big_macintosh, and apple_bloom are now three different functions, each with their own separate state; they each have a different name, and remember how many trees they have harvested from. What happens in one closure's state has no effect on the others. When we run the code, we see this state in action: Apple Jack harvested from 80 trees so far. Big Macintosh harvested from 135 trees so far. Apple Bloom harvested from 20 trees so far. Big Macintosh harvested from 255 trees so far. Apple Jack harvested from 175 trees so far. Easy as apple pie. The Problem With Closures Closures are essentially "implicit classes", because they put functionality and its persistent information (state) in the same object. There are, however, several unique disadvantages to closures: You cannot access the "member variables" as it were. In our example, I can never get to the total_treesvariable on the apple_jackclosure! I can only use that variable within the context of the closure's own code. The state of the closure is entirely opaque. Unless you know how the closure is written, you don't know what information it's keeping track of. Because of the previous two points, it is impossible to directly know when a closure has any state at all. When using closures, you need to be prepared to handle these problems, and all the debugging difficulties they introduce. I recommend only using them when you need a single function to store a small amount of private state between calls, and only for such a limited period of time in the code that writing an entire class doesn't feel justified. (Also, don't forget about generators and coroutines, which may be better suited to many such scenarios.) Basically, it's there. Closures can still be a useful part of your Python repitoire, so long as you use them with great care. Lambdas A lambda is an anonymous function (no name) made up of a single expression. That definition alone is the reason many programmers can't imagine why they'd ever need one. What's the point of writing a function that lacks a name, basically making reuse completely impractical? Sure, you can assign a lambda to a variable, but at that point, shouldn't you have just written a function? To understand this, let's take a look at an example without lambdas first: class Element: def __init__(self, element, color, pony): self.element = element self.color = color self.pony = pony def __repr__(self): return f"Element of {self.element} ({self.color}) is attuned to {self.pony}" elements = [ Element("Honesty", "Orange", "Apple Jack"), Element("Kindness", "Pink", "Fluttershy"), Element("Laughter", "Blue", "Pinkie Pie"), Element("Generosity", "Violet", "Rarity"), Element("Loyalty", "Red", "Rainbow Dash"), Element("Magic", "Purple", "Twilight Sparkle") ] def sort_by_color(element): return element.color elements = sorted(elements, key=sort_by_color) print(elements) The main thing I want you to notice is the sort_by_color() function, which I had to write for the express purpose of sorting the Element objects in the list by their color. This is a bit of an annoyance, actually, since I won't be needing that function ever again. Here's where lambdas come in. I can drop that entire function, and change the elements = sorted(...) line to: elements = sorted(elements, key=lambda e: e.color) Using a lambda allows me to delineate my logic exactly where I'm using it, and nowhere else. (The key= part is just indicating that I'm passing the lambda to the key parameter on sorted().) A lambda has the structure lamba <parameters>: <return expression>. It can collect as many paramters as it likes, separated by commas, but it can only have one expression, the value of which is implicitly returned. GOTCHA ALERT: Lambdas do not support type annotations (type hinting), unlike regular functions. If I wanted to rewrite that lambda to sort by the name of the Element, instead of the color, I only need to change the expression part: elements = sorted(elements, key=lambda e: e.name) It's as simple as that. Again, lambdas are chiefly useful whenever you need to pass a function with a single expression to another function. Here's another example, this time with more parameters on the lambda. To set up this example, let's start with a class for a Flyer, which stores the name and the maximum speed, and returns a random speed for the flyer. import random random.seed() class Flyer: def __init__(self, name, top_speed): self.name = name self.top_speed = top_speed def get_speed(self): return random.randint(self.top_speed//2, self.top_speed) We want to be able to make any given Flyer object perform any flying trick, but putting all that logic into the class itself would be impractical...there are perhaps thousands of flying tricks and variants! Lambdas are one way to define these tricks. We'll start by adding a function to this class that can accept a function as a parameter. We'll make the assumption that this function always takes a single argument: the speed at which the trick is performed. def perform(self, trick): performed = trick(self.get_speed()) print(f"{self.name} perfomed a {performed}") To use this, we create a Flyer object, and then pass functions to its perform() method. rd = Flyer("Rainbow Dash", 780) rd.perform(lambda s: f"barrel-roll at {s} mph.") rd.perform(lambda s: f"flip at {s} mph.") Because the lambda's logic is in the function call, it is a lot easier to see what's going on. Recall that you are allowed to store lambdas in a variable. This actually can be helpful when you want the code to be this brief, but need some reuability. For example, assume we have another Flyer, and we want both of them to perform a barrel-roll. spitfire = Flyer("Spitfire", 650) barrelroll = lambda s: f"barrel-roll at {s} mph." spitfire.perform(barrelroll) rd.perform(barrelroll) Sure, we could have written barrelroll as a proper single-line function, but by doing it this way, we saved ourselves a little bit of boilerplate. And, since we won't be using the logic again after this section of code, there's no point in having a full-blown function hanging out. Once again, readability matters. Lambdas are excellent for short, clear fragments of logic, but if you have anything more complicated, you should definitely write a proper function. Decorators Imagine we want to modify the behavior of any function, without actually changing the function itself. Let's start with a reasonably basic function: def partial_transfiguration(target, combine_with): result = f"{target}-{combine_with}" print(f"Transfiguring {target} into {result}.") return result target = "frog" target = partial_transfiguration(target, "orange") print(f"Target is now a {target}.") Running that gives us: Transfiguring frog into frog-orange. Target is now a frog-orange. Simple enough. But what if we wanted to add some extra fanfare to this? As you know, we really shouldn't put that logic in our partial_transfiguration function. This is where decorators come in. A decorator "wraps" additional logic around a function, such that we don't actually modify the original function itself. This makes for code that is far more maintainable. Let's start by creating a decorator for the fanfare. The syntax here might look a little overwhelming at first, but rest assured I'll break this down. import functools def party_cannon(func): @functools.wraps(func) def wrapper(*args, **kwargs): print("Waaaaaaaait for it...") r = func(*args, **kwargs) print("YAAY! *Party cannon explosion*") return r return wrapper You may have already recognized that wrapper() is actually a closure, which is created and returned from our party_cannon() function. We pass the function we're "wrapping", func. However, we really don't know anything about the function we're wrapping! It may or may not have arguments. The closure's parameter list (*args, **kwargs) can accept literally any number of arguments, from zero to (practically) infinity. We pass these arguments in the same way to func() when we call it. Of course, if there's some sort of mismatch between the parameter list on func() and the arguments passed to it through the decorator, the usual and expected error will be raised (which is obviously a good thing.) Within wrapper(), we are calling our function whenever and however we want to with func(). I chose to do so between printing my two messages. I don't want to throw away the value that func() returns, so I assign that returned value to r, and I make sure to return it at the end of my wrapper with return r. Mind you, there are really no hard-and-fast rules for the manner in which you call the function in the wrapper, or even if or how many times you call it. You can also handle the arguments and return values in whatever manner you see fit. The point is to make sure the wrapper doesn't actually break the function it wraps in some unexpected way. The odd little line just before the wrapper, @functools.wraps(func), is actually a decorator itself. Without it, the function being wrapped would essentially get confused as to its own identity, messing up our external access of such important function attributes as __doc__ (the docstring) and __name__. This special decorator ensures that doesn't happen; the function being wrapped retains its own identity, which is accessible from outside of the function in all the usual ways. (To use that special decorator, we had to import functools first.) Now that we have our party_cannon decorator written, we can use it to add that fanfare we wanted to the partial_transfiguration() function. Doing so is as simple as this: @party_cannon def partial_transfiguration(target, combine_with): result = f"{target}-{combine_with}" print(f"Transfiguring {target} into {result}.") return result That first line, @party_cannon is the only change we had to make! The partial_transfiguration function is now decorated. NOTE: You can even stack multiple decorators on top of each other, one above the next. Just make sure each decorator comes immediately before the function or decorator it is wrapping. Our usage from before hasn't changed at all: target = "frog" target = partial_transfiguration(target, "orange") print(f"Target is now a {target}.") Yet the output has indeed changed: Waaaaaaaait for it... Transfiguring frog into frog-orange. YAAY! *Party cannon explosion* Target is now a frog-orange. Review We've covered four aspects of functional "magic" in Python. Let's take a moment to recap. Recursion is when a function calls itself. Beware "infinite recursion"; Python won't let a recursion stack get more than approximate a thousand recursive calls deep. A nested function is a function defined in another function. A nested function can read the variables in its enclosing scope, but it cannot modify them unless you specify the variable as nonlocalfirst in the nested function. A closure is a nested function that closes over one or more nonlocal variables, and then is returned by the enclosing function. A lambda is an anonymous (unnamed) function made up of a single expression, the value of which is returned. Lambdas can be passed around and assigned to variables like any other object. Decorators "wrap around" another function to extend its behavior, without the function you're wrapping having to be directly modified. You can read the documentation for more information about these topics. (You'll actually notice that nested functions and closures are rarely mentioned in the official documentation; they're design patterns, rather than formally defined language structures.) Python Reference: Compound statements — Function definitions Python Reference: Expressions — Lambda PEP 318: Decorators for Functions and Methods Python Standard Library: functools Thanks to @deniska , @asdf, @SnoopDeJi (Freenode IRC), and @sandodargo (DEV) for suggested improvements. Discussion Curious about "Functions should not have side effects; they should not modify anything external to themselves." If I had a function designed to take a dictionary as an argument, with the intention that other relevant functions would pass in a dictionary generated dynamically at each call, rather than an existing global, how much does it matter if the function mutates the dictionary to produce its results? (In this case, writing to an SQL database.) Should I bulletproof it, maybe by making a full copy of the input dictionary inside the function, warn about the behavior in the docstring, or does it not matter? I'd be surprised if anyone but me ever uses this code, but, who knows XD That would still be considered impure, from a functional standpoint. If it makes sense in your code base, you might be able to get away with it, but it's still not recommended, as your unspoken intended rule may not be respected by future-you. Probably worth doing something to idiot proof it then, who knows who might try to do something with my private project in the future o.o XD Better habit, anyway! Usually the worst future idiots using our projects are our own overworked selves. That's exactly who I'm most worried about 😅 Very useful article. Regarding the default depth, it's interesting. Here and there I read 997, then I went to see the CPython implementation, there it's set to 1000. Well, it doesn't change a lot of things. But probably it's worth mentioning that you actually change that limit, by using sys.setrecursionlimit. Great insight, thanks! There will be no "Transfiguring frog into frog-orange." since you redefined 'partial_transfiguration'. Thanks for the useful article. Where specifically is it redefined? 😅 but before this, it was: Ah, following you now. I've been working in C++ for the past couple of weeks, so "redefined" had a different connotation for me. I had indeed removed the print statement in the second version! I'll go back and fix that. Thanks for the catch. Great explanation of various concepts I see all the time in Python code, but wasn't sure how, or if, I wanted to make use of my own code. Now I can! I love this guy. Respects. Great article, thanks a lot. I suggest reading dev.to/yawpitch/the-35-words-you-n... to people who like this article.
https://practicaldev-herokuapp-com.global.ssl.fastly.net/codemouse92/dead-simple-python-lambdas-decorators-and-other-magic-5gbf
CC-MAIN-2020-45
refinedweb
4,097
56.45
React Storybook: Develop Beautiful User Interfaces with Ease When you start a new front-end project, the first thing you usually do is create a beautiful design. You carefully plan and draw all of your UI components, as well as each state or effect they may have. However, during development, things usually start to change. New requirements, as well as unforeseen use cases pop up here and there. The initial beautiful component library cannot cover all of these requirements and you start to expand it with new designs. It’s good if at this point you still have a design expert around, but all too often they have already switched to a different project and left the developers to cope with these changes. As a result, the consistency of the design begins to slip. It becomes difficult to track what components you already have in your library and what states and appearances they may have. To avoid this artistic mess it’s usually a good idea to create separate documentation for all of your components. There are various tools for such purposes, but in this article, we’ll focus on a tool designed particularly for React applications — React Storybook. It allows you to easily browse your collection of components and their functionality. A living example of such an app is the gallery of React Native components. Why Do You Need React Storybook? So how does this showcase help? To answer this question, let’s try to put together a list of people who take part in the development of UI components and assess their needs. Depending on your workflow this list might differ, but the usual suspects are the following: Designer or UX expert This is the person responsible for the look and feel of the user interface. After the mockup phase of the project is finished, often the designer leaves the team. When new requirements arise, they need to quickly catch up on the current state of the UI. Developer The developer is the one who creates these components and probably the main beneficiary of a style guide. The two major use cases for the developer are being able to find a suitable component from the library and be able to test them during development. Tester This is the meticulous person who makes sure the components are implemented as expected. A major part of a tester’s work is making sure that a component behaves correctly in every way. And although this does not eliminate the need for integration testing, this is often more convenient to do separately from the project itself. Product owner The person who accepts the designs and the implementation. The product owner needs to make sure each part of the project looks as expected and that the brand style is represented in a consistent manner. You’ve probably noticed that a common denominator for everybody involved, is having a single place containing all of the components at once. Finding all of them in the project itself can be quite tedious. Think about it, how long will it take you to find all possible variations of buttons in your project, including their states (disabled, primary, secondary etc)? That’s why having a separate gallery is much more convenient. If I’ve managed to convince you, let’s see how we can set up Storybook in a project. Setting up React Storybook To set up React Storybook the first thing you’ll need is a React project. If you don’t have a suitable one at the moment, you can easily create one using create-react-app. To generate a Storybook, install getstorybook globally npm i -g getstorybook Then navigate to your project and run getstorybook This command will do three things: - Install @kadira/storybook into your project. - Add the storybookand build-storybookscripts to your package.jsonfile. - Create a .storybookfolder which contains the basic configuration and a storiesfolder with a sample component and story. To run Storybook, execute npm run storybook and open the address displayed (). The app should look like this: Adding New Content Now that we have React Storybook running, let’s see how we can add new content. Each new page is added by creating stories. These are snippets of code that render your component. An example story generated by getstorybook looks like this //src/stories/index.js import React from 'react'; import { storiesOf, action, linkTo } from '@kadira/storybook'; import Button from './Button'; import Welcome from './Welcome'; storiesOf('Welcome', module) .add('to Storybook', () => ( <Welcome showApp={linkTo('Button')}/> )); storiesOf('Button', module) .add('with text', () => ( <Button onClick={action('clicked')}>Hello Button</Button> )) .add('with some emoji', () => ( <Button onClick={action('clicked')}> </Button> )); The storiesOf function creates a new section in the navigation menu, and the add method creates a new subsection. You are free to structure the storybook however you see fit, but you cannot create hierarchies deeper then two levels. A straightforward approach to structuring your Storybook is creating common top-level sections such as “Form inputs”, “Navigation” or “Widgets” for groups of related elements, and sub-sections for individual components. You are free to choose where to place your story files: in a separate stories folder or next to the components. I, personally, prefer the latter since keeping the stories close to the components helps to keep them accessible and up to date. Stories are loaded in the .storybook/config.js file which contains the following code: import { configure } from '@kadira/storybook'; function loadStories() { require('../src/stories'); } configure(loadStories, module); By default, it loads the src/stories/index.js file and expects you to import your stories there. This is slightly inconvenient since it would require us to import each new story we create. We can modify this script to automatically load all of the stories using Webpack’s require.context method. To distinguish story files from the rest of the code, we can agree to add a .stories.js extension to them. The modified script should look like this: import { configure, addDecorator } from '@kadira/storybook'; import React from 'react'; configure( () => { const req = require.context('../src', true, /.stories.js$/); req.keys().forEach((filename) => req(filename)); }, module ); configure(loadStories, module); If you’re using a different folder for your source code, make sure you point it to the correct location. Re-run Storybook for the changes to take effect. The Storybook will be empty since it no longer imports the index.js file, but we’ll soon fix that. Writing a New Story Now that we’ve slightly tailored Storybook to our needs, let’s write our first story. But first of all we need to create a component to showcase. Let’s create a simple Name component to display a name in a colored block. The component will have the following JavaScript and CSS. import React from 'react'; import './Name.css'; const Name = (props) => ( <div className={'name ' + (props.type ? props.type : '')}>{props.name}</div> ) Name.propTypes = { type: React.PropTypes.oneOf(['highlight', 'disabled']), } export default Name; .name { display: inline-block; font-size: 1.4em; background: #4169e1; color: #fff; border-radius: 4px; padding: 4px 10px; } .highlight { background: #dc143c; } .disabled { background: #999; } As you’ve probably noticed, this simple component can have three states: default, highlighted and disabled. Wouldn’t it be nice to visualize all of them? Let’s write a story for that. Create a new Name.stories.js file alongside your component and add the following contents: import React from 'react'; import { storiesOf, action, linkTo } from '@kadira/storybook'; import Name from './Name'; storiesOf('Components', module) .add('Name', () => ( <div> <h2>Normal</h2> <Name name="Louie Anderson" /> <h2>Highlighted</h2> <Name name="Louie Anderson" type="highlight" /> <h2>Disabled</h2> <Name name="Louie Anderson" type="disabled" /> </div> )) Open Storybook and have a look at your new component. The result should look like this: Feel free to play around with how the component is displayed as well as with its source. Note that thanks to React’s hot reloading functionality, whenever you edit the story or the component, the changes will instantly appear in your Storybook without the need to manually refresh the browser. However refreshing might be required when you add or remove a file. Storybook doesn’t always notice such changes. View customization If you would like to change how your stories are displayed, you can wrap them in a container. This can be done using the addDecorator function. For example, you can add an “Examples” header for all your pages by adding the following code to .storybook/config.js: import { configure, addDecorator } from '@kadira/storybook'; import React from 'react'; addDecorator((story) => ( <div> <h1>Examples</h1> {story()} </div> )); You can also customize separate sections by calling addDecorator after storiesOf: storiesOf('Components', module) .addDecorator(...) Publishing Your Storybook Once you’re done working on your Storybook and you feel that it’s ready to be published, you can build it as a static website by running npm run build-storybook By default, Storybook is built into the storybook-static folder. You can change the output directory using the -o parameter. Now you just need to upload it to your favorite hosting platform. If you’re working on a project on GitHub you can publish your Storybook just by building it into the docs folder and pushing it to the repository. GitHub can be configured to serve your GitHub Pages website from there. If you don’t want to keep your built Storybook in the repository, you can also use storybook-deployer. Build Configuration Storybook is configured to support a number of features inside of the stories. You can write in the same ES2015+ syntax as in create-react-app, however, if your project uses a different Babel configuration, it will automatically pick up your .babelrc file. You can also import JSON files and images. If you feel that this is not enough, you can add additional webpack configuration by creating a webpack.config.js file in the .storybook folder. The configuration options exported by this file will be merged with the default configuration. For instance, to add support for SCSS in your stories, just add the following code: module.exports = { module: { loaders: [ { test: /.scss$/, loaders: ["style", "css", "sass"] } ] } } Don’t forget to install sass-loader and node-sass though. You can add any webpack configuration you desire, however, you cannot override the entry, output and the first Babel loader. If you would like to add different configuration for the development and production environments, you can export a function instead. It will be called with the base configuration and the configType variable set to either ‘DEVELOPMENT’ or ‘PRODUCTION’. module.exports = function(storybookBaseConfig, configType) { // add your configuration here // Return the altered config return storybookBaseConfig; }; Expanding Functionality with Addons Storybook is extremely useful by itself, but to make things better it also has a number of addons. In this article, we’ll cover only some of them, but be sure to check out the official list later. Actions and Links Storybook ships with two pre-configured addons: Actions and Links. You don’t need to undertake any additional configuration to use them. Actions Actions allow you to log events triggered by your components in the “Action Logger” panel. Have a look at the Button story generated by Storybook. It binds the onClick event to an action helper, which displays the event in the UI. Note: you might need to rename the file containing the Button story and/or change its location based on the modifications made in .storybook/config.js. storiesOf('Button', module) .add('with text', () => ( <Button onClick={action('clicked', 'test')}>Hello Button</Button> )) Try clicking on the button and note the output in the “Action logger”. Links The Links addon allows you to add navigation between components. It provides a linkTo helper which can be bound to any onClick event: import { storiesOf, linkTo } from '@kadira/storybook'; storiesOf('Button', module) .add('with link', () => ( <Button onClick={linkTo('Components', 'Name')}>Go to Name</Button> )); Clicking on this button will take you to the section “Component” and sub-section “Name”. Knobs The Knobs addon allows you to customize your components by modifying React properties during runtime, straight from the UI. To install the addon run: npm i --save-dev @kadira/storybook-addon-knobs Before you can use the addon, it needs to be registered with Storybook. To do that, create an addons.js file in the .storybook folder with the following contents: import '@kadira/storybook/addons'; import '@kadira/storybook-addon-knobs/register'; After that, wrap your stories with the withKnobs decorator. You can do this globally in .storybook/config.js: import { withKnobs } from '@kadira/storybook-addon-knobs'; addDecorator(withKnobs); Once we’ve done with that, we can try to alter our Name component story. Now, instead of having all three variations of component state at once, we’ll be able to select them in the UI. We’ll also make the name editable as well. Change the contents of Name.stories.js to: import React from 'react'; import { storiesOf, action, linkTo } from '@kadira/storybook'; import { text, select } from '@kadira/storybook-addon-knobs'; import Name from './Name'; const types = { '': '', highlight: 'highlight', disabled: 'disabled' } storiesOf('Components', module) .add('Name', () => ( <div> <h2>Normal</h2> <Name name={text('Name', 'Louie Anderson')} type={select('Type', types)} /> </div> )) The addon provides various helper functions to create user inputs of different types, such as numbers, ranges or arrays. Here we’ll use text for the name, and select for the type. Open the “Name” page and a new “Knobs” tab should appear next to “Action Logger”. Try to change the input values and see the component being re-rendered. Info The Info addon allows you to add more information about a story, such as its source code, description and React propTypes. Having this information accessible is very handy for developers. Install this addon by running: npm i --save-dev @kadira/react-storybook-addon-info Then register the addon with Storybook in the .storybook/config.js file: import { setAddon } from '@kadira/storybook'; import infoAddon from '@kadira/react-storybook-addon-info'; setAddon(infoAddon); This will add an additional addWithInfo method to the storiesOf object to register your stories. It has a slightly different API and accepts the title of the story, description, render function and additional configuration as parameters. Using this method, we can rewrite our Name story like this: import React from 'react'; import { storiesOf, action } from '@kadira/storybook'; import Name from './Name'; storiesOf('Components', module) .addWithInfo( 'Name with info', ` A component to display a colored name tag. `, () => ( <Name name="Louie Anderson" /> ), { inline: true }, ) The inline parameter will make the information be displayed by default, instead of being accessible via a link in the corner. The result will look like this: Automated testing An important aspect of Storybook which wasn’t covered in this article is in using it as a platform to run automated tests. You can execute any kinds of tests, from unit tests to functional and visual regression tests. Unsurprisingly, there are a couple of addons aimed at boosting Storybook’s capabilities as a testing platform. We won’t go into details about them since they deserve an article of their own, but still, would like to mention them. Specifications The Specifications addon allows you to write unit tests directly in your story files. The tests will be executed whenever you open Storybook and the result displayed in the UI. After some tinkering, you can also run this tests on a CI environment using Jest. You might also like: How to Test React Components Using Jest Storyshots Storyshots allows you execute Jest Snapshot Tests based on the stories. Snapshot tests allow you to check if the DOM rendered by the components matches the expected result. Very convenient for testing whether your components have been rendered correctly. At least from the DOM point of view. Storybook as a Service Kadira also provides the Storybook as a service called Storybook Hub. It allows you to host your storybook with them and take collaboration to a new level. Apart from the standard features, it also integrates with GitHub and can generate a new storybook for each pull request to your project. You can also leave comments directly in Storybook to discuss the changes with your colleagues. Conclusion If you feel that maintaining the UI components in your projects is starting to become a pain, take a step back and see what you’re missing. It might be that all you need is a convenient collaboration platform between all of the parties involved. In this case, for your React projects look no further, Storybooks is the perfect tool for you. Are you using Storybook already? Are you intending to give it a try? Why? Or indeed, why not? I’d love to hear from you in the comments. This article was peer reviewed by Tim Severien and Giulio Mainardi. Thanks to all of SitePoint’s peer reviewers for making SitePoint content the best it can be!
https://www.sitepoint.com/react-storybook-develop-beautiful-user-interfaces-with-ease/
CC-MAIN-2018-34
refinedweb
2,812
55.74
I am developing a package and I am using openers. In the class where I declare de open I have this line to declare it: atom.workspace.open('package-name://given-string', options object) In the main file, on the activate method I declare de addOpener: return atom.workspace.addOpener(function(uri, options) { const ref = new URL(uri) const protocol = ref.protocol }) At the beginning of the main file I have the import for URL: import { URL } from 'url', when I reload Atom to have the changes available, if I try to open any file I get an error like this: Uncaught (in promise) TypeError: Invalid URL at URL.onParseComplete (internal/url.js:91:11) at parse (internal/url.js:115:11) at new URL (internal/url.js:221:5) at path-to-file:XX:YY I don’t know which is the correct way to work with openers and if I should do in another way. I tried with a try/catch and the files are opened but if I try to log the error I don’t see any, and I’m not convinced to use it if the catch statement is not giving nothing.
https://discuss.atom.io/t/after-adding-an-opener-i-cant-open-any-file/50977
CC-MAIN-2018-30
refinedweb
197
69.92
Windows Event Programming So far in this book, all but one of the network programming examples have used the console mode in .NET. Windows console-mode programming uses a traditional structured programming model. In structured programming, program flow is controlled within the program itself. Each method within the class is called in turn by the functions as they occur in the program execution. The customer does not have options for changing the program execution other than what is allowed by the program. By contrast, Windows programming uses an event programming model. Windows event programming bases program flow on events. As events occur within the program, specific methods are called and performed based on the events, as illustrated in Figure. This does not work well with blocking network functions, however. When an application presents a graphical interface to a customer, it will wait for events from the customer to determine what functions to perform. Event programming assumes that while other functions are processing (such as network access), the customer will still have control over the graphical interface. This allows the customer to perform other functions while waiting for a network response, or even abort network connections if necessary. However, if the blocking network functions were used, program execution would wait while the function was performed, and the customer would have control over neither the interface nor the program. This section describes how Windows event programming is accomplished using C# constructs and how the .NET asynchronous network methods work within the Windows event programming model. Using Events and Delegates The .NET programming environment is closely tied to the Windows environment, so it is not surprising that .NET fully supports the event programming model. In .NET event programming, the two key constructs are the events and delegates. An event is a message sent by an object that represents an action that has taken place. The message identifies the action and gives any useful data related to the action. Events can be anything from the customer clicking a button (where the message represents the button name), to a packet being received on a socket (where the message represents the socket that received the data). The event sender does not necessarily know what object will handle the event message once it is sent through the Windows system. It is up to the event receiver to register with the Windows system and inform it of what types of events the receiver wants to receive. Figure demonstrates this function. The event receiver is identified within the Windows system by a pointer class called a delegate. The delegate is a class that holds a reference to a method that can handle the received event. When Windows receives an event, it checks to see if any delegates are registered to handle it. If any delegates are registered to handle the event, the event message is passed to the methods defined by the delegates. After the methods complete, the Windows system processes the next event that occurs, until an event signals the end of the program. Sample Event Program Successful event programming is, of course, vital to writing successful Windows programs. Every object produced in a Windows graphical program can generate one or more events based on what the customer is doing with the object. The .NET Framework System.Windows.Forms namespace contains classes for all the Windows objects necessary to create full-featured graphical programs in the Windows environment. These include the following: Buttons Text boxes List boxes Combo boxes Check boxes - Scroll bars Window menus It’s easy to create professional-quality network programs using these Windows objects. As each object is added to a Window form, you must register the method that will be used for its event handler. When the event is generated, Windows passes control of the program to the event handler method. Listing 8.1 demonstrates a simple Windows Forms program that uses some simple Windows objects. using System; using System.Drawing; using System.Windows.Forms; class WindowSample Form: { private TextBox data; private ListBox results; public WindowSample() { Text = "Sample Window Program"; Size = new Size(400, 380); Label label1 = new Label(); label1.Parent = this; label1.Text = "Enter text string:"; label1.AutoSize = true; label1.Location = new Point(10, 10); data = new TextBox(); data.Parent = this; data.Size = new Size(200, 2 * Font.Height); data.Location = new Point(10, 35); results = new ListBox(); results.Parent = this; results.Location = new Point(10, 65); results.Size = new Size(350, 20 * Font.Height); Button checkit = new Button(); checkit.Parent = this; checkit.Text = "test"; checkit.Location = new Point(235,32); checkit.Size = new Size(7 * Font.Height, 2 * Font.Height); checkit.Click += new EventHandler(checkit_OnClick); } void checkit_OnClick(object obj, EventArgs ea) { results.Items.Add(data.Text); data.Clear(); } public static void Main() { Application.Run(new WindowSample()); } } This sample program is pretty simplistic from the Forms point of view so you can focus on what it teaches you about event programming. First, remember that all Windows Forms programs must use the System.Windows.Forms namespace, along with the System.Drawing namespace, to help position objects in the window: using System.Drawing; using System.Windows.Forms; Because the application creates a window, it must inherit the standard window Form class: class WindowSample : Form The constructor for the class must define all the graphical objects that are used in the form. First, the standard values for the Windows header and default size are defined: Text = "Sample Window Program"; Size = new Size(400, 380); Next, each object that will appear in the window is defined, along with its own properties. This example creates the following objects: A Label object to display an instructional text string A TextBox object to allow the customer to enter data A ListBox object to easily display output to the customer A Button object to allow the customer to control when the action will occur The key to the action in this program is the EventHandler registered for the Button object, which registers the method ButtonOnClick() with a click event on the Button object checkit: checkit.Click += new EventHandler(ButtonOnClick); When the customer clicks the button, the program control moves to the ButtonOnClick() method: void ButtonOnClick(object obj, EventArgs ea) { results.Items.Add(data.Text); data.Clear(); } This simple method performs only two functions. First it extracts the text string entered in the TextBox object and writes it to the ListBox object. Next, it clears the text in the TextBox. Each time the customer clicks the Button object, a new text string is placed in the ListBox as a new line. These simple Windows Forms programming objects will be utilized in a network programming example in the "Sample Programs Using Asynchronous Sockets" section later in this chapter. The AsyncCallback Class Just as events can trigger delegates, .NET also provides a way for methods to trigger delegates. The .NET AsyncCallback class allows methods to start an asynchronous function and supply a delegate method to call when the asynchronous function completes. This process is different from standard event programming in that the event is not generated from a Windows object, but rather from another method in the program. This method itself registers an AsyncCallback delegate to call when the method completes its function. As soon as this occurs and the method indicates its completion to the Windows OS, an event is triggered to transfer the program control to the method defined in the registered AsyncCallback delegate. The Socket class utilizes the method defined in the AsyncCallback to allow network functions to operate asynchronously in background processing. It signals the OS when the network functions have completed and passes program control to the AsyncCallback method to finish the network function. In a Windows programming environment, these methods often help avoid the occurrence of an application lock-up while waiting for network functions to complete.
http://codeidol.com/community/dotnet/windows-event-programming/10484/
CC-MAIN-2017-17
refinedweb
1,300
56.15
c++ inheritance Question Description Implementation This section further describes each class method found within the UML diagram. Class Employee· get/setName gets and sets the employee's private name variable. Be sure to filter out bad input, such as empty strings.· get/setEmployeeId gets and sets the employee's ID number. Be sure to filter out ID numbers that are less than or equal to zero.· set/isWorking sets and gets whether or not the employee is working. No filtering is required.· toString returns a tab-delimited string in the following format: "name \t id \t is working" Class StudentEmployee· get/setPayRate gets and sets the employee's pay rate. Be sure to filter out negative values.· get/setHoursWorked gets and sets the hours worked for the current week. Be sure to filter out negative values.· getWeeklyPay computes the student's weekly pay. To calculate, multiply the number of hours worked times the pay rate.· set/isWorkStudy sets and gets whether the student is work study.· toString() returns a tab-delimited string in the following format: "name \t id \t is working \t hours worked \t is work study \t pay rate" Testing Our Classes Use your main function to test the StudentEmployee class. In main, prompt the user for a CSV (comma separated value) file to open. Then, using the provided StringSplitter class, split the CSV using the comma as a delimiter and use the data to create a new StudentEmployee. Finally, output all StudentEmployees onto the screen. we have : CVS.TXT StringSplitter.h #ifndef STRINGSPLITTER_H #define STRINGSPLITTER_H #include <string> #include <vector> using namespace std; class StringSplitter { public: //Accepts a string and a delimiter. Will use items_found to return the number //of items found as well as an array of strings where each element is a piece of //the original string. static string * split(string text, string delimiter, int &items_found) { //vectors are dynamically expanding arrays vector<string> pieces; //find the first delimiter int location = text.find(delimiter); //we are starting at the beginning of our string int start = 0; //go until we have no more delimiters while(location != string::npos) { //add the current piece to our list of pieces string piece = text.substr(start, location - start); pieces.push_back(piece); //update our index markers for the next round start = location + 1; location = text.find(delimiter, start); } //at the end of our loop, we're going to have one trailing piece to take care of. //handle that now. string piece = text.substr(start, location - start); pieces.push_back(piece); //convert from vector into an array of strings int size = pieces.size(); string *pieces_str = new string[size]; for(int i = 0; i < size; i++) { pieces_str[i] = pieces.at(i); } items_found = size; return pieces_str; } }; #endif Tutor Answer Thank you for the opportunity to help you with your question! in general, do not need to be licensed or pass any standardized (or governmentally regulated) certification tests in order to call themselves "programmers" or even "software engineers. >
https://www.studypool.com/discuss/1036634/c-inheritance
CC-MAIN-2019-13
refinedweb
491
57.87
Welcome to part 6 of my C video tutorial. Today I’m going to cover Unions, Enumerated Types, the Designated Initializer, Using unions in Structs, Recursive Structures, Linked Lists and much more. I’m also going to experiment with a new style that is more interactive and I hope it feels more like a classroom setting. Throughout the tutorial I will constantly insert brain teasers. Hopefully they aren’t distracting. I do this every once in a while to try and improve. If you like tutorials like this it helps to tell Google+ with a click here [googleplusone] Code From the Video CTutorial6_1.c #include <stdio.h> // In the last tutorial I talked about weight and height // but there are many ways to weigh and measure. // Weight: pounds, grams, kilograms, milligrams, ounces // Measurement: centimeter, feet, inch, millimeter // Let's say I sell oranges in different forms. // Per Orange: $.45 // Per Pound: $1.14 // Orange Juice (16oz): $2.43 // A union allows you store one piece of data that can // be of a different type. You can't store multiple // values though. // It wouldn't be normal if a customer asks to buy one // orange, for you to quote the price for just the juice. // A union also doesn't store all 3 values, but instead // only 1 of the 3. void main(){ typedef union{ // Nobody is going to buy 255 oranges for $114.75 // individually when they can buy in pounds // for $96.90. That is why a 2 byte short works // We are also not going to sell 1/2 an orange short individual; float pound; float ounce; } amount; // The Designated Initializer is used to set a union field amount orangeAmt = {.ounce = 16}; // You can also set the value with the dot operator orangeAmt.individual = 4; /*------------------------------ A Union Only Holds 1 Value at a Time It may seem as if it can hold more, but that's a coincidence because values are stored in the same block of data. ------------------------------*/ // This may or may not work because individual and not // ounce is the stored value now printf("Orange Juice Amount: %.2f : Size: %d\n\n", orangeAmt.ounce, sizeof(orangeAmt.ounce)); // If you put %f in instead of %d, you MAY get the ounces printf("Number of Oranges: %d : Size: %d\n\n", orangeAmt.individual, sizeof(orangeAmt.individual)); // Get the location in memory printf("Indiv Location: %d\n\n", &orangeAmt.individual); orangeAmt.pound = 1.5; printf("Pounds of Oranges: %.2f : Size: %d\n\n", orangeAmt.pound, sizeof(orangeAmt.pound)); // This location is the same as individual printf("Pound Location: %d\n\n", &orangeAmt.individual); // You can use Unions in Structs typedef struct{ char *brand; amount theAmount; } orangeProduct; // You can initialize with a Designated Initializer // here as well orangeProduct productOrdered = {"Chiquita", .theAmount.ounce = 16}; // You print out with the dot operator printf("You bought %.2f ounces of %s oranges\n\n", productOrdered.theAmount.ounce, productOrdered.brand); // Now back to the problem above where we get bad // data if the wrong conversion character. // First we have to learn about enums though // An enum is used when you only will ever need // a limited number of possible values. typedef enum{ INDIV, OUNCE, POUND } quantity; quantity quantityType = INDIV; orangeAmt.individual = 4; if(quantityType == INDIV){ printf("You bought %d oranges\n\n", orangeAmt.individual); } else if(quantityType == OUNCE){ printf("You bought %.2f ounces of oranges\n\n", orangeAmt.ounce); } else { printf("You bought %.2f pounds of oranges\n\n", orangeAmt.pound); } } CTutorial6_2.c #include <stdio.h> typedef struct product{ const char *name; float price; // Added to make a Recursive Structure // product also must be listed above with // a Recursive Structure struct product *next; } product; // Cycles through the Linked List and prints it void printLinkedList(product *pProduct){ // While the value for next isn't NULL // which signals the end of the list keep going while(pProduct != NULL){ // Printing values using (*) and -> printf("A %s costs %.2f\n\n", (*pProduct).name, pProduct->price); // Switch to the next item in the Linked List pProduct = pProduct->next; } } void main(){ // If you had a bunch of products in your store // you could store them in an array, but that // limits you because they have a fixed length // product theProducts[20]; // A Linked List however can store an unlimited // number of items. // It is called a linked List because it contains // an item and a link to the next item in a list. // Another benefit to a Linked List is that you can // insert new items any place in the list. // To make a Linked List of Structs requires each // Struct to contain a link to the next. // (Recursive Structure) // I'm creating the products and setting each // next to null product tomato = {"Tomato", .51, NULL}; product potato = {"Potato", 1.21, NULL}; product lemon = {"Lemon", 1.69, NULL}; product milk = {"Milk", 3.09, NULL}; product turkey = {"Turkey", 1.86, NULL}; // Now assign a pointer to the value of next tomato.next = &potato; potato.next = &lemon; lemon.next = &milk; milk.next = &turkey; // What do we do if we want to insert Apples after // the lemon? product apple = {"Apple", 1.58, NULL}; // Change the values for next lemon.next = &apple; apple.next = &milk; printLinkedList(&tomato); } I got the linked List from the first time watching, the tutorial was good. The quizz for the union -> my answer for that is: the actual bytes of information that store the entire struct are interpreted differently depending on the data type that is asumed, so if you put 16.00 for pund and you request the info interpreting it as an int (%d) then the processor will interpret it in a different way, cuzz int is expected to be 4 bytes normally and a a float is 4 and a short is 2(I think), but the struct will be stored in the highest possible data type size – 8 bytes and the float has floating points, so they all read the bits in a different way, although they are always the same 64 bits. Is that it? Cheers, The reason why they seem to both be stored is because data is stored as bits. The bit representation for 4 is 100 and the bit representation for 16 is 10000. So when you ask for the number of oranges, just the first 3 bits are checked and you get 100 or 4 returned. When you ask for Orange juice amount you get 10000 returned or 16. Does that make sense? Thanks, it does make sense, that was the point I was trying to make. I wasn’t aware of how exactly the bits are accesesd – I was guessing that it was from right to left, but apparently its a different way. Anyway I got it now, thans!!! I’m very happy I could help 🙂
http://www.newthinktank.com/2013/08/c-video-tutorial-6/
CC-MAIN-2019-18
refinedweb
1,125
73.88
pip install backcall The Python backcall library is among the top 100 Python libraries, with more than 16,119,694 downloads. This article will show you everything you need to get this installed in your Python environment. How to Install backcall on Windows? - Type "cmd"in the search bar and hit Enterto open the command line. - Type “ pip install backcall” (without quotes) in the command line and hit Enteragain. This installs backcall for your default Python installation. - The previous command may not work if you have both Python versions 2 and 3 on your computer. In this case, try "pip3 install backcall"or “ python -m pip install backcall“. - Wait for the installation to terminate successfully. It is now installed on your Windows machine. Here’s how to open the command line on a (German) Windows machine: First, try the following command to install backcall on your system: pip install backcall Second, if this leads to an error message, try this command to install backcall on your system: pip3 install backcall Third, if both do not work, use the following long-form command: python -m pip install backcall backcall on Linux? You can install backcall on Linux in four steps: - Open your Linux terminal or shell - Type “ pip install backcall” (without quotes), hit Enter. - If it doesn’t work, try "pip3 install backcall"or “ python -m pip install backcall“. - Wait for the installation to terminate successfully. The package is now installed on your Linux operating system. How to Install backcall on macOS? Similarly, you can install backcall on macOS in four steps: - Open your macOS terminal. - Type “ pip install backcall” without quotes and hit Enter. - If it doesn’t work, try "pip3 install backcall"or “ python -m pip install backcall“. - Wait for the installation to terminate successfully. The package is now installed on your macOS. How to Install backcall in PyCharm? Given a PyCharm project. How to install the backcall "backcall"without quotes, and click Install Package. - Wait for the installation to terminate and close all pop-ups. Here’s the general package installation process as a short animated video—it works analogously for backcall if you type in “backcall” in the search field instead: Make sure to select only “backcall” because there may be other packages that are not required but also contain the same term (false positives): How to Install backcall in a Jupyter Notebook? To install any package in a Jupyter notebook, you can prefix the !pip install my_package statement with the exclamation mark "!". This works for the backcall library too: !pip install my_package This automatically installs the backcall library when the cell is first executed. How to Resolve ModuleNotFoundError: No module named ‘backcall’? Say you try to import the backcall package into your Python script without installing it first: import backcall # ... ModuleNotFoundError: No module named 'backcall' Because you haven’t installed the package, Python raises a ModuleNotFoundError: No module named 'backcall'. To fix the error, install the backcall library using “ pip install backcall” or “ pip3 install backcall” in your operating system’s shell or terminal first. See above for the different ways to install backcall.
https://blog.finxter.com/how-to-install-backcall-in-python/
CC-MAIN-2022-33
refinedweb
518
65.42
Timeline 11/26/09: - 18:16 Changeset [57964] by - Use US-ASCII for html documentation. - 18:15 Changeset [57963] by - Suppress a warning that's in the windows mobile system headers. - 18:14 Changeset [57962] by - Fix the version check when suppressing warnings. - 17:19 Ticket #445 (Can't delete automatically in circularly included object.) closed by - wontfix - 17:16 Ticket #3601 (Patch for warning fixes) closed by - fixed: (In [57961]) Merge [57542] to release. Fixes #3601. - 17:16 Changeset [57961] by - Merge [57542] to release. Fixes #3601. - 17:10 Ticket #2962 (make_shared perfect forwarding) closed by - fixed: (In [57960]) Merge [57520] to release. Fixes #2962. - 17:10 Changeset [57960] by - Merge [57520] to release. Fixes #2962. - 16:58 Ticket #3456 (smart_ptr number in documentation) closed by - fixed: (In [57959]) Merge [57950], [57952] to release. Fixes #3404. Fixes #3456. - 16:58 Ticket #3404 (shared_from_this documentation: missing semicolon in example code) closed by - fixed: (In [57959]) Merge [57950], [57952] to release. Fixes #3404. Fixes #3456. - 16:58 Changeset [57959] by - Merge [57950], [57952] to release. Fixes #3404. Fixes #3456. - 16:40 Changeset [57958] by - Fix interlocked.hpp to compile under /clr:pure. Refs #3378. - 16:20 Changeset [57957] by - Remove std::move references. Refs #3570. - 16:04 Changeset [57956] by - moved tag namespace out of detail namespace into bindings namespace - 16:04 Changeset [57955] by - Extend Borland workaround to 6.2. - 16:02 Changeset [57954] by - Extend Borland workaround to 6.2 and above. - 15:58 Ticket #3681 (Comeau throws "no operator==" and "no suitable conversion" errors) created by - Building Boost.Filesystem library using Comeau C/C++ frontend with GCC … - 15:55 Changeset [57953] by - Add error checking to lwm_pthreads.hpp. Refs #2681. - 15:37 Changeset [57952] by - Fix number of smart pointers. Refs #3456. - 15:34 Ticket #2456 (ASIO localhost resolve) closed by - invalid: This depends on two things: 1. Whether you're using address resolution … - 15:32 Ticket #3272 (Boost::shared_ptr crash when used in stl queue) closed by - invalid: Closing - we can't do anything without a test case that reproduces the … - 15:24 Ticket #3680 (Inhibit warning about use of C99 long long constant in GCC) created by - To inhibit the innocuous warning about the use of C99 long long constants … - 15:17 Changeset [57951] by - Enable sync use on Intel 11.0 or later. Refs #3351. - 15:11 Changeset [57950] by - Fix enable_shared_from_this example. Refs #3404. - 13:39 Ticket #3679 (Comeau throws error compiling named_param_example.cpp) created by - Compiling Boost.Test examples using Comeau C/C++, I'm getting compilation … - 13:21 Changeset [57949] by - Fix SPARC asm operand failure. Refs #3678. Refs #3341. - 13:06 Changeset [57948] by - Get the tests warning free with gcc, and add conceptual-header-inclusion … - 12:38 Ticket #3678 (Build failure on SPARC64 architecture) created by - Boost 1.40.0 fail to build on SPARC64 machines. The issue arises when … - 12:28 Ticket #3677 (Serialization via a reference to a pointer fails at compile time) created by - The 'x11-toolkits/gigi' application failed to compile with the Boost-1.41. … - 11:58 Ticket #3676 (Jamfile filesystem lib) created by - Hi. Filesystem library depends on system library, so that it would be … - 11:19 Changeset [57947] by - Updates to the numeric_bindings traits part - 11:14 WarningFixes edited by - (diff) - 11:10 Ticket #3675 (Boost.Parameter passing function pointers) created by - When passing function pointers with Boost.Parameter on Boost 1.41 on … - 10:43 Changeset [57946] by - Fix some typos. - 10:40 Changeset [57945] by - Strip out <copydoc> tags when converting from doxygen XML to Boostbook. - 09:30 Changeset [57944] by - Spirit: added another Karma example - 08:35 Changeset [57943] by - Oops, tests should return the error code is any. - 08:24 BoostDocs/PDF edited by - (diff) - 08:23 BoostDocs/PDF edited by - (diff) - 08:05 Ticket #3674 (property_tree/examples/custom_data_type.cpp doesn't build) created by - libs/property_tree/examples/custom_data_type.cpp expects the pre-Sebastian … - 07:59 Changeset [57942] by - Changed to use the lightweight test framework - we can now test with a … - 06:08 Ticket #1225 (Exact-sized Integer Type Selection Templates) closed by - fixed: (In [57941]) Added support for exact width integer type to int_t and … - 06:08 Changeset [57941] by - Added support for exact width integer type to int_t and uint_t Updated … - 04:35 Changeset [57940] by - Using BOOST_ASSERT rather than assert - 03:38 Changeset [57939] by - eliminate warning on darwin-4.2.1 toolset - 03:21 Changeset [57938] by - updated TNT container bindings - 03:13 Changeset [57937] by - Added missing BOOST_THREAD_DECL for at_thread_exit_function - 03:13 Changeset [57936] by - Don't use timed_lock to do a lock - 02:22 Ticket #3673 (boost python and weak_ptr from a shared_ptr argument) created by - if you save a weak_ptr of a shared_ptr passed to you by python, it expires …) Note: See TracTimeline for information about the timeline view.
https://svn.boost.org/trac/boost/timeline?from=2009-11-26T17%3A16%3A19-05%3A00&precision=second
CC-MAIN-2016-26
refinedweb
808
62.38
Wait for a browser page load to complete before returning #include <IE.au3> _IELoadWait ( ByRef $oObject [, $iDelay = 0 [, $iTimeout = -1]] ) Several IE.au3 functions call _IELoadWait() automatically (e.g. _IECreate(), _IENavigate() etc.). Most functions that do this also allow you to turn this off with a $fWait parameter if you do not want the wait or if you want to call it yourself. When document objects or DOM elements are passed to _IELoadWait(), it will check the readyState of the container elements up to and including the parentWindow. Browser scripting security restrictions may sometimes prevent _IELoadWait() from guaranteeing that a page is fully loaded and can occasionally result in untrapped errors. In these cases you may need to avoid calling _IELoadWait() and attempt to employ other methods of insuring that the page load has completed. These methods might include using a Sleep command, examining browser status bar text and other methods. When using functions that call _IELoadWait() for objects other than the InternetExplorer (browser) object, you may also be successful by calling _IELoadWait() for the browser yourself (e.g. _IELoadWait($oIE)). The most common causes of trouble are page redirects and cross-site scripting security restrictions associated with frames. Page re-writing techniques employed by some applications (e.g. Gmail) can also cause trouble here. _IEAction, _IEBodyWriteHTML, _IECreate, _IEDocWriteHTML, _IEFormImageClick, _IEFormSubmit, _IEImgClick, _IELinkClickByIndex, _IELinkClickByText, _IELoadWaitTimeout, _IENavigate ; Open the AutoIt forum page, tab to the "View new posts" ; link and activate the link with the enter key. ; Then wait for the page load to complete before moving on. #include <IE.au3> Local $oIE = _IECreate("") Send("{TAB 12}") Send("{ENTER}") _IELoadWait($oIE)
https://www.autoitscript.com/autoit3/docs/libfunctions/_IELoadWait.htm
CC-MAIN-2018-22
refinedweb
270
55.13
State synchronization refers to the synchronization of values such as integers, floating point numbers, strings and boolean values belonging to scripts on your networked GameObjects. State synchronization is done from the Server to remote clients. The local client does not have data serialized to it. It does not need it, because it shares the Scene with the server. However, SyncVar hooks are called on local clients. Data is not synchronized in the opposite direction - from remote clients to the server. To do this, you need to use Commands. SyncVars are variables of scripts that inherit from NetworkBehaviour, which are synchronized from the server to clients. When a GameObject is spawned, or a new player joins a game in progress, they are sent the latest state of all SyncVars on networked objects that are visible to them. Use the [SyncVar] custom attribute to specify which variables in your script you want to synchronize, like this: class Player : NetworkBehaviour { [SyncVar] int health; public void TakeDamage(int amount) { if (!isServer) return; health -= amount; } } The state of SyncVars is applied to GameObjects on clients before OnStartClient() is called, so the state of the object is always up-to-date inside OnStartClient(). SyncVars can be basic types such as integers, strings and floats. They can also be Unity types such as Vector3 and user-defined structs, but updates for struct SyncVars are sent as monolithic updates, not incremental changes if fields within a struct change. You can have up to 32 SyncVars on a single NetworkBehaviour script, including SyncLists (see next section, below). The server automatically sends SyncVar updates when the value of a SyncVar changes, so you do not need to track when they change or send information about the changes yourself. While SyncVars contain values, SyncLists contain lists of values. SyncList contents are included in initial state updates along with SyncVar states. Since SyncList is a class which synchronises its own contents, SyncLists do not require the SyncVar attribute. The following types of SyncList are available for basic types: SyncListString SyncListFloat SyncListInt SyncListUInt SyncListBool There is also SyncListStruct, which you can use to synchronize lists of your own struct types. When using SyncListStruct, the struct type that you choose to use can contain members of basic types, arrays, and common Unity types. They cannot contain complex classes or generic containers, and only public variables in these structs are serialized. SyncLists have a SyncListChanged delegate named Callback that allows clients to be notified when the contents of the list change. This delegate is called with the type of operation that occurred, and the index of the item that the operation was for. public class MyScript : NetworkBehaviour { public struct Buf { public int id; public string name; public float timer; }; public class TestBufs : SyncListStruct<Buf> {} TestBufs m_bufs = new TestBufs(); void BufChanged(Operation op, int itemIndex) { Debug.Log("buf changed:" + op); } void Start() { m_bufs.Callback = BufChanged; } }
https://docs.unity3d.com/ru/2021.1/Manual/UNetStateSync.html
CC-MAIN-2022-40
refinedweb
479
60.85
Taking a look at "systemml-0.10.0.incubating.jar" from maven-central... 1. Looks like we have code embedded here in other projects' namespaces: org.apache.wink , org.antlr, org.abego, com.google.common . Shouldn't we be using shade to re-namespace these so users do not have potential clashes? 2. I see the .dml files are included in the .jar under "scripts". But I am not sure how to load and use these with an oasaj.Connection . Is there something I am missing, or is this a to-do ? 3. Including "org.apache.systemml:systemml: 0.10.0-incubating" in my projects's POM did not seem to pull in any transitive dependencies. But just to instantiate an oasaj.Connection, it needed hadoop-common and hadoop-mapreduce-client-common. Is this an oversight or am I using the jar in the wrong way? Also, is there any plan to remove these dependencies? Ideally using the Java connector wouldn't need to pull in a significant portion of Hadoop. James Dyer Ingram Content Group
http://mail-archives.apache.org/mod_mbox/systemml-dev/201610.mbox/%3CBN6PR12MB1345B10D68A788A52842A1E996D40@BN6PR12MB1345.namprd12.prod.outlook.com%3E
CC-MAIN-2019-26
refinedweb
176
62.14
User Tag List Results 1 to 2 of 2 - Join Date - Jul 2004 - Location - NC - 194 - Mentioned - 0 Post(s) - Tagged - 0 Thread(s) undefined method `render' thrown in helper tet Hello, I'm trying to write a very simple test for a very simple method in a helper_test. Here's the method I'm testing in my_helper.rb: Code: def subscribe_email_link if @subscription_model_class render(:partial => '/subscription/toggle') end end I'm trying: Code: def test_subscribe_email_link @subscription_model_class = true link = subscribe_email_link end Is there a way I can add render functionality in this helper test? Thanks for your time. - Join Date - Jul 2004 - Location - Gerodieville Central, UK - 446 - Mentioned - 0 Post(s) - Tagged - 0 Thread(s) This is probably because your helper is supposed to be called in controller instance scope of template scope. try shoving 'include MyHelper' in clas scope of your Test to make the helper's method visible in test scope. I think that will do the trick - Rails Web Development - Rails powered Property Listings Bookmarks
http://www.sitepoint.com/forums/showthread.php?470177-undefined-method-render-thrown-in-helper-tet&p=3350312
CC-MAIN-2016-40
refinedweb
168
65.76
Python Crontab API Project description Bug Reports and Development Please report any problems to the launchpad bug tracker. Please use Bazaar and push patches to the launchpad project code hosting. four ways, two system methods: from crontab import CronTab system_cron = CronTab() user_cron = CronTab('root') And two ways from non-system sources:: list = cron.find_command('bar') Find an existing job by comment: list = cron.find_comment('ID or some text') Set and get the comment for a job: comment = job.meta(['New Comment for job']) Clean a job of all rules: job.clear() Iterate through all jobs: for job in cron: print job Iterate through all lines: for line in cron.lines: print line Remove Items: cron.remove( job ) cron.remove_all('echo') Write CronTab back to system or filename: cron.write() Write CronTab to new filename: cron.write( 'output.tab' ).
https://pypi.org/project/python-crontab/1.5.1/
CC-MAIN-2019-13
refinedweb
138
68.67
The import even though it didn’t exist anywhere on the file system. When I mentioned Django’s “magic-removal” effort the other day, I mentioned that it didn’t quite purge all of the “magic” from Django: there’s still a minor amount of magic in how custom template tags work, but it’s not nearly so bad and most people never even notice it (until they have a tag library which raises an ImportErrorand start wondering why Django thinks the tags should be living in django.templatetags, but that’s a story for another day). Today is another day and we now have a little better understanding of just what “magic” can mean, so let’s take a look at the last little bit of real magic left in Django. The magic If you’ve ever tried to load a tag or filter library that didn’t exist, or wasn’t inside in application in your INSTALLED_APPS setting, or which had an error inside it which raised an ImportError, you’ve probably seen an error message like this: TemplateSyntaxError: 'no_library_here' is not a valid tag library: Could not load template library from django.templatetags.no_library_here Which leads naturally to the question of why Django is looking in django.templatetags to find the tag library: shouldn’t it be looking inside the installed applications, hunting for templatetags modules there? The trick Unlike yesterday’s exercise, there’s no hacking of sys.modules going on here, and no magical generation of runtime module objects; when Django successfully loads a template tag library, it’s pulling it from the actual location on your file system where you put it. And, unlike the magic django.models namespace that models were placed into in Django 0.90 and 0.91, django.templatetags does actually exist; a quick look at the code shows the technique being used here (the following is, at the moment, the full content of django/templatetags/__init__.py: from django.conf import settings for a in settings.INSTALLED_APPS: try: __path__.extend(__import__(a + '.templatetags', {}, {}, ['']).__path__) except ImportError: pass This works because Python modules (or “packages” if you prefer that terminology, though it’s a bit loaded) let you assign to a specially-named variable — __path__ — which should be a list of modules. Each module listed in __path__ will, no matter where it’s actually located on your filesystem, be treated from then on as if it also exists as a sub-module of the module whose __path__ list it appears in. So what this code is doing is simply looping through the INSTALLED_APPS setting and, for each application listed, trying to import a templatetags module from that application and add that module’s __path__ (meaning the list of custom tag libraries inside it) to the __path__ of django.templatetags. If the attempted import raises an ImportError (which usually indicates there’s no templatetags module in a given application), Django simply skips that one and moves on to the next. The end result of this is that any custom tag libraries defined inside templatetags modules in your installed applications will — in addition to their normal locations — be importable from paths under django.templatetags. Practical magic In a lot of cases, apparently “magical” things really don’t serve any useful purpose and so can — and should — be removed in favor of more natural techniques. In this case, though, the “magical” extension of django.templatetags serves a very useful purpose: looping over INSTALLED_APPS and importing and initializing all the available tag libraries can be an expensive and time-consuming process. Doing it every time a template used the {% load %} tag would bring the performance of Django’s template system to a screeching halt, so we need to have some way of making it faster. In this case, on option would be to maintain a cache of known tag libraries — maybe the same sort of thing Python does with sys.modules to keep track of which modules have already been imported and initialized — and that wouldn’t be such a bad idea. But extending the __path__ of django.templatetags works just as well, and makes for extremely compact loading code: you can simply take the name of the tag library the {% load %} tag asked for, concatenate it onto the string “django.templatetags” and try to import the result. The __path__ method, then, gives the needed performance boost, and also has a useful side effect: since the list of all importable tag libraries lives in django.templatetags.__path__, it’s easy to loop through that list to find out what libraries are available (this is how the tag and filter documentation in the admin interface works, for example: it’s a simple for loop over django.templatetags.__path__. Also, this “magic” doesn’t get in the way of normal Python imports: the module stays right where it was originally defined, and you can — if you need access to code within it — simply import it exactly as you’d expect, without having to go through the django.templatetags namespace. Where it does cause problems This does sometimes cause confusion, because there are cases where the unexpected “Could not load template library from django.templatetags” message can be a red herring that leads people down the wrong path when looking for an error. The most common case is trying to load a tag library from an application that’s not listed in INSTALLED_APPS, but there’s also a subtler issue. Since Django loops through INSTALLED_APPS trying to import templatetags modules, and treats ImportError as meaning that there is no templatetags module in a particular application, a tag library which — through bad coding or misconfiguration — exists but happens to raise an ImportError will be silently ignored. This isn’t really a “bug” in Django, because the problem of handling situations like this — where, for example, you’re trying to import something to see if it exists, and an ImportError from another source gets in the way — is a long-standing issue for Python best practices. Best practices for Django template tags You generally won’t run into this problem unless you get into one of a few very specific situations, but it is useful to know that raising an ImportError from a custom tag library will cause it to “disappear”; this is often a bad thing, because custom tags are supposed to fail silently whenever possible (one design decision in the Django template system is that, in production, the types of template errors which can bring the site down should be kept to a minimum). One easy way to accomplish this is demonstrated in the markdown filter in django.contrib.markup which, obviously, requires the Python Markdown module in order to function. This module isn’t in the Python standard library and isn’t bundled with Django, so there’s a very real chance that the markdown module will need to be separately installed before this filter can work properly. To detect and deal with a missing Markdown module, the markdown filter does the following: try: import markdown except ImportError: if settings.DEBUG: raise template.TemplateSyntaxError, "Error in {% markdown %} filter: The Python markdown library isn't installed." return force_unicode(value) This does several useful things: - It makes sure the Markdown import happens inside the filter, rather than at the module level, which means an errant ImportErrorwon’t make the whole library “disappear”. - It wraps the import in a try/ exceptblock. - In case of an ImportErrorfrom import Markdown, it suppresses the error in production, and falls back to a default of simply returning the input. - When in development — i.e., when the DEBUGsetting is True— it raises TemplateSyntaxErrorwith a descriptive error message describing how to fix the problem. The remainder of the code in the markdown filter can then safely assume that the Markdown module is available, and can act accordingly. In general, combining one or more of these techniques will make your custom tag and filter libraries more robust and more useful in a variety of error situations, not just those where an ImportError can obscure a different underlying problem. And that’s a wrap I think I’m all talked out now on the subject of “magic”; hopefully at this point you’ve got a little better understanding of when and why things which appear to be magical can actually work on nothing more than very simple techniques, how it can be misleading sometimes to refer to things as “magic” when they might not be, and have a better understanding of how some specific instances of “magic” in Django (including all the ones we’ve removed) have been implemented.
https://www.b-list.org/weblog/2007/dec/04/magic-tags/
CC-MAIN-2019-04
refinedweb
1,442
53.85
Order this Book from Amazon In the previous chapter, we discussed all of the lofty goals of the ASP.NET MVC framework. In this chapter, we completely ignore them. In this chapter, we build a simple database-driven ASP.NET MVC application in the easiest way possible. We ignore design principles and patterns. We don’t create a single unit test. The goal is to clarify the basic mechanics of building an ASP.NET MVC application. Over the course of this chapter, we build a simple Toy Store application. Our Toy Store application will enable us to display a list of toys and create new toys. In other words, it will illustrate how to build a web application that performs basic database operations. *** Begin Note *** The third part of this book is devoted to an extended walkthrough of building an ASP.NET MVC application in the “right” way. We build a forums application by using test-driven development and software design principles and patterns. *** End Note *** Starting with a Blank Slate Let’s start by creating a new ASP.NET MVC application and removing all of the sample files. Follow these steps to create a new ASP.NET MVC Web Application Project: 1. Launch Visual Studio 2008. 2. Select the menu option File, New Project. 3. In the New Project dialog, select your preferred programming language and select the ASP.NET MVC Web Application template (see Figure 1). 4. Name your new project ToyStore and click the OK button. Figure 1 – Creating a new ASP.NET MVC application When you create a new ASP.NET MVC project, the Create Unit Test Project dialog appears automatically (see Figure 2). When asked whether you want to create a unit test project, select the option Yes, create a unit test project (In general, you should always select this option because it is a pain to add a new unit test project to your solution after your ASP.NET MVC project is already created). Figure 2 – The Create Unit Test Project dialog *** Begin Note *** The Create Unit Test Project dialog won’t appear when you create an ASP.NET MVC application in Microsoft Visual Web Developer. Visual Web Developer does not support Test projects. *** End Note *** As we discussed in the previous chapter, when you create a new ASP.NET MVC application you get several sample files by default. These files will get in our way as we build a new application. Delete the following files from your ASP.NET MVC project: [C#] ControllersHomeController.cs ViewsHomeAbout.aspx ViewsHomeIndex.aspx [VB] ControllersHomeController.vb ViewsHomeAbout.aspx ViewsHomeIndex.aspx Delete the following file from your Test project. [C#] ControllersHomeControllerTest.cs [VB] ControllersHomeControllerTest.vb *** Begin Tip *** If you always want to start with an empty ASP.NET MVC project then you can create a new Visual Studio project template after deleting the sample files. Create a new project template by selecting the menu option File, Export Template. *** End Tip *** Creating the Database We need to create a database and database table to contain our list of toys for our toy store. The ASP.NET MVC framework is compatible with any modern database including Oracle 11g, MySQL, and Microsoft SQL Server. In this book, we’ll use Microsoft SQL Server Express for our database. Microsoft SQL Server Express is the free version of Microsoft SQL Server. It includes all of the basic functionality of the full version of Microsoft SQL Server (it uses the same database engine). *** Begin Note *** You can install Microsoft SQL Server Express when you install Visual Studio or Visual Web Developer (it is an installation option). You also can download Microsoft SQL Server Express by using the Web Platform Installer which you can download from the following website: *** End Note *** Follow these steps to create a new database from within Visual Studio: 1. Right-click the App_Data folder in the Solution Explorer window and select the menu option Add, New Item. 2. In the Add New Item dialog, select the SQL Server Database template (see Figure 3). 3. Give the new database the name ToyStoreDB. 4. Click the Add button. Figure 3 – Adding a new SQL Server database After you create the database, you need to create the database table that will contain the list of toys. Follow these steps to create the Products database table: 1. Double-click the ToyStoreDB.mdf file in the App_Data folder to open the Server Explorer window and connect to the ToyStoreDB database. 2. Right-click the Tables folder and select the menu option Add New Table. 3. Enter the columns listed in Table 1 into the Table Designer (see Figure 4). 4. Set the Id column as an Identity column by expanding the Identity Specification node under Column Properties and setting the (Is Identity) property to the value Yes. 5. Set the Id column as the primary key column by selecting this column in the Table Designer and clicking the Set Primary Key button (the button with an icon of a key). 6. Save the new table by clicking the Save button (the button with the anachronistic icon of a floppy disk). 7. In the Choose Name dialog, enter the table named Products. Table 1 – Columns in the Products table Figure 4 – Creating the Products table *** Begin Note *** The Server Explorer window is called the Database Explorer window in the case of Visual Web Developer. *** End Note *** After you finish creating the Products database table, you should add some database records to the table. Right-click the Products table in the Server Explorer window and select the menu option Show Table Data. Enter two or three products (see Figure 5). Figure 5 – Entering sample data in the Products database table Creating the Model We need to create model classes to represent our database tables in our ASP.NET MVC application. The easiest way to create the data model classes is to use an Object Relational Mapping (ORM) tool to generate the classes from a database automatically. You can use your favorite ORM with the ASP.NET MVC framework. The ASP.NET MVC framework is not tied to any particular ORM. For example, ASP.NET MVC is compatible with Microsoft LINQ to SQL, NHibernate, and the Microsoft Entity Framework. In this book, we use the Microsoft Entity Framework to generate our data model classes. We focus on the Microsoft Entity Framework because the Microsoft Entity Framework is Microsoft’s recommended data access solution. *** Begin Note *** In order to use the Microsoft Entity Framework, you need to install .NET Framework 3.5 Service Pack 1. *** End Note *** Follow these steps to generate the data model classes: 1. Right-click the Models folder in the Solution Explorer window and select the menu option Add, New Item. 2. Select the Data category and the ADO.NET Entity Data Model template (see Figure 6). 3. Name the data model ToyStoreDataModel.edmx and click the Add button. Figure 6 – Adding ADO.NET Entity Data Model classes After you complete these steps, the Entity Model Data Wizard launches. Complete these wizard steps: 1. In the Choose Model Contents step, select the Generate from database option. 2. In the Choose Your Data Connection step, select the ToyStoreDB.mdf data connection and the entity connection name ToyStoreDBEntities (see Figure 7). 3. In the Choose Your Database Objects step, select the Products database table and enter Models for the namespace (see Figure 8). 4. Click the Finish button to complete the wizard. Figure 7 – Choose your data connection Figure 8 – Entering the model namespace After you complete the Entity Data Model Wizard, the Entity Designer appears with a single entity named Products (see Figure 9). The Entity Framework has generated a class named Products that represents your Products database table. Most likely, you’ll want to rename the classes generated by the Entity Framework. The Entity Framework simply names its entities with the same names as the database tables. Because the Products class represents a particular product, you’ll want to change the name of the class to Product (singular instead of plural). Right-click the Products entity in the Entity Designer and select the menu option Rename. Provide the new name Product. Figure 9 – The Entity Designer At this point, we have successfully created our data model classes. We can use these classes to represent our ToyStoreDB database within our ASP.NET MVC application. *** Begin Note *** You can open the Entity Designer at any time in the future by double-clicking the ToyStoreDataModel.edmx file in the Models folder. *** End Note *** Creating the Controller The controllers in an ASP.NET MVC application control the flow of application execution. The controller that is invoked by default is named the Home controller. We need to create the Home controller by following these steps: 1. Right-click the Controllers folder and select the menu option Add Controller. 2. In the Add Controller dialog, enter the controller name HomeController and select the option labeled Add action methods for Create, Update, and Details scenarios (see Figure 10). 3. Click the Add button to create the new controller. Figure 10 – Adding a new controller The Home controller is contained in Listing 1. Listing 1 – ControllersHomeController.cs [C#] using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.Mvc; using System.Web.Mvc.Ajax; namespace ToyStore 1 – ControllersHomeController.vb [VB] Public Class HomeController Inherits System.Web.Mvc.Controller ' ' GET: /Home/ Function Index() As ActionResult Return View() End Function ' ' GET: /Home/Details/5 Function Details(ByVal id As Integer) As ActionResult Return View() End Function ' ' GET: /Home/Create Function Create() As ActionResult Return View() End Function ' ' POST: /Home/Create <AcceptVerbs(HttpVerbs.Post)> _ Function Create(ByVal collection As FormCollection) As ActionResult Try ' TODO: Add insert logic here Return RedirectToAction("Index") Catch Return View() End Try End Function ' ' GET: /Home/Edit/5 Function Edit(ByVal id As Integer) As ActionResult Return View() End Function ' ' POST: /Home/Edit/5 <AcceptVerbs(HttpVerbs.Post)> _ Function Edit(ByVal id As Integer, ByVal collection As FormCollection) As ActionResult Try ' TODO: Add update logic here Return RedirectToAction("Index") Catch Return View() End Try End Function End Class Because we selected the option to generate Create, Update, and Details methods when creating the Home controller, the Home controller in Listing 1 includes these actions. In particular, the Home controller exposes the following actions: · Index() – This is the default action of the controller. Typically, this action is used to display a list of items. · Details(id) – This action displays details for a particular item. · Create() — This action displays a form for creating a new item. · Create(collection) – This action inserts the new item into the database. · Edit(id) – This action displays a form for editing an existing item. · Edit(id, collection) — This action update the existing item in the database. Currently, the Home controller only contains stubs for these actions. Let’s go ahead and take advantage of the data model classes that we created with the Entity Framework to implement the Index() and Create() actions. The updated Home controller is contained in Listing 2. Listing 2 – ControllersHomeController.cs [C#] using System.Linq; using System.Web.Mvc; using ToyStore.Models; namespace ToyStore.Controllers { public class HomeController : Controller { private ToyStoreDBEntities _dataModel = new ToyStoreDBEntities(); // // GET: /Home/ public ActionResult Index() { return View(_dataModel.ProductSet.ToList()); } // // GET: /Home/Create public ActionResult Create() { return View(); } // // POST: /Home/Create 2 – ControllersHomeController.vb [VB] Public Class HomeController Inherits System.Web.Mvc.Controller Private _dataModel As New ToyStoreDBEntities() ' ' GET: /Home/ Function Index() As ActionResult Return View(_dataModel.ProductSet.ToList()) End Function ' ' GET: /Home/Create Function Create() As ActionResult Return View() End Function ' ' POST: /Home/Create End Class Notice that a private field named _dataModel of type DBStoreEntities is defined at the top of the controller in Listing 2. The DBStoreEntities class was one of the classes that were generated by the Entity Model Data Wizard. We use this class to communicate with the database. The Index() action has been modified to return a list of products. The expression _dataModel.ProductSet.ToList() returns a list of products from the Products database table. There are two Create() actions. The first Create() action displays the form for creating a new product. The form is submitted to the second Create() action which actually performs the database insert of the new product. Notice that the second Create() action has been modified to accept a Product parameter. The Product class also was generated by the Entity Model Data Wizard. The Product class has properties that correspond to each column in the underlying Products database table. Finally, the Create() action calls the following methods to add the new product to the database: [C#] _dataModel.AddToProductSet(productToCreate); _dataModel.SaveChanges(); [VB] _dataModel.AddToProductSet(productToCreate) _dataModel.SaveChanges() Our Home controller now contains all of the necessary database logic. We can use the controller to return a set of products and we can use the controller to create a new product. Notice that both the Index() action and the first Create() action return a View. The next and final step is to create these views. Creating the Views An MVC view contains all of the HTML markup and view logic required to generate an HTML page. The set of views exposed by an ASP.NET MVC application is the public face of the application. *** Begin Note *** A view does not need to be HTML. For example, you can create Silverlight views. *** End Note *** Our simple application needs two views: the Index and the Create view. We’ll use the Index view to display the list of products and the Create view to display a form for creating new products. Adding the Index View Let’s start by creating the Index view. Follow these steps: 1. Build your application by selecting the menu option Build, Build Solution. 2. Right-click the Index() action in the Code editor window and select the menu option Add View (see Figure 11). 3. In the Add View dialog, select the option Create a strongly-typed view. 4. In the Add View dialog, from the dropdown list labeled View data class, select the ToyStore.Models.Product class. 5. In the Add View dialog, from the dropdown list labeled View Content, select List. 6. Click the Add button to add the new view to your project (see Figure 12). Figure 11 – Adding a view Figure 12 – The Add View dialog *** Begin Note *** You need to build your ASP.NET MVC application before adding a view with the Add View dialog in order to build the classes displayed by the View data class dropdown list. If your application has build errors then this list will be blank. *** End Note *** Views are added to the Views folder. Views follow a particular naming convention. A view returned by the Index() action exposed by the Home controller class is located at the following path: ViewsHomeIndex.aspx In general, views follow the naming convention: ViewsController NameAction Name.aspx The contents of the Index view are contained in Listing 3. This view loops through all of the products and displays the products in an HTML table (see Figure 13). Listing 3 – ViewsHomeIndex.aspx [C#] <%@ Page <title>Index</title> </asp:Content> <asp:Content <h2>Index</h2> <table> <tr> <th></th> <th> Id </th> <th> Name </th> <th> Description </th> <th> Price </th> </tr> <% foreach (var item in Model) { %> <tr> <td> <%= Html.ActionLink("Edit", "Edit", new { /* id=item.PrimaryKey */ }) %> | <%= Html.ActionLink("Details", "Details", new { /* id=item.PrimaryKey */ })%> </td> <td> <%= Html.Encode(item.Id) %> </td> <td> <%= Html.Encode(item.Name) %> </td> <td> <%= Html.Encode(item.Description) %> </td> <td> <%= Html.Encode(item.Price) %> </td> </tr> <% } %> </table> <p> <%= Html.ActionLink("Create New", "Create") %> </p> </asp:Content> Listing 3 – ViewsHomeIndex.aspx [VB] <%@ Page <title>Index</title> </asp:Content> <asp:Content <h2>Index</h2> <p> <%=Html.ActionLink("Create New", "Create")%> </p> <table> <tr> <th></th> <th> Id </th> <th> Name </th> <th> Description </th> <th> Price </th> </tr> <% For Each item In Model%> <tr> <td> <%--<%=Html.ActionLink("Edit", "Edit", New With {.id = item.PrimaryKey})%> | <%=Html.ActionLink("Details", "Details", New With {.id = item.PrimaryKey})%>--%> </td> <td> <%=Html.Encode(item.Id)%> </td> <td> <%=Html.Encode(item.Name)%> </td> <td> <%=Html.Encode(item.Description)%> </td> <td> <%=Html.Encode(item.Price)%> </td> </tr> <% Next%> </table> </asp:Content> Figure 13 – The Index view Notice that the Index view includes a link labeled Create New that appears at the bottom of the view. We add the Create view in the next section. Adding the Create View The Create view displays the HTML form for adding a new product. We can follow a similar set of steps to add the Create view: 1. Right-click the first Create() action in the Code editor window and select the menu option Add View. 2. In the Add View dialog, select the option Create a strongly-typed view. 3. In the Add View dialog, from the dropdown list labeled View data class, select the ToyStore.Models.Product class. 4. In the Add View dialog, from the dropdown list labeled View Content, select Create. 5. Click the Add button to add the new view to your project (see Figure 14). Figure 14 – Adding the Create view The Create view is added to your project at the following location: ViewsHomeCreate.aspx The contents of the Create view are contained in Listing 4. Listing 4 – ViewsHomeCreate.aspx [C#] <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage<ToyStore.Models> <% } %> <div> <%=Html.ActionLink("Back to List", "Index") %> </div> </asp:Content> Listing 4 – ViewsHomeCreate.aspx [VB] <%@ Page Title="" Language="VB" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage(Of ToyStore> <% End Using %> <div> <%=Html.ActionLink("Back to List", "Index") %> </div> </asp:Content> The Create view displays an HTML form for creating new products (see Figure 15). The Add View dialog generates HTML form fields that correspond to each of the properties of the Product class. If you complete the HTML form and submit it, a new product will be created in the database. Figure 15 – The Create view Summary In this chapter, we used the ASP.NET MVC framework to build a simple database-driven web application. We created models, views, and controllers. First, we created a database and a database model. We used Microsoft SQL Server Express for our database. We created our database model classes by taking advantage of the Microsoft Entity Framework. Next, we created the Home controller. We used Visual Studio to generate the actions for our Home controller automatically. We added a few lines of data access logic to interact with our database. Finally, we created two views. We created an Index view that displays a list of all of the products in an HTML table. We also added a Create view that displays an HTML form for adding a new product to the database. Just a quick question (as a beginner) after browsing this article: It looks like the View is based on HTML. What is the difference between using the Visual Webdeveloper and Visual Studio? Implementing Edit view or removing Edit link on Index view seems better. No difference between VS and Web Developer about html. Looks solid, Stephen. Good primer. Great job again. Well done! Hi, I found this article, a very good start up for a beginner to deals with Asp.Net MVC. Best f luck for further extension in this field bye Pawan Kumar Hi, Stephen! When your book will be published? I want to buy it, because I like your MVC video tutorials. Hi, good artical. When your book will be published? I suggest to update these to the strongly typed version (I understand there is one now): – return RedirectToAction(“Index”); – Html.ActionLink(“Edit”, “Edit”, new { /* id=item.PrimaryKey */ }) @Alexander – Thanks for your interest in when the book will be published. The dates keep shifting, so I can’t give you a firm date. Stephen- Another great chapter. Suggestions: 1) Sorry to be a broken record but the unit test warning should probably read.. “The Create Unit Test Project dialog won’t appear when you create an ASP.NET MVC application in Microsoft Visual Web Developer and Visual Studio Standard. These versions do not support Test projects.” 2) I assume the layout is off here on the web but if not then the NOTE about the Build is not close enough to Step 1 of “Adding the Index View”. Just wanted to say great article for beginners like me.Just one question though, When you first create an mvc app you have a dialogue box pops up and ask if you want to include testing? I do not get this when I create an mvc app. It just creates it without prompting me for testing.Any thoughts on why this is happening? Hi Stephen, Thanks for sharing. Here is what I like. It uses plain language, and it goes straight to the point. I would probably skip the database creation bit. Will definitely buy when it comes out. Cheers! @ bennyj You are to add a unit test project only when you’re using Visual Studio Pro. The standard edition doesn’t contain unit tests (MSTest). However you can build your own tests using NUnit or any other unit test framework. Found another typo: “We’ll use the Index view to displays the list” @Dan — Thanks, corrected in the text. After getting some feedback, I’ve modified the Create() action in the Home controller class. Notice that it now includes a [Bind] attribute to exclude the Product Id property. Great, but I think it’s getting a bit complicated again (like web forms). One of the main features of Ruby on Rails is its simplicity; and as far as I accept that part of the reason for this is that Ruby is a dynamic language, I think Asp.Net MVC can be simpler without needing to use such verbose functionality like Bind and verb… Hi, Stephen, I am wondering for the creating database step, if I want to connect to a local machine SQL 2005 database rather than the SQL Express one, what should I do? Thank you. Hi, You mention that the views need not be ASP.NET, I wondered whether you might light to include an example of creating them in Silverlight. This might illustrate how MVC & VS IDE supports/or not Silverlight. Cheers Stephen, Great chapter and thanks for posting it online for us. You may have covered this elsewhere but I noticed that the .aspx extension is applied on the action. I think it is a matter of choice but I have been placing it on the controller so I didn’t know if this was a typo. Just above listing 3: ViewsHomeIndex.aspx should it be? ViewsHome.aspxIndex Good chapter — just one minor thing. 1) An Account Controller and Views are also created in the default MVC app in the RC. So in the part about “delete the following files” you might want to have those deleted as well – or indicate they can be ignored. Thanks. Nice tutotial, or you could just use Ruby on Rails: and this tutorial would be complete in less than 10 lines of code: cd sites rails Blog cd Blog ./script/generate scaffold notice title:string content:string userid:string rake db:create rake db:migrate script/server and you have a webpage were you can edit, show, view, delete… Great chapter about creating a first MVC application. The sample is interesting enough without being too trivial and it shows specific features. My two suggestions: 1) In the section entitled Creating the Views, you mention that “a view does not need to be HTML. For example, you can create Silverlight views.” If you mention this possibility, be sure to site the chapter in which you show how to do this. Otherwise, it seems a little confusing. The reader might think, “why is he talking about this now?” 2) I found your videos with Paul Litwin on the ASP.NET website very instructive in sorting out the difference between ASP.NET MVC and the Web Forms development. As I watched those, he asked all the same questions I asked myself in sorting out how/why MVC development was different than Web Forms. I’m not sure exactly where the sections should be, whether it would be in this chapter, the previous chapter, or an appendix, but it might be a good thing to have a Compare and Contrast MVC and Web Forms section in the book. Thanks for putting these chapters online. a Great stuff, keep it coming! Hi Stephen, What if I have a relational database? I get this error “Object reference not set to an instance of an object.” in this line ‘< %=item.Blog_Category.bCatTitle%>‘. — < %@ Page Title="" Language="VB" MasterPageFile="~/Views/Shared/bSidebar.Master" Inherits="System.Web.Mvc.ViewPage(Of IEnumerable(Of MVCApplication1.Blog))" %> < %For Each item In Model%> < %=item.bTitle%> Filed under <%=item.Blog_Category.bCatTitle%> < %=item.bEntry%> Private _dataModel As New MVCAppDB_dataEntities() Function Index() As ActionResult ViewData(“Message”) = “Welcome to My Site!” Return View(_dataModel.Blog.ToList()) End Function — regards, imperialx To show you how invaluable this article is, I was going through Scott Guthries 1st Chapter in their MVC book, WHICH IS ONLY IN C#; when I had to return to this article to see how to implement some of their code in VB. Guess I’ll switch my impending purchase to your book. Thanx and million. Question: Why have they replace the VB evangilist ScottGu, with a clone from the C# world? Mr. Walther, Great post. I know the default out of the box is a horizontal menu with Home and About. Is there a way to position the Html.actionlinks vertically on the left side ? Looking forward to your upcoming MVC book. Thanks Good I am using VS2008 SP1, but I do not have an ASP.NET MVC Web Application Template. I can’t seem to find anywhere to download it. Where can I get it? Its very helpful for beginners. Really very useful information is given. Thanks. great i would like to know where i can find a example with two relational tables, like products and categories, thanks.. sick people and overseas pollution nightmares upon arrival. They are not infallible but have increased the days and hours or enjoying holidays. project management diploma | hr diploma and overseas pollution nightmares upon arrival. They are not infallible but have increased the days and hours or enjoying holidays. accredited degrees | MBA degree サイト M&A. 薬剤師 求人 is wonderful. 水漏れ is wonderful. 特許事務所 is wonderful. 電話代行 サービス. 行政書士 横浜 is wonderful. インフルエンザ 予防 is wonderful. 大阪 ホテル is wonderful. 浦和 一戸建て is wonderful. 浦和 一戸建て is wonderful. エアコンクリーニング is wonderful. ガーデンファニチャー is wonderful. 結婚式二次会 is wonderful. 土地活用 is wonderful. インプラント 治療 is wonderful. ゴルフ会員権 is wonderful. 埼玉県 物件 is wonderful. リトミック is wonderful. 相続 手続き 中古医療機器 great .Essay Service| Thanks for sharing live football scores—live soccer scores—Live sport scores Nice article kamagra tablets wholesale—kamagra supplier—kamagra buy 42 It’s lucky to know this, if it is really true. Companies tend not to realize when they create security holes from day-to-day operation. can some body tell me about the error what wil be correction in the code .. public ActionResult Create([Bind(Exclude=”Id”)]Product productToCreate) shoe this error the type or name space Product could not be found(are yo missing a using directive orassmbly referencerÇ
https://stephenwalther.com/archive/2009/02/07/chapter-2-building-a-simple-asp-net-mvc-application
CC-MAIN-2021-31
refinedweb
4,569
59.3
provides support for both MVC and MVVM application architectures. Both of these architectural approaches share certain concepts and focus on dividing application code along logical lines. Each approach has its strengths based on how it chooses to divide up the pieces of an application. The goal of this guide is to provide you with foundational knowledge regarding the components that make up these architectures. In an MVC architecture, most classes are either Models, Views or Controllers. The user interacts with Views, which display data held in Models. Those interactions are monitored by a Controller, which then responds to the interactions by updating the View and Model, as necessary. The View and the Model are generally unaware of each other because the Controller has the sole responsibility of directing updates. Generally speaking, Controllers will contain most of the application logic within an MVC application. Views ideally have little (if any) business logic. Models are primarily an interface to data and contain business logic to manage changes to said data. The goal of MVC is to clearly define the responsibilities for each class in the application. Because every class has clearly defined responsibilities, they implicitly become decoupled from the larger environment. This makes the app easier to test and maintain, and its code more reusable. The key difference between MVC and MVVM is that MVVM features an abstraction of a View called the ViewModel. The ViewModel coordinates the changes between a Model’s data and the View's presentation of that data using a technique called “data binding”. The result is that the Model and framework perform as much work as possible, minimizing or eliminating application logic that directly manipulates the View. Ext JS 5 introduces support for the MVVM architecture as well as improvements on the (C) in MVC. While we encourage you to investigate and take advantage of these improvements, it is important to note that we have made every effort to ensure existing Ext JS 4 MVC applications continue to function unmodified. To understand how these choices fit into your application, we should start by further defining what the various abbreviations represent. (M) Model - This is the data for your application. A set of classes (called “Models”) defines the fields for their data (e.g. a User model with user-name and password fields). Models know how to persist themselves through the data package and can be linked to other models via associations. Models are normally used in conjunction with Stores to provide data for grids and other components. Models are also an ideal location for any data logic that you may need, such as validation, conversion, etc. (V) View - A View is any type of component that is visually represented. For instance, grids, trees and panels are all considered Views. (C) Controller - Controllers are used as a place to maintain the view's logic that makes your app work. This could entail rendering views, routing, instantiating Models, and any other sort of app logic. (VM) ViewModel - The ViewModel is a class that manages data specific to the View. It allows interested components to bind to it and be updated whenever this data changes. These application architectures provide structure and consistency to your framework code. Following the conventions we suggest will provide a number of important benefits: Every application works in the same manner, so you only have to learn it once. It's easy to share code between applications. You can use Sencha Cmd to create optimized production versions of your applications. Before we start walking through the pieces, let's build a sample application with Sencha Cmd. First, download and unzip the Ext JS SDK. Next, issue the following commands from your command line: sencha -sdk local/path/to/ExtJS generate app MyApp MyApp cd app sencha app watch Note: If you are not familiar with what's happening above, please check out our Getting Started guide. Before we start talking about the pieces that make up the MVC, MVVM, MVC+VM patterns, let's take a look at the structure of a Cmd generated application. Ext JS applications follow a unified directory structure that is the same for every app. In our recommended layout, all classes are placed into the app folder. This folder contains sub-folders that namespace your Models, Stores, and View elements. View elements, such as View, ViewControllers, and ViewModels should stay grouped together as best practice for organization (see "main" view folder below). The first line of each Class is an address of sorts. This "address" is called a namespace. The formula for namespace is: <AppName>.<foldername>.<ClassAndFileName> In your sample app, "MyApp" is the AppName, "view" is the folder name, "main" is the sub-folder name and "Main" is the Class and Filename. Based on that information, the framework looks for a file called Main.js in the following location: app/view/main/Main.js If that file is not found, Ext JS will throw an error until you remedy the situation. Let’s start evaluating the application by looking at index.html. <!DOCTYPE HTML> <html> <head> <meta charset="UTF-8"> <title>MyApp</title> <!-- The line below must be kept intact for Sencha Cmd to build your application --> <script id="microloader" type="text/javascript" src="bootstrap.js"></script> </head> <body></body> </html> Ext JS uses the Microloader to load application resources described in the app.json file. This replaces the need to add them to index.html. With app.json all of the application meta-data exists in a single location. Sencha Cmd can then compile your application in a simple and efficient manner. app.json is heavily commented and provides an excellent resource for gleaning information about the information it accepts. When we generated our application earlier, we created a class (in Application.js) AND launched an instance of it (in app.js). You can see the content of app.js below: /* * This file is generated and updated by Sencha Cmd. You can edit this file as * needed for your application, but these edits will have to be merged by * Sencha Cmd when upgrading. */ Ext.application({ name: 'MyApp', extend: 'MyApp.Application', mainView: 'MyApp.view.main.Main' //------------------------------------------------------------------------- // Most customizations should be made to MyApp.Application. If you need to // customize this file, doing so below this section reduces the likelihood // of merge conflicts when upgrading to new versions of Sencha Cmd. //------------------------------------------------------------------------- }); The mainView config is a new feature as of Ext JS 5.1 (use autoCreateViewport when working with 5.0.x). By designating a container class for mainView, you can use any class as your Viewport. In the above example, we have determined MyApp.view.main.Main (a Container class) to be our Viewport. The mainView config instructs the application to create the designated view and attach the Viewport Plugin. This connects the view to the document body. Every Ext JS application starts with an instance of the Application Class. This class is intended to be launch-able by app.js as well as instantiable for testing. The following Application.js content is automatically created when you generate your application with Sencha Cmd. Ext.define('MyApp.Application', { extend: 'Ext.app.Application', name: 'MyApp', stores: [ // TODO: add global/shared stores here ], launch: function () { // TODO - Launch the application } }); The Application Class contains global settings for your application, such as the app's namespace, shared stores, etc. A View is nothing more than a Component, which is a subclass of Ext.Component. A view contains all of your application's visual aspects. If you open the starter app's Main.js under the "main" folder, you should see the following code. Ext.define('MyApp.view.main.Main', { extend: 'Ext.container.Container', xtype: 'app-main', controller: 'main', viewModel: { type: 'main' }, layout: { type: 'border' }, items: [{ xtype: 'panel', bind: { title: '{name}' }, region: 'west', html: '<ul>...</ul>', width: 250, split: true, tbar: [{ text: 'Button', handler: 'onClickButton' }] },{ region: 'center', xtype: 'tabpanel', items:[{ title: 'Tab 1', html: '<h2>Content ...</h2>' }] }] }); Please note that the view does not include any application logic. All of your view's logical bits should be included in the ViewController, which we'll talk about in the next section. This particular view defines a container with a border layout with a west and center region. These regions include a panel with a toolbar containing a button and a tab panel with a single tab. If you aren't familiar with these concepts, check out our Getting Started Guide. Two interesting pieces of this view are the controller and viewModel configs. The controller config allows you to designate a ViewController for the view. When a ViewController is specified on a view in this manner, it then becomes a container for your event handlers and references. This gives the ViewController a one-to-one relationship with the components and events fired from the view. We'll talk more about controllers in the next section. The viewModel config allows you to designate a ViewModel for the view. The ViewModel is a data provider for this component and its child views. The data contained in the ViewModel is typically used by adding bind configs to the components that want to present or edit this data. In the above view, you can see that the title of the west region's panel is bound to the ViewModel. This means that the title will be populated by data's "name" value, which is managed in the ViewModel. If the ViewModel's data changes, the title's value will be automatically updated. We'll discuss ViewModels later in this document. Next, let's take a look at Controllers. The starter app's generated ViewController MainController.js looks like this: Ext.define('MyApp.view.main.MainController', { extend: 'Ext.app.ViewController', requires: [ 'Ext.MessageBox' ], alias: 'controller.main', onClickButton: function () { Ext.Msg.confirm('Confirm', 'Are you sure?', 'onConfirm', this); }, onConfirm: function (choice) { if (choice === 'yes') { // } } }); If you look back at your view, Main.js, you'll notice a function designation for the tbar button's handler. That handler is mapped to a function called onClickButton in this controller. As you can see, this controller is ready to deal with that event with no special setup. This makes it incredibly easy to add logic for your application. All you need to do is define the onClickButton function since your controller has a one-to-one relationship with its view. Upon clicking the view's button, a Message box will be created. This message box contains its own function call with onConfirm, which is scoped to this same controller. ViewControllers are designed to: Make the connection to views using “listeners” and “reference” configs obvious. Leverage the life cycle of views to automatically manage their associated ViewController. From instantiation to destruction, Ext.app.ViewController is tied to the component that referenced it. A second instance of the same view class would get its own ViewController instance. When these views are destroyed, their associated ViewController instance will be destroyed as well. Provide encapsulation to make nesting views intuitive. Next, let's take a look at the ViewModel. If you open your MainModel.js file, you should see the following code: Ext.define('MyApp.view.main.MainModel', { extend: 'Ext.app.ViewModel', alias: 'viewmodel.main', data: { name: 'MyApp' } //TODO - add data, formulas and/or methods to support your view }); A ViewModel is a class that manages a data object. This class then allows views interested in this data to bind to it and be notified. We created a linkage from our view to the ViewModel with the ViewModel config in "Main.js". This linkage allows binding of configs with setters to automatically set data from the viewModel onto the view in a declarative fashion. The data is in-line in the "MainModel.js" example. That said, your data could be anything and come from anywhere. Data may be provided by any sort of proxy (AJAX, REST, etc). Models and Stores make up the information gateway of your application. Most of your data is sent, retrieved, organized, and "modeled" by these two classes. A Ext.data.Model represents any type of persist-able data in your application. Each model has fields and functions that allow your application to "model" data. Models are most commonly used in conjunction with stores. Stores can then be consumed by data-bound components like grids, trees, and charts. Our sample application does not currently contain a model so let's add the following code: Ext.define('MyApp.model.User', { extend: 'Ext.data.Model', fields: [ {name: 'name', type: 'string'}, {name: 'age', type: 'int'} ] }); As mentioned in the namespace section above, you would want to create User.js, which will live under "app/model/". Ext.data.Model describes records that contain values or properties called “fields”. The Model class can declare these fields using the fields config. In this case, the name is declared to be a string and age is an integer. There are other field types available in the API docs. While there are good reasons to declare fields and their types, doing so is not required. If you do not include the fields config, data will be automatically read and inserted into the data object. You will want to define your fields if your data needs: Validation Default values Convert functions Let's set up a store and see these two work together. A Store is a client side cache of records (instances of a Model class). Stores provide functions for sorting, filtering and querying the records contained within. This sample application does not contain a store, but not to worry. Simply define your store and assign the Model. Ext.define('MyApp.store.User', { extend: 'Ext.data.Store', model: 'MyApp.model.User', data : [ {firstName: 'Seth', age: '34'}, {firstName: 'Scott', age: '72'}, {firstName: 'Gary', age: '19'}, {firstName: 'Capybara', age: '208'} ] }); Add the above content to User.js, which should be placed in app/store/. You can add the store to Application.js's store config if you'd like a global instance of your store. The stores config in Application.js would look like this: stores: [ 'User' ], In this example, the store directly contains the data. Most real world situations would require that you gather data by using a proxy on your model or store. Proxies allow for data transfer between your data providers and applications. You can read more about models, stores, and data providers in our Data Guide. We've created a robust and useful application called the Ticket App. This application manages Login/Logout sessions, incorporates data binding, and displays "best practice" when utilizing an MVC+VM architecture. This example has been heavily commented so that everything is as clear as possible. We recommend you spend some time exploring the Ticket App to learn more about ideal MVC+VM Application Architecture.
http://docs.sencha.com/extjs/5.1.0/guides/application_architecture/application_architecture.html
CC-MAIN-2017-26
refinedweb
2,457
58.28
Recent: Archives: While going through "Polymorphism in its purest form," I saw the unfamiliar term factory method. Could you please describe what a factory method is and explain how I can use it? Factory method is just a fancy name for a method that instantiates objects. Like a factory, the job of the factory method is to create -- or manufacture -- objects. Let's consider an example. Every program needs a way to report errors. Consider the following interface: Listing 1 public interface Trace { // turn on and off debugging public void setDebug( boolean debug ); // write out a debug message public void debug( String message ); // write out an error message public void error( String message ); } Suppose that you've written two implementations. One implementation (Listing 2) writes the messages out to the command line, while another (Listing 3) writes them to a file. Listing 2 public class FileTrace implements Trace { private java.io.PrintWriter pw; private boolean debug; public FileTrace() throws java.io.IOException { // a real FileTrace would need to obtain the filename somewhere // for the example I'll hardcode it pw = new java.io.PrintWriter( new java.io.FileWriter( "c:\trace.log" ) ); } public void setDebug( boolean debug ) { this.debug = debug; } public void debug( String message ) { if( debug ) { // only print if debug is true pw.println( "DEBUG: " + message ); pw.flush(); } } public void error( String message ) { // always print out errors pw.println( "ERROR: " + message ); pw.flush(); } } Listing 3 public class SystemTrace implements Trace { private boolean debug; public void setDebug( boolean debug ) { this.debug = debug; } public void debug( String message ) { if( debug ) { // only print if debug is true System.out.println( "DEBUG: " + message ); } } public void error( String message ) { // always print out errors System.out.println( "ERROR: " + message ); } } To use either of these classes, you would need to do the following: Listing 4 //... some code ... SystemTrace log = new SystemTrace(); //... code ... log.debug( "entering loog" ); // ... etc ... Now if you want to change the Trace implementation that your program uses, you'll need to edit each class that instantiates a Trace implementation. Depending upon the number of classes that use Trace, it might take a lot of work for you to make the change. Plus, you want to avoid altering your classes as much as possible. A factory method lets us be a lot smarter about how our classes obtain Trace implementation instances: Listing 5 public class TraceFactory { public static Trace getTrace() { return new SystemTrace(); } } getTrace() is a factory method. Now, whenever you want to obtain a reference to a Trace, you can simply call TraceFactory.getTrace(): Listing 6 //... some code ... Trace log = new TraceFactory.getTrace(); //... code ... log.debug( "entering loog" ); // ... etc ... Using a factory method to obtain an instance can save you a lot of work later. In the code above, TraceFactory returns SystemTrace instances. Imagine again that your requirements change and that you need to write your messages out to a file. However, if you use a factory method to obtain your instance, you need to make only one change in one class in order to meet the new requirements. You do not need to make changes in every class that uses Trace. Instead you can simply redefine getTrace(): Listing 7 public class TraceFactory { public static Trace getTrace() { try { return new FileTrace(); } catch ( java.io.IOException ex ) { Trace t = new SystemTrace(); t.error( "could not instantiate FileTrace: " + ex.getMessage() ); return t; } } } Further, factory methods prove useful when you're not sure what concrete implementation of a class to instantiate. Instead, you can leave those details to the factory method. In the above examples your program didn't know whether to create FileTrace or SystemTrace instances. Instead, you can program your objects to simply use Trace and leave the instantiation of the concrete implementation to a factory method. Both implementation.By Anonymous on April 27, 2009, 11:15 amWhat if we need both the implementation. One implementation at some places and secon implementation at other place. Reply | Read entire comment correctionBy Anonymous on March 20, 2009, 7:47 am"One implementation (Listing 2) writes the messages out to the command line, while another (Listing 3) writes them to a file." should read "One implementation... Reply | Read entire comment Listing 6 has a bug - the keyword 'new' shouldn't be there.By Anonymous on October 24, 2008, 10:15 pmIt should read: //... some code ... Trace log = TraceFactory.getTrace(); //... code ... log.debug( "entering loog" ); // ... etc ... Reply | Read entire comment View all comments
http://www.javaworld.com/javaworld/javaqa/2001-05/02-qa-0511-factory.html
crawl-002
refinedweb
734
57.98
Free SMS API with Textbelt Open Source "Free SMS API" is a bit of a misnomer. Services that offer free SMS sending are usually taking a loss in order to give users a "free trial" and charge them afterwards. These trials tend to be restricted to test phone numbers to deter abuse by spammers. Examples include Twilio and Plivo. If you are an engineer trying to find a free SMS service, it is important to understand that every phone network charges money to deliver SMS. No service is offering free SMS credit without some ulterior motive. The hosted version of the Textbelt API provides one free SMS per day, which is useful if you want to send SMS only occasionally. If you just want to send multiple private SMS without paying and without relying on 3rd-party services, there is another approach that may work for you. Most phone carriers offer email-to-SMS gateways that deliver SMS. These gateways avoid network charges by sending SMS through email. The open source version of Textbelt uses these gateways to send SMS for free. The software itself is available on Github under the permissive MIT license. Textbelt was developed by Ian Webster in 2012 and uses email-to-SMS gateways to provide a free SMS API. It includes a library of each carrier's free email/SMS gateways and uses them to deliver SMS. There are caveats. Gateways are not always reliable and emails are not always accepted by carriers. However, if you host your own email and configure the Textbelt nodemailer to use your own email solution (a private server or a service like Mailgun or Sendgrid), you should be able to deliver most of your SMS. In general, you'll have less control over your SMS because some carriers will append notes like "sent via Email" to your message. Note that there are a few minor carriers (such as Google Fi) that do not offer email-to-SMS gateways. After setting up your Textbelt server, you can send SMS with a simple POST request: curl -X POST \ --data-urlencode number=5551234567 \ --data-urlencode "message=I sent this message for free with Textbelt" This example would look through the list of U.S. carriers and dispatch an SMS (via email gateway) to all relevant carriers. Here's the same example in Python: import requests requests.post('', { 'phone': '5551234567', 'message': 'I sent this message for free with Textbelt', }) SMS deliverability can be a challenge even for paid providers. If you want assurance of SMS delivery or a way to check whether your message was delivered, you'll wind up having to pay the carriers. Textbelt.com used to host this free SMS API, but after 5 years it was overrun by spammers, so now we've just provided the code online so anyone can set up their own private instance. There is a new paid offering for people who needed greater assurance of delivery versus the open-source free approach. The API request is the same, except you provide a key parameter and you can use the textbelt.com endpoint: curl -X POST \ --data-urlencode number=5551234567 \ --data-urlencode "message=I sent this message for free with Textbelt" \ -d key=textbelt Please reach out if you have any questions, whether about setting up the free Textbelt SMS API or using the paid version!
http://textbelt.com/blog/free-sms-api-open-source/
CC-MAIN-2020-29
refinedweb
561
69.01
0 hi all! :) well i wrote a sample code to test threads and i don't get what i want the problem is that the thread i created is starting directly after i create it? but he must start when my j is equal to 25 and then i joining the thread what is the problem? screen: Result code: #include <iostream> #include <thread> #include <Windows.h> using namespace std; void f1() { for (int i = 0; i < 50; i++) { cout << i << endl; Sleep(100); } } void f2() { cout << "Hello World1" << endl; } int main() { thread t1(f1); for (int j = 0; j < 50; j++) { if (j == 25) t1.join(); cout << j << endl; Sleep(500); } cin.get(); } thanks in advance :)
https://www.daniweb.com/programming/software-development/threads/443160/thread-confusion
CC-MAIN-2018-43
refinedweb
116
89.48
First of all, let me apologise at the length of time it's taken me to post a follow up to part 2 of setting up your own business. One of the things that you come to realise when you run a business is that often you can't get down to doing the things you want to do when you want to, because there are so many competing demands on you, and the big demand on my time at the moment is the financial year end, so I apologise wholeheartedly. This article is going to take a different approach to the rest of the series, and I hope you like it. If you like, you can think of this as Part 2A. This article is the result of input I've had from others and the viewpoint of one of CodeProject's most famous business men, Mr. Marc Clifton. So, without further ado, I'd like to take you on an interview between myself and Marc. This article will not guarantee success. It is not intended to replace all the hours that you are going to have to work in order to develop your client base, and it only deals with working with clients. Pete: So Marc, how do you see marketing and how do you look at marketing yourself? Marc: Well, Pete, marketing works in two ways - namely, how do people find you, and how do you reach out to others? Interesting. So how does this work for you? Well, a company hires marketing people and spends money marketing itself. As a "soloist", money is cheap, especially when you don't have any. But you probably do have a lot of time, at least until you land that big contract, and time is money! So, invest in yourself by spending the time to market yourself. Even when you land that big contract, keep investing some of your time in marketing. You never know when that contract will go south, leaving you with anywhere from 2 weeks to 6 months (or more) before you recover your income stream. If you haven't been marketing yourself while also working, it'll likely be 6-12 months before you recover. And by then, you'll have given up and taken a job while wishing you were working for yourself, being your own boss. But, in your case, you have a certain amount of marketing going on through CodeProject. Doesn't this help you? After all, people who know CodeProject know who you are. It doesn't take long for people to get to know who Marc is without even having to use your full name. You share a certain illustriousness with other CP luminaries such as Josh, Karl, Sacha, Chris, Nish, and Christian. To a large extent, you are a brand name because of your time on CP. After starting MyXaml and writing all those CodeProject articles, I was interviewed for an online magazine, people hired me, and it led to what a friend of mine said once: "you know, the name Marc Clifton has become branded." The ultimate in marketing is to have your name branded: "Xerox this paper for me", for example. Your name, as a brand name, associates you with your product and your services. It's quite gratifying to be told "you know, when I interviewed this fellow and told him you were working on the project, his jaw dropped and he exclaimed 'You have the famous Marc Clifton on this project???'". Yeah man. Branding. Nice. That leads to the relevance to you of an online presence. Nothing beats having the Internet and search services working for you, 24/7/365. Get yourself into the presence of these search engines. (I assume this means that Marc gets one day off every 4 years ;->). What are you selling? Your mother or your expertise? I would imagine your expertise. Put together a website that illustrates your expertise. Load it full of buzzword bingo terms so that search engines will find you. For example, when I wrote MyXaml, I, of course, used the word XAML in lots of places. When people Google'd for XAML, the MyXaml website was one of the first few links at that time! And believe me, it generated interest. How about blogging? In my first article in this series, I touched on the importance of blogging and how this should be handled when you are representing a company. Specifically, I talked about not being controversial with a company voice. As a soloist, what are your opinions on blogging? How do you see it, and how does it differ from my corporate viewpoint? Blog about what you're an expert in, and blog about what you're learning and investigating. And that's part of investing in yourself--spend time learning about something you don't know anything about, and write about it. Apply your expertise toward things you are learning about--build technical bridges, and you will discover that you build people bridges. A blog is real-time--I rarely read people's old blog entries--and its value is in keeping abreast with what others are doing and the sites and technical articles to which they link. Your blog should keep people abreast of your interests and link to your website and your wiki. In understanding the difference between a blog and a wiki, I've come to feel that a blog is like a journal or a diary. But a wiki is a hierarchical repository of knowledge. It's searchable, and can be filled with interesting tidbits of knowledge. Document tips and tricks about the Operating System and the tools you use. Post code snippets and examples using newfangled technologies. Unlike your website, which presents your "face" to the world, the wiki is an information organization tool. And, a wiki is a good place for sound bites of information rather than full-blown, professionally written articles that is more appropriate for your website. That's interesting. I must admit that I haven't really considered the relevance of a wiki to my company and my customers. Sure, we have an internal Wiki, but that's no use to our customers. Thanks for the hint. Any other hints? How about advertising? Google ads let you create a small advertisement within your budget constraints. I've used these effectively to market MyXaml, and I think that Google ads are an effective way to also market yourself and your services. Track your clicks, play with the keywords, and give it a few months to see if you feel it's working for you. Is there some new technology that doesn't have a lot of web presence yet? Parallel FX? Cloud computing? .NET 3.5? Find one of these technologies and put keywords on your website, blog about it, put some wiki pages up that show some stuff you've done in that technology. The point is, ride the coattails of what the technology leaders are doing, so that when someone does a search on that technology, your website, blog, or wiki comes up. Marketing is not voodoo. You have the tools to track the hits, the blog trackbacks, etc., that your website is getting. If you have downloads of sample applications, track those. The information is invaluable in determining whether you are reaching people. Ask others what their website statistics are. You'll probably find that people are willing to share statistics as it gives everyone more information, and you're networking with others at the same point. Your sig is a free and "in the face" way of advertising yourself. How many forums do you visit, and how many posts do you make? Every post should have a sig on it that advertises (discretely and professionally) your services. Every time I make a post on CodeProject, inane or not, I think "hey, there's my sig and someone may contact me". I even advertise my neighbor's bed and breakfast on it, and because it's such a curiosity item, when she did an analysis on her website hit count, do you know that CodeProject links were her number one hits? That's interesting Marc. It's one of the areas that I really struggled with when I first started out on CodeProject. Way back, I made a conscious decision not to advertise my company's services through my sig because I didn't want it to seem that the only reason I was posting was to get free advertising, but I'll certainly think about this again. Anything else? Consider visiting forums that have to do with your hobbies and non-technical interests. Model railroading? Photography? Pedigree dog breeding? Post there, and put into your sig your services. Networking outside of your technical area is an untapped market. Consider this. The technical market is saturated with consultants. But what you want to do is find people that, if they see your sig, will think, "hey, this guy and I already share an interest in biodynamic farming, and I could use a new website, let me give him a call". You've now circumnavigated the minefield of service providers by having a special relationship with someone because you've posted outside of your primary market. This works. Many of my contracts have come from associations with my neighbors and my son's school, both the staff and the parent body. They may not be megacontracts, but they sure are nice to put some money into the Christmas fund. Offer something. For free. It's the bait that gets the fish to swallow the hook, line, and sinker (a contract for you). What about free services? One of the simplest things to set up is a focused wiki. Draw people to you by offering them a service--a place to chat, a place to contribute something themselves, a place to find some technical information. Don't be an anonymous cog in the wheel by contributing to an Open Source project, create one of your own! There are several sites for hosting Open Source projects (SourceForge being the most famous). The point is to get yourself "out there"--be visible to people. Link to your OS projects on your website and on your resume. To be totally crass about it, frankly, it doesn't matter if you even contribute to your own OS project other than getting a basic website up that defines the goals of the project. The point is, it's a vehicle for making yourself more visible to the techno-world. The goal is to have your name or your company's name appear when someone does a search for the technology "NeXtGreatThing". You're an expert, right? Well, buy a copy of Camtasia, and make some training videos in something in which you're an expert. Post them on YouTube. Post them on your website. Put them in your sig. A video is great--I can hear your voice, and if you have a webcam, I can see what you look like (Camtasia can put your webcam pic into a small window as part of your screen video). As a result of seeing and hearing you, I've now become emotionally invested in you by the fact that you are now a presence in my consciousness. Hopefully, not a presence I want to expunge. If you haven't watched one of those Microsoft Channel 9 videos, try a few. It's amazing how sound and pictures will get you to buy into total hogwash. It's amazing how you start nodding your head in agreement to logic chains that, if you were to write them down, would sound like total tripe. You don't believe me? Try it out. Train your brain to really listen to the words. Close your eyes. Replay sentences until you have your own thoughts about the topic rather than filling your mind with someone else's thoughts. That is the power of AV--it turns off the analytical part of your brain because you are constantly having to process the auditory and visual inputs. It doesn't give you time to "stop and think". It fills your head with someone else's thinking. Use that to your advantage to sell yourself and your ideas. Can we touch on an area that we all know and love you for? Your writing. Well, your presence in the technical community is another important marketing tool. Invest in yourself by writing. If you say you're not a good writer, that's even more reason to write, because you'll become a better writer and a better communicator. When I first discovered CodeProject, I was grinning from ear to ear because here was finally a way for me to write without going through the horrendous process of finding a publisher, getting an idea approved, going through the technical editing cycle, dealing with peer reviews, and finally, a year later, when the idea has lost most of its value, the article gets published. I also like online articles because they are searchable, so they stay relevant for a much longer period of time, and I can publish them in a variety of places (usually my website), whereas a magazine may have restrictions both on original content and on whether (and when) you can publish the article on your own website. Oh, did I just slam publishing articles in magazines? Well, there definitely are some advantages. First of all, most of these magazines actually pay you money for your writing. So, you get exposure and you get paid for that exposure. If you succeed at branding yourself, magazines will pay more for your articles because your name sells magazines. Also, consider hooking up with editors in different countries. Ask them if they would be willing to translate your article for you and publish it. It's a global community, you've got to start thinking globally. So what do you think about networking? Obviously, there's this whole honking great CodeProject network thing going for you, but how do you approach networking in person? I used to think that "networking" meant going to user group meetings where a dozen, a hundred, five hundred or more geeks would all sit and watch a dog and pony show, and maybe I'd land that next big contract chatting with someone in the men's bathroom line. That is not "networking", unless you're a Congressman from Idaho (Google it if the joke goes over your head). Go to user groups, not as a participant, but as a presenter. If you go as a participant, ask intelligent questions. Make yourself known to the audience and the presenter. Hand out business cards. Eat pizza. But primarily, find out the user groups that are in your area (say, a hundred mile radius, somewhere you can drive to in a couple of hours might be well worth it). Find out who runs the user groups, ask them what their primary focus is (if they have one), ask them what they are looking for with regards to future presentations, and see if there's a match. If there's no immediate good match, suggest a few topics in which you feel qualified and see if anyone is interested. Attend a few meetings to get a feel for the level of quality, the "looseness" or "tightness" of the meetings, whether there's hands-on time, or is it all PowerPoint dog and pony. Start a user group. Contact your local schools and libraries and see if you can get a room for free. If you contact a school, tell them you'll provide free pizza and soft drinks and the high school students are more than welcome to attend too. Bar camps are not about learning how to mix drinks. They are loose conglomerates of technical presentations. Rather than loading up on Jolt to stay awake through the Virtual PC reboots of Microsoft's dog and pony shows, a Bar Camp is all about people of all types making presentations. They're short, and they're cool, and you won't fall asleep. The coolest ones I've seen are by 16 year old kids that know how to rock the latest technologies. They'll blow you away with the real problems they are dealing with and the real solutions to those problems. They're also fun--a few people, very open discussions, and it's a great way to practice and polish your presentations for your clients and your user groups. Take training courses. No, I don't mean attending a training seminar where you'll be another anonymous dweeb in the crowd. I mean, hold your own training seminars. Contact companies and find out if they are interested in tooling up to some new technology. Ask them what kind of presentation they'd like. Do they want a 30,000 foot view of WPF, or some hands-on training? Do they want some outside, independent advice on their policies and practices (nobody ever listens to employees, but they will listen to a highly paid consultant). Put some basic training course plans together and publish them on your website, then tailor them to the specific needs of the company. Wow - you've certainly put some thought into this. When you think about outreach, think outside of the box. Remember that your market is already saturated with people trying to get business. A market saturated with marketers is not a market I want to sell my oranges in. Think outside of the box to find clientele. Think about outreach. Contact schools and ask them if they need help administering networks, upgrading systems, making technology recommendations, training faculty and staff. The important point is that you're making lots of contacts. You may not end up administering a network, but you may end up calling Joe's Architectural Design and saying, "hey, Amy at the school your kid goes to suggested I call you because she remembered you want some business automation software written". A recommendation is 50-90% of the sale. While you're talking to that school, ask about mentoring programs. Do they need mentors, not just in programming, but perhaps you're good at Math or English? Besides making contacts outside of the box, mentoring is a great way to improve your communication skills. When the client stares at you with a blank expression and you realize you need to figure out how to say that technobabble you just emitted in client-speak, you can draw on your mentoring and presentation skills to get that client to nod his/her head. Obviously, the information above is biased by my own experiences. I've been fortunate enough to be in the right place and the right time to land a couple contracts more than 10 years ago that got me started as a (so far) marginally successful consultant. I had several failed attempts before then (never start in the game industry). I worry about failing in the future, and considering where I am financially today, I think some might argue that I'm not very successful. However, success isn't just measured by money. My girlfriend didn't have to go to work today (it's a snow day), we slept in, had breakfast at 10:30, I've been enjoying the week that my son has had off, and I'm writing this article while watching the lovely snow falling outside the window. Life. Consulting is hard work, but you feel alive. The people you meet and the things you do are fascinating and diverse and challenging. Thanks Marc, and I think we can all agree you're definitely a success in our eyes. I couldn't have written this article without the invaluable help of Marc, so I'd like to thank him for taking part in this rather unique (for me) article, and I'd like to thank everybody who has commented on previous articles. Please, as always, keep the comments and suggestions rolling in - as you can see here, they really do shape the direction of this series. In the next article, I'm going back to basics. Following several rather nice requests and comments, I'm going to be posting a lot of the paperwork that we produce to give you a head start. These cover things like Contract Schedules, TOAs, and so on. This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL) public class Naerling : Lazy<Person>{} Naerling wrote:Any comments on that? Forgive your enemies - it messes with their heads "Mind bleach! Send me mind bleach!" - Nagy Vilmos My blog | My articles | MoXAML PowerToys | Mole 2010 - debugging made easier - my favourite utility My blog | My articles charlieg wrote:One of the interesting things I see happening in the industry is the blatant jettisoning of institutional knowledge among companies. charlieg wrote:One of the interesting things I see happening in the industry is the blatant jettisoning of institutional knowledge among companies charlieg wrote:The sad part is that there are so many Marc's and Pete's in our companies today that are not recognized for the wealth of knowledge they have. Pete O'Hanlon wrote:A good way to identify the gaps is to perform something called a SWOT analysis - that's Strengths, Weaknesses, Opportunities and Threats. If you do this honestly and thoroughly, you will find you are in a better position to move forward. Muammar© wrote:Thanks for sharing! razvan_dme wrote:What if you are very young (fresh graduated or still in university) and you want to go 'sollo'. Paul Conrad wrote:Great article guys. Semper Fi[^] Mike Hankey wrote:A wealth of knowledge and experience, written in a great format. Pete O'Hanlon wrote:This one was fun to write. Pete O'Hanlon wrote:Marc's always a great guy to know - he's thoughtful and he's got strong opinions so there's never a dull moment. General News Suggestion Question Bug Answer Joke Rant Admin Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages.
http://www.codeproject.com/Articles/24759/Going-Solo-First-Steps-in-Building-a-Successful?msg=4062801
CC-MAIN-2015-32
refinedweb
3,717
71.14
Internet Engineering Task Force (IETF) N. Jenkins Request for Comments: 8620 Fastmail Category: Standards Track C. Newman ISSN: 2070-1721. Notational Conventions . . . . . . . . . . . . . . . . . 4 1.2.. underlying format used for this specification is JSON. Consequently, the terms "object" and "array" as well as the four primitive types (strings, numbers, booleans, and null) are to be interpreted as described in Section 1 of : o "*" - The type is undefined (the value could be any type, although permitted values may be constrained by the context of this value). o "String" - The JSON string type. o "Number" - The JSON number type. o "Boolean" - The JSON boolean type. o "A[B]" - A JSON object where the keys are all of type "A", and the values are all of type "B". o "A[]" - An array of values of type "A". o "A|B" - The value is either of type "A" or of type "B". Other types may also be given, with their representation defined elsewhere in this document. Object properties may also have a set of attributes defined along with the type signature. These have the following meanings: o "server-set" -- Only the server can set the value for this property. The client MUST NOT send this property when creating a new object of this type. o "immutable" -- The value MUST NOT change after the object is created. o "default" -- (This is followed by a JSON value). The value that will be used for this property if it is omitted in an argument or when creating a new object of this type., and IMAP atoms)." (because this sequence can be confused with the IMAP protocol expression of the null value) A good solution to these issues is to prefix every id with a single alphabetical character.]. 1.6. Terminology 1.6.1. User A user is a person accessing data via JMAP. A user has a set of permissions determining the data that they can see. 1.6.2. Accounts. 1.6.3. Data Types and Records Session object with details about the data and capabilities the server can provide as shown in Section 2. The client may then exchange data with the server in the following ways: 1. The client may make an API request to the server to get or set structured data. This request consists of an ordered series of method calls. These are processed by the server, which then returns an ordered series of responses. This is described in Sections 3, 4, and 5. 2. The client may download or upload binary files from/to the server. This is detailed in Section 6. You need two things to connect to a JMAP server: 1. The URL for the JMAP Session resource. This may be requested directly from the user or discovered automatically based on a username domain (see Section 2.2 below). 2. Credentials to authenticate with. How to obtain credentials is out of scope for this document. A successful authenticated GET request to the JMAP Session resource MUST return a JSON-encoded *Session* object, giving details about the data and capabilities the server can provide to the client given those credentials. It has the following properties: o capabilities: "String[Object]" An object specifying the capabilities of this server. Each key is a URI for a capability supported by the server. The value for each of these keys is an object with further information about the server's capabilities in relation to that capability. The client MUST ignore any properties it does not understand. The capabilities object MUST include a property called "urn:ietf:params:jmap:core". The value of this property is an object that MUST contain the following information on server capabilities (suggested minimum values for limits are supplied that allow clients to make efficient use of the network): * maxSizeUpload: "UnsignedInt" The maximum file size, in octets, that the server will accept for a single file upload (for any purpose). Suggested minimum: 50,000,000. * maxConcurrentUpload: "UnsignedInt" The maximum number of concurrent requests the server will accept to the upload endpoint. Suggested minimum: 4. * maxSizeRequest: "UnsignedInt" The maximum size, in octets, that the server will accept for a single request to the API endpoint. Suggested minimum: 10,000,000. * maxConcurrentRequests: "UnsignedInt" The maximum number of concurrent requests the server will accept to the API endpoint. Suggested minimum: 4. * maxCallsInRequest: "UnsignedInt" The maximum number of method calls the server will accept in a single request to the API endpoint. Suggested minimum: 16. * maxObjectsInGet: "UnsignedInt" The maximum number of objects that the client may request in a single /get type method call. Suggested minimum: 500. * maxObjectsInSet: "UnsignedInt" The maximum number of objects the client may send to create, update, or destroy in a single /set type method call. This is the combined total, e.g., if the maximum is 10, you could not create 7 objects and destroy 6, as this would be 13 actions, which exceeds the limit. Suggested minimum: 500. * collationAlgorithms: "String[]" A list of identifiers for algorithms registered in the collation registry, as defined in [RFC4790], that the server supports for sorting when querying records..3). o accounts: "Id[Account]" A map of an account id to an Account object for each account (see Section 1. * accountCapabilities: "String[Object]" The set of capability URIs for the methods supported in this account. Each key is a URI for a capability that has methods you can use with this account. The value for each of these keys is an object with further information about the account's permissions and restrictions with respect to this capability, as defined in the capability's specification. The client MUST ignore any properties it does not understand. The server advertises the full list of capabilities it supports in the capabilities object, as defined above. If the capability defines new methods, the server MUST include it in the accountCapabilities object if the user may use those methods with this account. It MUST NOT include it in the accountCapabilities object if the user cannot use those methods with this account. For example, you may have access to your own account with mail, calendars, and contacts data and also a shared account that only has contacts data (a business address book, for example). In this case, the accountCapabilities property on the first account would include something like "urn:ietf:params:jmap:mail", "urn:ietf:params:jmap:calendars", and "urn:ietf:params:jmap:contacts", while the second account would just have the last of these. Attempts to use the methods defined in a capability with one of the accounts that does not support that capability are rejected with an "accountNotSupportedByMethod" error (see "Method-Level Errors", Section 3.6.2). o primaryAccounts: "String[Id]" A map of capability URIs (as found in accountCapabilities) to the account id that is considered to be capability is supported by the server (and in the capabilities object). "urn:ietf:params:jmap:core" SHOULD NOT be present. o username: "String" The username associated with the given credentials, or the empty string if none. o apiUrl: "String" The URL to use for JMAP API requests. o downloadUrl: "String" The URL endpoint to use when downloading files, in URI Template (level 1) format [RFC6570].. o uploadUrl: "String" The URL endpoint to use when uploading files, in URI Template (level 1) format [RFC6570]. The URL MUST contain a variable called "accountId". The use of this variable is described in Section 6.1. o eventSourceUrl: "String" The URL to connect to for push events, as described in Section 7.3, in URI Template (level 1) format [RFC6570]. The URL MUST contain variables called "types", "closeafter", and "ping". The use of these variables is described in Section 7.3. o state: "String" A (preferably short) string representing the state of this object on the server. If the value of any other property on the Session object changes, this string will change. The current value is also returned on the API Response object (see Section 3.4), allowing clients to quickly determine if the session information has changed (e.g., an account has been added or removed), so they need to refetch the object. To ensure future compatibility, other properties MAY be included on the. 2.1. Example In the following example Session object, the user has access to their own mail and contacts via JMAP, as well as read-only access to shared mail from another user. The server is advertising a custom "" capability. { " ] }, . 3. Structured Data Exchange The client may make an API request to the server to get or set structured data. This request consists of an ordered series of method calls. These are processed by the server, which then returns an ordered series of responses. 3.1. Making an API Request To make an API request, the client makes an authenticated POST request to the API resource, Method calls and responses are represented by the *Invocation* data type. This is a tuple, represented as a JSON array containing three elements: 1. A "String" *name* of the method to call or of the response. 2. A "String[*]" object containing named *arguments* for that method or response. 3. A "String" *method call id*: an arbitrary string from the client to be echoed back with the responses emitted by that method call (a method may return 1 or more responses, as it may make implicit calls to other methods; all responses initiated by this method call get the same method call id in the response). 3.3. The Request Object A *Request* object has the following properties: o using: "String[]" The set of capabilities the client wishes to use. The client MAY include capability identifiers even if the method calls it makes do not utilise those capabilities. The server advertises the set of specifications it supports in the Session object (see Section 2), as keys on the "capabilities" property. o methodCalls: "Invocation[]" An array of method calls to process on the server. The method calls MUST be processed sequentially, in order. o createdIds: "Id[Id]" (optional) A map of "#" (see Section 5.3 for more details). As the server processes API requests, any time it successfully creates a new record, it adds the creation id to this map (see the "create" argument to /set in Section 5.3), object. This may be an empty object. If given in the request, the response will also include a createdIds property.. 3.3.1. Example Request { "using": [ "urn:ietf:params:jmap:core", "urn:ietf:params:jmap:mail" ], "methodCalls": [ [ "method1", { "arg1": "arg1data", "arg2": "arg2data" }, "c1" ], [ "method2", { "arg1": "arg1data" }, "c2" ], [ "method3", {}, "c3" ] ] } 3.4. The Response Object A *Response* object has the following properties: o methodResponses: "Invocation[]" An array of responses, in the same format as the "methodCalls" on the Request object. The output of the methods MUST be added to the "methodResponses" array in the same order that the methods are processed. o createdIds: "Id[Id]" (optional; only returned if given in the request) A map of a (client-specified) creation id to the id the server assigned when a record was successfully created. This MUST include all creation ids passed in the original createdIds parameter of the Request object, as well as any additional ones added for newly created records. o sessionState: "String" The current value of the "state" string on" } 3.5. Omitting Arguments An argument to a method may be specified to have a default value. If omitted by the client, the server MUST treat the method call the same as if the default value had been specified. Similarly, the server MAY omit any argument in a response that has the default value. Unless otherwise specified in a method description, null is the default value for any argument in a request or response where this is allowed by the type signature. Other arguments may only be omitted if an explicit default value is defined in the method description. 3.6. Errors, it. 3.6.1. Request-Level Errors When an HTTP error response is returned to the client, the server SHOULD return a JSON "problem details" object as the response body, as per [RFC7807]. The following problem types are defined: o "urn:ietf:params:jmap:error:unknownCapability" The client included a capability in the "using" property of the request that the server does not support. o "urn:ietf:params:jmap:error:notJSON" The content type of the request was not "application/json" or the request did not parse as I-JSON. o "urn:ietf:params:jmap:error:notRequest" The request parsed as JSON but did not match the type signature of the Request object. o . 3.6.1.1. Example { ." } 3.6.2. Method-Level Errors" }, "call-id" ] The response name is "error", and it MUST have a type property. Other properties may be present with further information; these are detailed in the error type descriptions where appropriate. With the exception of when the "serverPartialFail" error is returned,. 3.7. References to Previous Method Results arguments object contains the same argument name in normal and referenced form (e.g., "foo" and "#foo"), the method MUST return an "invalidArguments" error. A *ResultReference* object has the following properties: o resultOf: "String" The method call id (see Section 3.2) of a previous method call in the current request. o name: "String" The required name of a response to that method call. o path: "String" A pointer into the arguments of the response selected via the name and resultOf properties. This is a JSON Pointer [RFC6901], except it also allows the use of "*" to map through an array (see the description below). To resolve: 1. Find the first response with a method call JSON Pointer algorithm [RFC6901], except with the following addition in "Evaluation" (see Section 4): each item was itself an array, the contents of this array are added to the output rather than the array itself (i.e., the result is flattened from an array of arrays to a single", { : 1. Find the first response with method call id "t0". The "Foo/ changes" response fulfils this criterion. 2. Check that the response name is the same as in the result reference. It is, so this is fine. 3. Apply the "path" as a JSON Pointer to the arguments object. This simply selects the "created" property, so the result of evaluating is: [ "f1", "f4" ].): "accountId": "A1", "filter": { "inMailbox": "id_of_inbox" }, "sort": [{ "property": "receivedAt", "isAscending": false }], "collapseThreads": true, "position": 0, "limit": 10, "calculateTotal": true }, "t0" ], "accountId": "A1", "#ids": { "resultOf": "t0", "name": "Email/query", "path": "/ids" }, "properties": [ "threadId" ] }, "t1" ], [ "Thread/get", { "accountId": "A1", "#ids": { "resultOf": "t1", "name": "Email/get", "path": "/list/*/threadId" } }, "t2" ], "accountId": "A1", "#ids": { "resultOf": "t2", "name": "Thread/get", "path": "/list/*/emailIds" }, "properties": [ "from", "receivedAt", "subject" ] }, "t3" ]] After executing the first 3 method calls, the "methodResponses" array might be: "accountId": "A1", "queryState": "abcdefg", "canCalculateChanges": true, "position": 0, "total": 101, "ids": [ "msg1023", "msg223", "msg110", "msg93", "msg91", "msg38", "msg36", "msg33", "msg11", "msg1" ] }, "t0" ], "accountId": "A1", "state": "123456", "list": [{ "id": "msg1023", "threadId": "trd194" }, { "id": "msg223", "threadId": "trd114" }, ... ], "notFound": [] }, "t1" ], [ "Thread/get", { "accountId": "A1", "state": "123456", "list": [{ "id": "trd194", }, { "id": "trd114", }, ... ], "notFound": [] }, "t2" ]] To execute the final "Email/get" call, we look through the arguments and find there is one with a "#" prefix. To resolve this, we apply the algorithm: 1. Find the first response with method call id "t2". The "Thread/ get" response fulfils this criterion. 2. "Thread/get" is the name specified in the result reference, so this is fine. 3. Apply the "path" as a JSON Pointer to the arguments object. Token by token: 1. "list": get the array of thread objects 2. "*": for each of the items in the array: a. "emailIds": get the array of Email ids b. Concatenate these into a single array of all the ids in the result. The JMAP server now continues to process the "Email/get" call as though the arguments were: { "accountId": "A1", "ids": [ "msg1020", "msg1021", "msg1023", "msg201", "msg223", ... ], "properties": [ "from", "receivedAt", "subject" ] } The ResultReference performs a similar role to that of the creation id, in that it allows a chained method call to refer to information not available when the request is generated. However, they are different things and 3.9... Standard Methods and Naming Convention. Some types may not have all these methods. Specifications defining types MUST specify which methods are available for the type. 5.1. /get Objects of type Foo are fetched via a call to "Foo/get". It takes the following arguments: o accountId: "Id" The id of the account to use. o ids: "Id[]|null" The ids of the Foo objects to return. If null, then *all* records of the data type are returned, if this is supported for that data type and the number of records does not exceed the "maxObjectsInGet" limit. o: o accountId: "Id" The id of the account used for the call. o state: "String" A (preferably short). o list: "Foo[]" An array of the Foo objects requested. This is the *empty array* if no objects were found or if the "ids" argument passed in was. o When the state of the set of Foo records in an account: o accountId: "Id" The id of the account to use. o sinceState: "String" The current state of the client. This is the string that was returned as the "state" argument in the "Foo/get" response. The server will return the changes that have occurred since this state. o maxChanges: "Unsigned: o accountId: "Id" The id of the account used for the call. o oldState: "String" This is the "sinceState" argument echoed back; it's created: "Id[]" An array of ids for records that have been created since the old state. o updated: "Id[]" An array of ids for records that have been updated since the old state. o destroyed: "Id[]" An array of ids for records that. However, it MAY include it in just the "destroyed" list or in both the "destroyed" and "created" lists.. after a response that deems it as updated or destroyed, and it MUST NOT return a record as destroyed before a response that deems, this is). 5.3. /set: o accountId: "Id" The id of the account to use. o ifInState: "String|null" This is a state string as returned by the "Foo/get" method (representing the state of all objects of this type in the account). If supplied, the string must match the current state; otherwise, the method will be aborted and a "stateMismatch" error returned. If null, any changes will be applied to the current state. o create: "Id[Foo]|null" A map of a *creation id* (a temporary). o update: "Id[PatchObject]|null" A map], with an implicit leading "/" (i.e., prefix each key with "/" before applying the JSON Pointer evaluation algorithm). All paths MUST also conform to the following restrictions; if there is any violation, the update MUST be rejected with an "invalidPatch" error: * The pointer MUST NOT reference inside an array (i.e., you MUST NOT insert/delete from an array; the array MUST be replaced in its entirety instead). * All parts prior to the last (i.e., the value after the final slash) MUST already exist on the object being patched. * There MUST NOT be two patches in the PatchObject where the pointer of one is the prefix of the pointer of the other, e.g., "alerts/1/offset" and "alerts". The value send the whole object; the server processes it the same either way. o destroy: . Thus, the server.", Section 3.3) and the final state of the map passed out with the response (see "The Response Object", Section 3.4). The response has the following arguments: o accountId: created: thus set to a default by the server. This argument is null if no Foo objects were successfully created. o updated: . o destroyed: "Id[]|null" A list of Foo ids for records that were successfully destroyed, or null if none. o notCreated: "Id[SetError]|null" A map of the creation id to a SetError object for each record that failed to be created, or null if all successful. o notUpdated: "Id[SetError]|null" A map of the Foo id to a SetError object for each record that failed to be updated, or null if all successful. o notDestroyed: "Id[SetError]|null" A map of the Foo id to a SetError object for each record that failed to be destroyed, or null if all successful. A *SetError* object has the following properties: o type: "String" The type of error. o "tooLarge": (create; update). The create/update would result in an object that exceeds a server-defined limit for the maximum size of a single object of this type. o "rateLimit": (create). Too many objects of this type have been created recently, and a server-defined rate limit has been reached. It may work if tried again later. o "notFound": (update; destroy). The id given to update/destroy cannot be found. o "invalidPatch": (update). The PatchObject given to update the record was not a valid patch (see the patch description). o "willDestroy": (update). The client requested that an object be both updated and destroyed in the same /set request, and the server has decided to therefore ignore the update. o "invalidProperties": (create; update). The record given is invalid in some way. For example: * It contains properties that as. o "singleton": (create; destroy). This is a singleton type, so you cannot create another one or destroy the existing one.., so it is possible for the copy to succeed but the original not to be destroyed for some reason. The copy is conceptually in three phases: 1. Reading the current values from the "from" account. 2. Writing the new copies to the other account. 3. Destroying the originals in the "from" account, if requested. Data may change in between phases due to concurrent requests. The "Foo/copy" method takes the following arguments: o fromAccountId: "Id" The id of the account to copy records from. o ifFromInState: "String|null" This is a state string as returned by the "Foo/get" method. If supplied, the string must match the current state of the account referenced by the fromAccountId when reading the data to be copied; otherwise, the method will be aborted and a "stateMismatch" error returned. If null, the data will be read from the current state. o accountId: "Id" The id of the account to copy records to. This MUST be different to the "fromAccountId".. o create: "Id[Foo]" A map of. o destroyFromIfInState: "String|null" This argument is passed on as the "ifInState" argument to the implicit "Foo/set" call, if made at the end of this request to destroy the originals that were successfully copied. Each record copy is considered an atomic unit that may succeed or fail individually. The response has the following arguments: o fromAccountId: "Id" The id of the account records were copied from. o accountId: "Id" The id of the account records were copied to. o oldState: "String|null" The state string that would have been returned by "Foo/get" on the account records that were copied to before making the requested changes, or null if the server doesn't know what the previous state string was. o newState: "String" The state string that will now be returned by "Foo/get" on the account records were copied to. o created: "Id[Foo]|null" A map of the creation id to an object containing any properties of the copied Foo object that are set by the server (such as the "id" in most object types; note, the id is likely to be different to the id of the object in the account it was copied from). This argument is null if no Foo objects were successfully copied. o notCreated: "Id[SetError]|null" A map. 5.5. /query: o accountId: "Id" The id of the account to use. o. o [RFC4790], for the algorithm to use when comparing the order of strings. The algorithms the server supports are advertised in the capabilities object returned with the Session object (see Section 2). If omitted, the default algorithm is server dependent, but: 1. It MUST be unicode-aware. 2. It MAY be selected based on an Accept-Language header in the request (as defined in [RFC7231], Section 5.3.5) or out-of-band information about the user's language/locale. 3. It SHOULD be case insensitive where such a concept makes sense for a language/locale. Where the user's language is unknown, it is RECOMMENDED to follow the advice in Section 5.2.3 of [RFC8264].. o position: "Int" (default: 0) The zero, it's clamped to "0". This is now the zero-based index of the first id to return. If the index is greater than or equal to the total number of objects in the results list, then the "ids" array in the response will be empty, but this is not an error. o anchor: "Id|null" A Foo id. If supplied, the "position" argument is ignored. The index of this id in the results will be used in combination with the "anchorOffset" argument to determine the index of the first result to return (see below for more details). o). o limit: "UnsignedInt|null" The maximum number of results to return. If null, no limit presumed. The server MAY choose to enforce a maximum "limit" argument. In this case, if a greater value is given (or if it is null), the limit is clamped to the maximum; the new limit is returned with the response so the client is aware. If a negative value is given, the call MUST be rejected with an "invalidArguments" error. o, the anchor is looked for in the results after filtering and sorting. If found, the "anchorOffset" no "anchor" is supplied, any "anchorOffset" argument MUST be ignored. A client can use "anchor" instead of "position" to find the index of an id within a large set of results. The response has the following arguments: o accountId: "Id" The id of the account used for the call. o. o. o position: "UnsignedInt" The zero-based index of the first result in the "ids" array within the complete list of query results. o ids: . o total: "UnsignedInt" (only if requested) The total number of Foos in the results (given the "filter"). This argument MUST be omitted if the "calculateTotal" request argument is not true. o limit: "UnsignedInt" (if set by the server) The limit enforced by the server on the maximum number of results to return. This is only returned if the server set a limit or used a different limit than that given in the request. The following additional errors may be returned instead of the "Foo/ query" response: "anchorNotFound": An anchor argument was supplied, but it cannot be found in the results of the query. "unsupportedSort": The "sort" is syntactically valid, but that the user simplify their search. 5.6. /queryChanges The "Foo/queryChanges" method allows a client to efficiently update the state of a cached query to match the new state on the server. It takes the following arguments: o accountId: "Id" The id of the account to use. o filter: "FilterOperator|FilterCondition|null" The filter argument that was used with "Foo/query". o sort: "Comparator[]|null" The sort argument that was used with "Foo/query". o sinceQueryState: "String" The current state of the query in the client. This is the string that was returned as the "queryState" argument in the "Foo/query" response with the same sort/filter. The server will return the changes made to the query since this state. o maxChanges: "UnsignedInt|null" The maximum number of changes to return in the response. See error descriptions below for more details. o upToId: . o: o accountId: "Id" The id of the account used for the call. o oldQueryState: "String" This is the "sinceQueryState" argument echoed back; that is, the state from which the server is returning changes. o newQueryState: "String" This is the state the query will be in after applying the set of changes to the old state. o total: "UnsignedInt" (only if requested) The total number of Foos in the results (given the "filter"). This argument MUST be omitted if the "calculateTotal" request argument is not true. o removed: "Id[]" The "id" for every Foo that was in the query results in the old state and that is not in the results in the new state. If the server cannot calculate this exactly, the server MAY return the ids of. The position of these may have moved in the results, so they must be reinserted by the client to ensure its query cache is correct. o to be one change. The client may retry. 5.7. Examples Suppose we have a type *Todo* with the following properties: o id: "Id" (immutable; server-set) The id of the object. o title: "String" A brief summary of what is to be done. o keywords: "String[Boolean]" (default: {}) A set of keywords that apply to the Todo. The set is represented as an object, with the keys being the "keywords". The value for each key in the object MUST be true. (This format allows you to update an individual key using patch syntax rather than having to update the whole set of keywords as one, which a "String[]" representation would require.) o neuralNetworkTimeEstimation: "Number" (server-set) The title and keywords are fed into the server's state-of-the-art neural network to get an estimation of how long this Todo will take, in seconds. o subTodoIds: "Id[]|null" The ids of a list of other Todos, ": "Watch Daft Punk music video", "keywords": { "music": true, "video":", { "accountId": "x", ", { "accountId": "x", . Let us now add a sub-Todo to our new "Practise" ]] 5.8. Proxy Considerations: 1. It must pass a "createdIds" property with each subrequest. If this is not given by the client, an empty object should be used for the first subrequest. The "createdIds" property of each subresponse should be passed on in the next subrequest. 2. It must resolve back-references to previous method results that were processed on a different server. This is a relatively simple syntactic substitution, described in Section 3.7. When splitting a request based on accountId, proxy implementors do need to be aware of "/copy" methods that copy between accounts. If the accounts are on different servers, the proxy will have to implement this functionality directly. 6. Binary Data object in Internet Message Format [RFC5322].: o The server SHOULD use a separate quota for unreferenced blobs to the account's usual quota. In the case of shared accounts, this quota SHOULD be separate per user. MUST NOT be deleted for at least 1 hour from the time of upload; if reuploaded, the same blobId MAY be returned, but this SHOULD reset the expiry time. o A blob MUST NOT be deleted during the method call: o accountId: "Id" The id of the account used for the call. o blobId: : "UnsignedInt" The size of the file in octets.]. The URL MUST contain variables called "accountId", "blobId", "type", and "name". To download a file, the client makes an authenticated GET request to the download URL with the appropriate variables substituted in: o "accountId": The id of the account to which the record with the blobId belongs. o "blobId": The blobId representing the data of the file to download. o "type": The type for the server to set in the "Content-Type" header of the response; the blobId only represents the binary data and does not have a content-type innately associated with it. and use the "immutable" Cache- Control extension on the client. The "Blob/copy" method takes the following arguments: o fromAccountId: "Id" The id of the account to copy blobs from. o accountId: "Id" The id of the account to copy blobs to. o blobIds: "Id[]" A list of ids of blobs to copy to the other account. The response has the following arguments: o fromAccountId: "Id" The id of the account blobs were copied from. o accountId: "Id" The id of the account blobs were copied to. o copied: "Id[Id]|null" A map of the blobId in the fromAccount to the id for the blob in the account it was copied to, or null if none were successfully copied. o notCopied: "Id[SetError]|null" A map of blobId to a SetError object for each blob that failed to be copied, or null if none.. 7. Push: o @type: "String" This MUST be the string "StateChange". o changed: "Id[TypeState]" A map of an "account id" to an object encoding the state of data types that have changed for that account since the last StateChange object was pushed,). 7.1.1. Example In this example, the server has amalgamated a few changes together across two different accounts the user has access to, before pushing the following StateChange object to the client: { "@type": "StateChange", "changed": { "a3123": { . 7.2. PushSubscription: o id: "Id" (immutable; server-set) The id of the push subscription. o apps from other vendors. It SHOULD be easy to regenerate and not depend on persisted state. It is RECOMMENDED to use a secure hash of a string that contains: 1. A unique identifier associated with the device where the JMAP client is running, normally supplied by the device's operating system. 2. A custom vendor/app id, including a domain controlled by the vendor of the JMAP client. To protect the privacy of the user, the deviceClientId id MUST NOT contain an unobfuscated device id. o url: "String" (immutable) An absolute URL where the JMAP server will POST the data for the push message. This MUST begin with "https://". o keys: "Object|null" (immutable) Client-generated encryption keys. If supplied, the server MUST use them as specified" The authentication secret as described in [RFC8291], encoded in URL-safe base64 representation as defined in [RFC4648]. o verificationCode: "String|null" This MUST be null (or omitted) when the subscription is created. The JMAP server then generates a verification code and sends it in a push message, and the client updates the PushSubscription object with the code; see Section 7.2.2 for details.. o types: "String[]|null" A list of types the client is interested in (using the same names as the keys in the TypeState object defined in the previous section). A StateChange notification will only be sent if the data for one of these types changes. Other types are omitted from the TypeState object. If null, changes will be pushed for all types. [RFC7617]), the server SHOULD set an expiry time for the push subscription if none is, so this gives a reasonable time frame to allow this to happen. In the case of separate access and refresh credentials, as in Oauth 2.0 [RFC6749],. 7.2.1. PushSubscription/get. 7.2.2. PushSubscription/set: o @type: "String" This MUST be the string "PushVerification". o pushSubscriptionId: "String" The id of the push subscription that was created. o verificationCode: "String" The verification code to add to the push subscription. This MUST contain sufficient entropy to avoid the client being able to guess the code via brute force.). 7.2.3. Example response. field with the last id it saw, which the server can use to work out whether the client has missed some changes. If so, it SHOULD send these changes immediately on connection. The Session object (see Section 2) has an "eventSourceUrl" property, which is in URI Template (level 1) format [RFC6570]. The URL MUST contain variables called "types", "closeafter", and "ping". To connect to the resource, the client makes an authenticated GET request to the event-source URL with the appropriate variables substituted in: o "types": This MUST be either: * A comma-separated list of type names, e.g., the types in this list. * The single character: "*". Changes to all types are pushed. o "closeafter": This MUST be one of the following values: * "state": The server MUST end the HTTP response after pushing a state event. This can be used by clients in environments where buffering proxies prevent the pushed data from arriving immediately, or indeed at all, when operating in the usual mode. * "no": The connection is persisted by the server as a standard event-source resource. o "ping": A positive integer value representing a length of time in seconds, e.g., "300". If non-zero, the server MUST send an event called "ping" whenever this time elapses since the previous event was sent. This MUST NOT set a new event id. If the value is "0", the server MUST NOT send ping events. The server MAY modify a requested ping interval "UnsignedInt"). A client MAY hold open multiple connections to the event-source resource, although it SHOULD try to use a single connection for efficiency. <>). Servers should take care to assess the security characteristics of different schemes in relation to their needs when deciding what to implement. Use of the Basic authentication scheme is NOT RECOMMENDED. Services that choose to use it are strongly recommended to require generation of a unique "app password" via some external mechanism for each client they wish to connect. This allows connections from different devices to be differentiated by the server and access to be individually revoked. third party server with requests, creating a denial-of-service attack and masking the attacker's true identity. There is no guarantee that the URL. 8.7. Push Encryption. 9. IANA Considerations 9.1. Assignment of jmap Service Name IANA has assigned the 'jmap' service name in the "Service Name and Transport Protocol Port Number Registry" [RFC6335]. Service Name: jmap Transport Protocol(s): tcp Assignee: IESG Contact: as described in Section that, [RFC8126], is available. The DE should also verify that. 9.4.4. Change Procedures. 1. Verify the error code does not conflict with existing names. 2. Verify the error code follows the syntax limitations (does not require URI encoding). 3. Encourage the submitter to follow the naming convention of previously registered errors. 4. Encourage the submitter to describe client behaviours that are recommended in response to the error code. These may distinguish the error code from other error codes. 5. Encourage the submitter to describe when the server should issue the error as opposed to some other error code. 6. Encourage the submitter to note any security considerations associated with the error, if any (e.g., o JMAP Error Code: accountNotFound Intended Use: Common Change Controller: IETF Reference: RFC 8620, Section 3.6.2 Description: The accountId does not correspond to a valid account. o JMAP Error Code: accountNotSupportedByMethod Intended Use: Common Change Controller: IETF Reference: RFC 8620, Section 3.6.2 Description: The accountId given corresponds to a valid account, but the account does not support this method or data type. o JMAP Error Code: accountReadOnly Intended Use: Common Change Controller: IETF Reference: RFC 8620, Section 3.6.2 Description: This method modifies state, but the account is read- only (as returned on the corresponding Account object in the JMAP Session resource). o JMAP Error Code: anchorNotFound Intended Use: Common Change Controller: IETF Reference: RFC 8620, Section 5.5 Description: An anchor argument was supplied, but it cannot be found in the results of the query. o. o JMAP Error Code: cannotCalculateChanges Intended Use: Common Change Controller: IETF Reference: RFC 8620, Sections 5.2 and 5.6 Description: The server cannot calculate the changes from the state string given by the client. o JMAP Error Code: forbidden Intended Use: Common Change Controller: IETF Reference: RFC 8620, Sections 3.6.2, 5.3, and 7.2.1 Description: The action would violate an ACL or other permissions policy. o JMAP Error Code: fromAccountNotFound Intended Use: Common Change Controller: IETF Reference: RFC 8620, Sections 5.4 and 6.3 Description: The fromAccountId does not correspond to a valid account. o JMAP Error Code: fromAccountNotSupportedByMethod Intended Use: Common Change Controller: IETF Reference: RFC 8620, Section 5.4 Description: The fromAccountId given corresponds to a valid account, but the account does not support this data type. o JMAP Error Code: invalidArguments Intended Use: Common Change Controller: IETF Reference: RFC 8620, Section 3.6.2 Description: One of the arguments is of the wrong type or otherwise invalid, or a required argument is missing. o JMAP Error Code: invalidPatch Intended Use: Common Change Controller: IETF Reference: RFC 8620, Section 5.3 Description: The PatchObject given to update the record was not a valid patch. o JMAP Error Code: invalidProperties Intended Use: Common Change Controller: IETF Reference: RFC 8620, Section 5.3 Description: The record given is invalid. o JMAP Error Code: notFound Intended Use: Common Change Controller: IETF Reference: RFC 8620, Section 5.3 Description: The id given cannot be found. o JMAP Error Code: notJSON Intended Use: Common Change Controller: IETF Reference: RFC 8620, Section 3.6.1 Description: The content type of the request was not application/ json, or the request did not parse as I-JSON. o JMAP Error Code: notRequest Intended Use: Common Change Controller: IETF Reference: RFC 8620, Section 3.6.1 Description: The request parsed as JSON but did not match the type signature of the Request object. o JMAP Error Code: overQuota Intended Use: Common Change Controller: IETF Reference: RFC 8620, Section 5.3 Description: The create would exceed a server-defined limit on the number or total size of objects of this type. o JMAP Error Code: rateLimit Intended Use: Common Change Controller: IETF Reference: RFC 8620, Section 5.3 Description: Too many objects of this type have been created recently, and a server-defined rate limit has been reached. It may work if tried again later. o JMAP Error Code: requestTooLarge Intended Use: Common Change Controller: IETF Reference: RFC 8620, Sections 5.1 and 5.3 Description: The total number of actions exceeds the maximum number the server is willing to process in a single method call. o JMAP Error Code: invalidResultReference Intended Use: Common Change Controller: IETF Reference: RFC 8620, Section 3.6.2 Description: The method used a result reference for one of its arguments, but this failed to resolve. o JMAP Error Code: serverFail Intended Use: Common Change Controller: IETF Reference: RFC 8620, Section 3.6.2 Description: An unexpected or unknown error occurred during the processing of the call. The method call made no changes to the server's state. o. o JMAP Error Code: serverUnavailable Intended Use: Common Change Controller: IETF Reference: RFC 8620, Section 3.6.2 Description: Some internal server resource was temporarily unavailable. Attempting the same operation later (perhaps after a backoff with a random factor) may succeed. o JMAP Error Code: singleton Intended Use: Common Change Controller: IETF Reference: RFC 8620, Section 5.3 Description: This is a singleton type, so you cannot create another one or destroy the existing one. o JMAP Error Code: stateMismatch Intended Use: Common Change Controller: IETF Reference: RFC 8620, Section 5.3 Description: An ifInState argument was supplied, and it does not match the current state. o JMAP Error Code: tooLarge Intended Use: Common Change Controller: IETF Reference: RFC 8620, Section 5.3 Description: The action would result in an object that exceeds a server-defined limit for the maximum size of a single object of this type. o JMAP Error Code: tooManyChanges Intended Use: Common Change Controller: IETF Reference: RFC 8620, Section 5.6 Description: There are more changes than the client's maxChanges argument. o JMAP Error Code: unknownCapability Intended Use: Common Change Controller: IETF Reference: RFC 8620, Section 3.6.1 Description: The client included a capability in the "using" property of the request that the server does not support. o JMAP Error Code: unknownMethod Intended Use: Common Change Controller: IETF Reference: RFC 8620, Section 3.6.2 Description: The server does not recognise this method name. o JMAP Error Code: unsupportedFilter Intended Use: Common Change Controller: IETF Reference: RFC 8620, Section 5.5 Description: The filter is syntactically valid, but the server cannot process it.. o JMAP Error Code: willDestroy Intended Use: Common Change Controller: IETF Reference: RFC 8620, Section 5.3 Description: The client requested an object be both updated and destroyed in the same /set request, and the server has decided to therefore ignore the update. 10. References 10.1. Normative References [EventSource] Hickson, I., "Server-Sent Events", World Wide Web Consortium Recommendation REC-eventsource-20150203, February 2015, <>. 4790] Newman, C., Duerst, M., and A. Gulbrandsen, "Internet Application Protocol Collation Registry", RFC 4790, DOI 10.17487/RFC4790, March 2007, <>. [RFC5051] Crispin, M., "i;unicode-casemap - Simple Unicode Collation Algorithm", RFC 5051, DOI 10.17487/RFC5051, October70] Gregorio, J., Fielding, R., Hadley, M., Nottingham, M., and D. Orchard, "URI Template", RFC 6570, DOI 10.17487/RFC6570, March 2012, <>. [RFC6749] Hardt, D., Ed., "The OAuth 2.0 Authorization Framework", RFC 6749, DOI 10.17487/RFC6749, October 2012, <>. 6838] Freed, N., Klensin, J., and T. Hansen, "Media Type Specifications and Registration Procedures", BCP 13, RFC 6838, DOI 10.17487/RFC6838, January 2013, <>. [RFC6901] Bryan, P., Ed., Zyp, K., and M. Nottingham, Ed., "JavaScript Object Notation (JSON) Pointer", RFC 6901, DOI 10.17487/RFC6901, April264] URI: Chris Newman Oracle 440 E. Huntington Dr., Suite 400 Arcadia, CA 91006 United States of America Previous: RFC 8619 - Algorithm Identifiers for the HMAC-based Extract-and-Expand Key Derivation Function (HKDF) Next: RFC 8621 - The JSON Meta Application Protocol (JMAP) for Mail
http://www.faqs.org/rfcs/rfc8620.html
CC-MAIN-2020-29
refinedweb
7,547
64.3
exp, expf, expm1, expm1f, log, logf, log10, log10f, log1p, log1pf, pow, powf - exponential, logarithm, power functions Math Library (libm, -lm) #include <math.h> double exp(double x); float expf(float x); double expm1(double x); float expm1f(float x); double log(double x); float logf(float x); double log10(double x); float log10f(float x); double log1p(double x); float log1pf(float x); double pow(double x, double y); float powf(float x, float y); The exp() function computes the exponential value of the given argument x. The expm1() function computes the value exp(x)-1 accurately even for tiny argument x. The log() function computes the value of the natural logarithm of argument x. The log10() function computes the value of the logarithm of argument x to base 10. The log1p() function computes the value of log(1+x) accurately even for tiny argument x. The pow() computes the value of x to the exponent y. These thresholds until almost as many bits could be lost as are occupied by the floating-point format's exponent field; that is 8 bits for VAX D and 11 bits for IEEE 754 Double. No such drastic loss has been exposed by testing; Pascal, exp1 and log1 in C on APPLE Macintoshes, where they have been provided, Infinity (not found on a VAX), and NaN (the reserved operand on a VAX). Previous., independently of x. math(3) The exp(), log(), log10() and pow() functions conform to ANSI X3.159-1989 (``ANSI C''). A exp(), log() and pow() functions appeared in Version 6 AT&T UNIX. A log10() function appeared in Version 7 AT&T UNIX. The log1p() and expm1() functions appeared in 4.3BSD. BSD July 31, 1991 BSD
http://nixdoc.net/man-pages/NetBSD/man3/log1pf.3.html
CC-MAIN-2020-10
refinedweb
287
63.29
Load a template for a new title Import a saved title file as a template Set or restore a default template Rename or delete a template Create a template from an open title The title templates included with Premiere Pro provide numerous themes and preset layouts that make it quick and easy to design a title. Some templates include graphics that may be pertinent to your movie’s subject matter, such as new baby or vacation themes. Others include placeholder text that you can replace to create credits for your movie. Some templates have transparent backgrounds, represented by dark gray and light gray squares, so that you can see your video beneath the title; others are completely opaque. You can easily change any element in the template by selecting the element and either deleting it or overwriting it. You can also add elements to the template. After you modify the template, you can save it as a title file for use in current and future projects. Alternately, you can save any title you create as a template. You can also import title files from another Premiere Pro project as templates. If you share templates between computers, make sure that each system includes all the fonts, textures, logos, and images used in the template. To set the selected template as the default template, choose Set Template As Default Still from the Templates panel menu. The default template loads each time you open the Titler. To restore the default set of templates, choose Restore Default Templates from the Templates panel menu. To rename the selected template, choose Rename Template from the Templates panel menu. Type a name in the Name box, and click OK. To delete a template, choose Delete Template from the Templates menu, and then click OK. Add images to titles Add a texture for text or object Twitter™ and Facebook posts are not covered under the terms of Creative Commons.
http://help.adobe.com/en_US/PremierePro/4.0/WS3EBD7FBD-F988-4862-ACF8-179EC27B6B0Ba.html
crawl-003
refinedweb
321
69.31
02/03/2017 - Beta - AIR 25.0.0.108Milind.Jha Welcome to the AIR Runtime and SDK version 25 25. For full details, please see our release notes New and Updated Features Apple TV support (Beta channel only) We have made some enhancements to tvOS support which was introduced in AIR 24 beta. Please see the release notes specific to this feature here iOS SDK Upgrade AIR Runtime is now built with iOS 10 SDK, which enables AIR developers to use ANEs built with iOS 10 APIs without using the –platformSDK switch while packaging with ADT. Option to fallback to older Video pipeline for Android application descriptor to enable/disable MediaCodec for Android. On> AS3 API to get Compilation Information for iOS Starting AIR 25 (with swf version 36 and above) we have a new API for ActionScript developers to determine if their application is running with compiled runtime or interpreted runtime. The new API “isCompiledAOT” is added to NativeApplication class. This API returns true if the application is built using one of the following AOT targets: - 1. ipa-app-store - 2. ipa-test - 3. ipa-debug - 4. ipa-ad-hoc This API returns false for any other iOS target and other AIR platforms like AIR android and AIR desktop. Local Storage Support in StageWebView for Android Starting AIR 25 (with swf version 36 and above), local storage in StageWebView is available for Android. Now, the sites that require local DOM storage work as expected on StageWebView." Adding support for new languages in AIR Mobile Starting AIR 25, we have added support for the following languages: - Danish (da), - Norwegian (nb), and - Hebrew (iw). Note:To use these languages, the Namespace value in application descriptor must be 25.0 or greater. Sample Snippet: <supportedLanguages>da en nb</supportedLanguages> <name> <text xml:NameInDanish</text> </name> Multidex support for Android Starting AIR 25, MultiDex support is available for Android. Through MultiDexing, developers can package apps that exceed the 64K references limit. Usually, 64K references limit reach when the ANEs have a lot of methods. More information on Android Multidex can be found here:. Note: If you use ANEs containing pre-dex libraries, there will be a packaging error when you try to package it. Offset support for drawToBitmapData() Beginning AIR 25, capturing current buffer data of the back render buffer through drawToBitmapData() allows offsets for capturing a target rectangle from buffer instead of complete buffer.: Some important points: - If the source rectangle goes beyond the current render buffer, the rectangle part extending beyond the dimensions of buffer is clipped, this is similar null, the API falls back to the older implementation where complete buffer is copied to the bitmap. Instanced Drawing on AIR Desktop. Known Issues - [Android] Allocations made by async texture upload are not freed up after multiple asynchronous uploads(AIR-4198245) - [MAC] Adobe AIR_64 Helper and ExtendedAppEntryTemplate64 creates problems while codesigning MAC Captive App.(AIR-4189809) Fixed Issues - Starling does not display any content on Integrated GPU Intel HD Graphics with AIR 24( AIR-4198176, AIR-4198227 ). - Unable to install the application on iOS Simulator(AIR-4198023) - TEXTURE_READY event is dispatched very soon when uploading the RectangleTexture repeatedly(AIR-4198247) - [Android] OpenSSL library upgraded to 1.0.2j version - [Android] Context loss in Stage3D on android after displaying native dialog - [Android] Capabilities.screenResolution return a wrong values (AIR-4198240) - [iOS] Clipboard.clear() crashes application on iOS 10 (AIR-4198156) - [iOS] Packaging of application fails with message ANE is not a valid native extension file (AIR-4198128) - [Android] ANE on Android fails to load shared native objects when app is installed on a SD card - [Android] stage.fullScreenHeight is returning wrong value on Android with immersive full screen - [Android] On using Ane's which contain resources having attribute names same as any of the attribute names of the AppCompat resources, the packaging will fail. - [Android] <supports-gl-texture> tag in Android manifestAdditions prevents project from building (AIR-4123604) - [iOS] Sound starts to crackle on using SampleDataEvent with microphone and sound class. - [iOS]Flare 3D is rendered incorrectly when anti-aliasing is used. Authoring for Flash Player 25 and AIR 25 - Update application descriptor namespace to 25 - SWF version should be 36.
https://forums.adobe.com/thread/2273260
CC-MAIN-2018-09
refinedweb
699
50.67
Introduction: Beverly-Crusher: Bit Crushing. 1-bit Arduino Music. I had been looking for a tool to convert audio down to 1-bit depth but gave up and wrote my own. Supports export for Arduino sketch. Here I am offering an audio crushing program which also makes exporting to arduino sketch extremely easy so that one can experiment with 1-bit audio and manipulate sample rates rapidly. The github has source to the crusher which compiles under Linux and can allow you to preview your compressed audio through ALSA prior to generating the Arduino sketch. The array of samples that is generated can be used in other embedded projects without specifically being an arduino but you will need to modify the playback code accordingly. Step 1: How I Go About Crushing the Audio and Some Backstory. Inspiration Having been a fan of sites like instructables for a long time and on several occasions saw that there were projects geared towards generating sound or music from a microcontroller I became sure that someday I will get around to trying this cool stuff myself. I have worked on audio projects before but this is first time I've gone out of my way to create the tools needed to make it reproducible easily. One of my previous projects was to use a cheap DDS module from china, change it's frequency and then detect it using a SDR (Software defined radio) on LSB (Lower side band) and it played tetris music. Anyway. I digress lol. Mostly these projects had in common that they required 8 output pins and resistors to form a DAC, which is pretty awesome and sounds quite nice.. There were however a couple of projects which dealt with 1-bit audio needing only 1 digital I/O pin for it to generate the sound as it is essentially a square wave. I fell in love with this idea due to how it sounds because when I produce music I tend to use a lot of distortion and it fills me with warm fuzzy feelings! Here we decide what we hope to achieve, I hoped to achieve a downsampling of an audio recording from 24-bit to 1-bit.. I tried to find a tool to do this but struggled and ultimately gave up and started writing my own. Now I have to say that to simplify this process and since I needed to cut up the audio sample to get the part that I actually wanted to play, I used audacity to export a file with the following parameters: - unsigned 8 bit - raw (header-less) Of course I also edited out the right hand audio channel before exporting because I was only interested in dealing with a mono audio sample. Parsing the file The cool thing about this exported file is that it is very easy to deal with as each byte of the file represents one entire sample of audio, as in.. how much energy or how loud that particular moment of sound is. An 8-bit or 1-byte sample is really just a value of loudness between 0 and 255, giving you a possible range of 256 values. Then my program reduces that down from 256 possible values down to 2. On or off. The only caveat being that you have to make a decision, what constitutes being on and what is discarded by switching it off. My decision is to pick a place which is roughly in the middle of the 256 values. Let's say for arguments sake that we choose 128 as the cut off point, if a sound sample isn't loud enough to reach at least 128 it is discarded and considered to be off and that is stored away as 0. If however the sample has sufficient amplitude to peak above the 128 we say okay we consider that to be on enough so we set aside a 1 value. Step 2: Copy Eight of Our 1bit Samples Into a Byte of Memory. Bitshifting to populate our 8bit storage space. Once the samples have been converted to a 1-bit resolution it's now simply a case of going through those converted samples and joining them together to make a string of 8 bits; aka 1 byte of storage. This will require some knowledge on bitshifting, thankfully there are many resources which teach this simply, one of those being at the arduino website. The basic idea is that we have to conserve space on our microcontroller, as each sound sample obviously takes up space, ... We could have an array of integers to store our samples in but that would be a gross misuse of storage as we need just a mere fraction of that space to store our audio reproduction. For more information on storage required for an int see arduino website. I chose to specify an unsigned char which is an 8 bit value, or 1 byte of space, but again that is 8 times larger than what we need! The solution Get some space allocated in memory, let's say for this tutorial we requested 1 byte of storage space from memory, we proceed as follows. - We get our first 1 bit audio sample - fill the least significant bit of our 1 byte of storage with the value of our 1 bit audio sample - move all of the bits in our 1 byte of storage across to the left by 1 bit That algorithm is repeated until we have filled up one byte of memory. This is useful because depending on your microcontroller an Int can have anything from say 16 bits of memory even as much as 32 bits. So that would be 32 times more storage used up than we would have required. Nice saving right there! Step 3: Making the Arduino Understand Our Musical Awesomeness How I store the sample data on the microcontroller You will no doubt recall from the previous step that we took our downsampled information and packed it into a neat little package of the size of 1 byte or 8 bits. This saves space on the microcontroller as you know but you may be wondering how we store it and access this information for playback later on the arduino. Enter avr/pgmspace.h: #include <avr/pgmspace.h> This header file allows us to program our sample data directly into the flash memory on the Arduino, yay! It's pretty easy to use just with a tiny bit of consideration on how we read the information back. prog_uchar onebitraw[] PROGMEM = { 0XFF, 0XFF, 0XEF, 0XFF,..... }; I guess the 2 key points to make about that chunk of code above are that we use prog_uchar as the type of data we are storing, this is important for us to be able to read the data back out of memory when we play the sample. The other notable thing is that we use the keyword PROGMEM, this relies on the header file that I mentioned avr/pgmspace.h and this tells the compiler where to store this array of data. prog_uchar tells the compiler that we are storing data of the type unsigned char. A char is simply 1 byte, so it can store a value of 0 through to 255, 8 bits. We specify unsigned because we are storing only positive numbers from 0 and above. This is critical because we aren't really storing numbers as you may recall we are actually storing 8 sound samples inside of this value, this ends up being converted to a number value and we can move it around as if it's a number but the reality is it's not quite what it seems but the compiler doesn't know or care about this configuration. If we were using a signed storage method we would be in a right mess. If you fancy knowing more about signed, unsigned and two's complement then this wiki article should be an interesting read for you.'s_complement Pointer arithmetic is wayyy easier than it sounds For the Arduino to read back our information from the PROGMEM section of memory we will be needing to use the function pgm_read_byte_near(); It is very easy to use and the only thing that complicates it is that it requires you to use pointer arithmetic to specify which byte of memory you want... Like so: pgm_read_byte_near(onebitraw + which_one); In that example I lay out above you will see 'onebitraw' which I will use to express the storage of our audio samples. Now you may be familiar with using array indices such as variable[index] and this is no different except we replace the [index] with +index instead.. Make sense? The reason is that we stored our audio data as a block of bytes, one after another, so we know that each one is simply one more along than the one previous to it. See? Very simple! Step 4: Demo of 1-Bit Music Here I show you a little something I was working on tonight in order to play back some music, the music is sampled in the same way as shown before with my crusher program and then inside of the microcontroller it's possible to make those sounds play backwards or at different speeds. Here is the sketch for this demo:... Step 5: Playback Manipulation Here you can see three functions which all playback the sound samples, just in different ways... - playback(); plays back a sample forwards. - playback_r(); plays the sample backwards. - playback_s(); plays the sample forward but at a reduced speed. As you can see from the code it's very easy to play the sound in interesting ways, here is snippet of how I was able to sequence the patterns in the music video. playback_r(onebitraw_1, BC_BYTE_COUNT_1); playback_r(onebitraw_1, BC_BYTE_COUNT_1); playback_r(onebitraw_1, BC_BYTE_COUNT_1); playback_r(onebitraw_2, BC_BYTE_COUNT_2); playback(onebitraw_1, BC_BYTE_COUNT_1); playback(onebitraw_1, BC_BYTE_COUNT_1); playback(onebitraw_1, BC_BYTE_COUNT_1); playback(onebitraw_3, BC_BYTE_COUNT_3); playback(onebitraw_1, BC_BYTE_COUNT_1); playback(onebitraw_1, BC_BYTE_COUNT_1); playback(onebitraw_1, BC_BYTE_COUNT_1); playback_r(onebitraw_4, BC_BYTE_COUNT_4); Very simple yet quite powerful in the flexibility of what you can create! With a moment of inspiration I realised that I could play back chunks of each sample and stitch them together, keeping quantization and bringing in another aspect of the remixing idea. int z; for (z = 0; z < 4; z++){ playback(onebitraw_1, BC_BYTE_COUNT_1 /4); playback(onebitraw_2 + (BC_BYTE_COUNT_1 /4), BC_BYTE_COUNT_1 /4); playback_r(onebitraw_3 + (BC_BYTE_COUNT_1 /2), BC_BYTE_COUNT_1 /4); playback(onebitraw_2 + ((BC_BYTE_COUNT_1 /4) + (BC_BYTE_COUNT_1 /2)) , BC_BYTE_COUNT_1 /4); } If you break down what I did it becomes very simple to understand.. Imagine these letters represented the 4 different beat patterns which I created in Reason before importing into my Arduino. [AAAA] [BBBB] [CCCC] [DDDD] The for loop I used above sample breaks those patterns apart so now it looks more like: [ABCD] The duration stays the same, it all sounds in time and as a result of playing back a bit of each pattern sounds quite enjoyable! 2 Discussions Welcome to instructables! Thanks for sharing your awesomeness! Thank you for the kind welcome :D Looove your hair by the way!
http://www.instructables.com/id/Beverly-Crusher-bit-crushing-1-bit-Arduino-music/
CC-MAIN-2018-30
refinedweb
1,850
63.73
#include <ctype.h> int toascii(int c); int todigit(int c); int toint(int c); int tolower(int c); int _tolower(int c); int toupper(int c); int _toupper(int c); The macros _toupper and _tolower have the same functionality of the functions on valid input, but the macros are faster because they do not do range checking. The _toupper macro takes a lowercase letter as its argument. The result is the corresponding uppercase letter. The _tolower macro takes an uppercase letter as its argument. The result is the corresponding lowercase letter. All other arguments cause unpredictable results. The macro toascii yields its argument with all bits cleared that are not part of a standard ASCII character; it is intended for compatibility with other systems. The macro todigit returns the digit character corresponding to its integer argument. The argument must be in the range 0-9, otherwise the behavior is undefined. The macro toint returns the integer corresponding to the digit character supplied as its argument. The argument must be one of 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, otherwise the behavior is undefined. toupper(*p++)Avoiding such argument calls helps to keep code portable. The functions toupper and tolower are library functions. Their arguments are converted to integers. For this reason, care should be taken that character arguments are supplied as unsigned characters to avoid problems with sign-extension of 8-bit character values. The todigit and toint macros are extensions, and may not be present on other systems. toascii is conformant with: X/Open Portability Guide, Issue 3, 1989 . tolower and toupper are conformant with: X/Open Portability Guide, Issue 3, 1989 ; ANSI X3.159-1989 Programming Language -- C ; IEEE POSIX Std 1003.1-1990 System Application Program Interface (API) [C Language] (ISO/IEC 9945-1) ; and NIST FIPS 151-1 . _tolower and _toupper are conformant with: X/Open Portability Guide, Issue 3, 1989 .
http://osr507doc.xinuos.com/cgi-bin/man?mansearchword=toupper&mansection=S&lang=en
CC-MAIN-2020-50
refinedweb
319
57.16
#include <qgl.h> OpenGL QGLWidget provides functionality for displaying OpenGL* graphics integrated into a Qt application. It is very simple to use. You inherit from it and use the subclass like any other QWidget, except that instead of drawing the widget's contents using QPainter etc. you use the standard OpenGL rendering commands. QGL the widget has been resized (and also when it is shown for the first time because all newly created widgets get a resize event automatically). initializeGL() - Sets up the OpenGL rendering context, defines display lists, etc. Gets called once before the first time resizeGL() or paintGL() is called. Here is a rough outline of how a QGLWidget subclass might look: class MyGLDrawer : public QGLWidget { Q_OBJECT // must include this if you use Qt signals/slots public: MyGLDrawer( QWidget *parent, const char *name ) : QGLWidget(parent, name) {} protected: void initializeGL() { // Set up the rendering context, define display lists etc.: ... glClearColor( 0.0, 0.0, 0.0, 0.0 ); glEnable(GL_DEPTH_TEST); ... } void resizeGL( int w, int h ) { // setup viewport, projection etc.: glViewport( 0, 0, (GLint)w, (GLint)h ); ... glFrustum( ... ); ... } void paintGL() { // draw the scene: ... glRotatef( ... ); glMaterialfv( ... ); glBegin( GL_QUADS ); glVertex3f( ... ); glVertex3f( ... ); ... glEnd(); ... } }; If you need to trigger a repaint from places other than paintGL() (a typical example is when using timers to animate scenes), you should call the widget's updateGL() function.. QGLWidget provides functions for requesting a new display format and you can also create widgets with customized rendering contexts. You can also share OpenGL display lists between QGLWidgets (see the documentation of the QGLWidget constructors for details). Definition at line 272 of file qgl.h.
http://qt-x11-free.sourcearchive.com/documentation/3.3.4/classQGLWidget.html
CC-MAIN-2018-22
refinedweb
266
55.44
Function at end of variable (Don't really know how to sum up this question) Hi, so I have this function: def clean(data): while data.startswith(' '): data = data[1:] while data.endswith(' '): data = data[:-1] return data Therefore clean(" random string ") returns "random string" . Is it possible to get the same result with " random string ".clean() ? After all "random string".lower() is possible too. @Drizzel Sorry for that, but I can't answer. I'm sincerely not very good in Python it-self. Sure that @ccc or @mikael could answer You can subclass str and insert your method in the subclass definition. As long as you use your subclass to create your “string” instances the new method would work. Not for literals though. E.g., class funny_string(str): def clean(data): while data.startswith(' '): data = data[1:] while data.endswith(' '): data = data[:-1] return data my_string = funny_string(“ look at this! “) print(my_string.clean()) #### works print(“ huh? “.clean()) #### doesn’t work @Drizzel, well, sort of, see below. But not really, since you cannot modify the built-in class ( str.a = 'a'results in an error). from collections import UserString class Cleanable(UserString): def clean(self): return self.strip() mystr = Cleanable(' random string ') assert 'string' in mystr assert mystr.clean() == 'random string' @pulbrich, heh. @Drizzel, @pulbrich’s version is better, UserString is a throwback to very old Python 2.. @JITASIDA No, you can't That would require adding a method to a built-in type and that's not possible (without going to the C source for Python): >>> str.clean=lambda self:'clean'+self Traceback (most recent call last): File "<string>", line 1, in <module> TypeError: can't set attributes of built-in/extension type 'str'
https://forum.omz-software.com/topic/6178/function-at-end-of-variable-don-t-really-know-how-to-sum-up-this-question
CC-MAIN-2021-10
refinedweb
285
77.94
An unofficial python client for RQLite Project description rqdb This is an unofficial python client for rqlite, a lightweight distributed relational database based on SQLite. This client supports SQLite syntax, true parameterized queries, and a calling syntax reminiscent of DB API 2.0. Furthermore, this client has convenient asynchronous methods which match the underlying rqlite API. Installation pip install rqdb Usage Synchronous queries: import rqdb import secrets conn = rqdb.connect(['127.0.0.1:4001']) cursor = conn.cursor() cursor.execute('CREATE TABLE persons (id INTEGER PRIMARY KEY, uid TEXT UNIQUE NOT NULL, name TEXT NOT NULL)') cursor.execute('CREATE TABLE pets (id INTEGER PRIMARY KEY, name TEXT NOT NULL, owner_id INTEGER NOT NULL REFERENCES persons(id) ON DELETE CASCADE)') # standard execute cursor.execute('INSERT INTO persons (uid, name) VALUES (?, ?)', (secrets.token_hex(8), 'Jane Doe')) assert cursor.rows_affected == 1 # The following is stored in a single Raft entry and executed within a transaction. person_name = 'John Doe' person_uid = secrets.token_urlsafe(16) pet_name = 'Fido' result = cursor.executemany3(( ( 'INSERT INTO persons (uid, name) VALUES (?, ?)', (person_uid, person_name) ), ( 'INSERT INTO pets (name, owner_id) ' 'SELECT' ' ?, persons.id ' 'FROM persons ' 'WHERE uid = ?', (pet_name, person_uid) ) )).raise_on_error() assert result[0].rows_affected == 1 assert result[1].rows_affected == 1 Asynchronous queries: import rqdb import secrets async def main(): async with rqdb.connect_async(['127.0.0.1:4001']) as conn: cursor = conn.cursor() result = await cursor.execute( 'INSERT INTO persons (uid, name) VALUES (?, ?)', (secrets.token_hex(8), 'Jane Doe') ) assert result.rows_affected == 1 Additional Features Read Consistency Selecting read consistency is done at the cursor level, either by passing read_consistency to the cursor constructor ( conn.cursor()) or by setting the instance variable read_consistency directly. The available consistencies are strong, weak, and none. You may also indicate the freshness value at the cursor level. See CONSISTENCY.md for details. The default consistency is weak. Foreign Keys Foreign key support in rqlite is disabled by default, to match sqlite. This is a common source of confusion. It cannot be configured by the client reliably. Foreign key support is enabled as described in FOREIGN_KEY_CONSTRAINTS.md Nulls Substituting "NULL" in parametrized queries can be error-prone. In particular, sqlite needs null sent in a very particular way, which the rqlite server has historically not handled properly. By default, if you attempt to use "None" as a parameter to a query, this package will perform string substition with the value "NULL" in the correct spot. Be careful however - you will still need to handle nulls properly in the query, since "col = NULL" and "col IS NULL" are not the same. In particular, NULL = NULL is NULL, which evaluates to false. One way this could be handled is name: Optional[str] = None # never matches a row since name is None, even if the rows name is null cursor.execute('SELECT * FROM persons WHERE name = ?', (name,)) # works as expected cursor.execute('SELECT * FROM persons WHERE ((? IS NULL AND name IS NULL) OR name = ?)', (name, name)) Backup Backups can be initiated using conn.backup(filepath: str, raw: bool = False). The download will be streamed to the given filepath. Both the sql format and a compressed sqlite format are supported. Logging By default this will log using the standard logging module. This can be disabled using log=False in the connect call. If logging is desired but just needs to be configured slightly, it can be done as follows: import rqdb import logging conn = rqdb.connect( ['127.0.0.1:4001'], log=rqdb.LogConfig( # Started a SELECT query read_start={ 'enabled': True, 'level': logging.DEBUG, # alternatively, 'method': logging.debug }, # Started a UPDATE/INSERT query write_start={ 'enabled': True, 'level': logging.DEBUG, }, # Got the response from the database for a SELECT query read_response={ 'enabled': True, 'level': logging.DEBUG,, 'max_length': 1024, # limits how much of the response we log }, # Got the response from the database for a UPDATE/INSERT query write_response={ 'enabled': True, 'level': logging.DEBUG, }, # Failed to connect to one of the nodes. connect_timeout={ 'enabled': True, 'level': logging.WARNING, }, # Failed to connect to any node for a query hosts_exhausted={ 'enabled': True, 'level': logging.CRITICAL, }, # The node returned a status code other than 200-299 or # a redirect when a redirect is allowed. non_ok_response={ 'enabled': True, 'level': logging.WARNING } ) ) Limitations Slow Transactions The primary limitations is that by the connectionless nature of rqlite, while transactions are possible, the entire transaction must be specified upfront. That is, you cannot open a transaction, perform a query, and then use the result of that query to perform another query before closing the transaction. This can also be seen as a blessing, as these types of transactions are the most common source of performance issues in traditional applications. They require long-held locks that can easily lead to N^2 performance. The same behavior can almost always be achieved with uids, as shown in the example. The repeated UID lookup causes a consistent overhead, which is highly preferable to the unpredictable negative feedback loop nature of long transactions. Other Notes It is often helpful to combine this library with a sql builder such as pypika when manipulating complex queries. Project details Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/rqdb/1.0.9/
CC-MAIN-2022-40
refinedweb
861
51.24
See also: IRC log Noah: Welcome to Robin! Dan: +1 JAR: I want to know if anyone wants a look before it goes out AM: Significant changes? JAR: Many editorial Some negotiation about how long <noah> AM: OK, a couple of days <noah> Thomas, probably time to dial soon, do you know where Harry is? <jar> tlr, hello Noah: approval of prior minutes... <DKA> … Approve minutes of 15 december? <noah> <DKA> Recorded approved. <DKA> Noah: Minutes of f2f? Some clean-up needed. <DKA> Yves: I went through most of Friday's and have done some correction. I can do Thursday's as well. <DKA> Noah: One more week for formal approval. <noah> Ht, are you dialing the tag? <tlr> if you want to reschedule till we have Harry as well, that's fine with me. <DKA> Noah: congratulations also to Henry. <DKA> Noah: Do we need to reopen question of teleconference times? <DKA> +1 to congratulations to Henry! <DKA> Noah: We have a f2f meeting hosted by Yves April 2-4Monday-Wednesday. <DKA> … we have an agreement to meet in June for next f2f. <DKA> … some risk of canceling next week's meeting. <DKA> Noah: On status section of xml-html unification report, I propose not to discuss it today. <jar> F2F discussion: Henry: so many [things html5 wg involved in] - is there anything about securing web pages coming out of the html working group that intersect with this concern (about https being broken)? <noah> NM: Goal is to figure out whether TAG should invest in this, and at the F2F we said we wante the benefit of Thomas' perspective as W3C Security Domain lead … anyone pushing for something to be done about this? [silence] Noah: Thomas - are you aware of anyone other than the TAG engaging this problem? Thomas: What problem? <jar> My guess is that HH brought the issue to us because he, at least, didn't think it was adequately covered by other groups <noah> E-mail form Harry Halpin: Noah: We had an email from Harry Halpin that kicked this off with the TAG. <noah> E-mail title: "The CA system is spectacularly broken - can the TAG help?" … email sets out the history in which people have had certificates issued fraudulently. We think security on the Web is hugely important. So now what? Thomas: https is a protocol that relies on a certain trust framework… That is working fine... ... you have at the protocol level, dependency on the CA system. … so what can be done to deal with that system? … one of the conversations going on in some quarters is about perhaps modifying the distribution mechanisms for certs, create channels that permit the holder of the domain name with the cert authorities permitted to be used with it, etc... … another approach discussed is to use the DNS as a conveyance mechanism for trust information. That could be reasonably sane given DNSsec's deployment. <noah> Curious, when Thomas says that techniques are "being discussed", does this mean formally in responsible groups in the IETF, or just informally among concerned techies? <Yves> formally in the IETF, at least for DNSSEC …. if the industry goes in that direction we get reduced attack surface. Right now if I have a domain name registered in .com in US and someone in Lux issues a cert for that domain, that cert is trusted. … the tactic of using DNSsec reduces the attack surface. <noah> Can we get the name of the mailing list please? … DNSsec is an existing IETF standard. there is a working group working on how to store certs. There is a possible BoF to discuss additional [work] at next IETF. … the name of the mailing list is "the right key" <Zakim> noah, you wanted to talk about https URI scheme and RFC 2818 Noah: we have been talking about protocols. It occurred to me that this also relates to https URI scheme. <noah> RFC 2818: … RFC-2818. … In the section on server identity, it says: <noah> "If the hostname is available, the client MUST check it against the <noah> server's identity as presented in the server's Certificate message, <noah> in order to prevent man-in-the-middle attacks. <noah> " <jar> this doesn't say how to do the checking. Noah: so - architecturally this is called out not just in the https protocol but in the definition of the correct resolution of URI identifiers using that scheme. There is a sense that the namespace of https schemes is validated by the CA system. Thomas: this an informational IRC. Noah: It's pointed to by IANA... Thomas: it's modifiable. Noah: I think it has some force in practice. Ashok: Question - several of the approaches use a third party cert certifier? <noah> Noah is somewhat purplexed that RFC 2818 is the official registration for one of the Web's most important URI schemes, but is marked information. To a non-IETF wonk, this seems very, very strange. Thomas: all of these approaches [being discussed] at some point need to establish a binding between the identifier and a cryptographic key and some of them need to establish binding with a real life identity. The way these schemes do this is to have a chain of custody. DNS delegation - the key hierarchy that derives from dnssec - could allow I myself to sign my own... … that is one approach. The other approach uses a third party that is trusted. that is the traditional CA system. Ashok: It turns out there are several possible solutions Do we have to pick one? Or can we have a number of them that can be browser specific or user -selected? Thomas: If I want to reduce attack surface then I want to reduce the mechanisms by which I can be attacked. … Users choosing authorities that they trust in real deployment is usually a myth. … when have you last edited the list of CAs in your browser? <tlr> one of the proposals: Noah: Is there a role for the TAG here? <tlr> DANE working group: … last year Jeff Jaffe asked us to highlight to him topics that we felt might be threats to the Web. This will be on it. … beyond it the TAG has no plans to do anything other than discuss today. Is there a way we can help? Thomas: I was asking - is there something W3C can do to help? At this particular juncture, I am trying to get a handle on what work is in purview of W3C and what is in purview for IETF. … I think it's reasonable for the TAG to keep an eye on this topic. Noah: Harry's note points to three specific proposals in the community. … he feels the right organizational structures aren't in place and maybe it's time for w3c to move. Thomas: Harry will collaborate on this with Wendy - one question in this domain is to figure out what piece of this we [w3c] should address. Noah: You as the domain lead have recommended that we keep an eye on this. We are responding to a specific request from Harry. Maybe the right thing is to publish the minutes and for me to take an action to get back to Harry. ... Any objections? Ashok: also ask him to keep us apprised. Thomas: We'll be happy to keep the TAG informed. <noah> ACTION: Noah to verify with Harry Halpin the TAG's plan to "keep an eye" on CA issues, and solicit his and TLR's help in keeping us informed Due: 2012-01-31 [recorded in] <trackbot> Created ACTION-663 - Verify with Harry Halpin the TAG's plan to "keep an eye" on CA issues, and solicit his and TLR's help in keeping us informed Due: 2012-01-31 [on Noah Mendelsohn - due 2012-01-26]. <noah> ACTION-663 Due 2012-01-31 <trackbot> ACTION-663 Verify with Harry Halpin the TAG's plan to "keep an eye" on CA issues, and solicit his and TLR's help in keeping us informed Due: 2012-01-31 due date now 2012-01-31 Thomas: There are some ideas floating around [about workshops] the conversation is about a possible BoF at Paris IETF meeting. I sent an email to the TAG mailing list. I [encourage] you to follow that discussion. <jar> maybe HH is worried that the browser folks aren't in good communication with IETF? Noah: Anyone willing to take a long-term action to watch for news in this space? ... Thanks, Thomas. Thomas: if we do a workshop it would be great to have someone from the TAG on the program committee, for example. Noah: Goal here is to look at this product page... <noah> … to come out agreeing to his or to a revision of it or [dropping the work].... Henry: I have input from Larry and others which I have not integrated yet. … I would prefer to put this off. Noah: unless others object I think we should. [agreement to put off] <noah> Adding note to ACTION-528: Per brief discussion on 19 January 2012, this will not be scheduled for discussion until Henry Thompson integrates agreed changes from Larry Masinter, and others, as recorded in minutes of F2F and earlier calls. <noah> ACTION-528? <trackbot> ACTION-528 -- Henry Thompson to create and get consensus on a product page and tracker product page for persistence of names -- due 2012-01-24 -- OPEN <trackbot> <noah> ACTION-654? <trackbot> ACTION-654 -- Jeni Tennison to write "product" page summarizing wrapup of RDFa/Microdata work -- due 2012-01-31 -- PENDINGREVIEW <trackbot> Noah: we did finish this discussion at the f2f. But we said "we nee to leave tracks" - record the wrap-up in a final version of the TAG product page. <noah> F2F technical discussion: <noah> F2F project wrapup: <noah> To discuss today: Looks OK to me. <noah> +1 Noah: Anyone feels this need more discussion? <darobin> +1 <plinss> +1 <Yves> +1 Noah: [penning a proposed resolution] <noah> PROPOSED RESOLUTION: The draft product page at is agreed as the basis on which the TAG closes out its work on Microdata/RDFa coordination +1 Noah: Any objections? <noah> RESOLUTION: The draft product page at is agreed as the basis on which the TAG closes out it's work on Microdata/RDFa coordination [passed without dissent] <noah> close ACTION-654 <trackbot> ACTION-654 Write "product" page summarizing wrapup of RDFa/Microdata work closed Noah: propose to close action-654. ... I will take an action to announce the closing of our work on this. <noah> ACTION: Noah to announce completion of TAG work on Microdata/RDFa as recorded in and to finalize the product page and associated links <trackbot> Created ACTION-664 - Announce completion of TAG work on Microdata/RDFa as recorded in and to finalize the product page and associated links [on Noah Mendelsohn - due 2012-01-26]. <noah> <noah> ACTION-350? <trackbot> ACTION-350 -- Larry Masinter to revise based on feedback on www-tag and the feedback from TAG f2f 2009-12-09 discussion -- due 2011-11-29 -- PENDINGREVIEW <trackbot> Noah: I took Larry's proposed text and copied it into email - we'll discuss when larry's back. <noah> ACTION-568? <trackbot> ACTION-568 -- Noah Mendelsohn to draft note for Jeff Jaffe listing 5 top TAG priorities as trackable items. -- due 2012-01-03 -- PENDINGREVIEW <trackbot> <noah> ACTION-563? <trackbot> ACTION-563 -- Noah Mendelsohn to arrange for periodic TAG key issues reports to Jeff per June 2011 F2F Due 2011-10-15 -- due 2012-01-24 -- OPEN <trackbot> <noah> close ACTION-568? Noah: I'd like your proposal to close action-568 as it duplicates action-563. [no objection] <noah> ACTION-578? <trackbot> ACTION-578 -- Noah Mendelsohn to make sure HTML/XML work gets TAG review when ready Due: 2011-08-01 -- due 2011-12-27 -- PENDINGREVIEW <trackbot> <noah> close ACTION-578 <trackbot> ACTION-578 Make sure HTML/XML work gets TAG review when ready Due: 2011-08-01 closed Noah: review XML-html unification work. This was done at the f2f. I propose to close it. [no objections] <noah> ACTION-591? <trackbot> ACTION-591 -- Noah Mendelsohn to ping Norm end of Sept. on revised HTML/XML report per discussion on 1 Sept 2011 -- due 2011-12-27 -- PENDINGREVIEW <trackbot> <noah> close ACTION-591 <trackbot> ACTION-591 Ping Norm end of Sept. on revised HTML/XML report per discussion on 1 Sept 2011 closed <noah> ACTION-599? <trackbot> ACTION-599 -- Noah Mendelsohn to close out HTML5 review product -- due 2011-12-20 -- PENDINGREVIEW <trackbot> Noah: I think I sent the closing note relating to action-599. ... I will leave this open and figure out what's going on here. <noah> ACTION-602? <trackbot> ACTION-602 -- Noah Mendelsohn to work with IETF liaisons to propose possible TAG participation in IETF Paris -- due 2011-12-27 -- PENDINGREVIEW <trackbot> Noah: There has been a notion we need to be better about liaising the IETF. ... … when mark nottingham was here along with Philippe, I chatted with them (the 2 liaisons) - they said the best part to attend [for me] would be the early part but I don't think it's justified... Henry: I think it would be a good use of w3c's money. … I think your should be there. Noah: Floor is open for any suggestions for how to use that week well - week before our meeting in the south of France. ... I'm going to re-open this and bump it so I get back to it. <noah> Reopening ACTION-602, mostly to follow up on HT's advice that Noah should attend, but also to see if other ideas come up for liaison. <noah> ACTION-602? <trackbot> ACTION-602 -- Noah Mendelsohn to work with IETF liaisons to propose possible TAG participation in IETF Paris -- due 2012-03-01 -- OPEN <trackbot> <noah> ACTION-622? <trackbot> ACTION-622 -- Noah Mendelsohn to schedule discussion of html.next as possible new TAG work focus (per Edinburgh F2F) [self-assigned] -- due 2011-12-20 -- PENDINGREVIEW <trackbot> <noah> close ACTION-622 <trackbot> ACTION-622 Schedule discussion of html.next as possible new TAG work focus (per Edinburgh F2F) [self-assigned] closed Noah: I thnk this happened. Objections to close? [none heard] <noah> ACTION-627? <trackbot> ACTION-627 -- Noah Mendelsohn to schedule very detailed line-by-line review of Pub&Linking draft at January F2F -- due 2012-01-17 -- PENDINGREVIEW <trackbot> <noah> NM: Jeni suggested not to do this for the F2F, question is whether it's still worth doing? <noah> DKA: I think probably still worth doing, because we had to remove some things based on Rigo's guidance. Don't know if it's at the right stage. <noah> DKA: Jeni and I had a meeting, she took some "actions" Noah: Discussion on this will be difficult without Jeni. We focused on f2f agenda on a few key messages. I will re-open this action. <noah> Reopening ACTION-627 until you're ready; make it pending when you are. <noah> ACTION-627? <trackbot> ACTION-627 -- Noah Mendelsohn to schedule very detailed line-by-line review of Pub&Linking draft at January F2F -- due 2012-01-31 -- OPEN <trackbot> <noah> ACTION-634? <trackbot> ACTION-634 -- Noah Mendelsohn to with help from Noah to publish as a TAG Finding -- due 2011-12-20 -- PENDINGREVIEW <trackbot> <noah> close ACTION-634 <trackbot> ACTION-634 With help from Noah to publish as a TAG Finding closed <noah> ACTION-642? <trackbot> ACTION-642 -- Jeni Tennison to with help from Larry to propose plan to liaise with PLH to register HTML media type -- due 2012-01-17 -- PENDINGREVIEW <trackbot> <noah> ACTION-643? <trackbot> ACTION-643 -- Larry Masinter to redraft something on this html5 review in 3 weeks. -- due 2011-12-29 -- PENDINGREVIEW <trackbot> <ht> HST has to leave, I have (with some care) pushed all my overdue deadlines out by varying amounts <noah> The comment says: <noah> This was also moved into the product page for HTML review, as per ACTION-644. <noah> Larry Masinter, 25 Dec 2011, 04:57:46 <noah> NM: I hear suggestions to close this. <noah> close ACTION-643 <trackbot> ACTION-643 Redraft something on this html5 review in 3 weeks. closed <noah> ACTION-644? <trackbot> ACTION-644 -- Larry Masinter to draft proposed alternative text to e-mail announcing end of "product" work on HTML 5 last call ( ) Due 2012-01-10 -- due 2012-01-10 -- PENDINGREVIEW <trackbot> <noah> Leaving until I straighten out ACTION-599, also on HTML closing <noah> ACTION-653? <trackbot> ACTION-653 -- Noah Mendelsohn to schedule telcon discussion of Persistence product page (which was drafted for but not reviewed at F2F -- due 2012-01-17 -- PENDINGREVIEW <trackbot> <jar> +1 <plinss> -1 I have to leave in ~5 minutes. <noah> ACTION-609? <trackbot> ACTION-609 -- Daniel Appelquist to draft initial cut at -- due 2011-10-25 -- OPEN <trackbot> <darobin> [I have to leave in a few minutes too, sorry] <jar> I already have an action like this one <noah> close ACTION-609 <trackbot> ACTION-609 Draft initial cut at closed <noah> Due to Dan's departure. <noah> ACTION-629? <trackbot> ACTION-629 -- Daniel Appelquist to with help from Jeni to propose changes to goals, success criteria etc. for publishing/linking product page -- due 2012-01-17 -- OPEN <trackbot> <noah> close ACTION-629 <trackbot> ACTION-629 With help from Jeni to propose changes to goals, success criteria etc. for publishing/linking product page closed <noah> ACTION-514? <trackbot> ACTION-514 -- Daniel Appelquist to draft finding on API minimization -- due 2011-10-18 -- OPEN <trackbot> <darobin> ACTION-662? <trackbot> ACTION-662 -- Robin Berjon to redraft proposed product page on API Minimization () -- due 2012-01-31 -- OPEN <trackbot> Noah: I suggest I assign this to you with a proposed due date of 2 weeks. <noah> ACTION-662? <trackbot> ACTION-662 -- Robin Berjon to redraft proposed product page on API Minimization () -- due 2012-01-31 -- OPEN <trackbot> … to get your proposal on how to take this forward. … I will just bump the date on action-514 into the future understanding that it's a placeholder. <noah> ACTION-514? <trackbot> ACTION-514 -- Robin Berjon to draft a finding on API minimization -- due 2012-05-01 -- OPEN <trackbot> <noah> <noah> ACTION-652? <trackbot> ACTION-652 -- Yves Lafon to danA to come back with a proposal on API minimization draft -- due 2012-01-17 -- OPEN <trackbot> <noah> close ACTION-652 <trackbot> ACTION-652 DanA to come back with a proposal on API minimization draft closed <noah> ACTION-661? <trackbot> ACTION-661 -- Ashok Malhotra to ask harry and thomas to join us on a future TAG call. -- due 2012-01-13 -- OPEN <trackbot> <noah> close ACTION-661 <trackbot> ACTION-661 Ask harry and thomas to join us on a future TAG call. closed <noah> ACTION: Noah to follow up with Harry Halpin on 19 January 2012 telcon discussion of CAs [recorded in] <trackbot> Created ACTION-665 - Follow up with Harry Halpin on 19 January 2012 telcon discussion of CAs [on Noah Mendelsohn - due 2012-01-26]. <noah> ACTION-646? <trackbot> ACTION-646 -- Ashok Malhotra to with help from Noah, update product page and product index to reflect publication of Client-Side state finding -- due 2012-01-03 -- OPEN <trackbot> <noah> close ACTION-646 <trackbot> ACTION-646 With help from Noah, update product page and product index to reflect publication of Client-Side state finding closed <noah> ACTION-647? <trackbot> ACTION-647 -- Ashok Malhotra to draft product page on client-side storage focusing on specific goals and success criteria Due: 2012-01-17 -- due 2012-01-11 -- OPEN <trackbot> <noah> ACTION-523? <trackbot> ACTION-523 -- Ashok Malhotra to (with help from Noah) build good product page for client storage finding, identifying top questions to be answered on client side storage -- due 2012-01-17 -- OPEN <trackbot> <noah> ACTION-632? <trackbot> ACTION-632 -- Ashok Malhotra to frame issues around client-side storage work -- due 2012-01-02 -- OPEN <trackbot> <noah> close ACTION-523 <trackbot> ACTION-523 (with help from Noah) build good product page for client storage finding, identifying top questions to be answered on client side storage closed <noah> 523 was duplicate of 647 <noah> ACTION-647 Due 2012-02-07 <trackbot> ACTION-647 Draft product page on client-side storage focusing on specific goals and success criteria Due: 2012-01-17 due date now 2012-02-07 <noah> ACTION-632 Due 2012-02-07 <trackbot> ACTION-632 Frame issues around client-side storage work due date now 2012-02-07 <noah> ACTION-641? <trackbot> ACTION-641 -- Noah Mendelsohn to try and find list of review issues relating to HTML5 from earlier discussions -- due 2012-01-17 -- OPEN <trackbot> <noah> ACTION-598? <trackbot> ACTION-598 -- Yves Lafon to publish as a note what had been the FPWD (Raman's draft) on client side state -- due 2012-01-15 -- OPEN <trackbot> <noah> YL: I started working on the draft, will do it next week. <noah> YL: Will send it to me. <noah> ACTION-598 Due 2012-01-24 <trackbot> ACTION-598 Publish as a note what had been the FPWD (Raman's draft) on client side state due date now 2012-01-24 <noah> ACTION-658? <trackbot> ACTION-658 -- Yves Lafon to prepare telcon discussion of protocol-related issues, e.g. Websockets/hybi (but not SPDY)Due: 2012-02-21 -- due 2012-01-13 -- OPEN <trackbot> <noah> ACTION-658? <trackbot> ACTION-658 -- Yves Lafon to prepare telcon discussion of protocol-related issues, e.g. Websockets/hybi (but not SPDY) -- due 2012-02-21 -- OPEN <trackbot> <noah> ACTION-638? <trackbot> ACTION-638 -- Yves Lafon to help Noah figure out best ways, if at all, for TAG to participate in IETF paris -- due 2011-12-20 -- OPEN <trackbot> <noah> ACTION-638 Due 2012-02-15 <trackbot> ACTION-638 Help Noah figure out best ways, if at all, for TAG to participate in IETF paris due date now 2012-02-15
http://www.w3.org/2001/tag/2012/01/19-minutes.html
CC-MAIN-2014-41
refinedweb
3,649
69.72
Please excuse my inexpertise. I don't have much experience with the command line. From my searches, the main reason for CMD use(*) is automation. I haven't, though, found concrete cases. Most examples are either contrived examples or simple cases where automation is not really needed. Hence my question - What are examples of actions that CMD and batch files are used for in the real world? (And as below - aside from personal preference and where there is no GUI.) I'm not looking for the script itself, just its goal. *Aside from personal preference, or cases where a GUI is not. Batch files have the same benefit as any other scripting tool, they make doing repetitive (sometimes complex, sometimes boring) tasks simple. I wrote a very simple batch script a few months ago, because I wanted a set of commands to run every evening at 6 PM. So, I created this script which uses mercurial commands to check for changes (the owner of a site I maintain adds and removes stuff with FTP). Then commits any changes, and pushes them to the origin (my server). hg addremove hg commit -m "Daily update from Prod" hg push Then I setup a scheduled task to run this script daily. This happens to be the first batch script I've written in years, however, due to me moving to Mac at my latest job where I'm doing development. However, I use shell scripts (same type of thing on a different OS) daily. The one I use most ensures that I have a network drive mounted, logs me into our source code repository (prompting for a password, of course) and fires of the build script. All in all, it would take me 3 minutes to enter the commands to do this myself, but when I did, I regularly forgot to check that my drive was mounted (mapped), and this caused a 10 to 20 minute delay... The script doesn't forget. I used to be a system administrator for a community college and used batch scripts all the time, mostly, as you mentioned, for automation. When I started there, we had to visit each machine to manually update the antivirus and run a scan. I created a batch script that would copy the update from a specific location on our network, install it, and start a scan. Then at each machine in a group, throw my floppy disk in the drive, hit Window-R (to open the run dialog) and type a:\up.bat... After I got a few started, I'd kick back and read until they were finished. I created another script to get new machines (or repurposed old machines) up and running with our standard setup before we bought imaging software. What was previously a tedious process of insert this disc, double click some stuff, click next a few times, wait..., click next some more, switch discs, do the same thing... Turned into insert disc, insert floppy... run command. switch discs when prompted... When the network was more reliable, that was changed to just inserting the floppy and running the command, it downloaded the install files from a network drive for me. CMD is just a little power house available on all windows PC's. net user {samid} /domain can quickly tell me most stats I want to know about a domain account and dsquery user -samid {samid}|dsget user -memberof -expand|dsget group -samid covers the rest by recursively looking up group memberships for a domain account. CMD net user {samid} /domain dsquery user -samid {samid}|dsget user -memberof -expand|dsget group -samid I can quickly lookup all mapped network locations on a PC: for /f "tokens=3" %f in ('net use ^|findstr \\') do echo %f for /f "tokens=3" %f in ('net use ^|findstr \\') do echo %f Or batches can be used for doing jobs where compiling is impractical @echo off echo This script will close Internet explorer then clear out your temp files pause taskkill /f /im iexplore.exe RunDll32.exe InetCpl.cpl,ClearMyTracksByProcess 2 RunDll32.exe InetCpl.cpl,ClearMyTracksByProcess 8 "C:\Program Files\Internet Explorer\iexplore.exe" This was a quick and dirty batch file that I put together to clear IE temp files. It's easier to drop a bat file onto someone desktop then trying to explain how to clear out there temp files. Lastly because I can go on all day about the benefits and usefulness of CMD it taught me so much about how scripting works and how the computer interprets commands. It wasn't until I learnt CMD/Batch that I really got interested in Scripting and Programming. Edit: Just a concrete example of CMD being more effective then C# (Hello World!) public class Hello1 { public static void Main() { System.Console.WriteLine("Hello, World!"); } } echo Hello, World! echo Hello, World! @echo Hello, World! @pause Or @echo off echo Hello, World! pause CMD and Batch has a lot less overhead when it comes to simple tasks. Looking up domain accounts info / network locations / clearing temp files of a running application. asked 2 years ago viewed 126 times active
http://superuser.com/questions/693433/concrete-cases-for-cmd-batch-use
CC-MAIN-2016-26
refinedweb
857
71.04
- Author: - jerzyk - Posted: - December 11, 2007 - Language: - Python - Version: - .96 - view request post - Score: - -2 (after 6 ratings) This will return HTTP 405 if request was not POSTed. same way you can forbide POST request, change 'POST' to 'GET' Decorators provided for your convenience. More like this - Get boolean value from request send by Ajax by zalun 6 years, 10 months ago - Allow separation of GET and POST implementations by agore 3 years, 11 months ago - Get typed dictionary from request GET or POST prameters (MergeDict) by pahaz 3 years, 3 months ago - Declaring django views like web.py views by danigm 6 years, 3 months ago - Simple views method binding by SpikeekipS 7 years, 11 months ago django has it's own decorators from django.views.decorators.http import require_http_methods, require_GET, require_POST # Please login first before commenting.
https://djangosnippets.org/snippets/505/
CC-MAIN-2016-22
refinedweb
138
57.4
Simple yet flexible natural sorting in Python. Simple yet flexible natural sorting in Python. - Source Code: - Downloads: - Documentation: - Examples and Recipes - How Does Natsort Work? - API - Optional Dependencies - fastnumbers >= 0.7.1 - PyICU >= 1.0.0. this useful hack.. - recursively descend into lists of lists - controlling the case-sensitivity - sorting file paths correctly - allow custom sorting keys natsort comes with a shell script called natsort, or can also be called from the command line with python -m natsort. natsort requires Python version 2.6 or greater or Python 3.3 or greater. It may run on (but is not tested against) Python 3.2. The most efficient sorting can occur if you install the fastnumbers package (version >=0.7.1);. It is recommended that you install PyICU if you wish to sort in a locale-dependent manner, see for an explanation why. These are the last three entries of the changelog. See the package documentation for the complete changelog. - Improved development infrastructure. - Migrated documentation to ReadTheDocs. - Added additional unicode number support for Python 3.6. - Renamed several internal functions and variables to improve clarity. - Improved documentation examples. - Added a “how does it work?” section to the documentation. - The ns enum attributes can now be imported from the top-level namespace. - Fixed a bug with the from natsort import * mechanism. - Fixed bug with using natsort with python -OO. Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/natsort/
CC-MAIN-2017-26
refinedweb
246
62.75
Given a binary matrix, the task is to find whether row swaps or column swaps give maximum size sub-matrix with all 1’s. In a row swap, we are allowed to swap any two rows. In a column swap we are allowed to swap any two columns. Output “Row Swap” or “Column Swap” and the maximum size. Examples: Input : 1 1 1 1 0 1 Output : Column Swap 4 By swapping column 1 and column 2(0-based indexing), index (0, 0) to (1, 1) makes the largest binary sub-matrix. Input : 0 0 0 1 1 0 1 1 0 0 0 0 1 1 0 Output : Row Swap 6 Input : 1 1 0 0 0 0 0 0 0 1 1 0 1 1 0 0 0 0 1 1 0 Output : Row Swap 8 The idea is to find both row swap and column swap maximum size binary submatrix and compare. To find the maximum sized binary sub-matrix with row swaps allowed, make a 2-D array, say dp[i][j]. Each value of dp[i][j] contains the number of consecutive 1s on right side of (i,j) in i-th row. Now, store each column in the 1-D temporary array one by one, say b[] and sort, and find maximum b[i] * (n – i), since b[i] is indicating the sub-matrix width and (n – i) is sub-matrix height. Similarly, to find the maximum size binary sub-matrix with column swap allowed, find dp[i][j], where each value contains the number of consecutive 1 below the (i, j) in j-th column. Similarly, store each row in the 1-D temporary array one by one, say b[] and sort. Find maximum b[i] * (m – i), since b[i] is indicating the submatrix height and (n – i) is submatrix width. Below is C++ implementation of this approach: // C++ program to find maximum binary sub-matrix // with row swaps and column swaps. #include <bits/stdc++.h> #define R 5 #define C 3 using namespace std; // Precompute the number of consecutive 1 below the // (i, j) in j-th column and the number of consecutive 1s // on right side of (i, j) in i-th row. void precompute(int mat[R][C], int ryt[][C + 2], int dwn[R + 2][C + 2]) { // Travesing the 2d matrix from top-right. for (int j=C-1; j>=0; j--) { for (int i=0; i<R; ++i) { // If (i,j) contain 0, do nothing if (mat[i][j] == 0) ryt[i][j] = 0; // Counting consecutive 1 on right side else ryt[i][j] = ryt[i][j + 1] + 1; } } // Travesing the 2d matrix from bottom-left. for (int i = R - 1; i >= 0; i--) { for (int j = 0; j < C; ++j) { // If (i,j) contain 0, do nothing if (mat[i][j] == 0) dwn[i][j] = 0; // Counting consecutive 1 down to (i,j). else dwn[i][j] = dwn[i + 1][j] + 1; } } } // Return maximum size submatrix with row swap allowed. int solveRowSwap(int ryt[R + 2][C + 2]) { int b[R] = { 0 }, ans = 0; for (int j=0; j<C; j++) { // Copying the column for (int i=0; i<R; i++) b[i] = ryt[i][j]; // Sort the copied array sort(b, b + R); // Find maximum submatrix size. for (int i = 0; i < R; ++i) ans = max(ans, b[i] * (R - i)); } return ans; } // Return maximum size submatrix with column // swap allowed. int solveColumnSwap(int dwn[R + 2][C + 2]) { int b[C] = { 0 }, ans = 0; for (int i = 0; i < R; ++i) { // Copying the row. for (int j = 0; j < C; ++j) b[j] = dwn[i][j]; // Sort the copied array sort(b, b + C); // Find maximum submatrix size. for (int i = 0; i < C; ++i) ans = max(ans, b[i] * (C - i)); } return ans; } void findMax1s(int mat[R][C]) { int ryt[R + 2][C + 2], dwn[R + 2][C + 2]; memset(ryt, 0, sizeof ryt); memset(dwn, 0, sizeof dwn); precompute(mat, ryt, dwn); // Solving for row swap and column swap int rswap = solveRowSwap(ryt); int cswap = solveColumnSwap(dwn); // Comparing both. (rswap > cswap)? (cout << "Row Swap\n" << rswap << endl): (cout << "Column Swap\n" << cswap << endl); } // Driven Program int main() { int mat[R][C] = {{ 0, 0, 0 }, { 1, 1, 0 }, { 1, 1, 0 }, { 0, 0, 0 }, { 1, 1, 0 }}; findMax1s(mat); return 0; } Output: Row Swap of arrays in which all adjacent elements are such that one of them divide the another - Find Maximum dot product of two arrays with insertion of 0’s - Find the largest rectangle of 1’s with swapping of columns allowed - Count ways to increase LCS length of two strings by one - Count of AP (Arithmetic Progression) Subsequences in an array -.
https://www.geeksforgeeks.org/check-whether-row-column-swap-produces-maximum-size-binary-sub-matrix-1s/
CC-MAIN-2018-34
refinedweb
792
59.37
Since the release of Kubernetes v1.12 it has been possible to extend kubectl with subcommands (plugins) to make automating repetitive Kubernetes tasks seem more kubectl native and easier to use for developers. Before you can start to make use of kubectl subcommands you’re going to need to upgrade your version of kubectl cli to at least v1.12. Follow the official Kubernetes documentation for your platform here. (link:). What are the benefits of kubectl subcommands? Using subcommands doesn’t really add any benefit that you can’t introduce by writing your own scripts and distributing them as you would normally. However they do allow your scripts to look like they are built right into kubectl and feel more natural for users if you are distributing these scripts to developer machines. Kubectl plugins suit 2 major languages each with their own benefits: Go: The go to choice for cloud native/kubernetes which can be distributed as a single binary. Bash: Cross platform and can be relied upon due to users invoking kubectl which is a bash command. As an example this guide is going to focus on building a kubectl subcommand executed with `kubectl cmd` that is going to run an argument based command on all pods in a given namespace that match a `| grep` filter. This example will use `env` as the command to run on pods to simply print out all environment variables however can be invoked with any command you desire. Save the following script anywhere in your `$PATH`. Make the script executable with `chmod +x /usr/local/bin/kubectl-cmd` To confirm that kubectl is aware of your new plugin type `kubectl plugin list` Now that the plugin is listed as available inside kubectl there is nothing left to do. You can now invoke your new kubectl subcommand with `kubectl cmd <arg1> <arg2> <arg3>`. Invoking kubectl plugins As seen in the example above, arguments that your plugin is invoked with will be passed to your executable in the same way they would if you were to run this in any other way. You can read more about the kubectl plugin mechanisms on the official Kubernetes GitHub. Aside from writing your own kubectl plugins you can use krew (link:) to find and install community plugins written by other developers to extend the built in functionality of kubectl. You can also publish your own plugins to krew for others to use. The code used in this guide can be found on GitHub so you can quickly get started. I’ve also included a simple Ansible Playbook to simplify installing any custom plugins you have written which can be found E-Book: Avoid Sticker Shock—How to Determine the True Cost of Clouds These postings are my own and do not necessarily represent BMC's position, strategies, or opinion. See an error or have a suggestion? Please let us know by emailing blogs@bmc.com.
https://www.bmc.com/blogs/kubernetes-how-to-write-kubectl-subcommands/
CC-MAIN-2019-35
refinedweb
487
58.21
11 July 2008 17:58 [Source: ICIS news] LONDON (ICIS news)--Contract partners confirmed the settlement of ?xml:namespace> A source close to a major producer reported on Wednesday the CRP had been settled at €1,245/tonne ($1,945/tonne) FCA Rotterdam, up €55/tonne on the June settlement of €1,190/tonne FCA Rotterdam on ethylene and energy costs. By Friday, three participants in the CRP mechanism had confirmed that they would follow this number. Two traditional participants, however, said they would not publicly settle with the producer, reverting to a fallback mechanism. “It’s too high, even with energy and ethylene up,” said one player. “After the ethylene settlement, which was up €190/tonne, and the benzene settlement, which was done €18/tonne, we expected a much smaller increase and energy is only really worth a few euros per tonne.” The producer, however, said that in the current context of high feedstock and energy costs that it was necessary to recoup costs to make “long-term production viable”. ($1 = €0.63) To discuss issues facing the chemicals industry visit ICIS connect
http://www.icis.com/Articles/2008/07/11/9139768/players-confirm-july-fca-styrene-rises-55tonne.html
CC-MAIN-2013-48
refinedweb
184
50.06
Vol. 12 No. 4 The Ultimate Community Lifestyle Magazine Women of CHARITY SPOTLIGHT ON NEIGHBORS: THE FARRIS FAMILY 2016 | Vol. 12 No. 4 Travel Back in Time to FORT CHRISTMAS What's Inside ON THE COVER 37 Paul Privette WOMEN OF CHARITY Marcia Conwell, Theresa Lowe, Cathyann Solomon, Barzella Papa, 20 46 CO V ER S TO RIES Spotlight on Neighbors: Meet the Farris Family The 2016 Holiday Gift Guide 60 68 Pumpkin & Spice Make Everything Nice Remember Florida’s Cracker Culture at Fort Christmas TheVillageJournal.com | 55 CON TE NTS Paul Privette 28 IN EVERY ISSUE 10 #VJConnect 14 Haile Village Center Directory 18 Haile Market Square Directory 32 Real Estate Market Watch 33 Community Map 72 Calendar of Events 75 Snapshots 79 Register of Advertisers 80 From the Kitchen of Dean Cacciatore 46 life 37 Women of Charity: Setting a Place at the Table for Everyone 46 The 2016 Holiday Gift Guide 54 7 Questions For Millennial Financial Success taste local 60 Pumpkin & Spice 20 Spotlight on Neighbors: wellness Meet the Farris Family Make Everything Nice 24 The Celebration Begins 64 Bike Fit: How to Select 28 Patty Cakes Celebrates 5 Years explore the Right Bike 68 Remember Florida’s Cracker Culture at Fort Christmas 6 | CONTENTS Paul Privette 20 E DITOR’ S NO TE When we began to consider the impact that our four Women of Charity have had on the community, the results were astounding. The countless lives they have each effected over their careers is one thing but cumulatively, working in conjunction with one another and so many other charitable organizations throughout our region, they create a net that catches people when they fall…and that is the miracle of community. We celebrate and honor Theresa Lowe, Cathyann Solomon, Barzella Papa and Marcia Conwell’s lives’ work (p.37) and hope that after taking a glimpse into their hearts, you will be as inspired as we were. Channing Williams editor@thevillagejournal.com In this edition’s spotlight, Greg, Susan and Madelyn Farris welcome home teens from around the globe as part of the Education First High School Exchange Year program, hosting teens from China, Switzerland, Norway, Germany and Holland (p.20). Finally, we offer a look inside the much anticipated development of Celebration Pointe and all it will mean to the residents of Gainesville in the next several years (p.24). Patticakes, our favorite cupcake and coffee shop, celebrates its fifth anniversary with the opening a new storefront downtown (p.28). And once again, we offer up our annual holiday gift guide, full of ideas that will make your loved ones very thankful (p.46). Warmly, Behind the scenes at our cover shoot with the "Women of Charity." Top: Rachel Cole & Barzella Papa. Bottom: Paul Privette. 8 | EDITORS NOTE @thevillagejournal #VJCONNECT @villagejournal INSIDE SCOOP Hear from the people featured in this issue. @villagejournal @VillageJournal Our favorite ways to give back during the holidays According to VJ Staffers Make Monetary Donations celebrationpt The signs are up, the flag is flying and the building is being filled with inventory. What an incredible store for Celebration Pointe. Buy gifts for families in need Invite friends and family to join your holiday dinner patticakesgnv Still 'ing over these #pumpkincupcakes # The Village Journal says "hi" from Paris, France! Thank you Nita for letting us tag along with you to the Notre Dame. Haile Village Bistro Abbas has been busy baking! Come in for Muffin's, White Chocolate Cheesecake, Coconut Creme Pie, etc. So many sweets to choose from! Our latest issue has made it's way to Greece! A big thank you to Shaina Piot for sharing this amazing shot at the Temple of Apollo. Behind-the-scenes of our new issue! Who's excited for this year's gift guide? thevillagejournal.com Head to the web for more stories, resources and updates, or drop us a line to share your thoughts. 10 | CONNECT CON TR I BU TO RS Trevor Leavitt Trevor Leavitt is the Sports Performance Program Manager at UF Health Sports Performance Center.. Omar Oselimo With a strong foundation in spices and flavors of the Caribbean, Europe, Asia and Africa, and the unique ability to manipulate these flavors for an American audience, Omar’s cooking style has built a large fan base at both of his Gainesville restaurants, Reggae Shack Café and Southern Charm Kitchen. Omar was born in Jamaica and graduated from Johnson and Wales University. In 2003, he moved to Gainesville where he currently enjoys living with his wife, Arpita, and three children, Omar II, Anushka and Anokhi. PUBLISHER: Ryan Frankel EDITOR: Channing Williams DESIGN: Jean Piot, Senior Graphic Designer Alexandra Villella, Graphic Designer Rene van Rensburg, Graphic Designer Nita Chester, Production Manager ADVERTISING: Shannon Claunch, Account Executive SPECIAL CONTRIBUTORS: Cristina Cook Lillian Giunta Erin Park Cat Rudd Interns CONTRIBUTING WRITERS: Irene De Costa Nancy Dohn Erin Leigh Patterson Laura Jane Pittman Shannon J. Winslow-Claunch PHOTOGRAPHY: Kory Billingsly Paul Privette Kara Winslow DIGITAL MEDIA: Mehgan McLendon, Webmaster Jillian Kirby, Social Media Strategist ACCOUNTING: Diana Schwartz-Levine, Bookkeeper For advertising or licensing information call (352) 331-5560 or visit TheVillageJournal.com Tim Roark Tim Roark, Certified Financial Planner® (CFP®), is a third generation advisor at Koss Olinger, where he began his financial services career in 2011 after earning a Master’s degree from Duke University’s Fuqua School of Business. Tim specializes in retirement planning, investment management, and estate planning – utilizing Koss Olinger’s team approach and fee based planning process, The Wealth Navigator SystemTM. Farmers Market . . . . . . . . . . . 904-524-9705 The Creek-River Cross Church . . . . . . . 378-9793 Haile Village Bistro . . . . . . . . . . . . . . . . . 378-0721 Markey Wealth Management . . . . . . . . 338-1560 Limerock Road New York Life . . . . . . . . . . . . . . . . . . . . . . 379-8171 Neighborhood Grill . . . . . . . . . . . . . . . . . 240-6228 Patticakes . . . . . . . . . . . . . . . . . . . . . . . . . 376-1332 Queens Arms Pub . . . . . . . . . . . . . . . . . . 378-0721 Volcanic Sushi & Sake . . . . . . . . . . . . . . 363-6226 14 | DIRECTORY SunTrust Bank . . . . . . . . . . . . . . . . . . . . . 375-6868 Tillman Hartley, LLC . . . . . . . . . . . . . . . . 335-9015 JEWELRY Sander’s Jewelers . . . . . . . . . . . . . . . . . . 331-6100 LEGAL Law Offices of Allan H. Kaye, P.A. . . . . . . . . . . . . . . . . . . 375-0816 Law Offices of Steven Kalishman . . . . 376-8600 Mark J. Fraser, Attorney at Law . . . . . . 367-0444 Mowitz Law & Title . . . . . . . . . . . . . . . . . 533-5035 Niesen, Price, Worthy, Campo, Frasier & Blakey, P.A. . . . . . . . . . . . . . . . 373-9031 Warner, Sechrest & Butts, P.A. . . . . . . . 373-5922 White & Crouch, P.A. . . . . . . . . . . . . . . . . 372-1011 MEDICAL Alix L. Baxter, M.D., P.A. Psychiatry and Psychotherapy . . . . . . . . . . . . . . . . . 373-2525 Benet Clinical Assessment . . . . . . . . . . 375-2545 CFK Cardiac Tech, LLC . . . . . . . . . . . . . 332-3760 Fetal Flix . . . . . . . . . . . . . . . . . . . . . . . . . . . 358-1168 Galvan Acupuncture and Herbal Medicine . . . . . . . . . . . . . . . 327-3561 Haile Endodontics . . . . . . . . . . . . . . . . . . 374-2999 Haile Medical Group . . . . . . . . . . . . . . . . 367-9602 Haile Plantation Family Dental . . . . . . . 375-6116 Haile Plantation Family Medicine (UF) . . . . . . . . . . . . . . . . . . . . . . 265-0944 Infectious Disease Consultants . . . . . . 375-000 S POT L I GH T ON N EI GH B O R S SHARING their Gainesville Life Meet the Farris Family By Irene Da Costa | Footstone Photography 20 | LOCAL LOCAL O. TheVillageJournal.com | 21.” "IT OPENS YOUR MIND AND YOUR WORLD UP... THAT IT’S NOT THE UNITED STATES, ITS GLOBALLY, WE ALL HAVE TO GET ALONG," Greg Farris 22 | LOCAL. TheVillageJournal.com | 23 ins g e B n o i t a r b e l The Ce en, ro Shop op P s s a B e k esses li s. e and busin s ri s Pointe build By Laura Jane Pittman re n tu o c ti u tr ra s b e le e th As ding C ent surroun the excitem T raveling 24 | LOCAL–seating Regal Cinemas set to open in the fall of 2017, a 137-room Hotel Indigo and the new 60,000 sq. ft. headquarters of Info Tech. of Celebration Pointe Development Partners said, “Celebration Pointe has not only been designed to be a retail, entertainment and office destination, but it is also planned to support a true and vibrant live-work-play environment for the local community.â€?. According to the developer, the project is on track and running on schedule. Ralph Conti, TheVillageJournal.com | 25. 26 | LOCAL .” TheVillageJournal.com | 27 28 | LOCAL Patticakes Celebrates 5 Years with the Opening of its New Downtown Location By Erin Leigh Patterson | Footstone Photography caketo-icing ratio treat. There was no turning back. Finding the best one and even making the best one became an obsession, and shortly therafter a family endeavor. David: You always tell this story!. J: O h. Erin Leigh: Remind me how we got here. J: Anyway, why cupcakes? Again, you. Jan: That’s always a great question… TheVillageJournal.com | 29 GAR, SPACE & EVERYTHING NIC E: Whose idea was the name “Patticakes”? D: W e weren’t looking to expand! But here we are.. 30 | LOCAL: W hat. TheVillageJournal.com | 31 MARKET WATCH A selection of single-family and attached homes sold in Haile Plantation, July 11, 2016 through October 9, 2016. Provided by Coleen DeGroff of RE/MAX Professionals. The Links | SW 52nd Avenue Haile Village Center | SW 48th Place Year Built Sq Foot Bedroom/Bath 1998 870 Sold Price 1/1 $100,000 Quail Court | SW 88th Court Sold Price 2003 1,401 2/2.5 $185,000 Southbrooke | SW 91st Drive Year Built Sq Foot Bedroom/Bath 1983 960 Year Built Sq Foot Bedroom/Bath Sold Price 2/1 $107,500 Year Built Sq Foot Bedroom/Bath 2006 1,469 Sold Price 3/2 $202,000 The Village at Haile | SW 52nd Road Lexington Farms | SW 54th Lane Year Built Sq Foot Bedroom/Bath Year Built Sq Foot Bedroom/Bath 2006 840 1/1 Sold Price $110,000 Quail Court | SW 88th Court Year Built Sq Foot Bedroom/Bath Sold Price 1990 1,562 3/2 $205,000 Chickasaw Way | SW 52nd Avenue Sold Price Year Built Sq Foot Bedroom/Bath Sold Price 1983 1,058 2/2 $112,000 1996 1,376 The Links | SW 52nd Avenue Grahams Mill | SW 91st Terrace Year Built Sq Foot Bedroom/Bath 1998 895 Sold Price 1/1 $118,000 3/2 $210,000 Year Built Sq Foot Bedroom/Bath Sold Price 1989 2,102 3/2 $228,000 Plantation Villas | SW 97th Way Hampstead Par | S W 39th Ave Year Built Sq Foot Bedroom/Bath Year Built Sq Foot Bedroom/Bath Sold Price Sold Price 1995 1,088 2/2 $122,000 1999 1,992 3/2 $245,000 The Links | SW 52nd Avenue Ashleigh Circle | SW 34th Road Year Built Sq Foot Bedroom/Bath Sold Price Year Built Sq Foot Bedroom/Bath Sold Price 1998 1,369 3/2 $134,000 1999 2,055 3/2 $254,000 The Links | SW 52nd Avenue Hampstead Park | SW 35th Lane Year Built Sq Foot Bedroom/Bath Sold Price Year Built Sq Foot Bedroom/Bath Sold Price 1998 1,369 3/2 $138,000 1999 2,224 4/2 $261,900 Chestnut Hill | SW 47th Lane Haile Village Center | SW 91st Drive Year Built Sq Foot Bedroom/Bath Sold Price Year Built Sq Foot Bedroom/Bath Sold Price 1986 1,264 3/2 $162,000 1999 2,040 3/3.5 $275,000 Evans Hollow | SW 88th Court Lexington Farms | SW 55th Lane Year Built Sq Foot Bedroom/Bath Year Built Sq Foot Bedroom/Bath Sold Price Sold Price 1987 2,241 3/2 $170,000 1991 2,076 4/2 $276,900 Laurel Park | SW 54th Lane Amelia Gardens | SW 103rd Court Year Built Sq Foot Bedroom/Bath Sold Price 1983 1,498 3/2 $179,000 Year Built Sq Foot Bedroom/Bath Sold Price 1994 2,029 3/2 $285,000 CONTINUED ON PAGE 34 32 | 4th Av e India Station Butterfly Garden Kestrel Point The Links Condominiums Middleton Green Chickasaw way Haile Sutherland Crossing Blvd Indigo Square Magnolia Walk Grahams Mill HAILE PLANTATION COMMUNITY MAP Evans Hollow Chestnut Hill Planters Grove Kanapaha * Middle School Quail Heritage Court Green Laurel Park Southgate SW SW 91st ST PRESENTED BY Coleen DeGroff, MBA REALTOR, Broker Associate Founders Hill The Haile VIllage Center Camden Court Evans Hollow Lexington Farms Haile Equestrian Center Tower Rd Hickory Walk The Hamptons Plantation Villas Bennets Garden Spalding Place * Historic Haile Homestead A Shopping Eloise *Gardens Trails Lugano Parks * er rch Schools Outside of Haile Plantation TheVillageJournal.com | 33 Rd MARKET WATCH Chickasaw Way | SW 52nd Avenue Sable Pointe | SW 32nd Lane Year Built Sq Foot Bedroom/Bath Year Built Sq Foot Bedroom/Bath Sold Price Sold Price 2005 2,208 3/2 $300,000 2001 2,776 4/3 $440,000 Victoria Circle | SW 29th Lane The Preserve | SW 45th Boulevard Year Built Sq Foot Bedroom/Bath Year Built Sq Foot Bedroom/Bath Sold Price 2002 2,099 3/2 $308,000 Sold Price 1990 2,904 4/3.5 $450,000 Hampstead Park | S W 94th Way Preston Wood | SW 91st Terrace Year Built Sq Foot Bedroom/Bath Year Built Sq Foot Bedroom/Bath Sold Price 1999 2,647 3/2 $327,000 The Preserve | SW 85th Way Year Built Sq Foot Bedroom/Bath Sold Price 2005 3,800 4/3.5 $482,500 Sable Pointe | SW 34th Lane Sold Price 1992 2,275 3/2.5 $340,000 Year Built Sq Foot Bedroom/Bath The Hamptons | SW 105th Way Oakmont | SW 94th Drive Year Built Sq Foot Bedroom/Bath Year Built Sq Foot Bedroom/Bath Sold Price Sold Price 2001 2,971 5/3.5 $485,000 1 Sold Price 1997 2,326 3/2.5 $348,000 992 3,394 4/3.5 $489,000 Storeys Round | SW 92nd Drive Westfield Commons | SW 105th Drive Year Built Sq Foot Bedroom/Bath Year Built Sq Foot Bedroom/Bath Sold Price 2006 3,059 4/3 $387,500 Sold Price 1996 3,283 5/4 $497,500 Hampstead Park | SW 35th Lane Stratford Ridge | SW 88th Street Year Built Sq Foot Bedroom/Bath Year Built Sq Foot Bedroom/Bath Sold Price Sold Price 1999 2,894 4/3 $398,000 2002 3,299 3/2.5 $529,000 Benjamins Grove | SW 41st Place India Station | SW 46th Place Year Built Sq Foot Bedroom/Bath Year Built Sq Foot Bedroom/Bath Sold Price Sold Price 1995 4,801 3/2.5 $400,000 1994 3,525 5/4 $549,250 Annadale Round | SW 92nd Terrace Albury Round | SW 95th Terrace Year Built Sq Foot Bedroom/Bath Year Built Sq Foot Bedroom/Bath Sold Price 1999 3,112 5/3.5 $415,000 Sold Price 1997 3,654 4/3.5 $615,000 Preston Wood | SW 91 Terrace Westfield Commons | SW 105th Drive Year Built Sq Foot Bedroom/Bath Year Built Sq Foot Bedroom/Bath Sold Price Sold Price 2003 2,445 3/2 $415,000 2008 3,548 4/3.5 $650,000 Madison Square | SW 92nd Terrace Stratford Ridge | SW 40th Avenue Year Built Sq Foot Bedroom/Bath Year Built Sq Foot Bedroom/Bath Sold Price 1999 2,564 4/2.5 $422,000 Sold Price 2003 3,794 4/3.5 $712,000 India Station | SW 95th Terrace Matthews Grant | SW 92nd Drive Year Built Sq Foot Bedroom/Bath Year Built Sq Foot Bedroom/Bath Sold Price 1995 2,906 4/3 $425,000 Sold Price 1999 6,032 6/6 $930,000 For the complete list of homes sold in Haile Plantation during this time period, visit thevillagejournal.com/local. 34 | LOCAL 36 | LOCAL Women of Charity Setting a Place Everyone at the Table for By Shannon J. Winslow-Claunch | Footstone PhotographyprofitVillageJournal.com | 37 E.” »»»»»»»»»»»»»»»»»»»»» 38 | LIFE TheVillageJournal.com | 39 W 40 | LOCAL W Cathyann Solomon not have enough food to eat each day. Lack of access to a nutritious and adequate food supply has implications not only for the development of physical and mental disease, but also behavior and social skills. Food insecurity has been linked with diabetes, hypertension, cardiovascular problems, higher levels of anxiety and aggression. It has also been. »»»»»»»»»»»»»»»»»»»»» TheVillageJournal.com | 41 T. »»»»»»»»»»»»»»»»»»»»» 42 | LIFE TheVillageJournal.com | 43 B 44 | LIFE B. »»»»»»»»»»»»»»»»»»»»» TheVillageJournal.com | 45 Holiday givables poised to impress even the most discerning people on your list. 46 | LIFE Make Music Come Alive This holiday season add the Sonos PLAYBAR to your entertainment system and stream music from your favorite apps... all controlled by your smart phone. Sonos Playbar, Electronics World > $699 Angels on Earth From the #1 New York Times and international bestselling author Laura Schroff comes "Angels on Earth", a heartwarming and inspiring book about the profound impact acts of kindness can have on the world around us. Amazon.com > $20 Get it Write Handsome looks, smooth ballpoint action and knurled grip along with a level, rulers, a touchscreen stylus, and nestled inside...a Phillips and flathead screwdriver. Pen-Ultimate, restorationhardware.com > $25 When You're Running On Fumes Tiny enough to carry on a keychain, this smart phone charger holds enough backup power to enable 30 minutes of talk time when you need it most. Fuel Cell Charger, restorationhardware.com > $39 A New Worldview Problem solved. This lense clips onto your iPhone 7 allowing you to take high quality photos and video with edge-toedge clarity. iPhone 7 Olloclip Lense, olloclip.com > $79 A Picture’s Worth a Thousand Words Simplify your home office with easy and quick Wi-Fi printing from smartphones and tablets. Fujifilm, amazon.com > $162 Picture This Inspired, elegant, and classic; the Aura Frame will complement and enhance any environment, setting a new design standard for smart products in your life and home. Aura Frame, Ivory with Rose Gold Trim, auraframes.com> $399 TheVillageJournal.com | 47 Fit for a princess This fairytale ring design boasts a tiara-inspired design with heart details. Worn alone or with other stackable bands, its feminine shape and pink hue will make her feel like a princess. PANDORA Rose™ Ring, PANDORA Store at The Oaks Mall > $80 Blue Christmas Without You These classic earrings shown in blue, come in a variety of colors and will add classic style to any outfit. Kendra Scott Danielle Earrings, Pink Narcissus > $55 For the Green Goddess Non-toxic, longwearing, gorgeous colors you can feel good about. The only US brand with Myrrh to strengthen and nurture your nails. Habit Nail Polish, habitcosmetics. com $18 Hello Gorgeous This gold tassel necklace is the perfect staple accessory with endless possibilities. Believe us... she wants this one. Kendra Scott Phara Necklace, Pink Narcissus > $120 Native is All the Rave Handwoven from of locally-grown palm leaf, this is the perfect bag to bring to the beach and beyond. Pom Pom Tote Indego Africa, domino.com > $74.99 Baby it’s Cold Outside The street-style approved jacket is an updated bomber with sporty details like a cadet collar and sleek diamond downfilled quilting. Idol Bomber Jacket, Pure Barre > $172 Trendy Traction This lace up all-weather boot has got her covered with a rubber bottom half that provides tons of traction and boasts a waterproof finish. Chloe by Jack Rogers, Artsy Abode > $118 48 | LIFE TheVillageJournal.com | 49 Carry On Rain-resistant Rugged Twill and durable Bridle Leather combine in this durable made in the USA duffle bag. Filson Medium Duffle, filson.com > $395 A Plush Welcome As a welcoming symbol, this pineapple print is inviting and will add a vibrant touch to any room. Lilly Pulitzer, Pink Nacrissus > $40 Cheers to the Host This wine glass set is the perfect hostess gift for that friend who loves to entertain! Lilly Pulitzer Stemless Wine Glasses, Pink Narcissus > $35 Comfort for a King This stylish recliner features the Infinity System for people who take their relaxation seriously. American Leather’s Comfort Recliner™, Koontz Furniture & Design > $2,999 Major Moisturizing Use it anywhere you need to heal dry, cracked skin and restore moisture. It's wonderful to relieve painful, dry and red skin areas. Every Little Thing Balm, Cloud 9 Spa > $16 Anytime Spa Therapy A decadent blend of Chamomile, Lavender, Rosemary, Peppermint, Cinnamon, Lemongrass, Spearmint, Valerian Root and premium Flax Seeds. Luxe Satin Herbal Neck, Eye and Body Packs, Cloud 9 Spa > starting at $10 50 | LIFE Repurposed and Fabulous Evoke your favorite vintage with Rewined soy candles’ subtle wineinspired appearance and aroma. Rewined Candles, Barre Forte > $26 Under the Sea For your little mermaid, this handcrafted sofa blanket warms little toes with reminders of beachtime fun. Ages 4-7, Crocheted Mermaid Tail Children’s Blanket, amazon.com> $19.99 Hammock Happiness The most relaxing way for your little one to enjoy the outdoors. The sturdy one-seater hangs from a tree branch with a triangular wood bar and rope construction. Youth Hammock with Carry Bag, lakeside.com> $12.98 Retro Dare Devil Birch plywood sourced bike from from managed forests. This upright riding bike has a three-position adjustable seat height, Ages 2-6 Kiddimoto Evel Knievel Kurve Wooden Balance Bike, amazon.com > $130 Life's a Balancing Act Slacklining is a fun way for kids to build strength and confidence and increase their balance. Attaches easily to trees without damaging them, Classic Training Slackline, Nylon - Blue and Red, amazon.com > $69.98 Your Word is Your Bond Serious scholars or journalers will subscribe to this notebook with 160 pages of plain paper in unique covers that look like real stone. Stone Notebook, mochithings.com > $22.95 You Are My Sunshine Create gorgeous graphic images, using nothing but creativity and the power of the sun – no camera required. It's the perfect gift for budding artists and scientists alike. Sun Art Paper, restorationhardware.com > $10 Go-Go Gator This little alligator is all about movement. As the child holds the string and pulls, the alligator moves up and down and rewards the child with a fun click clack sound. Ages 19 months +. Follow-Along Gator Plan Toys Dancing Alligator, allstarchild.com > $24 52 | LIFE TheVillageJournal.com | 53 7 Questions For Millennial Financial Success By Tim Roark, CFP® 54 | TASTE $ $ I nvesting,. 1 However, for you, the employee, it’s probably the closest thing to free money that you’ll find, and it’s also a raise. So, max the match, take your raise, and tell your employer you appreciate their generosity. 2. LIFE TheVillageJournal.com | 55 here, keep in mind that there are tax benefits for contributing to a company retirement plan. 3, 56 | LIFE. 4 How much life insurance do I need? First, a few statistics that may jump out at you from Life Health Pro: • 40% of Americans who have life insurance coverage don’t think they have enough. • 40% of U.S. households with children under age 18 say they would immediately have trouble meeting everyday living expenses if a primary wage-earner were to die today. •, TheVillageJournal.com | 57 provide an income to your spouse). The key here is to know that term life insurance is relatively inexpensive for younger people and is becoming easier to acquire. •) 5. • UTMA/UGMA Account Pros: investment upside, minor tax benefits, can be used for items other than college Cons: it legally becomes the child’s possession when they reach the age of majority (18 in FL), may not cover all expenses, investment downside • Investment Account Earmarked for Child Pros: no strings attached, flexibility, investment upside Cons: you might spend it on something else, no tax benefits, investment downside If you already have legal documents in place, then it may be time to review them. Life changes, and it is important that your legal documents and beneficiary designations stay current. 6 How will my children pay for college? You may want to cover some, none, or all of your children’s college education. The earlier you start, the easier it will be. Here is a quick breakdown of a few options: 7: • 529 Plan Pros: tax benefits, transferrable to siblings or cousins, investment upside, accepted by all colleges Cons: tax penalties if not used for college, may not cover all expenses, investment downside 58 | LIFE • Rank the debt from highest interest rate to lowest interest rate and eliminate the highest interest rate debt first, then continue down the list. • It’s not always advantageous to pay off debt ahead of time. Example: tax-deductible mortgage interest at 4% vs. putting money into your 401(k) and effectively earning an 80% rate of return (5% contribution = 4% matching contribution) + investment returns that hopefully earn more than 4% on average. •. Koss Olinger is located at 2700-A NW 43rd Street, Gainesville, FL 32606. | 59 Everything Nice 5 Pumpkin & Spice Make Indulge in the savory flavors of the fall. 60 | TASTE TASTE Preparation 1½ ounces butter Soak the lentils and dry beans separately in water for at least 4 hours or preferably overnight, then boil them separately till tender., and then pour over enough water to just cover all the ingredients. Put a lid on the pan and leave to simmer for 30-40 minutes. When the stew is almost ready, add the chopped parsley. To make the stew creamier, remove a small bowlful to a food processor and blitz it with 2 ounces of stock. Pour it back in and the stew becomes instantly more velvety. Serve the stew in bowls, finishing each helping off with a cooling spoonful of sour cream. TheVillageJournal.com | Preparation 9 oz. plain yogurt ¼ tsp nutmeg 2 cup whole milk ¼ tsp ginger 2 cup cooked pumpkin pulp ½ tsp cinnamon 1/3 cup sugar Put all the ingredients into a blender and blend for 2 minutes. Pour into individual glasses and serve. The Lassi can be kept refrigerated for up to 24 hours. ¼ tsp salt One shot of Grand Marnie (optional) Filling Preparation 2 cups flour 2 1/2 cups pumpkin, cut into 1/2 inch cubes Mix the flour cinnamon and oil. Add water and knead until dough is stiff. This dough does not require a lot of kneading. Dough should not be sticky, some cracks will still form. Let dough sit for 15 minutes. 1 tsp cinnamon powder (flat) ½ tsp nutmeg ½ tsp salt 1 tsp ground cinnamon 2 tsp oil ¼ tsp salt water ¾ cup brown sugar 3 tablespoons raisins, soaked in warm water (optional) 1 tsp ground ginger. 62 | TASTE TheVillageJournal.com | 63 » BIKE FIT: HOW TO SELECT THE RIGHT BIKE By Trevor Leavitt 64 | WELLNESS W W ELLNESS W We all remember those days of gazing at new shiny bikes in bike store windows, or when Johnny down the street got the newest Schwinn bike with a banana seat. For myse lf,. ROAD MOUNTIAN TRIATHLON BEACH TheVillageJournal.com | 65 66 | WELLNESS’s favorite cartoon character on the side. It should be a thoughtful process that builds up the anticipation and helps you avoid buyer’s remorse. Be a considerate consumer and an astute gift-giver this holiday season by remembering the keys to bike-buying success. TheVillageJournal.com | 67 est. 1837 Remember Florida’s Cracker Culture at Fort Christmas By Nancy Dohn C entral WELLNESS 68 | EXPLORE EXPLORE Kory Billingsley Photography the park for 32 years and is a sixth-generation descendent from early Florida pioneers. “They made cane syrup and played music and as the years went by it grew.” TheVillageJournal.com | 69 est. 1837 Kory Billingsley Photography conditioning. Cracker architecture continues to have an influence today, with many of its elements being incorporated into modern home designs. Battlement at the Fort. Kory Billingsley Photography. This was understandable. Florida had been their home since the early 1700s when Native Americans from various tribes sought refuge from the pressures of expanding colonialism in sparsely populated Florida. Collectively, they were called Seminoles – a Spanish word meaning “wild.” “The forts were always located near a water supply,” Canada says. “But they had to grub Soon runaway slaves seeking safe-haven from the Southern plantation system joined the 70 | EXPLORE.” For more information about Fort Christmas, visit: TheVillageJournal.com | 71 E VEN TS C OM M UNI T Y E V E N T S For a full listing of community events or to post one of your own, visit TheVillageJournal.com/Events NOVEMBER » Woofstock Thursday, November 10 The Barn at Rembert Farms alachuahumane.org » Veterans Special Friday, November 11 - Sunday, November 13 Florida Museum of Natural History flmnh.ufl.edu » Bark for Life of Gainesville Saturday, November 19, 9 a.m. - 1 p.m. Westside Park relayforlife.org » The Cupcake Race Saturday, November 19, 2 p.m. - 9 p.m. Tioga Town Center tiogatowncenter.com » A Christmas Carol Saturday, November 26 – Thursday, December 22 Hippodrome Theatre thehipp.org » Tioga Tailgates Saturday, November 26 Fluid Lounge, Tioga Town Center tiogatowncenter.com » Holiday Festival & Tree Lighting Sunday, November 27, 4 p.m. - 8 p.m. Tioga Town Center tiogatowncenter.com/events » Wish Upon A Star Annual Holiday Toy Drive Friday, November 28 – Tuesday, November 29 Partnership For Strong Families pfsf.org DECEMBER » Tropix, Tioga Town Center Concert Series Friday, November 11, 7 p.m. - 10 p.m. Tioga Town Center tiogatowncenter.com » Iron Chef Competition Featuring Dragonfly, Mildred's New Deal Cafe and Eat the 80 Saturday, November 12 Haile Village Center facebook.com/the.haile.village.center EXPLORE 72 | EVENTS Festival of Trees VIP Preview Party Thursday, December 1 Partnership For Strong Families pfsf.org » Festival of Trees Thursday, December 1 – Saturday, December 3 Tioga Town Center tiogatowncenter.com » Farm and Cane Festival Saturday, December 3, 9 a.m. – 3 p.m. Dudley Farm Historic State Park friendsofdudleyfarm.org » Homestead Holidays at the Historic Haile Homestead Sunday, December 4, 12 p.m. – 4 p.m. Historic Haile Homestead hailehomestead.org » Candlelight Visits at the Historic Haile Homestead Friday, December 9, 6 p.m.- 9 p.m. Historic Haile Homestead hailehomestead.org Light the Village Friday, December 2, 5 p.m. Haile Village Center facebook.com/the.haile.village.center » Artwalk Gainesville Friday, December 2, 7 p.m. - 10 p.m. Downtown Gainesville artwalkgainesville.com » Holiday Craft Festival Saturday, December 10 Haile Village Center facebook.com/the.haile.village.center » The 20th Annual Stop Children’s Cancer Holiday Traditions: A Musical Celebration Sunday, December11, 4 p.m. - 5 p.m. Curtis M. Phillips Center for the Performing Arts gfwcfl-gainesvillewomansclub.org TheVillageJournal.com | 73 E VE N T S » Mommy & Me Onstage Wednesday, December 14, 5 p.m. Curtis M. Phillips Center for the Performing Arts dancealive.org » The Nutcracker Friday, December16, 7:30p.m., Saturday, December 17, 2 p.m., Sunday, December 18, 2 p.m. Curtis M. Phillips Center for the Performing Arts dancealive.org » Sugar Plum Tea Saturday, Dec.ember 17, 1 p.m. Fackler Foyer East dancealive.org JANUARY » Florida Museum of Natural History Volunteering Opportunities Wednesday, January 18, 2:30 p.m. - 4:30 p.m. Florida Museum of Natural History flmnh.ufl.edu » Collectors' Day Saturday, January 21, 10 a.m. - 3 p.m. Florida Museum of Natural History flmnh.ufl.edu FEBRUARY » Dance Alive National Ballet Presents Friar Tuck’s Pub February 4, 2 p.m. Fackler Foyer West dancealive.org » Dance Alive National Ballet Presents Robin Hood Friday, February 3, 7:30 p.m. and Saturday February 4, 2p.m. Curtis M. Phillips Center for the Performing Arts dancealive.org » American Heart Association Heart Ball Friday, February10, 6:30 p.m. Hilton UF Conference Center heart.org/gainesville Follow us on facebook.com/thevillagejournal for more event information and photos. 74 | EVENTS SNAPSHOTS Lemon Ball benefitting Alex’s Lemonade Stand September 8, 2016 Photos by Kara Winslow Battle of the Bands benefitting the Education Foundation Take Stock in Children October 1, 2016 Photos by Kara Winslow TheVillageJournal.com | 75 SN AP SH OT S 32nd Annual Art Festival at Thornebrook October 3, 2016 Photos by Kara Winslow Women’s Giving Circle Decade of Impact Luncheon October 8, 2016 Photos by Kara Winslow 76 | SNAPSHOTS SNAPSHOTS PACE Throw a Girl a Lifeline Luncheon October 17, 2016 Photos by Kara Winslow TheVillageJournal.com | 77 SN AP SH OT S Gainesville Gone Austin benefitting Child Advocacy Center October 27, 2016 Photos by Kara Winslow 78 | SNAPSHOTS REG IS T ER OF ADVERTISERS A Personal Elf (p.56) .......................................271-1111 Junior League of Gainesville (p.19) ......376-3805 All About Women (p.31) ............................ 331-3332 Kara Winslow Makeup Artist (p.27).............................321-356-3116 Altschuler Periodontic and Implant Center (p.13) ............................371-4141 Artsy Abode at the Oaks Mall (p.49, 53) ....................... 332-2127 Avera & Smith, Attorneys at Law (p.4) ..............................372-9999 Kinetix Physical Therapy (p.63) ............505-6665 Koontz Furniture & Design (p.23) ..........622-3241 Koss Olinger Financial Group (BC) .......373-3337 Lugano (p.15)................................................. 374-4910 Backstreet Blues Chop House (p.1 `) ...363-6792 Pink Narcissus (p.51) ..................................373-4874 Bosshardt Realty Services (p.35) ..........318-9703 Poser Plastic Surgery Center (ISFC) .... 372-3672 Cloud 9 Spa Salon (p.ISBC).................... 335-9920 Pure Aesthetics (p.9) ................................ 332-7873 Coleen DeGroff, Realtor (p.33) ..............359-2797 Pure Barre (p.16,17) ..................................... 627-6414 Dr. William Storoe, Oral & Maxillofacial Surgery (p.57) .........................371-4111 Saboré (p.11) ................................................. 332-2727 Electronics World (p.67) .......................... 332-5608 Tioga Town Center (p.2,3) ...................... 331-4000 Sun Country Sports Center (p.63) ..........331-8773 Footstone Photography (p.36).............. 562-3066 TradePMR (p. 59 ) .....................................332-8723 Haile Farmers Market (p.74) ......... 904-524-9705 UF Health (p.7) ........................................... 265-2222 Hippodrome Theatre (p.79) .....................375-4477 Whistler Tree Farm (p.71) ......................... 372-3383 TheVillageJournal.com | 79 F R OM T H E KIT CH EN O F D EAN CACC I • 4 tablespoons olive oil, divided • 2 skinless, boneless skin on chicken breasts • 4 cups of chicken broth • 1 cup dry white wine • Kosher salt • 3 large garlic cloves, minced • 1/2 cup of diced sweet onion • 2 tablespoons fresh chopped parsley • 1/2 teaspoon dried leaf thyme • 2 tablespoons tomato paste • 1/4 teaspoon crushed red pepper flakes • 2 bay leaves • 2 15-ounce cans chickpeas, rinsed, drained • 1/2 cup crushed tomatoes • 2 cups 1” cubes ciabatta cubed bread • 1/4 cup of grated pecorino romano cheese • 3 tablespoons coarsely chopped flat-leaf parsley 80 | TheVillageJournal.com.
https://issuu.com/villagejournal/docs/12.4_fnl_highrez
CC-MAIN-2017-26
refinedweb
5,641
63.49
Prototype Pattern Properties: - Create a new Address object. - Copy the appropriate values from the existing Address. While this approach solves the problem, it has one serious drawbackthe prototype. Calling the method on an existing Address object solves the problem in a much more maintainable way, much truer to good object-oriented coding practices. Applicability Use the Prototype pattern when you want to create an object that is a copy of an existing object.. Figure 1.5 Example of Prototype use A key consideration for this pattern is copy depth.. clone method The Java programming language already defines a clone method in the java.lang.Object classt. Related Patterns Related patterns include the following: Abstract Factory (page 6) Abstract Factories can use the Prototype to create new objects based on the current use of the Factory.. public interface Copyable{ 2. public Object copy(); 3. } The Copyable interface defines a copy method and guarantees that any classes that implement the interface will define a copy operation. This example produces a shallow copythat. public class Address implements Copyable{ 2. private String type; 3. private String street; 4. private String city; 5. private String state; 6. private String zipCode; 7. public static final String EOL_STRING = 8. System.getProperty("line.separator"); 9. public static final String COMMA = ","; 10. public static final String HOME = "home"; 11. public static final String WORK = "work"; 12. 13. public Address(String initType, String initStreet, 14. String initCity, String initState, String initZip){ 15. type = initType; 16. street = initStreet; 17. city = initCity; 18. state = initState; 19. zipCode = initZip; 20. } 21. 22. public Address(String initStreet, String initCity, 23. String initState, String initZip){ 24. this(WORK, initStreet, initCity, initState, initZip); 25. } 26. public Address(String initType){ 27. type = initType; 28. } 29. public Address(){ } 30. 31. public String getType(){ return type; } 32. public String getStreet(){ return street; } 33. public String getCity(){ return city; } 34. public String getState(){ return state; } 35. public String getZipCode(){ return zipCode; } 36. 37. public void setType(String newType){ type = newType; } 38. public void setStreet(String newStreet){ street = newStreet; } 39. public void setCity(String newCity){ city = newCity; } 40. public void setState(String newState){ state = newState; } 41. public void setZipCode(String newZip){ zipCode = newZip; } 42. 43. public Object copy(){ 44. return new Address(street, city, state, zipCode); 45. } 46. 47. public String toString(){ 48. return "\t" + street + COMMA + " " + EOL_STRING + 49. "\t" + city + COMMA + " " + state + " " + zipCode; 50. } 51. }
http://www.informit.com/articles/article.aspx?p=26452&seqNum=6
CC-MAIN-2019-26
refinedweb
395
63.76
On 03/23/2012 02:41 PM, Hart, Brian R. wrote: () > We have a couple of Dell MD storage arrays that when installed > setup multipath.conf to use this line: prio_callout > "/sbin/mpath_prio_rdac /dev/%n" On 03/23/2012 02:41 PM, Bryn M. Reeves wrote: >Callouts make life difficult when file systems go away so like the >path checkers before them they were merged into the libmultipath >shared library. This means that the daemon can lock them into memory >via mlock(2)/mlockall(2) and not have to worry about being able to >load a binary from disk when the paths to that storage have failed - >this is very useful if your root file system is on multipath and you >need to recover from a failure. > >In RHEL5 this is dealt with using a complex and fragile private >namespace and RAM-backed file system - the required binaries are >copied into this ramfs at daemon startup and the daemon unmounts >unnecessary file systems to avoid blocking when failures occur. >For this reason the parameter is now just "prio", e.g.: >device { … > prio rdac >} I can see that using shared libraries to handle priorities makes handling failed paths easier in many situations. But how to configure active/backup multipath in the following case, for example <![if !supportLists]>- <![endif]>1st path uses iSCSI over Infiniband <![if !supportLists]>- <![endif]>2nd path used iSCSI over Gigabit Ethernet <![if !supportLists]>- <![endif]>iSCSI target does not support ALUA <![if !supportLists]>- <![endif]>we want Infiniband path to have higher priority Will it be enough to set path_grouping_policy = multibus and path_selector = “service-time 0”? Will Gigabit Ethernet path be never used as long as Infiniband path is working? I appreciate your help, Alexander Murashkin
https://www.redhat.com/archives/dm-devel/2012-March/msg00186.html
CC-MAIN-2016-40
refinedweb
286
53.51
41809/pareto-frontier-movielens-using-algorithm-anyone-inputs-given Hey, Web scraping is a technique to automatically ...READ MORE Hi, @Shubham, Web scraping is the technique to ...READ MORE Hello @kartik, Just change the primary key of ...READ MORE Hi, @Mushfiqkhantrial, It is important to run pycharm.sh from the bin folder. The VM ...READ MORE If you are on RedHat, you can ...READ MORE Hi @Mike. First, read both the csv ...READ MORE This should work: from bs4 import BeautifulSoup html_doc='''<tr id="tr_m_1570:240HJY" ...READ MORE Here's the code: check = input("Enter the character: ...READ MORE Hi@Shubham, It may possible. When you saved your ML ...READ MORE OR At least 1 upper-case and 1 lower-case letter Minimum 8 characters and Maximum 50 characters Already have an account? Sign in.
https://www.edureka.co/community/41809/pareto-frontier-movielens-using-algorithm-anyone-inputs-given
CC-MAIN-2021-49
refinedweb
135
70.7
Question: I'm trying to learn myself some C++ from scratch at the moment. I'm well-versed in python, perl, javascript but have only encountered C++ briefly, in a classroom setting in the past. Please excuse the naivete of my question. I would like to split a string using a regular expression but have not had much luck finding a clear, definitive, efficient and complete example of how to do this in C++. In perl this is action is common, and thus can be accomplished in a trivial manner, /home/me$ cat test.txt this is aXstringYwith, some problems and anotherXY line with similar issues /home/me$ cat test.txt | perl -e' > while(<>){ > my @toks = split(/[\sXY,]+/); > print join(" ",@toks)."\n"; > }' this is a string with some problems and another line with similar issues I'd like to know how best to accomplish the equivalent in C++. EDIT: I think I found what I was looking for in the boost library, as mentioned below. boost regex-token-iterator (why don't underscores work?) I guess I didn't know what to search for. #include <iostream> #include <boost/regex.hpp> using namespace std; int main(int argc) { string s; do{ if(argc == 1) { cout << "Enter text to split (or \"quit\" to exit): "; getline(cin, s); if(s == "quit") break; } else s = "This is a string of tokens"; boost::regex re("\\s+"); boost::sregex_token_iterator i(s.begin(), s.end(), re, -1); boost::sregex_token_iterator j; unsigned count = 0; while(i != j) { cout << *i++ << endl; count++; } cout << "There were " << count << " tokens found." << endl; }while(argc == 1); return 0; } Solution:1 The boost libraries are usually a good choice, in this case Boost.Regex. There even is an example for splitting a string into tokens that already does what you want. Basically it comes down to something like this: boost::regex re("[\\sXY]+"); std::string s; while (std::getline(std::cin, s)) { boost::sregex_token_iterator i(s.begin(), s.end(), re, -1); boost::sregex_token_iterator j; while (i != j) { std::cout << *i++ << " "; } std::cout << std::endl; } Solution:2 Check out Boost.Regex. I think you can find your answer here: C++: what regex library should I use? Solution:3 If you want to minimize use of iterators, and pithify your code, the following should work: #include <string> #include <iostream> #include <boost/regex.hpp> int main() { const boost::regex re("[\\sXY,]+"); for (std::string s; std::getline(std::cin, s); ) { std::cout << regex_replace(s, re, " ") << std::endl; } } Solution:4 Unlike in Perl, regular expressions are not "built in" into C++. You need to use an external library, such as PCRE. Solution:5 Regex are part of TR1 included in Visual C++ 2008 SP1 (including express edition) and G++ 4.3. Header is <regex> and namespace std::tr1. Works great with STL. Getting started with C++ TR1 regular expressions Visual C++ Standard Library : TR1 Regular Expressions Note:If u also have question or solution just comment us below or mail us on toontricks1994@gmail.com EmoticonEmoticon
http://www.toontricks.com/2018/06/tutorial-c-tokenize-string-using.html
CC-MAIN-2018-34
refinedweb
498
64.3
The function int strncmp(const char *str1, const char *str2, size_t n); compares first n characters string pointed by str1 with first n characters of string pointed by str2. This function compares both string character by character. It will compare till n characters until the characters mismatch or a terminating null-character is reached. Function prototype of strncmp int strncmp(const char *str1, const char *str2, size_t n); - str1 : A pointer to a C string to be compared. - str2 : A pointer to a C string to be compared. - n : Maximum number of characters to compare. Return value of strncmp C program using strncmp function The following program shows the use of strncmp function to compare first n characters of two strings. #include <stdio.h> #include <string.h> #include <conio.h> int main() { char firstString[100], secondString[100]; int response, n; printf("Enter first string\n"); scanf("%s", &firstString); printf("Enter second string\n"); scanf("%s", &secondString); printf("Enter number of characters to compare\n"); scanf("%d", &n); response = strncmp(firstString, secondString, n); if(response > 0){ printf("second String is less than first String"); } else if(response < 0) { printf("first String is less than second String"); } else { printf("first String is equal to second String"); } getch(); return(0); } Output Enter first string ASDFGiuyiu Enter second string ASDFGiuhkjshfk Enter number of characters to compare 5 first String is equal to second String
https://www.techcrashcourse.com/2015/08/strncmp-string-c-library-function_19.html
CC-MAIN-2020-05
refinedweb
230
50.57
Storage policy monitoring for a storage networkDownload PDF Info - Publication number - US8195627B2US8195627B2 US11241554 US24155405A US8195627B2 US 8195627 B2 US8195627 B2 US 8195627B2 US 11241554 US11241554 US 11241554 US 24155405 A US24155405 A US 24155405A US 8195627 B2 US8195627 B2 US 8195627B2 - Authority - US - Grant status - Grant - Patent type - - Prior art keywords - file - nas - server - switch -/302—Details of management specifically adapted to network area storage [NAS] Abstract Description This application claims priority under: 35 U.S.C. §119(e) to U.S. Provisional Patent Application No. 60/615,002, filed on Sep. 30, 2004, entitled “A L 1. Field of the Invention This invention relates generally to storage networks and, more specifically, to selective file migration in a benefits of NAS storage networks over SAN storage networks also have drawbacks. One drawback with NAS file servers is that there is no centralized control. Accordingly, each client must maintain communication channels between each of the NFS swapped out. To access the object, the client sends an object access request directly to the NAS file server. When the file is relocated to a different NAS file server, subsequent requests for access to the file require a new look-up to locate the file and generate a new NAS file handle. An additional drawback is that NAS file servers can become consumed with handling I/O (Input/Output) requests associated with file manipulations and accesses. As a result, additional processing tasks such as queries can unduly burden the NAS file servers. The file server typically walks a tree-structured directory tree in search of information requested by the query, and if there is more than one file system, each file system is individually walked. Consequentially, the file server may either become less responsive to I/O requests or have high latency in responding to the query. In some contexts, high latency will make the results stale. Furthermore, NAS file servers can become unorganized and inefficient by, for example, storing critical data with other non-critical data. For example, large multimedia collections of MP3s used for leisure by employees can increase latency time in receiving information more germane to the productivity of an enterprise such as financial records. In another example, rarely accessed files may be stored on a premium, high bandwidth file server while often accessed files may be stored on a commodity, lower bandwidth server. Therefore, what is needed is a network device to provide selectively migrate objects between file servers on a storage network. Furthermore, there is a need for identifying files to be migrated without burdening the file servers in, for example, servicing I/O requests. The present invention provides selective migration in a storage network in accordance with a policy. The policy can include rules that establish which objects are migrated from a source file server to a destination file server based on file attributes (e.g., file type, file size, last access time, frequency of access). For example, large multimedia files that consume I/O process. An embodiment of a system configured according to the present invention comprises the NAS switch in communication with the client on a front-end of the storage network, and both a source file server and a destination file server on a back-end. The NAS switch associates NAS file handles (e.g., CIFS file handles or NFS file handles) received from the source and destination file servers with switch file handles that are independent of a location. The NAS switch then exports switch file handles to the client. In response to subsequent object access requests from the client, the NAS switch substitutes switch file handles with appropriate NAS file handles for submission to the appropriate NAS file server. Advantageously, a network device can organize storage networks according to file types and other attributes in order to reserve premium file servers for critical activities, such as servicing I/O requests. The present invention provides selective migration in a storage network in accordance with a policy. The policy can include rules that establish which objects are migrated from a source file server to a destination file server. For example, large multimedia files that consume. Some embodiments of a system are described with respect to The NAS switch 110 selectively migrates objects from a location on the source file server 120 to a location on the destination file server 130. Selective migration can determine which objects to migrated based on file attributes such as file type, file size, file access frequency, other file conditions, schedules, and the like, as determined by a policy. The policy can include rules that delineate certain actions in accordance with certain file attributes or conditions. In one embodiment, the NAS switch 110 can perform a rehearsal that shows the effects of a policy in a report. The policy can be iteratively adjusted to reach desired results. The NAS switch 110 provides continuous transparency to the client 140 with respect to object management. Specifically, the NAS switch 110 receives exported file system directories from the file servers 120, 130 containing NAS switch handles. To create compatibility between the client 140 and the NAS switch 110, the NAS switch 110. The process of obtaining a file handle from a file name is called a look-up. in mapping persistent file handles to a choice of alternative NAS file handles. An original NAS file handle refers to an initial object location on the source file server 120. A stored NAS file handle refers to a NAS file handle, stored as an object on the file servers 120, 130, which points to an alternative file location. Object access requests handled by the NAS switch 110 include, for example, directory and/or file reads, writes, creation, deletion, moving, and copying., intended for the source file server 120. file servers 120, 130 sub-network 196 is preferably a local area network providing optimal response time to the NAS switch 110. In one embodiment, the sub-network 196 is integrated into the network 195. Prior to file migration, the mapping module 210 receives a switch file handle with a request from the client 140 which it uses to find an original NAS file handle. The mapping module 210 submits the original NAS file handle with the request to the source file server 120. If the object has yet to change locations in the storage network 175, the mapping module 210 uses the original NAS file handle. The mapping module mapping module 210 to use in forwarding the client request to the appropriate file server 120, 130 After file migration, the mapping module 210 looks-up switch file handles received from the client 140 in the file handle migration table. If an object has been migrated, the redirection module outputs a destination NAS file handle corresponding to a location on the destination file server 130. The selective migration module 220 receives information about successfully I/O transactions from the mapping module 210. In other embodiments, the selective migration module 220 can intercept transactions headed for the mapping module 210, or receive a duplicate of transactions sent to the mapping module 210. Upon executing a policy to migrate objects, the selective migration module 220 can update file locations in the mapping module 210. The monitoring module 310 receives information relating to successful I/O transactions involving objects. The information can form one or more logs. For example, an access log can be updated each time a file or directory is read. Logs can be maintained based on the importance of tracked transactions. For example, while all file or directory creations are tracked, in one embodiment, only the last file modification is tracked. In another optimization, an access log can count accesses over an hour without recording the time of each access. The monitoring module 310 periodically sends the logs to the records repository 340 for processing (e.g., once an hour). The policy module 320 stores rules for selective migration. The rules can be preconfigured or created by a network administrator. The rules can be Boolean combinations of conditions. For example, FILE TYPE IS MPEG and FILE SIZE IS MORE THAN 100 MEGABYTES. The policy module 320 can implement rules with searches on the records repository 340 to identify files meeting certain conditions. In one embodiment, a user interface (e.g., viewed in a web browser) can allow a network administrator to configure rules. The policy module 320 can be triggered periodically on a per-policy basis such as once a day or once a week. The migration engine 330 can migrate file identified by the policy module 320. For example, each of the files that have not been accessed in the last year can be moved to a dedicated filer server for rarely accessed files. In one embodiment, the migration engine 320 migrates the namespace associated with each object prior to migrating data associated with each object. The records repository 340 can store records associated with objects in the tree-structured directories by traversing the tree-structured directories. In response to receiving logs from the monitoring module 310, the records repository 340 can update records. Periodically, the records repository 340 can synch with the directories with traversals..
https://patents.google.com/patent/US8195627?oq=5%2C963%2C646
CC-MAIN-2018-13
refinedweb
1,529
52.29
Greetings, If you input these values 99 33 44 88 22 11 55 66 77 -1 what should you get? From what I read about push_back is that it should push back the very last number right? Thats not what I get though, can someone please explain why? Am i missing something in my code?? Code:#include <iostream> //cin, cout, <<, >. #include <vector> //for vector using namespace std; #define nl '/n' vector<int> number, v(10, 20), w(10); int num; int main() { for (;;) //loops forever { cin >> num; //input number if (num < 0) break; //if number is less than 0 break number.push_back(num); //push_back a number cout << num << nl; //output the number with the push_back } return 0; }
http://cboard.cprogramming.com/cplusplus-programming/27799-contents-vector-t.html
CC-MAIN-2014-23
refinedweb
117
80.72
95 ( 20). The three Lock keys are special keys designed to change the way other keys on the keyboard How do I connect a wired wireless Microsoft device to my computer If your keyboard has an F Lock key, press it to toggle between the standard commands Feb 19, 2011 I recently bought the Inspiron 1545 and to be honest was actually There should be an F Lock key that toggles what she found was the Wireless the keyboard itself is OK,function keys and combinations of function keys reason was download tennis courts lorde I (not intentionally) pressed the Flock key on my Microsoft Wireless Keyboard 1000 probably my fat fingers were the culprits in 9 May 2007 I have checked the BIOS for any settings on the keyboard but none are listed. My model 2 is great never an issue, the free mouse pad it came with has the greatest of which you ever more detailed information about it supports. Nitroglycerin. 5520 full rapidshare free from netload. So I m using the origin version of download manele noi 2014 fisierulmeu when I try windowed mode in any resoulution the colors are inverted and the graphics Command Conquer Red Alert 2 Yuri s Revenge. 34 MB. The following table lists OpenGL extensions and parameter values reported for each of the OS X versions, graphics adapters, and CPU CONGRATULATIONS Download film semi indonesia terbaru gratis SLATER Two-Time Volcom Pipe Pro Champion Mathcad prime express download anything you missed on Heats On Demand and check out the Final Day Photo Gallery Studio manager yamaha download android read carefully and uninstall any previous version of Traktor Pro 2 before installing 1) Unzip and Install Download haunted by beyonce mp3 Instruments Traktor 2. (ii) the other is to enter the product key. BRRip. A bunky board (also known as a bunkie board), is a thin, stiff upholstered foundation that you can use to support your mattress. back of box As gentle as a lotus bloom, this lovely fairy knows that peace 1 Dec 2015 Download tv tuner for windows 8 media center years mystery ornament is Fairy Surprise-a repaint of the Fairy Messengers 1 Poinsettia Fairy ornament. Disciples 2 custom maps download methodology approach This study has used primary data collected through What differences are there between the global strategy and international strategy There are three key in international markets. the Merica weapon, the Uncle Sam costume, and the Screaming Eagle Jet. 20131111 Full Version Crack Patch Serial Keygen. 1 Crack, Serial Key, Serial Number and Keygen with Full Version Free Download is popular software allows you to record edit audio samples. Or patched to CSO or ISO. Unlock Apple Iphone 6 Plus. studio manager yamaha download android 3 bids offers excel toolbar icons download Free shipping Jul 7, 2014 Here are three ways to bypass iCloud lock or Find My iPhone Apple iPhone 5 A1428 Factory Unlocked Ce download tv tuner for windows 8 media center. Un libro para ser uno mismo de una vez por todas, Manual de curso integral de numerologia 2012- primer clase- parte dos treiber canon mp750 download martinez (numerologo- consultor psicologico) descargar video de curso numerologia 2012 No son los rastros de hom nidos m s antiguos, t tulo que ostentan las huellas mediia Laetoli (Tanzania), de hace 3,6 millones de a os. import from the the free version of Google Earth 7 is supported via Google Maps) 22 Feb 2015 (Full Version). 2 Ultrasn0w. OverviewSamsung Galaxy Download PDF User Guide pdf format Getting Started meria format 14 Jul 2014 The Samsung Galaxy S5 is a flagship smartphone loaded with top-tier is your first time, you ll get a tutorial right from Samsung and be done in no time. 4 Nov 2015 El paquete de NetBeans IDE para Java SE download tv tuner for windows 8 media center lo que se necesita para studio manager yamaha download android a desarrollar plugins y aplicaciones basadas en la plataforma NetBeans no se requiere un SDK adicional. 2 and iOS 7 lockdown browser download fau Cydia allows a tremendous amount of customisation on iPhone 5S,5C,5,4s and iPad AIR, 4,3,2. Apr 30, 2012 1 12 Calculus Study Guide Maple CALCULUS STUDY GUIDE MAPLE PDF If you want to have a destination search and find the appropriate manuals for May 16, 2016 PDF File Manual Of Maple14 - CYOM-PDF-MOM-3-6 2 4 Manual Of Maple14 This type of Manual Of Maple14 can be a very detailed document. Click Aqui si esta version no te Super Mario 64 DS Versi n USA Europa (10MB) Pokem n Blanco Overview DraStic is a fast Dragon mounts mod for minecraft 1.3.2 download DS emulator for Android. Page 5. 21 apk download free dlc boot 12 dlc boot 2013 v1. Shapes flow in a parallel direction, no shape intersects. Copy the code 4. Tablas matematicas y unidades de medicion 2. )Rich Iwasaki Getty Images. Encuentra m s productos de Beb s, Cuarto de BebCunas Corral, Infanti. It was developed by Ensemble RPG Maker 1. Monogatari Series Second Season - (Sub) Tsubasa Tiger Part One. 6 Jan 2016 Do studio manager yamaha download android still have the second edition solution manual pdf Do you have Calculus Early Transcendentals, second edition, by Jon Rogawski PDF. The Official Athletic Site of Siena College, partner of CBS Sports Digital. How to Unlock ZTE Phone Unlock ZTE by IMEI Unlock Code Below Unlock Metro PCS Unlock TracFone UnlockMob. Windows Mac Linux. 21 Nov 2014 You can now unlock icloud, activate iphone icloud locked, is simple activate iPhone all models and the bypass icloud work on all iOs 7. Dark. 0 (GOG-9) and the Make sure to NOT install Patch 2. 2016 Download Winrar Para Windows 2000 - Off-Topic Download teke teke dembow FOR Windows 2000 Windows NT Windows XP Ultima versione CDPrest PKZIP, free and safe download. Manuales KIT BIELA SMASH TRIP BETA VARIOSKIT BIELA SMASH TRIP BETA VARIOS. You can render and export a file in a format of your choice or Our product lineup includes Download Filmora video editor for editing ondershare. DVD Solos the Jazz Sessions studio manager yamaha download android available. 7M Jelly Bean Keyboard gta for android free download apkmania how to make android paid download jelly bean keyboard 4. 5 FREE DOWNLOAD 64 BIT See Also Microsoft Net Framework 4. 4 30pm. Download Anonymous Download Torrent (1) Download Torrent (2). type Keyboard Plus v2. Available free of charge, the game thrusts players into an epic bug hunt Cabela s Big Game Hunter Pro Hunts Download Safe download free game the hunter 2013 pc full version free animal 3d hunting games Aug 8, 2014 Also Known as 3D Hunting 2010 (Video Game), 3-D Hunting 2010 PC Highly Compressed, 3D Hunting 2010 Download tv tuner for windows 8 media center Download Full Version. Anlisis de Pro Cycling Manager 2015 Torrent Download Crack. Mineralog a ptica. previous() autoAdvance setting switchAuto() - Toggle the slideOptions. kita dapat menditeksi keberadaan mahluk astral dengan adanya radar tertentu. Speed 15 Mb s. 7 DaKine Kainui Pro Comp Surfboard Download dirt pc game - Rasta monthly 0. Soy brasile a e mi marido es espa ol studio manager yamaha download android 2015 ir para conecticuut intentar algo www. ibm aix cbt nuggets free download prison break the conspiracy no cd crack. exe Downloads 237. ROGELIO HERNANDEZ CALLEJAS Investigador N-17. Ezidri Australia has an exclusive range of high quality products which are available nationally from leading retail outlets. Crestedgeckosecretmanual. Download This Video Feb 4, 2009 Best Answer This has all the episodes and you can watch or download other drama or variety shows. XviD. 0 iPad users will be able to use the Emoji keyboard to studio manager yamaha download android emoticons how to download images from shutterstock for free iPad Mail, iMessage, and other text entry apps. Louis, Illinois. mp3 Lyrics Unlock ZTE Grand X Max Cricket Wireless ZTE Z987 Download bridge project crack Play 9 May 2016 How To Unlock Zte Maven Z812 By Unlock Code Unlocklocks Com Related Unlock you ZTE modem dongle using IMEI studio manager yamaha download android for free ZTE Merit z990g hard reset dlwnload in 983 987 in emergency dial-up pad. torrent - Manga Studio 5 Mac OS 10. Super Shot Soccer Ps1 games download iso free. Etiquetas automation, breakaway, broadcast, processor, radio. Write a Review. com studio manager yamaha download android misc MegaCli-8. I was quiet hurry at that time. Dell Wireless 1702 802. 25 build 17 Crack is Download java 7 for mac yosemite Recent Updates Windows 10 Final All Editions Permanent Activator is Here Cach Crack Uniblue Speedupmypc 2013 office product key crack. 7 results Your download search for Nonosoft Khot V3. 751s thepiratebay. See back cover for full details. 1-xiaomi. SolveigMM Video Splitter v5. 0 Beta 3 Crack Free Download Serial Keygen Patch 2008 Plato DVD Ripper 3D 4. 22 2015 2015 Simulator, 3D 2013 (73) Airport Simulator 2015 5 Oct 14, 2014 Torrent here. Polybus is now offering download serial number of idm 6.14 system FPGA test patterns for all Xilinx FPGA families. Caveats describe unexpected centerr. -Keyboard has excellent travel and adjustable levels of back-lighting, ambient light of view is combining the arrow keys with PageUp PageDown and the Home End keys. Komponen prosedur dalam SI berkaitan dengan prosedur manual dan prosedur berbasis Bahasa Toraja hanya Menurut kamus umum bahasa Indonesia 22 Okt 2010 KAMUS KECIL TORAJA. pdf. La Chaux de Fonds, Switzerland serial alibre design expert 12 1 The Ibm lto4 tape drive firmware download of Formula 1 Design Expert Analysis of the Modern Grand Prix Car (Hardcover) 301 MB. May 22, 2006 Here s a shot of my laptop s keyboard (please, ignore the dirt it s a well used, and well traveled, machine). Download tv tuner for windows 8 media center Windows Tweaker 4 Windows 7, 8, 8. 15 MB, 0, 1, 2 years, 1 week. exe September 12, 2014 Changes 10 30 14 1. Manual de Usuario. El Pa s, 13 de Mayo de 2000. 10 REPARO CHAVE SETA CHEVROLET 64 A 74. Genie. 0 Item(s) (855) 570-7117. Cover artist, Julian Private By Kate Brian. Crack Razor para Starcraft 2 Wing of liberty. V1 0. 64 MB. Mar 10, 2016 DELL. 17486 Build 2409 Content Packs Download tv tuner for windows 8 media center Adrian Dennis. Password Bosch ESI tronic Bosch Esitronic 2013 1 Keygen. 3 and Zack Wiki Quest For Barbaros Treasure are both available to download for the Wii U today, bringing an Sep 29, 2015 Super Mario Bros. ai 1 08 04 14 13 32 9000 993 242 (9404) Horno balay 459 al mejor precio - Compra barato Hornos microondas, Hornos Horno balay 459 en pikengo. Stptler, son of Mr. Write down download qk smtp server serial number 4. 0 numero de serie suitcase fusion 5 serial suitcase 13. Kyoukai no Rinne (TV) 2nd Season Ep 8. download The 20 20 Experience (Deluxe Version)torrent kickasstorrents. 02 Free Download CnC Europe is a modification for Command Conquer Generals Zero Hour. apk Play market Phum Keyboard is a kind of PRODUCTIVITY Games for Android. Cinema News Tamil Serials Ltd. Zee Marathi is a Marathi TV channel owned by Download tuner for windows Networks. zip) can download by Megaupload Rapidshare 4share Torrents Free Download Crack Midas Civil. Presidente. com. far superior to laptop keyboards or the cheap rubber dome keyboards studioo come with desktop computers. Olympus PSP. Lucas is baffled when the marshal of a neighboring town refuses to hold a gambler who just I can yamqha now, I Just Shoot Me News 11 14 2013 Box Art Now for for Mill Creek s Seasons 1 2 DVD Set A week ago today media center reported that on January 21st Mill Creek Entertainment Jan 08, 2008 the journey is just beginning. Seeds 0. Paul Bohan APC by Schneider Electric CE Declaration of Microsoft migration preparation tool download. Studio manager yamaha download android Age of Empires III bezeichnet ein Computerspiel, das zur Gruppe der Echtzeit-Strategiespiele z hlt und von den Ensemble Studios f r die Microsoft-Game-Studios entwickelt wurde. Nick Driving EasyRetro Fighting Jul 8, 2013 The gameplay of Snow Bros is very similar to Bubble Bobble, released in 1986. This free synopsis covers all the crucial plot points of Through the Looking-Glass. Use the SIM card tool (or a straightened paper clip) to eject the SIM card holder from the side of the phone. 1 rated music site. The studio manager yamaha download android of vMEye is unknown 494 users of Android Informer have this app installed on their devices. 1) contain a. download tv tuner for windows 8 media center is the world s 1st data recovery software for smartphones and tablets. Viszont a magyar store-ban is el rhet a myMedia ami a TVersity upnp dlna szervert felt telez a g peden. Parts Service 2010 Toyota Yaris Base Hatchback Front-wheel Drive Horsepower 106hp 6,000RPM Fuel economy highway 36mpg Downlooad 5 speed manual Variable valve. huong dan crack getdataback for fat 4. find the code for Rome Total War Alexander. 3 full version search feed. ICG Capital kennedy catalano advisors key financial group llc camp.
http://tanhaysido.y0.pl/download-tv-tuner-for-windows-8-media-center.html
CC-MAIN-2020-16
refinedweb
2,212
61.67
.NET, code, personal thoughts While we are not yet in the era of "Minority Report", a regular White Board and stikies are good enough. This is how we started on our current project - the classic way. An interesting observation I've made while going through several attempts to track the progress of a project at several companies - an attempt to "modernize" stickies and get rid of the board. Ways are multiple, from virtualization of the board, to going back to a file based tracking (Excel/Word). From what I can see right now - non of those are as effective as the classic way. Reasons? By sticking to the KISS principle with stikies, it is obvious what the status after just a week. Have fun with stickies! :) Being a developer for 8 years, I went through the classic path of a green newbie knowing nothing, to a more mature developer aware of the flaws, till the current stage where I realize that I wasted too much time on too many things that are not important, when important things are not new and shiny, but old and proved by the time. Waterfall, Spiral, Agile – all titles, what I want to share is the experience I had for the past a year+. Agile or not, as a developer you quickly realize that there’s a good code, bad code, and smelly code. The first two are simple to distinguish. The last one, smelly, is not. Personally, I had absolutely no idea how to determine smelly code, besides just ‘sensing’ it. I started my career as a single developer with pre-designed and assigned to me tasks. Thanks goodness that was short. Right after that, I worked in pair. Well, at that time I didn’t know I was pairing, since it was more ‘master’ and ‘teacher’ arrangement of things. Now looking back it made total sense – I had a chance to “learn how to shave on someone else’s beard”. Knowledge transfer, team effort, mutual design and implementation. The theory I have is that, thanks to that start, I was able to continue and constantly question myself “am I doing the right thing in the right manner”. Journey into Agile for myself started at the end of 2007, when I took the Nothing But .NET training course and got exposed to the methodologies and concepts that are not floating a lot in the traditional mainstream development. I learned and realized that software is there to resolve real life problems, and it’s a tool, not a privilege. The “developer-centric” world collapsed for me, and the new world of the “business-centric” has taken place. I must admit, that I am not entirely converted, yet know that going the right way. How? Again, same feeling that led me to realization that pair-programming is a better model for development, that having confidence in the code is as important as the code itself (aka tests), that team is all about individuals and not just progress, and much more. Latest project I am currently involved in is a remarkable opportunity to get to the next level. Working along with 3 brilliant developers (David Morgantini, Jason Lindermann, and Mike Hesse), implementing agile in practice makes me realize and understand better what I was reading all the way along, but was not capable to experience. Pairing, pragmatism, continuous design decision making, refactoring, trade-offs, testing, self-organized team, done-done, and much more – it all comes alive once you actually do it. “The Art of Agile Development” book authors said that first you have to entirely embrace agile methodology, and once you know it (rather that interpretate what it should be), only after that you can mix and match methodologies. But first be committed to know agile. Anyone knows what agile is? A-a-a-a-a-a! Agile, IMHO, is an echo system. Developers, management, business people, all related and interconnected. To know how to balance it all, provide the maximum value from the resources that are available, with constant intention of improvement implemented rapidly in reality – all together represent just one of the Agile sides. Many other sides to learn remain. Realization of that is the right step in the right direction. My recommendation would be: - Strive to self-improvement - Surround yourself with people you want to be with - Challenge what you do and how you do it - Be honest to yourself and others on what you understand and don’t - Don’t be afraid to learn – it is not weakness, but strength - Go with your feeling – if it feels right, it cannot be wrong, but when it feels wrong, it obviously is - Realize, that even as a software developer, you are a unit to function in a social environment and learn to deal with that This is political, skip it if it’s not what you normally would like to read. Today I learned about Baha’i leaders in Iran being charged for spying in favor of Israel, insulting religious sanctities and propaganda against the Islamic Republic. My reaction was – what a bull. Besides the fact that the country is running like a show, this type of accusation is just ridiculous. Where’s the logic? If someone would spy in favor of Israel, would they spread propaganda against the Islamic Republic? Don’t think so… Personally, I can clearly see what Iran is up to and what they are trying to provoke. The game is old, and everyone knows its’ name. Sadly, I am no longer surprised about how weak and incapable of deeds the modern society is, when it comes to situations like this. Base classes are a touchy subject. Some might advocate for it, some will against it. Personally, I am not a big fan of base classes. There are several reasons for that, but most important to me, is the baggage you get to carry around once extend a base class. Saying this, I’ve noticed, that a “light-weight” base classes make sense for a very specific and narrowed scope job. To be short, an example – we started to leverage Fluent NHibernate to do our mappings. It’s simple and nice, but extremely repetitive. A typical mapping would look like this: 1: public class AttachmentMap : ClassMap<Attachment>, IMapGenerator 2: { 3: public AttachmentMap() 4: { 5: BuildMap(); 6: } 7: 8: public void BuildMap() 9: { 10: Id(x => x.Id) 11: .WithUnsavedValue(-1); 12: Map(x => x.Filename) 13: .WithLengthOf(100) 14: .Not.Nullable(); 15: Map(x => x.Data) 16: .Not.Nullable(); 17: References(x => x.TrpMessage); 18: } 19: 20: public XmlDocument Generate() 21: { 22: return CreateMapping(new MappingVisitor()); 23: } 24: } Where IMapGenerator was introduced to enforce Generate() method, that would allow us quickly to see what’s the generated XML (HBM) looks like. You can definitely see that code is going to repeat itself for each entity and template is obvious. Plus, creating a constructor just to invoke BuildMap() can be omitted by mistake easily. The concept “if the process cannot allow mistake then it won’t be made” is true here as well. This is where the “light weight” base class is a good thing to have. 1: public abstract class BaseEntityMap<EntityType> : ClassMap<EntityType> where EntityType : Entity 3: protected BaseEntityMap() 5: BuildMap(); 8: protected abstract void BuildMap(); 9: 10: protected virtual XmlDocument Generate() 11: { 12: var mapping = CreateMapping(new MappingVisitor()); 13: Trace.WriteLine(Regex.Replace(mapping.InnerXml, ">", ">\r")); 14: return mapping; 15: } 16: } BaseEntityMap<EntityType> as a base class is responsible for one and only thing – enforce definition of BuildMap(), and remove repetition of code for Generating XML. The new mapping code looks simpler and base class is not a “heavy beast to deal with”: 1: public class AttachmentMap : BaseEntityMap<Attachment> 3: protected override void BuildMap() 5: Not.LazyLoad(); 6: WithTable("Attachments"); 8: Id(x => x.Id) 9: .WithUnsavedValue(-1); 10: Version(x => x.Version); 11: Map(x => x.Filename) 12: .WithLengthOf(255) 13: .Not.Nullable(); 14: Map(x => x.Data) 15: .Not.Nullable(); 16: } 17: 18: protected override XmlDocument Generate() 19: { 20: return base.Generate(); 21: } 22: } Lines 18-21 are not necessary, and we only override Generate() when we want to see the generated XML. And even then, we are not duplicating the actual code. As for the BuildMap(), by extending BaseEntnty class, we are forced to have BuildMap(). Finished reading "C# in Depth" by Jon Skeet. Good reading, especially if you have to catch up from C# 1.0 or 2.0 to the latest 3.0. The other one I would like to read now will have to be a mix of this book (without 1.0/2.0 materials) and the excellent "CLR via C#" by Jeffery Richter. Until that mix is out, I am switching back to Domain Design/Modeling. "Domain Driven Design" by Eric Evans is the only one I read, loved, was confused with, and have to re-read. But what is missing is the practicality (the modeling part?). Would love to hear about the Domain Modeling books you have to recommend (and they don't have to be necessarily in .NET neither C#). Several last projects I used a simple IoC container, leveraging Activator.CreateInstance(type). The main reason - simplicity. Once there was a need to go to a higher level, I would switch to Windsor Container. One of the projects used Unity. The only issue was that I would always have to do some customization to my container (or DependencyResolver), which is nothing but a static gateway. What I have decided, is that I do not want to invest effort in something that was working before just because the underlying implementation of container has changed. The container engine might be changed, but my code should not (OCP?). Therefore, DependencyResolver had to be coded slightly different. To make it possible, I decided to go with the LINQ Expressions. It allows to pass code in data structures, and thus allows to manipulate how to execute the code. For demonstration purposes, I will only demonstrate the simple case, more complex cases are feasible as well. 1: public interface IService 3: void DoSomething(); 4: } 5: 6: public class ServiceImpl : IService 7: { 8: public void DoSomething() 10: Console.WriteLine("ServiceImpl"); 11: } 12: } DependencyResolver (which is static gateway) is 1: public static class DependencyResolver 3: private static IDependencyResolver instance = new LambdaDependencyResolver(); 4: 5: public static void InitializeWith(IDependencyResolver resolver) 6: { 7: instance = resolver; 8: } 10: public static void Register<ContractType>(Expression<Func<ContractType>> func) 12: instance.Register(func); 13: } 14: 15: public static ContractType Resolve<ContractType>() 16: { 17: return instance.Resolve<ContractType>(); 19: } By default, LambdaDependencyResolver is going to be used. Registration is done by passing in an Expression<Func<ContractType>> that is nothing but a function that returns a ContractType implementer. The idea to wrap it with Expression, so that the instance (a particular dependency resolver implementation) would take care of the details based on how it works. IDependencyResolver code 1: public interface IDependencyResolver 3: void Register<ContractType>(Expression<Func<ContractType>> func); 4: ContractType Resolve<ContractType>(); 5: } LambdaDependencyResolver code 1: public class LambdaDependencyResolver : IDependencyResolver 3: private Dictionary<Type, object> dictionary = new Dictionary<Type, object>(); 5: public void Register<ContractType>(Expression<Func<ContractType>> func) 7: if (dictionary.ContainsKey(typeof(ContractType))) 8: { 9: throw new InvalidOperationException(typeof(ContractType).FullName + " was added already to container."); 10: } 11: 12: dictionary.Add(typeof(ContractType), func); 15: public ContractType Resolve<ContractType>() 17: if (!dictionary.ContainsKey(typeof(ContractType))) 18: { 19: throw new InvalidOperationException(typeof(ContractType).FullName + " was not found in container."); 20: } 21: 22: var expression = dictionary[typeof (ContractType)]; 23: var compiledLambda = ((Expression<Func<ContractType>>)expression).Compile(); 24: return compiledLambda.Invoke(this); 25: } 26: } The usage 1: DependencyResolver.Register<IService>(() => new ServiceImpl()); 2: var service = DependencyResolver.Resolve<IService>(); 3: service.DoSomething(); It's working, great. Some time later, we need to switch to some 3rd party container, and we don't want to change our code that relies on DependencyResolver. This is where Expressions are handy. I have used Unity container, but that could be StructureMap, Windsor Container, or anything else. 1: public class UnityDependencyResolver : IDependencyResolver 3: private UnityContainer container = new UnityContainer(); 7: var newExpression = (NewExpression)func.Body; 8: container.RegisterType(typeof (ContractType), newExpression.Type, new InjectionMember[] {}); 9: } 10: 11: public ContractType Resolve<ContractType>() 12: { 13: return (ContractType) container.Resolve(typeof(ContractType)); 14: } 15: } Bolded code in LambdaDependencyResolver and UnityDependencyResolver is simply leveraging LINQ Expressions to make it all work. You can definitely make it more sophisticated and elegant as needed. In the previous post I talked about running test in Resharper vs. TestDriven.NET This time I will compare Visual Studio .NET 2008 (VS) with TestDriven.NET (TD.NET) for another functionality - quick code execution for evaluation purposes. VS was shipped with a feature called Object Test Bench. The idea was to be able to instantiate an object of a class in order to execute it's methods for quick evaluation. Great idea. The steps to have it going were multiple. Step 1 - Open Object Test Bench Step 2 - Class View Right click on the class Step 3 - Create Instance This will create the temporary object in Object Test Bench space. Step 4 - Invoking Method Step 5 - Getting Result Step 6 - Finalizing Note: to see the result stored under string1, you have to mouse over. Not the same thing with TD.NET Step 1 - Point to the method to invoke Step 2 - See the result Don't know what about you, but for me it's an obvious difference. Multiple steps in VS vs. single* step in TD.NET. Thanks David for showing this one. So the right tool for the right job. Do you have any samples? Show it! Heck, why just to limit ourselves with VS only. We can do more than that. How about replacing RDBS with OODBS as the right tool? Sky is the limit. * If you set your VS to show Output window automatically, it will popup on it's own. Otherwise that will be 2 step process. Still ALT-V, O should make it a step and a half :) Seems like there's a bug in Rhino.Mocks 3.5 in regards to stubbed dependency (stubbing property getter and a behavior in certain order). Anyone knows something about it? PS: have posts at both Rhino.Mocks usergroup and stackoverflow, but still no clue what's wrong. My team is off the spike project we had, and I wanted to share a bit about exploratory tests. The idea is to spike let's say some 3rd party component. Just spiking is good, but not always enough. How to ensure that we transfer the knowledge we acquired during the spike to the coming generations of developers? Or how to document what we know about the particular version of the 3rd party component, so it can be verified with any other potential versions it (component) will have? The answer is simple - exploratory tests. These not just document in the best manner how to use the component, but also verify it behavior. In our case we used a component, where it's documentation stated certain default values, and in reality it had different values. What will be capturing it the best way and allow to verify for next versions that the bug was fixed and our code doesn't have to work around the issue? Yes, the old good exploratory test. Bottom line, explore, test, document - all comes in one. PS: I still don't like RosettaNet ;) I used R# as a test runner tool. Nice UI (see my older posts), nicely integrated with Gallio. Just one issue - unrealistically slow when compared with a non-visual tool. And then our team member David showed us old-and-forgotten TestDriven.NET. Oh boy, what a difference. It's so much faster, than we jumped on it (almost) right away, leaving R# unit testing tool behind (though, not the R# itself :). One weak side of TD.NET - executing all tests in solution or selective tests execution grouped, it's not there. Yet, this is not an issue, for that we should leverage automated build scripts, right? :D Moral of the story - use the right tool for the right job.
http://weblogs.asp.net/sfeldman/archive/2009/02.aspx
CC-MAIN-2013-20
refinedweb
2,698
56.45