- The meaning of the -tt option read: "Like -tt, but raises an error rather than a warning"; it now reads: "Like -t, but raises an error rather than a warning".
- p. 45: "from future import division" now reads: "from __future__ import division".
- "j is less than i" now reads: "j is less than or equal to i".
- "L[i:i]=['a','b'] inserts the items 'a' and 'b' after item i in L" now reads: "L[i:i]=['a','b'] inserts the items 'a' and 'b' before item i in L".
- The last sentence ended with "..., while L*=n has the effect of adding n copies of L to the end of L." It now reads: "..., while L *= n has the effect of adding n-1 copies of L to the end of L."
- "x[1] = 42  # x is now [1,42,2,3]" now reads: "x[1] = 42  # x is now [1,42,3,4]".
- "else expression: statement(s)" now reads: "else: statement(s)".
- "def percent2(a, b, c): #needs 2.2 or 'from future import'" now reads: "def percent2(a, b, c): #needs 2.2 or 'from __future__ import'".
- "def make_adder_2(augend): # needs 2.2 or 'from future import'" now reads: "def make_adder_2(augend): # needs 2.2 or 'from __future__ import'".
- "print x.e, x.d, x.c, x.b. x.a" now reads: "print x.e, x.d, x.c, x.b, x.a".
- The missing "to" in "A bound method is similar TO an unbound method, ..." has now been added.
- The code samples used the variable "heigth", which is now correctly spelled as "height".
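The corrected slice-insertion and in-place repetition semantics from the errata above can be checked interactively; the behavior is unchanged in modern Python:

```python
L = ['a', 'b', 'c']
L[1:1] = ['x', 'y']   # inserts 'x' and 'y' BEFORE item 1, as the corrected text says
assert L == ['a', 'x', 'y', 'b', 'c']

M = [1, 2]
M *= 3                # M *= n appends n-1 copies of the original M
assert M == [1, 2, 1, 2, 1, 2]
```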
- The code:

      class OptimizedRectangle(object):
          __slots__ = 'width', 'height'

  __slots__ cannot usefully be added by inheritance in this way, so the example has been rewritten to avoid inheritance and instead copy the whole Rectangle class, as given at the top of page 85, under the new name OptimizedRectangle:

      class OptimizedRectangle(object):
          __slots__ = 'width', 'height'
          def __init__(self, width, height):
              self.width = width
              self.height = height
          def getArea(self):
              return self.width * self.height
          area = property(getArea, doc="area of the rectangle")

- "__metaclass_ = type" now reads: "__metaclass__ = type".
- In the passage "Since type(object) is type, a class C that inherits from object (or some other built-in type) gets the same metaclass as object (i.e., type(C), C's metaclass, is also type) Thus, being a new-style class is synonymous with having type as the metaclass.", "Thus, being..." is the start of a second sentence; a period has been added to the end of the sentence "Since type(object)...".
- "except InvalidAttribute, err" now reads: "except InvalidAttribute, err:".
- Where it said "For more on buffer, see Chapter 13", the text now reads: "The buffer built-in is now deprecated."
- "if re.search(..." now reads: "if digatend.search(...".
- The bottom line of text was missing in printings after 9/03. The missing line was: "want to print only words that are followed by whitespace (not". This has been corrected.
- The last paragraph of the description of map states: "When func is None, map returns a list of tuples ..." This is only true if map is given at least two sequences to map; if it is given a single sequence, it returns the sequence itself, which can cause serious errors in programs that expect tuples to be returned. Note from the author or editor: on p. 164, under map, 2nd paragraph, the text that read "When func is None, map returns a list of tuples, each with n items (one item from each iterable); this is" is changed to read "When func is None, map returns a list of tuples, each with n items (one item from each iterable) when n>1; this is", and a further sentence is added to the same paragraph (which now ends "the longer ones."): "When func is None and n is 1, map returns a list of the items of seq (while zip would return a list of tuples with just one item each)."
- The os.path module functions table is missing the "expanduser" function and its description. (See, for example, the documentation at the following URL:)
- "For example, os.path.basename('b/c/d.e') returns 'b/c'" now reads: "For example, os.path.dirname('b/c/d.e') returns 'b/c'".
- "but calling lstat on a file that does not support seeking ..." should be: "but calling lseek on a file that does not support seeking ...".
- To make the example work the same way for both the direct and indirect method, the line:

      zinfo.compress_type = zipfile.ZIP_DEFLATED

  has been added after the line:

      zinfo = zipfile.ZipInfo(name, time.localtime()[:6])

- The passage "When languages is None, translation looks in the environment for the lang to use, like install. However, languages can also be a list of one or more lang names separated by colons (:), in which case translation uses the first of these names for which it finds a .mo file." now reads: "When languages is None, translation looks in the environment for the lang to use, like install: it examines, in order, environment variables LANGUAGE, LC_ALL, LC_MESSAGES, LANG; the first non-empty one is split on ':' to give a list of lang names (for example, 'de:en' would be split to give ['de','en']). When not None, languages must be a list of one or more lang names (for example, ['de','en']). Translation uses the first lang name in the list for which it finds an .mo file."
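The corrected gettext behavior — languages as a list of names, not a colon-separated string — can be exercised directly; with fallback=True the call degrades to a NullTranslations when no .mo file is found (the 'myapp' domain and 'locale' directory here are illustrative):

```python
import gettext

# languages must be a list such as ['de', 'en'], not 'de:en';
# fallback=True avoids an exception when no catalog exists.
t = gettext.translation('myapp', localedir='locale',
                        languages=['de', 'en'], fallback=True)
_ = t.gettext
print(_('Hello'))   # no catalog found, so the text passes through unchanged
```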
- "gettext.translation(domain, languages=lang)" now reads: "gettext.translation(domain, languages=[lang])".
- The passage "in releases of Python older than the ones covered in this book, unpickling from an untrusted data source was a security risk ... No such weaknesses are known in Python 2.1 and later." is no longer true; the 2nd edition says: "Note that unpickling from an untrusted data source is a security risk; an attacker could exploit this to execute arbitrary code. Don't unpickle untrusted data!"
- "x.close()" now reads: "x.cursor()".
- Classes ExternalInterfacing and Serializer (snippets on pages 284-285) are not thread-safe as shown in the book. Change the snippets as follows. Add the following auxiliary class, to be used in both snippets, and useful in its own right:

      class PoolOfQueues:
          # thread-safe, because a list's pop and append methods are atomic
          def __init__(self):
              self.pool = []
          def get_a_queue_from_pool(self):
              try:
                  return self.pool.pop()
              except IndexError:
                  return Queue.Queue()
          def return_a_queue_to_pool(self, q):
              self.pool.append(q)

      queues = PoolOfQueues()

  Change class ExternalInterfacing to:

      class ExternalInterfacing(Threading.Thread):
          def __init__(self, externalCallable, **kwds):
              Threading.Thread.__init__(self, **kwds)
              self.setDaemon(1)
              self.externalCallable = externalCallable
              self.workRequestQueue = Queue.Queue()
              self.start()
          def request(self, *args, **kwds):
              "called by other threads as externalCallable would be"
              q = queues.get_a_queue_from_pool()
              self.workRequestQueue.put((q, args, kwds))
              try:
                  return q.get()
              finally:
                  queues.return_a_queue_to_pool(q)
          def run(self):
              while 1:
                  q, args, kwds = self.workRequestQueue.get()
                  q.put(self.externalCallable(*args, **kwds))

  Change class Serializer to:

      class Serializer(Threading.Thread):
          def __init__(self, **kwds):
              Threading.Thread.__init__(self, **kwds)
              self.setDaemon(1)
              self.workRequestQueue = Queue.Queue()
              self.start()
          def apply(self, callable, *args, **kwds):
              "called by other threads as callable would be"
              q = queues.get_a_queue_from_pool()
              self.workRequestQueue.put((q, callable, args, kwds))
              try:
                  return q.get()
              finally:
                  queues.return_a_queue_to_pool(q)
          def run(self):
              while 1:
                  q, callable, args, kwds = self.workRequestQueue.get()
                  q.put(callable(*args, **kwds))

- "import Threading, Queue" now reads: "import threading, Queue".
- "import Threading" now reads: "import threading".
- The code:

      import os
      os.spawnv(os.p_WAIT, editor, [textfile])

  now reads:

      import os
      os.spawnv(os.p_WAIT, editor, [editor, textfile])

  and the last sentence now reads: "The first item of the argument args is passed to the program being spawned as 'the name under which the program is being invoked'. Most programs don't look at this, so you can place any string there. Just in case the editor program does look at this special first argument, passing the same string editor that is used as the second argument to os.spawnv is the simplest and most effective approach."
- Under tofile, "Note that f should be open for reading in binary mode, for example with mode 'rb'." now reads: "Note that f should be open for writing in binary mode, for example with mode 'wb'."
- "a[0,2:4)" now reads: "a[0,2:4]".
- Where it said "an array with rank of one less than a and of the same size as a", the text now reads: "an array with rank 1 and of the same size as a".
- Where it said "just like array(a,copy=False).flat", the text now reads: "just like array(a,copy=(not a.iscontiguous())).flat".
- The URL has changed and now reads:
- In the example to display GIF images, the line img.config(image=gifsdict[imgname]) is now indented as follows:

      def list_entry_clicked(*ignore):
          imgname = L.get(L.curselection()[0])
          img.config(image=gifsdict[imgname])

- In "An Entry instance with state=DISABLED is a good way...", "state=DISABLED" should be replaced by "state='readonly'" for the remainder of the page; state=DISABLED doesn't allow you to select an Entry's text or copy it to the clipboard. Note from the author or editor: on pp. 416 and 417 (not 337), change all occurrences of DISABLED (four in all) to 'readonly' (lowercase and with single quotes around it). So, for example, state=DISABLED must become state='readonly', and so on.
- "toggle c.deselect()" now reads: "toggle c.toggle()".
- The entry for 'iconify' showed 'deiconify' in the example: "Iconify T.deiconify()" now reads: "Iconify T.iconify()".
- The following two lines have been added to the end of the script:

      root.config(menu=bar)
      Tkinter.mainloop()

- The code:

      def makeshow(menu):
          def emit(entry, menu=menu):
              print menu, entry
          return emit

  now reads:

      def mkshow(menu, entry):
          def emit():
              print menu, entry
          return emit

  and the first two lines at the top of page 347, "and use command=mkshow('File') and command=mkshow('Edit'), respectively, in the calls to the add_command methods of fil and edi.", now read: "and use command=mkshow('File', x) and command=mkshow('Edit', x), respectively, in the calls to the add_command methods of fil and edi."
- Logical error: "j = CY*(y-miny)/(maxy-miny)" now reads: "j = CY - CY*(y-miny)/(maxy-miny)".
- In the snippet:

      try:
          x.f()
      except AttributeError:
          sys.stderr.write('x is type %s, (%r) '%(x,type(x)))

  that 3rd line now reads:

      sys.stderr.write('x is type %s, (%r) '%(type(x), x))

- Omitted nlst method. The nlst method should be included in its normal alphabetical order (i.e., after mkd but before pwd), with the text (in the usual mixture of fonts and typefaces used for the other methods): "nlst  f.nlst(pathname='.')  Sends an NLST command to the FTP server, asking for the names of the files in the directory named by pathname (by default, the current directory), and returns the list of the filenames." (As usual, the rarely used, non-portable optional extra arguments are omitted.)
- "...supplies an attribute s.server that..." now reads: "...supplies an attribute s.system that...".
- The lines:

      s.server.listMethods()
      s.server.methodSignature(name)
      s.server.methodHelp(name)

  now read:

      s.system.listMethods()
      s.system.methodSignature(name)
      s.system.methodHelp(name)

- Where it said "ntohl  htonl(i32)", the text now reads: "ntohl  ntohl(i32)"; and where it said "ntohs  htons(i32)", the text now reads: "ntohs  ntohs(i32)".
- The example trivial HTTP server was missing the two lines that instantiate and start up the server. After the line "do_HEAD = do_POST = do_GET", the following two lines have been added:

      server = BaseHTTPServer.HTTPServer(('',80), TrivialHTTPRequestHandler)
      server.serve_forever()

- The lines:

      try:
          ous.remove(x)
      except ValueError:
          pass
      x.close()

  now read:

      try:
          ous.remove(x)
      except ValueError:
          pass
      x.close()
      ins.remove(x)

- "MIMEImage  class MIMEAudio(_imagedata,..." now reads: "MIMEImage  class MIMEImage(_imagedata,...".
- "quoteattr  escape(data,entities={})" now reads: "quoteattr  quoteattr(data,entities={})".
- Pages 507, 508, and 511: the following warning note has been added on page 511: "The code examples given on pages 507, 508 and 511 may crash under some versions of Python and Windows (not Python 2.3, and not Linux versions of Python). To fix your version of Python so that it is able to run the examples, visit URL: and download and run the appropriate self-installing .EXE, for example currently PyXML-0.8.2.win32-py2.2.exe if you run Python 2.2 on any version of Microsoft Windows. This self-installing EXE will add to your Python installation the latest version of the 'XML subsystem' of Python and, as a side effect, fix any bugs connected to XML handling that may have been diagnosed after the release of your version of Python."
- There are two arrays declared as "static char merge_docs[]"; the second one (page 536) now reads: "static char mergenew_docs[] = ".
- By default, sdist does not include scripts or data files, even if they are specifically mentioned in the setup.py script. The default includes only "all Python source files implied by the py_modules and packages options". To have sdist include scripts or data files, you have to create a MANIFEST.in.
- The Installer website now reads:
- The index entries for "*" and "**" should be augmented with "define extra arguments, 60" and "pass extra arguments, 64".
- There is no entry in the index for the concept "frozen", mentioned on page 121 just before and after the heading "Searching the Filesystem for a Module". Worse, there is no explanation in the book of what "frozen modules" are; a line or two on page 121 would have been nice.
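For reference, the MANIFEST.in that the sdist erratum above calls for uses the standard distutils template commands (include, recursive-include, and so on); the file names here are hypothetical:

```
# Hypothetical example: force sdist to ship scripts and data files
# that py_modules/packages alone would not pick up.
include scripts/frobnicate.py
include data/defaults.cfg
recursive-include docs *.txt
```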
http://www.oreilly.com/catalog/errata.csp?isbn=9780596001889
Are All Types First-Class?

In other words, can you have an object in Magpie that represents the type Int | Bool?

Answer: Yes. Things like generics will need to internally store their type parameters. For example:

    class List[E]
        items E[]
    end

That type parameter should be useful at the type level, for things like:

    def List[E] add(item E)
        // ...
    end

But it should also be directly available in the same way that you can do typeof(T) in C#:

    def List[E] displayItemType()
        print(E string)
    end

This is fine if you only instantiate generics with class type arguments. But it's perfectly valid to also do:

    var optionalInts = List[Int | Nothing] new

This implies that Int | Nothing must itself resolve to a first-class object in Magpie. This is good because (as of 8/19) that's the current path the implementation is taking. It just makes some stuff harder, because type checking has to bounce between Java and Magpie more than we'd like. This is also good in that it follows the goal of making everything the interpreter knows also available to Magpie.
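As a rough analogy (not Magpie code, and not how Magpie's interpreter actually works), a union type like Int | Nothing can be modeled as an ordinary runtime object — the names and methods below are invented for the sketch:

```python
class UnionType:
    """A hypothetical first-class union type: an ordinary object
    that can answer membership questions and print itself."""
    def __init__(self, *members):
        self.members = members
    def __contains__(self, value):
        return any(isinstance(value, m) for m in self.members)
    def __str__(self):
        return " | ".join(m.__name__ for m in self.members)

# The analogue of List[Int | Nothing]'s type argument:
IntOrNothing = UnionType(int, type(None))
assert 3 in IntOrNothing
assert None in IntOrNothing
assert "x" not in IntOrNothing
print(IntOrNothing)   # the type itself is a value you can pass around and print
```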
http://magpie.stuffwithstuff.com/design-questions/are-all-types-first-class.html
Applies to: SharePoint Server 2010, PerformancePoint Services
Topic Last Modified: 2010-10-28

This glossary contains terminology for all the servers, applications, and tools that are part of PerformancePoint Services in Microsoft SharePoint Server 2010.

ADOMD.NET
  See Other Term: ActiveX Data Objects MultiDimensional.NET

aggregate
  A single value that is composed of multiple values. For KPIs, aggregation determines the mathematical formula that is used to combine members (SUM, AVERAGE, and so on).

analytic chart
  A report type that displays cube data in a visual, interactive chart format.

analytic grid
  A report type that displays cube data in a visual, interactive tabular format.

average weighted
  A type of rollup. It indicates an average that considers the weighted value of all child KPIs. This rollup reflects overall performance.

banding
  The use of bands to represent ranges of performance based on thresholds.

banding settings
  A setting that defines thresholds, or boundaries between changes in indicator status.

bulk edit
  A procedure by which a user can simultaneously change specified properties for a group of selected items.

cache interval
  The length of time that a stored copy of the views that are shown in a dashboard can exist on the server.

calculated metric
  A metric that is based on the result of an expression, rather than originating from a data source.

Calculated Metrics
  A feature that enables users to create simple calculations by using one or more KPI values. This reduces the amount of MDX that is required to create complex scorecards.

centered indicator
  The indicator set that can be used when the "closer to target" scoring pattern is selected when you edit the band settings.

collapse
  An analysis tool, similar to drill up, that displays less detail about an item while maintaining the current display of other items.

cube
  A set of data that is organized and summarized into a multidimensional structure that is defined by a set of dimensions and measures.
custom table
  A type of filter that enables users to choose from a list and then drive dashboard content from multiple data sources.

customized scorecard
  A scorecard that is created without using a template.

dashboard
  A related group of interactive scorecard and report views that are organized together in a SharePoint or Web-hosted site. Dashboard elements may share common filters that control the data displayed within the elements.

Dashboard Designer
  See Other Term: PerformancePoint Dashboard Designer

data source formatting
  A type of conditional formatting that is configured within Microsoft SQL Server Analysis Services. It causes individual cells to be highlighted when certain conditions, based on business rules, are met. For example, if Gross Margin is over 28%, the cell is shown with a green background. Known as "OLAP Server Formatting" in Excel.

Decomposition Tree
  A data visualization tool that helps users analyze complex information using a hierarchical scheme. The Decomposition Tree can help users find the root cause that is driving a value. For example, if sales in Canada are poor in March, the Decomposition Tree will help determine whether that is being driven by a particular product line, a certain sales territory, or an increase in returns.

deploy
  To create a SharePoint page from a PerformancePoint dashboard so that users can access it.

dimension
  A structural attribute of a cube that organizes data into levels. For example, a Geography dimension might include the members Country, Region, State or Province, and City.

display item
  A folder into which attributes, measures, calculated members, KPIs, and PerformancePoint items can be organized to facilitate browsing by users.

filter

Filter Web Part
  A feature that enables users to modify dashboard views by changing the subset of data that is displayed in reports or scorecards.

fixed value
  A user-entered value or a static value from an Excel spreadsheet that does not change unless manually altered by the user. Contrast this with dynamic values that are queried from a SQL Server database, OLAP cube, or other data source.

formatting dimension
  The dimension (usually "Measures") that numeric formatting is applied to. Some cubes use a Scenario dimension that is more desirable for numeric formatting.

import
  To bring information from one system or program into another. The system or program receiving the data must somehow support the internal format or structure of the data.

Internet Protocol Security (IPsec)
  A set of industry-standard, cryptography-based protection services and protocols. IPsec protects all protocols in the TCP/IP protocol suite and Internet communications by using Layer Two Tunneling Protocol (L2TP).

IPsec
  See Other Term: Internet Protocol Security (IPsec)

key performance indicator (KPI)

KPI
  See Other Term: key performance indicator (KPI)

KPI details report
  A report type that shows information about a KPI that is selected in a scorecard.

level
  A degree of granularity in hierarchies. Levels enable you to view data in various degrees of detail. The Time dimension, for example, can have the levels Year, Quarter, Month, and Day.

list
  A Web site component that stores and displays information that users can add to by using their browsers. For PerformancePoint Services, the site requires a Web server that is running Microsoft SharePoint Server.

member
  A single position or item in a dimension. The Account dimension, for example, could have a dimension member called Travel Expenses. Dimension members can be user-defined or predefined and can have properties associated with them. See dimension member property for details.

metric
  An actual or target value of a KPI. A metric may be used as a scorecard element.

MSXML
  See Other Term: Microsoft XML Core Services (MSXML)

multidimensional expression (MDX)
  A language for querying and manipulating data in multidimensional objects (OLAP cubes).
named set
  A grouping of dimension members or items from a data source that are named and treated as a single unit and can be referenced or reused multiple times.

OLAP
  See Other Term: online analytical processing (OLAP)

OLAP cube
  See Other Term: cube

parameter
  A field defined on a dashboard item that can receive data from a filter control. Data supplied to this field modifies content that is displayed in the dashboard scorecard or report.

PerformancePoint Content List
  A list that stores the elements that are used to construct a PerformancePoint dashboard. A PerformancePoint dashboard is a related group of interactive scorecards, filters, and report views that are organized together into a set of Web pages.

PerformancePoint Dashboard Designer
  A client application that you use to create and manage dashboards, scorecards, reports, and other PerformancePoint items prior to deploying them within a dashboard to a SharePoint site.

PerformancePoint Data Connections Library
  A SharePoint document library that may contain Office Data Connections (ODC), Universal Data Connection (UDC) files, and PerformancePoint data connections. Data connections identify a source of business data that may include cubes or perspectives that are based on online analytical processing (OLAP) cubes, relational databases, CSV files, Microsoft Excel Services spreadsheets, or other data sources.

PerformancePoint PowerShell Cmdlet
  A task-oriented command that is used in the Windows PowerShell environment; or, a small, basic command, named in the form "verb-noun" and implemented as a .NET class that derives from a base cmdlet class. Examples: get-process, new-drive.

PerformancePoint Server
  See Other Term: PerformancePoint Services in Microsoft SharePoint Server

PerformancePoint Service
  The SharePoint service application that enables dashboarding capabilities by means of scorecards, analytic grids and charts, and other decision-making tools for the enterprise.
PerformancePoint Service application
  The PerformancePoint component that runs as a SharePoint shared service. It is one of many service applications that plug into SharePoint. Other examples include Excel Services, Search Service, and Visio Graphics Service.

PerformancePoint Service application proxy
  The PerformancePoint Web Front End (WFE) service interface. It abstracts the communication layer between WFE components and the service application.

PerformancePoint Services

PerformancePoint Services Central Administration
  A collection of SharePoint administration pages that the administrator can use to configure PerformancePoint Services for SharePoint.

PerformancePoint Services in Microsoft SharePoint Server
  A collection of services for Microsoft SharePoint Server that enables users to monitor organizational goals, to analyze performance information through up-to-date content and context-rich dashboards and scorecards, and to use that information to make business decisions. These capabilities were formerly part of the PerformancePoint Server product.

PerformancePoint Settings Database
  A PerformancePoint-specific database that stores the annotations for each dashboard element, user-based filter selections, and other information about dashboard elements.

PerformancePoint trusted location
  A location within Microsoft SharePoint Server from which dashboard content can run. The default is to trust all locations. The PerformancePoint Services administrator must change it to make it more restrictive.

PerformancePoint Web Parts
  Functionality in PerformancePoint Services that makes it possible to display dashboard views that are defined in Dashboard Designer to users of a SharePoint site.

PerformancePoint Web Services
  A collection of Web services that determines how PerformancePoint Services operates.

pie chart
  A round chart that shows the size of items in a single data series, proportional to the sum of the items.
refresh
  The activity of synchronizing dashboards and dashboard elements between the local workspace and PerformancePoint Services in Microsoft SharePoint Server.

relational database
  A database or database management system that stores information in tables. In conducting searches, a relational database matches information from a field in one table with information in a corresponding field of another table to produce a third table that combines requested data from both tables.

report
  A visual display of data in a dashboard that can be coordinated with other report views by using filters. Reports include analytic grids and charts, KPI details, Excel Services spreadsheets, SQL Server Reporting Services reports, strategy maps, ProClarity Analytics Server reports, and Web pages.

report view group
  Reports that are grouped together in a single dashboard zone. These reports can be conditionally shown, based on the selected KPI.

Report Web Part
  A feature that allows users to view and interact with reports that are created in PerformancePoint Dashboard Designer.

Reporting Services report
  A report type that acts as a wrapper for a SQL Server Reporting Services report so that the Reporting Services report can be displayed in a PerformancePoint dashboard.

ribbon
  Part of the Microsoft Office Fluent user interface (UI). In Dashboard Designer, it consists of contextual tools for creating, editing, and exporting dashboards and their elements.

rollup
  The calculated value of a KPI that is derived from the aggregated scores of child or descendant KPIs.

Scorecard Web Part
  A feature that enables users to view and interact with scorecards that are created in PerformancePoint Dashboard Designer.

SharePoint Products and Technologies
  See Other Term: Microsoft SharePoint Products and Technologies

Stack Selector Web Part
  A feature that enables users to show more than one view in a single location on a dashboard, and provides a control to switch between them.
standard indicator
  An indicator set that can be used when either the "increasing is better" or "decreasing is better" scoring pattern is selected when you edit the band settings.

strategy map
  A performance management tool for visually presenting an organization's or organizational unit's objectives and goals, their groupings of objectives and goals, and their mappings of objectives and goals to themes, initiatives, KPIs, targets, business processes, and action plans. Each item in the visualization contains a set of metadata, which itself is customizable.

target
  As one aspect of a KPI, a target is the desired level of performance with respect to a specific business goal or strategy. Actual values are evaluated against the target to determine KPI score and status.

time dimension
  A dimension that breaks time down into levels such as Year, Quarter, Month, and Day. In Analysis Services, a special type of dimension created from a date/time column.

time formula
  An expression that is created following the Simple Time Period Syntax. It takes the form of a time unit plus or minus a whole number, such as Year-1 or Month-6. It is the formula that is applied when you are using time intelligence on a dashboard.

time intelligence filter
  A dynamic dashboard filter that can be linked to scorecards and reports so that they will update automatically relative to the current time.

unattended service account
  The security account that is used when a data source is configured to connect as a single shared identity for all users.

update
  To retrieve the most current data from the data source that is associated with a scorecard.

validate
  To ensure that all data sources that are used by a KPI or scorecard are available.

variance
  The difference between two values, such as the difference between estimated and actual expenses.

workspace
  The user interface area in Dashboard Designer that contains the dashboard items that are being edited by a user. The contents of the workspace can be saved as a file with a .ddwx extension.

worst indicator
  A type of rollup of child KPIs. It moves the indicator of the lowest performing KPI to the objective KPI row.

WYSIWYG
  See Other Term: what you see is what you get (WYSIWYG)
https://technet.microsoft.com/en-us/library/bb838764.aspx
Install Snort_inline on your firewall to contain intrusions, or to stop them as they're happening.

Wouldn't it be nice if your NIDS could not only detect intrusions, but also do something about them? It would be nice if it could actually stop the intrusion occurring on the host that was being attacked, but the next best thing would be to block the network traffic that's propagating the attack. One tool that can do this for you is Snort_inline (). Snort_inline is a patch to Snort that modifies it to read data from the Linux kernel's Netfilter queue, which allows Snort to effectively integrate itself with the firewall. This allows it not only to detect intrusions, but to decide whether to drop packets or to forward them to another host (using Libnet). This of course requires that your kernel be compiled with IP queue support, either statically or as a module. You can see if you have the module by running a command like this:

    $ locate ip_queue.o
    /usr/src/linux-2.4.20-8/net/ipv4/netfilter/ip_queue.o
    /usr/src/linux-2.4.20-8/net/ipv4/netfilter/.ip_queue.o.flags
    /lib/modules/2.4.20-8/kernel/net/ipv4/netfilter/ip_queue.o

In this case, you can see that the module is available by looking at the last line of the output. If it doesn't exist, you can check whether the file /proc/net/ip_queue exists. If you can't find the module but that file exists, IP queue support is compiled into your kernel statically. If neither file exists, you'll need to enable IP queue support in your kernel and recompile.

In addition to requiring IP queue support, Snort_inline also needs libipq. This is a library that comes with Netfilter and is used by applications to communicate with Netfilter's queue. You can check whether it's installed on your system by running this command:

    $ locate libipq
    /usr/include/libipq.h
    /lib/libipq.a

If you don't see output similar to this, chances are that you don't have libipq installed.
You can install it by downloading the iptables source from the Netfilter distribution site (). For instructions on compiling it, refer to [Hack #41]. After compilation is finished, run make install-dev, since libipq is not installed by default.

In addition to those libraries, you'll also need the Libnet packet injection library (). To install Libnet, simply download the source distribution, unpack it, and then run ./configure && make install as root.

Now that all the prerequisites are out of the way, you can compile Snort_inline. First download and unpack the source distribution, and then change to the directory that it creates. Then run this command:

    $ ./configure --enable-inline && make

You can also use any configure options that you'd normally use with Snort, since at its heart Snort_inline is still Snort. Don't be alarmed if your compile aborts with the following error:

    gcc -DHAVE_CONFIG_H -I. -I. -I../.. -I../.. -I../../src -I../../src/sfutil
      -I/usr/include/pcap -I../../src/output-plugins -I../../src/detection-plugins
      -I../../src/preprocessors -I../../src/preprocessors/flow
      -I../../src/preprocessors/portscan -I../../src/preprocessors/flow/int-snort
      -I../../src/preprocessors/HttpInspect/include -I/usr/include/pcre
      -I/usr/local/include -g -O2 -Wall -DGIDS -D_BSD_SOURCE -D__BSD_SOURCE
      -D__FAVOR_BSD -DHAVE_NET_ETHERNET_H -DLIBNET_LIL_ENDIAN
      -c `test -f 'spo_alert_fast.c' || echo './'`spo_alert_fast.c
    `/home/andrew/snort_inline-2.1.0/src/output-plugins'
    make[2]: *** [all-recursive] Error 1
    make[2]: Leaving directory `/home/andrew/snort_inline-2.1.0/src'
    make[1]: *** [all-recursive] Error 1
    make[1]: Leaving directory `/home/andrew/snort_inline-2.1.0'
    make: *** [all] Error 2

This is caused by /usr/include/linux/netfilter_ipv4/ip_queue.h including /usr/include/linux/if.h instead of /usr/include/net/if.h (a problem that the author encountered while writing this).
You can fix this by simply editing ip_queue.h and changing this line near the top of the file:

#include <linux/if.h>

to this:

#include <net/if.h>

You can then restart the compilation from where it left off by simply typing make, or, if you're paranoid, you can use this command to completely start over:

$ make clean && make

After compilation has finished, become root and type make install. You can now configure Snort_inline just as you would configure Snort regularly. However, it's recommended that you run a separate instance of Snort if you want alerting and use Snort_inline solely for setting firewall rules. In addition to modifying Snort to capture packets from Netfilter rather than libpcap, the Snort_inline patch also adds three new rule types, as well as a new rule option. The new rule types are drop, sdrop, and reject. The drop rule type will drop the packet that triggered the rule without notifying the sending host, much like the iptables DROP target, and will log that it has done so. The sdrop rule type is similar, except that packets are silently dropped with no log entry to tell you that it occurred. Using the reject rule type will block the offending packet, but will notify the sending host with either a TCP RST or an ICMP port unreachable message, depending on whether the packet that triggered the rule used the TCP or UDP protocol, respectively. The new rule option added by Snort_inline allows you to replace arbitrary content within a packet with whatever you choose. The only restriction is that the replacement byte stream must be the same length as the original. This is implemented with the replace rule option and is used in conjunction with the content rule option to select what is to be replaced. To run Snort_inline, start it just as you would start Snort. Snort_inline does add a new command-line switch, though: -Q tells it to use IP queues rather than libpcap to gather packets.
So, you'll need to use this option if you want to use it in inline mode. The only thing left to do before actually running it in inline mode is to configure the kernel to send the packets to the IP queues. This is done with the iptables command:

# iptables -F
# iptables -A INPUT -j QUEUE
# iptables -A OUTPUT -j QUEUE
# iptables -A FORWARD -j QUEUE

This will push all traffic going in, out, and through the machine into an IP queue from which Snort_inline will read its packets. You can then start snort_inline as you would Snort (just don't forget to use the -Q option):

# snort_inline -Qvc /etc/snort/snort_inline.conf

If you're administering the machine remotely, you'll probably want to start snort_inline before enabling the QUEUE targets, since it's snort_inline that will actually pass the packets back and forth. Otherwise, your remote logins will be dropped as soon as you put the iptables rules in place. If you're particularly paranoid, have your QUEUE target rules ignore packets coming from a certain IP address or range of addresses.
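To make the new rule types concrete, here is a sketch of what such rules might look like. These are illustrative examples written for this description, not rules shipped with Snort_inline: the message strings, SIDs, and content patterns are invented, and the replace example keeps the replacement the same length as the original, as required.

```
# Drop the matching packet (no notice to the sender), but log it:
drop tcp $EXTERNAL_NET any -> $HOME_NET 80 \
    (msg:"illustrative exploit attempt"; content:"/bin/sh"; sid:1000001;)

# Drop silently, with no log entry at all:
sdrop tcp $EXTERNAL_NET any -> $HOME_NET 80 \
    (msg:"illustrative silent drop"; content:"/bin/sh"; sid:1000002;)

# Block and answer with a TCP RST (ICMP port unreachable for UDP rules):
reject tcp $EXTERNAL_NET any -> $HOME_NET 22 \
    (msg:"illustrative reject"; content:"SSH-9.9"; sid:1000003;)

# Rewrite the payload in transit; "/ben/sh" is the same length as "/bin/sh":
alert tcp $EXTERNAL_NET any -> $HOME_NET 80 \
    (msg:"illustrative defang"; content:"/bin/sh"; replace:"/ben/sh"; sid:1000004;)
```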
http://etutorials.org/Networking/network+security+hacks/Chapter+7.+Network+Intrusion+Detection/Hack+87+Prevent+and+Contain+Intrusions+with+Snort_inline/
Redux Initial Setup - index.js (3:20) with Guil Hernandez and Beau Palmquist

In this video, we'll create an index.js file that will act as the entry point for the application. We'll also turn the app.js file from the React Basics course into a Scoreboard component that will be exported as a module.

New Concept
ESLint: A tool that analyzes JavaScript for syntactical, logical, and conventional issues based on an .eslintrc file. It defines what rules should be used to evaluate the code. This process of analyzing JavaScript is known as linting.

Further Reading

- 0:00 In the previous video, we installed redux and
- 0:03 react-redux npm packages for our project.
- 0:06 So in this video, we're going to add an index.js file that
- 0:09 will act as our entry point for the application.
- 0:12 But we're also going to turn the App.js file from the React Basics
- 0:15 course into a Scoreboard component that will be exported as a module.
- 0:20 Then we'll fire up the application and
- 0:21 demonstrate that the scoreboard application still works.
- 0:24 So let's start by adding a new file to our application called index.js.
- 0:30 In the new file, add the following imports,
- 0:35 import React from 'react'.
- 0:38 Import { render } from 'react-dom'.
- 0:46 And import Scoreboard from './Scoreboard'.
- 0:55 Next let's render the Scoreboard component using the render method like so.
- 1:12 Currently our application will not run because we don't have a Scoreboard
- 1:15 component.
- 1:16 So let's remedy this by renaming this App.js file to Scoreboard.js.
- 1:26 Then open up Scoreboard.js and
- 1:29 import React. And
- 1:34 at the top of the file, you'll also see a const Application assignment.
- 1:39 So let's change Application to Scoreboard.
- 1:46 And finally, to export the component,
- 1:50 at the very bottom of the file, add export default Scoreboard.
- 1:58 Before we start the dev server,
- 2:00 let's make sure that all of our additional npm packages have been installed.
- 2:04 To do this, bring up your terminal or console,
- 2:07 make sure you're in the project directory for this course, then run npm install.
- 2:14 This will ensure that all the packages we need have been installed.
- 2:17 Now type npm start to launch the app.
- 2:21 Now some of you may have noticed here in the console output
- 2:24 that a prestart script was executed before our dev server was launched.
- 2:29 The prestart script is the ESLint step I mentioned earlier that will scan our
- 2:33 JavaScript code for errors and report anything it finds in the terminal or
- 2:37 console window.
- 2:38 Once the server has started up, open up a browser and
- 2:40 navigate to localhost:8080 and you'll see that the Scoreboard application from
- 2:45 the React Basics course still works, as you would expect.
- 2:48 You can add players, remove players,
- 2:52 adjust scores, and start and stop the stopwatch.
- 3:03 Now that we've taken care of the project setup, the next step is to
- 3:06 take the existing scoreboard app and break it into discrete components and modules.
- 3:11 This will make working with Redux much easier and will be a good step towards
- 3:15 isolating code and responsibilities within our application moving forward.
- 3:19 See you in the next stage.
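The entry point dictated in the video can be sketched as follows. This is a reconstruction from the transcript, not the course's verbatim code: the 'root' mount-point id and the webpack/Babel toolchain implied by npm start are assumptions.

```javascript
// index.js -- entry point for the application (sketch)
import React from 'react';
import { render } from 'react-dom';
import Scoreboard from './Scoreboard';

// Mount the Scoreboard component; the 'root' element id is an assumption
// about the course's index.html.
render(<Scoreboard />, document.getElementById('root'));
```

Scoreboard.js is then the former App.js with the const renamed from Application to Scoreboard and `export default Scoreboard;` added at the very bottom of the file.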
https://teamtreehouse.com/library/redux-initial-setup-indexjs
Opened 6 years ago Closed 5 years ago

#11280 closed bug (fixed) Intermittent no USB on boot

Attachments (50) Change History (143)

comment:1 by , 6 years ago

comment:2 by , 6 years ago

Does your mouse work in x86 Haiku? Please open separate tickets for network problem and missing USB devices.

comment:3 by , 6 years ago

comment:4 by , 6 years ago

comment:5 by , 6 years ago

follow-up: 7 comment:6 by , 6 years ago

Been having very similar symptoms on this Asus system for a few weeks. Seems to occur on cold boots, not warm boots. Not easy accessing stuff without a working mouse :-), but I managed to check the following:
- unlike the above posted syslog, the syslog here shows USB traffic in the last few lines, but no errors, just "device added" type lines.
- managed to launch a Terminal and type top and I got the following: the offending team is input_server and the thread is something like PathMonitor loop
Is it the same for you vidrep? Otherwise I will file a different ticket. Could be yet another case of an uninitialized variable in input_server or some such... (I had other boot-dependent trouble with input_server the past couple years). EDIT: the thread is a BLooper instantiated here, presumably from AddOnManager.cpp:220; analyzing further seems quite non-trivial.

follow-up: 10 comment:7.

comment:8 by , 6 years ago

It happened again this morning on hrev47901 x86_64. This time, the screen "tore" before the desktop appeared (see image0182), only the USB keyboard was non-functional, and one CPU core was at 100% (see image0184). I was logging the serial output at the time, which I have attached (HP_dc5750_serial). I'll attach the syslog on the next reboot.

comment:9 by , 6 years ago

The same symptoms as described earlier also apply to the 32-bit build as of hrev47912, including tearing of the boot screen and both CPUs indicating 100% in Process Controller.

comment:10.
Looks like jessica and diver granted that wish with #11049 :-) (if I'm following that ticket correctly)

comment:11 by , 6 years ago

kdebug> threads 234
thread      id  state    wait for  object      cpu  pri  stack       team  name
0x82a00000  234 waiting  cvar      0xd2dda6a0  -    20   0xd1ff7000  234   input_server
0xd301ed40  240 waiting  cvar      0xd2ffb748  -    103  0x81d2b000  234   _input_server_event_loop_
0xd301e4a0  241 waiting  sem       1351        -    10   0x81d33000  234   AddOnMonitor
0xd301d7b0  244 ready    -         -                1    0x81d3f000  234   PathMonitor looper
0xd301cf10  246 waiting  cvar      0xd2fffe64  -    104  0x81d65000  234   Tablet Tablet 1 watcher

kdebug> bt 244
stack trace for thread 244 "PathMonitor looper"
    kernel stack: 0x81d3f000 to 0x81d43000
      user stack: 0x7a678000 to 0x7a6b8000
frame               caller     <image>:function + offset
 0 81d42e4c (+ 224) 80094b37   <kernel_x86> reschedule(int32: 2) + 0x1007
 1 81d42f2c (+  48) 80094be1   <kernel_x86> scheduler_reschedule + 0x61
 2 81d42f5c (+  64) 801438f1   <kernel_x86> x86_hardware_interrupt + 0xe1
 3 81d42f9c (+  12) 80136ace   <kernel_x86> int_bottom_user + 0x73
user iframe at 0x81d42fa8 (end = 0x81d43000)
 eax 0x2  ebx 0xa027f4  ecx 0x18417e08  edx 0x7a6b7744
 esi 0x18417de8  edi 0x7a6b7744  ebp 0x7a6b76c8  esp 0x81d42fdc
 eip 0x90f8ce  eflags 0x13246  user esp 0x7a6b76b0
 vector: 0xfb, error code: 0x0
 4 81d42fa8 (+   0) 0090f8ce   <libbe.so> node_ref<0x7a6b7744>::__eq(node_ref: 0x18417e08, node_ref&: 0x183e789e) + 0x16
 5 7a6b76c8 (+  48) 00917be5   <libbe.so> _GLOBAL_.N._home_builder_builds_haiku_src_kits_storage_PathMonitor.cppCMHPCe::PathHandler<0x184434f8>::_GetAncestor(_GLOBAL_.N._home_builder_builds_haiku_src_kits_storage_PathMonitor.cppCMHPCe::PathHandler: 0x7a6b7744, node_ref&: 0x17) + 0x4d
 6 7a6b76f8 (+ 224) 00917079   <libbe.so> _GLOBAL_.N._home_builder_builds_haiku_src_kits_storage_PathMonitor.cppCMHPCe::PathHandler<0x184434f8>::_EntryCreated(BPrivate::NotOwningEntryRef&: 0x7a6b78b8, node_ref&: 0x7a6b78ac, true) + 0x179
 7 7a6b77d8 (+ 240) 009157fa   <libbe.so> _GLOBAL_.N._home_builder_builds_haiku_src_kits_storage_PathMonitor.cppCMHPCe::PathHandler<0x184434f8>::_EntryCreated(BMessage*: 0x183e3b50) + 0x1d2
 8 7a6b78c8 (+  48) 0091497a   <libbe.so> _GLOBAL_.N._home_builder_builds_haiku_src_kits_storage_PathMonitor.cppCMHPCe::PathHandler<0x184434f8>::MessageReceived(BMessage*: 0x183e3b50) + 0x62
 9 7a6b78f8 (+  48) 007e7b23   <libbe.so> BLooper<0x184435c8>::DispatchMessage(BMessage*: 0x183e3b50, BHandler*: 0x184434f8) + 0x5b
10 7a6b7928 (+  64) 007e93cd   <libbe.so> BLooper<0x184435c8>::task_looper(0x0) + 0x211
11 7a6b7968 (+  48) 007e8fbb   <libbe.so> BLooper<0x184435c8>::_task0_(NULL) + 0x3f
12 7a6b7998 (+  48) 01455feb   <libroot.so> _get_next_team_info (nearest) + 0x5f
13 7a6b79c8 (+   0) 613ef250   <commpage> commpage_thread_exit + 0x00

comment:12 by , 6 years ago

This is still an issue with the latest revision (hrev48147 x86_64). First boot after updating with "pkgman update", system freeze and 100% on one cpu core. It is also happening with 32-bit builds.

follow-up: 14 comment.

comment.

I have three Haiku partitions on my hard drive - alpha 4.1, dev, and x86_64. Each is booted separately from the bootman menu. These are not mounted at boot time. I only mount the other partitions sometimes when I need to copy a file.

comment:15 by , 6 years ago

comment:16 by , 5 years ago

Still there in hrev48958; I take back what I said about cold boots though: this time it occurred on a warm boot. The USB mouse still worked, only the USB keyboard refused to work, 100% CPU usage on one core ..etc. They're both connected through a hub, and the hub to the desktop, if that matters. Infrequent, hard to reproduce bug here.

comment:17 by , 5 years ago

Still here in 48971; no mouse and one cpu core 100%; had to reboot 5 times before I had both keyboard and mouse. I'll attach a syslog after a few more tries.

comment:18 by , 5 years ago

I couldn't get x86_gcc2 to do it again, but my x86_64 partition did it right away. Attached are the syslog and previous syslog.
hrev48982 x86_64

comment:19 by , 5 years ago

comment:20 by , 5 years ago

It's occurring right now on my desktop, keeping it open in debugger in case someone wants me to try out a command? I tried to step.. step.. step.. for a while in debugger (as well as Run/Debug/Run a dozen times), and we never ever get out of GetAncestor(); I'm always either in node_ref::eq() or in GetAncestor(). So I would theorize that..
- BOpenHashTable<AncestorHashDefinition>::Lookup() is inlined inside GetAncestor()
- the infinite loop culprit is inside (i.e. it's not caused by a continuous stream of B_ENTRY_CREATED messages)
- thus the "root", so to speak, of the stoppage is either this call or this call, and the actual guilty party (a loop that never exits) would be that one. Plausible?
EDIT: the registers for the argument to node_ref::eq() always have the same value, so if I read the registers right, it seems the line slot = _Link(slot); does not change the value of slot, i.e. slot is linked to itself, so we never reach a slot that would make us break.

comment:21 by , 5 years ago

A slot being linked to itself shouldn't happen, so if this is the case, it would mean the hash table is corrupt. This part is behaving like a linked list. Does enabling the tracing in PathMonitor.cpp reveal anything interesting? In particular, is the PathHandler class used properly (no use after deletion, etc)?

comment:22 by , 5 years ago

comment:23 by , 5 years ago

I'm stuck. 1) First wanted to start nice with a 'live' restart of input_server (instead of rebooting); but when I invoke /system/servers/input_server -q nothing happens whatsoever. Does it work for others? Here the mouse movement does not get frozen for even a fraction of a second, and nothing gets output to syslog, hence I believe input_server is refusing to restart. (EDIT: also tried in an old 45943 and there it works...
half of the time; the other half I get "locked out" of the system as neither keyboard nor mouse work any more; point is, input_server used to kinda sorta restart, but now it does not any more) 2) Next I gave up on the live restart for now and tried to replace/override the system's instance, see what would occur after rebooting, but there is still nothing output to Terminal; here's what I did:
- black-listed the system instance
- made a copy of the system instance of input_server, hoping it would get picked up by signature (it does, see below).
- looked in syslog, no sign whatsoever of input_server logging, even though there is a libbe.so built correctly next to the server (see below)

~/Desktop> ps
Team                                                             Id  #Threads  Gid  Uid
kernel_team                                                       1        47    0    0
(..)
/boot/system/servers/app_server                                 399        48    0    0
/boot/system/servers/syslog_daemon                              418         2    0    0
/boot/home/Desktop/hrev_input-server-CUSTDEV/_test_sandbox/inpu 425        10    0    0

~/Desktop> cd hrev_input-server-CUSTDEV/_test_sandbox
~/Desktop/hrev_input-server-CUSTDEV/_test_sandbox> ll
total 260
-r-xr-xr-x 1 user root 253202 Apr 16 05:31 input_server
drwxr-xr-x 1 user root   2048 Apr 19 18:54 lib

~/Desktop/hrev_input-server-CUSTDEV/_test_sandbox> strings lib/libbe.so | grep BPath
(..)
BPathMonitor: BPathMonitor::StartWatching(%s, %lx)
BPathMonitor: BPathMonitor::StopWatching(%s)
BPathMonitor: Create PathMonitor locker
BPathMonitor: Start PathMonitor looper
Q38BPrivate12BPathMonitor18BWatchingInterface
~/Desktop/hrev_input-server-CUSTDEV/_test_sandbox>

Any idea? Note -- the bug seems to no longer occur now that I upgraded to 49041, went back lurking in the shadows :-/ But getting help on the above would still be useful for the day when it comes back.

comment:24 by , 5 years ago

This might be resolved by hrev49058. Please retest if you were able to reproduce this in some way. Regarding restarting of the input_server, the code handling "-q" seems very questionable.
However, you can just use hey to quit the input_server ("hey input_server quit"). It may block waiting for the mouse to move to wake up the listener thread; in that case just move the mouse. Once it quits, the app_server should immediately restart it and you might not even notice that it was gone. You will see the original main thread of the quit input_server in the reply printed by hey, and you can check that the running input_server has a different main thread/team id to be sure. PS: If you edit your comments, no email notification will go out for that change. For people staying in the bugtracker loop purely by reading the mailing list (like me), there's a very high chance of these edits not being noticed. Therefore please avoid adding anything relevant to tickets via comment edits (style cleanup and typos are fine for edits).

comment:25 by , 5 years ago

comment:26 by , 5 years ago

comment:27 by , 5 years ago

comment:28 by , 5 years ago

comment:29 by , 5 years ago

FWIW, I haven't seen a pegging PathMonitor looper for many weeks now. I used to get it approx. on every 5th bootup...

comment:30 by , 5 years ago

comment:31 by , 5 years ago

comment:32 by , 5 years ago

follow-up: 34 comment:33 by , 5 years ago

hrev49337 x86_gcc2 required two reboots today before I got a working mouse. Syslog (attached). hrev47479 x86_gcc2 has been booted several times per day without any appearance of the issue discussed. I would like to obtain a copy of hrev47483 to confirm or disprove my belief that the usb commits introduced in hrev47481 and hrev47483 are in fact the source of the bug.

comment:34 by , 5 years ago

I would like to obtain a copy of hrev47483 to confirm or disprove my belief that the usb commits introduced in hrev47481 and hrev47483 are in fact the source of the bug.

Since these changes only really concern xHCI, which is not part of the image, I rather doubt that they are responsible.
The only change to the overall stack is the addition of the controller_cookie argument to the Hub class, which defaults to the same default as the one in the Device class and should therefore make no logical difference to the existing code.

comment:35 by , 5 years ago

comment:36 by , 5 years ago

This is probably the single most annoying bug in Haiku right now, due to the need to reboot again (as many as six times on one occasion). I'm trying to nail down a range of when the bug first appeared. It may take a while due to the intermittent nature of the problem. My recollection was that I first saw it sometime in mid-August 2014, during which time many changes were made to USB and the kernel scheduler.

follow-up: 38 comment:37 by , 5 years ago

I have seen the problem on hrev47479 three times while testing today, including on consecutive reboots. So, obviously it predates that build. Is there a repository where older builds can be downloaded? I have attached the syslogs from that testing session in case they may provide useful information.

comment:38 by , 5 years ago

Is there a repository where older builds can be downloaded?

On the page of the nightly build, there's a link to older images at the bottom.

comment:39 by , 5 years ago

comment:40 by , 5 years ago

I have narrowed it down a little further. The x86_gcc4 hybrid archive allowed me to test hrev47458, which froze on the first boot after installation to my hard drive, and on 2 of 4 boots immediately thereafter. Hrev47380 continues to boot time and again without issue. So, the range is somewhere between hrev47380 and hrev47458.

comment:41 by , 5 years ago

Happened again... hrev49413 x86_gcc2. Another syslog attached. previous_syslog says: hda: buffer_exchange: Error waiting for playback buffer to finish (Interrupted system call)! ??? Strange, as it froze on a cold boot. hrev47380 never did freeze after 2 weeks of testing. Revision range remains between 47380 and 47458.
I cannot narrow it down further, unless more builds are made available.

comment:42 by , 5 years ago

comment:43 by , 5 years ago

The problem is still present in hrev49627. Sometimes multiple boot attempts are required to get a functional mouse or keyboard. The problem also appears to have evolved somewhat. Previously it only happened at boot. Now it intermittently happens after the system has been up and running - one CPU core will be pegged at 100%, however keyboard and mouse are still working. Under these conditions I see the CPU cores pegging at 100%, alternating in random fashion in Pulse.

comment:44 by , 5 years ago

I have noted that this bug more often than not shows up on the reboot after an update, i.e. pkgman -> update or pkgman -> full-sync. Yesterday, for example, I had to reboot 5 times before I had a mouse. Keyboard always seems to work. If I drop into KDL is there anything you would suggest I try?

comment:45 by , 5 years ago

I had it happen again today. I dropped into KDL and tried syslog | tail (photo attached). Maybe the output will give a clue, maybe not.

comment:46 by , 5 years ago

comment:47 by , 5 years ago

Today when it happened I invoked KDL and ran a few backtraces. See attached PathMonitor_serial_log.

comment:48 by , 5 years ago

It happened on consecutive boots. I have attached a syslog with KDL from the second session.

comment:49 by , 5 years ago

PathMonitor_serial_log.txt contains the bt of possibly the wrong "path monitor looper" thread (there are several), but syslog.8 has some interesting stuff: Its backtrace locates the input_server PathMonitorLooper in the same place as with diver in comment:11 and also same as me in comment:20, that is to say, in GetAncestor(). Also, said "GetAncestor()" call gets interrupted and the last function call in the stack is to process_pending_ici(). Might be the same "ICI" that bugs vidrep in his other ticket, or just a coincidence? (I dunno what ICI is).
Will try a couple more things in off-site emails with vidrep

comment:50 by , 5 years ago

Would it be possible to have an input_server debug build of Haiku made available for testing? Instructions on how to do it myself would be welcome.

comment:51 by , 5 years ago

Here's a tentative outline for others to flesh out if I missed something:
- Get the source (if not already downloaded previously): create a folder and from within it, run git clone git://git.haiku-os.org/haiku (if on the other hand you already had the source, just run "git pull" to make sure it's up to date).
- configure the build of the input_server component: locate the file "haiku/build/jam/UserBuildConfig" and add this line inside: SetConfigVar DEBUG : HAIKU_TOP src servers input_server : 3 : global ;, keeping it exactly like this (don't forget the white spaces anywhere). If that doesn't work for you (didn't for me), instead locate the file "haiku/src/servers/input/Jamfile" and add the DEBUG variable as a line around third position such that it looks like this:

SubDir HAIKU_TOP src servers input ;
SetSubDirSupportedPlatformsBeOSCompatible ;
DEBUG = 1 ;

- run jam input_server; check that it produces its output in "haiku/generated/objects/haiku/x86_gcc2/debug_1/...." and NOT in "haiku/generated/objects/haiku/x86_gcc2/release/...".
Once you successfully have a debug build of input_server, we'll look into having it executed in place of the normal one; shouldn't be trouble as that one can be picked by signature, so we probably won't need to set you up with an .hpkg file ..etc.

follow-up: 56 comment:52 by , 5 years ago.

comment:53 by , 5 years ago

I did everything as per the above instructions. With the "UserBuildConfig" script it creates the input server, but in the "haiku/generated/objects/haiku/x86_gcc2/release/..." directory. Doing it the other way with the "Jamfile" results in the following error:

...failed C++ /boot/home/haiku/generated/objects/haiku/x86_gcc2/debug_1/servers/input/BottomlineWindow.o ...
BUILD FAILURE:
...failed updating 4 target(s)...
...skipped 1 target(s)...
~/haiku>

comment:54 by , 5 years ago

What is the gcc error (above the "failed C++...." line) exactly? What do the first few lines of the Jamfile look like? Quote it in here and we'll check whether the DEBUG = 1 ; line includes white spaces ..etc as needed (if you want you can do "clean formatting" quoting by clicking the 5th icon from the left of the "Bold Italics Anchor ..." row and pasting the quote between the created curly braces).

comment:55 by , 5 years ago

Well what do I know! :-) The gcc error (emailed off-site) is not a configuration problem but an actual problem in the code: all the errors occur between #ifdef DEBUG and #endif statements, so it's very probably a case of debug code rot, no mystery. Those tend to get less attention than release for obvious reasons. I assume vidrep should file another ticket titled "input_server does not build with DEBUG=1" as I've seen that happen before in the ticket timeline for other components. Once that other ticket is resolved we can come back to this one. EDIT: #12419 has been filed

comment:56 by , 5 years ago

Replying to pulkomandy:

I did as instructed but the debug build of input_server failed. I have attached a copy of the build log, jamfile and UserBuildConfig files that were used. Perhaps somebody can fix them if they are incorrect, and attach them to the ticket for download. Thanks.

comment:57 by , 5 years ago

comment:58 by , 5 years ago

After doing a git pull and modifying that line in the UserBuildConfig, it now creates a debug build of input_server in the haiku/generated/objects/x86_gcc2/debug_1/ directory. What is the next step to take? Pulkomandy? ttcoder?

comment:59 by , 5 years ago

It should be possible to put the generated input_server in /boot/system/non-packaged/servers/; it should be picked up by the system at the next boot.
Then, when you can reproduce the problem, attach Debugger to the input_server and save a debug report (if you can get either keyboard or mouse working, this should be possible). We can then look at where exactly the input_server is stuck. If you need someone to connect to the machine to help analyze things, you can set up an ssh server:
- Set a password using the "passwd" command for your user
- Make sure "ssh server" is set to "on" in the network preferences
- Configure your network (modem/router) so TCP port 22 is routed from the outside to your machine
- Now the machine is open (password protected) to the internet, and anyone with the login and password can connect to it and get a terminal session.
From there it's possible to inspect the situation, and extract the relevant data.

comment:60 by , 5 years ago

Just out of curiosity; is there anything in PVS or Coverity that could point to a problem in the input server? When this problem occurs after a boot, the mouse is always disabled. The keyboard usually continues to work. From there I can invoke kernel debugging to execute commands and capture the output from a serial port.

comment:61 by , 5 years ago

You can run a terminal (use the "menu" key to open deskbar, navigate to terminal) and do "Debugger -s --team nnn" where nnn is the team ID of the input server (you can find it using "ps | grep input_server" for example - it is the second column there). This will save a report on the desktop. You can also use "ps -a | grep PathMonitor" to see the path monitors; most likely one of them will not be in the "wait" state. This would be the one hogging the CPU, so you can use "Debugger -s --thread nnn" where nnn is the thread identifier of that thread (again, second column in ps output). I had a look at PVS results () and a search for input_server only leads to some add-ons (CommandActuators and TabletDevice), not the server itself.
comment:62 by , 5 years ago

I followed your instructions about moving the debug input_server into a non-packaged servers directory. Getting the problem to manifest itself didn't take long (it happened after the second boot attempt). I ran the commands in your instructions and created a pair of debug reports (attached). In the second instance, it caused a complete lockup of the system after executing the command. If the attached information is insufficient, let me know what further steps to take.

comment:63 by , 5 years ago

The first report is a Debugger crash and should be filed as a separate ticket :) The first line of the second debug report tells us that Haiku loaded /system/servers/input_server instead of the one you put into the non-packaged directory, so it looks like that trick didn't work.

comment:64 by , 5 years ago

I'm guessing you should "blacklist" the system input_server. If that makes the system become unbootable/unusable for some reason, you can revert by booting to another partition and restoring things anyway. So create a /system/settings/packages file, with the normal structure (it's listed on haiku-os.org IIRC), and add a line to it that looks like this: servers/input_server

comment:65 by , 5 years ago

I blacklisted the input server and am apparently using the debug build. When attempting a reboot it hangs with an error message, "The application "input_server" might be blocked on a modal panel."

comment:66 by , 5 years ago

I was logging to a serial port the second time around when the problem manifested itself. I have attached the serial log and a debug report.

comment:67 by , 5 years ago

The .report mentions /boot/home/input_server (so you've put it there instead of non-packaged? should work just as well anyway), so if that kind of behavior always occurs at reboot time, it'd mean the bug is now reproducible directly at reboot time, which is kinda good news.
No dump of variable states in the .report, but the serial logging does contain output, like

CALLED void InputServer::_DispatchEvents(EventList &)
CALLED status_t InputServer::_DispatchEvent(BMessage *)

..etc, so it might be possible to find some useful info in there

comment:68 by , 5 years ago

I'm not sure, but maybe you also need to enable debug for "SetConfigVar DEBUG : HAIKU_TOP src kits storage : 1 : global ;" in UserBuildConfig? Then you need to jam -q libbe.so and place it into /system/non-packaged/lib

comment:69 by , 5 years ago

The strange thing is that I copied the input_server into /boot/system/non-packaged/servers/, but after blacklisting the input_server, it didn't find that one on reboot, but instead found the one in my home directory.

comment:70 by , 5 years ago

Was it built with debug info enabled?

comment:71 by , 5 years ago

Yes, exactly as described above: DEBUG=1. The UserBuildConfig is attached to the ticket. The only change since it was attached is the error noted in comment 57.

comment:72 by , 5 years ago

I'd give comment:68 a try :) This would require replacing libbe.so though. Maybe it would be enough to put it in /system/non-packaged/lib.

comment:73 by , 5 years ago

Attached is another debug report. It appears somewhat different.

comment:74 by , 5 years ago

Weird, there is no PathMonitor looper thread, which is the one we need.

comment:75 by , 5 years ago

WRT how the input_server is found at boot: the launch daemon takes care of it, and does a search by application signature. This means the first app matching the signature, anywhere on your boot disk, will be used; in your case it was a copy in the home directory, apparently. Now that you know how to build Haiku, a useful thing to try is using git bisect to further narrow down which commit created the problem. From your Haiku source directory, run:

git bisect start
git bisect good hrev47380
git bisect bad hrev47458

This will check out a version of Haiku in between these two.
You can then build it (run jam with the usual options) and boot the generated build. If you hit the problem, come back to your source dir and tell git:

git bisect bad

If the tested revision works, enter:

git bisect good

This will allow you to pinpoint the exact commit that broke things. Git will even tell you at each step how many builds you still have to try. --- The debug builds of libbe.so and src/kits/storage are a good idea too (as suggested above). It is important to use the second command (to get a debug report for the path monitor thread specifically). And yes, since doing so stops the input server, it's expected that it appears to freeze the system (keyboard and mouse will stop working). If you have ssh access to the machine from another computer, you could use that to restart the input_server at that point. After a reboot (you can shut down the machine cleanly by pressing the power button), you will still get the debug report on the desktop, even if the system appears frozen.

comment:76 by , 5 years ago

Attached are a pair of debug reports I generated last night before calling it a day. Whether they're helpful or not is up to you guys. Later today I'll try doing the git bisect as instructed in comment 75. Why did I open my big mouth and say I'd be willing to do the legwork?????

comment:77 by , 5 years ago

comment:78 by , 5 years ago

I tried the git bisect, but when I try to build the test image it fails: jam -q haiku-anyboot-image. Using this same command works fine on the latest nightly. Has there been a change that would cause the older builds to fail? I'll post the build log later today after I get home from work.

comment:79 by , 5 years ago

I'd still go with comment:68 first as it sounds way easier.

comment:80 by , 5 years ago

I created a debug build of libbe.so as per the instructions. The problem manifested itself after the second boot (as usual). Attached is a syslog.
follow-up: 82 comment:81 by , 5 years ago Happened again today using the debug build of libbe.so and input_server. Attached is a listimage showing that the debug builds were loaded. Created a pair of debug reports as per comment 61 (attached) comment:82 by , 5 years ago Created a pair of debug reports as per comment 61 (attached) The second one just has all threads running, so isn't really helpful. From the first one the cause of the high CPU use becomes obvious: The AncestorMap becomes corrupted as one of the contained Ancestors has a hash next link pointing to itself, causing an endless loop in Lookup(). Why this happens is harder to figure out. Since it's only reproducible irregularly in your case and not at all for others, it is probably timing or setup dependent. Possibly a race condition due to missing locking. I've looked through the PathMonitor code but nothing obvious jumped out. Enabling TRACE_PATH_MONITOR in PathMonitor.cpp might give some info (output to the serial log) although the TRACE statements are scarce. Removing the // in front of that line and rebuilding libbe should enable that. comment:83 by , 5 years ago mmlr, I did as instructed and rebuilt libbe.so a second time. I have attached a debug report using the new lib. I think you're mistaken in assuming that this issue is reproducible irregularly and that I am the only one experiencing this problem. In my case, I see this issue on at least 50% of boots. Not just on one PC, but several PCs I have tested in the past year. comment:84 by , 5 years ago TRACE_PATH_MONITOR outputs to the serial log. comment:85 by , 5 years ago Syslog(s) attached. Thanks. comment:86 by , 5 years ago Well, irregularly in the sense of it not being reproducible 100% of the time, i.e. there's some kind of variable component that decides if it happens or not. Unfortunately the logs you attached last do not contain any BPathMonitor trace output.
Are you sure you rebuilt libbe.so, replaced the previous copy in non-packaged and booted with the one from the Haiku hpkg blacklisted? There should be output lines starting with "BPathMonitor:" (a lot of them). comment:87 by , 5 years ago So, just to be sure... I have to go to haiku/src/kits/storage/PathMonitor.cpp and delete the two slashes in line 38 #define TRACE_PATH_MONITOR, add the line "SetConfigVar DEBUG : HAIKU_TOP src kits storage : 1 : global ;" in UserBuildConfig, then build libbe.so using the command jam -q libbe.so, then move the new lib into /system/non-packaged/lib and reboot, making sure to blacklist the lib at the boot menu. Correct? I'll give it another try again tonight. Is there anything I should look for, to be sure the output to the syslog is what we want? I have my PC set up for serial logging on a Windows PC. I'll capture that output as well. Thanks guys for walking me through this whole schmozzle. comment:88 by , 5 years ago Yes, pretty much. Blacklisting the original file of course (system/lib/libbe.so), not the one you put in non-packaged. You don't strictly need to set the debug flag, as this tracing doesn't depend on it when uncommented by removing the slashes, but it doesn't hurt either. As stated above, there should be a lot of lines starting with "BPathMonitor:" when the change took effect. comment:89 by , 5 years ago I rebuilt libbe.so and it now appears to be working. First, I experienced the CPU pegging issue, and captured the serial output on a Windows PC (11052015.txt attached). Second, I ran the debugger and created a report (attached). Third, the system refused to shut down (ticket #12306). Maybe we'll get lucky and find the cause of both issues. Let me know if any more steps can be taken at my end. comment:90 by , 5 years ago Thanks for the logs. Unfortunately they were not quite as helpful as I hoped for. I came up with a synthetic test case for the BPathMonitor which produced other data structure related crashes.
These are fixed in hrev49756. The hash table corruption seen here might very well have been caused by the same problem. Please check if said revision fixes the problem for you. If hrev49756 doesn't fix the problem, I'll prepare a patch that adds earlier detection of the corruption of the hash table, which would then hopefully shed some light on the producer of the corruption and not just its victim. comment:91 by , 5 years ago So far, so good. 40+ warm/cold boots without a problem. Let's keep the ticket open to see if the issue manifests itself again over the long term. comment:92 by , 5 years ago It's been almost 1 week and no sign of the problem. Let's assume this is fixed. Thanks. comment:93 by , 5 years ago Thanks for reporting and helping troubleshoot this! Changing component to USB, but I didn't spot any USB-related errors in your syslog. Could you check what your CPU is busy with?
https://dev.haiku-os.org/ticket/11280
mock_data 1.2.8

Generate random data using Dart. Can be used to create random strings, integers, names, colors, IPs, UUIDs, URLs and dates.

Features #

API provides generation of:
- Integers in any range of numbers
- Strings with various characters and length
- Colors represented with the needed color space
- Dates between different moments in time
- Names such as John or Mary
- UUIDv4 and Timestamp-first UUIDs
- URLs with many fragments
- IPv4 and IPv6

Usage #

A simple usage example:

import 'package:mock_data/mock_data.dart';

main() {
  mockName();         // Generate male or female first name.
  mockName('male');   // Generate male first name.
  mockName('female'); // Generate female first name.
  mockInteger(1, 6);  // Generate integer in range from 1 to 6.
  mockString(16);     // Generate string of length 16.
  mockIPv4();         // Generate IPv4 represented with
                      // format (default is '*.*.*.*') as String.
  mockIPv6();         // Generate IPv6, same usage as with IPv4.
  mockColor('hex');   // Generate color represented in hex format.
  mockColor('rgb');   // Generate color represented in RGB format.
  mockUUID();         // Generate UUIDv4.
}

These are some basic examples. There are many more methods, and they all support tweaking of parameters to suit your needs when generating random data. By reading the examples you can learn more about the functionality and usage of mock_data.
https://pub.dev/packages/mock_data
Shade - Memcached Client for Scala

Overview

Shade is a Memcached client based on the de-facto Java library SpyMemcached. The interface exposed is very Scala-ish, as you have a choice between making asynchronous calls, with results wrapped as Scala Futures, or blocking calls. The performance is stellar, as it benefits from the optimizations that went into SpyMemcached over the years. Shade also fixes some problems with SpyMemcached's architecture, choices that made sense in the context of Java, but don't make so much sense in the context of Scala (TODO: add details). The client is production quality. Supported Scala versions: 2.10, 2.11 and 2.12.

Release Notes

Maintainers

These are the people maintaining this project that you can annoy:

- Alex: @alexandru
- Lloyd: @lloydmeta

Usage From SBT

libraryDependencies += "io.monix" %% "shade" % "1.10.0"

Initializing the Memcached Client

To initialize a Memcached client, you need a configuration object. Check out the Configuration case class.

import shade.memcached._
import scala.concurrent.ExecutionContext.Implicits.global

val memcached = Memcached(Configuration("127.0.0.1:11211"))

As you can see, you also need an ExecutionContext passed explicitly. As an implementation detail, the execution context represents the thread-pool in which requests get processed.
Simple non-blocking requests

Useful imports:

import concurrent.duration._ // for specifying timeouts
import concurrent.Future

Setting a key:

val op: Future[Unit] = memcached.set("username", "Alex", 1.minute)

Adding a key that will only set it if the key is missing (returns true if the key was added, or false if the key was already there):

val op: Future[Boolean] = memcached.add("username", "Alex", 1.minute)

Deleting a key (returns true if a key was deleted, or false if the key was missing):

val op: Future[Boolean] = memcached.delete("username")

Fetching a key:

val result: Future[Option[String]] = memcached.get[String]("username")

As you can see, for fetching a key the get() method needs an explicit type parameter, otherwise it doesn't know how to deserialize it. More on this below.

Blocking requests

Sometimes working with Futures is painful for quick hacks, therefore add(), set(), delete() and get() have blocking versions in the form of awaitXXX():

memcached.awaitGet("username") match {
  case Some(username) => println(s"Hello, $username")
  case None => memcached.awaitSet("username", "Alex", 1.minute)
}

Compare-and-set

Sometimes you want to have some sort of synchronization for modifying values safely, like incrementing a counter. Memcached supports Compare-And-Swap atomic operations and so does this client.

val op: Future[Boolean] = memcached.compareAndSet("username", Some("Alex"), "Amalia", 1.minute)

This will return either true or false if the operation was a success or not. But working with compareAndSet is too low level, so the client also provides these helpers:

def incrementCounter: Future[Int] =
  memcached.transformAndGet[Int]("counter", 1.minute) {
    case Some(existing) => existing + 1
    case None => 1
  }

The above returns the new, incremented value.
In case you want the old value to be returned, do this:

def incrementCounter: Future[Option[Int]] =
  memcached.getAndTransform[Int]("counter", 1.minute) {
    case Some(existing) => existing + 1
    case None => 1
  }

Serializing/Deserializing

Storing values in Memcached and retrieving values involves serializing and deserializing those values into bytes. Methods such as get(), set(), add() take an implicit parameter of type Codec[T], which is a type-class that specifies how to serialize and deserialize values of type T. By default, Shade provides default implementations of Codec[T] for primitives, such as Strings and numbers. Check out Codec.scala to see those defaults. For more complex types, a default implementation based on Java's ObjectOutputStream and ObjectInputStream exists (also in Codec.scala). However, because serializing/deserializing values like this is problematic (you can end up with lots of errors related to the ClassLoader used), this codec is available as part of the MemcachedCodecs trait (also in Codec.scala) and it either needs to be imported or mixed in. The import works like so:

import shade.memcached.MemcachedCodecs._

But this can land you in trouble because of the ClassLoader. For example in a Play 2.x application, in development mode the code is recompiled when changes happen and the whole environment gets restarted. If you do a plain import, you'll get ClassCastException or other weird errors. You can solve this by mixing in MemcachedCodecs in whatever trait, class or object you want to do requests, as in:

case class User(id: Int, name: String, age: Int)

trait HelloController extends Controller with MemcachedCodecs {
  def memcached: Memcached // to be injected

  // a Play 2.2 standard controller action
  def userInfo(id: Int) = Action.async {
    for (user <- memcached.get[User]("user-" + id))
      yield Ok(views.showUserDetails(user))
  }
  // ...
}

Or, in case you want to optimize serialization/deserialization, you can always implement your own Codec[T], like:

// hackish example
implicit object UserCodec extends Codec[User] {
  def serialize(user: User): Array[Byte] =
    s"${user.id}|${user.name}|${user.age}".getBytes("utf-8")

  def deserialize(data: Array[Byte]): User = {
    val str = new String(data, "utf-8")
    // split takes a regex, so the '|' must be escaped
    val Array(id, name, age) = str.split("\\|")
    User(id.toInt, name, age.toInt)
  }
}
https://index.scala-lang.org/monix/shade/shade/1.10.0?target=_2.12
Implementation status: to be implemented

Synopsis

#include <grp.h>

void endgrent(void);
struct group *getgrent(void);
void setgrent(void);

Description

The functions operate on the group database, which contains one entry for each group of users. The group structure is defined in <grp.h> as follows:

struct group {
    char   *gr_name;   /* group name */
    char   *gr_passwd; /* group password */
    gid_t   gr_gid;    /* group ID */
    char  **gr_mem;    /* NULL-terminated array of pointers
                          to names of group members */
};

Arguments: None.

The getgrent() function returns a pointer to a structure containing the broken-out fields of an entry in the group database. If the group database is not already open, getgrent() opens it and returns a pointer to a group structure containing the first entry in the database. Thereafter, it returns a pointer to a group structure containing the next group structure in the group database, so successive calls may be used to search the entire database. The setgrent() function rewinds the group database so that the next getgrent() call returns the first entry, allowing repeated searches. The endgrent() function closes the group database. On error, the setgrent() and endgrent() functions set errno to indicate the error. If you want to check for error situations, you should set errno to 0, then call the function, then check errno. These functions are not thread-safe.

Return value

On success the getgrent() function returns a pointer to a group structure. On end-of-file, getgrent() returns a null pointer and does not change the setting of errno. On error, getgrent() returns a null pointer and errno is set to indicate the error. The application shall not modify the returned structure; its contents may be overwritten by a subsequent call to getgrgid(), getgrnam(), or getgrent(). The returned pointer, and pointers within the structure, are invalidated if the calling thread is terminated.

Errors

[EINTR] - A signal was caught during the operation.
[EIO] - An I/O error has occurred.
[EMFILE] - All file descriptors available to the process are currently open.
[ENFILE] - The maximum allowable number of files is currently open in the system.

Implementation tasks

- Implement the getgrent() function.
- Implement the endgrent() function.
- Implement the setgrent() function.
http://phoenix-rtos.com/documentation/libphoenix/posix/endgrent
SD_BUS_GET_N_QUEUED_READ(3)

sd_bus_get_n_queued_read, sd_bus_get_n_queued_write - Get the number of pending bus messages in the read and write queues of a bus connection object

#include <systemd/sd-bus.h>

int sd_bus_get_n_queued_read(sd_bus *bus, uint64_t *ret);
int sd_bus_get_n_queued_write(sd_bus *bus, uint64_t *ret);

sd_bus_get_n_queued_read() and sd_bus_get_n_queued_write() return, in ret, the number of pending bus messages currently in the read queue and the write queue, respectively, of the specified bus connection object. On success, these functions return 0 or a positive integer. On failure, they return a negative errno-style error code.

Errors

Returned errors may indicate the following problems:

-ECHILD
    The bus connection was created in a different process.

See also: systemd(1), sd-bus(3), sd_bus_process(3), sd_bus_send(3), sd_bus_flush(3)

Pages that refer to this page: 30-systemd-environment-d-generator(7), systemd.index(7)
https://www.man7.org/linux/man-pages/man3/sd_bus_get_n_queued_write.3.html
This is a plugin to log app crashes with Sentry.

Run the following command from the root of your project:

$ tns plugin add @essent/nativescript-ng-sentry

This command automatically installs the necessary files, as well as stores nativescript-ng-sentry as a dependency in your project's package.json file.

To use nativescript-ng-sentry you must first import the module:

import { NgSentry } from '@essent/nativescript-ng-sentry';

At the launch of your app call setCredentials with your own credentials; these can be found in your Sentry Project Settings, Client Keys (DSN). Use the public DSN for these credentials. Optionally you can also provide an environment and a user id.

NgSentry.getInstance().setCredentials('123456', '123456789abcdefghijklmnopqrstuvw', 'development', 'unique-user-id');

To log a crash, call saveCrash with a message and details. The details will be used as a Sentry breadcrumb; you can use this to save a stacktrace, for example. You can have a look at our example on how to call this with an uncaughtErrorEvent.

NgSentry.getInstance().saveCrash('My crash message', 'My crash details');

Crashes are not sent to Sentry automatically; you can call sendCrashes to send all saved crashes to Sentry. We suggest you call this method in the resume event of your app.

NgSentry.getInstance().sendCrashes();

You can save breadcrumbs to see what a user did before a crash occurred; these will be added to the next crash you save. To add a breadcrumb use saveBreadcrumb with a title and category.

NgSentry.getInstance().saveBreadcrumb('Routed to details page', 'state');

Optionally you can add extra data to the breadcrumb.
const properties: KeyValue<string> = {
  page: 'Change user data',
  changed: 'Username'
};
NgSentry.getInstance().saveBreadcrumb('Save success', 'action', properties);

Optionally you can set a maximum number of breadcrumbs; the default is 50.

NgSentry.getInstance().setMaxAmountOfBreadcrumbs(10);
https://www.nsplugins.com/plugin/@essent-nativescript-ng-sentry
Rewrite it in Rust Early this year, I managed to mostly move away from JS development into native code, which in my case means a lot of C/C++, as well as Rust, hopefully more of that in the future. Most of what I will write about comes from my experience with sentry-native, which will soon release a rewritten version in C. That being said, all of the opinions in this post are my own. I also want to start this with a quote from Bruce Lee: If I tell you I'm good, probably you will say I'm boasting. But if I tell you I'm not good, you'll know I'm lying. # Imposter Syndrome While it has been quite some time since I have actively dealt with C code, I can get up to speed with anything you can throw at me pretty quickly. I do make quite good progress with my work on sentry-native; my code compiles, runs and passes tests. But for some reason, I don't really feel confident in it. I'm not really sure if the things that I do are really correct, or if it is just luck that it works. And I constantly have the feeling that I must be missing something, or that things will probably blow up at some point later. This is just in my mind though, and a classic example of imposter syndrome. And surprisingly, I don't have this when writing Rust. Writing Rust code really empowers me, in the literal sense that I feel powerful and confident when writing Rust code. I have the feeling that whatever I do is correct. Quite remarkable actually. # Distractions and Explicitness One reason that I don't feel very productive with C is that there is a lot of boilerplate and ceremony around almost everything. Dealing with allocations, strings, iterables and generics is very tedious. I sometimes have the feeling that I don't even see the real application logic because it is so obfuscated and drowns among all the mallocs, NULL-checks, manual copying and pointer-chasing. One of the big distractions is checking for NULL all the time. There are two issues with this.
One is that obviously, these checks do come with a runtime cost. The other one is about explicitness. Is returning NULL part of the API contract, like Option in Rust? Does it actually mean something? Or is it just cargo-culted boilerplate that people copy-paste, because it's what everyone else is doing? I actually had to deal with a bug where NULL had special meaning. # Infallible allocation Most of the checks however are just unnecessary boilerplate in my opinion. And this boilerplate multiplies btw. Say you have 3 allocations in a function. When you assume that they can fail, you would have to make sure to free the ones before the failing one, right? And what do you want to do anyway? Just return NULL from your function? What if the function has a different return type? Will it silently fail? Can you actually recover from a possible allocation failure? Your program needs memory to do its job. If it doesn't have it, it can't do its job, and it might be the best idea to just crash hard. The other question is, will you ever get a NULL from malloc anyway? I have read some quite good blog posts about this topic in the past, but don't have any links handy. In any case. Nowadays, most software you run will be 64-bit, which means that virtual address space is practically unlimited. And most systems, even smartphones, have a lot of physical memory. A lot more than a typical program should allocate. If it does, it is very likely that it has some leaks anyway. And it is not only about your own program. It is kind of the behavior of the OS. Some time ago, there was a post about Linux behaving horribly under low memory conditions, which I have also experienced at some point. Your system will stall hard, up to the point of requiring you to power-cycle, long before your program will get a NULL from malloc.
If they do fail, I think it raises a panic, which you can decide to recover from, or not. Anyway, from a developer point of view, the code looks a lot cleaner! You can actually start to see the business logic underneath all the boilerplate. Oh, and the use of Option makes intentions very clear, which brings me to the next point. # Documentation This has been praised a lot, and for good reason. The Rust documentation is excellent! The format is awesome, and most of the docs have examples, which thanks to doctests will also never be out of sync. When looking for C docs, there are a ton of different websites, and most of them are just horrible. Rustdoc itself is awesome, but the whole spectrum of rust documentation is a delight! # Ownership and Mutability Speaking of Documentation and Memory-management. The ownership model of Rust actually makes so much sense! Working with C code, I often don't know who is responsible for freeing some memory. And I would guess that there is a lot of unnecessary copying going on because of that. And not to mention memory leaks. Sure, you can also leak memory in Rust, but it's a lot harder! One kind-of way to guess this in C is the const keyword. If a function returns something const, it usually means that ownership is not transferred. But the other way around, ownership and mutability are two completely different things. Maybe I return something that is mutable, but must not be freed! # Strings Another thing that deserves a lot of praise is the Rust &str type, which really is just a &[u8] slice, which is guaranteed to be valid utf8, which is a really awesome guarantee to have! For interfacing with the OS, there is OsStr, with appropriate conversion functions. I had to touch a bit of OS-specific string code in C recently, and it was horrible. But the real power actually lies in the way that strings in Rust are represented as slices, as a pair of (pointer, length), whereas strings in C need to be \0 terminated.
This makes Rust strings a lot more efficient. In Rust, you can trivially get a sub-slice of the string, whereas in C, you have to copy the sub-slice, and \0-terminate it. To actually make a copy, you will also need the length of the string, which is an O(N) operation in C, but O(1) in Rust. Apart from this, the &str API of Rust is very rich! I miss .lines() and .ends_with() so much! On the other hand, I also made the experience that Rust strings are not as easy to deal with as, for example, JS. But now I think that maybe the way that I index into, and slice my JS strings is actually unsafe, considering unicode outside the ASCII range. # API and ABI Now that I have touched a bit on both memory allocation, and having to copy a lot when working with C, one way Rust avoids this is by better dealing with value-types and reference-types. In Rust, you can more easily return structs from functions, and move them into functions via arguments. Those will live on the stack and don't require allocation, which makes it more efficient than in C. Most of the time though you will deal with references, as in C. And from a coding perspective, there is no difference, whereas in C, you will have to learn the difference between -> and ., which makes refactoring more annoying in some places.
Very annoying, and for some reason, the compiler didn't warn me of those. # Generics and Traits Another thing that came to my mind is that Rust's traits, iterators and generics make it super easy to deal with streaming data, which can further improve performance, and avoid a ton of intermediate allocations. I am actually considering re-implementing something like Write in C, which would abstract away serializing data either into an in-memory buffer, onto disk, or right onto the network, without having to allocate a lot of intermediate buffers. But I already know that the C version can never be as fast as Rust, as it would likely involve dynamic function calls, whereas Rust can just specialize and inline everything. # Dependency Management A bit related to ABI is also the question of static vs dynamic linking. Rust does not really do dynamic linking (or does it?) There are some technical differences between static and dynamic linking. Dynamic linking can better namespace things, and also share both memory and disk space among programs. But seriously, in a world where our phones run Java, our Desktops run JavaScript, and the Cloud does heavy sandboxing and containerization, we are way past caring about memory usage. Static linking has some performance advantages, with link-time-optimization and dead-code-elimination. And Rust has a good story on symbol mangling, avoiding some of the pitfalls of static linking. And since it has no stable ABI, it pretty much can't do dynamic linking anyway. Anyhow, I recently asked colleagues about this, before I realized that I wanted their opinion on something completely different. I was actually referring to vendoring dependencies vs relying on OS provided libraries. One of the only times I had problems compiling an older (unmaintained) rust app was because of openssl-sys, which was trying to compile and link against my OS provided version.
Which got out of sync, prevented the already compiled version of my app from starting, and made it impossible for me to actually re-compile. This is not a new problem either. There is a lot of talk about vendoring dependencies. That way you are independent of the libraries, and the versions thereof, that your OS provides. As always, there are tradeoffs. It might be a good idea that the Distribution can update system libraries, to patch vulnerabilities, in case you don't update your own vendored version. On the other hand, this limits the version of a library you can use, and also requires your users to have that certain library installed in the first place. Having to deal with such things in C again is a real throwback, and I would love to just be able to consume whatever version of a dependency that I want, and have it statically link and just work, no matter where I copy my resulting binary. This is true portability and "run everywhere". # Building Speaking of portability and dependencies. Rust has a really awesome story around cross compilation. And the way it does feature-flags and platform specific conditional code is awesome! This is just so much better than having tons of inconsistent, platform- and compiler-specific define flags. Oh, and it has a standard module system! And cargo!!! Having dealt with CMake for the past week, I really can't understand how it has ever gained such popularity. The configuration syntax is horrific! It is case-insensitive, functions have space-separated, optional and variadic arguments. Strings don't need to be quoted unless you want to use certain special chars (which ones?). And there is no clear distinction between plain strings and lists, at least not that I can tell. It has a global namespace of artifacts, with frequent name clashes, and it is absolutely not obvious to me how variables are scoped when you are dealing with multiple files. But at least I have figured out that it is a good idea to set target-specific flags.
Which is not really obvious in the first place. Oh, and have I mentioned that the documentation is also horrible. How to best consume and integrate with external (vendored) dependencies is also absolutely not obvious. Since I had to look at build systems again, I want to quote from the meson docs: every moment a developer spends writing or debugging build definitions is a second wasted. I am so happy that Rust has cargo and crates. It is so refreshing to work with! Things just work as they should, and as you would expect them to. # The paradox of choice Building C code is very much non-trivial, which explains the plethora of tools that exist out there. Not to mention that almost every project I know of has its own way of building, its own way of dealing with feature flags, etc. While choice and competition are certainly a good thing to have, and to allow. Too much can lead to fragmentation, and is quite frankly overwhelming. Rust on the other hand has one clear and obvious way of doing things. But it still offers the possibility to extend this if necessary. Rust has one way of building things. It has one way of configuring your builds. It has one way of documenting things. It has one way of doing testing. Of doing benchmarks. Etc, etc. And these are very good choices as well. IMO, it is not the case of Rust being too young to have fragmented. I have the impression that the things just work. Less time spent dealing with all that, more time to actually getting stuff done. # Onboarding and Confidence Coming full circle to the beginning. One thing that people criticize about Rust is its learning curve. Well yes, Rust takes some time to learn. But I think that investment provides a great return. As I said in my #rust2020 post, I do think learning Rust makes you a better developer. And most of the time, when there is no obvious easy solution to a problem, Rust kind-of leads the way to a better and more correct solution. Hard things are still hard. 
But once you have learned Rust, it is so much easier to get started and onboarded onto a bigger project, and feel productive very quickly. This is important! # Conclusion In my short time being a C developer again, I have already seen logic errors, threading problems, memory unsafety problems, and just plain inefficient code, which could all have been avoided by using Rust. And some of that code has been written by engineers far better than me. So much for the argument that smart engineers don't make mistakes. And yes, I would love to rewrite everything in Rust, just because! I am also very much in favor of a completely libc-free Rust! Where we have completely self-contained binaries which do their own syscalls, with their only dependency being a specific kernel version. I have too little knowledge about how this would look on platforms other than Linux, tbh. This could be a true cross-compile once, run everywhere language. Especially this cross-compiling, and the good things that I have heard about cbindgen, make me wish that I could just ship pre-built static and dynamic libraries for all the platforms for users who don't want to deal with compiling rust themselves, instead of having to deal with building C on all kinds of systems and compilers. There are just so many good things to say about Rust! I didn't even mention things like enums, pattern-matching and the fact that it has integer types that make sense (what is an unsigned long long int anyway?)!
https://swatinem.de/blog/rewrite-in-rust/
CC-MAIN-2020-34
en
refinedweb
I recently got a comment on my article from 1st February about Soldering a Slice of Pi/O asking for some help in programming it in Python. This reminded me that I have not got very far with this little board beyond checking that the Raspberry Pi could see it. So I decided to see what I could find and if I could make it do something. My first port of call was to look around the web. I quickly found several pages about the board and its key component, the MCP23017 i2c expander chip.

- The datasheet for the chip is here
- Construction and fitting instructions are here
- A general discussion of i/o expansion using these chips can be found here

Finding out how to write programs to control the pins, though, was a bit harder. There is quite a nice article about controlling the MCP23017 chip here. Mid-way down the page it gives some examples of controlling the ports using i2ctools. If you followed the steps given in the construction guide above, or in my original article, you should already have these tools installed so, for example, you can do:

i2cset -y 0 0x20 0x00 0x00
i2cset -y 0 0x20 0x12 0x01
sleep 1
i2cset -y 0 0x20 0x12 0x00

to switch A0 (that’s the pin nearest the Raspberry Pi GPIO header) on for a second, then off again. I hadn’t quite realised until I started this that the IO headers on the Slice of Pi/O board are in two distinct rows. The outside row is all GND, and the inside row contains the actual GPIO pins. There are also two pin numbering schemes in use. The chip documentation numbers the pins as two banks of eight, named A0-A7 and B0-B7. They are separated into two banks for practical reasons to do with addressing. i2c software, however, typically seems to refer to the pins as a single run from 0 to 15. Just to clear up any confusion: A0 is pin 0, A7 is pin 7, B0 is pin 8, B7 is pin 15. The pins run in ascending order from the end of the board with the Raspberry Pi GPIO header. Pin 0 (A0) is nearest the header, and Pin 15 (B7) is farthest away.
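To avoid mixing up the two numbering schemes, the mapping can be captured in a couple of helper functions. These are my own illustrative helpers, not part of any i2c library:

```python
# Map between the MCP23017 datasheet pin names (A0-A7, B0-B7) and the
# single 0-15 numbering that most i2c software uses.

def pin_number(name):
    """Convert a bank/pin name like 'A0' or 'B7' to a 0-15 pin number."""
    bank, bit = name[0].upper(), int(name[1])
    if bank not in ("A", "B") or not 0 <= bit <= 7:
        raise ValueError("expected A0-A7 or B0-B7, got %r" % name)
    return bit + (8 if bank == "B" else 0)

def pin_name(number):
    """Convert a 0-15 pin number back to its datasheet name."""
    if not 0 <= number <= 15:
        raise ValueError("pin number must be 0-15")
    return ("A" if number < 8 else "B") + str(number % 8)

print(pin_number("A0"), pin_number("B7"))  # → 0 15
print(pin_name(8))                         # → B0
```

So A0 (nearest the GPIO header) is pin 0, and B7 (farthest away) is pin 15, matching the text above.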
Programming the pins in a shell script is all well and good, but what was asked for was a way to program the pins in Python. This needed a bit more looking around. Eventually I found some excellent teaching examples at Adafruit, in particular this one about programming the MCP23017. If you are already comfortable with programming in Python on the Raspberry Pi, then the code example available from that page may be all you need to get started. I have not done much of this, though. Most of my Raspberry Pi programming has either been in C or in Ruby. To give the Adafruit examples a try, I decided to have a go at installing and using the Python Web IDE. Installation was pretty simple: just follow the instructions at Adafruit. The example code provided is OK for checking that everything is in place, but “out of the box” it just reads and prints the value of pin 3, then flashes pin 0 on and off as fast as Python will go. This is not very easy for an ordinary human being to spot. To make things a bit more visible, I wired an LED with a resistor in series between pin 0 and GND, then put a one second sleep between the flashes:

import time
...
while True:
    mcp.output(0, 1)  # Pin 0 High
    time.sleep(1)
    mcp.output(0, 0)  # Pin 0 Low
    time.sleep(1)

At first I thought it was not working, but then I realised that the problem was in my choice of resistor. I had picked one with too high a value, so the LED was very dim. But when I turned off the room lights I could just about see it flashing. Flushed with success I decided to do something a bit more interesting, and upgraded the code to cycle round three different pins, “flashing” them in sequence:

pins = [0, 1, 2]
t = 1
while True:
    for pin in pins:
        mcp.output(pin, 1)
        time.sleep(t)
        mcp.output(pin, 0)
        time.sleep(t)

Trying to do this with my poor selection of resistors and LEDs was not very satisfactory, so I used it as an excuse to break out my Saleae USB logic analyser.
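Incidentally, the dim-LED problem above comes down to simple Ohm's-law arithmetic when sizing the series resistor. The figures here (3.3 V supply, 2 V forward drop, 10 mA target current) are assumed values for a typical red LED on 3.3 V logic, not measurements from my board:

```python
# Rough LED series-resistor sizing from Ohm's law: R = (Vs - Vf) / I.
# All three figures below are assumptions for a typical red LED.
supply_v = 3.3           # expander output voltage (assumed)
led_forward_v = 2.0      # LED forward drop (assumed)
target_current_a = 0.010  # 10 mA target current (assumed)

r = (supply_v - led_forward_v) / target_current_a
print("use roughly a %.0f ohm resistor" % r)  # → use roughly a 130 ohm resistor
```

Pick a resistor much larger than that and the current (and therefore brightness) drops accordingly, which is exactly what happened with my over-sized choice.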
I’ll have to write another post about this device, but for now, I connected two of its channels to pins 0 and 1, then decreased the wait time “t” of the above example to 0.001s. I got a lovely trace showing the pins rising and falling. I think that’s enough for now. I have seen the pins working from Python, both as inputs and outputs, which I think is what was wanted. While looking I did find another likely looking example of this sort of thing, with Python code here. I’ve not tried this one, but if you do, please let me know how you get on.

[update] I have finally found out how to stop WordPress converting x characters as mentioned in the comments. Hopefully all the x characters on this page and others should now be correct. If you do find any rogue “multiply” symbols please let me know. Thanks, Frank.

There’s also an MCP23008 tutorial here but I dunno if that’s any better than any of the other tutorials you’ve read.

Thanks for that Frank. Unfortunately, I get an error that I’m unable to fathom when I cut ‘n paste your example commands:

pi@raspberrypi ~ $ i2cset -y 1 0×20 0×00 0×00
Error: Chip address is not a number!

A definition of I2C parameters I found at states: chip-address specifies the address of the chip on that bus, and is an integer between 0x03 and 0x77. data-address specifies the address on that chip to write to, and is an integer between 0x00 and 0xFF. You specify 0x00 as the chip address in your example, yet this contradicts that it is “…an integer between 0x03 and 0x77…”. Are the I2C range parameters different for Ubuntu? Grasping at straws, I tried replacing the chip address 0x00 with 0x20 with the same result. Where am I going wrong?
pi@raspberrypi ~ $ i2cdetect -y 1
     0 1 2 3 4 5 6 7 8 9 a b c d e f
00:  ———- — — — — — — — — — — — —
10:  — — — — — — — — — — — — — — — —
20:  20 — — — — — — — — — — — — — — —
30:  — — — — — — — — — — — — — — — —
40:  — — — — — — — — — — — — — — — —
50:  — — — — — — — — — — — — — — — —
60:  — — — — — — — — — — — — — — — —
70:  — — — — — — — —
pi@raspberrypi ~ $ i2cdetect -l
i2c-0   i2c   bcm2708_i2c.0   I2C adapter
i2c-1   i2c   bcm2708_i2c.1   I2C adapter

Frank *is* using 0x20 as his chip-address. That man-page you link to says:

i2cset [-f] [-y] [-m mask] [-r] i2cbus chip-address data-address [value [mode]]

and Frank’s command line is:

i2cset -y 0 0×20 0×00 0×00

so the parameters match up as: i2cbus=0, chip-address=0x20, data-address=0x00, value=0x00 (-y is a flag that doesn’t take any additional arguments). (Frank’s using i2cbus 0, and you’re using i2cbus 1, so I’m assuming Frank must be using a Rev1.0 RPi and you’re using a Rev2.0 RPi.) I don’t currently have access to my Pi to check, but I’d guess from your error that maybe you’re using a O (capital letter o) somewhere instead of a 0 (number zero)? Also, some browsers get funny with ‘hidden formatting’ getting copied when you cut’n’paste from a website, so try typing in the command by hand instead of cut’n’pasting.

Nah, checked that. ‘Tried manual typing. The manual says: “…chip-address specifies the address of the chip on that bus, and is an integer between 0x03 and 0x77…” The i2cdetect command shows my device at address 0x20 (with value 0x20?) and listing empty values at addresses 0x03 through 0x77 except 0x20. OK – I can now! do:

pi@raspberrypi ~ $ i2cset -y 1 0x20 0x20 0x01
pi@raspberrypi ~ $ i2cset -y 1 0x20 0x20 0x00

I think the error was that chip address 0x00 wasn’t valid (but why the error report “is not a number” – I thought zero was a number when I was at school). Therefore, the above two commands firstly select pin A0 (on the Slice of PI/O) and then switch it high? Is that correct? This is weird.
I just powered up the raspberry pi, still with its pi/O attached, copied the above examples and now I see the same problem. It’s odd because I created the above examples by cutting and pasting from my terminal window. After a bit of checking it seems that WordPress may have changed the “x” characters. They look a little strange, and when I went back and replaced them with proper lower-case ‘x’ characters the command line worked! Try replacing the “x”s and see if things work better for you too.

Yeah – that’s the problem – those darned x s. All the LEDs on my relay board lit up like a darned christmas tree! Thanks Frank

Glad that got it sorted. I’m guessing that the problem here is something to do with the syntax colouring plugin I have on this blog. I think it’s time to look for a better one!

Huh?! Didn’t I already suggest typing in the whole command by hand instead of copy’n’pasting? ;-)

Yeah – I did type the ‘hole command. ‘Had holes in it… Doh!
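The root cause of the errors in this comment thread is easy to reproduce: WordPress had substituted the multiplication sign ‘×’ (U+00D7) for the letter ‘x’ in the hex prefixes, and no hex parser will accept that:

```python
# WordPress replaced the letter 'x' in '0x20' with the multiplication
# sign '×' (U+00D7), which is why i2cset's number parser rejected it.
good = "0x20"
bad = "0\u00d720"   # renders almost identically: '0×20'

assert int(good, 16) == 0x20
try:
    int(bad, 16)
    parsed = True
except ValueError:
    parsed = False
assert not parsed  # '×' is not a valid hex prefix, so parsing fails
print("only %r is a valid hex literal" % good)
```

The two strings are nearly indistinguishable on screen, which is why the bug survived both cut-and-paste and careful retyping from the page.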
http://raspberryalphaomega.org.uk/2013/03/14/controlling-a-slice-of-pio-with-python/
MVVM Support

This topic demonstrates two approaches to using the MVVM pattern in DXBars, DXRibbon and GalleryControl.

Defining UI at View Level

This is the simplest approach, in which the UI is defined at the View level. UI element behavior is implemented by commands defined in a View Model.

Example

This example creates a simple UI consisting of a MainMenuControl with a button. A click on the button invokes the ShowTextCommand defined in a bound POCO View Model.

NOTE A complete sample project is available at.

Imports System
Imports System.Collections.Generic
Imports System.Configuration
Imports System.Data
Imports System.Linq
Imports System.Threading.Tasks
Imports System.Windows

Namespace WpfApplication25
    ''' <summary>
    ''' Interaction logic for App.xaml
    ''' </summary>
    Partial Public Class App
        Inherits Application
    End Class
End Namespace

Defining UI at View Model Level

This is an advanced approach, in which the UI is defined in View Models. The View Models contain all the information required to render or populate visual components with data. WPF DevExpress controls provide the following properties to support this MVVM pattern implementation.

- ...(Item)Source - This property is used to bind a component/class to a data collection.
- ...(Item)Style - Allows you to customize styles of visual objects generated using templates. It also allows you to implement custom template selecting logic.
- ...(Item)Template - Templates are used to render underlying objects in a specific form.
- ...(Item)TemplateSelector - Template selectors allow you to choose templates based on custom logic.

Below is the list of these properties provided by specific WPF DevExpress Menu and Navigation controls.

NOTE When you define a DataTemplate, the DataTemplate's root element must be a ContentControl with the required object (bar, bar item, ribbon page, etc.) as the content.
Implicit Data Templates

The ...Template and ...TemplateSelector properties support the direct assignment of Data Templates. You can also associate Data Templates with certain View Models implicitly. Define a DataTemplate object in resources and associate it with a specific View Model using the DataType property. The DataType property should specify the type of the target View Model. The implicit template association feature is demonstrated in the "Implicit Data Templates" demo shipped with the installation.

Examples

Example 1

This example demonstrates the advanced approach to implementing MVVM support in an application using DXBars. A complete example can be found in the "MVVM Bars" demo shipped with the installation. The very same approach is applicable when designing applications with DXRibbon and GalleryControl. See the "MVVM Ribbon" demo to learn more. This example shows how to populate bars in a BarManager and bar items from underlying collections. Assume there are two classes (View Models) that describe bars and bar items. The first class, MVVMBarViewModel, provides a Bars collection, whose elements (BarViewModel objects) describe individual bars.

public class MVVMBarViewModel {
    public virtual ObservableCollection<BarViewModel> Bars { get; set; }
    public MVVMBarViewModel() {
        Bars = new ObservableCollection<BarViewModel>();
        //...
    }
    //...
}

The second class, BarViewModel, contains a Commands collection, whose elements (BarButtonInfo objects) contain information to initialize bar items.

public class BarViewModel {
    public BarViewModel() {
        Name = "";
        Commands = new ObservableCollection<BarButtonInfo>();
    }
    public virtual string Name { get; set; }
    public virtual ObservableCollection<BarButtonInfo> Commands { get; set; }
}

The main window's DataContext is set to the MVVMBarViewModel in XAML. This DataContext will be propagated to the window's children, including a BarManager component.
Once the BarManager is ensured of receiving the proper DataContext, it can be populated with bars from the MVVMBarViewModel.Bars collection using data binding.

<local:BarsDemoModule.Resources>
    <DataTemplate x:
        <ContentControl>
            <dxb:Bar
        </ContentControl>
    </DataTemplate>
</local:BarsDemoModule.Resources>
<dxdb:DemoModuleControl>
    <Grid>
        <dxb:BarManager
    </Grid>
</dxdb:DemoModuleControl>

Here, the BarManager.BarsSource property is bound to the MVVMBarViewModel.Bars collection. The BarManager.BarTemplate property is set to a template that will visualize elements in the BarManager.BarsSource collection. The collection's elements (BarViewModel objects) are automatically assigned to the DataTemplate's DataContext, allowing bar settings to be initialized from BarViewModel properties. Thus, the Bar.Caption property is bound to the BarViewModel.Name property and the Bar.ItemLinkSource property is bound to the BarViewModel.Commands property. Generally speaking, a DataTemplate's DataContext is automatically set to the object being visualized by this template. When defining a DataTemplate for a Bar, the DataTemplate's root element must be a ContentControl with a Bar object as the content. It is also possible to define a style that will be automatically applied to each bar created from a template. For instance, in the markup below, a style defines an item template selector (an object that selects templates for bar items based on custom logic).

<local:CommandTemplateSelector x:
<Style x:
    <Setter Property="ItemTemplateSelector" Value="{StaticResource itemTemplateSelector}"/>
</Style>
<dxb:BarManager ...

All bindings between View Models and View classes are set up in XAML, without using code-behind files. However, there is one exception: template selectors must be written in code-behind files. The CommandTemplateSelector below chooses between two DataTemplates (SubItemTemplate or ItemTemplate).
public class CommandTemplateSelector : DataTemplateSelector {
    public DataTemplate SubItemTemplate { get; set; }
    public DataTemplate ItemTemplate { get; set; }

    public override DataTemplate SelectTemplate(object item, DependencyObject container) {
        if (item != null && item is BarButtonInfo) {
            if (item is GroupBarButtonInfo)
                return SubItemTemplate;
            else
                return ItemTemplate;
        }
        return null;
    }
}

The approach used to initialize bar items within bars is identical to the one used to initialize bars. See the "MVVM Bars" demo shipped with the installation for a complete example.

Example 2

TIP A complete sample project is available in the DevExpress Code Examples database at.

The example demonstrates how to generate pages, groups and items from a collection according to the MVVM pattern. To generate pages in the RibbonPageCategory, bind the RibbonPageCategoryBase.PagesSource property to a collection. Use the RibbonPageCategoryBase.PageTemplate property to specify a template for generated pages. The RibbonPage and RibbonPageGroup contain similar properties for generating groups and bar items:

- RibbonPage.GroupsSource, RibbonPage.GroupTemplate
- RibbonPageGroup.ItemLinksSource, RibbonPageGroup.ItemTemplate
https://docs.devexpress.com/WPF/10434/controls-and-libraries/ribbon-bars-and-menu/common-concepts/mvvm-support
Having wasted all my free time yesterday on trying to find out how the PiFace CAD is interfaced to the Raspberry Pi, I thought I’d take a different approach today. When I was last working with SPI, I used my trusty Saleae Logic analyser to find out what was happening, so I thought I’d connect this up between the Raspberry Pi and the PFCAD, to see what is really going on. The first problem, as I pointed out in my initial review of the PiFace Control and Display, was that the PFCAD board does not provide easy access to any of the Pi’s GPIO pins. I did not want to mess around unsoldering the PFCAD, so my solution was to extend the Pi’s GPIO pins with the same kind of “long leg” socket that I use for my own circuit boards. I then ran the “sysinfo” example, which had worked fine yesterday:

python3 /usr/share/doc/python3-pifacecad/examples/sysinfo.py

Oddly, it failed, with a very strange complaint.

Traceback (most recent call last):
  File "/usr/share/doc/python3-pifacecad/examples/sysinfo.py", line 5, in <module>
    import pifacecad
  File "/usr/lib/python3/dist-packages/pifacecad/__init__.py", line 19, in <module>
    from pifacecommon.interrupts import (
EOFError: EOF read where not expected

After a bit of poking around, it seems that something has gone wrong with “python3”. When I tried the outwardly similar

python /usr/share/doc/python3-pifacecad/examples/sysinfo.py

it worked without any complaints. I can only assume that this is a result of SD card corruption when the Pi was powered off. Maybe I should use my Pi Supply switch, after all. My first run of the “sysinfo” example showed some activity on the MOSI and MISO lines, but did not trigger the SPI analyser. It looked like I was monitoring the wrong select line. So I switched to triggering on CS1 and bingo! Now we are getting somewhere. The PFCAD is using CS1.
Looking at the data being transferred, I see a chain of short groups of bytes:

0x40 0x0A 0x08
0x41 0x0A 0x00
0x40 0x00 0xFF
0x40 0x0C 0xFF
0x40 0x13 0x00
0x40 0x01 0x00
0x40 0x04 0xFF

Then quite a long gap (around 22ms) before we get to anything else. It seems quite likely that this is some sort of initialisation sequence, so let's try to send it from some other code and see what happens.

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <string.h>
#include <errno.h>
#include <wiringPiSPI.h>

#define CHANNEL 1

void send3(uint8_t a, uint8_t b, uint8_t c)
{
    uint8_t buf[3];
    buf[0] = a;
    buf[1] = b;
    buf[2] = c;
    wiringPiSPIDataRW(CHANNEL, buf, 3);
}

void main(int argc, char** argv)
{
    if (wiringPiSPISetup(CHANNEL, 4000000) < 0) {
        fprintf(stderr, "SPI Setup failed: %s\n", strerror(errno));
        exit(errno);
    }

    printf("start\n");
    send3(0x40, 0x0A, 0x0B);
    send3(0x41, 0x0A, 0x00);
    send3(0x40, 0x00, 0xFF);
    send3(0x40, 0x0C, 0xFF);
    send3(0x40, 0x13, 0x00);
    send3(0x40, 0x01, 0x00);
    send3(0x40, 0x04, 0xFF);
    printf("done\n");
}

This is a fairly simple piece of C code, using Gordon's WiringPi library to do the low-level SPI twiddling, but I was impressed to see that it had some effect. It switched off the back-light on the PFCAD display! This is more than I managed in a day of code autopsy and head scratching. I could keep on with this approach, and transcribe all the rest of the bytes I saw on the SPI bus, but I would prefer to work at a slightly higher level. So I need to work out what these mean. This means dipping into the MCP23S17 datasheet. From the documentation, the MCP23S17 expects three-byte sequences of the form OPCODE REGISTER VALUE. OPCODE varies according to which MCP23S17 is being addressed, but in this case 0x40 is WRITE and 0x41 is READ. I'm not sure at the moment what the PFCAD software is doing with its write then read of register 0x0A, but I'm guessing it might be a check to see if the port expander is present and working. The others seem relatively straightforward, though.
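To make sense of the captured frames at a higher level, the three-byte sequences can be decoded symbolically. This is my own Python sketch, not part of the PiFace software; the register names are taken from the MCP23S17 datasheet (BANK=0 addressing), and only the registers seen in the trace are listed:

```python
# Decode three-byte MCP23S17 SPI transactions: OPCODE REGISTER VALUE,
# with opcode 0x40 = write and 0x41 = read for a chip at address 0.
# Register names are from the MCP23S17 datasheet (BANK=0 map).
REGISTERS = {
    0x00: "IODIRA", 0x01: "IODIRB", 0x04: "GPINTENA",
    0x0A: "IOCON", 0x0C: "GPPUA", 0x12: "GPIOA", 0x13: "GPIOB",
}

def decode(op, reg, value):
    action = {0x40: "WRITE", 0x41: "READ"}.get(op, "op 0x%02X" % op)
    name = REGISTERS.get(reg, "reg 0x%02X" % reg)
    return "%s %s 0x%02X" % (action, name, value)

for frame in [(0x40, 0x0A, 0x08), (0x40, 0x00, 0xFF), (0x40, 0x12, 0x01)]:
    print(decode(*frame))
# → WRITE IOCON 0x08
# → WRITE IODIRA 0xFF
# → WRITE GPIOA 0x01
```

Run over the captured sequence, this reads as configuring IOCON, setting bank A direction, enabling pull-ups, and so on, which matches the initialisation interpretation above.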
These seem to appear in a slightly strange order, but the intention is clear. The LCD is connected to bank "B", so the bank is set to output with initially low values. Bank A is set as inputs with pull-up resistors and default interrupt handling, so those are either unused, or connected to the PFCAD buttons. One of the Bank B data lines is obviously connected to the back-light input on the HD44780, so when they are all set low, the backlight switches off. I think that's enough detective work for now. I'll see how much further I can get another time!
http://raspberryalphaomega.org.uk/2013/11/09/detective-work-on-piface-control-and-display/
Akonadi::Server::ItemMoveHandler Class Reference

#include <itemmovehandler.h>

Inheritance diagram for Akonadi::Server::ItemMoveHandler:

Detailed Description

Handler for the item move command.

Semantics

Moves the selected items. Item selection can happen within the usual three scopes:

- based on a uid set relative to the currently selected collection
- based on a global uid set (UID)
- based on a list of remote identifiers within the currently selected collection (RID)

Destination is a collection id.

Definition at line 30 of file itemmovehandler.h.

Member Function Documentation

Parse and handle the IMAP message using the streaming parser. The implementation MUST leave the trailing newline character(s) in the stream!

- Returns - true if parsed successfully, false in case of parse failure

Implements Akonadi::Server::Handler.

Definition at line 128 of file itemmovehandler.cpp.

The documentation for this class was generated from the following files:

This file is part of the KDE documentation. Documentation copyright © 1996-2020 The KDE developers. Generated on Sun Aug 2 2020 23:15:27 by doxygen 1.8.11 written by Dimitri van Heesch, © 1997-2006. KDE's Doxygen guidelines are available online.
https://api.kde.org/kdepim/akonadi/html/classAkonadi_1_1Server_1_1ItemMoveHandler.html
strip and objcopy don't filter out .. components from paths inside an archive. Consider an archive created with the following command:

$ printf '!<arch>\n%-48s%-10d`\n../file\n%-48s%-10s`\n' '//' 8 '/0' 0 > test.a

then running strip/objcopy on it will unlink ./file (e.g. unlink("stq0g2tL/../st4Mtgu4/../file")). Consider this:

$ printf '!<arch>\n%-48s%-10d`\n../../file\n\n%-48s%-10s`\n' '//' 12 '/0' 0 > test.a

then running strip/objcopy on it will unlink ../../file (e.g. unlink("staOxyFW/../../st4KIqLm/../../file")). See also .

Created attachment 7899 [details] Proposed patch

Hi Alexander, Please could you try out the uploaded patch and let me know if it works for you? Cheers Nick

Yes, the check seems to be OK in general, and the specific issues are fixed. Two remarks:

- strip/objcopy don't remove temporary files and dirs when run on the test.a from below. Perhaps this is intended behavior, I don't know;
- you seem to target Windows, but the macros in include/filenames.h don't check for DOS special names like con and prn (but it shouldn't be a problem under cygwin1.7).

This is an automated email from the git hooks/post-receive script. It was generated because a ref change was pushed to the repository containing the project "gdb and binutils". The branch, master has been updated via dd9b91de2149ee81d47f708e7b0bbf57da10ad42 (commit) from 834107255bbefceb445fa733ebc1ea5d9f41ec7f (commit). Those revisions listed above that are new to this repository have not appeared on any other notification email; so we list those revisions in full, below.

- Log -----------------------------------------------------------------;h=dd9b91de2149ee81d47f708e7b0bbf57da10ad42

commit dd9b91de2149ee81d47f708e7b0bbf57da10ad42
Author: Nick Clifton <nickc@redhat.com>
Date: Thu Nov 6 14:49:10 2014 +0000

    Prevent archive memebers with illegal pathnames from being extracted from an archive.
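For illustration, the shape of the check the patch introduces (is_valid_archive_path in binutils/bucomm.c, per the ChangeLog later in this thread) can be sketched in Python. This is an illustrative model, not the actual C implementation:

```python
# Sketch of an archive-member path check in the spirit of binutils'
# is_valid_archive_path: reject member names that are absolute or that
# contain a '..' component, so extraction can never escape the
# destination directory.
def is_valid_archive_path(name):
    if name.startswith("/") or name.startswith("\\"):
        return False  # absolute paths are never acceptable member names
    parts = name.replace("\\", "/").split("/")
    return ".." not in parts  # any '..' component allows escape

assert not is_valid_archive_path("../file")        # the first test case
assert not is_valid_archive_path("../../file")     # the second test case
assert not is_valid_archive_path("/etc/passwd")
assert is_valid_archive_path("dir/file.o")         # ordinary names pass
```

With such a check in place, both crafted test.a archives above are rejected before any unlink can reach outside the extraction directory.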
-----------------------------------------------------------------------
Summary of changes:
 binutils/ChangeLog         | 16 ++++++++++++++--
 binutils/ar.c              |  9 +++++++++
 binutils/bucomm.c          | 26 ++++++++++++++++++++++++++
 binutils/bucomm.h          | 12 ++++++++----
 binutils/doc/binutils.texi |  3 ++-
 binutils/objcopy.c         |  6 ++++++
 6 files changed, 65 insertions(+), 7 deletions(-)

Hi Alexander, OK - I have checked the patch in. With regard to your questions:

1) Leaving the temporary files behind is not an intended feature, it is a bug. I'll see about creating a patch to fix it.
2) Adding handling for Windows special files seems a bit over the top. Are there any real-world scenarios where this would be a real problem?

Cheers Nick

Created attachment 7909 [details] Cleanup temporary files on error

Hi Alexander, Please try out this patch and see if it gets rid of those left over temporary files... Cheers Nick

(In reply to Nick Clifton from comment #6)
> Please try out this patch and see if it gets rid of those left over
> temporary files...

The patch doesn't apply to git head:

patching file binutils/objcopy.c
Hunk #1 FAILED at 2298.
Hunk #2 FAILED at 2310.
Hunk #3 FAILED at 2353.
3 out of 5 hunks FAILED -- saving rejects to file binutils/objcopy.c.rej

(In reply to Nick Clifton from comment #5)
> 2) Adding handling for Windows special files seems a bit over the top.
> Are there any real-world scenarios where this would be a real problem?

Not really sure. I mainly think about sending garbage to a serial (com1-com9) or parallel (lpt9) port when something is connected to it, or to a printer (prn). But I haven't checked what native Windows ar (or other tools) will do in such a case.

Created attachment 7913 [details] Proposed patch (regenerated)

Hi Alexander, Sorry about that. The master sources are changing rapidly at the moment. Please try this regenerated patch instead. Cheers Nick

Sorry, Nick, the new patch seems exactly as the previous. And it doesn't apply to git head.
Did I miss something?

Hi Alexander,

> Sorry, Nick, the new patch seems exactly as the previous.

So it is. :-( I just assumed that I had made a mistake last time and so I regenerated the patch. I should have checked to see if it was actually different in some way.

> And it doesn't apply to git head. Did I miss something?

Well I guess so. I generated the patch by running "git diff binutils/objcopy.c" from an up-to-date set of binutils master sources, so I do not see how I missed anything. Are you using the master branch? If you have a look at the failed hunks is there anything obvious about why they did not apply? If it still causes you problems I will just go ahead and check the patch in. Then you can pull an updated set of sources and try again. My local testing has not shown up any problems with the patch...

Cheers Nick

Ok, figured it out -- tabs were garbled while copy-pasting from a Web-page. Sorry for the noise. The patch is working for me (binutils/strip-new and binutils/objcopy).
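The cleanup-on-error behaviour that the temporary-files patch adds to copy_archive follows a standard pattern: remove the temporaries whether or not the copy step fails. A sketch of that pattern (in Python rather than the actual C of objcopy.c) looks like this:

```python
# Illustration (Python, not the actual C code in objcopy.c) of the
# cleanup-on-error pattern: the temporary directory is removed whether
# the processing step succeeds or raises an error.
import shutil
import tempfile

def copy_with_cleanup(process):
    tmpdir = tempfile.mkdtemp(prefix="st")
    try:
        return process(tmpdir)   # may raise, like a failed archive copy
    finally:
        shutil.rmtree(tmpdir)    # runs on success *and* on error

def failing_copy(tmpdir):
    raise RuntimeError("copy failed")

try:
    copy_with_cleanup(failing_copy)
except RuntimeError:
    print("copy failed, but no temporary directory was left behind")
```

Before the fix, the error path returned early and skipped the removal step, which is exactly why a crafted test.a left stray st* directories lying around.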
----------------------------------------------------------------------- Summary of changes: binutils/ChangeLog | 6 ++++++ binutils/objcopy.c | 21 ++++++++++++++------- 2 files changed, 20 insertions(+), 7 deletions(-) This is an automated email from the git hooks/post-receive script. It was generated because a ref change was pushed to the repository containing the project "gdb and binutils". The branch, binutils-2_25-branch has been updated via 8f66a6af276d17c0e386cd2409873f2e3e0b8a37 (commit) via 32a9d621c3c480aa093a089a36e36c35f68a4010 (commit) from ff67f476b9907b9fddfbafff52caa4cce6a6f58c (commit) Those revisions listed above that are new to this repository have not appeared on any other notification email; so we list those revisions in full, below. - Log -----------------------------------------------------------------;h=8f66a6af276d17c0e386cd2409873f2e3e0b8a37 commit 8f66a6af276d17c0e386cd2409873f2e3e0b8a37 Merge: 32a9d62 ff67f47 Author: Nick Clifton <nickc@redhat.com> Date: Mon Nov 17 17:04:16 2014 +0000 Merge branch 'binutils-2_25-branch' of ssh://sourceware.org/git/binutils-gdb into binutils-2_25-branch Conflicts: gas/ChangeLog;h=32a9d621c3c480aa093a089a36e36c35f68a4010 commit 32a9d621c3c480aa093a089a36e36c35f68a4010 Author: Nick Clifton <nickc@redhat.com> Date: Mon Nov 17 16:59:09 2014 +0000 Applies a series of patches for PR 17512 and 17533 which fix invalid memory accesses. 2014-11-13 Nick Clifton <nickc@redhat.com> PR binutils/17512 * config/obj-coff.c (coff_obj_symbol_new_hook): Set the is_sym field. 2014-11-14 Nick Clifton <nickc@redhat.com> PR binutils/17512 * dwarf.c (get_encoded_value): Add an 'end' parameter. Change the 'data' parameter to a double pointer and return the updated value. (decode_location_expression): Update call to get_encoded_value. (frame_need_space): Handle the case where one or both of the mallocs fails. (read_cie): Initialise the cie pointer, even if the read fails. 
(display_debug_frames): Warn if the calculated block_end is before the start of the block. Break the loop if the CIE could not be read. Update call to get_encoded_value. Warn if the read CFA expressions are too big. 2014-11-13 Nick Clifton <nickc@redhat.com> PR binutils/17531 * readelf.c (process_version_sections): If the read of the version def information fails, make sure that the external verdef data is not used. (get_dynamic_data): Do not attempt to allocate memory for more dynamic data than there is in the file. If the read fails, free the allocated buffer. (process_symbol_table): Do not print dynamic information if we were unable to read the dynamic symbol table. (print_gnu_note): Do not print the note if the descsz is too small. 2014-11-12 Nick Clifton <nickc@redhat.com> PR binutils/17512 * dwarf.c (read_and_display_attr_value): Check that we do not read past end. (display_debug_pubnames_worker): Add range checks. (process_debug_info): Check for invalid pointer sizes. (display_loc_list): Likewise. (display_loc_list_dwo): Likewise. (display_debug_ranges): Likewise. (display_debug_aranges): Check for invalid address size. (read_cie): Add range checks. Replace call strchr with while loop. * objdump.c (dump_dwarf): Replace abort with a warning message. (print_section_stabs): Improve range checks. * rdcoff.c (coff_get_slot): Use long for indx parameter type. Add check for an excesively large index. * rddbg.c (read_section_stabs_debugging_info): Zero terminate the string table. Avoid walking off the end of the stabs data. * stabs.c (parse_stab_string): Add check for a NULL name. 2014-11-11 Nick Clifton <nickc@redhat.com> PR binutils/17531 * binutils/readelf.c (dynamic_nent): Change type to size_t. (slurp_rela_relocs): Use size_t type for nrelas. (slurp_rel_relocs): Likewise. (get_program_headers): Improve out of memory error message. (get_32bit_section_headers): Likewise. (get_32bit_section_headers): Likewise. (get_64bit_section_headers): Likewise. 
(get_32bit_elf_symbols): Likewise. (get_64bit_elf_symbols): Likewise. (process_section_groups): Likewise. (get_32bit_dynamic_section): Likewise. (get_64bit_dynamic_section): Likewise. (process_dynamic_section): Likewise. (process_version_sections): Likewise. (get_symbol_index_type): Likewise. (process_mips_specific): Likewise. (process_corefile_note_segment): Likewise. (process_version_sections): Use size_t type for total. (get_dynamic_data): Change type of number parameter to size_t. Improve out of memory error messages. (process_symbol_table): Change type of nbuckets and nchains to size_t. Skip processing of sections headers if there are none. Improve out of memory error messages. 2014-11-11 Nick Clifton <nickc@redhat.com> PR binutils/17531 * readelf.c (display_arm_attribute): Avoid reading off the end of the buffer when processing a Tag_nodefaults. 2014-11-10 Nick Clifton <nickc@redhat.com> PR binutils/17531 * readelf.c (ia64_process_unwind): Replace assertion with an error message. Add range checking for group section indicies. (hppa_process_unwind): Replace assertion with an error message. (process_syminfo): Likewise. (decode_arm_unwind_bytecode): Add range checking. (dump_section_as_strings): Add more string range checking. (display_tag_value): Likewise. (display_arm_attribute): Likewise. (display_gnu_attribute): Likewise. (display_tic6x_attribute): Likewise. (display_msp430x_attribute): Likewise. 2014-11-10 Nick Clifton <nickc@redhat.com> PR binutils/17552 * objcopy.c (copy_archive): Clean up temporary files even if an error occurs. 2014-11-07 Nick Clifton <nickc@redhat.com> PR binutils/17531 * readelf.c (get_data): Avoid allocating memory when we know that the read will fail. (find_section_by_type): New function. (get_unwind_section_word): Check for invalid symbol indicies. Check for invalid reloc types. (get_32bit_dynamic_section): Add range checks. (get_64bit_dynamic_section): Add range checks. (process_dynamic_section): Check for a corrupt time value. 
  (process_symbol_table): Add range checks.
  (dump_section_as_strings): Add string length range checks.
  (display_tag_value): Likewise.
  (display_arm_attribute): Likewise.
  (display_gnu_attribute): Likewise.
  (display_tic6x_attribute): Likewise.
  (display_msp430x_attribute): Likewise.
  (process_mips_specific): Add range check.

2014-11-06  Nick Clifton  <nickc@redhat.com>

2014-11-05  Nick Clifton  <nickc@redhat.com>

  PR binutils/17531
  * readelf.c (printable_section_name): New function.
  (printable_section_name_from_index): New function.
  (dump_relocations): Use new function.
  (process_program_headers, get_32bit_elf_symbols, get_64bit_elf_symbols, process_section_headers, process_section_groups, process_relocs, ia64_process_unwind, hppa_process_unwind, get_unwind_section_word, decode_arm_unwind, arm_process_unwind, process_version_sections, process_symbol_table, apply_relocations, get_section_contents, dump_section_as_strings, dump_section_as_bytes, display_debug_section, process_attributes, process_mips_specific, process_gnu_liblist): Likewise.
  (get_unwind_section_word): Check for a missing symbol table. Replace aborts with error messages.
  (arm_process_unwind): Check for a missing string table.
  (process_attributes): Check for an attribute length that is too small.
  (process_mips_specific): Check for a corrupt GOT symbol offset.

2014-11-05  Nick Clifton  <nickc@redhat.com>

  PR binutils/17533
  * bucomm.c (is_valid_archive_path): New function.
  * bucomm.h (is_valid_archive_path): Prototype it.
  * ar.c (extract_file): Call is_valid_archive_path to verify a member filename before extracting it.
  * objcopy.c (copy_archive): Likewise.

2014-11-04  Nick Clifton  <nickc@redhat.com>

  PR binutils/17531
  * readelf.c (get_data): If the reason parameter is null, do not print any error messages.
  (get_32bit_section_headers): Verify section header entry size before reading in the section headers.
  (get_64bit_section_headers): Likewise.
  (process_section_headers): Pass FALSE to get_section_headers.
  (get_file_header): Pass TRUE to get_section_headers.
  (process_dynamic_section): Change an assert to an error message.
  (process_symbol_table): Handle corrupt histograms.
  (get_32bit_program_headers): Verify program header entry size before reading in the program headers.
  (get_64bit_program_headers): Likewise.
  (get_unwind_section_word): Do nothing if no section was provided. Fail if the offset is outside of the section.
  (print_dynamic_symbol): Catch out of range symbol indices.
  (process_mips_specific): Likewise.
  (process_attributes): Make sure that there is enough space left in the section before attempting to read the length of the next attribute.

2014-11-03  Nick Clifton  <nickc@redhat.com>

  PR binutils/17512
  * objdump.c (slurp_symtab): Fail gracefully if the table could not be read.
  (dump_relocs_in_section): Likewise.

2014-11-14  Nick Clifton  <nickc@redhat.com>

  PR binutils/17597
  * opncls.c (bfd_get_debug_link_info): Avoid reading off the end of the section.
  (bfd_get_alt_debug_link_info): Likewise.

2014-11-14  Nick Clifton  <nickc@redhat.com>

  PR binutils/17512
  * ieee.c (ieee_archive_p): Skip processing if no bytes are read at all.
  (ieee_object_p): Likewise.

2014-11-13  H.J. Lu  <hongjiu.lu@intel.com>

  * coffcode.h (coff_slurp_line_table): Add cast to unsigned int.

2014-11-13  H.J. Lu  <hongjiu.lu@intel.com>

  * coffcode.h (coff_pointerize_aux_hook): Fix a typo.

2014-11-13  Nick Clifton  <nickc@redhat.com>

  PR binutils/17512
  * coffcode.h (coff_ptr_struct): Add is_sym field.
  (coff_new_section_hook): Set the is_sym field.
  (coff_pointerize_aux_hook): Check the is_sym field.
  (coff_print_aux): Likewise.
  (coff_compute_section_file_positions): Likewise.
  (coff_write_object_contents): Likewise.
  (coff_slurp_line_table): Likewise.
  (coff_slurp_symbol_table): Likewise.
  (CALC_ADDEND): Likewise.
  * coffgen.c (coff_renumber_symbols): Likewise.
  (coff_mangle_symbols): Likewise.
  (coff_fix_symbol_name): Likewise.
  (coff_write_symbol): Likewise.
  (coff_write_alien_symbol): Likewise.
  (coff_write_native_symbol): Likewise.
  (coff_write_symbols): Likewise.
  (coff_write_linenumbers): Likewise.
  (coff_pointerize_aux): Likewise.
  (coff_get_normalized_symtab): Likewise.
  (coff_get_symbol_info): Likewise.
  (bfd_coff_get_syment): Likewise.
  (bfd_coff_get_auxent): Likewise.
  (coff_print_symbol): Likewise.
  (coff_find_nearest_line_with_names): Likewise.
  (bfd_coff_set_symbol_class): Likewise.
  (coff_make_empty_symbol): Set the is_sym field.
  (coff_bfd_make_debug_symbol): Likewise.
  * peicode.h (pe_ILF_make_a_symbol): Likewise.
  * libcoff.h: Regenerate.
  * libcoff-in.h: Regenerate.

2014-11-12  Nick Clifton  <nickc@redhat.com>

  PR binutils/17512
  * coffcode.h (coff_slurp_line_table): Set the line number of corrupt entries to -1.
  (coff_slurp_symbol_table): Always initialise the value of the symbol.
  * coffgen.c (coff_print_symbol): Check that the combined pointer is valid.
  (coff_print_symbol): Do not print negative line numbers.
  * peXXigen.c (pe_print_idata): Add range checking when displaying member names.

2014-11-12  Alan Modra  <amodra@gmail.com>

  PR binutils/17512
  * coffcode.h (coff_slurp_line_table): Drop line number info not preceded by a valid function entry. Revert last change.

2014-11-11  Nick Clifton  <nickc@redhat.com>

  PR binutils/17512
  * coffcode.h (coff_slurp_line_table): Initialise the parts of the line number cache that would not be initialised by the copy from the new line number table.
  (coff_classify_symbol): Allow for _bfd_coff_internal_syment_name returning NULL.
  * coffgen.c (coff_get_normalized_symbols): Get the external symbols before allocating space for the internal symbols, in case the get fails.
  * elf.c (_bfd_elf_slurp_version_tables): Only allocate a verref array if one is needed. Likewise with the verdef array.
  * peXXigen.c (_bfd_XXi_swap_sym_in): Replace abort()'s with error messages.
  (_bfd_XXi_swap_aux_in): Make sure that all fields of the aux structure are initialised.
  (pe_print_edata): Avoid reading off the end of the data buffer.

2014-11-11  Alan Modra  <amodra@gmail.com>

  PR binutils/17512
  * coffcode.h (coff_slurp_line_table): Use updated lineno_count when building func_table.

2014-11-11  Alan Modra  <amodra@gmail.com>

  PR binutils/17512
  * coffcode.h (coff_slurp_line_table): Don't bfd_zalloc, just memset the particular bits we need. Update src after hitting loop "continue". Don't count lineno omitted due to invalid symbols in nbr_func, and update lineno_count. Init entire terminating lineno. Don't bother allocating terminator in n_lineno_cache. Redirect sym->lineno pointer to where n_lineno_cache will be copied, and free n_lineno_cache.
  * pe-mips.c (NUM_HOWTOS): Typo fix.

2014-11-10  Nick Clifton  <nickc@redhat.com>

  PR binutils/17521
  * coff-i386.c (NUM_HOWTOS): New define.
  (RTYPE2HOWTO): Use it.
  (coff_i386_rtype_to_howto): Likewise.
  (coff_i386_reloc_name_lookup): Likewise.
  (CALC_ADDEND): Check that reloc r_type field is valid.
  * coff-x86_64.c (NUM_HOWTOS): New define.
  (RTYPE2HOWTO): Use it.
  (coff_amd64_rtype_to_howto): Likewise.
  (coff_amd64_reloc_name_lookup): Likewise.
  (CALC_ADDEND): Check that reloc r_type field is valid.
  * coffcode.h (coff_slurp_line_table): Check for symbol table indexing underflow.
  (coff_slurp_symbol_table): Use zalloc to ensure that all table entries are initialised.
  * coffgen.c (_bfd_coff_read_string_table): Initialise unused bits in the string table. Also ensure that the table is 0 terminated.
  (coff_get_normalized_symtab): Check for symbol table indexing underflow.
  * opncls.c (bfd_alloc): Catch the case where a small negative size can result in only 1 byte being allocated.
  (bfd_alloc2): Use bfd_alloc.
  * pe-mips.c (NUM_HOWTOS): New define.
  (coff_mips_reloc_name_lookup): Use it.
  (CALC_ADDEND): Check that reloc r_type field is valid.
  * peXXigen.c (_bfd_XXi_swap_aouthdr_in): Initialise unused entries in the DataDirectory.
  (pe_print_idata): Avoid reading beyond the end of the data block when printing strings.
  (pe_print_edata): Likewise. Check for table indexing underflow.
  * peicode.h (pe_mkobject): Initialise the pe_opthdr field.
  (pe_bfd_object_p): Allocate and initialize enough space to hold a PEAOUTHDR, even if the opt_hdr field specified less.

2014-11-08  Alan Modra  <amodra@gmail.com>

  * peXXigen.c (pe_print_idata): Revert last patch, cast lhs instead.

2014-11-07  H.J. Lu  <hongjiu.lu@intel.com>

  * peXXigen.c (pe_print_idata): Cast to unsigned long in range checks.

2014-11-07  Alan Modra  <amodra@gmail.com>

  * tekhex.c (tekhex_set_arch_mach): Ignore unknown arch errors.

2014-11-07  Alan Modra  <amodra@gmail.com>

  * tekhex.c (CHUNK_SPAN): Define.
  (struct data_struct <chunk_init>): Use one byte per span, update all code accessing this field.
  (find_chunk): Add create param, don't create new entry unless set.
  (insert_byte): Don't save zeros.
  (first_phase): Set section SEC_CODE or SEC_DATA flag depending on symbol type. Create an alternate section if both types of symbol are given. Attach type '2' and '6' symbols to absolute section.
  (move_section_contents): Fix caching of chunk. Don't create chunk when reading, or for writing zeros.
  (tekhex_set_section_contents): Don't create initial chunks.
  (tekhex_write_object_contents): Use CHUNK_SPAN.

2014-11-07  Alan Modra  <amodra@gmail.com>

  * aoutx.h (aout_get_external_symbols): Tidy allocation of symbol buffer.

2014-11-07  Alan Modra  <amodra@gmail.com>

  * archive.c (_bfd_slurp_extended_name_table): Revert bfd_get_size check.
  * coffcode.h (coff_set_alignment_hook): Likewise.
  (coff_slurp_line_table): Likewise.
  * coffgen.c (coff_get_normalized_symtab): Likewise.
  (_bfd_coff_get_external_symbols): Likewise.
  * elf.c (bfd_elf_get_str_section): Likewise.
  * tekhex.c (first_phase): Likewise.

2014-11-06  Nick Clifton  <nickc@redhat.com>

  * aoutx.h (slurp_symbol_table): Revert previous delta.
  (slurp_reloc_table): Likewise.
  * compress.c (bfd_get_full_section_contents): Remove file size test.
  * coffgen.c (coff_get_normalized_symtab): Allow zero-sized symtabs and do not complain about linker generated files.

2014-11-04  Nick Clifton  <nickc@redhat.com>

  PR binutils/17512
  * coffcode.h (handle_COMDAT): Replace abort with BFD_ASSERT. Replace another abort with an error message.
  (coff_slurp_line_table): Add more range checking.
  * peXXigen.c (pe_print_debugdata): Add range checking.

2014-11-05  Nick Clifton  <nickc@redhat.com>

  PR binutils/17512
  * coffcode.h (coff_set_alignment_hook): Warn if the file lies about the number of relocations it contains.
  (coff_sort_func_alent): Return 0 if the pointers are NULL.
  (coff_slurp_line_table): Add more range checks. Do not free new tables created when sorting line numbers.
  * peXXigen.c (pe_print_idata): Add range checks.
  (pe_print_edata): Likewise.
  (rsrc_print_resource_entries): Likewise. Avoid printing control characters. Terminate printing if corruption is detected.
  (rsrc_print_resource_directory): Terminate printing if an unknown directory type is encountered.
  (pe_print_debugdata): Fix off-by-one error.
  (rsrc_count_entries): Add range checking.
  (rsrc_parse_entry): Likewise.

2014-11-04  Nick Clifton  <nickc@redhat.com>

  PR binutils/17512
  * compress.c (bfd_get_full_section_contents): Improve test for linker created objects.

  PR binutils/17533
  * archive.c (_bfd_slurp_extended_name_table): Handle archives with corrupt extended name tables.

2014-11-03  Nick Clifton  <nickc@redhat.com>

  PR binutils/17512
  * aoutx.h (slurp_symbol_table): Check that computed table size is not bigger than the file from which it is being read.
  (slurp_reloc_table): Likewise.
  * coffcode.h (coff_slurp_line_table): Remove unneeded local 'warned'. Do not try to print the details of a symbol with an invalid index.
  * coffgen.c (make_a_section_from_file): Check computed string index against length of string table.
  (bfd_coff_internal_syment_name): Check read in string offset against length of string table.
  (build_debug_section): Return a pointer to the section used.
  (_bfd_coff_read_string_table): Store the length of the string table in the coff_tdata structure.
  (bfd_coff_free_symbols): Set the length of the string table to zero when it is freed.
  (coff_get_normalized_symtab): Check offsets against string table or data table lengths as appropriate.
  * cofflink.c (_bfd_coff_link_input_bfd): Check offset against length of string table.
  * compress.c (bfd_get_full_section_contents): Check computed size against the size of the file.
  * libcoff-in.h (obj_coff_strings_len): Define.
  (struct coff_tdata): Add strings_len field.
  * libcoff.h: Regenerate.
  * peXXigen.c (pe_print_debugdata): Do not attempt to print the data if the debug section is too small.
  * xcofflink.c (xcoff_link_input_bfd): Check offset against length of string table.

2014-10-31  Nick Clifton  <nickc@redhat.com>

  PR binutils/17512
  * coffgen.c (_bfd_coff_get_external_symbols): Do not try to load a symbol table bigger than the file.
  * elf.c (bfd_elf_get_str_section): Do not try to load a string table bigger than the file.
  * tekhex.c (first_phase): Check that the section range is sane.
-----------------------------------------------------------------------

Summary of changes:
 bfd/ChangeLog              |  282 ++++++
 bfd/aoutx.h                |   24 +-
 bfd/archive.c              |    5 +-
 bfd/coff-i386.c            |   17 +-
 bfd/coff-x86_64.c          |   11 +-
 bfd/coffcode.h             |  170 +++++---
 bfd/coffgen.c              |  168 ++++--
 bfd/cofflink.c             |    5 +-
 bfd/elf.c                  |   24 +-
 bfd/ieee.c                 |    6 +-
 bfd/libcoff-in.h           |    3 +
 bfd/libcoff.h              |   16 +-
 bfd/opncls.c               |   41 +-
 bfd/pe-mips.c              |    9 +-
 bfd/peXXigen.c             |  220 +++++---
 bfd/peicode.h              |   15 +-
 bfd/tekhex.c               |  112 +++--
 bfd/xcofflink.c            |    5 +-
 binutils/ChangeLog         |  199 +++++++
 binutils/ar.c              |    9 +
 binutils/bucomm.c          |   26 ++
 binutils/bucomm.h          |   12 +-
 binutils/doc/binutils.texi |    3 +-
 binutils/dwarf.c           |  209 +++++---
 binutils/objcopy.c         |   23 +-
 binutils/objdump.c         |   27 +-
 binutils/rdcoff.c          |    9 +-
 binutils/rddbg.c           |   40 ++-
 binutils/readelf.c         | 1039 ++++++++++++++++++++++++++++++++------------
 binutils/stabs.c           |   30 +-
 gas/ChangeLog              |   10 +
 gas/config/obj-coff.c      |    1 +
 32 files changed, 2109 insertions(+), 661 deletions(-)
https://sourceware.org/bugzilla/show_bug.cgi?id=17552
#include "petscdmda.h"

PetscErrorCode DMDAVecGetArrayWriteF90() does not work with gfortran versions before 4.5

Developer Notes: This has code duplication with DMDAVecGetArray() and DMDAVecGetArrayRead()
https://www.mcs.anl.gov/petsc/petsc-master/docs/manualpages/DMDA/DMDAVecGetArrayWrite.html
lp:~macslow/unity/unity.fix-863230

Get this branch:
- bzr branch lp:~macslow/unity/unity.fix-863230

Branch merges
- John Lea (community): Needs Fixing (design) on 2012-03-19
- Tim Penhey (community): Approve on 2012-02-21
- Diff: 42 lines (+20/-0), 2 files modified
  - manual-tests/DragDropDashLauncher.txt (+6/-0)
  - plugins/unityshell/src/Launcher.cpp (+14/-0)

Related bugs
Related blueprints

Branch information
- Owner: Mirco Müller
- Status: Merged

Recent revisions
- 1972. By Mirco Müller on 2012-02-29
  Added a manual-test for the bug
- 1971. By Mirco Müller on 2012-02-15
  Best attempt yet, trying to fix LP: #863230
- 1970. By Sam Spilsbury on 2012-02-14
  Fix bug 931409. Don't unset the handlers until the window is destroyed. Fixes: https://bugs.launchpad.net/bugs/931409. Approved by Tim Penhey.
- 1969. By Marco Trevisan (Treviño) on 2012-02-14
  Correctly draw the tooltip texture to fix bug #930043. Fixes: https://bugs.launchpad.net/bugs/930043. Approved by Tim Penhey.
- 1968. By Thomi Richards on 2012-02-14
  Added a Hud controller class, and a simple unit test to test a hud bug. Fixes: https://bugs.launchpad.net/bugs/931405. Approved by Tim Penhey.
- 1967. By Jason Smith on 2012-02-14
  Fixes leak caused by freeing memory before we release its contents. Fixes: . Approved by Tim Penhey.
- 1966. By Thomi Richards on 2012-02-14
  Refactor autopilot emulators into separate namespaces. Make it easier to create autopilot emulators for new Unity state objects. Fixes: . Approved by Tim Penhey, Alex Launi.
- 1965. By Andrea Cimitan on 2012-02-14
  - Added shadows on launcher icons and alt-tab icons
  - Tweaked BFB edges
  - Tweaked background tile color for wallpaper-colorized launcher icons (including BFB)
  - Added device and show-desktop launcher icons to the list of wallpaper-colorized launcher icons.
  Fixes: . Approved by Marco Trevisan (Treviño), Andrea Cimitan.
- 1964. By Tim Penhey on 2012-02-13
  Fixes the hud crashing when you open hud on an empty desktop. Fixes: https://bugs.launchpad.net/bugs/931405. Approved by Marco Trevisan (Treviño).
- 1963. By Jason Smith on 2012-02-13
  Makes unity use nux::ObjectPtr for AbstractLauncherIcon's rather than doing manual memory management. Fixes: . Approved by Gord Allott.
https://code.launchpad.net/~macslow/unity/unity.fix-863230
import "go.chromium.org/luci/common/sync/dispatcher"

Package dispatcher implements a super-charged version of a buffered channel connected to a (potentially) parallelized work dispatcher. This can be used when you have a mismatch between the rate of production of items and the rate of consumption of those items. For example:

* if you have a producer which constantly produces new world states, and you want to sink the latest one into a slow external RPC (but still do retries if no new state appears).
* if you have bursty user data which you'd like to batch according to some maximum batch size, but you don't want the data to get too stale in case you don't hit that batch limit.
* your external RPC can absorb almost infinite data, and the order of delivery doesn't matter, but you don't want to block the data producer.
* etc.

The dispatcher can be configured to:

* Buffer a certain amount of work (with possible backpressure to the producer).
* Batch pending work into chunks for the send function.
* Drop stale work which is no longer important to send.
* Enforce a maximum QPS on the send function (even with parallel senders).
* Retry batches independently with configurable per-batch retry policy.

channel.go coordinator.go doc.go options.go options_validate.go

DropFnQuiet is an implementation of Options.DropFn which drops batches without logging anything.

DropFnSummarized returns an implementation of Options.DropFn which counts the number of dropped batches, and only reports it at the rate provided. Unlike the default log function, this only logs the number of dropped items and the duration that they were collected over.

ErrorFnQuiet is an implementation of Options.ErrorFn which doesn't log the batch, but does check for `transient.Tag` to determine `retry`.

type Channel struct {
    // C is an unbuffered channel which you can push single work items into.
    //
    // Close this to shut down the Channel.
    C chan<- interface{}

    // DrainC will unblock when this Channel is closed/canceled and fully drained.
    DrainC <-chan struct{}
}

Channel holds a chan which you can push individual work items to.

NewChannel produces a new Channel ready to listen and send work items.

Args:

* `ctx` will be used for cancellation and logging. When the `ctx` is canceled, the Channel will:
  * drop all incoming data on Channel.C; All new data will be dropped (calling DropFn).
  * drop all existing unleased batches (calling DropFn)
  * ignore all errors from SendFn (i.e. even if ErrorFn returns 'retry=true', the batch will be dropped anyway)
  If you want to gracefully drain the Channel, you must close the channel and wait for DrainC before canceling the context.
* `send` is required, and defines the function to use to process Batches of data. This function MUST respect `ctx.Done`, or the Channel cannot drain properly.
* `opts` is optional (see Options for the defaults).

The Channel MUST be Close()'d when you're done with it, or the Channel will not terminate. This applies even if you cancel it via ctx. The caller is responsible for this (as opposed to having Channel implement this internally) because there is no generally-safe way in Go to close a channel without coordinating that event with all senders on that channel. Because the caller of NewChannel is effectively the sender (owner) of Channel.C, they must coordinate closure of this channel with all their use of sends to this channel.

Close is a convenience function which closes C (and swallows panic if already closed).

CloseAndDrain is a convenience function which closes C (and swallows panic if already closed) and then blocks on DrainC/ctx.Done().

IsDrained returns true iff the Channel is closed and drained.

ErrorFn is called to handle the error from SendFn. This may:

* inspect/log the error
* manipulate the contents of failedBatch
* return a boolean of whether this Batch should be retried or not. If this is false then the Batch is dropped.
  If it's true, then it will be re-queued as-is for transmission according to BufferFullBehavior.
* pass the Batch.Data to another goroutine (in a non-blocking way!) to be re-queued through Channel.WriteChan.

Args:

* failedBatch - The Batch for which SendFn produced a non-nil error.
* err - The error SendFn produced.

Returns true iff the dispatcher should re-try sending this Batch, according to Buffer.Retry.

type Options struct {
    // [OPTIONAL] The ErrorFn to use (see ErrorFn docs for details).
    //
    // Default: Logs the error (at Info for retryable errors, and Error for
    // non-retryable errors) and returns true on a transient error.
    ErrorFn ErrorFn

    // [OPTIONAL] Called with the dropped batch any time the Channel drops a batch.
    //
    // This includes:
    //   * When FullBehavior==DropOldestBatch and we get new data.
    //   * When FullBehavior==DropOldestBatch and we attempt to retry old data.
    //   * When ErrorFn returns false for a batch.
    //
    // When the channel is fully drained, this will be invoked exactly once with
    // `(nil, true)`. This will occur immediately before the DrainedFn is called.
    // Some drop functions buffer their information, and this gives them an
    // opportunity to flush out any buffered data.
    //
    // Default: logs (at Info level if FullBehavior==DropOldestBatch, or Warning
    // level otherwise) the number of data items in the Batch being dropped.
    DropFn func(b *buffer.Batch, flush bool)

    // [OPTIONAL] Called exactly once when the associated Channel is closed and
    // has fully drained its buffer, but before DrainC is closed.
    //
    // Note that this takes effect whether the Channel is shut down via Context
    // cancellation or explicitly by closing Channel.C.
    //
    // This is useful for performing final state synchronization tasks/metrics
    // finalization/helpful "everything is done!" messages/etc. without having to
    // poll the Channel to see if it's done and also maintain external
    // synchronization around the finalization action.
    // Called in the main handler loop, but it's called after all other work is
    // done by the Channel, so the only thing it blocks is the closure of DrainC.
    //
    // Default: No action.
    DrainedFn func()

    // [OPTIONAL] A rate limiter for how frequently this will invoke SendFn.
    //
    // Default: 1 QPS with a burst of 1.
    QPSLimit *rate.Limiter

    Buffer buffer.Options
    // contains filtered or unexported fields
}

Options is the configuration options for NewChannel.

SendFn is the function which does the work to actually transmit the Batch to the next stage of your processing pipeline (e.g. do an RPC to a remote service). The function may manipulate the Batch however it wants (see Batch). In particular, shrinking the size of Batch.Data for confirmed-sent items will allow the dispatcher to reduce its buffer count when SendFn returns, even if SendFn returns an error. Removing items from the Batch will not cause the remaining items to be coalesced into a different Batch.

The SendFn MUST be bound to this Channel's Context; if the Channel's Context is Cancel'd, SendFn MUST terminate, or the Channel's DrainC will be blocked. We don't pass it as part of SendFn's signature in case SendFn needs to be bound to a derived Context.

Non-nil errors returned by this function will be handled by ErrorFn.

Package dispatcher imports 8 packages (graph) and is imported by 3 packages. Updated 2019-12-06.
https://godoc.org/go.chromium.org/luci/common/sync/dispatcher
Python has a module named time to handle time-related tasks. In this article, we will explore the time module in detail and learn to use the different time-related functions it defines.

The Python time module provides many ways of representing time in code, such as objects, numbers, and strings. It also provides functionality other than representing time, like waiting during code execution and measuring the efficiency of code.

Python Time Module Content Overview

If we want to use functions defined in the module, we need to import the module first.

import time

You can manage the concept of Python time in your application by using the floating-point number that represents the number of seconds that have passed since the beginning of an era, that is, since a particular starting point. Let's go to that Epoch point.

The Epoch

It is a Python time with a floating-point number representing elapsed time since the beginning of an era. An important concept to grasp here is that, when dealing with Python time, you're considering the period identified by the starting point. In computing, you call that starting point the epoch.

Python time.time()

Let's calculate the total seconds since the epoch.

# app.py
from time import time

seconds = time()
print("Seconds since epoch =", seconds)

Output

➜ pyt python3 app.py
Seconds since epoch = 1575025885.825993
➜ pyt

For the Unix system, January 1, 1970, 00:00:00 at UTC is the epoch (the point where time begins).

Python time.gmtime()

We can use time.gmtime() to determine your system's epoch.

# app.py
import time

print(time.gmtime(0))

Output

➜ pyt python3 app.py
time.struct_time(tm_year=1970, tm_mon=1, tm_mday=1, tm_hour=0, tm_min=0, tm_sec=0, tm_wday=3, tm_yday=1, tm_isdst=0)
➜ pyt

Python time.ctime()

The Python time.ctime() function takes seconds passed since epoch as the argument and returns a string representing local time.
# app.py
from time import ctime

# seconds passed since epoch
seconds = 1575025885.825993
local_time = ctime(seconds)
print("Local time:", local_time)

Output

➜ pyt python3 app.py
Local time: Fri Nov 29 16:41:25 2019
➜ pyt

A string representation of time, also known as the timestamp, returned by Python ctime() is formatted with the following structure:

- Day of the week: Fri (Friday)
- The month of the year: Nov (November)
- Day of the month: 29
- Hours, minutes, and seconds using a 24-hour clock notation: 16:41:25
- Year: 2019

Python 3.7 introduced the time_ns() function, which returns an integer value representing the same elapsed time since an epoch, but in nanoseconds rather than seconds.

Measuring time in seconds is useful for several reasons:

- You can use the float to calculate the difference between two points in time.
- The float is easily serializable, meaning that it can be stored for data transfer and come out intact on the other side.

Python time.sleep()

The Python sleep() function suspends (delays) execution of the current thread for the given number of seconds. To learn more, visit Python sleep().

# app.py
from time import sleep

sleep(1.1)
print("After 1.1 seconds")

Output

➜ pyt python3 app.py
After 1.1 seconds
➜ pyt

It will print the statement after 1.1 seconds.

Python time.localtime()

The Python localtime() function takes the number of seconds passed since epoch as an argument and returns struct_time in local time. See the following code.

# app.py
from time import localtime

ltime = localtime(1575125769)
print(ltime)

You can extract the year, month, day, hour, min, sec, and more data.

# app.py
from time import localtime

ltime = localtime(1575125769)
print(ltime)
print("Year", ltime.tm_year)
print("Month", ltime.tm_mon)
print("Day", ltime.tm_mday)
print("Second", ltime.tm_sec)

Output

Year 2019
Month 11
Day 30
Second 9
➜ pyt

If there is no argument or None is passed to the localtime() function, then the value returned by time() is used.
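The article mentions time_ns() (Python 3.7+) above but shows no code for it. A minimal sketch comparing it with time() — both describe the same instant, at different resolutions:

```python
import time

ns = time.time_ns()  # integer nanoseconds since the epoch (Python 3.7+)
s = time.time()      # float seconds since the epoch

print(isinstance(ns, int))               # True: time_ns() returns an int
print(abs(ns / 1_000_000_000 - s) < 1)   # True: both agree to within a second
```

Because time_ns() avoids floating-point rounding, it is the better choice when comparing timestamps that are very close together.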
Python time.mktime()

The mktime() function takes the struct_time (or a tuple containing 9 elements corresponding to struct_time) as the argument and returns the seconds passed since epoch in local time. It's the inverse function of localtime(). See the following code.

# app.py
from time import mktime

tm = (2019, 11, 29, 8, 44, 4, 4, 362, 0)
local_time = mktime(tm)
print("Local time:", local_time)

Output

➜ pyt python3 app.py
Local time: 1574997244.0
➜ pyt

Python time.asctime()

The Python asctime() function takes struct_time (or a tuple containing 9 elements corresponding to struct_time) as the argument and returns a string representing it.

# app.py
from time import asctime

tm = (2019, 11, 29, 8, 44, 4, 4, 362, 0)
local_time = asctime(tm)
print("Local time:", local_time)

Output

➜ pyt python3 app.py
Local time: Fri Nov 29 08:44:04 2019
➜ pyt

Python time.struct_time Class

Several functions in the time module, such as gmtime(), asctime(), etc., either take a time.struct_time object as the argument or return it. Here's an example of a time.struct_time object.

time.struct_time(tm_year=2019, tm_mon=11, tm_mday=29, tm_hour=6, tm_min=35, tm_sec=17, tm_wday=3, tm_yday=361, tm_isdst=0)

The values (elements) of the time.struct_time object are accessible using both indices and attributes.

Finally, the Python Time Module Example is over.
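As a closing sketch, the functions covered above can be tied together with a quick round-trip check — mktime() reverses localtime(), and ctime(t) is equivalent to asctime(localtime(t)). Note that localtime() discards the sub-second part of the float:

```python
import time

now = time.time()         # float seconds since the epoch
st = time.localtime(now)  # seconds -> time.struct_time in local time
back = time.mktime(st)    # time.struct_time -> seconds again

print(abs(back - now) < 1)                  # True: only sub-second precision is lost
print(time.asctime(st) == time.ctime(now))  # True: same timestamp string either way
```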
https://appdividend.com/2019/11/29/python-time-module-example-python-time-tutorial/
import "github.com/cosmos/cosmos-sdk/codec"

attempt to make some pretty json
MustMarshalJSONIndent executes MarshalJSONIndent except it panics upon failure.
Register the go-crypto to the codec
RegisterEvidences registers Tendermint evidence types with the provided codec.
amino codec to marshal/unmarshal
generic sealed codec to be used throughout sdk

Package codec imports 6 packages (graph) and is imported by 392 packages. Updated 2019-08-20.
https://godoc.org/github.com/cosmos/cosmos-sdk/codec
Here’s a Blazor Component that uses C# to retrieve data from a Web Service and display it in the new Blazor-Enabled KendoGrid. To put it another way: Client-side updates with no JavaScript required. As you'll see here, thanks to the Telerik UI for Blazor Early Preview, you can already start using Telerik UI components in a Blazor environment. The biggest problems you'll have in using this technology is creating the environment that it runs in – the components themselves just work as advertised. Fundamentally, this is all preview technology so you're going to need to use both the previews for Visual Studio 2019, and .NET Core 3.0 SDK. Once you've installed those downloads, you'll create your project by starting up Visual Studio 2019 Preview and selecting Create a New Project from the choices on the right of the initial dialog. In the ‘Create a new project' dialog that appears, select ASP.NET Core Web Applications and click the Next button. The next dialog will let you name your project (among other configuration options). Click the Create button after you're done with this page and (finally!) pick the kind of project you want to create. To work with Blazor, you'll want to use the Razor Components template, so select it, and click the Create button to initialize your project in its solution. This is your first opportunity to check that you've successfully installed the right “bundle of everything”: If your Visual Studio solution contains a single project that has, in its Components/Pages folder, files with the .razor extension, then you've got the right combination of Visual Studio and .NET Version 3.0. Razor Components aren't “true” Blazor in the sense that you'll have C# code running in the browser. Instead, Razor Components execute your C# code on the server and use SignalR to communicate between the client and server. This obviously limits the scalability of your application compared to fully executing all code on the client. 
However, it also avoids the two megabyte download that “Blazor on the client” currently requires (and, to be fair, in ASP.NET Core with the right topology, the scaling limit for SignalR is pretty high). The major benefit of experimenting with Razor Components is that it's an official technology that's included in .NET Core 3.0, while “Blazor on the client,” despite being in version 0.9.0, is still — and I'm quoting Microsoft here — in “pre-alpha.” However, the code and HTML in this post should work as-is with “Blazor on the Client” when (or if) that technology comes out of alpha. The one wrinkle that I found after I got everything set up was that I would make changes to my code and it would make no difference to my running application until I selected Build | Rebuild Solution. If you get tired of constantly requesting rebuilds (or, as I did, just kept forgetting to do it) then the simplest solution is to trigger a rebuild every time you debug in every project by going to Tools | Options | Projects and Solutions | .NET Core and unchecking the Up to Date Checks option. If that solution seems too heavy-handed to you, then you can fix the problem for the project you're working on by going into the project's .csproj file and adding this: <ItemGroup> <UpToDateCheckInput Include="@(Content->WithMetadataValue('Extension', '.razor'))" /> </ItemGroup> You're now ready to add the Telerik components to your project. The Telerik team, not surprisingly, provides the best description of how to set up your application to use the new Blazor-enabled components. You'll also need to tweak your project file. Hey, no one said that exploring the future of technology was going to be easy. For the case study I'm going to use here, I added a second ASP.NET Core API project to my solution. This Web Service will provide a set of Customer objects. I defined my Customer object in another Class Library (.NET Standard) project that I referenced from both my Razor Components and API project. 
I added both of these other projects just by right-clicking on the Solution node in Solution Explorer and selecting Add | New Project. My Customer class is ridiculously simple: namespace Customers.Common { public class Customer { public int Id { get; set; } public string FirstName { get; set; } public string LastName { get; set; } } } And my API class wasn't much more complicated. I created a List of Customer objects in the constructor for a controller class (cleverly called Customers) and returned that collection from my controller class's Get method: [Route("Customers")] [ApiController] public class Customers : ControllerBase { List<Customer> custs = new List<Customer>(); public Customers() { custs.Add(new Customer { Id = 1, FirstName = "Peter", LastName = "Vogel" }); // more customers added } [HttpGet] public ActionResult<IEnumerable<Customer>> Get() { return custs; } } I also set up the Solution to start debugging with two Startup projects by right-clicking on the Solution node, picking Set Startup Projects and selecting the ‘Multiple startup projects' option in the resulting dialog. In the list of projects that follow that option, I set the Action dropdown list to “Start” for both my Razor Components project and my API project before clicking the OK button. If you've put all that in place, then you're ready to create a Razor Components page that uses the KendoGrid to display a list of Customer objects retrieved from a Web Service. To add the page, right-click on the Razor Components' project's Components/Pages folder and select Add | New Item. Unfortunately, the Add New Item dialog currently doesn't include a template that generates a Razor Components class. However, if you select the Razor View template and set the file name to end with the .razor extension, everything will work out fine (it's the .razor extension that defines a Razor Component). I called my file DisplayCustomers.razor. 
At the top of my Razor component, I set the URL path to retrieve this component, added the namespace for the Class Library holding the definition of my Customer class, and enabled the Tag Helpers in the Kendo.Blazor namespace (I could also have done that last step in one of my project's _ViewImports files). Here's what that looked like: @page "/displaycustomers" @using Customers.Common @addTagHelper *,Kendo.Blazor Within my page, I added a header and used the Kendo Tag Helpers to define my KendoGrid. This gives you another checkpoint to see if you've got the “bundle of right stuff”: If your KendoGrid tags aren't in boldface, then the Telerik.UI.for.Blazor NuGet package didn't get installed correctly. <h1>List Customers</h1> <KendoGrid Data=@customers Sortable=true> <KendoGridColumns> <KendoGridColumn Field="Id" Title="Customer Id" /> <KendoGridColumn Field="FirstName" Title="First Name" /> <KendoGridColumn Field="LastName" Title="Last Name" /> </KendoGridColumns> </KendoGrid> This grid is attached to a field in my Razor Component that will provide the data for the grid to display (I also turned on automatic sorting because, well, why not?). I've defined three columns in my grid, one for each of the three properties on my Customer object. I also set the title for the columns those properties are displayed in. Now, I need to define the field that will hold that data. That code looks like this: @functions { private IEnumerable<Customer> customers = null; } At this point, all the page will do is display a blank grid. To start retrieving the data from the Web Service, I need, in my component's functions block, to override the component's OnInitAsync method. That method is automatically run as part of initializing the component as it's displayed, so it's a good place to do any initialization for the component. 
Rather than load a lot of code into the OnInitAsync method, I'll put the code that does all the work in a separate method that I'll call GetCustomers: @functions { private IEnumerable<Customer> customers = null; protected override Task OnInitAsync() { GetCustomers(); return base.OnInitAsync(); } } For that GetCustomers method, I just write C# code, leveraging the .NET Core APIs. I added this method right below my OnInitAsync method: public async void GetCustomers() { HttpClient hc = new HttpClient(); HttpResponseMessage rm = await hc.GetAsync(""); customers = await rm.Content.ReadAsAsync<IEnumerable<Customer>>(); StateHasChanged(); } To use the ReadAsAsync method to call my service, I had to add a NuGet package to my project (Microsoft.AspNet.WebApi.Client). But, I'll point out, that's just another C# package. Really, only the call at the end of the method to the Razor Component's StateHasChanged method shows that this code is updating client-side resources. The StateHasChanged method notifies Blazor that the client's DOM has been updated which, in turn, causes Blazor to compare the updated DOM to what the user is currently seeing, figure out what's different, and update the user's view to reflect the loaded data. And there's the beauty of Blazor: efficient client-side updates without a line of JavaScript and with access to the full .NET Core API. My client-side toolkit has suddenly gotten much more focused. The full project referenced in this article can be found here. To learn more about Telerik UI for Blazor and download the preview, don't forget to check out this introductory blog post or visit the product page here!
https://www.telerik.com/blogs/integrating-web-services-with-kendogrid-blazor-and-razor-components
8.5. Implementation of Recurrent Neural Networks from Scratch¶ In this section we implement a language model introduced in Section 8 from scratch. It is based on a character-level recurrent neural network trained on H. G. Wells’ The Time Machine. As before, we start by reading the dataset, which is introduced in Section 8.3. %matplotlib inline import d2l import math from mxnet import autograd, np, npx, gluon npx.set_np() Remember that each token is represented as a numerical index; the one_hot function turns such indices into one-hot vectors: npx.one_hot(np.array([0, 2]), len(vocab)) The shape of the minibatch we sample each time is (batch size, timestep). The one_hot function transforms such a minibatch into a 3-D tensor with the last dimension equal to the vocabulary size. We often transpose the input so that we obtain a (timestep, batch size, vocabulary size) output that fits into a sequence model more easily. X = np.arange(batch_size * num_steps).reshape(batch_size, num_steps) npx.one_hot(X.T, len(vocab)).shape (35, 32, 28) 8.5.2. Initializing the Model Parameters¶ Next, we initialize the model parameters. def get_params(vocab_size, num_hiddens, ctx): num_inputs = num_outputs = vocab_size def normal(shape): return np.random.normal(scale=0.01, size=shape, ctx=ctx) # Hidden layer parameters W_xh = normal((num_inputs, num_hiddens)) W_hh = normal((num_hiddens, num_hiddens)) b_h = np.zeros(num_hiddens, ctx=ctx) # Output layer parameters W_hq = normal((num_hiddens, num_outputs)) b_q = np.zeros(num_outputs, ctx=ctx) # Attach gradients params = [W_xh, W_hh, b_h, W_hq, b_q] for param in params: param.attach_grad() return params 8.5.3. RNN Model¶ First, we need an init_rnn_state function to return the hidden state at initialization. It returns an ndarray filled with 0 and with a shape of (batch size, number of hidden units). Using tuples makes it easier to handle situations where the hidden state contains multiple variables (e.g., when combining multiple layers in an RNN where each layer requires initializing). def init_rnn_state(batch_size, num_hiddens, ctx): return (np.zeros(shape=(batch_size, num_hiddens), ctx=ctx), ) The following rnn function defines how to compute the hidden state and output in a timestep.
The activation function here uses the \(\tanh\) function. As described in Section 4.1, the mean value of the \(\tanh\) function is 0 when the elements are evenly distributed over the real numbers. def rnn(inputs, state, params): # Inputs shape: (num_steps, batch_size, vocab_size) W_xh, W_hh, b_h, W_hq, b_q = params H, = state outputs = [] for X in inputs: H = np.tanh(np.dot(X, W_xh) + np.dot(H, W_hh) + b_h) Y = np.dot(H, W_hq) + b_q outputs.append(Y) return np.concatenate(outputs, axis=0), (H,) Now that we have all the functions defined, we create a class to wrap them and store the parameters. # Saved in the d2l package for later use class RNNModelScratch: def __init__(self, vocab_size, num_hiddens, ctx, get_params, init_state, forward): self.vocab_size, self.num_hiddens = vocab_size, num_hiddens self.params = get_params(vocab_size, num_hiddens, ctx) self.init_state, self.forward_fn = init_state, forward def __call__(self, X, state): X = npx.one_hot(X.T, self.vocab_size) return self.forward_fn(X, state, self.params) def begin_state(self, batch_size, ctx): return self.init_state(batch_size, self.num_hiddens, ctx) Checking the output shapes for one minibatch gives ((1120, 28), 1, (32, 512)). We can see that the output shape is (number steps \(\times\) batch size, vocabulary size), while the hidden state shape remains the same, i.e., (batch size, number of hidden units). 8.5.4. Prediction¶ The following function predicts the next num_predicts characters based on the prefix (a string containing several characters). # Saved in the d2l package for later use def predict_ch8(prefix, num_predicts, model, vocab, ctx): state = model.begin_state(batch_size=1, ctx=ctx) outputs = [vocab[prefix[0]]] def get_input(): return np.array([outputs[-1]], ctx=ctx).reshape(1, 1) for y in prefix[1:]: # Warm up the state with the prefix _, state = model(get_input(), state) outputs.append(vocab[y]) for _ in range(num_predicts): # Predict num_predicts steps Y, state = model(get_input(), state) outputs.append(int(Y.argmax(axis=1).reshape(1))) return ''.join([vocab.idx_to_token[i] for i in outputs]) We test the predict_ch8 function first. Given that we did not train the network, it will generate nonsensical predictions. We initialize it with the sequence traveller and have it generate 10 additional characters. predict_ch8('time traveller ', 10, model, vocab, ctx) 'time traveller iiiiiiiiii' 8.5.5. Gradient Clipping¶ For a sequence of length \(T\), we compute the gradients over these \(T\) timesteps in an iteration, which results in a chain of matrix-products with length \(\mathcal{O}(T)\) during backpropagation. As mentioned in Section 4.8, this might result in numerical instability, e.g., the gradients may either explode or vanish when \(T\) is large. Therefore, RNN models often need extra help to stabilize the training. Gradient clipping provides a quick fix for exploding gradients: while it does not entirely solve the problem, it is one of the many techniques to alleviate it.
# Saved in the d2l package for later use def grad_clipping(model, theta): if isinstance(model, gluon.Block): params = [p.data() for p in model.collect_params().values()] else: params = model.params norm = math.sqrt(sum((p.grad ** 2).sum() for p in params)) if norm > theta: for param in params: param.grad[:] *= theta / norm 8.5.6. Training¶ Let’s first define the function to train the model on one data epoch. It differs from the model training in Section 3.6 in three places: Different sampling methods for sequential data (independent sampling and sequential partitioning) will result in differences in the initialization of hidden states. We clip the gradients before updating the model parameters. This ensures that the model does not diverge even when gradients blow up at some point during the training process, and it effectively reduces the step size automatically. We use perplexity to evaluate the model. This ensures that sequences of different length are comparable. When consecutive sampling is used, we initialize the hidden state at the beginning of each epoch. Since the \(i^\mathrm{th}\) example in the next minibatch is adjacent to the current \(i^\mathrm{th}\) example, the next minibatch can use the current hidden state directly; we only detach the gradient so that we compute the gradients within a minibatch. When using random sampling, we need to re-initialize the hidden state for each iteration since each example is sampled with a random position. Same as the train_epoch_ch3 function in Section 3.6, we use a generalized updater, which can be either a Gluon trainer or a from-scratch implementation. # Saved in the d2l package for later use def train_epoch_ch8(model, train_iter, loss, updater, ctx, use_random_iter): state, timer = None, d2l.Timer() metric = d2l.Accumulator(2) # loss_sum, num_examples for X, Y in train_iter: if state is None or use_random_iter: # Initialize the state on the first iteration or when using random sampling state = model.begin_state(batch_size=X.shape[0], ctx=ctx) else: for s in state: s.detach() y = Y.T.reshape(-1) X, y = X.as_in_context(ctx), y.as_in_context(ctx) with autograd.record(): py, state = model(X, state) l = loss(py, y).mean() l.backward() grad_clipping(model, 1) updater(batch_size=1) # Since the mean was already taken metric.add(l * y.size, y.size) return math.exp(metric[0]/metric[1]), metric[1]/timer.stop() The training function again supports models implemented either from scratch or with Gluon.
# Saved in the d2l package for later use def train_ch8(model, train_iter, vocab, lr, num_epochs, ctx, use_random_iter=False): loss = gluon.loss.SoftmaxCrossEntropyLoss() if isinstance(model, gluon.Block): trainer = gluon.Trainer(model.collect_params(), 'sgd', {'learning_rate': lr}) def updater(batch_size): return trainer.step(batch_size) else: def updater(batch_size): return d2l.sgd(model.params, lr, batch_size) def predict(prefix): return predict_ch8(prefix, 50, model, vocab, ctx) for epoch in range(num_epochs): ppl, speed = train_epoch_ch8(model, train_iter, loss, updater, ctx, use_random_iter) print('Perplexity %.1f, %d tokens/sec on %s' % (ppl, speed, ctx)) print(predict('time traveller')) print(predict('traveller')) Now we can train a model. Since we only use \(10,000\) tokens in the dataset, the model needs more epochs to converge. num_epochs, lr = 500, 1 train_ch8(model, train_iter, vocab, lr, num_epochs, ctx) Perplexity 1.0, 37856 tokens/sec on gpu(0) time traveller it s against reason said filby what reason said traveller it s against reason said filby what reason said Finally, let’s check the results when using a random sampling iterator. train_ch8(model, train_iter, vocab, lr, num_epochs, ctx, use_random_iter=True) Perplexity 1.4, 37237 tokens/sec on gpu(0) time traveller came back andfilby s anecdote collapsed the thing traveller you can show black is white by argument said fil While implementing the above RNN model from scratch is instructive, it is not convenient. In the next section we will see how to improve significantly on the current model and how to make it faster and easier to implement. 8.5.7. Summary¶ Sequence models need state initialization for training, and the initialization differs between the sampling methods. When using sequential partitioning, we need to detach the gradients between minibatches. 8.5.8. Exercises¶ Modify the predict function so as to use sampling rather than picking the most likely next character. What happens? Bias the model towards more likely outputs, e.g., by sampling from \(q(w_t \mid w_{t-1}, \ldots, w_1) \propto p^\alpha(w_t \mid w_{t-1}, \ldots, w_1)\) for \(\alpha > 1\). Run the code in this section without clipping the gradient. What happens?
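The global-norm clipping rule used in this section — scale every gradient by theta/norm whenever the global norm exceeds theta — can be checked in isolation with plain NumPy. This is an illustrative sketch with hypothetical gradient values, not part of the d2l package:

```python
import math
import numpy as np

def clip_by_global_norm(grads, theta):
    """Scale every gradient in-place so the global L2 norm is at most theta."""
    norm = math.sqrt(sum(float((g ** 2).sum()) for g in grads))
    if norm > theta:
        for g in grads:
            g *= theta / norm
    return norm

grads = [np.array([3.0, 4.0]), np.array([12.0])]  # global norm = sqrt(9+16+144) = 13
old_norm = clip_by_global_norm(grads, theta=1.0)
new_norm = math.sqrt(sum(float((g ** 2).sum()) for g in grads))
print(old_norm, round(new_norm, 6))  # 13.0 1.0
```

Because every parameter's gradient is scaled by the same factor, the update direction is preserved; only the step size shrinks.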
https://d2l.ai/chapter_recurrent-neural-networks/rnn-scratch.html
Hey all, I am a novice in the world of Ruby and trying to do a simple search with the below program, but I am unable to run it and no error is thrown. Can somebody help me see where I am going wrong? require 'watir' include Watir require 'test/unit' class Test_google_Search < Test::Unit::TestCase def search test_site = '' $ie = IE.new $ie.goto(test_site) $ie.text_field(:name, "q").set("pickaxe") $ie.button(:name, "btnG").click assert($ie.contains_text("Programming Ruby") ) end end -Sachin S. Uplaonkar.
https://www.ruby-forum.com/t/unable-to-run-the-program/224965
GraphQL can be an incredibly useful tool for decoupling orchestration and schema for your back end implementation, be it APIs, direct database calls or even a simple in-memory store. One thing that presents itself fairly quickly, though, is the issue of tracking what is happening when your Graph queries are executed. Query Issues Adding an additional layer presents an entirely new suite of issues to address, the most common of which are described below. Number of IO Operations A single Graph query can look deceptively simple, but be configured to call many APIs, read / write to many areas of a data store or even call out to third party providers. This can occur due to an overly complicated back-end implementation, or simply from pulling too much data from the front end. Overall Duration (Performance) Often this can be a side-effect of the above, but one IO operation can be all it takes to drag a query down. That one API call which runs a poorly optimized SQL query can take 30+ seconds given the right conditions or data set. Errors Logging is incredibly helpful for tracking your errors, but statistics on the queries generating errors, and individual traces, are more complex to generate from logs. Monitoring Your Setup Apollo GraphQL is a very common community-driven GraphQL implementation, primarily for NodeJS. In addition to the core libraries, they offer a free (with paid additional functionality available) SaaS platform for monitoring your GraphQL implementation: Apollo Studio. This offers solutions to the above, as well as the following functionality: - Track schema changes (with notifications) - Explore your schema and run queries - Report on schema usage Integrating to Apollo Studio For users of the Apollo Server GraphQL implementation (for NodeJS), it's pretty straight-forward. For Java and Python implementations there are also third-party providers, but that's where support ends.
The link above also details how to create a custom integration and that's where this article picks up. This process involves: importing the protobuf schema, converting performance stats to Apollo trace format, signing the message and finally converting to a background process for batching. Generating Apollo Studio Classes for Protobuf There are a number of Protobuf implementations for .NET Core, but I like protobuf-net as it's a nice, clean, Apache 2.0 Licensed implementation. It is also supported by protogen, a great online generator that will output protobuf-net classes ready for use (for its CSharp profile). If you open the latest schema from the link here, you can simply paste into the generator. NOTE: At the time of writing, [(js_preEncoded)=true] isn't supported by the generator, and can be removed from the proto schema. Converting to Apollo Studio Format In order to get data in a suitable format for Apollo, you can enable Apollo Tracing enrichment of your responses in GraphQL.NET. What follows is a large code dump of how I put together a conversion system for these classes to those generated by the above: public class MetricsToTraceConverter { public Trace? CreateTrace(ExecutionResult result) { ApolloTrace? trace = result.Extensions != null && result.Extensions.ContainsKey("tracing") ? (ApolloTrace)result.Extensions["tracing"] : null; var resolvers = trace?.Execution.Resolvers? .OrderBy(x => string.Join(":", x.Path), new ConfigurationKeyComparer()) .ToArray(); var rootTrace = resolvers?.FirstOrDefault(x => x.Path.Count == 1); if (rootTrace == null && result.Errors == null) return null; int resolverIndex = 1; var rootErrors = result.Errors?.Where(x => x.Path != null && x.Path.Count() == 1).ToArray(); var rootNode = rootTrace != null && resolvers != null ? 
CreateNodes(rootTrace.Path, CreateNodeForResolver(rootTrace, rootErrors), resolvers, ref resolverIndex, GetSubErrors(rootTrace.Path, result.Errors?.ToArray())) : new Trace.Node(); if (rootTrace == null && result.Errors != null) { foreach (var executionError in result.Errors) rootNode.Errors.Add(CreateTraceError(executionError)); } return new Trace { StartTime = trace?.StartTime ?? DateTime.Now, EndTime = trace?.EndTime ?? DateTime.Now, DurationNs = (ulong)(trace?.Duration ?? 0), http = new Trace.Http { method = Trace.Http.Method.Post, StatusCode = result.Errors?.Any() == true ? (uint)HttpStatusCode.BadRequest : (uint)HttpStatusCode.OK }, Root = rootNode }; } private static Trace.Node CreateNodeForResolver(ApolloTrace.ResolverTrace resolver, ExecutionError[]? executionErrors) { var node = new Trace.Node { ResponseName = resolver.FieldName, Type = resolver.ReturnType, StartTime = (ulong)resolver.StartOffset, EndTime = (ulong)(resolver.StartOffset + resolver.Duration), ParentType = resolver.ParentType }; if (executionErrors != null) { foreach (var executionError in executionErrors) node.Errors.Add(CreateTraceError(executionError)); } return node; } private static Trace.Error CreateTraceError(ExecutionError executionError) { var error = new Trace.Error { Json = JsonConvert.SerializeObject(executionError), Message = executionError.Message }; if (executionError.Locations != null) error.Locations.AddRange(executionError.Locations.Select(x => new Trace.Location { Column = (uint)x.Column, Line = (uint)x.Line })); return error; } private static ExecutionError[]? GetSubErrors(List<object> path, ExecutionError[]? errors) { return errors ?.Where(x => x.Path != null && x.Path.Count() > path.Count && x.Path.Take(path.Count).SequenceEqual(path)) .ToArray(); } private static Trace.Node CreateNodes(List<object> path, Trace.Node node, ApolloTrace.ResolverTrace[] resolvers, ref int resolverIndex, ExecutionError[]? 
executionErrors) { bool isArray = node.Type.StartsWith("[") && node.Type.TrimEnd('!').EndsWith("]"); if (isArray) { if (resolverIndex < resolvers.Length) { var resolver = resolvers[resolverIndex]; while (resolver.Path != null && resolver.Path.Count == path.Count + 2 && resolver.Path.Take(path.Count).SequenceEqual(path)) { var index = (int)(resolver.Path[^2]); var subPath = path.Concat(new object[] {index}).ToList(); var previousIndex = resolverIndex; node.Childs.Add(CreateNodes(subPath, new Trace.Node { Index = Convert.ToUInt32(index), ParentType = node.Type, Type = node.Type.TrimStart('[').TrimEnd('!').TrimEnd(']') }, resolvers, ref resolverIndex, GetSubErrors(subPath, executionErrors))); // Avoid infinite loop if the worst happens and we don't match any items for this index (HOW?!?!?) if (resolverIndex == previousIndex) resolverIndex++; if (resolverIndex >= resolvers.Length) break; resolver = resolvers[resolverIndex]; } } } else { if (resolverIndex < resolvers.Length) { var resolver = resolvers[resolverIndex]; while (resolver.Path != null && resolver.Path.Count == path.Count + 1 && resolver.Path.Take(path.Count).SequenceEqual(path)) { var errors = executionErrors?.Where(x => x.Path.SequenceEqual(resolver.Path)).ToArray(); resolverIndex++; node.Childs.Add(CreateNodes(resolver.Path, CreateNodeForResolver(resolver, errors), resolvers, ref resolverIndex, GetSubErrors(resolver.Path, executionErrors))); if (resolverIndex >= resolvers.Length) break; resolver = resolvers[resolverIndex]; } } } return node; } } So what is all of this doing? Here's an overview: - Retrieve the tracing data added by enabling "Apollo tracing" enrichment of results from the execution result. - Order the resolvers hierarchically (ConfigurationKeyComparer does this beautifully), so that we can consume them in order and avoid expensive full scans of the resolver traces. - Find the root trace (should be the first item). - Collect all root errors from the traces. 
- Construct a node hierarchy from node paths - this is fairly complicated, but it's easier to see how this works using a sample of data (view one at runtime). - If there are no traces, but there are errors, add those to the root node. - Return a Trace object (from the protobuf-generated classes) for queuing to send in a batch. Up Next In the next article, we'll look at how to generate and send the full report class to Apollo Studio.
https://dev.to/mattjhosking/integrating-apollo-studio-with-graphql-for-net-part-1-4h5f
CC-MAIN-2021-31
en
refinedweb
Available with Standard or Advanced license. The goal of upgrading an enterprise geodatabase is to update the geodatabase system tables, stored procedures, types, and functions to take advantage of new functionality and bug fixes. Install a new version of ArcGIS Pro or ArcGIS Server. Note that there is no formal mechanism to downgrade a geodatabase to a previous version. If, after upgrading to a newer version, you want to downgrade the geodatabase, you must restore the old database from a backup file. The following is a checklist of steps to complete before you upgrade your geodatabase: - Read the SAP HANA database requirements for ArcGIS to confirm that Esri supports the SAP HANA and ArcGIS version combination you want to use. - Check to see if your geodatabase can be upgraded. To do this, install the ArcGIS Pro or ArcGIS Server version you want to move to onto one machine. - To check from ArcGIS Pro, connect to the geodatabase in the Catalog pane and open Database Properties. A message appears under Upgrade Status indicating whether an upgrade is possible. - To check from ArcGIS Server, use the ArcPy Describe function to determine if you can upgrade the geodatabase. The following is an example of creating a connection to the geodatabase and checking if the geodatabase can be upgraded. Note that you must connect as the sde user to run this. If False is returned, you can upgrade the geodatabase. Proceed with the remaining steps. If True is returned, you do not need to upgrade. Do not proceed with subsequent steps. # Open Python. cd /arcgis/server/tools ./python # Create a connection to the geodatabase. arcpy.CreateDatabaseConnection_management("/usr/tmp/", "egdb_connection.sde", "SAP HANA", sys.argv[1], "DATABASE_AUTH", "sde", sys.argv[2], "SAVE_USERNAME") # Import ArcPy and check the geodatabase release.
import arcpy isCurrent = arcpy.Describe('/usr/tmp/egdb_connection.sde').currentRelease print isCurrent - Confirm the sde user has catalog read privileges in the database. - Create a backup of the database. - Remove any custom functionality you may have added to the geodatabase system tables outside ArcGIS. The upgrade procedure cannot accommodate customizations you make to the system tables. If such customizations prevent the alteration of a system table's schema, the upgrade will fail. - Confirm there are no other connections to the geodatabase you are upgrading. You can now upgrade your geodatabase. Upgrade the geodatabase You can use the Upgrade Geodatabase tool in ArcGIS Pro or a Python script run on an ArcGIS Pro or ArcGIS Server machine to upgrade your geodatabase. Use the Upgrade Geodatabase tool Open the Upgrade Geodatabase geoprocessing tool from one of the following: - The Geodatabase Administration toolset in the Data Management toolbox. You must connect to the geodatabase as the sde user to run the prerequisite check and upgrade the geodatabase. The prerequisite check detects other active connections to the geodatabase, determines whether you connected as the sde user, and confirms that the sde user has sufficient privileges to upgrade the geodatabase (catalog read). Use a Python script Copy the following script, paste it into a text file, and save it. You can then run the script with site-specific information at the command line.
""" Name: upgrade_gdb_for_sap_hana.py Type upgrade_gdb_for_sap_hana.py -h or upgrade_gdb_sap_hana.py --help for usage Author: Esri """ # Import system modules import arcpy, os, optparse, sys # Define usage and version parser = optparse.OptionParser(usage = "usage: %prog [Options]", version="%prog 1.0 for " + arcpy.GetInstallInfo()['Version'] ) #Define help and options parser.add_option ("-i", dest="data_source", type="string", default="", help="SAP HANA ODBC data source name") = "DATABASE_AUTH" username = options.User.lower() password = options.Password do_upgrade = options.Upgrade database = "" database_type = "SAP HANA" instance = options.data_source # Get the current product license product_license=arcpy.ProductInfo() # Checks required license level to upgrade if product_license.upper() == "ARCVIEW" or product_license.upper() == 'ENGINE': print ("\n" + product_license + " license found!" + " Enterprise geodatabase upgrade requires an ArcGIS for Desktop Standard or Advanced, ArcGIS Engine with the Geodatabase Update extension, or ArcGIS for Server license.") sys.exit("Re-authorize ArcGIS before upgrading.") else: print ("\n" + product_license + " license available! Continuing to upgrade...") arcpy.AddMessage("+++++++++") # Local variables Conn_File_NameT = instance + "_" +) For example, if you saved the text file as gdbupgrade, your SAP HANA data source is named mydata, and your sde password is mysdepassword, type the following at a command prompt: gdbupgrade --DBMS SAP HANA -i mydata -u sde -p mysdepassword --upgrade TRUE
https://pro.arcgis.com/en/pro-app/latest/help/data/geodatabases/manage-saphana/upgrade-gdb-sap-hana.htm
Secret Key Encryption¶ Example¶ import nacl.secret import nacl.utils # This must be kept secret, this is the combination to your safe key = nacl.utils.random(nacl.secret.SecretBox.KEY_SIZE) # This is your safe, you can use it to encrypt or decrypt messages box = nacl.secret.SecretBox(key) # This is our message to send, it must be a bytestring as SecretBox will # treat it as just a binary blob of data. message = b"The president will be exiting through the lower levels" PyNaCl can automatically generate a random nonce for us, making the encryption very simple: # Encrypt our message, it will be exactly 40 bytes longer than the # original message as it stores authentication information and the # nonce alongside it. encrypted = box.encrypt(message) assert len(encrypted) == len(message) + box.NONCE_SIZE + box.MACBYTES If you want to provide your own nonce, generate one and pass it to encrypt: nonce = nacl.utils.random(nacl.secret.SecretBox.NONCE_SIZE) encrypted = box.encrypt(message, nonce) If you need to get the ciphertext and the authentication data without the nonce, you can get the ciphertext attribute of the EncryptedMessage instance returned by encrypt(): nonce = nacl.utils.random(nacl.secret.SecretBox.NONCE_SIZE) encrypted = box.encrypt(message, nonce) # since we are transmitting the nonce by some other means, # we just need to get the ciphertext and authentication data ctext = encrypted.ciphertext # ctext is just nacl.secret.SecretBox.MACBYTES longer # than the original message assert len(ctext) == len(message) + box.MACBYTES Finally, the message is decrypted (regardless of how the nonce was generated): # Decrypt our message, an exception will be raised if the encryption was # tampered with or there was otherwise an error. plaintext = box.decrypt(encrypted) print(plaintext.decode('utf-8')) The president will be exiting through the lower levels Requirements¶ Key¶ The 32-byte key given to SecretBox must be kept secret. It is the combination to your “safe” and anyone with this key will be able to decrypt the data, or encrypt new data.
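The "exactly 40 bytes longer" overhead quoted in the example above is just the nonce plus the authenticator, which can be checked with a trivial computation (the constant values here come from the assertions in the example):

```python
# Sizes used by SecretBox, per the assertions in the example above
NONCE_SIZE = 24  # nacl.secret.SecretBox.NONCE_SIZE
MACBYTES = 16    # nacl.secret.SecretBox.MACBYTES

# Total per-message overhead when the nonce is stored alongside the ciphertext
overhead = NONCE_SIZE + MACBYTES
print(overhead)  # 40
```

When only the ciphertext attribute is transmitted (with the nonce sent separately), the overhead drops to just the 16-byte authenticator.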
Nonce¶ The 24-byte nonce given to encrypt and decrypt must never be reused for a particular key; a good way to generate one is random() with SecretBox.NONCE_SIZE. Reference¶ - class nacl.secret. SecretBox(key, encoder)[source]¶ The SecretBox class encrypts and decrypts messages using the given secret key. The ciphertexts generated by SecretBox include a 16-byte authenticator, which is checked as part of the decryption. encrypt(plaintext, nonce, encoder)[source]¶ Encrypts the plaintext using the given nonce; if the nonce is omitted, a random one is generated. Never reuse a nonce with the same key: give your nonces a different prefix, or have one side use an odd counter and one an even counter. Just make sure they are different. - Parameters - - Returns An instance of EncryptedMessage. decrypt(ciphertext, nonce, encoder)[source]¶ Decrypts the ciphertext using the nonce (explicitly, when passed as a parameter or implicitly, when omitted, as part of the ciphertext) and returns the plaintext message. Algorithm details¶ - Encryption - XSalsa20 stream cipher - Authentication - Poly1305 MAC
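The odd/even-counter advice above can be sketched in plain Python (no PyNaCl required): each side packs its counter into a 24-byte nonce and increments by two, so the two sequences can never collide. The helper below is illustrative, not part of PyNaCl:

```python
NONCE_SIZE = 24  # matches nacl.secret.SecretBox.NONCE_SIZE

def counter_nonce(counter):
    """Pack an integer counter into a 24-byte nonce (big-endian, zero-padded)."""
    return counter.to_bytes(NONCE_SIZE, "big")

# One side uses odd counters, the other even ones
client_nonces = [counter_nonce(n) for n in range(1, 10, 2)]
server_nonces = [counter_nonce(n) for n in range(0, 10, 2)]

assert all(len(n) == NONCE_SIZE for n in client_nonces + server_nonces)
assert not set(client_nonces) & set(server_nonces)  # the two sequences never overlap
```

Each nonce would then be passed as the second argument to box.encrypt(); the only property that matters is that no (key, nonce) pair is ever used twice.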
https://pynacl.readthedocs.io/en/latest/secret/
Table of Contents - What is Airport Info API? - How does the Airport Info API work? - Target Audience - How to Connect to Airport Info API Tutorial – Step by Step - Explanation of Airport Info API Endpoints - 1. Airport - How to Use Airport Info API with Python - How to Use Airport Info API with PHP - How to Use Airport Info API with Ruby - How to Use Airport Info API with Javascript - Benefits - Alternatives to Airports Info API - Summary Building an application for the airline industry is hard to imagine without access to information about airports worldwide. There was a time when developers had to start from scratch and maintain their own databases to build applications and services for the travel industry. Today, businesses are witnessing the rising trend of using external APIs instead of rebuilding everything from scratch. One such API is the Airport Info API, which contains a comprehensive database of all the airports worldwide. In addition, the API includes detailed information about any given airport, such as full name, website, IATA, and ICAO codes. It's a must-have API for developers aiming to build tracking applications, because it also provides geolocation data, such as latitude and longitude information about the airports. The API requires an understanding of airport codes as it uses both ICAO and IATA codes to locate an airport. Below is a quick rundown of these codes in case you haven't used them yet. - IATA (International Air Transport Association) airport codes are three-letter codes, which are also used to identify an airport uniquely. They are much more widely seen, appearing on airline tickets, luggage tags, travel itineraries, etc. For instance, LAX is the IATA code for Los Angeles International Airport. - ICAO (International Civil Aviation Organization) focuses on global synchronization of civil aviation regulations and has assigned international airports four-letter codes, often called ICAO codes.
They are primarily used for ‘official' purposes and cover more airports, including many smaller airports. KJFK is the ICAO code for John F. Kennedy International Airport. What is Airport Info API? Airport Info API is a RESTful API that allows users to access information about any airport using its IATA and ICAO codes. The API returns valuable information about any given airport, such as the following: - IATA and ICAO code - Name - Location - Street - City - Country - State - Country_ISO - Phone - Latitude - Longitude - UCT - Website Unlike its competitors, this API allows users to search for an airport using either its IATA or ICAO code, ensuring maximum coverage of all the airports across the world. How does the Airport Info API work? The API follows REST principles and provides a single endpoint that can be executed over HTTPS using the language and platform of your choice. The client issues a GET request to fetch the details about any given airport by providing either the IATA or ICAO code as a parameter. The API server then responds with the details as a JSON-formatted string, as illustrated in the diagram below. Target Audience Travel Agencies Airport Info API can be consumed by developers aiming to build applications for travel agencies. It eliminates the need to maintain an up-to-date database about airports across the world. Online Flight Booking Applications Online flight booking apps can also integrate Airport Info API to show quick and accurate airport information. Flight Tracker Applications Airport Info API provides geolocation data about the airports, which can help flight tracker applications plot airport locations on a live map. Aviation Professionals Aviation professionals can also use Airport Info API to quickly reference airport data using an IATA or ICAO code. Airport Websites Directory Airport website directories and search engines can integrate Airport Info API to list popular airports and their official websites.
This way, they can help their customers find more information about the airport, such as flight information, parking availability, and taxi wait times.

How to Connect to Airport Info API Tutorial – Step by Step
RapidAPI makes it super easy to connect to Airport Info API using a variety of SDKs. Below is the step-by-step tutorial.

Step 1: Sign up on RapidAPI
If you don't have an account on RapidAPI, you can create one by visiting this URL. You can also sign up using Google, GitHub, or Facebook logins.

Step 2: Search the API Marketplace
Once registration is completed, you will be redirected to the dashboard. Navigate to "API Marketplace" and search for "Airport Info API" (as shown in the image below):

Step 3: Test the API
The final step is to test the API. To do this, navigate to the 'Endpoints' tab and select the available endpoint 'Airport'. You may then choose the code snippet for your favorite programming language. The RapidAPI console will generate the necessary code required to execute the endpoint. You will notice that the API key has been generated automatically by the RapidAPI console.

Finally, to execute this endpoint, provide an IATA or ICAO code and click on the 'Test Endpoint' button to view the API response. We provided the 'LAX' IATA code, and the API successfully returned the response, containing details about Los Angeles International Airport.

Note: In case of any failure, the Airport Info API returns a descriptive error message. If you don't provide an IATA or ICAO code, the API returns the following error in the JSON response. In case of an invalid IATA or ICAO code, the API returns the following error message.

Explanation of Airport Info API Endpoints
1. Airport
This endpoint retrieves airport information using an IATA or ICAO code. It returns the data as JSON-formatted text. It has two optional parameters:
- IATA (three-letter airport code, such as LAX, JFK).
- ICAO (four-letter airport code, such as KLAX, KJFK).
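Since the two parameters differ only in shape (three letters vs. four), they are easy to tell apart programmatically. Here is a small hypothetical helper, not part of the API itself, that classifies a code by its length:

```python
# Hypothetical helper (not part of the Airport Info API): classify a code
# by its shape — IATA codes are three letters, ICAO codes are four.
def classify_airport_code(code):
    code = code.strip().upper()
    if len(code) == 3 and code.isalpha():
        return "IATA"
    if len(code) == 4 and code.isalpha():
        return "ICAO"
    return "unknown"

print(classify_airport_code("lax"))   # → IATA
print(classify_airport_code("KJFK"))  # → ICAO
```

A check like this can be used client-side to decide which query parameter to send.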
In the below example, we executed the endpoint for John F. Kennedy Airport using its ICAO code – KJFK. As a result, the API successfully returned the requested information.

How to Use Airport Info API with Python
Please make sure that you have Python installed. The code snippet can be generated using several client libraries available for Python, such as Unirest and the default http.client. The below example calls the 'Airport' endpoint using the http.client library to fetch the information about KJFK airport.

import http.client

conn = http.client.HTTPSConnection("airport-info.p.rapidapi.com")

headers = {
    'x-rapidapi-key': "ce19d0164fmsh3d383efc0e85ce5p16dcb1jsnb1a4a3c79541",
    'x-rapidapi-host': "airport-info.p.rapidapi.com"
}

conn.request("GET", "/airport?icao=KJFK", headers=headers)

res = conn.getresponse()
data = res.read()

print(data.decode("utf-8"))

How to Use Airport Info API with PHP
Before running the sample code shared below, please make sure that you have PHP installed. You may follow this guide on how to install it correctly. The sample code also executes the 'Airport' endpoint; however, it uses both the ICAO and IATA codes for JFK Airport. The code snippet is generated using the cURL library.

<?php

$curl = curl_init();

curl_setopt_array($curl, [
    CURLOPT_URL => "https://airport-info.p.rapidapi.com/airport?iata=JFK&icao=KJFK",
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_HTTPHEADER => [
        "x-rapidapi-host: airport-info.p.rapidapi.com",
        "x-rapidapi-key: ce19d0164fmsh3d383efc0e85ce5p16dcb1jsnb1a4a3c79541"
    ],
]);

$response = curl_exec($curl);
$err = curl_error($curl);

curl_close($curl);

if ($err) {
    echo "cURL Error #:" . $err;
} else {
    echo $response;
}

How to Use Airport Info API with Ruby
The below code snippet illustrates how to consume the Airport Info API using Ruby.

require 'uri'
require 'net/http'

url = URI("https://airport-info.p.rapidapi.com/airport?iata=JFK&icao=KJFK")

http = Net::HTTP.new(url.host, url.port)
http.use_ssl = true

request = Net::HTTP::Get.new(url)
request["x-rapidapi-key"] = 'ce19d0164fmsh3d383efc0e85ce5p16dcb1jsnb1a4a3c79541'
request["x-rapidapi-host"] = 'airport-info.p.rapidapi.com'

response = http.request(request)
puts response.read_body

How to Use Airport Info API with Javascript
Please make sure that you have set up a JavaScript environment. You can use a text editor such as Atom, Visual Studio Code, Notepad++, etc. The following code snippet illustrates how to invoke the Airport endpoint using both IATA and ICAO codes with the fetch method.
fetch("", {
    "method": "GET",
    "headers": {
        "x-rapidapi-key": "ce19d0164fmsh3d383efc0e85ce5p16dcb1jsnb1a4a3c79541",
        "x-rapidapi-host": "airport-info.p.rapidapi.com"
    }
})
.then(response => {
    console.log(response);
})
.catch(err => {
    console.error(err);
});

Benefits
Speed: Airport Info API can provide the results in just 170 milliseconds, making it ideal for high-performance applications.
Cost-effective: The API is completely free and can save you a lot of the time and money needed to build similar functionality on your own.
Geolocation: One of the prominent features of Airport Info API is its ability to provide accurate geolocation data, which can be consumed by various applications that plot location markers on live maps.
Search Engine Friendly: The API is search engine friendly. It allows search bots and web scrapers to find more information through the official website of any given airport.
Seamless Integration: The API is highly flexible and can be seamlessly integrated with various platforms, as it follows REST API standards.

Alternatives to Airports Info API
Following are similar APIs available on RapidAPI if you are looking for alternatives:
- AeroDatabox API: This API provides flight data for small travel or aviation applications, such as flight status, airports, runways, aircraft information, and more.
- AirportIX API: Another powerful API that allows users to fetch information about airports, airlines, planes, routes, etc.
- World Airports: A paid API that can find the list of airports matching a search string using airport name, code, or location.
- World IATA Airports API: A freemium API that can only find airport lists and their details using IATA codes.

Summary
This article explored Airport Info API in detail. It explained how this API can be integrated using various languages such as Python, PHP, Ruby, and JavaScript. It also highlighted its target audience, endpoints, and benefits.
https://rapidapi.com/blog/airport-info-api-with-python-php-ruby-and-javascript-examples/
I'm working on a library that allows users to input arbitrary expressions. My library then compiles those expressions, as part of a larger expression, into a delegate. Now, for still unknown reasons, compiling the expression with Compile sometimes/often results in code that is far slower than it would be if it weren't a compiled expression. I asked a question about this before, and one workaround was to not use Compile, but CompileToMethod, and create a static method on a new type in a new dynamic assembly. That works and the code is fast. But users can input arbitrary expressions, and it turns out that if the user calls a non-public function or accesses a non-public field in the expression, it throws a System.MethodAccessException (in the case of a non-public method) when the delegate is invoked. What I could probably do here is create a new ExpressionVisitor that checks whether the expression accesses anything non-public and use the slower Compile in those cases, but I'd rather have the dynamic assembly somehow get the rights to access the non-public members. Or find out if there's anything I can do about Compile being slower (sometimes).
The full code to reproduce this problem:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Linq.Expressions;
using System.Reflection;
using System.Reflection.Emit;

namespace DynamicAssembly
{
    public class Program
    {
        private static int GetValue()
        {
            return 1;
        }

        public static int GetValuePublic()
        {
            return 1;
        }

        public static int Foo;

        static void Main(string[] args)
        {
            Expression<Func<int>> expression = () => 10 + GetValue();
            Foo = expression.Compile()();
            Console.WriteLine("This works, value: " + Foo);

            Expression<Func<int>> expressionPublic = () => 10 + GetValuePublic();
            var compiledDynamicAssemblyPublic = (Func<int>)CompileExpression(expressionPublic);
            Foo = compiledDynamicAssemblyPublic();
            Console.WriteLine("This works too, value: " + Foo);

            var compiledDynamicAssemblyNonPublic = (Func<int>)CompileExpression(expression);
            Console.WriteLine("This crashes");
            Foo = compiledDynamicAssemblyNonPublic();
        }

        static Delegate CompileExpression(LambdaExpression expression)
        {
            var assemblyBuilder = AppDomain.CurrentDomain.DefineDynamicAssembly(
                new AssemblyName("MyAssembly" + Guid.NewGuid().ToString("N")),
                AssemblyBuilderAccess.Run);
            var moduleBuilder = assemblyBuilder.DefineDynamicModule("Module");
            var typeBuilder = moduleBuilder.DefineType("MyType", TypeAttributes.Public);
            var methodBuilder = typeBuilder.DefineMethod("MyMethod",
                MethodAttributes.Public | MethodAttributes.Static);

            expression.CompileToMethod(methodBuilder);

            var resultingType = typeBuilder.CreateType();
            var function = Delegate.CreateDelegate(expression.Type,
                resultingType.GetMethod("MyMethod"));
            return function;
        }
    }
}

The problem is not permissions, because there is no permission that can allow you to access a non-public field or member of another class without reflection. This is analogous to the situation where you compile two non-dynamic assemblies and one assembly calls a public method in the second assembly.
Then if you change the method to private without recompiling the first assembly, the first assembly's call will now fail at runtime. In other words, the expression in your dynamic assembly is being compiled into an ordinary method call, which it doesn't have permission to make, any more than you do from another class, even in the same assembly. Since no permission can solve your problem, you might be able to transform non-public field and method references into subexpressions that use reflection. Here is an example taken from your test case.

This fails:

Expression<Func<int>> expression = () => 10 + GetValue();

but this will succeed:

Expression<Func<int>> expression = () => 10 + (int)typeof(Program)
    .GetMethod("GetValue", BindingFlags.Static | BindingFlags.NonPublic)
    .Invoke(null, null);

Since this does not crash with an exception, you can see that your dynamic assembly does have reflection permission and it can access the private method; it just can't do it using the ordinary method call that CompileToMethod results in.

If the non-dynamic assembly is built by you, you can actually include an InternalsVisibleTo attribute for the dynamic assembly (this even works with a strong name). That would allow using internal members, which may be enough in your case. To get an idea, here's an example which shows how to enable the dynamic assembly of Moq to use internal stuff from another assembly.

If this approach is not sufficient, I'd go with a combination of Rick's and Miguel's suggestions: create "proxy" DynamicMethods for each invocation of a non-public member and change the expression tree so that they are used instead of the original invocations.
https://expressiontree-tutorial.net/knowledge-base/5758999/-net--accessing-non-public-members-from-a-dynamic-assembly
NAME
recv, recvfrom, recvmsg - receive a message from a socket

SYNOPSIS
#include <sys/socket.h>

ssize_t recv(int sockfd, void *buf, size_t len, int flags);
ssize_t recvfrom(int sockfd, void *restrict buf, size_t len, int flags, struct sockaddr *restrict src_addr, socklen_t *restrict addrlen);
ssize_t recvmsg(int sockfd, struct msghdr *msg, int flags);

DESCRIPTION
The recv(), recvfrom(), and recvmsg() calls are used to receive messages from a socket. They may be used to receive data on both connectionless and connection-oriented sockets.

The flags argument
The flags argument is formed by ORing one or more of the following values:

- MSG_CMSG_CLOEXEC (recvmsg() only; since Linux 2.6.23) - Set the close-on-exec flag for the file descriptor received via a UNIX domain file descriptor using the SCM_RIGHTS operation (described in unix(7)). This flag is useful for the same reasons as the O_CLOEXEC flag of open(2).
- MSG_DONTWAIT (since Linux 2.2) - Enables nonblocking operation; if the operation would block, the call fails with the error EAGAIN or EWOULDBLOCK. This provides behavior similar to setting the O_NONBLOCK flag, but as a per-call option rather than a setting on the open file description.
- MSG_ERRQUEUE (since Linux 2.2) - This flag specifies that queued errors should be received from the socket error queue; the error is passed in an ancillary message, and the MSG_ERRQUEUE flag is set in the msghdr. After an error has been passed, the pending socket error is regenerated based on the next queued error and will be passed on the next socket operation.
- MSG_TRUNC (since Linux 2.2) - For raw (AF_PACKET), Internet datagram (since Linux 2.4.27/2.6.8), netlink (since Linux 2.6.22), and UNIX datagram (since Linux 3.4) sockets: return the real length of the packet or datagram, even when it was longer than the passed buffer.

For further information on the use of ancillary data in various socket domains, see unix(7) and ip(7).

- MSG_CTRUNC - indicates that some control data was discarded due to lack of space in the buffer for ancillary data.
- MSG_OOB - is returned to indicate that expedited or out-of-band data was received.
- MSG_ERRQUEUE - indicates that no data was received but an extended error from the socket error queue.

RETURN VALUE
These calls return the number of bytes received, or -1 if an error occurred. In the event of an error, errno is set to indicate the error.

ERRORS
These are some standard errors generated by the socket layer.
Additional errors may be generated and returned from the underlying protocol modules; see their manual pages.

- EAGAIN or EWOULDBLOCK - The socket is marked nonblocking and the receive operation would block, or a receive timeout had been set and the timeout expired before data was received. POSIX.1 allows either error to be returned for this case, and does not require these constants to have the same value, so a portable application should check for both possibilities.
- EBADF - The argument sockfd is an invalid file descriptor.
- ECONNREFUSED - A remote host refused to allow the network connection (typically because it is not running the requested service).
- EFAULT - The receive buffer pointer(s) point outside the process's address space.
- EINTR - The receive was interrupted by delivery of a signal before any data was available; see signal(7).
- EINVAL - Invalid argument passed.
- ENOMEM - Could not allocate memory for recvmsg().
- ENOTCONN - The socket is associated with a connection-oriented protocol and has not been connected (see connect(2) and accept(2)).
- ENOTSOCK - The file descriptor sockfd does not refer to a socket.

CONFORMING TO
POSIX.1-2001, POSIX.1-2008, 4.4BSD (these interfaces first appeared in 4.2BSD). POSIX.1 describes only the MSG_OOB, MSG_PEEK, and MSG_WAITALL flags.

NOTES
According to POSIX.1, the msg_controllen field of the msghdr structure should be typed as socklen_t, and the msg_iovlen field should be typed as int, but glibc currently types both as size_t.

See recvmmsg(2) for information about a Linux-specific system call that can be used to receive multiple datagrams in a single call.
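As an illustration of the semantics above, the following Python sketch (assuming a Linux host, where MSG_DONTWAIT and MSG_PEEK are exposed through the socket module) shows the nonblocking-failure and peek behavior on a connected socket pair:

```python
import errno
import socket

# A connected socket pair stands in for any stream socket.
a, b = socket.socketpair()

# MSG_DONTWAIT makes this single call nonblocking: with nothing queued,
# recv() fails with EAGAIN/EWOULDBLOCK instead of blocking.
try:
    a.recv(1024, socket.MSG_DONTWAIT)
except BlockingIOError as e:
    assert e.errno in (errno.EAGAIN, errno.EWOULDBLOCK)

# MSG_PEEK returns queued data without removing it from the receive queue,
# so a subsequent recv() sees the same bytes again.
b.send(b"hello")
peeked = a.recv(1024, socket.MSG_PEEK)
data = a.recv(1024)
print(peeked == data == b"hello")  # → True

a.close()
b.close()
```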
https://man.archlinux.org/man/recv.2.en
What is torch.nn really?¶
by Jeremy Howard, fast.ai. Thanks to Rachel Thomas and Francisco Ingham.

We recommend running this tutorial as a notebook, not a script. To download the notebook (.ipynb) file, click the link at the top of the page.

PyTorch provides the elegantly designed modules and classes torch.nn, torch.optim, Dataset, and DataLoader to help you create and train neural networks. In order to fully utilize their power and customize them for your problem, you need to really understand exactly what they're doing. To develop this understanding, we will first train a basic neural net on the MNIST data set without using any features from these models; we will initially only use the most basic PyTorch tensor functionality. Then, we will incrementally add one feature from torch.nn, torch.optim, Dataset, or DataLoader at a time, showing exactly what each piece does, and how it works to make the code either more concise, or more flexible.

This tutorial assumes you already have PyTorch installed, and are familiar with the basics of tensor operations. (If you're familiar with Numpy array operations, you'll find the PyTorch tensor operations used here nearly identical).

MNIST data setup¶
We will use the classic MNIST dataset, which consists of black-and-white images of hand-drawn digits (between 0 and 9).

We will use pathlib for dealing with paths (part of the Python 3 standard library), and will download the dataset using requests. We will only import modules when we use them, so you can see exactly what's being used at each point.

from pathlib import Path
import requests

DATA_PATH = Path("data")
PATH = DATA_PATH / "mnist"
PATH.mkdir(parents=True, exist_ok=True)

URL = ""
FILENAME = "mnist.pkl.gz"

if not (PATH / FILENAME).exists():
    content = requests.get(URL + FILENAME).content
    (PATH / FILENAME).open("wb").write(content)

This dataset is in numpy array format, and has been stored using pickle, a python-specific format for serializing data.
import pickle import gzip with gzip.open((PATH / FILENAME).as_posix(), "rb") as f: ((x_train, y_train), (x_valid, y_valid), _) = pickle.load(f, encoding="latin-1") Each image is 28 x 28, and is being stored as a flattened row of length 784 (=28x28). Let’s take a look at one; we need to reshape it to 2d first. from matplotlib import pyplot import numpy as np pyplot.imshow(x_train[0].reshape((28, 28)), cmap="gray") print(x_train.shape) Out: (50000, 784) PyTorch uses torch.tensor, rather than numpy arrays, so we need to convert our data. import torch x_train, y_train, x_valid, y_valid = map( torch.tensor, (x_train, y_train, x_valid, y_valid) ) n, c = x_train.shape print(x_train, y_train) print(x_train.shape) print(y_train.min(), y_train.max()) Out: tensor([[0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.], ..., [0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.]]) tensor([5, 0, 4, ..., 8, 4, 8]) torch.Size([50000, 784]) tensor(0) tensor(9) Neural net from scratch (no torch.nn)¶ Let’s first create a model using nothing but PyTorch tensor operations. We’re assuming you’re already familiar with the basics of neural networks. (If you’re not, you can learn them at course.fast.ai). PyTorch provides methods to create random or zero-filled tensors, which we will use to create our weights and bias for a simple linear model. These are just regular tensors, with one very special addition: we tell PyTorch that they require a gradient. This causes PyTorch to record all of the operations done on the tensor, so that it can calculate the gradient during back-propagation automatically! For the weights, we set requires_grad after the initialization, since we don’t want that step included in the gradient. (Note that a trailing _ in PyTorch signifies that the operation is performed in-place.) Note We are initializing the weights here with Xavier initialisation (by multiplying with 1/sqrt(n)). 
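A quick NumPy sketch (illustrative names only, not part of the MNIST code) of why this scaling is used: dividing the random weights by sqrt(fan_in) keeps the variance of the layer's pre-activations near 1, regardless of fan-in.

```python
import numpy as np

# Xavier-style scaling: weights drawn as N(0, 1) / sqrt(fan_in) keep the
# variance of x @ w on the order of 1, independent of the fan-in.
rng = np.random.default_rng(0)
fan_in = 784
w_init = rng.standard_normal((fan_in, 10)) / np.sqrt(fan_in)
x_demo = rng.standard_normal((1000, fan_in))

v = float((x_demo @ w_init).var())
print(abs(v - 1.0) < 0.2)  # → True
```

Without the 1/sqrt(fan_in) factor, the variance would instead grow linearly with fan_in (here, to roughly 784).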
import math weights = torch.randn(784, 10) / math.sqrt(784) weights.requires_grad_() bias = torch.zeros(10, requires_grad=True) Thanks to PyTorch’s ability to calculate gradients automatically, we can use any standard Python function (or callable object) as a model! So let’s just write a plain matrix multiplication and broadcasted addition to create a simple linear model. We also need an activation function, so we’ll write log_softmax and use it. Remember: although PyTorch provides lots of pre-written loss functions, activation functions, and so forth, you can easily write your own using plain python. PyTorch will even create fast GPU or vectorized CPU code for your function automatically. def log_softmax(x): return x - x.exp().sum(-1).log().unsqueeze(-1) def model(xb): return log_softmax(xb @ weights + bias) In the above, the @ stands for the dot product operation. We will call our function on one batch of data (in this case, 64 images). This is one forward pass. Note that our predictions won’t be any better than random at this stage, since we start with random weights. bs = 64 # batch size xb = x_train[0:bs] # a mini-batch from x preds = model(xb) # predictions preds[0], preds.shape print(preds[0], preds.shape) Out: tensor([-1.7664, -2.3189, -2.1053, -2.2533, -2.1124, -2.6840, -2.3172, -2.5439, -2.6346, -2.7224], grad_fn=<SelectBackward>) torch.Size([64, 10]) As you see, the preds tensor contains not only the tensor values, but also a gradient function. We’ll use this later to do backprop. Let’s implement negative log-likelihood to use as the loss function (again, we can just use standard Python): def nll(input, target): return -input[range(target.shape[0]), target].mean() loss_func = nll Let’s check our loss with our random model, so we can see if we improve after a backprop pass later. yb = y_train[0:bs] print(loss_func(preds, yb)) Out: tensor(2.2978, grad_fn=<NegBackward>) Let’s also implement a function to calculate the accuracy of our model. 
For each prediction, if the index with the largest value matches the target value, then the prediction was correct. def accuracy(out, yb): preds = torch.argmax(out, dim=1) return (preds == yb).float().mean() Let’s check the accuracy of our random model, so we can see if our accuracy improves as our loss improves. print(accuracy(preds, yb)) Out: tensor(0.1250) We can now run a training loop. For each iteration, we will: - select a mini-batch of data (of size bs) - use the model to make predictions - calculate the loss loss.backward()updates the gradients of the model, in this case, weightsand bias. We now use these gradients to update the weights and bias. We do this within the torch.no_grad() context manager, because we do not want these actions to be recorded for our next calculation of the gradient. You can read more about how PyTorch’s Autograd records operations here. We then set the gradients to zero, so that we are ready for the next loop. Otherwise, our gradients would record a running tally of all the operations that had happened (i.e. loss.backward() adds the gradients to whatever is already stored, rather than replacing them). Tip You can use the standard python debugger to step through PyTorch code, allowing you to check the various variable values at each step. Uncomment set_trace() below to try it out. from IPython.core.debugger import set_trace lr = 0.5 # learning rate epochs = 2 # how many epochs to train for for epoch in range(epochs): for i in range((n - 1) // bs + 1): # set_trace() start_i = i * bs end_i = start_i + bs xb = x_train[start_i:end_i] yb = y_train[start_i:end_i] pred = model(xb) loss = loss_func(pred, yb) loss.backward() with torch.no_grad(): weights -= weights.grad * lr bias -= bias.grad * lr weights.grad.zero_() bias.grad.zero_() That’s it: we’ve created and trained a minimal neural network (in this case, a logistic regression, since we have no hidden layers) entirely from scratch! 
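The parameter update inside this loop is ordinary gradient descent. Stripped of tensors and autograd, the same rule looks like this (a tiny illustrative fit with hypothetical names, not part of the MNIST example):

```python
# The same update rule as the loop above (param -= grad * lr), sketched for
# a one-parameter least-squares fit of y = w_demo * x with true slope 2.
xs = [1.0, 2.0, 3.0]
ys = [2.0 * x for x in xs]
w_demo, step_size = 0.0, 0.1

for _ in range(50):
    # d/dw of the mean squared error: mean of 2 * (w*x - y) * x
    grad = sum(2 * (w_demo * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w_demo -= step_size * grad
    # grad is recomputed from scratch each pass, which plays the role of
    # zeroing the gradient between iterations

print(round(w_demo, 6))  # → 2.0
```

The only extra bookkeeping PyTorch needs is the explicit `grad.zero_()`, because `backward()` accumulates into `.grad` rather than recomputing it from scratch.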
Let’s check the loss and accuracy and compare those to what we got earlier. We expect that the loss will have decreased and accuracy to have increased, and they have. print(loss_func(model(xb), yb), accuracy(model(xb), yb)) Out: tensor(0.0831, grad_fn=<NegBackward>) tensor(1.) Using torch.nn.functional¶ We will now refactor our code, so that it does the same thing as before, only we’ll start taking advantage of PyTorch’s nn classes to make it more concise and flexible. At each step from here, we should be making our code one or more of: shorter, more understandable, and/or more flexible. The first and easiest step is to make our code shorter by replacing our hand-written activation and loss functions with those from torch.nn.functional (which is generally imported into the namespace F by convention). This module contains all the functions in the torch.nn library (whereas other parts of the library contain classes). As well as a wide range of loss and activation functions, you’ll also find here some convenient functions for creating neural nets, such as pooling functions. (There are also functions for doing convolutions, linear layers, etc, but as we’ll see, these are usually better handled using other parts of the library.) If you’re using negative log likelihood loss and log softmax activation, then Pytorch provides a single function F.cross_entropy that combines the two. So we can even remove the activation function from our model. import torch.nn.functional as F loss_func = F.cross_entropy def model(xb): return xb @ weights + bias Note that we no longer call log_softmax in the model function. Let’s confirm that our loss and accuracy are the same as before: print(loss_func(model(xb), yb), accuracy(model(xb), yb)) Out: tensor(0.0831, grad_fn=<NllLossBackward>) tensor(1.) Refactor using nn.Module¶ Next up, we’ll use nn.Module and nn.Parameter, for a clearer and more concise training loop. 
We subclass nn.Module (which itself is a class and able to keep track of state). In this case, we want to create a class that holds our weights, bias, and method for the forward step. nn.Module has a number of attributes and methods (such as .parameters() and .zero_grad()) which we will be using. Note nn.Module (uppercase M) is a PyTorch specific concept, and is a class we’ll be using a lot. nn.Module is not to be confused with the Python concept of a (lowercase m) module, which is a file of Python code that can be imported. from torch import nn class Mnist_Logistic(nn.Module): def __init__(self): super().__init__() self.weights = nn.Parameter(torch.randn(784, 10) / math.sqrt(784)) self.bias = nn.Parameter(torch.zeros(10)) def forward(self, xb): return xb @ self.weights + self.bias Since we’re now using an object instead of just using a function, we first have to instantiate our model: model = Mnist_Logistic() Now we can calculate the loss in the same way as before. Note that nn.Module objects are used as if they are functions (i.e they are callable), but behind the scenes Pytorch will call our forward method automatically. print(loss_func(model(xb), yb)) Out: tensor(2.4200, grad_fn=<NllLossBackward>) Previously for our training loop we had to update the values for each parameter by name, and manually zero out the grads for each parameter separately, like this: with torch.no_grad(): weights -= weights.grad * lr bias -= bias.grad * lr weights.grad.zero_() bias.grad.zero_() Now we can take advantage of model.parameters() and model.zero_grad() (which are both defined by PyTorch for nn.Module) to make those steps more concise and less prone to the error of forgetting some of our parameters, particularly if we had a more complicated model: with torch.no_grad(): for p in model.parameters(): p -= p.grad * lr model.zero_grad() We’ll wrap our little training loop in a fit function so we can run it again later. 
def fit():
    for epoch in range(epochs):
        for i in range((n - 1) // bs + 1):
            start_i = i * bs
            end_i = start_i + bs
            xb = x_train[start_i:end_i]
            yb = y_train[start_i:end_i]
            pred = model(xb)
            loss = loss_func(pred, yb)
            loss.backward()

            with torch.no_grad():
                for p in model.parameters():
                    p -= p.grad * lr
                model.zero_grad()

fit()

Let's double-check that our loss has gone down:

print(loss_func(model(xb), yb))

Out:

tensor(0.0814, grad_fn=<NllLossBackward>)

Refactor using nn.Linear¶

We continue to refactor our code. Instead of manually defining and initializing self.weights and self.bias, and calculating xb @ self.weights + self.bias, we will instead use the Pytorch class nn.Linear for a linear layer, which does all that for us. Pytorch has many types of predefined layers that can greatly simplify our code, and often makes it faster too.

class Mnist_Logistic(nn.Module):
    def __init__(self):
        super().__init__()
        self.lin = nn.Linear(784, 10)

    def forward(self, xb):
        return self.lin(xb)

We instantiate our model and calculate the loss in the same way as before:

model = Mnist_Logistic()
print(loss_func(model(xb), yb))

Out:

tensor(2.3684, grad_fn=<NllLossBackward>)

We are still able to use our same fit method as before.

fit()
print(loss_func(model(xb), yb))

Out:

tensor(0.0817, grad_fn=<NllLossBackward>)

Refactor using optim¶

Pytorch also has a package with various optimization algorithms, torch.optim. We can use the step method from our optimizer to take a forward step, instead of manually updating each parameter.

This will let us replace our previous manually coded optimization step:

with torch.no_grad():
    for p in model.parameters():
        p -= p.grad * lr
    model.zero_grad()

and instead use just:

opt.step()
opt.zero_grad()

(optim.zero_grad() resets the gradient to 0 and we need to call it before computing the gradient for the next minibatch.)

from torch import optim

We'll define a little function to create our model and optimizer so we can reuse it in the future.
def get_model():
    model = Mnist_Logistic()
    return model, optim.SGD(model.parameters(), lr=lr)

model, opt = get_model()
print(loss_func(model(xb), yb))

for epoch in range(epochs):
    for i in range((n - 1) // bs + 1):
        start_i = i * bs
        end_i = start_i + bs
        xb = x_train[start_i:end_i]
        yb = y_train[start_i:end_i]
        pred = model(xb)
        loss = loss_func(pred, yb)
        loss.backward()

        opt.step()
        opt.zero_grad()

print(loss_func(model(xb), yb))

Out:

tensor(2.3087, grad_fn=<NllLossBackward>)
tensor(0.0806, grad_fn=<NllLossBackward>)

Refactor using Dataset¶

PyTorch has an abstract Dataset class. A Dataset can be anything that has a __len__ function (called by Python's standard len function) and a __getitem__ function as a way of indexing into it. This tutorial walks through a nice example of creating a custom FacialLandmarkDataset class as a subclass of Dataset.

PyTorch's TensorDataset is a Dataset wrapping tensors. By defining a length and way of indexing, this also gives us a way to iterate, index, and slice along the first dimension of a tensor. This will make it easier to access both the independent and dependent variables in the same line as we train.

from torch.utils.data import TensorDataset

Both x_train and y_train can be combined in a single TensorDataset, which will be easier to iterate over and slice.

train_ds = TensorDataset(x_train, y_train)

Previously, we had to iterate through minibatches of x and y values separately:

xb = x_train[start_i:end_i]
yb = y_train[start_i:end_i]

Now, we can do these two steps together:

xb, yb = train_ds[i*bs : i*bs+bs]

model, opt = get_model()

for epoch in range(epochs):
    for i in range((n - 1) // bs + 1):
        xb, yb = train_ds[i * bs: i * bs + bs]
        pred = model(xb)
        loss = loss_func(pred, yb)
        loss.backward()
        opt.step()
        opt.zero_grad()

print(loss_func(model(xb), yb))

Out:

tensor(0.0831, grad_fn=<NllLossBackward>)

Refactor using DataLoader¶

Pytorch's DataLoader is responsible for managing batches. You can create a DataLoader from any Dataset. DataLoader makes it easier to iterate over batches. Rather than having to use train_ds[i*bs : i*bs+bs], the DataLoader gives us each minibatch automatically.
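In plain Python terms, the Dataset/DataLoader contract amounts to this: anything with a length and integer indexing can be cut into minibatches. A minimal torch-free sketch (illustrative names only):

```python
# Minimal sketch of the Dataset/DataLoader contract: an object with
# __len__ and __getitem__, plus a loader that yields fixed-size batches.
class RangeDataset:
    def __init__(self, n):
        self.n = n

    def __len__(self):
        return self.n

    def __getitem__(self, i):
        return i, i * i  # an (input, target) pair

def batches(ds, bs):
    for start in range(0, len(ds), bs):
        yield [ds[i] for i in range(start, min(start + bs, len(ds)))]

out = list(batches(RangeDataset(5), 2))
print(out)  # → [[(0, 0), (1, 1)], [(2, 4), (3, 9)], [(4, 16)]]
```

PyTorch's DataLoader adds shuffling, collation into tensors, and parallel loading on top of exactly this iteration pattern.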
from torch.utils.data import DataLoader train_ds = TensorDataset(x_train, y_train) train_dl = DataLoader(train_ds, batch_size=bs) Previously, our loop iterated over batches (xb, yb) like this: for i in range((n-1)//bs + 1): xb,yb = train_ds[i*bs : i*bs+bs] pred = model(xb) Now, our loop is much cleaner, as (xb, yb) are loaded automatically from the data loader: for xb,yb in train_dl: pred = model(xb) model, opt = get_model() for epoch in range(epochs): for xb, yb in train_dl: pred = model(xb) loss = loss_func(pred, yb) loss.backward() opt.step() opt.zero_grad() print(loss_func(model(xb), yb)) Out: tensor(0.0819, grad_fn=<NllLossBackward>) Thanks to Pytorch’s nn.Module, nn.Parameter, Dataset, and DataLoader, our training loop is now dramatically smaller and easier to understand. Let’s now try to add the basic features necessary to create effective models in practice. Add validation¶ In section 1, we were just trying to get a reasonable training loop set up for use on our training data. In reality, you always should also have a validation set, in order to identify if you are overfitting. Shuffling the training data is important to prevent correlation between batches and overfitting. On the other hand, the validation loss will be identical whether we shuffle the validation set or not. Since shuffling takes extra time, it makes no sense to shuffle the validation data. We’ll use a batch size for the validation set that is twice as large as that for the training set. This is because the validation set does not need backpropagation and thus takes less memory (it doesn’t need to store the gradients). We take advantage of this to use a larger batch size and compute the loss more quickly. train_ds = TensorDataset(x_train, y_train) train_dl = DataLoader(train_ds, batch_size=bs, shuffle=True) valid_ds = TensorDataset(x_valid, y_valid) valid_dl = DataLoader(valid_ds, batch_size=bs * 2) We will calculate and print the validation loss at the end of each epoch. 
(Note that we always call model.train() before training, and model.eval() before inference, because these are used by layers such as nn.BatchNorm2d and nn.Dropout to ensure appropriate behaviour for these different phases.) model, opt = get_model() for epoch in range(epochs): model.train() for xb, yb in train_dl: pred = model(xb) loss = loss_func(pred, yb) loss.backward() opt.step() opt.zero_grad() model.eval() with torch.no_grad(): valid_loss = sum(loss_func(model(xb), yb) for xb, yb in valid_dl) print(epoch, valid_loss / len(valid_dl)) Out: 0 tensor(0.4385) 1 tensor(0.3114) Create fit() and get_data()¶ We’ll now do a little refactoring of our own. Since we go through a similar process twice of calculating the loss for both the training set and the validation set, let’s make that into its own function, loss_batch, which computes the loss for one batch. We pass an optimizer in for the training set, and use it to perform backprop. For the validation set, we don’t pass an optimizer, so the method doesn’t perform backprop. def loss_batch(model, loss_func, xb, yb, opt=None): loss = loss_func(model(xb), yb) if opt is not None: loss.backward() opt.step() opt.zero_grad() return loss.item(), len(xb) fit runs the necessary operations to train our model and compute the training and validation losses for each epoch. import numpy as np def fit(epochs, model, loss_func, opt, train_dl, valid_dl): for epoch in range(epochs): model.train() for xb, yb in train_dl: loss_batch(model, loss_func, xb, yb, opt) model.eval() with torch.no_grad(): losses, nums = zip( *[loss_batch(model, loss_func, xb, yb) for xb, yb in valid_dl] ) val_loss = np.sum(np.multiply(losses, nums)) / np.sum(nums) print(epoch, val_loss) get_data returns dataloaders for the training and validation sets. 
def get_data(train_ds, valid_ds, bs):
    return (
        DataLoader(train_ds, batch_size=bs, shuffle=True),
        DataLoader(valid_ds, batch_size=bs * 2),
    )

Now, our whole process of obtaining the data loaders and fitting the model can be run in 3 lines of code:

train_dl, valid_dl = get_data(train_ds, valid_ds, bs)
model, opt = get_model()
fit(epochs, model, loss_func, opt, train_dl, valid_dl)

Out:

0 0.41907566217184067
1 0.2820877184152603

You can use these basic 3 lines of code to train a wide variety of models. Let’s see if we can use them to train a convolutional neural network (CNN)!

Switch to CNN

We are now going to build our neural network with three convolutional layers. Because none of the functions in the previous section assume anything about the model form, we’ll be able to use them to train a CNN without any modification.

We will use PyTorch’s predefined Conv2d class as our convolutional layer. We define a CNN with 3 convolutional layers. Each convolution is followed by a ReLU. At the end, we perform an average pooling. (Note that view is PyTorch’s version of numpy’s reshape.)

class Mnist_CNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1)
        self.conv2 = nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1)
        self.conv3 = nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1)

    def forward(self, xb):
        xb = xb.view(-1, 1, 28, 28)
        xb = F.relu(self.conv1(xb))
        xb = F.relu(self.conv2(xb))
        xb = F.relu(self.conv3(xb))
        xb = F.avg_pool2d(xb, 4)
        return xb.view(-1, xb.size(1))

lr = 0.1

Momentum is a variation on stochastic gradient descent that takes previous updates into account as well and generally leads to faster training.

model = Mnist_CNN()
opt = optim.SGD(model.parameters(), lr=lr, momentum=0.9)

fit(epochs, model, loss_func, opt, train_dl, valid_dl)

Out:

0 0.36573471329212187
1 0.24933695367574693

nn.Sequential

torch.nn has another handy class we can use to simplify our code: Sequential.
A Sequential object runs each of the modules contained within it, in a sequential manner. This is a simpler way of writing our neural network.

To take advantage of this, we need to be able to easily define a custom layer from a given function. For instance, PyTorch doesn’t have a view layer, and we need to create one for our network. Lambda will create a layer that we can then use when defining a network with Sequential.

class Lambda(nn.Module):
    def __init__(self, func):
        super().__init__()
        self.func = func

    def forward(self, x):
        return self.func(x)

def preprocess(x):
    return x.view(-1, 1, 28, 28)

The model created with Sequential is simply:

model = nn.Sequential(
    Lambda(preprocess),
    nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1),
    nn.ReLU(),
    nn.AvgPool2d(4),
    Lambda(lambda x: x.view(x.size(0), -1)),
)

opt = optim.SGD(model.parameters(), lr=lr, momentum=0.9)

fit(epochs, model, loss_func, opt, train_dl, valid_dl)

Out:

0 0.36269435596466065
1 0.2155356614947319

Wrapping DataLoader

Our CNN is fairly concise, but it only works with MNIST, because:

- It assumes the input is a 28*28 long vector
- It assumes that the final CNN grid size is 4*4 (since that’s the average pooling kernel size we used)

Let’s get rid of these two assumptions, so our model works with any 2d single channel image. First, we can remove the initial Lambda layer by moving the data preprocessing into a generator:

def preprocess(x, y):
    return x.view(-1, 1, 28, 28), y

class WrappedDataLoader:
    def __init__(self, dl, func):
        self.dl = dl
        self.func = func

    def __len__(self):
        return len(self.dl)

    def __iter__(self):
        batches = iter(self.dl)
        for b in batches:
            yield (self.func(*b))

train_dl, valid_dl = get_data(train_ds, valid_ds, bs)
train_dl = WrappedDataLoader(train_dl, preprocess)
valid_dl = WrappedDataLoader(valid_dl, preprocess)

Next, we can replace nn.AvgPool2d with nn.AdaptiveAvgPool2d, which allows us to define the size of the output tensor we want, rather than the input tensor we have. As a result, our model will work with any size input.
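It is easy to verify the spatial sizes that made avg_pool2d(xb, 4) work — and why the fixed 4*4 assumption is fragile — with the standard convolution output-size formula. A quick sanity check in plain Python (illustrative only, not tutorial code):

```python
def conv_out(n, kernel_size=3, stride=2, padding=1):
    # Output length of a convolution along one spatial dimension.
    return (n + 2 * padding - kernel_size) // stride + 1

size = 28                        # MNIST images are 28x28
for name in ("conv1", "conv2", "conv3"):
    size = conv_out(size)
    print(name, "->", size)      # the sizes go 28 -> 14 -> 7 -> 4

# Only for 28x28 inputs does the final grid match the 4x4 pooling kernel;
# for any other input size, adaptive pooling is what keeps the model valid.
assert size == 4
```

For a 64-pixel input the same three layers would leave an 8x8 grid, which is exactly the case nn.AdaptiveAvgPool2d(1) handles gracefully.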
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    Lambda(lambda x: x.view(x.size(0), -1)),
)

opt = optim.SGD(model.parameters(), lr=lr, momentum=0.9)

Let’s try it out:

fit(epochs, model, loss_func, opt, train_dl, valid_dl)

Out:

0 0.32028542876243593
1 0.29812016689777376

Using your GPU

If you’re lucky enough to have access to a CUDA-capable GPU (you can rent one for about $0.50/hour from most cloud providers) you can use it to speed up your code. First check that your GPU is working in PyTorch:

print(torch.cuda.is_available())

Out:

True

And then create a device object for it:

dev = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")

Let’s update preprocess to move batches to the GPU:

def preprocess(x, y):
    return x.view(-1, 1, 28, 28).to(dev), y.to(dev)

train_dl, valid_dl = get_data(train_ds, valid_ds, bs)
train_dl = WrappedDataLoader(train_dl, preprocess)
valid_dl = WrappedDataLoader(valid_dl, preprocess)

Finally, we can move our model to the GPU.

model.to(dev)
opt = optim.SGD(model.parameters(), lr=lr, momentum=0.9)

You should find it runs faster now:

fit(epochs, model, loss_func, opt, train_dl, valid_dl)

Out:

0 0.20925791692137719
1 0.19359942852258682

Closing thoughts

We now have a general data pipeline and training loop which you can use for training many types of models using PyTorch. To see how simple training a model can now be, take a look at the mnist_sample sample notebook.

Of course, there are many things you’ll want to add, such as data augmentation, hyperparameter tuning, monitoring training, transfer learning, and so forth. These features are available in the fastai library, which has been developed using the same design approach shown in this tutorial, providing a natural next step for practitioners looking to take their models further.

We promised at the start of this tutorial we’d explain through example each of torch.nn, torch.optim, Dataset, and DataLoader.
So let’s summarize what we’ve seen:

- torch.nn
  - Module: creates a callable which behaves like a function, but can also contain state (such as neural net layer weights). It knows what Parameter(s) it contains and can zero all their gradients, loop through them for weight updates, etc.
  - Parameter: a wrapper for a tensor that tells a Module that it has weights that need updating during backprop. Only tensors with the requires_grad attribute set are updated.
  - functional: a module (usually imported into the F namespace by convention) which contains activation functions, loss functions, etc, as well as non-stateful versions of layers such as convolutional and linear layers.
- torch.optim: contains optimizers such as SGD, which update the weights of Parameter during the backward step.
- Dataset: an abstract interface of objects with a __len__ and a __getitem__, including classes provided with PyTorch such as TensorDataset.
- DataLoader: takes any Dataset and creates an iterator which returns batches of data.

Total running time of the script: (0 minutes 36.490 seconds)
https://pytorch.org/tutorials/beginner/nn_tutorial.html
CC-MAIN-2021-31
en
refinedweb
387 [details] repro solution

### Overview

Given a solution where:

1. TestApp.iOS references TestApp PCL and ClassLibrary1_iOS
2. TestApp (PCL) references ClassLibrary1 PCL
3. ClassLibrary1 (PCL) and ClassLibrary1_iOS have the same assembly name, default namespace, and methods

when the iOS project is built, the ClassLibrary1_iOS project dll gets copied to the app's bin/ directory, but with the wrong mdb file. Instead, the mdb file for the ClassLibrary1 PCL project is present, resulting in not being able to debug the iOS library project and this error in the debug output:

> 2016-06-17 15:48:17.581 TestAppiOS[41586:3406222] warning: Symbol file .mdb doesn't match image

Note, the Android project does not have this issue even with the same project structure as above. Also, this seems to only be an issue with Visual Studio, as testing on XS resulted in the correct mdb being copied for the iOS project. However, given that both library projects have the same assembly name and methods, I'm not 100% sure what the expected behavior is in this case.

### Steps to Reproduce

1. Run the "TestApp.iOS" project in the attached repro solution

### Expected Results

The breakpoint set in "Class1.cs" inside ClassLibrary_iOS will be hit, as this is the method that is called during runtime.

### Actual Results

The breakpoint is not hit

### Environment Info

Microsoft Visual Studio Professional 2015
Version 14.0.25123.00 Update 2
Microsoft .NET Framework Version 4.6.01584
5.2.603.

Created attachment 16388 [details] build output

Thank you for a very thorough investigation of the issue and a beautifully clean repro! We'll take a look. It seems like this wouldn't be an uncommon scenario for developers creating bait&switch PCLs, so I think it's worth investigating and documenting an expected behavior.

Updating Target Milestone to C8SR1.

This bug will be addressed on Cycle10 when we switch to using PPDB instead of MDB for Visual Studio 2015+.

Fixed in 4.6.0.168
https://bugzilla.xamarin.com/41/41966/bug.html
CC-MAIN-2021-25
en
refinedweb
Contents: TBD

- ID Token: TBD

Registering new scopes

To register a new set of scopes, please feel free to just create a new section. Also, once this is finalized, please open a ticket on to request the scopes being added to the staging/production systems.

Standard

These scopes are standardized, and not namespaced.

Availability: Development, Staging, Production.

Ipsilon

Base namespace:
https://www.fedoraproject.org/w/index.php?title=Infrastructure/Authentication&oldid=485606
CC-MAIN-2021-25
en
refinedweb
I am using NgbModal to open a modal window described with an <ng-template> directive. It seems to work quite well, but I can’t figure out how to test the content of my modal. I am new to this, so feel free to point out any other issues you might see with my approach.

I think the issue is that the <ng-template> content is not outputted before the modal is actually shown. So I tried to call click() on the toggle before, but that seems to not be enough. For instance, there is a form in my content, so my test could look like:

it("toggle button should work", () => {
  const button = fixture.nativeElement.querySelector("button");
  expect(button).toBeTruthy();
  button.click();
  fixture.detectChanges();
  const formDe = fixture.debugElement.query(By.css("div"));
  expect(formDe).toBeTruthy();
});

Here is my component:

import { Component } from "@angular/core";
import { NgbModal } from "@ng-bootstrap/ng-bootstrap";

@Component({
  selector: "app-login",
  template: `
    <ng-template #content>
      <div class="modal-body"></div>
    </ng-template>
    <button class="btn btn-lg btn-outline-primary" (click)="open(content)">
      Launch demo modal
    </button>
  `,
  styleUrls: ["./login.component.css"]
})
export class LoginComponent {
  constructor(private modalService: NgbModal) {}

  open(modal) {
    this.modalService.open(modal);
  }
}

I would like to be able to get a hold of the modal content in some way. Currently I am getting Error: Expected null to be truthy. on the div element.
https://angularquestions.com/2019/10/20/how-to-test-modal-content-when-using-ngbmodal-service/
CC-MAIN-2021-25
en
refinedweb
README

js-year-calendar

A fully customizable year calendar widget

This library is also available for:

Requirements

This plugin uses pure javascript. No library is required.

Installation

You can get the widget using the following methods:

- From the GitHub repository
- From the Node package manager, using the following command: npm install js-year-calendar
- From Yarn, using the following command: yarn add js-year-calendar
- From the CDN, by adding the following script directly in your HTML page:

<script src=""></script>

AND

<link rel="stylesheet" type="text/css" href="" />

Initialization

If you're using javascript modules, don't forget to import the library:

import Calendar from 'js-year-calendar';
import 'js-year-calendar/dist/js-year-calendar.css';

Usage

You can create a calendar using the following javascript code:

new Calendar('.calendar')

Or

new Calendar(document.querySelector('.calendar'));

Where .calendar is the selector of a DIV element that should contain your calendar.

You can also use the following HTML if you don't want to use javascript to initialize the calendar:

<div data-provide="calendar"></div>

The calendar will be automatically created when the page will finish loading.

Using options

You can specify options to customize the calendar:

new Calendar('.calendar', {
    style: 'background',
    minDate: new Date()
})

You can find the exhaustive list of options in the documentation.
Language

If you want to use the calendar in a different language, you should import the locale file corresponding to the language you want to use, and then set the language prop of the calendar:

import Calendar from 'js-year-calendar';
import 'js-year-calendar/locales/js-year-calendar.fr';

OR

<script src=""></script>
<script src=""></script>

Then

new Calendar('.calendar', {
    language: 'fr'
})

The list of available languages is available here

Updating calendar

You can update the calendar after being instantiated:

const calendar = new Calendar('.calendar');
calendar.setStyle('background');
calendar.setMaxDate(new Date());

You can find the exhaustive list of methods in the documentation.

Events

You can bind events to the calendar at initialization:

const calendar = new Calendar('.calendar', {
    clickDay: function(e) {
        alert('Click on day ' + e.date.toString());
    }
});

or later:

new Calendar('.calendar');

document.querySelector('.calendar').addEventListener('clickDay', function(e) {
    alert('Click on day ' + e.date.toString());
});

You can find the exhaustive list of events in the documentation.

Migrating from bootstrap-year-calendar

This widget is based on the bootstrap-year-calendar widget. If you were using this widget, these are the modifications to apply to successfully migrate your project:

Initialization

The project doesn't use jQuery anymore, so the initialization of the calendar will be using pure Javascript.
The old code:

$('.calendar').calendar({ /* Options */ })

Will be replaced by:

new Calendar('.calendar', { /* Options */ });

Or

new Calendar($('.calendar').get(0), { /* Options */ }); // Use ".get(0)" to get the DOM element from the jQuery element

Get the calendar from the DOM element

Given that the widget doesn't rely on jQuery, it won't be possible to get the calendar instance from the DOM element anymore:

$('.calendar').data('calendar').set...();

You will have to store the instance of the calendar by yourself:

const calendar = new Calendar('.calendar');
...
calendar.set...();

What next

Check the documentation and examples pages to discover all the functionalities.
https://www.skypack.dev/view/@4lolo/js-year-calendar
CC-MAIN-2021-25
en
refinedweb
There are a couple of issues with OpenSSL’s BIO_*printf() functions, defined in crypto/bio/b_print.c, that are set to be fixed in the forthcoming security release.

The function that is primarily responsible for interpreting the format string and transforming this string and the function’s arguments to a string is _dopr(). _dopr() scans the format string in an incremental fashion and employs doapr_outch() for each character it wants to output.

doapr_outch()

doapr_outch()’s first two arguments are a double pointer to a statically allocated buffer (char** sbuffer) and a pointer to a char pointer (char **buffer) whose value will be set to a memory region dynamically allocated by doapr_outch(). The first argument, the static buffer, should always be valid. Its size is pointed to by the third argument to doapr_outch(), size_t* currlen.

700 static void
701 doapr_outch(char **sbuffer,
702             char **buffer, size_t *currlen, size_t *maxlen, int c)
703 {
704     /* If we haven't at least one buffer, someone has doe a big booboo */
705     assert(*sbuffer != NULL || buffer != NULL);
706
707     /* |currlen| must always be <= |*maxlen| */
708     assert(*currlen <= *maxlen);
709
710     if (buffer && *currlen == *maxlen) {
711         *maxlen += 1024;
712         if (*buffer == NULL) {
713             *buffer = OPENSSL_malloc(*maxlen);
714             if (!*buffer) {
715                 /* Panic! Can't really do anything sensible. Just return */
716                 return;
717             }
718             if (*currlen > 0) {
719                 assert(*sbuffer != NULL);
720                 memcpy(*buffer, *sbuffer, *currlen);
721             }
722             *sbuffer = NULL;
723         } else {
724             *buffer = OPENSSL_realloc(*buffer, *maxlen);
725             if (!*buffer) {
726                 /* Panic! Can't really do anything sensible. Just return */
727                 return;
728             }
729         }
730     }
731
732     if (*currlen < *maxlen) {
733         if (*sbuffer)
734             (*sbuffer)[(*currlen)++] = (char)c;
735         else
736             (*buffer)[(*currlen)++] = (char)c;
737     }
738
739     return;
740 }

The idea here is that doapr_outch() will incrementally fill the statically allocated buffer sbuffer until its maximum capacity has been reached; whether this is the case is checked by the if on line 732, and a byte is appended to *sbuffer on line 734:

732     if (*currlen < *maxlen) {
733         if (*sbuffer)
734             (*sbuffer)[(*currlen)++] = (char)c;

Once sbuffer is full (at which point *currlen is equal to *maxlen) and the calling function allows the dynamic allocation of memory (buffer is non-zero), then this condition evaluates as true:

710     if (buffer && *currlen == *maxlen) {

From this point on, an allocation takes place every 1024 bytes. Once a single successful heap allocation takes place, *sbuffer is zeroed:

722             *sbuffer = NULL;

The corollary of sbuffer being zero for the remainder of the BIO_printf() invocation is that from now on, bytes will be appended to the heap-based *buffer rather than the stack-based *sbuffer:

732     if (*currlen < *maxlen) {
733         if (*sbuffer)
734             (*sbuffer)[(*currlen)++] = (char)c;
735         else
736             (*buffer)[(*currlen)++] = (char)c;
737     }

Differences between BIO_printf/BIO_vprintf and BIO_snprintf/BIO_vsnprintf

The functions BIO_printf() and BIO_vprintf() allow doapr_outch() to dynamically allocate memory by supplying a valid pointer to a char pointer.

744 int BIO_printf(BIO *bio, const char *format, ...)
745 {
746     va_list args;
747     int ret;
748
749     va_start(args, format);
750
751     ret = BIO_vprintf(bio, format, args);
752
753     va_end(args);
754     return (ret);
755 }
756
757 int BIO_vprintf(BIO *bio, const char *format, va_list args)
758 {
759     int ret;
760     size_t retlen;
761     char hugebuf[1024 * 2];     /* Was previously 10k, which is unreasonable
762                                  * in small-stack environments, like threads
763                                  * or DOS programs. */
764     char *hugebufp = hugebuf;
765     size_t hugebufsize = sizeof(hugebuf);
766     char *dynbuf = NULL;
767     int ignored;
768
769     dynbuf = NULL;
770     CRYPTO_push_info("doapr()");
771     _dopr(&hugebufp, &dynbuf, &hugebufsize, &retlen, &ignored, format, args);
772     if (dynbuf) {
773         ret = BIO_write(bio, dynbuf, (int)retlen);
774         OPENSSL_free(dynbuf);
775     } else {
776         ret = BIO_write(bio, hugebuf, (int)retlen);
777     }
778     CRYPTO_pop_info();
779     return (ret);
780 }

BIO_vprintf() supplies both a statically allocated buffer (hugebuf), whose size is encoded in hugebufsize, and a pointer to a char pointer (dynbuf). The same applies to BIO_printf() through its use of BIO_vprintf().

By contrast, the other two *printf functions, BIO_snprintf() and BIO_vsnprintf(), only use a statically allocated buffer, which is to be supplied by the caller:

788 int BIO_snprintf(char *buf, size_t n, const char *format, ...)
789 {
790     va_list args;
791     int ret;
792
793     va_start(args, format);
794
795     ret = BIO_vsnprintf(buf, n, format, args);
796
797     va_end(args);
798     return (ret);
799 }
800
801 int BIO_vsnprintf(char *buf, size_t n, const char *format, va_list args)
802 {
803     size_t retlen;
804     int truncated;
805
806     _dopr(&buf, NULL, &n, &retlen, &truncated, format, args);
807
808     if (truncated)
809         /*
810          * In case of truncation, return -1 like traditional snprintf.
811          * (Current drafts for ISO/IEC 9899 say snprintf should return the
812          * number of characters that would have been written, had the buffer
813          * been large enough.)
814          */
815         return -1;
816     else
817         return (retlen <= INT_MAX) ? (int)retlen : -1;
818 }

The vulnerability

One of the problems with the doapr_outch() function is that it cannot signal failure to allocate memory to its caller, because it is a void-returning function:

713             *buffer = OPENSSL_malloc(*maxlen);
714             if (!*buffer) {
715                 /* Panic! Can't really do anything sensible. Just return */
716                 return;
717             }

724             *buffer = OPENSSL_realloc(*buffer, *maxlen);
725             if (!*buffer) {
726                 /* Panic! Can't really do anything sensible. Just return */
727                 return;

This lack of error signaling means that _dopr() will continue to call doapr_outch() as long as there are characters left to output. Moreover, maxlen is incremented before the allocation. This means that even if the allocation fails, maxlen still represents the size of the heap memory which it would be if the allocation had succeeded:

711         *maxlen += 1024;
712         if (*buffer == NULL) {
713             *buffer = OPENSSL_malloc(*maxlen);
714             if (!*buffer) {
715                 /* Panic! Can't really do anything sensible. Just return */
716                 return;
717             }

Thus, upon the first call to doapr_outch() after the failed allocation, the following condition evaluates as false:

710     if (buffer && *currlen == *maxlen) {

The failed allocation caused *buffer (the value) to be zeroed, but buffer (the pointer) is still valid. However, *currlen no longer equals *maxlen, because *maxlen has just been incremented by 1024 in the previous call. Failing to evaluate this condition as true, the entire middle part of the function is skipped, and the following code is evaluated:

732     if (*currlen < *maxlen) {
733         if (*sbuffer)
734             (*sbuffer)[(*currlen)++] = (char)c;
735         else
736             (*buffer)[(*currlen)++] = (char)c;
737     }

*currlen is now indeed less than *maxlen, and *sbuffer is zero (if at least one valid OPENSSL_malloc() call is successful, *sbuffer is zeroed, as noted earlier). Thus this code is executed:

736             (*buffer)[(*currlen)++] = (char)c;

*buffer is zero, and *currlen might be anything, depending on at which point in the process an allocation failed. Thus, effectively *currlen is used as a pointer to write data to. *currlen is a 32-bit integer, so when used as a pointer it is bound to point to a byte within the first 4 gigabytes of the virtual address space. On a 64-bit system, it is unlikely that a write to this region will not cause a page fault.
However, in a 32-bit memory layout, the odds are in the attacker’s favor, especially if they have some way of causing memory attrition within the germane system. It might seem far-fetched that an attacker might have the agency to cause an allocation to fail at a very precise moment, namely when *currlen, if used as a pointer, is pointing to a memory region that they want to overwrite. However, how much memory there is left to allocate within a system is not merely constituted by OpenSSL’s (or the application that uses it) use of the heap; any other application running concurrently with OpenSSL whose resource consumption might be influenced by the attacker (such as other public-facing networking services running on a server) is susceptible to being complicit in heap corruption occurring in doapr_outch().

Even if precise memory corruption through memory attrition, that could lead to code execution, is in practice too difficult for the attacker, there’s still the possibility that important data within the program’s heap is corrupted, whose consequences could be nearly as disastrous. Heap vandalism, basically. And even if you discount the presence of malice, then genuine, temporary shortages of heap memory could lead to random heap corruption.

An alternative approach to triggering the vulnerability

Moreover, an interesting, sure-fire way to cause an OPENSSL_realloc() failure exists. OPENSSL_realloc() is really just a macro for CRYPTO_realloc():

375 void *CRYPTO_realloc(void *str, int num, const char *file, int line)
376 {
377     void *ret = NULL;
378
379     if (str == NULL)
380         return CRYPTO_malloc(num, file, line);
381
382     if (num <= 0)
383         return NULL;

num is a signed, 32-bit integer. If it is zero or negative, NULL is returned. Because in doapr_outch() *maxlen is incremented by 1024 for each allocation:

711         *maxlen += 1024;

it will eventually become a negative value.
The subsequent OPENSSL_realloc() will then inevitably fail, because CRYPTO_realloc() refuses to do allocations of a negative size. In other words, by supplying a very large string to BIO_printf() (basically one where the result of the combination of the format string and the arguments exceeds 1 << 31 bytes minus the size of the stack-based buffer), the vulnerability is guaranteed to trigger. Probably another way than using the “%s” format with a very large string is to exploit the padding mechanisms present in the helper functions fmtstr(), fmtint(), fmtfp().

Affected software

I’ve been able to confirm that PHP’s openssl_pkcs7_encrypt is vulnerable to this attack through its internal use of BIO_printf, if an attacker is able to supply a very large $headers parameter. Apache httpd also uses BIO_printf, but I haven’t yet checked to what extent it might be exploitable. A number of other high-profile applications are also using BIO_printf().
https://guidovranken.com/2016/02/27/openssl-cve-2016-0799-heap-corruption-via-bio_printf/
CC-MAIN-2021-25
en
refinedweb
So here i am wondering, how would i go about this problem:

In my FPS game, there are BOTs. BOTs get a random name generated at the start of every game scene. Now what i want to do, is to only generate the name once at the very first start of the level, and then use the same one until i quit to the menu and choose a different map. How would i go about that?

I tried thinking about using static variables, however, these are counted as one and the same across multiple instances of the script. So that's a no; now making individually 12 or so static variables, and somehow making the script know which one to choose, seems like a way too complicated approach to my problem. PlayerPrefs are even worse for this. The last idea i just had was to use some object that uses the DontDestroyOnLoad() function, but then how would i even go about assigning the bot objects when loading a new map again? Please help, i seriously have no idea.

but, you want one name for each type of bot? you are destroying and instantiating multiple times that bot? i imagine that there are multiple types of bots and they get instantiated/destroyed all the time?

yeah, thats how i have it working right now.

and whats the difference between the bots? have different scripts?

Answer by talyh · Jun 15, 2019 at 02:54 PM

You could have a readonly variable. The caveat being, you need to generate the random name at the constructor.

public class Bot
{
    private readonly string _name;
    public string name { get { return _name; } }

    public Bot()
    {
        _name = "randomlygenerated";
    }
}

Hello, sorry for the late answer. I am trying to understand what this code really does. I will try this when i have the time, but if this somehow helps, my bots' gameobjects get destroyed when they are killed.

Answer by Kennai · Jun 21, 2019 at 01:44 PM

Create a simple class with two variables - string Name and bool Taken - then create a static class, where you will have a list of that class.
And two static functions:

TakeName() - will get the first empty name (and mark it as taken) from the list, or generate a new name (and also mark it as taken and add it to the list) if all names are taken. Call this function in the bot's Awake function.

ReleaseName(string name) - call this func inside the bot's OnDestroy function. This function will uncheck the "taken" attribute for that name.
https://answers.unity.com/questions/1640869/dynamic-static-variables.html
CC-MAIN-2021-25
en
refinedweb
nghttp2_submit_response

Synopsis

#include <nghttp2/nghttp2.h>

int nghttp2_submit_response(nghttp2_session *session, int32_t stream_id, const nghttp2_nv *nva, size_t nvlen, const nghttp2_data_provider *data_prd)

Submits response HEADERS frame and optionally one or more DATA frames against the stream stream_id.

The nva is an array of name/value pair nghttp2_nv with nvlen elements. The application is responsible to include required pseudo-header fields (header field whose name starts with ":") in nva and must place pseudo-headers before regular header fields.

This function creates copies of all name/value pairs in nva. It also lower-cases all names in nva. The order of elements in nva is preserved. For header fields where nghttp2_nv_flag.NGHTTP2_NV_FLAG_NO_COPY_NAME and nghttp2_nv_flag.NGHTTP2_NV_FLAG_NO_COPY_VALUE are set, header field name and value are not copied respectively. With nghttp2_nv_flag.NGHTTP2_NV_FLAG_NO_COPY_NAME, the application is responsible to pass the header field name in lowercase. The application should maintain the references to them until nghttp2_on_frame_send_callback or nghttp2_on_frame_not_send_callback is called.

HTTP/2 specification has requirement about header fields in the response HEADERS. See the specification for more details.

If data_prd is not NULL, it provides data which will be sent in subsequent DATA frames. This function does not take ownership of the data_prd. The function copies the members of the data_prd. If data_prd is NULL, HEADERS will have END_STREAM flag set.

This method can be used as normal HTTP response and push response. When pushing a resource using this function, the session must be configured using nghttp2_session_server_new() or its variants, and the target stream denoted by the stream_id must be reserved using nghttp2_submit_push_promise().

To send non-final response headers (e.g., HTTP status 101), don't use this function because this function half-closes the outbound stream. Instead, use nghttp2_submit_headers() for this purpose.

This function returns 0 if it succeeds, or one of the following negative error codes:

nghttp2_error.NGHTTP2_ERR_NOMEM
    Out of memory.

nghttp2_error.NGHTTP2_ERR_INVALID_ARGUMENT
    The stream_id is 0.

nghttp2_error.NGHTTP2_ERR_DATA_EXIST
    DATA or HEADERS has been already submitted and not fully processed yet. Normally, this does not happen, but when the application wrongly calls nghttp2_submit_response() twice, this may happen.

nghttp2_error.NGHTTP2_ERR_PROTO
    The session is client session.

Warning

Calling this function twice for the same stream ID may lead to program crash. It is generally considered a programming error to commit a response twice.
https://nghttp2.org/documentation/nghttp2_submit_response.html
CC-MAIN-2021-25
en
refinedweb
This is a more detailed example question that I asked for: Best Practice with classes and the database

I'm using c# with Sql Server and using Dapper as my ORM. Here is my class that will load the Component table into the application:

class DBComponent
{
    public int ID { get; set; }
    public string Component { get; set; }
    public int Material_ID { get; set; }
    public int Color_ID { get; set; }
}

Then I need my other class that will have the actual values:

class Component
{
    public int ID { get; set; }
    public string Component { get; set; }
    public string Material { get; set; }
    public string Color { get; set; }

    public Component(DBComponent C)
    {
        ID = C.ID;
        Component = C.Component;
        Material = // queries the Material Table passing in C.Material_ID and returning the Material Value
        Color = // queries the Color Table passing in C.Color_ID and returning the Color Value
    }
}

The reason I'm doing this is so that I can use the values for a WinForm application with controls (combobox) and other needs. Also the "DBComponent" class would have a method that would take a "Component" object and create a "DBComponent" object which would be sent back to the database as a new record.

Is this the best way to handle this situation? Or is there a better method? In my other post someone mentioned that dapper can do this by itself to where I won't need to create 2 classes, only need 1 class. How is that so?

You could use just one class and just one query to load your data. It is just a matter of building the correct sql query and letting Dapper do its magic in mapping the data retrieved to your Component class.
Suppose you change your Component class like this: public class Component { public int ID { get; set; } public string ComponentX { get; set; } public int Color_ID { get; set; } public int Material_ID { get; set; } public string Material { get; set; } public string Color { get; set; } } Now you can retrieve your data using a proper join between the tables: IEnumerable<Component> SelectComponents() { using (IDbConnection connection = OpenConnection()) { const string query = @"SELECT p.ID, p.Component as ComponentX, p.Color_ID, p.Material_ID, c.Color, m.Material FROM Component p JOIN Color c on p.Color_ID = c.ID JOIN Material m on p.Material_ID = m.ID"; return connection.Query<Component>(query, null); } } Notice that I have renamed the member Component to ComponentX because you can't have a member with the same name as the enclosing type.
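The single-JOIN idea in the answer is not Dapper-specific. As a language-neutral sketch of the same pattern — one query with joins mapped straight onto one class instead of per-row lookups — here is a Python illustration using the standard library's sqlite3; the table and column names mirror the question, and the sample rows are invented for the demo:

```python
import sqlite3
from dataclasses import dataclass

@dataclass
class Component:
    ID: int
    ComponentX: str   # renamed, as in the answer, to avoid clashing with the type name
    Material: str
    Color: str

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Material  (ID INTEGER PRIMARY KEY, Material TEXT);
    CREATE TABLE Color     (ID INTEGER PRIMARY KEY, Color TEXT);
    CREATE TABLE Component (ID INTEGER PRIMARY KEY, Component TEXT,
                            Material_ID INTEGER, Color_ID INTEGER);
    INSERT INTO Material  VALUES (1, 'Steel');
    INSERT INTO Color     VALUES (1, 'Red');
    INSERT INTO Component VALUES (1, 'Bracket', 1, 1);
""")

# One query resolves the foreign keys; no per-row lookups needed.
rows = conn.execute("""
    SELECT p.ID, p.Component, m.Material, c.Color
    FROM Component p
    JOIN Color    c ON p.Color_ID    = c.ID
    JOIN Material m ON p.Material_ID = m.ID
""").fetchall()

components = [Component(*row) for row in rows]
print(components[0])
```

Dapper's `Query<Component>` does the same row-to-object mapping automatically; the list comprehension above just makes that step explicit.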
https://dapper-tutorial.net/knowledge-base/45968361/transferring-database-tables-to-classes--classes-to-database-tables
Introduction In this lesson, we will learn how to program the Raspberry Pi to make an LED blink. You can play numerous tricks with an LED as you want. Now get started and you will enjoy the fun of DIY at once! Components - 1 * Raspberry Pi - 1 * Breadboard - 1 * T-Extension Board - 1 * 40-Pin Cable - 1 * LED - 1 * Resistor (220Ω) - Jumper wires Principle In this experiment, connect a 220 Ω resistor to the anode (the long pin of the LED), then the resistor to 3.3 V, and connect the cathode (the short pin) of the LED to B17 of the Raspberry Pi. We can see from the schematic diagram that the anode of the LED connects to a current-limiting resistor and then to 3.3 V. Therefore, to turn on the LED, we need to make B17 low (0 V). This can be realized by programming. Experimental Procedures Step 1: Build the circuit. For C Language Users: Step 2: Go to the folder of the code. If you use a monitor, you're recommended to take the following steps. Go to /home/pi/ and find the folder SunFounder_Super_Kit_V3.0_for_Raspberry_Pi. Find C in the folder, right-click on it and select Open in Terminal. Then a window will pop up as shown below. So now you've entered the path of the code 01_blinkLed.c. In later lessons, we will show how to get into the folder of the code by command, not with the display. You only need to find the code file and right-click Open in Terminal. You can go back to lesson 1 to check if you forgot. Certainly, the terminal can also be opened if you're using a display, and then you can use the cd command directly to go to the code path. If you log into the Raspberry Pi remotely, use "cd" to change directory: cd /home/pi/SunFounder_Super_Kit_V3.0_for_Raspberry_Pi/C Note: Change directory to the path of the code in this experiment via cd. Either way, you are now in the folder C. The subsequent procedures under the two methods are the same. Let's move on. Step 3: Compile the Code. gcc 01_blinkLed.c -o 01_blinkLed -lwiringPi Note: gcc is the GNU Compiler Collection.
Here, it compiles the C language file 01_blinkLed.c and outputs an executable file 01_blinkLed. In the command, -o means outputting and -lwiringPi is to load the library wiringPi (l is short for library). If you want to write your own C code and compile it to run, you need to master gcc. Since the gcc command is too long, you can use make to test the experimental effect of the kit, which makes the process quicker and more convenient. make 01_blinkLed Note: The make command will compile according to the rules in the Makefile. Two files will be generated after compiling: "*.o" and an executable file. Using a makefile is, in essence, writing the gcc compilation commands into an automated script. If you have written your own program in C language, you need to write and modify the makefile so as to use the make command to compile your C code. Step 4: Run the executable file output in the previous step: sudo ./01_blinkLed Note: To control the GPIO, you need to access the LED with superuser permission (sudo is not needed to control the GPIO for the Raspbian system after 2016-5-27), namely, by the command sudo. In the command, "./" indicates the current directory. The whole command is to run the 01_blinkLed in the current directory. If you want to view the code 01_blinkLed.c, press Ctrl + C to stop running the code. Then type the following command to open it: nano 01_blinkLed.c Note: nano is a text editor tool. The command is to open the code file 01_blinkLed.c with this tool. Code Explanation #include <wiringPi.h> //The hardware driver library designed for the C language of the Raspberry Pi. Adding this library is convenient for hardware initialization, I/O ports, PWM outputs, etc. #include <stdio.h> // Standard I/O library. The printf function used for printing the data displayed on the screen is realized by this library. There are many other useful functions for you to explore.
#define LedPin 0 // Pin B17 of the T-Extension Board corresponds to pin 0 in wiringPi, namely, GPIO 0. Assign GPIO 0 to LedPin; LedPin represents GPIO 0 in the code later. For Python Users: Step 2: Go to the folder of the code and run it. Open the downloaded folder SunFounder_Super_Kit_V3.0_for_Raspberry_Pi/Python and you can see them. If you use a monitor, you're recommended to take the following steps. Find 01_blinkLed.py and double-click it to open. Now you're in the file. Click Run -> Run Module in the window and the following contents will appear. To stop it from running, just click the X button on the top right to close it and then you'll be back to the code details. If you modify the code, you need to save it first before clicking Run Module (F5). Then you can see the results. If you want to log into the Raspberry Pi remotely, type in the command: cd /home/pi/SunFounder_Super_Kit_V3.0_for_Raspberry_Pi/Python Run the code: sudo python3 01_blinkLed.py Note: Here sudo means superuser do, and python3 means to run the file with Python 3. If you want to view the code 01_blinkLed.py, press Ctrl + C to stop running the code. Then type the following command to open it: nano 01_blinkLed.py Note: nano is a text editor tool. The command is to open the code file 01_blinkLed.py with this tool. import RPi.GPIO as GPIO # import the RPi.GPIO package, so Python code can control the GPIO easily with it. import time # import the time package, for the time delay function in the following program. LedPin = 17 # LED connects to B17 of the T-shape extension board, namely, BCM GPIO 17 of the Raspberry Pi (wiringPi pin 0).
# Define a setup function for some setup def setup(): GPIO.setmode(GPIO.BCM) # Set the GPIO modes to BCM Numbering # Set LedPin's mode to output, and initial level to High (3.3 V) GPIO.setup(LedPin, GPIO.OUT, initial=GPIO.HIGH) # Define a main function for the main process def main(): # Print messages print_message() while True: print ("...LED ON") # Turn on LED GPIO.output(LedPin, GPIO.LOW) # delay 0.5 second, equivalent to the delay in the C version, using seconds as the unit Press Ctrl+X to exit. If you have modified the code, there will be a prompt asking whether to save the changes or not. Type in Y (save) or N (don't save). Then press Enter to exit. Type in nano 01_blinkLed.py again to see the effect after the change. Run the code to make it work. It will be like below: Further Exploration If you want the LED to blink faster, just change the delay time. For example, change the time to delay(200) (for C) or time.sleep(0.2) (for Python) in the program, recompile and run, and then you will see the LED blink faster. Summary The Raspberry Pi hides many low-level design details, which eases your way to exploring your own apps. Maybe that is the charm of the Raspberry Pi. Now you have already learnt how to use the Raspberry Pi GPIO to blink an LED. Keep moving to the next contents. If you haven't modified the code, you do not need to run make 01_blinkLed again. make 01_blinkLed Or a message will appear: make: '01_blinkLed' is up to date This message will not appear if you run the make command after having changed and saved the code. Tips: For any TECHNICAL questions, add a topic under the FORUM section on our website and we'll reply as soon as possible.
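Putting the Python fragments above together, a minimal blink program might look like the sketch below. RPi.GPIO only exists on a Raspberry Pi, so this version falls back to a small stub class when the module is missing; the stub, the blink() wrapper and its parameters are illustrative additions, not part of the SunFounder code:

```python
import time

try:
    import RPi.GPIO as GPIO  # real GPIO driver, present only on a Raspberry Pi
except ImportError:
    class _StubGPIO:
        """Minimal stand-in so the control flow can run off-board (illustrative only)."""
        BCM, OUT, HIGH, LOW = "BCM", "OUT", 1, 0
        def setmode(self, mode): pass
        def setup(self, pin, mode, initial=None): pass
        def output(self, pin, level): pass
        def cleanup(self): pass
    GPIO = _StubGPIO()

LedPin = 17  # B17 on the T-shape extension board = BCM GPIO 17

def blink(times, delay=0.5):
    GPIO.setmode(GPIO.BCM)
    GPIO.setup(LedPin, GPIO.OUT, initial=GPIO.HIGH)  # HIGH = LED off
    for _ in range(times):
        GPIO.output(LedPin, GPIO.LOW)   # LOW sinks current through the LED: on
        time.sleep(delay)
        GPIO.output(LedPin, GPIO.HIGH)  # back to HIGH: off
        time.sleep(delay)
    GPIO.cleanup()
    return times

blink(3, delay=0.05)
```

Because the anode is tied to 3.3 V through the resistor and the cathode to B17, the pin logic is inverted: writing LOW turns the LED on, exactly as the Principle section explains.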
https://learn.sunfounder.com/lesson-1-blinking-led-9/
nghttp2_session_create_idle_stream¶ Synopsis¶ #include <nghttp2/nghttp2.h> - int nghttp2_session_create_idle_stream(nghttp2_session *session, int32_t stream_id, const nghttp2_priority_spec *pri_spec)¶ Creates an idle stream with the given stream_id and priority pri_spec. The stream creation is done without sending a PRIORITY frame, which means that the peer does not know about the existence of this idle stream in the local endpoint. RFC 7540 does not disallow the creation of an idle stream with an odd or even stream ID regardless of client or server, so this function can create an odd or even stream ID in either case. But it is probably a bit safer to use a stream ID the local endpoint can initiate (in other words, an odd stream ID for a client, and an even stream ID for a server), to avoid potential collision from the peer's instruction. Also we can use nghttp2_session_set_next_stream_id() to avoid accidentally opening created idle streams if we follow this recommendation. If session is initialized as a server, and pri_spec->stream_id points to an idle stream, the idle stream is created if it does not exist. The created idle stream will depend on the root stream (stream 0) with weight 16. Otherwise, if the stream denoted by pri_spec->stream_id is not found, we use the default priority instead of the given pri_spec; that is, we make the stream depend on the root stream with weight 16. This function returns 0 if it succeeds, or one of the following negative error codes: nghttp2_error.NGHTTP2_ERR_NOMEM Out of memory. nghttp2_error.NGHTTP2_ERR_INVALID_ARGUMENT Attempted to depend on itself; or the stream denoted by stream_id already exists; or stream_id cannot be used to create an idle stream (in other words, the local endpoint has already opened a stream ID greater than or equal to the given stream ID); or stream_id is 0
https://nghttp2.org/documentation/nghttp2_session_create_idle_stream.html
I will be on vacation starting today, and returning on Monday, August 24. During that time I will be in Maine but I will be writing Programming Silverlight 4, studying RIA Services in depth and otherwise having a geek vacation. Project Turing, the Mini-Tutorials and a series of videos are slated for the weeks following my return (though I will be on vacation again August 28-31). To bring you up to date on project status: - Turing is well under way and now has a FAQ, Table of Contents and a Definitions Page - Wiki is on hold pending legal issues - Better Videos resumes next week - Quick Bits are updated frequently - Mini-tutorials are now listed here - Community contact and presentations are being booked now September September should be very exciting as I’ll be in Redmond from September 14 through the 17th, with Silverlight Firestarter slated for September 17th and a presentation at the .NET Developers Association of Redmond on Monday September 14th (registration free, but required).
https://jesseliberty.com/2009/08/17/hey-where%E2%80%99s-jesse/
Find the minimum number of insertions needed to make a string a palindrome, given that the position of characters can be changed in the string. Example malyalam 1 Explanation If we add 'a' to the initial string, we can create a palindrome. madaam 1 Explanation Either add 'd' or 'a' to make the original string a palindrome. Algorithm - Set the length of the string to l and the output to 0. - Declare an integer array. - Store and maintain the count of each character in the string. - Traverse the array starting from 0 while i < 26. - Check if countChar[i] % 2 == 1, - If true, then do output++. - If the output is equal to 0, then return 0. - Else return output-1. Explanation You are given a string; your task is to find out the minimum insertions to be done in the string so that it becomes a palindrome. The position of characters can be changed in the string. We are going to count the occurrences of each character of the string and store them in an array. The idea behind this is that when a string is a palindrome, at most a single character can have an odd count, and only when the string length is odd. Other than that, all characters have even frequency. So we need to find the characters that occur an odd number of times. We are going to count every character in the input string and store the counts in an array. As we already mentioned, a string which can be permuted into a palindrome can only have one character which occurs an odd number of times, so the answer is one less than the number of odd-count characters. After storing every character's occurrence count in an array, we traverse the array from i = 0 while i is less than 26. This is because there are 26 lowercase characters that can possibly occur in the given string. While traversing the array, we check whether dividing each count by 2 leaves a remainder of 1; if true, we increase the count of output by 1 (output++).
After traversing the array, if the output remains zero, it means no character has an odd count, so the string can already be arranged as a palindrome and we return 0; otherwise we return (output-1), since, as already mentioned, the answer is one less than the odd-character count. Code C++ code to find Minimum insertions to form a palindrome with permutations allowed #include<iostream> using namespace std; int getMinimumInsertion(string str) { int l = str.length(), output = 0; int countChar[26] = { 0 }; for (int i = 0; i < l; i++) countChar[str[i] - 'a']++; for (int i = 0; i < 26; i++) if (countChar[i] % 2 == 1) output++; return (output == 0) ? 0 : output - 1; } int main() { string str = "malyalam"; cout << getMinimumInsertion(str); return 0; } 1 Java code to find Minimum insertions to form a palindrome with permutations allowed class insertionToPalindrome { public static int getMinimumInsertion(String str) { int l = str.length(), output = 0; int countChar[] = new int[26]; for (int i = 0; i < l; i++) countChar[str.charAt(i) - 'a']++; for (int i = 0; i < 26; i++) { if (countChar[i] % 2 == 1) output++; } return (output == 0) ? 0 : output - 1; } public static void main(String[] args) { String str = "malyalam"; System.out.println(getMinimumInsertion(str)); } } 1 Complexity Analysis Time Complexity O(n), where "n" is the number of characters in the input string. Space Complexity O(1), because we have created an extra array of constant size. Thus the space complexity is constant.
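The same counting approach transcribes directly to Python; this sketch mirrors the C++/Java versions above:

```python
def get_minimum_insertion(s: str) -> int:
    # Count occurrences of each lowercase letter.
    count = [0] * 26
    for ch in s:
        count[ord(ch) - ord('a')] += 1
    # A permutable palindrome allows at most one odd-frequency character,
    # so the answer is one less than the number of odd-count letters.
    odd = sum(1 for c in count if c % 2 == 1)
    return 0 if odd == 0 else odd - 1

print(get_minimum_insertion("malyalam"))  # 1
```

Like the other versions, it runs in O(n) time with O(1) extra space.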
https://www.tutorialcup.com/interview/string/minimum-insertions-to-form-a-palindrome-with-permutations-allowed.htm
I'm trying to make an FPS game using Unity Standard Assets, in which the user shoots arrows with a bow, requiring the arrows to be projectiles (contrary to a raycast-only FPS). I figured out how to make the arrows fire from a point of the bow and go straight somewhere, but what I actually want is to make the arrows spawn from the bow and go to where the player is aiming (aka the middle of the screen). I followed some similar questions here on Unity Answers, but for some reason the arrows don't go exactly to where the point is (they actually differ a lot from where the player is aiming). Additionally, sometimes projectile collisions won't register (the arrow sometimes passes through solid objects). Here is the added code contained in the FPS controller script that makes the player shoot arrows: Ray ray = m_Camera.ViewportPointToRay(new Vector3(0.5F, 0.5F, 0)); RaycastHit hit; // Check whether you are pointing to something so as to adjust the direction Vector3 targetPoint; if (Physics.Raycast(ray, out hit)) targetPoint = hit.point; else targetPoint = ray.GetPoint(1000); // You may need to change this value according to your needs // Create the bullet and give it a velocity according to the target point computed before GameObject arrowProjectile = Instantiate(m_ArrowPrefab, m_ArrowSpawn.position, m_ArrowPrefab.transform.rotation); Quaternion rotationOffset = Quaternion.Euler(270f, 0f, 0f); arrowProjectile.transform.rotation = m_Camera.transform.rotation * rotationOffset; Rigidbody arrowRB = arrowProjectile.GetComponent<Rigidbody>(); m_ArrowList.Add(arrowProjectile); arrowRB.velocity = (targetPoint - m_ArrowSpawn.transform.position).normalized * m_ShootForce; //arrowRB.velocity = m_Camera.transform.forward * m_ShootForce; UpdateArrowCount() // Checks if arrowList is full.
If yes, it destroys the oldest arrow object. And here is the code contained in a script attached to the arrow prefab: public class Arrow : MonoBehaviour { [SerializeField] private Rigidbody arrowRB; [SerializeField] private Collider arrowCollider; private bool hitSomething = false; Quaternion m_RotationOffset; void Start() { //arrowRB = GetComponent<Rigidbody>(); m_RotationOffset = Quaternion.Euler(270f, 0f, 0f); transform.rotation = Quaternion.LookRotation(arrowRB.velocity) * m_RotationOffset; } void Update() { if (!hitSomething) { transform.rotation = Quaternion.LookRotation(arrowRB.velocity) * m_RotationOffset; } } private void OnCollisionEnter(Collision collision) { if (collision.gameObject.tag == "Player" || collision.gameObject.tag == "MainCamera") { Physics.IgnoreCollision(collision.collider, arrowCollider); } if (collision.collider.tag != "Arrow" && collision.collider.tag != "Player" && collision.collider.tag != "MainCamera") { hitSomething = true; arrowRB.constraints = RigidbodyConstraints.FreezeAll; } } } Answer by unity_ek98vnTRplGj8Q · Jun 29, 2020 at 10:26 PM Your aiming code looks fine to me; I'm suspicious that your arrow is colliding with something that is altering the trajectory of it. I notice in your "OnCollisionEnter" script you tell the arrow collider to ignore the player and main camera colliders, but you only do it after it has already collided with them and has already altered the physics trajectory. Physics.IgnoreCollision will ignore any FUTURE collisions between the specified colliders, but will not retroactively ignore collisions that have already happened. I recommend you either use Physics.IgnoreCollision() when spawning the arrow, use physics layers and Unity's collision matrix in the physics options to ignore collisions between the arrow and the player (this is the easiest to do), or move your arrow spawn point forward so that it doesn't collide with the player on spawn.
As far as your arrow passing through objects - you can still use raycasting for projectiles. A common practice is to keep track of the arrow's position each frame as well as its previous position from the frame before, then raycast between the two positions to check for a collision. This way your arrow still flies with the same velocity as a projectile, but you will see any colliders that your arrow would normally have missed in between frames. You can also try updating the collision detection options on the rigidbody itself, but I've personally had mixed results doing this. Thank you for your answer! You were right, the arrow was indeed colliding with the player, and thus changing trajectories. I used Unity's collision matrix and the problem is gone. About arrows passing through objects: I will try your suggestion to see if it works. I did realize, however, that removing those collisions actually makes it happen more rarely.
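The frame-to-frame raycast idea in the answer is engine-agnostic. As a minimal illustration outside Unity (all names here are hypothetical), this Python sketch tests the segment an arrow travels in one frame against a spherical collider — a per-frame overlap check at the endpoints would miss a fast arrow, while the segment test does not:

```python
import math

def segment_hits_sphere(p0, p1, center, radius):
    """Return True if the segment p0 -> p1 intersects the sphere."""
    # Project the sphere center onto the segment, clamping the
    # parameter t to [0, 1] so we stay on the segment itself.
    d = [b - a for a, b in zip(p0, p1)]          # segment direction
    f = [c - a for a, c in zip(p0, center)]      # p0 -> center
    dd = sum(x * x for x in d)
    t = 0.0 if dd == 0 else max(0.0, min(1.0, sum(x * y for x, y in zip(f, d)) / dd))
    closest = [a + t * x for a, x in zip(p0, d)]
    dist2 = sum((c - q) ** 2 for c, q in zip(center, closest))
    return dist2 <= radius * radius

# A fast arrow crosses a small sphere entirely between two frames:
prev_pos, cur_pos = (0.0, 0.0, 0.0), (10.0, 0.0, 0.0)
print(segment_hits_sphere(prev_pos, cur_pos, (5.0, 0.0, 0.0), 0.5))  # True
```

In Unity the same check is usually done with Physics.Raycast (or Physics.SphereCast) from the previous position toward the current one, with the distance between the two positions as the ray length.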
https://answers.unity.com/questions/1746757/how-should-i-make-the-projectiles-in-a-fps-go-stra.html
1. Introduction and Overview 2. Key Concepts for Hardware Service Providers 3. Key Concepts for System Administrators and Application Developers Administrative Interfaces High-Availability Framework Device IDs and DID Pseudo Driver Cluster Membership Monitor Cluster Configuration Repository (CCR) Each cluster node that is physically connected to multihost disks provides a path to the storage for any node in the cluster. For Solaris Volume Manager, the volume manager namespaces are located in the /dev/md/diskset/dsk (and rdsk) directories. These namespaces consist of directories for each Solaris Volume Manager disk set imported throughout the cluster. Third-party generated device trees are still valid and continue to work. Given a local device name, an easy mapping is provided to obtain its global name. For more information, see the scdidadm(1M) man page.
https://docs.oracle.com/cd/E29086_01/html/E35254/cacdefff.html
#include <SliceSpec.H> Specifies the slice we want out of an IntVect, Box, BaseFab, etc. For IntVect and Box slicing, only the direction matters. For IntVect it refers to the element we are to remove. For Box it refers to the dimension we are to remove. Referenced by LevelData< T >::degenerate(), and LevelData< T >::degenerateLocalOnly().
http://davis.lbl.gov/Manuals/CHOMBO-RELEASE-3.2/structSliceSpec.html
Subject: Re: [boost] different matrix library? From: DE (satan66613_at_[hidden]) Date: 2009-08-15 05:38:14 on 15.08.2009 at 13:14 joel wrote : > Well, the problem is not the design per itself. What usually happens > with such lib is that you have tons of different user types > that all want disjoint set of features. use DRY (don't repeat yourself) idiom i got it > Matrix lib always looks easy to do. Except they are not. I can toss you > tons of questions : will you support openMP, well it seems trivial to me > threading, i dream about a way like #include <the_lib/lib.h> #include <the_lib/threading.h> //enabling threading for the_lib that would be perfect for me > MPI, possibly > SIMD definitely yes > extensions, there must be a common way for common users > will you embed loop level optimization based on expression > introspection ? sooner or later > Will you interface looks like matlab, maple or > mathematica ? i prefer to model STL interfaces where appropriate in general: as common as possible > etc ... Not even counting the things we barely scratched > like storage mode, allocation mode etc... of course it is such a missing feature it must be there > That's why I'm avoiding to comment your code cause I'm developing > something similar but for a somehow different audience than yours and my > view will prolly be radically different than yours. but i can get some useful stuff from a radically different design and utilize it to make the design better > I can also only reiterate the fact that I have a somehow large code base > for such a thing that's maybe worth reusing. sorry but i didn't get the point > Three heads are better than two I think. indeed -- Pavel Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2009/08/154980.php
Using @Autowired in Abstract Classes Last modified: March 20, 2019 In this tutorial, we'll explain how to use the @Autowired annotation in abstract classes. We'll apply @Autowired to an abstract class, and focus on the important points which we should take into account. 2. Setter Injection We can use @Autowired on a setter method: public abstract class BallService { private LogRepository logRepository; @Autowired public final void setLogRepository(LogRepository logRepository) { this.logRepository = logRepository; } } When we use @Autowired on a setter method, we should use the final keyword, so that the subclass can't override the setter method. Otherwise, the annotation won't work as we expect. 3. Constructor Injection We can't use @Autowired on a constructor of an abstract class. Spring doesn't evaluate the @Autowired annotation on a constructor of an abstract class. The subclass should provide the necessary arguments to the super constructor. Instead, we should use @Autowired on the constructor of the subclass: public abstract class BallService { private RuleRepository ruleRepository; public BallService(RuleRepository ruleRepository) { this.ruleRepository = ruleRepository; } } @Component public class BasketballService extends BallService { @Autowired public BasketballService(RuleRepository ruleRepository) { super(ruleRepository); } } 4. Cheat Sheet Let's just wrap up with a few rules to remember. First, an abstract class isn't component-scanned since it can't be instantiated without a concrete subclass. Second, setter injection is possible in an abstract class, but it's risky if we don't use the final keyword for the setter method. The application may not be stable if a subclass overrides the setter method. Third, as Spring doesn't support constructor injection in an abstract class, we should generally let the concrete subclasses provide the constructor arguments. This means that we need to rely on constructor injection in concrete subclasses.
And finally, using constructor injection for required dependencies and setter injection for optional dependencies is a good rule of thumb. However, as we can see with some of the nuances with abstract classes, constructor injection is more favorable here in general. So, really we can say that a concrete subclass governs how its abstract parent gets its dependencies. Spring will do the injection as long as Spring wires up the subclass. 5. Conclusion In this article, we practiced using @Autowired within an abstract class and explained a few but important key points. The source code of this tutorial can be found in the Github project as usual.
https://www.baeldung.com/spring-autowired-abstract-class
Latest version on sid errors I run Debian sid along with Catfish built from latest bzr, and though the .deb builds fine, I encounter the following error when trying to run: $ catfish Traceback (most recent call last): File "bin/catfish.py", line 50, in <module> import catfish File "/usr/share/ from catfish import CatfishWindow File "/usr/share/ from catfish_lib import Window, CatfishSettings ImportError: cannot import name CatfishSettings I could not reproduce the error on my Xubuntu 13.10 VM, so I'm wondering if something's wrong/missing with my python on Debian or whether something's wrong with Catfish. Question information - Language: - English Edit question - Status: - Solved - For: - Catfish Edit question - Assignee: - No assignee Edit question - Solved by: - Sean Davis - Solved: - 2013-07-27 - Last query: - 2013-07-27 - Last reply: - 2013-07-25 Yeah, I had missed that, but it should be fixed in the latest commits. Thanks, but I don't see it currently (or did you mean it's going to be done in the next commit?) http:// Thanks Sean Davis, that solved my question. Okay I had to add the CatfishSettings.py file to Makefile.in.in
https://answers.launchpad.net/catfish-search/+question/232758
Change history¶ 4.5.0¶ The Redis transport now supports a custom separator for keys. Previously when storing a key in Redis which represents a queue we used the hardcored value \x06\x16separator to store different attributes of the queue in the queue’s name. The separator is now configurable using the sep transport option:#906 and introduce unique broadcast queue names as an optional keyword argument. If you want each broadcast queue to have a unique name specify unique=True: >>> from kombu.common import Broadcast >>> q = Broadcast(>> q = Broadcast(queue='foo') >>> q.name 'foo' Codebase improvements and fixes by: - Omer Katz 4.4.0¶ Restore bz2 import checks in compression module. The checks were removed in celery/kombu#9#954. Instead bump the required redis-py dependency to 3.2.0 to include this fix andymccurdy/redis-py@4e1e748. Contributed by Peter Lithammer Added support for broadcasting using a regular expression pattern or a glob pattern to multiple Pidboxes. Contributed by Jason Held 4.3.0¶is now reentrant. This allows callingQueueto pass queue_arguments to Queue object. This allows :class: kombu.simple.SimpleQueueto connect to RabbitMQ queues with custom arguments like ‘x-queue-mode’=’lazy’. Contributed by C Blue Neeh Add support for ‘redAttributes ‘f’#415 there is a mismatch between the monkey-patched eventlet queue and the interface Kombu is expecting. This causes Celery to crash when the broker_pool_limit configuration option is set eventlet/eventlet#415` 4.2.2-post1¶ Note The previous release contained code from master. It is now deleted from PyPi. Please use this release instead. - No changes since previous release. 4.2.2¶ Support both Redis client version 2.x and version 3.x. Contributed by Ash Berlin-Taylor and Jeppe Fihl-Pearson 4.2.1¶ Note The 4.2.0 release contained remains of the async module by accident. This is now fixed. Handle librabbitmq fileno raising a ValueError when socket is not connected. 
Contributed by Bryan Shelton 4.2.0¶ Now passing max_retries, interval_start, interval_step, interval_maxparameters from broker transport_optionsto ensure_connection()when returning 4.1.0¶ SQS: Added support for long-polling on all supported queries. Fixed bug causing error on parsing responses with no retrieved messages from SQS. Contributed by Anthony Lukach. Async hub: Fixed potential infinite loop while performing todo tasks (Issue celery/celery#3712). Qpid: Fixed bug where messages could have duplicate delivery_tag(Issue #563). Contributed by bmbouter. MongoDB: Fixed problem with using readPreferenceoption at pymongo 3.x. Contributed by Mikhail Elovskikh. Re-added support for :pypi: SQLAlchemy Contributed by Amin Ghadersohi. SQS: Fixed bug where hostname would default to localhostifin :func: generate_oidif :func: uuid.uuid3fails. boto3 (no longer using boto). - Adds support for Python 3.4+ - Adds support for FIFO queues (Issue #678) and (Issue celery/celery#3690) - Avoids issues around a broken endpoints file (Issue celery/celery#3672) Contributed by Mischa Spiegelmock and Jerry Seutter. Zookeeper: Added support for delaying task with Python 3. Contributed by Dima Kurguzov. SQS: Fixed bug where kombu.transport.SQS.drain_events()did not support callback argument (Issue #694). Contributed by Michael Montgomery. Fixed bug around modifying dictionary size while iterating over it (Issue #675). Contributed by Felix Yan. etcd: Added handling for EtcdExceptionexception rather than EtcdError. Contributed by Stephen Milner. Documentation improvements by: - Mads Jensen - Matias Insaurralde - Omer Katz - Dmitry Dygalo - Christopher Hoskin 4.0.2¶ Now depends on amqp2.1.4 This new version takes advantage of TCP Keepalive settings on Linux, making it better at detecting closed connections, also in failover conditions. Redis: Priority was reversed so, e.g. priority 0 became priority 9. 
4.0.1¶ Now depends on amqp2.1.3 This new version takes advantage of the new TCP_USER_TIMEOUTsocketerror (Issue #652). Contributed by Toomore Chiang. Safe argument to urllib.quotemust be bytes on Python 2.x (Issue #645). Documentation improvements by: - Carlos Edo - Cemre Mengu 4.0¶ Now depends on amqp2.0. The new py-amqp version have been refactored for better performance, using modern Python socket conventions, and API consistency. No longer depends on anyjson. Kombu will now only choose between simplejson and the built-in json. Using the latest version of simplejson is recommended: $ pip install -U simplejson Removed transports that are no longer supported in this version: - Django ORM transport - SQLAlchemy ORM transport - Beanstalk transport - ZeroMQ transport - amqplib transport (use pyamqp). API Changes Signature of kombu.Messagenow: sentinel://0.0.0.0:26379;sentinel://0.0.0.0:26380/... where each sentinel is separated by a ;. Multiple sentinels are handled by kombu.Connectionconstructor, and placed in the alternative list of servers to connect to in case of connection failure. Contributed by Sergey Azovskov, and Lorenzo Mancini RabbitMQ Queue Extensions New arguments have been added to kombu.Queuethat lets you directly and conveniently configure the RabbitMQ queue extensions.. RabbitMQ: Message.acknow supports the multipleargument. If multiple is set to True, then all messages received before the message being acked will also be acknowledged. amqps://can now be specified to require SSL (Issue #610). Consumer.cancel_by_queueis now constant time. Connection.ensure*now raises kombu.exceptions.OperationalError. Things that can be retried are now reraised as kombu.exceptions.OperationalError. Redis: Fixed SSL support. Contributed by Robert Kolba. New Queue.consumer_argumentscan be used for the ability to set consumer priority via x-priority. 
See Example:

  Queue(
      'qname',
      exchange=Exchange('exchange'),
      routing_key='qname',
      consumer_arguments={'x-priority': 3},
  )

- Queue/Exchange: no_declare option added (also enabled for internal amq. exchanges) (Issue #565).
- JSON serializer: now handles datetimes, Django promise, UUID and Decimal.
- Beanstalk: Priority 0 is now lowest, 9 is highest. (backward incompatible) This is to match how priorities in AMQP work. Fix contributed by Alex Koshelev.
- Redis: now supports SSL using the ssl argument to Connection.
- Redis: Fanout exchanges are no longer visible between vhosts, and fanout messages can be filtered by patterns. (backward incompatible) It was possible to enable this mode previously using the fanout_prefix and fanout_patterns transport options, but now these are enabled by default. If you want to mix and match producers/consumers running different versions you need to configure your kombu 3.x clients to also enable these options.
- …None, and the default is instead set by Producer.publish.
- Consumer now supports a new prefetch_count argument, which if provided will force the consumer to set an initial prefetch count just before starting.
- Virtual transports now store priority as a property, not in delivery_info, to be compatible with AMQP.
- The reply_to argument to Producer.publish can now be a Queue instance.
- New kombu.mixins.ConsumerProducerMixin.
- The deprecated method Consumer.add_queue_from_dict has been removed. Use instead: consumer.add_queue(Queue.from_dict(queue_name, **options))
- The deprecated function kombu.serialization.encode has been removed. Use kombu.serialization.dumps() instead.
- The deprecated function kombu.serialization.decode has been removed. Use kombu.serialization.loads() instead.
- Removed module kombu.syn: detect_environment has been moved to kombu.utils.compat

3.0.37
- Connection: Return value of .info() was no longer JSON serializable, leading to "itertools.cycle object not JSON serializable" errors (Issue #635).

3.0.36
3.0.35
- …(requires redis-py > 2.10); socket_keepalive_options (requires redis-py > 2.10)
- msgpack: Fixes support for binary/unicode data

3.0.34
- Qpid: Adds async error handling. Contributed by Brian Bouterse.
- Qpid: Delivery tag is now a UUID4 (Issue #563). Fix contributed by Brian Bouterse.
- Redis: Connection.as_uri() returned malformed URLs when the redis+socket scheme was used (Issue celery/celery#2995).
- msgpack: Use binary encoding instead of utf-8 (Issue #570).

3.0.33
- Now depends on amqp 1.4.9.
- Redis: Fixed problem with auxiliary connections causing the main consumer connection to be closed (Issue #550).
- Qpid: No longer uses threads to operate, to ensure compatibility with all environments (Issue #531).

3.0.32
- Redis: Fixed bug introduced in 3.0.31 where the redis transport always connects to localhost, regardless of host setting.

3.0.31
- Redis: Fixed bug introduced in 3.0.30 where socket was prematurely disconnected.
- Hub: Removed debug logging message: "Deregistered fd…" (Issue #549).

3.0.30

3.0.29

3.0.28
- Django transport migrations. If you're using Django 1.8 and have already created the kombu_transport_django tables, you have to run a fake initial migration:
  $ python manage.py migrate kombu_transport_django --fake-initial
- No longer compatible with South by default. To keep using kombu.transport.django with South migrations you now need to configure a new location for the kombu migrations:
  SOUTH_MIGRATION_MODULES = {
      'kombu_transport_django': 'kombu.transport.django.south_migrations',
  }
- Keep old South migrations in kombu.transport.django.south_migrations.
- Now works with Redis < 2.10 again.

3.0.27
- Now depends on amqp 1…
- …option to modify how consumer tags are generated (Issue #509).

3.0.25
- pyamqp/librabbitmq now uses 5671 as the default port when SSL is enabled (Issue #459).
- Redis: Now supports passwords in redis+socket://:pass@host:port URLs (Issue #460).
- Producer.publish now defines the expiration property.
- bindings is now JSON serializable (Issue #453).
Contributed by Sergey Tikhonov.
- Fixed typo in error when yaml is not installed (said msgpack). Contributed by Joshua Harlow.
- Redis: Now properly handles redis.exceptions.TimeoutError raised by …exceptions. Contributed by Brian Bouterse.
- Queue.__repr__ now makes sure the return value is not unicode (Issue #440).
- qpid: Queue.purge incorrectly raised AttributeError if the queue does not exist (Issue #439). Contributed by Brian Bouterse.
- Linux: Now ignores permission errors on epoll unregister.

3.0.24
- The Qpid broker is supported for Python 2.x environments. The Qpid transport includes full SSL support within Kombu. See the kombu.transport.qpid docs for more info. Contributed by Brian Bouterse and Chris Duryee through support from Red Hat.
- Dependencies: extra[librabbitmq] now requires librabbitmq 1.6.0
- Docstrings for TokenBucket…
- …argument to Producer (Issue #423).
- Django: Fixed app_label for older Django versions (< 1.7) (Issue #414).

3.0.23
- Django: Fixed bug in the Django 1.7 compatibility improvements related to autocommit handling. Contributed by Radek Czajka.
- Django: The Django transport models would not be created on syncdb after app label rename (Issue #406).

3.0.22
- …django to work with recent changes in Django 1.7.
- SimpleQueue removed messages from the wrong end of the buffer (Issue #380).
- Tests: Now using unittest.mock if available (Issue #381).

3.0.21
- Fixed remaining bug in maybe_declare for auto_delete exchanges. Fix contributed by Roger Hu.
- MongoDB: Creating a channel now properly evaluates a connection (Issue #363). Fix contributed by Len Buckens.

3.0.20
- Reverts change in 3.0.17 where maybe_declare caches the declaration of auto_delete queues and exchanges. Fix contributed by Roger Hu.
- Redis: Fixed race condition when using gevent and the channel is closed. Fix contributed by Andrew Rodionoff.

3.0.19
- The wheel distribution did not support Python 2.6 by failing to list the extra dependencies required.
- Durable and auto_delete queues/exchanges can be cached using maybe_declare.

3.0.17

3.0.16
- kombu[librabbitmq] now depends on librabbitmq 1.5.1.
- Redis: Fixes TypeError problem.

3.0.15
- Now depends on amqp 1…
- …_global flag appropriately:

  def update_prefetch_count(channel, new_value):
      channel.basic_qos(
          0, new_value,
          not channel.connection.client.qos_behavior_matches_spec,
      )

- Users of librabbitmq…

3.0.14
- MongoDB: Now endures a connection failover (Issue #123). Fix contributed by Alex Koshelev.
- MongoDB: Fixed KeyError when… …from attempting to close a non-existing connection (Issue #320).

3.0.13
- …fanout_patterns transport option:

  >>> conn = kombu.Connection('redis://', transport_options={
  ...     'fanout_patterns': True,
  ... })

  When enabled the exchange will work like an amqp topic exchange if the binding key is a pattern. This is planned to be the default behavior in the future.
- Redis: Fixed cycle "no such attribute" error.

3.0.12
- Now depends on amqp 1…

3.0.11
- Now depends on amqp 1.4.2.
- Now always trusts messages of type application/data and application/text, or which have an unspecified content type (Issue #306).
- Compression errors are now handled as decode errors and will trigger the Consumer.on_decode_error callback if specified.
- New kombu.Connection.get_heartbeat_interval() method that can be used to access the negotiated heartbeat value.
- kombu.common.oid_for no longer uses the MAC address of the host, but…

3.0.10
- Now depends on amqp 1.4.1.
- maybe_declare now raises a "recoverable connection error" if the channel is disconnected instead of a ChannelError, so that the operation can be retried.
- Redis: Consumer.cancel() is now thread safe. This fixes an issue when using gevent/eventlet and a message is handled after the consumer is canceled.

3.0.9
- Now depends on amqp 1.4.0.
- Redis: Basic cancel for fanout based queues now sends a corresponding UNSUBSCRIBE command to the server.
This fixes an issue with pidbox where reply messages could be received after the consumer was canceled. …consume now… …close now sets .poller to None.

3.0.8
- Serializer: loads and dumps now wrap exceptions raised into DecodeError and kombu.exceptions.EncodeError respectively.
- …and OSError are now treated as recoverable connection errors.
- SQS: Improved performance by reading messages in bulk. Contributed by Matt Wise.
- Connection Pool: Attempting to acquire from a closed pool will now raise RuntimeError.

3.0.6
- …is now a named tuple.

3.0.5
- Now depends on amqp 1…

3.0.4
- common.QoS: decrement_eventually now makes sure the value does not go below 1 if a prefetch count is enabled.

3.0.3
- SQS: Properly reverted patch that caused delays between messages. Contributed by James Saryerwinnie
- select: Clear all registered fds on poller.close
- Eventloop: unregister if EBADF raised.

3.0.2
- Now depends on amqp version 1.3.2.
- select: Fixed problem where unregister did not properly remove the fd.

3.0.1
- Now depends on amqp version… if eventlet/gevent used.
- Pidbox: Fixes problem where the expires header was None, which is a value not supported by the amq protocol.
- ConsumerMixin: New consumer_context method for starting the consumer without draining events.

3.0.0
- Now depends on amqp version 1.3.
- No longer supports Python 2.5. The minimum Python version supported is now Python 2.6.0 for Python 2, and Python 3.3 for Python 3.
- …argument was first supported for consumers in version 2.5.10, and first supported by Queue.get…
- …asyncio module. It's not a complete implementation obviously, but the goal is that it will be easy to change to it once that is possible.
- Utility function kombu.common.ipublish has been removed. Use Producer(..., retry=True) instead.
- Utility function kombu.common.isend_reply has been removed. Use send_reply(..., retry=True) instead.
- kombu.common.entry_to_queue and kombu.messaging.entry_to_queue have been removed. Use Queue.from_dict(name, **options) instead.
- Redis: Messages are now restored at the end of the list. Contributed by Mark Lavin.
- StdConnectionError and StdChannelError are removed, and amqp.ConnectionError and amqp.ChannelError are used instead.
- The Message object implementation has moved to kombu.message.Message.
- Serialization: Renamed functions encode/decode to dumps() and loads(). For backward compatibility the old names are still available as aliases.
- The kombu.log.anon_logger function has been removed. Use get_logger() instead.
- queue_declare now returns a namedtuple with queue, message_count, and consumer_count fields.
- LamportClock: Can now set the lock class.
- kombu.utils.clock: Utilities for ordering events added.
- SimpleQueue now allows you to override the exchange type used. Contributed by Vince Gonzales.
- Zookeeper transport updated to support new changes in the kazoo library.
- …has been removed: Use multiprocessing.util.Finalize instead.
- …and message_tablename transport options. Contributed by Ryan Petrello.
- Redis transport: Now supports using local UNIX sockets to communicate with the Redis server (Issue #1283). To connect using a UNIX socket you have to use the redis+socket URL-prefix: redis+socket:///tmp/redis.sock. This functionality was merged from the celery-redis-unixsocket project. Contributed by Maxime Rouyrre.
- ZeroMQ transport: drain_events now supports timeout. Contributed by Jesper Thomschütz.

2.5.15
- Declaration cache: Now only keeps a hash of the declaration so that it does not keep a reference to the channel.
- Declaration cache: Now respects the entity.can_cache_declaration attribute.
- Fixes Python 2.5 compatibility.
- Fixes tests after python-msgpack changes.
- Queue.get: Now supports the accept argument.

2.5.14
- safe_str did not work properly, resulting in UnicodeDecodeError (Issue #248).

2.5.13
- Now depends on amqp 1…

2.5.12
- Redis: Ignore errors about keys missing in the round-robin cycle.
- Fixed test suite errors on Python 3.
- Fixed msgpack test failures.

2.5.11
- Now depends on amqp 1.0.12 (Py3 compatibility issues).
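The queue_declare entry above says the return value became a three-field namedtuple. A minimal sketch of that shape, using only the standard library (not kombu itself; the field names come straight from the entry, the type name is illustrative):

```python
from collections import namedtuple

# Sketch of the declare-ok result shape described in the 3.0.0 notes:
# queue name, number of ready messages, and number of attached consumers.
QueueDeclareOk = namedtuple('QueueDeclareOk',
                            ['queue', 'message_count', 'consumer_count'])

result = QueueDeclareOk(queue='tasks', message_count=12, consumer_count=3)

# Fields are accessible by name instead of by tuple index,
# while the value still behaves as a plain tuple.
print(result.queue, result.message_count, result.consumer_count)
```

Being a namedtuple, the result stays backward compatible with code that unpacked the old positional tuple.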
- MongoDB: Removed cause of a "database name in URI is being ignored" warning. Fix by Flavio Percoco Premoli
- Adds passive option… …method, that can be used to clean up after connections without I/O.
- queue_bind is no longer called for queues bound to the "default exchange" (Issue #209). Contributed by Jonathan Halcrow.
- The max_retries setting for retries was not respected correctly (off by one).

2.5.10

Note about upcoming changes for Kombu 3.0: Kombu 3 consumers will no longer accept pickle/yaml or msgpack by default, and you will have to explicitly enable untrusted deserializers either globally using kombu.enable_insecure_serializers(), or using the accept argument to Consumer.

Changes:
- New utility function to disable/enable untrusted serializers.
- Consumer: accept can now be used to specify a whitelist of content types to accept. If the accept whitelist is set and a message is received with a content type that is not in the whitelist then a ContentDisallowed exception is raised. Note that this error can be handled by the already existing on_decode_error callback. Examples:
  Consumer(accept=['application/json'])
  Consumer(accept=['pickle', 'json'])
- Now depends on amqp 1.0.11
- pidbox: Mailbox now supports the accept argument.
- Redis: More friendly error for when keys are missing.
- Connection URLs: The parser did not work well when there were multiple '+' tokens.

2.5.9
- …and driver_name attributes. Fix contributed by Mher Movsisyan.
- Fixed bug with kombu.utils.retry_over_time when no errback specified.

2.5.8
- Now depends on amqp 1…
- …boto not being available. Fix contributed by Ephemera.

2.5.7
- Now depends on amqp 1.0.9
- Redis: A regression in 2.5.6 caused the redis transport to ignore options set in transport_options.
- Redis: New socket_timeout transport option.
- Redis: InconsistencyError is now regarded as a recoverable error.
- Resource pools: Will no longer attempt to release a resource that was never acquired.
- MongoDB: Now supports the ssl option. Contributed by Sebastian Pawlus.
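The accept whitelist described under 2.5.10 above can be sketched without a broker: reject any message whose content type is not whitelisted. ContentDisallowed below is a stand-in class and the alias table is an assumption for illustration (it mirrors how short names like 'json' expand to full content types), not kombu's actual tables:

```python
class ContentDisallowed(Exception):
    """Stand-in for the exception of the same name described above."""

# Hypothetical alias table: short names expand to full content types,
# so Consumer(accept=['json']) behaves like accept=['application/json'].
ALIASES = {'json': 'application/json',
           'pickle': 'application/x-python-serialize'}

def check_accept(content_type, accept):
    """Raise ContentDisallowed if content_type is outside the whitelist."""
    if accept is None:           # no whitelist configured: accept everything
        return True
    allowed = {ALIASES.get(name, name) for name in accept}
    if content_type not in allowed:
        raise ContentDisallowed(
            'Refusing to deserialize untrusted content of type %s'
            % content_type)
    return True

check_accept('application/json', ['json'])   # whitelisted, passes
```

As the entry notes, in a real consumer this error would typically be routed to the on_decode_error callback rather than crashing the receive loop.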
2.5.6
- Now depends on amqp 1.0.8, which works around a bug found on some Python 2.5 installations where 2**32 overflows to 0.

2.5.5

2.5.4
- Fixed problem with connection clone and multiple URLs (Issue #182). Fix contributed by Dane Guempel.
- zeromq: Now compatible with libzmq 3.2.x. Fix contributed by Andrey Antukh.
- Fixed Python 3 installation problem (Issue #187).

2.5.1
- Fixed bug where the return value of Queue.as_dict could not be serialized with JSON (Issue #177).

2.5.0
- …that defines how ensure_connection() / ensure() / kombu.Connection.autoretry() will reconnect in the event of connection failures. The default reconnection strategy is round-robin, which will simply cycle through the list forever, and there's also a shuffle strategy.
- Queue now…
- …class used as a thread-safe way to manage changes to a consumer or channel's prefetch_count. This was previously an internal class used in Celery, now moved to the kombu.common module.
- Consumer now supports an on_message callback that can be used to process raw messages (not decoded). Other callbacks specified using the callbacks argument, and the receive method, will not be called when an on_message callback is present.
- New utility kombu.common.ignore_errors() ignores connection and channel errors. Must only be used for cleanup actions at shutdown or on connection loss.
- Support for exchange-to-exchange bindings. The Exchange entity gained bind_to and unbind_from methods:

  e1 = Exchange('A')(connection)
  e2 = Exchange('B')(connection)
  e2.bind_to(e1, routing_key='rkey', arguments=None)
  e2.unbind_from(e1, routing_key='rkey', arguments=None)

  This is currently only supported by the pyamqp transport. Contributed by Rumyana Neykova.

2.4.10
- The previous version's connection pool changes broke Redis support so that it would always connect to localhost (default setting) no matter what connection parameters were provided (Issue #176).

2.4.9
- …no longer tries to call the non-existent Producer._close.
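The two reconnection strategies named under 2.5.0 above (round-robin over the alternates list, and shuffle) can be illustrated with plain iterators. Only the strategy names come from the changelog; the helper itself is a sketch, not kombu's implementation:

```python
import itertools
import random

def failover_hosts(hosts, strategy='round-robin'):
    """Yield candidate hosts forever, in the order a client would retry them."""
    if strategy == 'shuffle':
        hosts = list(hosts)
        while True:
            # Re-shuffle before each pass so retry order is randomized,
            # but every host is still tried once per round.
            random.shuffle(hosts)
            for host in hosts:
                yield host
    else:
        # round-robin: simply cycle through the list forever.
        for host in itertools.cycle(hosts):
            yield host

it = failover_hosts(['broker1', 'broker2'])
print([next(it) for _ in range(5)])
```

Round-robin gives deterministic, evenly spread retries; shuffle avoids all clients hammering the same first alternate after a shared outage.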
- librabbitmq: Now implements transport.verify_connection so that connection pools will not give back connections that are no longer working.
- New and better repr() for Queue and Exchange objects.
- Python 3: Fixed problem with running the unit test suite.
- Python 3: Fixed problem with the JSON codec.

2.4.8
- Redis: Improved fair queue cycle implementation (Issue #166). Contributed by Kevin McCarthy.
- Redis: Unacked message restore limit is now unlimited by default. Also, the limit can now be configured using the unacked_restore_limit transport option.

2.4.7

2.4.6
- Adds additional compatibility dependencies:
  - Python <= 2.6: importlib, ordereddict
  - Python <= 2.5: simplejson

2.4.4
- amqplib: Fixed a bug with asynchronously reading large messages.
- pyamqp: Now requires amqp 0.9.3
- Cleaned up test requirements.

2.4.1
- Redis: Fixed race condition that could cause the consumer to crash (Issue #151), often leading to the error message "could not convert string to float".
- Connection retry could cause an infinite loop (Issue #145).
- The amqp alias is now resolved at runtime, so that eventlet detection works even if patching was done later.

2.4.0
- New experimental ZeroMQ (kombu.transport.zmq) transport. Contributed by John Watson.
- Redis: Ack timed-out messages were not restored when using the eventloop.
- Now uses pickle protocol 2 by default to be cross-compatible with Python 3. The protocol can also now be changed using the PICKLE_PROTOCOL environment variable.
- Adds Transport.supports_ev attribute.
- Pika: Queue purge was not working properly. Fix contributed by Steeve Morin.
- The Pika backend was no longer working since Kombu 2.3. Fix contributed by Steeve Morin.

2.3.1
- librabbitmq: Can now handle messages that do not have a content_encoding/content_type set (Issue #149). Fix contributed by C Anthony Risinger.
- Beanstalk: Now uses localhost by default if the URL does not contain a host.

2.3.0
- …library: $ …is not installed, and librabbitmq will also be updated to support the same features.
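The environment-variable override that 2.4.0 describes for PICKLE_PROTOCOL can be sketched like this. The mechanism (read the variable once, fall back to protocol 2) is a guess for illustration, not kombu's actual code:

```python
import os
import pickle

# Default to protocol 2 for Python 2/3 cross-compatibility, but allow an
# override through the PICKLE_PROTOCOL environment variable, as the
# 2.4.0 entry describes.
PICKLE_PROTOCOL = int(os.environ.get('PICKLE_PROTOCOL', 2))

payload = pickle.dumps({'task': 'add', 'args': (2, 2)},
                       protocol=PICKLE_PROTOCOL)
# The round trip is protocol-independent on the reading side:
restored = pickle.loads(payload)
```

pickle.loads does not need to be told the protocol; it is recorded in the serialized stream, which is why only the producer side reads the variable.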
- Connection now supports a heartbeat argument. If enabled you must make sure to manually maintain heartbeats by calling Connection.heartbeat_check.
- …has been added for the ability to inspect if a transport supports heartbeats or not.
- Calling heartbeat_check…

2.2.5
- Pidbox: Now sets queue expire at 10 seconds for reply queues.
- EventIO: Now ignores ValueError raised by epoll unregister.
- MongoDB: Fixes Issue #142. Fix by Flavio Percoco Premoli

2.2.4
- …flag set.
- New experimental filesystem transport. Contributed by Bobby Beever.
- Virtual Transports: Now support anonymous queues and exchanges.

2.2.3
- BrokerConnection is now renamed to Connection. The name Connection has been an alias for a very long time, but now the rename is official in the documentation as well. The Connection alias has been available since version 1.1.3, and BrokerConnection will still work and is not deprecated.
- Connection.clone() now works for the sqlalchemy transport.
- kombu.common.eventloop(), kombu.utils.uuid(), and kombu.utils.url.parse_url() can now be imported from the kombu module directly.
- The Pidbox transport callback after_reply_message_received now happens in a finally block.
- Trying to use the librabbitmq:// transport will now show the right name in the ImportError if librabbitmq is not installed. The librabbitmq transport falls back to the older pylibrabbitmq name for compatibility reasons and would therefore show "No module named pylibrabbitmq" instead of librabbitmq.

2.2.2
- Now depends on anyjson 0.3.3
- Json serializer: Now passes buffer objects directly, since this is supported in the latest anyjson version.
- Fixes blocking epoll call if timeout was set to 0. Fix contributed by John Watson.
- setup.py now takes requirements from the requirements/ directory.
- The distribution directory contrib/ is now renamed to extra/

2.2.1
- SQS: Default visibility timeout is now 30 minutes. Since we have ack emulation, the visibility timeout is only in effect if the consumer is abruptly terminated.
- The retry argument to Producer.publish now…
- Buffer can now be bound to connections (which will use the default channel).
- Connection.manager.get_bindings now…

2.2.0

Important Notes:
- …variable must be set to the target RabbitMQ virtual host, and the URL must be the AMQP URL to the server.
- The amqp transport alias will now use librabbitmq if…

News:
- …field_steps transport.

Fixes:
- eventio: Now ignores ENOENT raised by epoll.register, and EEXIST from epoll.unregister.
- eventio: kqueue now ignores KeyError on unregister.
- Redis: Message.reject now supports the requeue argument.

Nonblocking consume support:
- …now …if there is more data to read. This is to support eventloops where other things must be handled between draining events.

2.1.8
- Bound Exchange/Queue instances are now pickleable.
- Consumer/Producer can now be instantiated without a channel, and only later bound using .revive(channel).
- ProducerPool now takes a Producer argument.

2.1.6

2.1.5
- The url parser removed more than the first leading slash (Issue #121).
- SQLAlchemy: Can now specify the url using the + separator. Example: Connection('sqla+mysql://localhost/db')
- Better support for anonymous queues (Issue #116). Contributed by Michael Barrett.
- Connection.as_uri now quotes url parts (Issue #117).
- Beanstalk: Can now set message TTR as a message property. Contributed by Andrii Kostenko

2.1.4

2.1.1

2.1.0
- MongoDB: Now supports fanout (broadcast) (Issue #98). Contributed by Scott Lyons.
- amqplib: Now detects broken connections by using MSG_PEEK.
- pylibrabbitmq: Now supports basic_get (Issue #97).
- gevent: Now always uses the select polling… (…_INTERVAL setting).
- Adds convenience function: kombu.common.eventloop().

2.0.0

Important Notes:

Python Compatibility:
- No longer supports Python 2.4. Users of Python 2.4 can still use the 1.x series. The 1.x series has entered bugfix-only maintenance mode, and will stay that way as long as there is demand, and a willingness to maintain it.

New Transports:
- django-kombu is now part of Kombu core.
The Django message transport uses the Django ORM to store messages. It uses polling, with a default polling interval of 5 seconds. The polling interval can be increased or decreased by configuring the KOMBU_POLLING_INTERVAL Django setting. …a URL: django://, and then add kombu.transport.django to INSTALLED_APPS, and run manage.py syncdb to create the necessary database tables.

Upgrading: If you have previously used django-kombu, then the entry in INSTALLED_APPS must be changed from djkombu to…

- …is now part of Kombu core. This change requires no code changes given that the sqlalchemy transport alias is used.

News:
- kombu.mixins.ConsumerMixin is… …now supports automatic retry.
- Producer.publish now supports a declare keyword argument. This is a list of entities (Exchange, or Queue) that should be declared before the message is published.

1.5.1
- Fixes issue with kombu.compat introduced…

1.5.0
- …_queue transport option:

  >>> x = Connection('redis://',
  ...                transport_options={'deadletter_queue': 'ae.undeliver'})

  In addition, an UndeliverableWarning is now emitted when the dead-letter queue is enabled and a message ends up there. Contributed by Ionel Maries Cristian.
- MongoDB transport now supports replica sets (Issue #81). Contributed by Ivan Metzlar.
- The Connection.ensure methods now accept a max_retries value of 0. A value of 0 now means do not retry, which is distinct from None, which means retry indefinitely. Contributed by Dan McGee.
- SQS Transport: Now has a lowercase sqs alias, so that it can be used with broker URLs (Issue #82). Fix contributed by Hong Minhee
- SQS Transport: Fixes KeyError on message acknowledgments.

1.4.2

1.4.1
- 1.4.0 broke the producer pool, resulting in new connections being established for every acquire.

1.4.0
- Adds module kombu.mixins.
This module contains a ConsumerMixin class that can be used to easily implement a message consumer thread that consumes messages from one or more kombu.Consumer instances.
- …attribute that can be used to check if the connection instance has established a connection.
- ConnectionPool.acquire_channel now…
- …now contains an implementation of Lamport's logical clock.

1.3.2
- Broke Python 2.5 compatibility by importing parse_qsl from urlparse
- Connection.default_channel is now closed when the connection is revived after connection failures.
- Pika: Channel now supports the connection.client attribute as required by the simple interface.
- pools.set_limit now raises an exception if the limit is lower than the previous limit.
- pools.set_limit no longer resets the pools.

1.3.1
- The last release broke after-fork pool reinitialization.
- Producer/Consumer now has a connection attribute, giving access to the Connection of the instance.
- Pika: Channels now have access to the underlying Connection instance using channel.connection.client. This was previously required by the Simple classes and is now also required by Consumer and Producer.
- Connection.default_channel is now closed at object revival.
- Adds kombu.clocks.LamportClock.
- compat.entry_to_queue has been moved to the new module kombu.common.

1.3.0
- Broker connection info can now be specified using URLs. The broker hostname can now be given as: …
- …can now be used as a context manager.
- Producer.__exit__ now properly calls release instead…
- …is now an alias to the amqplib transport.
- kombu.syn.detect_environment now returns 'default', 'eventlet', or 'gevent' depending on what monkey patches have been installed.
- The serialization registry has a new attribute type_to_name, so it is possible to look up the serializer name by content type.
- The exchange argument to Producer.publish can now be an Exchange instance.
- compat.Publisher now supports the channel keyword argument.
- Acking a message on some transports could lead to KeyError being raised (Issue #57).
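The Lamport logical clock mentioned above (shipped as kombu.clocks.LamportClock in 1.3.1) follows the textbook algorithm: tick on each local event, and on receiving a remote timestamp jump to max(local, remote) + 1 so causally later events always get larger timestamps. A minimal sketch of that algorithm, not kombu's implementation (the method names forward/adjust are assumptions):

```python
class LamportClock:
    """Minimal Lamport logical clock for ordering events across processes."""

    def __init__(self):
        self.value = 0

    def forward(self):
        # A local event happened: advance the clock by one.
        self.value += 1
        return self.value

    def adjust(self, other):
        # A message carrying a remote timestamp arrived: move strictly
        # past both our own clock and the sender's.
        self.value = max(self.value, other) + 1
        return self.value

clock = LamportClock()
clock.forward()     # local event
clock.adjust(10)    # message stamped 10 arrives; clock jumps past it
```

The clock only guarantees that causally related events are ordered; concurrent events may still receive arbitrary relative timestamps.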
- Connection pool: Connections are no longer instantiated when the pool is created, but instantiated as needed instead.
- Tests now pass on PyPy.
- Connection.as_uri now includes the password if the keyword argument include_password is set.
- Virtual transports now come with a default default_connection_params attribute.

1.2.1
- Now depends on amqplib >= 1.0.0.
- Redis: Now automatically deletes auto_delete queues at basic_cancel.
- serialization.unregister added so it is possible to remove unwanted serializers.
- Fixes MemoryError while importing ctypes on SELinux (Issue #52).
- Connection.autoretry is a version of ensure that works with arbitrary functions (i.e. it does not need an associated object that implements the revive method). Example usage:

  channel = connection.channel()
  try:
      ret, channel = connection.autoretry(send_messages, channel=channel)
  finally:
      channel.close()

- ConnectionPool.acquire no longer force-establishes the connection. The connection will be established as needed.
- Connection.ensure now supports an on_revive callback that is applied whenever the connection is re-established.
- Consumer.consuming_from(queue) returns True if the Consumer is consuming from queue.
- Consumer.cancel_by_queue did not remove the queue from queues.
- compat.ConsumerSet.add_queue_from_dict now automatically declares the queue if auto_declare is set.

1.2.0

1.1.6
- …transport option.
- amqplib: Now uses localhost as the default hostname instead of raising an error.

1.1.4
- Redis transport: Now requires redis-py version 2.4.4 or later.
- New Amazon SQS transport added. Usage:

  >>> conn = Connection(transport='SQS',
  ...                   userid=aws_access_key_id,
  ...                   password=aws_secret_access_key)

  The environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are also supported.
- librabbitmq transport: Fixes default credentials support.
- amqplib transport: Now supports login_method for SSL auth. Connection now supports the login_method keyword argument. The default login_method is AMQPLAIN.
1.1.3
- …has been added to the list of known connection related errors (Connection.connection_errors).
- amqplib: Now converts SSLError timeout errors to socket.timeout()
- Ensures cyclic references are destroyed when the connection is closed.

1.1.2

1.1.1
- 1.1.0 started using Queue.LifoQueue, which is only available in Python 2.6+ (Issue #33). We now ship with our own LifoQueue.

1.1.0

Important Notes:
- Virtual transports: Message body is now base64 encoded by default (Issue #27). This should solve problems sending binary data with virtual transports. Message compatibility is handled by adding a body_encoding property, so messages sent by older versions are compatible with this release. However, if you are accessing the messages directly and not using Kombu, then you have to respect the body_encoding property.
- …attribute.
- …and password arguments to Connection (Issue #30).
- Connection: Default authentication credentials are now delegated to the individual transports. This means that the userid and password arguments to Connection are no longer guest/guest by default. The amqplib and pika transports will still have the default credentials.
- Consumer.__exit__() did not have the correct signature (Issue #32).
- Channel objects now have a channel_id attribute.
- MongoDB: Version sniffing broke with development versions of mongod (Issue #29).
- New environment variable KOMBU_LOG_CONNECTION will now emit debug log messages for connection related actions. KOMBU_LOG_DEBUG will also enable KOMBU_LOG_CONNECTION.

1.0.7
- …works properly with Redis.
- The consumer_tag argument to Queue.consume can't be None (Issue #21). A None value is now automatically converted to an empty string. An empty string will make the server generate a unique tag.
- Connection now supports a transport_options argument. This can be used to pass additional arguments to transports.
- Pika: drain_events raised socket.timeout even if no timeout was set (Issue #8).
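The body_encoding scheme the 1.1.0 notes describe, base64-encoding message bodies so binary data survives text-only virtual transports, amounts to a plain base64 round trip. The helper names below are illustrative only; just the idea comes from the entry above:

```python
import base64

def encode_body(body):
    """Base64-encode a binary body and tag it with its encoding name."""
    return base64.b64encode(body).decode('ascii'), 'base64'

def decode_body(body, body_encoding=None):
    """Reverse encode_body; bodies from older versions carry no encoding."""
    if body_encoding == 'base64':
        return base64.b64decode(body)
    return body

payload, encoding = encode_body(b'\x00\x01binary')
restored = decode_body(payload, encoding)
```

Carrying the encoding name alongside the payload is what keeps messages from pre-1.1.0 producers (which set no body_encoding) readable by newer consumers.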
1.0.6
- The delivery_mode aliases (persistent/transient) were not automatically converted to integer, and would cause a crash if using the amqplib transport.
- Redis: The redis-py InvalidData exception suddenly changed name to DataError.
- The KOMBU_LOG_DEBUG environment variable can now be set to log all channel method calls. Support for the following environment variables has been added:
  - KOMBU_LOG_CHANNEL will wrap channels in an object that logs every method call.
  - KOMBU_LOG_DEBUG…

1.0.5
- …command only available in MongoDB 1.3+, so now raises an exception if connected to an incompatible server version.
- Virtual Transports: basic.cancel should not try to remove an unknown consumer tag.

1.0.4
- Added Transport.polling_interval. Used by django-kombu to increase the time to sleep between SELECTs when there are no messages in the queue. Users of django-kombu should upgrade to django-kombu v0.9.2.

1.0.3
- ConnectionPool: Re-connect if the amqplib connection closed
- Adds Queue.as_dict + Exchange.as_dict.

1.0.2
- amqplib: Message properties were not set properly.
- Ghettoq backend names are now automatically translated to the new names.
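The delivery_mode aliases from the 1.0.6 fix above map to the AMQP integer values: 1 means transient (the broker may keep the message in memory only) and 2 means persistent (written to disk). The conversion the fix describes can be sketched as (illustrative helper, not kombu's code):

```python
# AMQP delivery modes: 1 = transient/non-persistent, 2 = persistent.
DELIVERY_MODES = {'transient': 1, 'persistent': 2}

def to_delivery_mode(value):
    """Accept either an alias string or an already-numeric delivery mode."""
    if isinstance(value, str):
        return DELIVERY_MODES[value]
    return int(value)

to_delivery_mode('persistent')   # alias converted to the wire-level integer
```

Doing the conversion before handing the property to the transport is exactly what avoids the crash described in the entry, since the wire protocol only accepts the integer form.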
https://docs.celeryproject.org/projects/kombu/en/stable/changelog.html
Problem

The Hamming distance between two integers is the number of positions at which the corresponding bits are different. Given two integers x and y, calculate the Hamming distance.

Example:
Input: x = 8, y = 1
Output: 2
Explanation: 8 is 1000 in binary and 1 is 0001, so the two numbers differ in two bit positions.

Solution

With Java it is quite easy to calculate. First we do a bitwise XOR, and then use Integer.bitCount() to count the one-bits in the two's complement binary representation of the XORed value.

package com.programtalk.learn.interview.questions;

public class HammingDistance {

    public static void main(String[] args) {
        System.out.println(hammingDistance(8, 1));
    }

    public static int hammingDistance(int x, int y) {
        return Integer.bitCount(x ^ y);
    }
}
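The same XOR-and-popcount idea expressed in Python for illustration (this mirrors the Java solution above; bin(...).count('1') plays the role of Integer.bitCount):

```python
def hamming_distance(x, y):
    # Bits that differ between x and y show up as 1s in x XOR y;
    # counting those set bits gives the Hamming distance.
    return bin(x ^ y).count('1')

print(hamming_distance(8, 1))  # 8 = 1000b, 1 = 0001b -> 2 differing bits
```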
https://programtalk.com/java/hamming-distance-two-integers/
Panda team is developing a Web application to change the configuration of the Ivy server. We are creating the login step, so that only the Ivy Admin can access our page. Could you please tell us how to authenticate with the Ivy Administrator account (like the picture at the bottom)?

Panda team tried to do:

public static AuthenticationException login(ISession session, String userName, String password) throws PersistencyException {
    try {
        session.authenticateSessionUser(userName, new Password(password));
        return null;
    } catch (AuthenticationException ex) {
        return ex;
    }
}

The result is that we can only log in with the accounts (IUser) of our application, not the account used to manage the server. Please help us. Thank you very much

asked 15.01.2014 at 11:10 by anphunl, edited 16.01.2014 at 09:36

Can you explain the use case you want to solve by authenticating a system administrator in your application? I can help you better if I know the root cause of your question.

Hi Reto Weiss, Thanks for your comment, I updated my question. Could you please take a look?

You can make any user of an application a system administrator by granting him all permissions of the system security descriptor type on the system security descriptor. See the SystemAdminMaker.java file below for more details. Note the class uses NON Public API and will therefore break in future releases of Xpert.ivy!

SystemAdminMaker.java:

package ch.ivyteam.ivy.demo;

import ch.ivyteam.ivy.security.IPermissionGroup;
import ch.ivyteam.ivy.security.ISecurityDescriptor;
import ch.ivyteam.ivy.security.ISecurityManager;
import ch.ivyteam.ivy.security.IUser;
import ch.ivyteam.ivy.server.IServer;
import ch.ivyteam.ivy.server.ServerFactory;

// ============================================================
// ATTENTION:
// ============================================================
// The following code accesses NON PUBLIC API.
// Therefore this code will break in future Xpert.ivy versions! // ============================================================ public class SystemAdminMaker { public void grantSystemAdminRightsTo(IUser user) { ISecurityDescriptor systemSecurityDescriptor = getSystemSecurityDescriptor(); systemSecurityDescriptor.grantPermissions(getRootPermissionGroup(), user); } public void ungrantSystemAdminRightsTo(IUser user) { ISecurityDescriptor systemSecurityDescriptor = getSystemSecurityDescriptor(); systemSecurityDescriptor.ungrantPermissions(getRootPermissionGroup(), user); } private ISecurityDescriptor getSystemSecurityDescriptor() { IServer server = ServerFactory.getServer(); ISecurityManager securityManager = server.getSecurityManager(); ISecurityDescriptor systemSecurityDescriptor = securityManager.getSystemSecurityDescriptor(); return systemSecurityDescriptor; } private IPermissionGroup getRootPermissionGroup() { return getSystemSecurityDescriptor().getSecurityDescriptorType().getRootPermissionGroup(); } } answered 17.01.2014 at 16:29 Reto Weiss ♦♦ 4.8k●17●25●54 accept rate: 74% Hi Reto Weiss, Thank you very much for your answer. Actually, our problem is not the permission to access the server's information. We are doing like that : public static List<ienvironment> getEnvironmentList(final IApplication application) throws Exception { return SecurityManagerFactory.getSecurityManager().executeAsSystem(new Callable<list<ienvironment>>() { @Override public List<IEnvironment> call() throws Exception { return application.getEnvironments(); } }); } Now the problem is how can the admin of ivy server log in to our application? Regards, Phu Nguyen Hi Phu I know that my answers is not was you have asked. But it is not a good idea to login a system administrator to your application. This is because a system administrator is in fact a user of the "system" application in Xpert.ivy. 
Because applications are strictly divided it is not possible to login a user of one application to another application. What the code does that I have written in the answer is to give a user of your application the same rights that also the system administrator have. Therefore it turns a user of your application into a system administrator. Hi Phu It is also possible to grant the system administrator rights to a role of your application. In this case every user that owns the role automatically inherits the system administrator rights from the role. Hi Reto Weiss, Thank you very much. We will use the users in our application and grant them the needed permission. Best regards, Phu Nguyen Once you sign in you will be able to subscribe for any updates here Answers Answers and Comments Markdown Basics learn more about Markdown authentication ×15 Asked: 15.01.2014 at 11:10 Seen: 1,948 times Last updated: 20.01.2014 at 09:03 How to authenticate a user that is provided by a Single Sign On (SSO) proxy. Does AXon ivy support Login Register forms (register,forgot password,login) ? Calling NTLM protected REST service with SSO REST token authentication How to connect To SystemDB using Windows-Authentification Can i turn on/off webservice authentication programmatically SSO on Apache: not logged in to Ivy but in tomcat? How to verify username/password of user without logging in? Login page on Process Start with http-request link and 'Only wf users' set to true How to set information for logon IUser without permanently store to system?
https://answers.axonivy.com/questions/253/how-can-we-change-authentication-scope
In this Google Flutter code example we are going to learn how to use the SizedBox widget. The (partially garbled) main-file snippet instantiates a BasicSizedBox(); the widget itself lives in sizedbox.dart:

sizedbox.dart

    import 'package:flutter/material.dart';

    class BasicSizedBox extends StatelessWidget {
      // A box with a specified size
      @override
      Widget build(BuildContext context) {
        return Scaffold(
          appBar: AppBar(title: Text("SizedBox Widget")),
          // The SizedBox widget forces its child to have a specific width and/or
          // height (if that widget permits it). If it doesn't contain a child, it
          // sizes itself to the given width and height. You can also use
          // SizedBox.expand(), which sets width and height to infinity, or
          // SizedBox.fromSize(), which requires a Size object.
          body: Center(
            child: SizedBox(
              height: 80.0,
              width: 80.0,
              // The Container has no width or height of its own, but the SizedBox
              // forces it to the SizedBox's width/height.
              child: Container(
                color: Colors.red,
              ),
            ),
          ),
        );
      }
    }

If you have any questions or suggestions kindly use the comment box or you can contact us directly through our contact page below.
https://inducesmile.com/google-flutter/how-to-use-sizedbox-widget-in-flutter/
Tag archive: collision detection

- (undated) Currently there are open source NoSQL databases: Redis, Tokyo Cabinet, Cassandra, Voldemort, MongoDB, Dynomite, HBase, CouchDB, Hypertable, Riak, Tin, Flare, Lightcloud, KiokuDB, Scalaris, Kai, ThruDB and so on; the post compares read and write performance and other features across them.
- October 19: Some commonly used code snippets taken from "ActionScript 3.0 Game Programming University", e.g. collision detection driven by an ENTER_FRAME event listener, custom mouse cursors, text boxes and style sheets.
- May 11: Alternativa3D 7.6 provides ellipsoid-based collision detection; versions since 7.5.1 support skeletal animation; includes a simple Web 3D office online demo.
- January 12: A class for irregular-object collision detection, tested with a 1500x1500 vector shape and a small moving ball.
- January 12: BitmapData collision detection works normally if the registration point is at the origin.
- January 10: Basic physics — velocity and acceleration. Converting angular velocity to x/y velocity components: vx = speed * Math.cos(angle); vy = speed * Math.sin(angle); plus angular acceleration, friction, gravity, springs and rebound.
- January 10: Building a fighting game in the style of Street Fighter; roles on stage are compared by blocks of y coordinates for hit detection.
- January 2: The APE physics engine gives good results; its main advantage is that the classes are small and easy to learn; includes a first online game tutorial built on it.
- December 22: Splitting a scene into visual sub-objects; mouse movement causes heavy CPU usage because of constant traversal of visual objects by the event system.
- December 21: A partial reproduction of a physics-engine article (2D game engine, rigid body simulation), for personal study only.
- December 14: Games written in Lua on the BREW virtual machine run very slowly (4-5 frames per second); profiling shows AI rules and collision detection account for most of the computing time.
- December 3: From the book "Making Things Move": hitTestObject is simple to use but reports false collisions for irregular shapes (e.g. Tetris pieces).
- November 16 (series on 2D rigid-body physics):
  - Dynamic rigid bodies: implementing real 2D physical games — thrust, inertia, convex polygons, vertices and angular velocity.
  - Fast-moving objects: the slow-moving-object approach loses accuracy at high speed, missing collisions or allowing objects to cross; interval/relative-speed tests extend it.
  - Calculating contacts: accurate contact computation between two colliding polygons; not complicated in 2D, but much more complex in a 3D scene.
  - Introduction: accurate and efficient collision detection for a 2D action game, based on polygons rather than sprites.
  - Arc collision response: separating overlapping objects and adding dynamic and static friction so objects can rest on an inclined plane.
  - Pixel-level collision on the iPhone may be too expensive; the advice is to define a Collidable protocol with a collidedWith:(Collidable *)c method and use bounding boxes.
  - Separating-axis expansion for collision response: when polygons intersect, move them apart along the minimum translation distance (MTD).
- November 11: The recent wave of Flash community games — Zynga, Playfish, Playdom, WonderHill — and the technology (physics engines, tile maps, collision detection) behind them.
- October 26: JavaScript animation works on the same principles as frame-by-frame animation; applied to small JS games (zombies) with collision detection.
- October 22: Irregular collision detection via projection: project shapes onto an axis roughly parallel to X and test the intersection points with the irregular shapes.
- September 4: The main AS3 API — hitTestObject(obj: DisplayObject): Boolean and hitTestPoint(x: Number, y: Number, shapeFlag: Boolean = false); with shapeFlag true the actual pixels of the display object are checked, with false only the object's bounding box.
- August 27: A SmallGameLib demo — a mouse-controlled player avoids attacking insects; touching a bug ends the game; three interfaces/difficulty levels.
- August 20: Multiple-object collision detection document class (Bubbles2 extends Sprite) by zkl, from "ActionScript 3.0 Animation".
- July 27: A summary of open source 3D game engines for Android — jPCT-AE (the Android version of jPCT) and Kwaak3 (Quake 3 ported to the Android platform).
- July 6: A list of Flash game engines, again prompted by the wave of Flash community games (Zynga, Playfish, Playdom, WonderHill).
- July 2: A class encapsulating three public static methods for collision detection; complexHitTestObject can be used directly for simple cases.
- July 1: Scene management in game engines; scene segmentation questions such as real-time streaming of the game scene, dynamic objects, and brute force versus partition trees/octrees.
- June 28: A survey of game engine evolutionary history — what role the "engine" plays in a game.
- June 28: A small Pygame game ("survive for 20 seconds") with increasing bullets and homing missiles.
- June 21: A fly-swatting mini game built with Flash CS3 and Flex (in the style of Happy Farm), covering callback functions, key frames, class inheritance and memory leaks.
- May 6: A set of AS3 video tutorials (chapters on collision detection, particle physics, 3D lines, interactive sports and 3D animation).
- April 9: package com.easyasrpg.implement.algorithm — collision detection tools built on BitmapData, BlendMode, DisplayObject, Sprite, ColorTransform and Matrix.
- March 8: A Flex version of online PhotoShop-style tools, compared with the desktop application.
http://www.quweiji.com/tag/collision-detection/
Usage

To use the fmt library, add format.h and format.cc from a release archive or the Git repository to your project. Alternatively, you can build the library with CMake.

If you are using Visual C++ with precompiled headers, you might need to add the line

    #include "stdafx.h"

before other includes in format.cc.

Building the library

The included CMake build script can be used to build the fmt library on a wide range of platforms. CMake is freely available for download from.

CMake works by generating native makefiles or project files that can be used in the compiler environment of your choice. The typical workflow starts with:

    mkdir build          # Create a directory to hold the build output.
    cd build
    cmake <path/to/fmt>  # Generate native build scripts.

where <path/to/fmt> is a path to the fmt repository.

If you are on a *nix system, you should now see a Makefile in the current directory. Now you can build the library by running make.

Once the library has been built you can invoke make test to run the tests.

You can control generation of the make test target with the FMT_TEST CMake option. This can be useful if you include fmt as a subdirectory in your project but don't want to add fmt's tests to your test target.

If you use Windows and have Visual Studio installed, a FORMAT.sln file and several .vcproj files will be created. You can then build them using Visual Studio or msbuild.

On Mac OS X with Xcode installed, an .xcodeproj file will be generated.

To build a shared library set the BUILD_SHARED_LIBS CMake variable to TRUE:

    cmake -DBUILD_SHARED_LIBS=TRUE ...

Header-only usage with CMake

You can add the fmt library directory into your project and include it in your CMakeLists.txt file:

    add_subdirectory(fmt)

or

    add_subdirectory(fmt EXCLUDE_FROM_ALL)

to exclude it from make, make all, or cmake --build ..

Setting up your target to use a header-only version of fmt is equally easy:

    target_link_libraries(<your-target> PRIVATE fmt-header-only)

Building the documentation

To build the documentation you need the following software installed on your system:

- Python with pip and virtualenv
- Less with less-plugin-clean-css. Ubuntu doesn't package the clean-css plugin, so you should use npm instead of apt to install both less and the plugin: sudo npm install -g less less-plugin-clean-css

First generate makefiles or project files using CMake as described in the previous section. Then compile the doc target/project, for example:

    make doc

This will generate the HTML documentation in doc/html.

Android NDK

fmt provides an Android.mk file that can be used to build the library with the Android NDK. For an example of using fmt with the Android NDK, see the android-ndk-example repository.
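Putting the header-only steps above together, a minimal top-level CMakeLists.txt might look like the following sketch. The project name `hello` and the source file `main.cc` are placeholders, and fmt is assumed to be checked out (or copied) into a `fmt/` subdirectory of the project:

```cmake
cmake_minimum_required(VERSION 3.1)
project(hello CXX)

# fmt lives in the fmt/ subdirectory of this project.
# EXCLUDE_FROM_ALL keeps fmt's own targets (including its tests) out of "make all".
add_subdirectory(fmt EXCLUDE_FROM_ALL)

add_executable(hello main.cc)

# Link against the header-only target so no fmt library needs to be built.
target_link_libraries(hello PRIVATE fmt-header-only)
```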
https://fmt.dev/5.1.0/usage.html
This is the mail archive of the guile@cygnus.com mailing list for the guile project.

robertb@continuumsi.com writes:

> > one immediately apparent problem is using `gh_eval_str'. you
> > probably want to simply save the return value of
> > `gh_new_procedure', an SCM.
>
> Well, doing that defeats the purpose of trying not to have global
> variables! Perhaps I was too idealist to believe that I could avoid
> use of global variables in Guile. It was so easy in X-Windows!

in your example, does this code

    gh_set_ext_data(gh_new_procedure1_0("ReadSymbols", ServerReadSymbols),
                    (void*)lib);

get called multiple times? each `gh_new_procedure1_0' call returns a new procedure object. `ServerReadSymbols' is already global in the C function namespace and "ReadSymbols" is already global in the string pool. you are starting with global data and then calling a function that gives you a new pointer each time it is called. probably you will want to emulate libguile practice of making `ServerReadSymbols' file static and then, in the init procedure (called only once), saving the SCM. then you can pass this SCM around w/o resorting to `gh_eval_str' (which consults yet another global namespace...).

thi
http://www.sourceware.org/ml/guile/1999-05/msg00071.html
Data scientist at Port Jackson Partners in Sydney, Australia. My PhD was in computational biology. In my spare time I write about medical research at BioSky.co.

In this tutorial, I'm going to provide an introduction to the basics of the programming language C++. I'll describe how to compile your first program with the gcc compiler on Linux and Mac, although the code should also work on Windows using the Visual Studio compiler. If you're new to programming, I'd probably recommend getting started with an easier scripting language like Python before you get into C++. That said, hopefully the information in this post should still be useful for the complete beginner.

What is C++?

C++ was invented by Bjarne Stroustrup in 1979 as an extension of the popular C programming language. It added classes and the ability to write object orientated code.

Is C++ worth learning?

Although C and C++ came out decades ago, they remain extremely popular when speed and performance are a crucial requirement. Although C is a great language, C++ added a lot of functionality which comes in handy for big projects. While knowing C will give you a head start when understanding C++, it is not essential. Personally, I started off learning C before changing to C++ because the imaging library I use for a lot of my work (OpenCV) is written in C++.

What a C++ program looks like

If we now jump into some code, I'll explain a basic C++ program to you. Open a text editor and write the following code:

    #include <iostream>

    int main()
    {
        std::cout << "Hello, world!" << std::endl;
        return 0;
    }

Save your file as "hello.cpp". The "cpp" indicates the filetype for a C++ program. To compile, open a terminal and navigate to the folder where your "hello.cpp" file was saved, then type:

    g++ hello.cpp -o hello

This will create your program, which will be named "hello". To execute your program type in:

    ./hello

The "./" before your program name tells the computer to look within the current directory for a program named "hello". If everything worked, you should now see "Hello, world!" output in your terminal. Well done, you've compiled and run your first C++ program... but how did it work? I'll write the code out again, but this time I'll explain it using comments. Comments are text you can write in your code which the computer will ignore whilst compiling your program.

    // This is a comment, when I write "//" it tells the
    // compiler to ignore all text for the rest of the line!

    /* This is a multiline comment
       The compiler will ignore all text until I write
       an asterisk and forward slash */

    // include tells the compiler to add some prewritten code to your program
    // iostream enables us to input and output text with the program via the terminal
    // it is part of the standard library of C++
    #include <iostream>

    // All C++ programs are executed from within the main function
    // the type of the function is an int (integer) because it returns a number
    int main()
    {
        // we're now going to output text to the terminal
        // keep in mind how each line of code is ended with a ";" except for when a function is started
        // we'll use the "cout" command to do this: think of "cout" as it's pronounced - "c-out"
        // cout is included in the standard library we brought into the program with the #include directive
        // to tell the compiler we're using commands from the standard library, we prefix them with "std::"
        // finally, the "endl" command adds a newline to the end of the text we output
        std::cout << "Hello, world!" << std::endl;

        // If the program runs without error, main() should return 0
        return 0;
    }

You can also write the program so that you do not need to prefix standard library commands with "std::" by defining the namespace of the standard library:

    #include <iostream>

    using namespace std;

    int main()
    {
        cout << "Hello, world!" << endl;
        return 0;
    }

C++ Data Types

Now let's create some variables and add some of the basic data types of C++ to the program:

    #include <iostream>
    // we need to include the string header because we'll be using this data type later
    #include <string>

    using namespace std;

    int main()
    {
        // some ways you can create variables holding integers
        int number1;
        number1 = 51;

        int number2 = 3;

        // initialising two variables at once only works if they're the same type
        int number3, number4;
        number3 = 23;
        number4 = 67;

        int addition = number3 + number4;
        cout << addition << endl;

        // for numbers with a decimal place, use the float data type
        // be wary that C++ has a finite level of precision after the decimal
        float decimal1 = 0.2;

        // if you want more precision, use the "double" data type
        double precise = 0.00000000000000201;

        // characters and strings
        // characters are enclosed in single quotation marks, strings in double
        char letter = 'a';

        // you can create arrays to store lists of letters or numbers
        // only one data type can be stored in each array
        // you can tell the compiler how many elements go into the array with a number in the brackets
        char character_array[4] = {'A', 'B', 'C', 'D'};

        // arrays are counted from zero, and are accessed using the brackets
        // so if you want to access the 'A'
        char first = character_array[0];

        // here is an easier way to create an array, the compiler will figure out the length itself
        // note the double quotation marks
        char sentence[] = "This is a sentence stored in an array";
        cout << sentence << endl;

        // C++ also added the string data type in the standard library
        // this makes working with text even easier!
        string sentence2 = "This is a sentence using the C++ standard library string type";

        // boolean is a simple data type: true or false, 1 or 0
        bool isTrue = true;
        bool wrong = false;

        return 0;
    }

I hope you've found my introduction to the basics of C++ useful; I'm hoping to write future posts that go into C++ in more depth soon.
http://jacksimpson.co/c-for-beginners/
Oh, hmm. Actually, when I run:

    from mojo.UI import *
    print(getTestInstalledFonts())

...I get an empty list. This gets populated with test-installed fonts once I make them. Perhaps the opening error message could just be made clearer? Or maybe it's not needed at all, if RoboFont automatically deinstalls test fonts on crashing?

What RoboFont does related to test install: each test-installed font path is added to a list in the prefs; on quit, RF loops over this list and deinstalls all the fonts. When RF crashes, it cannot deinstall those fonts, so the next launch detects those test-installed fonts, a popup appears, and RF tries to deinstall them.

RoboFont tries to inform users as much as possible about issues happening in the background. Silent fixes are not the best solution in many cases...
https://forum.robofont.com/topic/269/crash-with-testinstalled-font/
CC-MAIN-2019-22
en
refinedweb
Unexpected auto-completer behavior when working with aiida

Hi, I use Wing to test the aiida-core package with the code given here:.... The code snippets are the following:

from aiida import load_profile
load_profile()
from aiida.orm import Code, Computer, Data, Node, Float, Str
from aiida.plugins import CalculationFactory, DataFactory
from aiida.engine import calcfunction

In the above code, calcfunction only exists in aiida.engine, so it should be imported like this:

from aiida.engine import calcfunction

But when I type the following:

from aiida.orm import c<tab>

then Wing's auto-completer will still suggest calcfunction as one of the candidates. If I select it there, the code will raise an error and fail to run. Any hints for this problem? Regards

What happens if you accept the completion and do goto-definition on calcfunction on the 'from aiida.orm import' line? This might be a result of orm importing calcfunction internally in some way, or the symbol being listed in a *.pyi file if there are any, but then having other import behavior at runtime. Looking at where Wing ends up after goto-definition is the first step to figuring this out.

Both imports jump to the same location here:... Another difference I noted is that when importing via 'from aiida.orm import', the completion result becomes 'calcfunction(function)', while the completion result for the 'from aiida.engine import' line is 'calcfunction'. I still cannot figure out the reason. Regards

What is the code error when you try to run it? I'm guessing maybe a circular import problem. See the following:
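The "orm imports calcfunction internally" hypothesis is easy to reproduce with two toy modules (the toy_engine/toy_orm names below are invented for the demo). When one module merely imports a name from another, that name still becomes an attribute of the importing module, which is exactly what a completer scanning the module's namespace will see:

```python
# Build two in-memory modules to mimic aiida.engine and aiida.orm.
# If orm does `from ... import calcfunction` anywhere internally, the name
# becomes an attribute of orm too, so a completer can legitimately offer it.
import sys
import types

engine = types.ModuleType("toy_engine")
exec("def calcfunction(f):\n    return f", engine.__dict__)
sys.modules["toy_engine"] = engine

orm = types.ModuleType("toy_orm")
# the internal import that "leaks" the symbol into orm's namespace:
exec("from toy_engine import calcfunction", orm.__dict__)
sys.modules["toy_orm"] = orm

# Both imports now resolve to the very same function object:
from toy_orm import calcfunction as from_orm
from toy_engine import calcfunction as from_engine
print(from_orm is from_engine)  # True
```

In the real package the orm import can still fail at runtime if the internal import only happens conditionally or circularly, which would explain why the completion works in the editor but the code errors when run.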
https://ask.wingware.com/question/1777/unexpected-auto-completer-behavior-when-working-with-aiida/
CC-MAIN-2021-17
en
refinedweb
September 2015 Project Update - Semifinals - 09/18/2015 at 14:21 • 0 comments

Hello Hackers! In the same spirit as my quarter-finals project update, here's one post containing all the information for the next judging round. Please excuse the repetitions!

- Video: FlexSEA - Hackaday Prize Semifinals (that's the one embedded above)
- Video: Prof. Hugh Herr talking about the FlexSEA project
- You can get a copy of the software projects and the hardware design files.
- In terms of a system design document, please refer to my thesis. It contains extensive details about the hardware and the software. It's a great snapshot of the design as of May 2015.
- For more detailed design descriptions, consult the numerous Project Logs that I wrote about critical aspects of the project.
- As far as licensing goes, the thesis is licensed under Creative Commons Attribution-NonCommercial-ShareAlike (CC BY-NC-SA 2015) and the hardware files are Open Source Hardware. Drop me a line if you use them; I'm curious to know about your projects.
- Last time I linked to a short video shot by my colleague and friend Luke testing a strain-gauge-based force controller on FlexSEA-Execute 0.1 for his exoskeleton. Much more exciting, here's a video of him walking with one of his exoskeleton prototypes! You can also see his latest work in my Semifinals video.
- I started working on the next hardware revision, FlexSEA-Execute 0.2. All the details are in Working toward FlexSEA-Execute 0.2. The wide-input-range power supply (18-50V in, 10V out, 82%+ efficiency) PCB is being manufactured right now, and a good part of the gate driver test board is designed. Exciting times!
- One of the requirements is an "artist's rendition of the 'productized' design/look and feel of the project"… Does beautiful layout and 3D CAD qualify as art? I bet it does. In The evolution of FlexSEA prototypes I show pictures of the 3D design, and below you'll see a screenshot of Execute integrated in the ExoBoot.
- Do not hesitate to ask me questions about the system in the comments below!
- I'm looking for contributors for the project. Drop me a line if you are interested.

So, why am I spending all this time describing technical aspects of the project? - 09/17/2015 at 19:58 • 0 comments

I started playing with electricity and motors when I was a kid, and I spent countless hours trying to build DIY RC cars and robots as a young teenager, without any real success. Keep in mind that this was before the age of the Arduino, when programming a PIC16F84 in ASM was the way to go. And that was hard. I still remember the amazing feeling from when I first got an RF remote control working, in C, on an 18F452-centered custom circuit and PCB! Years later, with an Electrical Engineering degree and a Master of Science in my pocket, I did not forget my roots. I'm using this project page as a teaching tool to help you in your projects. I'm covering technical implementations that are typically hard for hobbyists, and hopefully I'm finding the right words and examples to make them more accessible. I'm trying to make the world a better place with FlexSEA, but also by making a complete engineering design freely available to the public, with detailed explanations. What would you like to know? Ask in the comments!

Controlling a brushless motor (BLDC): Software - 09/07/2015 at 19:45 • 0 comments

Table 15 presents a second LUT, used to control the direction of the rotation. Of course, no one likes big look-up tables in PNG format, so here's the Excel sheet I used. The complete system can be seen in Figure 67.

Controlling a brushless motor (BLDC): Hardware - 09/07/2015 at 19:18 • 0 comments

The BLDC schematic consists of 3 copies of the Half-bridge sheet (motor commutation), the Shorted-Leads protection circuit, phase voltage sensing and bridge temperature sensing. DC motors are commonly driven by a circuit called an H-Bridge.
An H bridge is an electronic circuit that enables a voltage to be applied across a load in either direction. Closing S1 and S4 will make the motor turn in one direction, while closing S2 and S3 will make it rotate in the opposite direction. Closing S1 and S3, or S2 and S4, can be used to brake the motor. Closing S1 and S2, or S3 and S4, will create a short circuit on the power supply and can lead to catastrophic failure. The switches in the above schematic can be relay contacts, bipolar transistors, MOSFETs or IGBTs. MOSFETs offer the best efficiency for low-voltage applications.

Trajectory generation: trapezoidal speed profile - 09/07/2015 at 18:29 • 0 comments

Picture yourself in your car, going from one stop sign to the next. The fastest way to cover that distance would be to put the pedal to the ground, go as fast as possible and, at the last second, brake as hard as possible. Why isn't that the prevailing way of driving? First, the accelerations would be way too high: you'd go from being pressed into your seat to banging your head on the steering wheel. Second, unless you are driving a race car, the dynamics at play will prevent you from instantaneously reaching your top speed. You'll need time to accelerate and decelerate.

A typical way to control robotic joints is to use a trapezoidal speed profile, which is quite similar to the profiles you'd get by driving your car: acceleration, constant speed, and deceleration. In terms of math and logic the code is extremely simple. I wrote two Matlab scripts; the first one has all the math, and the second one loads experiments and calls the first one. To understand how I implemented this, read this code first, then dig in the C files.
Main function:

function [ spd, pos, acc ] = trapez_motion_2( pos_i, pos_f, spd_i, spd_max, a )
%Based on trapez_motion_1.m Now in a function
% Limitation(s): assumes 0 initial speed (spd_i is useless, should be 0)
dt = 0.01; %10ms
skip_sspeed = 0;
d_pos = pos_f - pos_i; %Difference in position
%It's possible to overshoot position if the acceleration is too low.
%In that case we should sacrifice the top speed
if((2*acc_pos) > d_pos)
    disp('Position overshoot')
    spd_max = sqrt(a*d_pos)
    %Redo the initial math:
end
cte_spd_pos = d_pos - 2*acc_pos;
cte_spd_pos_discrete = (cte_spd_pos/spd_max) / dt
if(cte_spd_pos_discrete < 0)
    disp('No steady speed!')
    skip_sspeed = 1;
end
%At this point all the parameters are computed, we can get the 3 plots
vector_length = 2*length(a_t_discrete) + length(cte_spd_pos_discrete);
spd = zeros(1,vector_length); spd(1) = spd_i;
pos = zeros(1,vector_length); pos(1) = pos_i;
acc = zeros(1,vector_length); acc(1) = a;
%Acceleration:
for i = 1:a_t_discrete
    tmp_spd = spd(end) + spd_inc;
    tmp_pos = sum(spd)*dt;
    tmp_acc = a;
    spd = [spd tmp_spd];
    pos = [pos tmp_pos];
    acc = [acc tmp_acc];
end
if(skip_sspeed == 0)
    %Constant speed
    for i = 1:cte_spd_pos_discrete
        tmp_spd = spd(end);
        tmp_pos = sum(spd)*dt;
        tmp_acc = 0;
        spd = [spd tmp_spd];
        pos = [pos tmp_pos];
        acc = [acc tmp_acc];
    end
end
%Negative Acceleration:
for i = 1:a_t_discrete
    tmp_spd = spd(end) - spd_inc;
    tmp_pos = sum(spd)*dt;
    tmp_acc = -a;
    spd = [spd tmp_spd];
    pos = [pos tmp_pos];
    acc = [acc tmp_acc];
end
end

And now the code of the function that calls the previous script:

close all; clear all; clc;
% exp(1,:) = [1000,3000,0,700,150];
% exp(2,:) = [0,100,0,10,1];
% exp(3,:) = [0,500,0,10,1];
% exp(4,:) = [0,500,0,10,1];
exp(5,:) = [0,500,10,20,1];
% exp(6,:) = [0,300,0,15,5];
% exp(7,:) = [0,100,0,5,10];
% exp(8,:) = [0,5,0,5,1];
dim = size(exp);
max_i = dim(1);
for i = 1:max_i
    str = sprintf('Experiment #%i',i);
    disp(str)
    [spd1 pos1 acc1] = trapez_motion_2(exp(i,1),exp(i,2),exp(i,3),exp(i,4),exp(i,5));
    figure()
    subplot(3,1,1)
    plot(acc1, 'r')
    title('Acceleration')
    subplot(3,1,2)
    plot(spd1, 'r')
    title('Speed')
    subplot(3,1,3)
    plot(pos1, 'r')
    title('Position')
end

The beauty of Matlab is that it's so powerful and simple. You can quickly prove your algorithm… but will it work in real life? In this case, my first C implementation had issues with integer rounding that created a lot of problems. Those sharp transitions created a lot of vibration and instability on the prosthetic knee I was using as a test bench! Here's the full C implementation, starting with trapez.h and followed by trapez.c:

//****************************************************************************
// MIT Media Lab - Biomechatronics
// Jean-Francois (Jeff) Duval
// jfduval@mit.edu
// 05/2015
//****************************************************************************
// trapez: trapezoidal trajectory generation
//****************************************************************************

#ifndef TRAPEZ_H_
#define TRAPEZ_H_

//****************************************************************************
// Include(s):
//****************************************************************************

//****************************************************************************
// Public Function Prototype(s):
//****************************************************************************

long long trapez_gen_motion_1(long long pos_i, long long pos_f, long long spd_max, long long a);
long long trapez_get_pos(long long max_steps);

//****************************************************************************
// Definition(s):
//****************************************************************************

#define TRAPEZ_DT 0.001 //Trapezoidal timebase. Has to match hardware!
#define TRAPEZ_ONE_OVER_DT 1000
#define SPD_FACTOR 10000 //Scaling for integer
#define ACC_FACTOR 10000

#endif // TRAPEZ_H_

//****************************************************************************
// MIT Media Lab - Biomechatronics
// Jean-Francois (Jeff) Duval
// jfduval@mit.edu
// 05/2015
//****************************************************************************
// trapez: trapezoidal trajectory generation
//****************************************************************************

//Work based on trapez_gen_x.m, translated in C
//JFDuval 06/17/2014

//****************************************************************************
// Include(s)
//****************************************************************************

//Comment the next line to use in your application:
//#define DEBUGGING_OUTPUT

#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <math.h>
#include "trapez.h"
//#include "main.h"

//****************************************************************************
// Local variable(s)
//****************************************************************************

//Common variables - careful, do not change "manually"!
long long d_pos = 0, d_spd = 0, a_t = 0, a_t_discrete = 0, spd_inc = 0, acc_pos = 0, acc = 0;
long long init_pos = 0, cte_spd_pos = 0, cte_spd_pos_discrete = 0;
long long skip_sspeed = 0;
long long pos_step = 0;
long long trapez_transitions[3] = {0,0,0};
long long sign = 0;

//****************************************************************************
// Private Function Prototype(s):
//****************************************************************************

static long long trapez_compute_params(long long pos_i, long long pos_f, long long spd_max, long long a);

//****************************************************************************
// Public Function(s)
//****************************************************************************

//Based on trapez_motion_2.m
//Assumes 0 initial speed
long long trapez_gen_motion_1(long long pos_i, long long pos_f, long long spd_max, long long a)
{
    long long abs_d_pos = 0, abs_acc_pos = 0, dual_abs_acc_pos = 0;

    //spd_max & a have to be positive
    spd_max = llabs(spd_max);
    a = llabs(a);

    //Compute parameters (in global variables)
    trapez_compute_params(pos_i, pos_f, spd_max, a);

    //Absolute values
    abs_d_pos = llabs(d_pos);
    abs_acc_pos = llabs(acc_pos);
    dual_abs_acc_pos = 2*abs_acc_pos;

    #ifdef DEBUGGING_OUTPUT
    printf("d_pos = %lld, abs_d_pos = %lld.\n", d_pos, abs_d_pos);
    printf("1) acc_pos = %lld, abs_acc_pos = %lld.\n", acc_pos, abs_acc_pos);
    #endif

    //It's possible to overshoot position if the acceleration is too low.
    //In that case we should sacrifice the top speed
    if(dual_abs_acc_pos > abs_d_pos)
    {
        #ifdef DEBUGGING_OUTPUT
        printf("Position overshoot.\n");
        #endif

        //New top speed:
        spd_max = sqrt(a*abs_d_pos);

        #ifdef DEBUGGING_OUTPUT
        printf("New spd_max: %lld.\n", spd_max);
        #endif

        //Redo the initial math:
        //Compute parameters (in global variables)
        trapez_compute_params(pos_i, pos_f, spd_max, a);

        //Absolute values (they probably changed)
        abs_d_pos = llabs(d_pos);
        abs_acc_pos = llabs(acc_pos);
        dual_abs_acc_pos = 2*abs_acc_pos;
    }

    //Plateau - constant speed
    #ifdef DEBUGGING_OUTPUT
    printf("d_pos = %lld, abs_d_pos = %lld.\n", d_pos, abs_d_pos);
    printf("2) acc_pos = %lld, abs_acc_pos = %lld.\n", acc_pos, abs_acc_pos);
    #endif

    cte_spd_pos = abs_d_pos - (2*abs_acc_pos);
    cte_spd_pos_discrete = (SPD_FACTOR*cte_spd_pos/spd_max)*TRAPEZ_ONE_OVER_DT;
    cte_spd_pos_discrete = cte_spd_pos_discrete / SPD_FACTOR;

    #ifdef DEBUGGING_OUTPUT
    printf("cte_spd_pos = %lld, cte_spd_pos_discrete = %lld.\n", cte_spd_pos, cte_spd_pos_discrete);
    #endif

    if(cte_spd_pos_discrete < 0)
    {
        cte_spd_pos_discrete = 0;
        #ifdef DEBUGGING_OUTPUT
        printf("No steady speed!\n");
        #endif
    }

    //Transitions:
    trapez_transitions[0] = a_t_discrete;
    trapez_transitions[1] = a_t_discrete + cte_spd_pos_discrete;
    trapez_transitions[2] = 2*a_t_discrete + cte_spd_pos_discrete;

    #ifdef DEBUGGING_OUTPUT
    printf("tr[0] = %lld, tr[1] = %lld, tr[2] = %lld.\n", trapez_transitions[0], trapez_transitions[1], trapez_transitions[2]);
    #endif

    pos_step = 0; //Variable used to output the current position command

    return (2*a_t_discrete + cte_spd_pos_discrete); //Returns the number of steps
}

//Runtime function - gives the next position setpoint
long long trapez_get_pos(long long max_steps)
{
    static long long tmp_spd = 0, last_tmp_spd = 0, tmp_pos = 0;
    long long position = 0;
    static long long pos_integral = 0;

    //At this point all the parameters are computed, we can get the 3 plots

    //First time:
    if(pos_step == 0)
    {
        tmp_spd = 0;
        last_tmp_spd = 0;
        tmp_pos = 0;
        pos_integral = 0;

        #ifdef DEBUGGING_OUTPUT
        printf("pos_step = 0, pos_integral = %lld.\n", pos_integral);
        #endif
    }

    //Acceleration:
    if(pos_step <= trapez_transitions[0])
    {
        last_tmp_spd = tmp_spd;
        tmp_spd = last_tmp_spd + spd_inc;
    }

    //if(skip_sspeed == 0) //ToDo useful?
    {
        //Constant speed
        //last_tmp_spd = tmp_spd;
        if((pos_step >= trapez_transitions[0]) && (pos_step <= trapez_transitions[1]))
        {
            tmp_spd = last_tmp_spd;
        }
    }

    //Negative Acceleration:
    if((pos_step >= trapez_transitions[1]) && (pos_step <= trapez_transitions[2]))
    {
        last_tmp_spd = tmp_spd;
        tmp_spd = last_tmp_spd - spd_inc;
    }

    if(pos_step <= max_steps)
    {
        //Ready for next one.
        pos_step++;

        //Common math:
        pos_integral += tmp_spd;
        tmp_pos = pos_integral/(TRAPEZ_ONE_OVER_DT * SPD_FACTOR);
        position = tmp_pos + init_pos;

        #ifdef DEBUGGING_OUTPUT
        if(pos_step < 10)
            printf("pos_step = %lld, pos_integral = %lld, position = %lld.\n", pos_step, pos_integral, position);
        #endif
    }
    else
    {
        position = tmp_pos + init_pos;
    }

    //New setpoint
    return position;
}

//****************************************************************************
// Private Function(s)
//****************************************************************************

//Computes all the parameters for a new trapezoidal motion trajectory
//Called by trapez_gen_motion_1()
static long long trapez_compute_params(long long pos_i, long long pos_f, long long spd_max, long long a)
{
    long long tmp = 0, i = 0;

    //Sign
    if(pos_f < pos_i)
        sign = -1;
    else
        sign = 1;

    acc = a;
    init_pos = pos_i;
    d_pos = pos_f - pos_i; //Difference in position
    d_spd = spd_max; //Difference in speed
    a_t = (ACC_FACTOR*d_spd) / a; //How long do we accelerate?
    a_t_discrete = a_t * TRAPEZ_ONE_OVER_DT / ACC_FACTOR; // (in ticks)
    //a_t_discrete = a_t; //Simplification of *100/100
    spd_inc = (sign*SPD_FACTOR*d_spd) / a_t_discrete; //Every tick, increase spd by

    #ifdef DEBUGGING_OUTPUT
    printf("d_spd = %lld, a_t_discrete = %lld, spd_inc = %lld, d_pos = %lld.\n", d_spd, a_t_discrete, spd_inc, d_pos);
    #endif

    acc_pos = 0;
    for(i = 0; i < a_t_discrete; i++)
    {
        tmp += spd_inc; //tmp = i*spd_inc;
        acc_pos = acc_pos + tmp;
    }
    acc_pos = acc_pos / (SPD_FACTOR * TRAPEZ_ONE_OVER_DT); //Combine terms

    #ifdef DEBUGGING_OUTPUT
    printf("acc_pos = %lld (2x = %lld), %f%% of d_pos.\n", acc_pos, (2*acc_pos), (float)(2*acc_pos*100/d_pos));
    #endif

    return 0;
}

As you can see, I'm multiplying everything by a big factor to use integer math (way more efficient than floating-point math) and I scale it back down once I'm done computing. At runtime, in my time-shared while() loop (see Managing timing: how to sequence tasks for more details) I can get a new setpoint and use it for my controllers. The trajectory is only calculated when I initiate a new motion (usually after receiving a command from FlexSEA-Manage or -Plan).
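The integer-scaling approach described above (multiply speeds by a large factor, accumulate in integers, divide back down at the end) can be sketched in a few lines of Python. This is a simplified model for illustration, not the FlexSEA code itself; the tick counts and increments below are made-up example values:

```python
# Simplified integer trapezoidal profile: accelerate for accel_ticks, cruise
# for cruise_ticks, then decelerate for accel_ticks. SPD_FACTOR mimics the
# C code's scaling so all arithmetic stays in integers until the final division.
SPD_FACTOR = 10000

def trapez_positions(accel_ticks, cruise_ticks, spd_inc):
    spd = 0          # scaled speed (position counts * SPD_FACTOR per tick)
    integral = 0     # scaled position integral
    positions = []
    total = 2 * accel_ticks + cruise_ticks
    for step in range(total):
        if step < accel_ticks:
            spd += spd_inc                      # ramp up
        elif step >= accel_ticks + cruise_ticks:
            spd -= spd_inc                      # ramp down
        integral += spd
        positions.append(integral // SPD_FACTOR)  # scale back to counts
    return positions

# 2 counts/tick/tick acceleration, 5 ticks up, 10 ticks cruise, 5 ticks down
pos = trapez_positions(accel_ticks=5, cruise_ticks=10, spd_inc=2 * SPD_FACTOR)
print(pos[-1])  # -> 150: the profile integrates to 150 counts for this example
```

Keeping everything in integers avoids floating-point operations in the control loop; the rounding issues mentioned above come from where the division by the scale factor happens.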
//Case 5: Quadrature encoder & Position setpoint
case 5:

    #ifdef USE_QEI1
    //Refresh encoder readings
    encoder_read();
    #endif //USE_QEI1

    #ifdef USE_TRAPEZ
    if((ctrl.active_ctrl == CTRL_POSITION) || (ctrl.active_ctrl == CTRL_IMPEDANCE))
    {
        //Trapezoidal trajectories (can be used for both Position & Impedance)
        ctrl.position.setp = trapez_get_pos(steps); //New setpoint
    }
    #endif //USE_TRAPEZ

    break;

//Case 6: P & Z controllers, 0 PWM
case 6:

    #ifdef USE_TRAPEZ
    if(ctrl.active_ctrl == CTRL_POSITION)
    {
        motor_position_pid(ctrl.position.setp, ctrl.position.pos);
    }
    else if(ctrl.active_ctrl == CTRL_IMPEDANCE)
    {
        //Impedance controller
        motor_impedance_encoder(ctrl.impedance.setpoint_val, ctrl.impedance.actual_val);
    }
    #endif //USE_TRAPEZ

    //If no controller is used the PWM should be 0:
    if(ctrl.active_ctrl == CTRL_NONE)
    {
        motor_open_speed_1(0);
    }

    break;

And that's it! You now have a complete example of a trajectory generator, both in Matlab and C, that works in real life. Now, please note that a better strategy is to use s-curve trajectories to minimize the jerk. I didn't implement that yet, but feel free to contribute to FlexSEA by coding it! Some reference here: On Algorithms for Planning S-curve Motion Profiles.

FlexSEA-Plan - 09/07/2015 at 16:44 • 0 comments

The initial plan was to design a custom embedded computer with only the features required for our application. To start experimenting before designing, the BeagleBone Black was selected. It is economical ($55), widely available, open-source hardware (with full documentation available) and its processor, the TI AM3358, has two Programmable Realtime Units (PRU) that can be used to efficiently communicate with peripherals, making it a perfect reference for a custom design. By removing multimedia features and optimizing connectors its size (89 x 55 x 15.4mm) can be greatly reduced. While the design efforts were focused on FlexSEA-Manage and FlexSEA-Execute, the Internet of Things (IoT) wave grew stronger.
Smaller embedded computers were released, with price tags low enough to be embedded in typical appliances. One example is the Intel Edison. At 35 x 25 x 4mm it has a 500MHz processor, 1GB of RAM, 4GB of FLASH, Bluetooth and WiFi. It was decided not to design a custom embedded computer but rather to use a standard communication interface (SPI) that would allow the user to select any product on the market. Processing power can easily be added to the FlexSEA system as new embedded computers become available.

Working toward FlexSEA-Execute 0.2 - 09/06/2015 at 21:34 • 0 comments

The evolution of FlexSEA prototypes - 09/06/2015 at 18:42 • 0 comments

It's Friday! Time for some messy prototyping/testing pictures. - 09/04/2015 at 19:18 • 0 comments

What you see in my thesis and in my project logs is the final result. To keep it real, I decided to share with you some "action shots" taken during testing and prototyping of the different FlexSEA boards. This one doesn't look too bad… until you realize that I'm holding the motor shaft fixed with vise-grips, a pen and electrical tape. A few weeks after that I designed the test bench that you can see in Current Controller: Software. To test the circuits I usually hot-glue them to scrap pieces of MDF. No time for screws! Ugly, but functional. Here you can see a FlexSEA-Execute board covered with test probes, an RC-Hobby Wattmeter and a big flywheel used to characterize the BLDC motor at high currents. The tall bolts act as a cage in case the flywheel… well, starts flying. And finally, sometimes there is no time for wire management or mechanical integration! Have a great weekend!

MOSFET Power Dissipation and Temperature - 09/04/2015 at 19:03 • 0 comments

In certain designs we want to use a clutch to hold joints still with a minimal amount of power. The power required to engage an electro-magnetic clutch is higher than the power required to keep it locked.
We are using a MOSFET switch with PWM to control the average voltage applied to the clutch. What power rating do we need for this MOSFET?

Side note: why use a high-side P-MOSFET? The use of a high-side switch is preferred because it is typical to link the metal chassis of prostheses to ground, and some clutches have their casing grounded. Using a high-side switch can simplify the electromechanical integration. The power requirements being low, it is possible to use a P-Channel MOSFET as a switch (N-MOSFETs have better specs and are always used for high-power applications, such as the motor bridge). D13 is used as a free-wheeling diode for inductive loads, as a protection for Q3.

P-MOSFET power dissipation: The clutch used in the experimental setup is rated for 24V 250mA 6W. The unit in hand was tested at 242mA. To accommodate bigger clutches (and other output devices) the calculations will be done for 24V 10W (417mA), used at 10V. The current at 10V will be 174mA. Using an FDN5618P P-MOSFET:

Where:
- ILOAD: 174mA
- VOUT/VIN = D = 10V/24V = 0.42
- RDS(ON) = 0.315 Ω (worst case)
- CRSS: 19pF
- VIN: 24V
- fSW: 20kHz
- Igate: 10.9mA

We obtain a conduction loss of about 4.0mW and a switching loss of about 3.5mW. The total power dissipation is 7.5mW. If the clutch is powered at 24V 10W (no switching) the dissipation will be 55mW. The thermal resistance of the SuperSOT-3 package, from junction to ambient, is 270°C/W. 49°C is way below the maximum junction temperature, so we do not need to worry about thermal constraints and we can use the FDN5618P P-MOSFET for this application. I used a clutch as an example, but keep in mind that the math is the same for other output devices. Note: Formulas based on
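Under the stated assumptions, the arithmetic is easy to check. The formulas below use the common app-note forms for conduction and CRSS-based switching loss (the exact reference is cut off above, so treat the formula shapes as an assumption):

```python
# All values taken from the text above
I_LOAD = 0.174      # A, load current at 10V output
D = 10 / 24         # duty cycle, ~0.42
RDS_ON = 0.315      # ohms, worst case
CRSS = 19e-12       # F, reverse transfer capacitance
V_IN = 24.0         # V
F_SW = 20e3         # Hz, PWM frequency
I_GATE = 10.9e-3    # A, gate drive current

# Conduction loss: I^2 * RDS(on) * D  (~4.0 mW)
p_conduction = I_LOAD**2 * RDS_ON * D
# Switching loss: CRSS * VIN^2 * fSW * ILOAD / IGATE  (~3.5 mW)
p_switching = CRSS * V_IN**2 * F_SW * I_LOAD / I_GATE
print(round((p_conduction + p_switching) * 1e3, 1))  # ~7.5 mW total

# Worst case: clutch driven DC (no switching) at the full 417 mA
p_dc = 0.417**2 * RDS_ON                 # ~55 mW
# SuperSOT-3 junction-to-ambient thermal resistance: 270 degC/W
temp_rise = p_dc * 270                   # ~15 degC above ambient
print(round(p_dc * 1e3), round(temp_rise, 1))
```

Even the worst-case DC figure only raises the junction roughly 15°C above ambient, which is consistent with the conclusion that thermal limits are not a concern here.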
https://hackaday.io/project/5765/logs
CC-MAIN-2021-17
en
refinedweb
35 Production Concerns & Redis

One of the most exciting parts of programming is sharing what you've created with the world. For web applications, this usually means deploying your project to a server that is accessible via the internet. Web servers can be dedicated machines in a data center, containers in a cloud or even a Raspberry Pi sitting in your closet. As long as your server can run Swift and has a connection to the internet, you can use it to deploy Vapor applications. In this chapter, you'll learn the advantages and disadvantages of some common deployment methods for Vapor. You'll also learn how to properly optimize, configure and monitor your applications to increase efficiency and uptime.

Using environments

Every instance of Application has an associated Environment. Each environment has a String name. Common environments include: production, development, and testing. You can retrieve the current environment from the environment property of Application.

print(req.application.environment) // "production"

For the most part, the environment is there for you to use as you wish while configuring your application. However, some parts of Vapor will behave differently when running in a release environment. Some differences include hiding debug information in 500 errors and reducing the verbosity of error logs. Because of this, make sure you are using the production environment when running your application in production.

Choosing an environment

Most templates include code to detect the current environment when the application runs.
If you open main.swift in your project's Run module, you'll see something similar to the following:

import App
import Vapor

var env = try Environment.detect()
try LoggingSystem.bootstrap(from: &env)
let app = Application(env)
defer { app.shutdown() }
try configure(app)
try app.run()

swift run Run serve --env development
$ swift run Run serve -e prod

Compiling with optimizations

While developing your application, you'll usually compile code using Swift's debug build mode. Debug build mode is fast and includes useful debug information in the resulting binary. Xcode can use this information later to provide more information about fatal errors and breakpoint debugging.

Building release in Xcode

You enable release build mode in Xcode using the scheme editor. To build in release mode, edit the scheme for your app's executable target. Then, select Release under Build Configuration.

Building release using SwiftPM

When deploying to Linux, you'll need to use SwiftPM to compile release executables since Xcode is not available. By default, SwiftPM compiles in debug build mode. To specify release mode, append -c release to your build command.

swift build -c release
swift test -c release

Note on testing

Building and testing your code regularly in production-like environments is important for catching issues early. Some modules you will use, like Foundation, have different implementations depending on the platform. Subtle differences in implementation can cause bugs in your code. Sometimes, an API's implementation may not yet exist for a platform. Container environments like Docker help you address this by making it easy to test your code on platforms different from your host machine, such as testing on Linux while developing on macOS.

Using Docker

Docker is a great tool for testing and deploying your Vapor applications. Deployment steps are coded into a Dockerfile you can commit to source control alongside your project.
You can execute this Dockerfile to build and run instances of your app locally for testing or on your deployment server for production. This has the advantage of making it easy to test deployments, create new ones and track changes to how you deploy your code.

Process monitoring

To run a Vapor application, you simply need to launch the executable generated by SwiftPM.

swift build -c release
.build/release/Run serve -e prod

Supervisor

Supervisor, also called supervisord, is a popular process monitor for Linux. This program allows you to register processes that you would like to start and stop on demand. If one of those processes crashes, Supervisor will automatically restart it for you. It also makes it easy to store the process's stdout and stderr in /var/log for easy access.

apt-get install supervisor
systemctl restart supervisor

// 1
[program:my-app]
command=/path/to/my-app/.build/release/Run serve -e prod
// 2
autostart=true
autorestart=true
// 3
stderr_logfile=/var/log/my-app.err.log
stdout_logfile=/var/log/my-app.out.log

supervisorctl reread
supervisorctl update

Systemd

Another alternative that doesn't require you to install additional software is called systemd. It's a standard part of the Linux versions that Swift supports. For more on how to configure your app using systemd, see Chapter 34, "Deploying with AWS".

Reverse Proxies

Regardless of where or how you deploy your Vapor application, it's usually a good idea to host it behind a reverse proxy like nginx. nginx is an extremely fast, battle-tested and easy-to-configure HTTP server and proxy. While Vapor supports directly serving HTTP requests, proxying behind nginx can provide increased performance, security, and ease-of-use. nginx, for example, can provide support for TLS (SSL), public file serving and HTTP/2.

Installing Nginx

nginx is usually installed using APT on Ubuntu but may vary depending on your deployment method.
apt-get update
apt-get install nginx

systemctl start nginx
systemctl restart nginx
systemctl stop nginx

server {
    ## 1
    server_name hello.com;
    ## 2
    listen 80;
    ## 3
    root /home/vapor/Hello/Public/;
    try_files $uri @proxy;
    ## 4
    location @proxy {
        ## 5
        proxy_pass;
        ## 6
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        ## 7
        proxy_connect_timeout 3s;
        proxy_read_timeout 10s;
    }
}

Logging

Using Swift's logging API, you can write log messages from any route handler:

app.get("log-test") { req -> HTTPStatus in
    req.logger.info("The route was called")
    return .ok
}

Horizontal scalability

Finally, one of the most important concerns in designing a production-ready app is that of scalability. As your application's user base grows and traffic increases, how will you keep up with demand? What will be your bottlenecks? When first starting out, a reasonable solution can be to increase your server's resources as traffic increases: adding RAM, a better CPU, more disk space, etc. This is commonly referred to as scaling vertically.

Load balancing

Now that you understand some of the benefits of horizontal scaling, you may be wondering how it actually works. The key to this concept is load balancers. Load balancers are lightweight, fast programs that sit in front of your application's servers. When a new request comes in, the load balancer chooses one of your servers to send the request to.

Sessions with Redis

To demonstrate how this works in an app, download the starter project for this chapter. The project is based on the TIL app from the first sections of this book. Open the project in Xcode and build the application.

import Redis

// 1
let redisHostname = Environment.get("REDIS_HOSTNAME") ??
    "localhost"
// 2
let redisConfig = try RedisConfiguration(hostname: redisHostname)
// 3
app.redis.configuration = redisConfig
app.sessions.use(.redis)

# 1
docker run --name postgres \
  -e POSTGRES_DB=vapor_database \
  -e POSTGRES_USER=vapor_username \
  -e POSTGRES_PASSWORD=vapor_password \
  -p 5432:5432 -d postgres

# 2
docker run --name redis -p 6379:6379 -d redis

Where to go from here?

You now understand the common pitfalls to avoid when moving your Swift web application to production. It's time to put the best practices and useful tools listed here to use. Here are some additional resources that should prove invaluable as you continue to hone your skills:
https://www.raywenderlich.com/books/server-side-swift-with-vapor/v3.0/chapters/35-production-concerns-redis
1. Introduction

In this tutorial, we'll implement two linked list reversal algorithms in Java.

2. Linked List Data Structure

A linked list is a linear data structure in which a pointer in each element determines the order. Each element of a linked list contains a data field to store the list data and a pointer field to point to the next element in the sequence. Also, we can use a head pointer to point to the start element of a linked list:

After we reverse the linked list, the head will point to the last element of the original linked list, and the pointer of each element will point to the previous element of the original linked list:

In Java, we have a LinkedList class to provide a doubly-linked list implementation of the List and Deque interfaces. However, we'll use a general singly-linked list data structure in this tutorial.

Let's first start with a ListNode class to represent an element of a linked list:

public class ListNode {
    private int data;
    private ListNode next;

    ListNode(int data) {
        this.data = data;
        this.next = null;
    }

    // standard getters and setters
}

The ListNode class has two fields:

- An integer value to represent the data of the element
- A pointer/reference to the next element

A linked list may contain multiple ListNode objects. For example, we can construct the above sample linked list with a loop:

ListNode constructLinkedList() {
    ListNode head = null;
    ListNode tail = null;
    for (int i = 1; i <= 5; i++) {
        ListNode node = new ListNode(i);
        if (head == null) {
            head = node;
        } else {
            tail.setNext(node);
        }
        tail = node;
    }
    return head;
}

3. Iterative Algorithm Implementation

Let's implement the iterative algorithm in Java:

ListNode reverseList(ListNode head) {
    ListNode previous = null;
    ListNode current = head;
    while (current != null) {
        ListNode nextElement = current.getNext();
        current.setNext(previous);
        previous = current;
        current = nextElement;
    }
    return previous;
}

In this iterative algorithm, we use two ListNode variables, previous and current, to represent two adjacent elements in the linked list. For each iteration, we reverse these two elements and then shift to the next two elements. In the end, the current pointer will be null, and the previous pointer will be the last element of the old linked list. Therefore, previous is also the new head pointer of the reversed linked list, and we return it from the method.

We can verify this iterative implementation with a simple unit test:

@Test
public void givenLinkedList_whenIterativeReverse_thenOutputCorrect() {
    ListNode head = constructLinkedList();
    ListNode node = head;
    for (int i = 1; i <= 5; i++) {
        assertNotNull(node);
        assertEquals(i, node.getData());
        node = node.getNext();
    }

    node = reverseList(head);
    for (int i = 5; i >= 1; i--) {
        assertNotNull(node);
        assertEquals(i, node.getData());
        node = node.getNext();
    }
}

In this unit test, we first construct a sample linked list with five nodes. Also, we verify that each node in the linked list contains the correct data value. Then, we call the iterative function to reverse the linked list. Finally, we check the reversed linked list to make sure the data are reversed as expected.

4. Recursive Algorithm Implementation

Now, let's implement the recursive algorithm in Java:

ListNode reverseListRecursive(ListNode head) {
    if (head == null) {
        return null;
    }
    if (head.getNext() == null) {
        return head;
    }
    ListNode node = reverseListRecursive(head.getNext());
    head.getNext().setNext(head);
    head.setNext(null);
    return node;
}

In the reverseListRecursive function, we recursively visit each element in the linked list until we reach the last one. This last element will become the new head of the reversed linked list. Also, we append the visited element to the end of the partially reversed linked list.
Similarly, we can verify this recursive implementation with a simple unit test:

@Test
public void givenLinkedList_whenRecursiveReverse_thenOutputCorrect() {
    ListNode node = reverseListRecursive(constructLinkedList());
    for (int i = 5; i >= 1; i--) {
        assertNotNull(node);
        assertEquals(i, node.getData());
        node = node.getNext();
    }
}

5. Conclusion

In this tutorial, we implemented two algorithms to reverse a linked list. As always, the source code for the article is available over on GitHub.
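As a quick end-to-end sanity check outside of a test framework, the iterative reversal can also be exercised in a plain main method. This is a self-contained sketch with a stripped-down ListNode (the class and field names here are illustrative, not the article's exact code):

```java
public class ReverseDemo {
    static class ListNode {
        int data;
        ListNode next;
        ListNode(int data) { this.data = data; }
    }

    // Same iterative reversal as in the article, using fields directly
    static ListNode reverseList(ListNode head) {
        ListNode previous = null;
        ListNode current = head;
        while (current != null) {
            ListNode nextElement = current.next;
            current.next = previous;
            previous = current;
            current = nextElement;
        }
        return previous;
    }

    public static void main(String[] args) {
        // Build 1 -> 2 -> 3 -> 4 -> 5
        ListNode head = new ListNode(1);
        ListNode tail = head;
        for (int i = 2; i <= 5; i++) {
            tail.next = new ListNode(i);
            tail = tail.next;
        }

        StringBuilder out = new StringBuilder();
        for (ListNode n = reverseList(head); n != null; n = n.next) {
            out.append(n.data);
        }
        System.out.println(out); // prints 54321
    }
}
```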
https://www.baeldung.com/java-reverse-linked-list
EOS API module (in CamelCase 🐫)

Application programming interface for using the EOS blockchain via the RPC API provided by Block Producer Nodes. This is for read-only API calls.

This project wraps the official eosio/eosjs-api to provide camelcase output. It only works with the await/async and promise code styles; there's no support for the callback style. It is a work in progress.

Getting Started

yarn add @eoscostarica/eosjs-camel-api
# or
npm install -S @eoscostarica/eosjs-camel-api

const eosCamelApi = require('@eoscostarica/eosjs-camel-api')
const api = eosCamelApi() // same options object that eosio/eosjs-api supports

const logInfo = async () => {
  const info = await api.getInfo({})
  console.log(info)
}
// { serverVersion: 'ad4ba283',
//   chainId: '038f4b0fc8ff18a4f0842a8f0564611f6e96e8535901dd45e43ac8691a1c4dca',
//   headBlockNum: 8448809,
//   lastIrreversibleBlockNum: 8448494,
//   lastIrreversibleBlockId: '0080e9eefdcfb032231d2c8cc5c850a004034fb85831febc22d55e63723da590',
//   headBlockId: '0080eb294f506de95c636e690cf523c7895987114d32bb87378ff13b322d2904',
//   headBlockTime: '2018-08-06T02:32:26.000',
//   headBlockProducer: 'acryptolions',
//   virtualBlockCpuLimit: 200000000,
//   virtualBlockNetLimit: 1048576000,
//   blockCpuLimit: 199900,
//   blockNetLimit: 1048576 }

eosjs-camel-api functions receive both snakecase and camelcase arguments and always return camelcase objects. It defaults to the Jungle Testnet via the endpoint.

Camel Namespace Functions

eosjs-camel-api exposes functions that are not part of eosjs-api in the camel namespace. E.g.:

const eosCamelApi = require('@eoscostarica/eosjs-camel-api')
const jungleApi = eosCamelApi()
const mainNetApi = eosCamelApi({ /* mainnet options */ })
console.log(…) // { httpEndpoint: '' }
console.log(…) // { httpEndpoint: '' }

Contributing

We follow the open source collaborative etiquette and the standardjs code style. Read EOS Costa Rica's Open Source Contributing Guidelines for more detail.

Bug Reporting

Please report bugs big and small by opening an issue. No possible bug report is too small.
Maintainers

About EOS Costa Rica

EOS Blockchain is aiming to become a decentralized operating system which can support large-scale decentralized applications. EOS Costa Rica supports the EOS.io community by maintaining and contributing to open source initiatives, meetups and workshops. We challenge ourselves to provide the EOS platform with a strong geographical and political diversity by running the most robust EOS Block Producer possible from Costa Rica; we pledge to leverage our talent, experience, and sustainable Internet resources to meet such an important challenge.

License

MIT © EOS Costa Rica
https://www.npmjs.com/package/@eoscostarica/eosjs-camel-api
In our last post about REST APIs, we learned the basics of how REST APIs function. In this post, we will see how we can develop our own REST APIs. We will use Python and Flask for that. If you are new to Python, we have you covered with our Python: Learning Resources and Guidelines post. Python / Flask code is pretty simple and easy to read and understand. So if you just want to grasp the best practices of REST API design but lack Python skills, don't worry, you will understand most of it. However, I would recommend you try out the code hands-on. Writing code by hand is a very effective learning method. We learn more by doing than we learn by reading or watching.

Installing Flask and Flask-RESTful

We will be using the Flask framework along with Flask-RESTful. Flask-RESTful is an excellent package that makes building REST APIs with Flask both easy and pleasant. Before we can start building our app, we first need to install these packages.

pip install flask
pip install flask-restful

Once we have the necessary packages installed, we can start thinking about our API design.

RESTful Mailing List

You see, I just recently started this Polyglot.Ninja() website and I am getting some readers to my site. Some of my readers have shown very keen interest in receiving regular updates from this blog. To keep them posted, I have been thinking about building a mailing list where people can subscribe with their email address. These addresses get stored in a database, and then when I have new posts to share, I email them. Can we build this mailing list "service" as a REST API?

The way I imagine it, we will have a "subscribers" collection with many subscribers. Each subscriber will provide us with their full name and email address. We should be able to add new subscribers, update them, delete them, list them and get individual data. Sounds simple? Let's do this!

Choosing a sensible URL

We have decided to build our awesome mailing list REST API.
For development and testing purposes, we will run the app on my local machine, so the base URL would be. This part will change when we deploy the API on a production server, so we probably don't need to worry about it. However, for an API, the URL path should make sense. It should clearly state its intent. A good choice would be something like /api/ as the root URL of the API. And then we can add the resources, so for subscribers, it can be /api/subscribers. Please note that it's acceptable to have the resource part either singular (i.e. /api/subscriber) or plural (/api/subscribers). However, most of the people I have talked to, and the articles I have read, prefer the plural form.

API Versioning: Header vs URL

We need to think about the future of the API beforehand. This is our first iteration. In the future, we might want to introduce newer changes. Some of those changes can be breaking changes. If people are still using some of the older features which you can't break while pushing new changes, it's time you thought about versioning your API. It is always best practice to version your API from the beginning. The first version of the API can be called v1.

There are two common methods of versioning APIs: 1) passing a header that specifies the desired version of the API, or 2) putting the version info directly in the URL. There are arguments and counter-arguments for both approaches. However, versioning using the URL is easier and more often seen in common public APIs. So we accommodate the version info in our URL and make it /api/v1/subscribers.

As discussed in our previous REST article, we will have two types of resources here: "subscriber collection" (i.e. /subscribers) and "individual subscriber" elements (i.e. /subscribers/17). With the design decided upon and a bigger picture in our heads, let's get to writing some code.

RESTful Hello World

Before we start writing our actual logic, let's first get a hello world app running.
This will make sure that we have got everything set up properly. If we head over to the Flask-RESTful Quickstart page, we can easily obtain a hello world code sample from there.

from flask import Flask
from flask_restful import Resource, Api

app = Flask(__name__)
api = Api(app)

class HelloWorld(Resource):
    def get(self):
        return {'hello': 'world'}

api.add_resource(HelloWorld, '/')

if __name__ == '__main__':
    app.run(debug=True)

Let's save this code in a file named main.py and run it like this:

python main.py

If the code runs successfully, our app will launch a web server here –. Let's break down the code a bit:

- We import the necessary modules (Flask and Flask-RESTful stuff).
- Then we create a new Flask app and wrap it in Api.
- Afterwards, we declare our HelloWorld resource, which extends Resource.
- On our resource, we define what the get HTTP verb will do.
- Add the resource to our API.
- Finally, run the app.

What happens here: when we write our Resources, Flask-RESTful generates the routes and the view handlers necessary to represent the resource over RESTful HTTP. Now let's see, if we visit the URL, do we get the message we set? If we visit the URL, we would see the expected response:

{
    "hello": "world"
}

Trying out REST APIs

While we develop our API, it is essential that we can try out / test the API to make sure it's working as expected. We need a way to call our API and inspect the output. If you're a command line ninja, you would probably love to use curl. Try this on your terminal:

➜ curl -X GET
{
    "hello": "world"
}
➜

This would send a GET request to the URL and curl would print out the response on the terminal. It is a very versatile tool and can do a lot of amazing things. If you would like to use curl on a regular basis, you may want to dive deeper into its options / features / use cases. These can help you:

However, if you like the command line but want a friendlier and easier command line tool, definitely look at httpie.
Now what if you're not a CLI person? And we can agree that sometimes a GUI can be much more productive to use. Don't worry, Postman is a great app! If you are developing and testing a REST API, Postman is a must-have app!

Back to Business

We now have a basic skeleton ready and we know how to test our API. Let's start writing our mailing list logic. Let's first lay out our resources with some sample data. For this example, we shall not bother about persisting the data to some database. We will store the data in memory. Let's use a list as our subscriber data source for now.

from flask import Flask
from flask_restful import Resource, Api

app = Flask(__name__)
api = Api(app, prefix="/api/v1")

users = [
    {"email": "[email protected]", "name": "Masnun", "id": 1}
]

class SubscriberCollection(Resource):
    def get(self):
        return {"msg": "All subscribers"}

    def post(self):
        return {"msg": "We will create new subscribers here"}

class Subscriber(Resource):
    def get(self, id):
        return {"msg": "Details about subscriber {}".format(id)}

    def put(self, id):
        return {"msg": "Update subscriber {}".format(id)}

    def delete(self, id):
        return {"msg": "Delete subscriber {}".format(id)}

api.add_resource(SubscriberCollection, '/subscribers')
api.add_resource(Subscriber, '/subscribers/<int:id>')

if __name__ == '__main__':
    app.run(debug=True)

What changes are notable here?

- Note we added a prefix to the Api for versioning reasons. All our URLs will be prefixed by /api/v1.
- We created a list named users to store the subscribers.
- We created two resources: SubscriberCollection and Subscriber.
- Defined the relevant HTTP method handlers. For now, the response just describes the intended purpose of that method.
- We add both resources to our API. Note how we added the id parameter to the URL. This id is available to all the methods defined on Subscriber.

Fire up the local development server and try out the API. Works fine? Let's move on!

Parsing Request Data

We have to accept, validate and process user data. In our case, that would be the subscriber information. Each subscriber would have an email address, a full name and an ID. If we used a database, this ID would have been auto-generated. Since we are not using a database, we will accept it as part of the incoming request. For processing request data, the RequestParser can be very helpful.
We will use it in our POST calls to /api/subscribers/ to validate incoming data and store the subscriber if the data is valid. Here's the updated code so far:

from flask import Flask
from flask_restful import Resource, Api
from flask_restful.reqparse import RequestParser

app = Flask(__name__)
api = Api(app, prefix="/api/v1")

users = [
    {"email": "[email protected]", "name": "Masnun", "id": 1}
]

subscriber_request_parser = RequestParser(bundle_errors=True)
subscriber_request_parser.add_argument("name", type=str, required=True, help="Name has to be valid string")
subscriber_request_parser.add_argument("email", required=True)
subscriber_request_parser.add_argument("id", type=int, required=True, help="Please enter valid integer as ID")

class SubscriberCollection(Resource):
    def get(self):
        return {"msg": "All subscribers"}

    def post(self):
        args = subscriber_request_parser.parse_args()
        return {"msg": "Subscriber added", "subscriber_data": args}

Here we have made two key changes:

- We created a new instance of RequestParser and added arguments so it knows which fields to accept and how to validate them.
- We added the request parsing code in the post method. If the request is valid, it will return the validated data. If the data is not valid, we don't have to worry about it; the error message will be sent to the user.

Testing the request parser

If we try to pass invalid data, we will get error messages. For example, if we request without any data, we will get something like this:

{
    "message": {
        "email": "Missing required parameter in the JSON body or the post body or the query string",
        "id": "Please enter valid integer as ID",
        "name": "Name has to be valid string"
    }
}

But if we pass valid data, everything works fine. Here's an example of valid data:

This will get us the following response:

{
    "msg": "Subscriber added",
    "subscriber_data": {
        "email": "[email protected]",
        "id": 3,
        "name": "John Smith"
    }
}

Cool, now we know how to validate user data 🙂 Please remember: never trust user input. Always validate and sanitize user data to avoid security risks. Next, we need to implement the user-level updates.

Subscriber Views

We went ahead and completed the code for the rest of the methods.
The updated code now looks like this:

from flask import Flask
from flask_restful import Resource, Api
from flask_restful.reqparse import RequestParser

app = Flask(__name__)
api = Api(app, prefix="/api/v1")

users = [
    {"email": "[email protected]", "name": "Masnun", "id": 1}
]

subscriber_request_parser = RequestParser(bundle_errors=True)
subscriber_request_parser.add_argument("name", type=str, required=True, help="Name has to be valid string")
subscriber_request_parser.add_argument("email", required=True)
subscriber_request_parser.add_argument("id", type=int, required=True, help="Please enter valid integer as ID")

def get_user_by_id(user_id):
    for x in users:
        if x.get("id") == int(user_id):
            return x

class SubscriberCollection(Resource):
    def get(self):
        return {"msg": "All subscribers"}

    def post(self):
        args = subscriber_request_parser.parse_args()
        users.append(args)
        return {"msg": "Subscriber added", "subscriber_data": args}

class Subscriber(Resource):
    def get(self, id):
        user = get_user_by_id(id)
        if not user:
            return {"error": "User not found"}
        return user

    def put(self, id):
        args = subscriber_request_parser.parse_args()
        user = get_user_by_id(id)
        if user:
            users.remove(user)
            users.append(args)
        return args

    def delete(self, id):
        user = get_user_by_id(id)
        if user:
            users.remove(user)
        return {"message": "Deleted"}

api.add_resource(SubscriberCollection, '/subscribers')
api.add_resource(Subscriber, '/subscribers/<int:id>')

if __name__ == '__main__':
    app.run(debug=True)

What did we do?

- We added a helper function to find users in the list by id.
- The update view works: we can update the user data. In our case, we're deleting the old data and adding the new data. In real life, we would use UPDATE on the database.
- The delete method works fine!

Feel free to go ahead and test the endpoints!

HTTP Status Codes

Our mailing list is functional now. It works! We have made good progress so far. But there's something very important that we haven't done yet: our API doesn't use proper HTTP status codes. When we send a response back to the client, we should also give it a status code. This code helps the client better interpret the results. Have you ever visited a website and seen a "404 Not Found" error? Well, 404 is the status code; it means the document / resource you were looking for is not available. Seen any "500 Internal Server Error" lately? Now you know what that 500 means. We can see the complete list of HTTP status codes here:.
Also, depending on whether you're a cat person or a dog enthusiast, these websites can explain things better:

So let's fix our code and start sending appropriate codes. We can return an optional status code from our views. So when we add a new subscriber, we can send 201 Created like this:

return {"msg": "Subscriber added", "subscriber_data": args}, 201

And when we delete the user, we can send 204:

return None, 204

What's next?

We have made decent progress today. We have designed and implemented a very basic API. We chose a sensible URL, considered API versioning, did input validation and sent appropriate HTTP status codes. We have done well. But what we have seen here is a very simple implementation. There is a lot of scope for improvement here. For example, our API is still open to the public; there is no authentication enabled. So anyone with malicious intentions can flood / spam our mailing list database. We need to secure the API in that regard. We also don't have a home page that uses HATEOAS to guide the clients. We don't yet have documentation. Always remember, documentation is very important. We developers often don't feel like writing documentation, but well-written documentation helps the consumers of your API consume it better and with ease. So do provide excellent docs!

I don't know when, but in our next post on REST APIs, we shall explore more of the wonderful world of API development. And maybe we shall also talk about some microservices? If you would like to know when I post that content, do subscribe to the mailing list. You can find a subscription form on the sidebar. And if you liked the post, do share with your friends 🙂

20 thoughts on "REST API Best Practices: Python & Flask Tutorial"

Great article. For me, this is a really useful template! I like your style: you put a lot of interesting links to expand the subject! Nice.
Thanks

curl -X POST -H "Content-Type: application/json" -d '{"email": "[email protected]", "name": "John Smith", "id": 3}'

Just what i wanted to know! Thank a lot! 😀

Nice tutorials i almost wrote my first Flask Restful API , im looking forward for more insight. Good job Polyglot!!!!!!!!

Thank you! I'm happy that it helped!

Nice Article , really helped scaffold my API, but the separation between the collection and the model made me uncomfortable. Isn't there a way to join the two in only one resource? Is it a bad thing ?

I am not familiar with any built in way to do that. You could create your own "router" and "viewset" like Django REST Framework does.

Really nice tutorial. Got a great grasp of REST and the benefits of using the flask_restful library over just base Flask. I love that the flask_restful library simplifies not having to know how to code all the routes your self. This simplifies my ability to pass this code on to others to expand functionality without having to teach them how to provide HTTP routes. I am looking forward to going through your JWT authentication tutorial next.

I just started with Flask. This is useful, maybe you should submit this tutorial on Hackr.io. I've been using that website for quite a while in order to find recommended programming resources.

Really Nice Tutorial. Good job Polyglot!

Hello, this is very helpful. I finished building a coding interview API through tutorials guide.
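An editorial aside on the status-codes section of the post above: Python's standard library already ships named constants for these codes, which avoids magic numbers like 201 and 204 in your views (a stdlib-only sketch, not code from the tutorial):

```python
from http import HTTPStatus

# Named constants instead of bare 201 / 204 / 404
print(HTTPStatus.CREATED.value, HTTPStatus.CREATED.phrase)        # 201 Created
print(HTTPStatus.NO_CONTENT.value, HTTPStatus.NO_CONTENT.phrase)  # 204 No Content
print(HTTPStatus.NOT_FOUND.value, HTTPStatus.NOT_FOUND.phrase)    # 404 Not Found

# HTTPStatus members compare equal to plain integers, so they can be
# returned anywhere Flask expects a status code,
# e.g. `return data, HTTPStatus.CREATED`
assert HTTPStatus.CREATED == 201
```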
http://polyglot.ninja/rest-api-best-practices-python-flask-tutorial/
FileExists

The FileExists event occurs when the following three conditions hold:

- The TargetFolder or TargetPhysicalFolder property is set.
- The OverwriteExistingFiles property is set to false.
- The target folder already contains a file with the same name as the currently uploaded file.

The FileExists event handler receives two arguments:

- The RadUpload control that initiated the file upload. This argument is of type object, but can be cast to the RadUpload type.
- An UploadedFileEventArgs object. This object has an UploadedFile property that you can use to access the file that could not be automatically saved.

The example below demonstrates how to use the FileExists event to rename the uploaded files:

using Telerik.Web.UI.Upload;
...
protected void RadUpload1_FileExists(object sender, UploadedFileEventArgs e)
{
    int counter = 1;
    UploadedFile file = e.UploadedFile;
    string targetFolder = Server.MapPath(RadUpload1.TargetFolder);
    string targetFileName = System.IO.Path.Combine(targetFolder,
        file.GetNameWithoutExtension() + counter.ToString() + file.GetExtension());
    while (System.IO.File.Exists(targetFileName))
    {
        counter++;
        targetFileName = System.IO.Path.Combine(targetFolder,
            file.GetNameWithoutExtension() + counter.ToString() + file.GetExtension());
    }
    file.SaveAs(targetFileName);
}

Protected Sub RadUpload1_FileExists(ByVal sender As Object, _
        ByVal e As UploadedFileEventArgs) _
        Handles RadUpload1.FileExists
    Dim counter As Integer = 1
    Dim file As UploadedFile = e.UploadedFile
    Dim targetFolder As String = Server.MapPath(RadUpload1.TargetFolder)
    Dim targetFileName As String = System.IO.Path.Combine(targetFolder, _
        file.GetNameWithoutExtension & counter & file.GetExtension)
    While System.IO.File.Exists(targetFileName)
        counter += 1
        targetFileName = System.IO.Path.Combine(targetFolder, _
            file.GetNameWithoutExtension & counter & file.GetExtension)
    End While
    file.SaveAs(targetFileName)
End Sub
https://docs.telerik.com/devtools/aspnet-ajax/controls/upload/server-side-programming/fileexists
It says i need an identifier to put in the parentheses but i don't know what identifier.

import javax.swing.*;
import java.awt.*;

class Main {
  public static void main(String[] agrs) {
    JFrame window = new JFrame("Jason's Canvas"); // title name
    window.setSize(800, 600); // set the size of the window
    window.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); // the method window.setDefaultCloseOperation is specifically for the constant JFrame.EXIT_ON_CLOSE to close the application
    window.setVisible(true); // Function window.setVisible. A function available to window-based terminal objects created via the window API, which toggles the visibility flag
  }

  void drawRectangles(Graphics) {
    Graphics2D g2d = (Graphics2D) g;
    g2d.drawRect(30, 50, 420, 120);
  }
}

CSharpIsGud (924)

Here is a fixed version without any syntax errors, but it opens 2 windows; one of them has a rectangle in it, but it was hard to tell what in the world you were trying to do, as my working version is almost entirely different from the original repl.
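For reference, the compiler error in the question comes from `drawRectangles(Graphics)`: a method parameter needs a name (an identifier) after its type, e.g. `drawRectangles(Graphics g)`, and that `g` is also the variable the method body casts. A headless sketch of the fixed method, drawing onto a BufferedImage instead of opening a window (the class name here is illustrative):

```java
import java.awt.Graphics;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public class RectDemo {
    // The fix: "Graphics g" instead of just "Graphics" --
    // the identifier g is what the compiler was asking for.
    static void drawRectangles(Graphics g) {
        Graphics2D g2d = (Graphics2D) g;
        g2d.drawRect(30, 50, 420, 120);
    }

    public static void main(String[] args) {
        // A BufferedImage provides a Graphics to draw on without a window
        BufferedImage img = new BufferedImage(800, 600, BufferedImage.TYPE_INT_RGB);
        drawRectangles(img.getGraphics());
        System.out.println("rectangle drawn");
    }
}
```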
https://replit.com/talk/ask/It-says-i-need-an-identifier-to-put-in-the-parentheses-but-i-dont-know-what-identifier/27044
TinyTypes

TinyTypes is an npm module that makes it easy for TypeScript and JavaScript projects to give domain meaning to primitive types. It also helps to avoid all sorts of bugs and makes your code easier to refactor. Learn more.

Installation

To install the module from npm:

npm install --save tiny-types

API Docs

API documentation is available at jan-molak.github.io/tiny-types/.

For Enterprise

TinyTypes are available as part of the Tidelift Subscription. The maintainers of TinyTypes and thousands of other packages are working with Tidelift to deliver one enterprise subscription that covers all of the open source you use. If you want the flexibility of open source and the confidence of commercial-grade software, this is for you. Learn more.

Defining Tiny Types

An int on its own is just a scalar with no meaning. With an object, even a small one, you are giving both the compiler and the programmer additional information about what the value is and why it is being used.
‐ Jeff Bay, Object Calisthenics

Single-value types

To define a single-value TinyType, extend from TinyTypeOf<T>():

import { TinyTypeOf } from 'tiny-types';

class FirstName extends TinyTypeOf<string>() {}
class LastName extends TinyTypeOf<string>() {}
class Age extends TinyTypeOf<number>() {}

Every tiny type defined this way has a readonly property value of type T, which you can use to access the wrapped primitive value.
For example:

const firstName = new FirstName('Jan');

firstName.value === 'Jan';

Equals

Each tiny type object has an equals method, which you can use to compare it by value:

const name1 = new FirstName('Jan'),
      name2 = new FirstName('Jan');

name1.equals(name2) === true;

ToString

An additional feature of tiny types is a built-in toString() method:

const name = new FirstName('Jan');

name.toString() === 'FirstName(value=Jan)';

Which you can override if you want to:

class Timestamp extends TinyTypeOf<Date>() {
    toString() {
        return `Timestamp(value=${this.value.toISOString()})`;
    }
}

const timestamp = new Timestamp(new Date());

timestamp.toString() === 'Timestamp(value=2018-03-12T00:30:00.000Z)'

Multi-value and complex types

If the tiny type you want to model has more than one value, or you want to perform additional operations in the constructor, extend from TinyType directly:

import { TinyType } from 'tiny-types';

class Person extends TinyType {
    constructor(public readonly firstName: FirstName,
                public readonly lastName: LastName,
    ) {
        super();
    }
}

You can also mix and match both of the above definition styles:

import { TinyType, TinyTypeOf } from 'tiny-types';

class UserName extends TinyTypeOf<string>() {}

class Timestamp extends TinyTypeOf<Date>() {
    toString() {
        return `Timestamp(value=${this.value.toISOString()})`;
    }
}

abstract class DomainEvent extends TinyTypeOf<Timestamp>() {}

class AccountCreated extends DomainEvent {
    constructor(public readonly username: UserName, timestamp: Timestamp) {
        super(timestamp);
    }
}

const event = new AccountCreated(new UserName('jan-molak'), new Timestamp(new Date()));

Even such complex types still have both the equals and toString methods:

const now = new Date(2018, 2, 12, 0, 30),
      event1 = new AccountCreated(new UserName('jan-molak'), new Timestamp(now)),
      event2 = new AccountCreated(new UserName('jan-molak'), new Timestamp(now));

event1.equals(event2) === true;

event1.toString() ===
'AccountCreated(username=UserName(value=jan-molak), value=Timestamp(value=2018-03-12T00:30:00.000Z))'

Guaranteed runtime correctness

The best way to guarantee runtime correctness of your domain models is to ensure that no tiny type can ever hold invalid data at runtime. This way, when a function receives an instance of a tiny type, it does not need to perform any checks on it and can simply trust that its value is correct. OK, but how do you guarantee that? Let me show you an example.

Imagine that upon registering a customer on your website you need to ask them their age. How would you model the concept of "age" in your system? You might consider using a number for this purpose:

const age = 35;

However, this is far from ideal, as "age" is not just any number: it can't be negative, it has to be an integer, and it's highly unlikely that your customers would ever be 2^53-1 years old.

All that means that there are certain rules that an object representing "age" needs to obey, certain constraints that its value has to meet in order to be considered valid. You might have already guessed that my recommendation to you would be to define a tiny type representing Age, but not just that. You should also take it a step further and use the ensure function together with other predicates to describe the constraints the underlying value has to meet:

import { TinyType, ensure, isDefined, isInteger, isInRange } from 'tiny-types'

class Age extends TinyType {
    constructor(public readonly value: number) {
        super();
        ensure('Age', value, isDefined(), isInteger(), isInRange(0, 125));
    }
}

With a tiny type defined as per the above code sample, you can eliminate entire classes of errors. You also have one place in your system where you define what "age" means.

Serialisation to JSON

Every TinyType defines a toJSON() method, which returns a JSON representation of the object. This means that you can use TinyTypes as Data Transfer Objects.
Single-value TinyTypes are serialised to the value itself:

import { TinyTypeOf } from 'tiny-types';

class FirstName extends TinyTypeOf<string>() {
    static fromJSON = (v: string) => new FirstName(v);
}

const firstName = new FirstName('Jan');

firstName.toJSON() === 'Jan'

Complex TinyTypes are serialised recursively:

import { TinyType, TinyTypeOf } from 'tiny-types';

class FirstName extends TinyTypeOf<string>() {}
class LastName extends TinyTypeOf<string>() {}
class Age extends TinyTypeOf<number>() {}

class Person extends TinyType {
    constructor(
        public readonly firstName: FirstName,
        public readonly lastName: LastName,
        public readonly age: Age,
    ) {
        super();
    }
}

const person = new Person(new FirstName('Bruce'), new LastName('Smith'), new Age(55));

person.toJSON() === {
    firstName: 'Bruce',
    lastName: 'Smith',
    age: 55
}

De-serialisation from JSON

Although you could define standalone de-serialisers, I like to define them as static factory methods on the TinyTypes themselves:

import { TinyTypeOf } from 'tiny-types';

class FirstName extends TinyTypeOf<string>() {
    static fromJSON = (v: string) => new FirstName(v);
}

const firstName = new FirstName('Jan');

FirstName.fromJSON(firstName.toJSON()).equals(firstName) === true

When working with complex TinyTypes, you can use the (experimental) Serialised interface to reduce the likelihood of your custom fromJSON method being incompatible with toJSON:

import { TinyTypeOf, TinyType, Serialised } from 'tiny-types';

class EmployeeId extends TinyTypeOf<number>() {
    static fromJSON = (id: number) => new EmployeeId(id);
}

class DepartmentId extends TinyTypeOf<string>() {
    static fromJSON = (id: string) => new DepartmentId(id);
}

class Allocation extends TinyType {
    static fromJSON = (o: Serialised<Allocation>) => new Allocation(
        EmployeeId.fromJSON(o.employeeId as number),
        DepartmentId.fromJSON(o.departmentId as string),
    )

    constructor(public readonly employeeId: EmployeeId,
                public readonly departmentId: DepartmentId) {
        super();
    }
}

This way de-serialising a complex type
becomes trivial:

    const allocation = new Allocation(new EmployeeId(1), new DepartmentId('engineering'));
    const deserialised = Allocation.fromJSON({ departmentId: 'engineering', employeeId: 1 });

    allocation.equals(deserialised) === true

Although Serialised is by no means 100% foolproof, as it is limited to checking whether your input JSON has the same fields as the object you're trying to de-serialise, it can at least help you to avoid errors caused by typos.

Your feedback matters!

Do you find TinyTypes useful? Give it a star! ★ Found a bug? Need a feature? Raise an issue or submit a pull request. Have feedback? Let me know on twitter: @JanMolak

Before you go

License

TinyTypes library is licensed under the Apache-2.0 license.
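The round trip described above — toJSON() flattening an object to plain values and a static fromJSON() rebuilding the instance — can be sketched without the tiny-types package. The classes below are hand-rolled stand-ins written for illustration; they are not the library's actual implementation:

```typescript
// Hand-rolled sketch of the toJSON()/fromJSON() round trip described above.
// These classes are illustrative stand-ins, not tiny-types' real code.
class FirstName {
  constructor(public readonly value: string) {}
  // Single-value type: serialise to the value itself.
  toJSON(): string { return this.value; }
  static fromJSON(v: string): FirstName { return new FirstName(v); }
  equals(other: FirstName): boolean { return this.value === other.value; }
}

class Person {
  constructor(public readonly firstName: FirstName, public readonly age: number) {}
  // Complex type: serialise recursively, field by field.
  toJSON(): { firstName: string; age: number } {
    return { firstName: this.firstName.toJSON(), age: this.age };
  }
  static fromJSON(o: { firstName: string; age: number }): Person {
    return new Person(FirstName.fromJSON(o.firstName), o.age);
  }
  equals(other: Person): boolean {
    return this.firstName.equals(other.firstName) && this.age === other.age;
  }
}

const person = new Person(new FirstName('Jan'), 30);
const restored = Person.fromJSON(person.toJSON());
// restored.equals(person) holds: the JSON form carries everything needed to rebuild the instance.
```

Keeping each fromJSON next to the toJSON it reverses, as the article suggests, makes it much harder for the two to drift apart.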
https://www.npmjs.com/package/tiny-types
CC-MAIN-2021-17
en
refinedweb
Is there a Python equivalent of "Update All Materials" in the GUI?

The function you are looking for is LinkMaterialWithDatabase(). This is how you can add new material settings with properties taken from the database:

    import s4l_v1.document as document
    import s4l_v1.model as model

    sim = document.AllSimulations[0]
    for ent in model.AllEntities():
        mats = sim.AddMaterialSettings([ent])
        mats.Name = ent.Name
        sim.LinkMaterialWithDatabase(mats, str(ent.MaterialName))

To modify the settings of an existing simulation and update their properties, you can do something like this:

    import s4l_v1.simulation as simulation

    material_settings = [s for s in sim.AllSettings
                         if isinstance(s, simulation.fdtd.MaterialSettings)]
    for mats in material_settings:
        material_name = mats.Name
        sim.LinkMaterialWithDatabase(mats, material_name)
https://forum.zmt.swiss/topic/16/how-to-update-material-settings-using-the-material-database-from-python
CC-MAIN-2021-17
en
refinedweb
pwd.h(0P)                POSIX Programmer's Manual                pwd.h(0P)

PROLOG
       This manual page is part of the POSIX Programmer's Manual. The Linux implementation of this interface may differ (consult the corresponding Linux manual page for details of Linux behavior), or the interface may not be implemented on Linux.

NAME
       pwd.h — password structure

SYNOPSIS
       #include <pwd.h>

DESCRIPTION
       The <pwd.h> header defines the struct passwd structure, which includes at least the members pw_name (user's login name), pw_uid (numerical user ID), pw_gid (numerical group ID), pw_dir (initial working directory), and pw_shell (program to use as shell). It also declares the getpwnam(), getpwnam_r(), getpwuid(), and getpwuid_r() functions, plus endpwent(), getpwent(), and setpwent() on XSI-conformant systems.

APPLICATION USAGE
       None.

RATIONALE
       None.

FUTURE DIRECTIONS
       None.

SEE ALSO
       sys_types.h(0p)

       The System Interfaces volume of POSIX.1‐2008, endpwent(3p), getpwnam(3p), getpwuid(3p)

       Pages that refer to this page: endpwent(3p), getpwnam(3p), getpwnam_r(3p), getpwuid(3p), getpwuid_r(3p)
https://man7.org/linux/man-pages/man0/pwd.h.0p.html
CC-MAIN-2020-50
en
refinedweb
Methods Initialization methods createApp(config) Returns an app object. Used to initialize your app instance. The config object should contain the following keys: App object methods app.dispatch(action) Dispatches an action to Shopify App Bridge. Hosts (like Shopify Admin and Shopify Mobile) can subscribe to actions to listen for these dispatches. app.dispatch( Redirect.toRemote({ url: '', }), ); app.error(callback) Subscribe to all errors, including those that are caused by actions. Returns a method you can use to unsubscribe from all errors. const unsubscribeFromErrors = app.error((data) => { const { type, // the error type action, // the original action including its id message, // additional hints on how to fix the error } = data; // Handle all errors here switch(type) { case Error.ActionType.INVALID_PAYLOAD: // Do something with the error break; } }); // Unsubscribe from all errors unsubscribeFromErrors(); app.getState() Returns a Promise which, when resolved, returns information about your app’s current state, including the currently logged in staff member. app.getState().then((data) => { const { appInfo, loading, modal, navigation, pos, resourcePicker, staffMember, titleBar, toast } = data; }); app.subscribe(callback, id?) Subscribe to all actions. Returns a method you can use to unsubscribe. Arguments: const unsubscribeFromAll = app.subscribe((data) => { // Handle all actions here console.log(data); }); // Unsubscribe from all actions unsubscribeFromAll(); app.subscribe(eventNameSpace, callback, id?) When eventNameSpace or id are provided, this method subscribes to actions of the provided type. 
Arguments:

    const unsubscribeModalOpen = app.subscribe(Modal.Action.OPEN, (data) => {
      // Do something whenever a Modal open action is dispatched
    });

    // Unsubscribe from Modal open actions
    unsubscribeModalOpen();

Platform methods

The following utility methods, available in the app-bridge-utils package, return true or false depending on which platform an embedded app is running on:

isShopifyMobile: Returns true if the app is running on Shopify Mobile.
isShopifyPOS: Returns true if the app is running on Shopify POS.
isShopifyPing: Returns true if the app is running on Shopify Ping.
isMobile: Returns true if any of the conditions above are true.

    import {isMobile} from '@shopify/app-bridge-utils';

    if (isMobile()) {
      // app is running on mobile
    } else {
      // app is running in a desktop web browser
    }
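A detail worth noting in the API above is that every subscribe call returns its own unsubscribe function. That pattern is independent of App Bridge and can be sketched with no dependencies; createBus and all of the names below are ours, chosen for illustration, and are not part of @shopify/app-bridge:

```typescript
// Dependency-free sketch of the subscribe()/unsubscribe pattern used by
// App Bridge. Names (createBus, subscribeTo, ...) are illustrative only.
type Handler = (data: unknown) => void;

function createBus() {
  const all: Handler[] = [];
  const byType = new Map<string, Handler[]>();

  return {
    // Like app.subscribe(callback): hear every dispatched action.
    subscribe(cb: Handler): () => void {
      all.push(cb);
      return () => { const i = all.indexOf(cb); if (i >= 0) all.splice(i, 1); };
    },
    // Like app.subscribe(eventNameSpace, callback): hear one action type.
    subscribeTo(type: string, cb: Handler): () => void {
      const list = byType.get(type) ?? [];
      list.push(cb);
      byType.set(type, list);
      return () => { const i = list.indexOf(cb); if (i >= 0) list.splice(i, 1); };
    },
    // Like app.dispatch(action): notify global and per-type subscribers.
    dispatch(type: string, data: unknown): void {
      all.forEach(cb => cb(data));
      (byType.get(type) ?? []).forEach(cb => cb(data));
    },
  };
}

const bus = createBus();
const seen: unknown[] = [];
const unsub = bus.subscribeTo('MODAL::OPEN', d => seen.push(d));
bus.dispatch('MODAL::OPEN', { id: 1 });
unsub();                                // after unsubscribing...
bus.dispatch('MODAL::OPEN', { id: 2 }); // ...this is no longer delivered
```

Returning the unsubscribe function from the subscribe call, rather than exposing a separate removeListener(type, cb) method, means the caller never has to keep the original callback reference around.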
https://shopify.dev/tools/app-bridge/methods
CC-MAIN-2020-50
en
refinedweb
I would like to know: suppose I have inherited a class A in class B, how can I call the constructor of the base class (class A) in C#?

Yes, you can call the base class constructor from a derived class in C#. In an inheritance hierarchy, the base class constructor is always called first. In C#, the base keyword is used to access the base class constructor, as shown below. We have used the ':base(...)' keyword after the constructor declaration with a specific parameter list.

Take a look at the example below:

    using System;

    public class Person
    {
        protected string ssn = "111-222-333";
        protected string name = "John Wick";

        public virtual void GetInfo()
        {
            Console.WriteLine("Name: {0}", name);
            Console.WriteLine("SSN: {0}", ssn);
        }
    }

    public class Employee : Person
    {
        public string id = "ABC123";

        public override void GetInfo()
        {
            // Calling the base class GetInfo method:
            base.GetInfo();
            Console.WriteLine("Employee ID: {0}", id);
        }
    }

    public class TestClass
    {
        public static void Main()
        {
            Employee E = new Employee();
            E.GetInfo();
        }
    }

In the above example, both the base class, Person, and the derived class, Employee, have a method named GetInfo. By using the base keyword, it is possible to call the GetInfo method of the base class from within the derived class.

Working fiddle:

Output of the above code will be:

    Name: John Wick
    SSN: 111-222-333
    Employee ID: ABC123

Here is another working example:

    using System;

    public class A
    {
        public A(int value)
        {
            Console.WriteLine("Base constructor A()");
        }
    }

    public class B : A
    {
        // ... B executes the base constructor.
        public B(int value) : base(value)
        {
            Console.WriteLine("Derived constructor B()");
        }
    }

    public class Test
    {
        public static void Main()
        {
            A a = new A(0);
            B b = new B(1);
        }
    }

Output:

    Base constructor A()
    Base constructor A()
    Derived constructor B()

Note: Class A and class B both introduce constructors. Class A is the parent or base class for class B, which is referred to as the derived class. In the above example, the constructor in class B calls into the constructor of class A using base initializer syntax. We specify that the base class constructor is called upon entry to the derived constructor.
In the B constructor, we use base initializer syntax. The compiler inserts the constructor call at the start of the method body. If you want to learn more about C# inheritance, read this tutorial; it explains inheritance in C# in more detail.
https://qawithexperts.com/questions/285/how-can-i-call-base-class-constructor-from-derived-class-in
CC-MAIN-2020-50
en
refinedweb
Opened 21 months ago Closed 21 months ago #27352 closed defect (fixed) Add checks for matrix multiplication Description It seems the compatibility of dimensions is not always tested when multiplying matrices. This ticket is opened following the report at In particular, multiplying two 3 by 2 matrices should fail, but: sage: A = matrix(QQ, [[1, 2], [-1, 0], [1, 1]]) sage: B = matrix(QQ, [[0, 4], [1, -1], [1, 2]]) sage: A [ 1 2] [-1 0] [ 1 1] sage: B [ 0 4] [ 1 -1] [ 1 2] sage: A*B [ 1 0] [ 1 -2] [ 1 3] In this case A * B amounts to A.__mul__(B) which ends up calling A._multiply_flint(B). For matrices over the integers, the multiplication above fails as it should: sage: A = matrix(ZZ, [[1, 2], [-1, 0], [1, 1]]) sage: B = matrix(ZZ, [[0, 4], [1, -1], [1, 2]]) sage: A * B Traceback (most recent call last) ... IndexError: Number of columns of self must equal number of rows of right. but we get a similar surprise by calling sage: A._multiply_linbox(B) [ 2 2] [ 0 -4] [ 1 3] In case _multiply_linbox and _multiply_flint skip dimension tests for speed, tests should be performed before calling them, which seems to be the case for _multiply_linbox but not _multiply_flint. Change History (16) comment:1 Changed 21 months ago by comment:2 Changed 21 months ago by Those random answers are likely because upstream uses whatever is in that next spot in memory. I think this also has a chance of causing a segfault or destroying some other computation for the same reason. comment:3 Changed 21 months ago by Regarding the ticket description, is _multiply_linbox used anywhere? _multiply_flint is funny because it has a doctest sage: matrix(QQ, 2, 3) * matrix(QQ, 4, 5) Traceback (most recent call last): ... TypeError: unsupported operand parent(s) for *: 'Full MatrixSpace of 2 by 3 dense matrices over Rational Field' and 'Full MatrixSpace of 4 by 5 dense matrices over Rational Field' but I think if the two matrices have the same parent, then the shapes aren't checked. 
comment:4 follow-up: ↓ 5 Changed 21 months ago by

Can we just do something like this?

- src/sage/structure/element.pyx

diff --git a/src/sage/structure/element.pyx b/src/sage/structure/element.pyx index 1b167b7ab4..eef1798d54 100644

(with an appropriate message for the ValueError)? I don't know the element.pyx code well. Or should the change be done in _matrix_times_matrix in matrix_rational_dense.pyx?

comment:5 in reply to: ↑ 4 ; follow-up: ↓ 6 Changed 21 months ago by

Replying to jhpalmieri:

Can we just do something like this?

- src/sage/structure/element.pyx

diff --git a/src/sage/structure/element.pyx b/src/sage/structure/element.pyx index 1b167b7ab4..eef1798d54 100644

(with an appropriate message for the ValueError)? I don't know the element.pyx code well. Or should the change be done in _matrix_times_matrix in matrix_rational_dense.pyx?

Yes but use self._ncols (Py_ssize_t attribute) instead of ncols() (Python function). Note that the coercion model raises TypeError

sage: Zmod(3).an_element() * Zmod(2).an_element()

and I think it is better to follow this convention.

comment:6 in reply to: ↑ 5 Changed 21 months ago by

Replying to vdelecroix:

Replying to jhpalmieri:

Yes but use self._ncols (Py_ssize_t attribute) instead of ncols() (Python function).

This won't work, use (<Matrix> left)._nrows != (<Matrix> left)._ncols (note that since left and right have the same parent we want them to be square matrices)

comment:7 Changed 21 months ago by - Branch set to u/jhpalmieri/matrix-mult

comment:8 Changed 21 months ago by - Commit set to 4a59d8e5d817894193b8f9ebe1ac5168d5954536 - Status changed from new to needs_review

Here's a first draft. Feel free to modify it.
New commits: comment:9 Changed 21 months ago by - Reviewers set to Vincent Delecroix - Status changed from needs_review to needs_work `trac`:27352: should be :trac:`27352` comment:10 Changed 21 months ago by - Commit changed from 4a59d8e5d817894193b8f9ebe1ac5168d5954536 to c9c2109d2c2647facf1781d40e416e91cf54b9a7 Branch pushed to git repo; I updated commit sha1. This was a forced push. New commits: comment:11 Changed 21 months ago by Fixed. (I built the reference manual, but of course this doesn't appear in the reference manual, so I didn't catch it.) comment:12 Changed 21 months ago by - Status changed from needs_work to positive_review Indeed... comment:13 Changed 21 months ago by Usually, no space between the type declaration <Matrix> and the name left or right. See other uses elsewhere in the source code as found by search_src("<matrix>"): matrix/action.pyx:248: cdef Matrix A = <Matrix>g matrix/action.pyx:249: cdef Matrix B = <Matrix>s matrix/action.pyx:305: cdef Matrix A = <Matrix>g matrix/action.pyx:356: cdef Matrix A = <Matrix>g matrix/action.pyx:367: return (<Matrix>A)._vector_times_matrix_(v) # v * A matrix/args.pyx:652: M = <Matrix>self.entries matrix/args.pyx:880: m = <Matrix>self.entries matrix/matrix1.pyx:1461: other = <Matrix>bottom matrix/matrix1.pyx:1503: cdef Matrix other = <Matrix>bottom matrix/matrix2.pyx:790: return (<Matrix>A)._elementwise_product(B) matrix/matrix2.pyx:2450: cdef Matrix M = <Matrix> self matrix/matrix2.pyx:2478: cdef Matrix a = <Matrix> matrix(R, n-1, n) matroids/lean_matrix.pyx:1033: if int((<Matrix>M).get_unsafe(i, j)) & 1: matroids/lean_matrix.pyx:1655: s = int((<Matrix>M).get_unsafe(i, j)) % 3 matroids/lean_matrix.pyx:2208: self._gf4 = (<Matrix>M).base_ring() matroids/lean_matrix.pyx:2215: self.set(i, j, (<Matrix>M).get_unsafe(i, j)) structure/element.pyx:3679: return (<Matrix>left)._matrix_times_matrix_(<Matrix>right) Not sure if it's still time to change this here or it should happen in a follow-up ticket or we don't 
care. comment:14 Changed 21 months ago by - Commit changed from c9c2109d2c2647facf1781d40e416e91cf54b9a7 to a2bac95e56e6855f3393038c04c2eb537929dc30 - Status changed from positive_review to needs_review Branch pushed to git repo; I updated commit sha1 and set ticket back to needs_review. This was a forced push. New commits: comment:15 Changed 21 months ago by - Status changed from needs_review to positive_review I don't care much, but it's easy to change. If Volker complains, we will know it was too late. comment:16 Changed 21 months ago by - Branch changed from u/jhpalmieri/matrix-mult to a2bac95e56e6855f3393038c04c2eb537929dc30 - Resolution set to fixed - Status changed from positive_review to closed With 8.7.beta5, I actually get random (?) answers:
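Outside of Sage, the guard discussed in comments 4–6 boils down to comparing the left operand's column count against the right operand's row count before doing any arithmetic. A plain-Python sketch of that check (illustrative only; Sage's actual fix lives in Cython in element.pyx):

```python
# Plain-Python sketch of the shape check discussed in this ticket:
# refuse to multiply matrices whose inner dimensions disagree, instead
# of silently reading garbage. Not Sage's actual implementation.
def mat_mul(a, b):
    """Multiply row-major matrices a (m x n) and b (p x q); require n == p."""
    m, n = len(a), len(a[0])
    p, q = len(b), len(b[0])
    if n != p:  # the guard that runs before any arithmetic
        raise TypeError(
            f"unsupported operand shapes for *: {m}x{n} and {p}x{q}"
        )
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(q)]
            for i in range(m)]
```

With this guard in place, the ticket's failing example — multiplying two 3x2 matrices — raises TypeError instead of returning a meaningless result.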
https://trac.sagemath.org/ticket/27352
CC-MAIN-2020-50
en
refinedweb
The “final” Keyword in Java Last modified: May 3, 2020 1. Overview While inheritance enables us to reuse existing code, sometimes we do need to set limitations on extensibility for various reasons; the final keyword allows us to do exactly that. In this tutorial, we'll take a look at what the final keyword means for classes, methods, and variables. 2. Final Classes Classes marked as final can’t be extended. If we look at the code of Java core libraries, we’ll find many final classes there. One example is the String class. Consider the situation if we can extend the String class, override any of its methods, and substitute all the String instances with the instances of our specific String subclass. The result of the operations over String objects will then become unpredictable. And given that the String class is used everywhere, it’s unacceptable. That’s why the String class is marked as final. Any attempt to inherit from a final class will cause a compiler error. To demonstrate this, let’s create the final class Cat: public final class Cat { private int weight; // standard getter and setter } And let’s try to extend it: public class BlackCat extends Cat { } We’ll see the compiler error: The type BlackCat cannot subclass the final class Cat Note that the final keyword in a class declaration doesn’t mean that the objects of this class are immutable. We can change the fields of Cat object freely: Cat cat = new Cat(); cat.setWeight(1); assertEquals(1, cat.getWeight()); We just can’t extend it. If we follow the rules of good design strictly, we should create and document a class carefully or declare it final for safety reasons. However, we should use caution when creating final classes. Notice that making a class final means that no other programmer can improve it. Imagine that we're using a class and don’t have the source code for it, and there's a problem with one method. If the class is final, we can’t extend it to override the method and fix the problem. 
In other words, we lose extensibility, one of the benefits of object-oriented programming. 3. Final Methods Methods marked as final cannot be overridden. When we design a class and feel that a method shouldn’t be overridden, we can make this method final. We can also find many final methods in Java core libraries. Sometimes we don’t need to prohibit a class extension entirely, but only prevent overriding of some methods. A good example of this is the Thread class. It’s legal to extend it and thus create a custom thread class. But its isAlive() method is final. This method checks if a thread is alive. It’s impossible to override the isAlive() method correctly for many reasons. One of them is that this method is native. Native code is implemented in another programming language and is often specific to the operating system and hardware it's running on. Let’s create a Dog class and make its sound() method final: public class Dog { public final void sound() { // ... } } Now let’s extend the Dog class and try to override its sound() method: public class BlackDog extends Dog { public void sound() { } } We’ll see the compiler error: Cannot override the final method from Dog. The sound() method is final and can’t be overridden. If some methods of our class are called by other methods, we should consider making the called methods final. Otherwise, overriding them can affect the work of callers and cause surprising results. If our constructor calls other methods, we should generally declare these methods final for the above reason. What’s the difference between making all methods of the class final and marking the class itself final? In the first case, we can extend the class and add new methods to it. In the second case, we can’t do this.
Final Primitive Variables Let’s declare a primitive final variable i, then assign 1 to it. And let’s try to assign a value of 2 to it: public void whenFinalVariableAssign_thenOnlyOnce() { final int i = 1; //... i=2; } The compiler says: The final local variable i may already have been assigned 4.2. Final Reference Variables If we have a final reference variable, we can’t reassign it either. But this doesn’t mean that the object it refers to is immutable. We can change the properties of this object freely. To demonstrate this, let’s declare the final reference variable cat and initialize it: final Cat cat = new Cat(); If we try to reassign it we’ll see a compiler error: The final local variable cat cannot be assigned. It must be blank and not using a compound assignment But we can change the properties of Cat instance: cat.setWeight(5); assertEquals(5, cat.getWeight()); 4.3. Final Fields Final fields can be either constants or write-once fields. To distinguish them, we should ask a question — would we include this field if we were to serialize the object? If no, then it’s not part of the object, but a constant. Note that according to naming conventions, class constants should be uppercase, with components separated by underscore (“_”) characters: static final int MAX_WIDTH = 999; Note that any final field must be initialized before the constructor completes. For static final fields, this means that we can initialize them: - upon declaration as shown in the above example - in the static initializer block For instance final fields, this means that we can initialize them: - upon declaration - in the instance initializer block - in the constructor Otherwise, the compiler will give us an error. 4.4. Final Arguments The final keyword is also legal to put before method arguments. 
A final argument can’t be changed inside a method: public void methodWithFinalArguments(final int x) { x=1; } The above assignment causes the compiler error: The final local variable x cannot be assigned. It must be blank and not using a compound assignment 5. Conclusion In this article, we learned what the final keyword means for classes, methods, and variables. Although we may not use the final keyword often in our internal code, it may be a good design solution. As always, the complete code for this article can be found in the GitHub project. is it a good practice to use the final keyword as much as possible (mainly for fields and arguments) ? Generally yes, but sometimes the overhead of having so many `final`s scattered around your code might be not worth the struggle. This is why variables in languages like Scala are implicitly final
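The initialization rules for final fields listed in section 4.3 — upon declaration, in an initializer block, or in the constructor — can all be seen together in one short sketch (the class name is ours, chosen for illustration):

```java
// Sketch of the final-field initialization rules from section 4.3:
// a static final constant, a final field set in the instance
// initializer block, and a final field set in the constructor.
class FinalFieldsDemo {

    // Constant: initialized upon declaration, uppercase by convention.
    static final int MAX_WIDTH = 999;

    // Write-once field: initialized in the instance initializer block.
    final int createdFlag;
    { createdFlag = 1; }

    // Write-once field: initialized in the constructor.
    final int width;

    FinalFieldsDemo(int width) {
        this.width = width; // must happen before the constructor completes
    }

    public static void main(String[] args) {
        FinalFieldsDemo d = new FinalFieldsDemo(10);
        System.out.println(MAX_WIDTH + " " + d.createdFlag + " " + d.width);
        // prints "999 1 10"
    }
}
```

If any of the three assignments is removed without the field also being initialized elsewhere, the compiler rejects the class with a "variable might not have been initialized" error, which is exactly the definite-assignment guarantee final provides.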
https://www.baeldung.com/java-final
CC-MAIN-2020-50
en
refinedweb
3D Arrays in C language – How to declare, initialize and access elements

In our previous tutorials we have discussed that C programming allows multiple dimensions in arrays, like 1D arrays and 2D arrays. Similarly, we can have three or more dimensions too. A 3D array is like a three-dimensional figure, e.g. a cube or a cuboid. A 3D array has rows and columns like a 2D array, and another dimension which contains sets of these 2D rows and columns. The following figure illustrates the three dimensions of a 3D array:

In this figure, we have an integer array. We have sets of numbers present in the rows and columns of this array. Like a two-dimensional array, the representation of dimensions is done as 3x3x3, where the numbers are: number of row-and-column sets * number of rows * number of columns. We can have different values of all the dimensions, like 2x3x2 or 4x3x3.

To have a better understanding of how the data is stored in the memory for a 3D array, have a look at the explained figure of the array mentioned above:

The above figure shows memory mapping of a 3D array. The consecutive memory addresses differ by 2 because the size of the int data type is 2 bytes on the system used for the figure (on most modern systems an int is 4 bytes, so consecutive addresses would differ by 4). The above memory address has been taken randomly. In reality, the processor can store the array in other memory locations too. We also see that the numbers are stored in a linear fashion in the given array. In the upcoming sections, we shall learn more about the declaration, accessing and displaying the data in a 3D array.

Declaration, Initialization and storing data

The declaration of a 3D array takes place in a very similar manner to that of any other array, like a 1D or 2D array. A datatype is given to the array to specify how the data in the array should be interpreted. Here is the syntax for declaration of an array:

    int arr[3][3][3];

The int specifies that the data stored in the array will be of integer type. arr is the variable name under which all the data is stored.
The first [3] refers to the sets of rows and columns of the array, the second [3] refers to the number of rows of the inner array, and the third [3] refers to the number of columns of the inner array. This is also static memory allocation, that is, we are allocating the array a size equal to 3x3x3; the array can store 3x3x3 = 27 elements. The three [][][] brackets specify that the array is three-dimensional.

Initialization of a 3D array needs to be done as follows:

    //the following layout is used for proper readability
    int arr[3][3][3] = {{ //set 1
                         {4,10,6},
                         {17,0,12},
                         {5,56,13} },
                       { //set 2
                         {10,23,15},
                         {2,5,9},
                         {1,16,20} },
                       { //set 3
                         {5,16,0},
                         {4,35,19},
                         {8,13,2}}};

Here we have initialized an integer array. An array of any type can be initialized using this format. We already know that data can be stored in an array using for loops. Since we have a 3D array, we use 3 for loops for that purpose. Here is the syntax for storing data in a 3D array:

    for(i=0; i < 3; i++){
        for(j=0; j < 3; j++){
            for(k=0; k < 3; k++){
                scanf("%d", &arr[i][j][k]);
            }
        }
    }

Note that if we do not initialize the array or store data in it by user input, it will result in an output of garbage values.

Accessing and Reading an array

We are already familiar with the concept of accessing the array using subscripts or index numbers in 2D arrays. Accessing a 3D array is done in a very similar manner. The data present in row 2, column 1 and the 2nd set is referred to as:

    arr[2][1][2];

For the purpose of reading all the elements of the array, we use three nested for loops.
This syntax is given below:

    for(i=0; i < 3; i++){          //the outer loop is for the sets of rows and columns
        for(j=0; j < 3; j++){      //this middle loop is for the rows
            for(k=0; k < 3; k++){  //this inner loop is for the columns
                printf("%d ", arr[i][j][k]);
            }
            printf("\n");
        }
        printf("\n");
    }

Program to initialize a 3D array with user input, process it and print it

Here is a simple 3D array program which adds two arrays and stores the result in another array. One array has already been initialized and the other one will have data input by the user.

    #include <stdio.h>

    int main()
    {
        //the first array has been initialized.
        int a[3][3][3] = {{ //set 1
                           {4,10,6},
                           {17,0,12},
                           {5,56,13} },
                         { //set 2
                           {10,23,15},
                           {2,5,9},
                           {1,16,20} },
                         { //set 3
                           {5,16,0},
                           {4,35,19},
                           {8,13,2}}};
        //second array shall have user input values and third array stores the sum of the other two
        int b[3][3][3], c[3][3][3];
        int i, j, k;

        //input elements into the second array
        printf("Enter the elements in the array:\n");
        for(i=0; i < 3; i++){
            for(j=0; j < 3; j++){
                for(k=0; k < 3; k++){
                    scanf("%d", &b[i][j][k]);
                    c[i][j][k] = a[i][j][k] + b[i][j][k]; //summing up the two arrays
                }
            }
        }

        printf("The sum of the two arrays:\n");
        for(i=0; i < 3; i++){          //the outer loop is for the sets of rows and columns
            for(j=0; j < 3; j++){      //this middle loop is for the rows
                for(k=0; k < 3; k++){  //this inner loop is for the columns
                    printf("%d ", c[i][j][k]);
                }
                printf("\n");
            }
            printf("\n");
        }
        return 0;
    }
https://www.codingeek.com/tutorials/c-programming/3d-arrays-in-c-language-how-to-declare-initialize-and-access-elements/
CC-MAIN-2020-50
en
refinedweb
curl_mime_type - set a mime part content type

NAME

curl_mime_type - set a mime part's content type

SYNOPSIS

#include <curl/curl.h>

CURLcode curl_mime_type(curl_mimepart *part, const char *mimetype);

DESCRIPTION

curl_mime_type() sets a mime part's content type. part is the mime part's handle, and mimetype points to a null-terminated string naming the content type. The string is copied into the part, and mimetype may be set to NULL to remove a previously attached mime type.

This HTML page was made with roffit.
https://curl.se/libcurl/c/curl_mime_type.html
CC-MAIN-2020-50
en
refinedweb
Created on 2019-02-28 04:18 by brandtbucher, last changed 2020-07-07 05:39 by Raps Uk. This issue is now closed.

...as discussed in python-ideas. Semantically:

d1 + d2 <-> d3 = d1.copy(); d3.update(d2); d3
d1 += d2 <-> d1.update(d2)

Attached is a working implementation with new/fixed tests for consideration. I've also updated collections.UserDict with the new __add__/__radd__/__iadd__ methods.

I believe that Guido rejected this when it was proposed a few years ago. Python ideas discussion in 2015 : LWN summary :

I believe it was proposed and rejected multiple times. For the record, I'm opposed to the idea.

* Use of the + operator is a temptation to produce new dictionaries rather than update an existing dict in-place which is usually what you want.
* We already have ChainMap() which presents a single view of multiple mappings without any copying.
* It is natural to expect the plus operator to be commutative, but this operation would necessarily be non-commutative.
* Many other APIs are modeled on the dict API, so we should not grow the API unless there is a big win. The effects would be pervasive.
* I don't see other languages going down this path, nor am I seeing dict subclasses that implement this functionality. Those are indications that this is more of a "fun thing we could do" rather than a "thing that people need".
* The existing code already reads nicely: options.update(user_selections) That reads more like self-explanatory English than: options += user_selections The latter takes more effort to correctly parse and makes it less clear that you're working with dicts.
* It isn't self-evident that the right operand needs to be another dictionary. If a person is trying to "add a key / value pair" to an existing dictionary, the "+=" operator would be tempting but it wouldn't work.
In Python, the plus operator for sequences (strings, lists, tuples) is non-commutative. But I have other arguments against it: * It conflicts with the plus operator of Counter (which is a specialized dict): Counter(a=2) + Counter(a=3) == Counter(a=5), but the proposed idea makes dict(a=2) + dict(a=3) == dict(a=3). * We already have a syntax for dict merging: {**d1, **d2}. It works with arbitrary mappings, in contrary to the plus operator, which needs a special support in argument types. > In Python, the plus operator for sequences (strings, lists, > tuples) is non-commutative. For sequences, that is obvious and expected, but not so much with mappings where the order of overlapping keys is determined by the left operand and the value associated with those keys is determined by the right operand. Also with sequences the + operator actually means "add to", but with dictionaries it means "add/or replace" which is contrary to the normal meaning of plus. I think that was one of Guido's reasons for favoring "|" instead of "+" for set-to-set operations. > We already have a syntax for dict merging: {**d1, **d2}. > It works with arbitrary mappings, This is a good point. > We already have a syntax for dict merging: {**d1, **d2}. Which doesn't mean that "d1 + d2" isn't much more intuitive than this special-character heavy version. It takes me a while to see the dict merge under that heap of stars. And that's already the shortest example. > It works with arbitrary mappings, The RHS of "d += M" doesn't have to be a dict IMHO, it could be any mapping. And even "dict(X) + M" doesn't look all too bad to me, even though there's "dict(X, **M)". > Use of the + operator is a temptation to produce new dictionaries rather than update an existing dict in-place which is usually what you want. That's why there would be support for "+=". The exact same argument already fails for lists, where concatenation is usually much more performance critical than for the average little dict. 
(And remember that most code isn't performance critical at all.) > We already have ChainMap() which presents a single view of multiple mappings with any copying. Which is a different use case that is unlikely to go away with this proposal. > makes it less clear that you're working with dicts. This is a valid argument, although it always depends on the concrete code what the most readable way to express its intentions is. Again, this doesn't really differ for lists. Let's wait for the PEP, I'd say. scoder: dict(X, **M) is broken unless M is known to be string keyed (it used to work, but in Python 3, it will raise a TypeError). It's part of the argument for the additional unpacking generalizations from PEP 448; {**X, **M} does what dict(X, **M) is trying to do, but without abusing the keyword argument passing convention. You also claim "It takes me a while to see the dict merge under that heap of stars", but that's at least as much about the newness of PEP 448 (and for many Python coders, a complete lack of familiarity with the pre-existing varargs unpacking rules for functions) as it is about the punctuation; after all, you clearly recognize dict(X, **M) even though it's been wrong in most contexts for years. In any event, I'm a strong -1 on this, for largely the same reasons as Raymond and others: 1. It doesn't provide any new functionality, just one more way to do it; += is satisfied by .update, + is satisfied (more generally and efficiently) by the unpacking generalizations 2. It's needlessly confusing; addition is, for all existing types in the standard library I can think of, lossless; the information from both sides of the + is preserved in some form, either by addition or concatenation (and in the concatenation case, addition is happening, just to the length of the resulting sequence, and order is preserved). 
Addition for dictionaries would introduce new rules specific to dicts that do not exist for any other type regarding loss of values, non-additive resulting length, etc. Those rules would likely be similar to those of dict literals and the update method, but they'd need to be made explicit. By contrast, the PEP 448 unpacking generalization rules followed the existing rules for dict literals; no special rules occur, it just behaves intuitively (if you already knew the rules for dict literals without unpacking being involved).

3. Almost any generic, duck-typing-based code for which addition makes sense will not make sense for dicts, simply because it loosens the definition of addition too much to be useful; so best case, it still raises TypeError (when dicts are added to non-dict things), and worst case, it silently operates in a way that violates the rules of both addition and concatenation rather than raising a TypeError that the generic code could use to determine the correct thing to do.
4. The already-mentioned conflict with Counter (which already has an addition operator, with lossless semantics).
5. (Minor) It means PyDict_Type needs a non-NULL tp_as_number, so now it's slightly slower to reject dicts as being non-numeric at the C layer.

Problem #2 could be used to argue for allowing | instead of + (which would also resolve #4, and parts of #3), since | is already used for unioning with sets, and this operation is much closer to a union operation than addition or concatenation. Even so, it would still be misleading; at least with sets, there is no associated value, so it's still mostly lossless (you lose the input lengths, but the unique input data is kept); with dicts, you'd be losing values too. Basically, I think the PEP 448 unpacking syntax should remain as the "one-- and preferably only one --obvious way to" combine dictionaries as a one-liner.
It's more composable, since it allows adding arbitrary additional key/value pairs, and more efficient, since it allows combining more than two dicts at once with no additional temporaries: dicta + dictb + dictc requires "dictab" to be made first, then thrown away after dictab + dictc produces dictabc, while {**dicta, **dictb, **dictc} builds dictabc directly.

The only real argument I can see for not sticking to unpacking is that it doesn't allow arbitrary dict-like things to produce new dict-like things directly; you'd have to rewrap as myspecialdict({**speciala, **specialb}). But I don't think that's a flaw worth fixing if it means major changes to the behavior of what I'm guessing is one of the three most commonly used types in Python (along with int and tuple, thanks to the integration of dicts into so many facets of the implementation).

I changed my mind and am now in favor. Most of the arguments against could also be used against list+list. Counter addition is actually a nice special case of this -- it produces the same keys but has a more sophisticated way of merging values for common keys. Please read the python-ideas thread!

Also note: the python-ideas thread that xtreak linked largely rejected the proposal a couple of weeks before PEP 448 was approved. At the time, the proposal wasn't just about +/+=; that was the initial proposal, but operator overloading was heavily criticized for the failure to adhere to either addition or concatenation semantics, so alternate constructors and top-level functions similar to sorted were proposed as alternatives (e.g. merged(dicta, dictb)). The whole thread ended up being about creating an approved, built-in way of one-lining:

d3 = d1.copy(); d3.update(d2)

A key quote, though, is that this was needed because there was no other option without rolling your own merged function.
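The copy/update two-step being one-lined, and the kind of hand-rolled merged helper the thread alludes to, can be sketched as follows (the helper's name and signature are illustrative, not from any stdlib):

```python
d1 = {'a': 1, 'b': 2}
d2 = {'b': 3, 'c': 4}

# the two-step idiom everyone is trying to one-line
d3 = d1.copy()
d3.update(d2)

# a hand-rolled `merged` helper, as floated on python-ideas;
# accepts any number of mappings, later ones winning collisions
def merged(*mappings):
    out = {}
    for m in mappings:
        out.update(m)
    return out

# and the PEP 448 one-liner, which also composes with extra pairs
# and merges any number of dicts with no intermediate temporaries
d3_alt = {**d1, **d2}
```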
Andrew Barnert summarized it best:

"I'm +1 on constructor, +0.5 on a function (whether it's called updated or merged, whether it's in builtins or collections), +0.5 on both constructor and function, -0.5 on a method, and -1 on an operator.

"Unless someone is seriously championing PEP 448 for 3.5, in which case I'm -0.5 on anything, because it looks like PEP 448 would already give us one obvious way to do it, and none of the alternatives are sufficiently nicer than that way to be worth having another."

As it happens, PEP 448 was put in 3.5, and we got the one obvious way to do it.

Side-note: It occurs to me there will be one more "way to do it" in 3.8 already, thanks to PEP 572: (d3 := d1.copy()).update(d2). I think I'll stick with d3 = {**d1, **d2} though. :-)

Current python-ideas thread for the issue:

If we're going to forget about commutativity of +, should we also implement +/+= for sets?

> should we also implement +/+= for sets?

The question is: what would that do? The same as '|='? That would be rather confusing, I think. "|" (meaning: "or") seems a very natural operation for sets, in the same way that "|" operates on bits in integers. That suggests that "|" is the right operator for sets. In any case, this is an unrelated proposal that is better not discussed in this ticket. The only link is whether "|" is the more appropriate operator also for dicts, which is to be discussed in the PEP and thus also not in this ticket.

Is this issue directly or indirectly related to the PEP 584 "Add + and - operators to the built-in dict class"?

> Is this issue directly or indirectly related to the PEP 584 "Add + and - operators to the built-in dict class"?

Ah yes, it's written in the title of the PR. I add it to the bug title as well.

Another obvious way to do it, but I'm +1 on it. A small side point, however - PEP 584 reads:

> [2] Non-string keys: ...

The references cited do not back this assertion up.
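The non-string-key point raised here is easy to demonstrate on Python 3: PEP 448 unpacking accepts any hashable keys, while the old dict(d1, **d2) hack routes the second dict through keyword-argument passing and therefore rejects them:

```python
d1 = {1: 'one'}
d2 = {2: 'two'}

# PEP 448 unpacking accepts any hashable keys
merged = {**d1, **d2}

# dict(d1, **d2) abuses keyword-argument passing, so on
# Python 3 non-string keys raise TypeError
try:
    dict(d1, **d2)
    hack_rejected = False
except TypeError:
    hack_rejected = True

print(merged)          # {1: 'one', 2: 'two'}
print(hack_rejected)   # True
```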
Perhaps the intent is to reference the "cool/weird hack" dict(d1, **d2), which allowed any hashable keys in Python 2 but only strings in Python 3. If I see {**d1, **d2}, my expectation is that this is the new generalized unpacking, and I currently expect any keys to be allowed; the PEP should be updated to accurately reflect this to prevent future misunderstandings.

PEP 584 has been approved by the Steering Council (at my recommendation). We will shortly begin landing PRs related to this.

New changeset eb8ac57af26c4eb96a8230eba7492ce5ceef7886 by Brandt Bucher in branch 'master': bpo-36144: Dictionary Union (PEP 584) (#12088)

While the main code has been merged now, I propose to keep this issue open until some other things have happened:
- Documentation
- Add | operators to some dict subclasses in the stdlib
- What else?

Yup, great plan.

On Mon, Feb 24, 2020 at 22:29 Brandt Bucher <report@bugs.python.org> wrote:

--Guido (mobile)

Not sure if this is a big deal or not, and it seems likely that the preexisting behaviour of .update() and ** unpacking have already decided it, but is it intentional that you end up with the first-seen key and the last-seen value in the case of collisions?
class C:
    def __init__(self, *a):
        self.a = a
    def __hash__(self):
        return hash(self.a[0])
    def __eq__(self, o):
        return self.a[0] == o.a[0]
    def __repr__(self):
        return f"C{self.a}"

>>> c1 = C(1, 1); c1
C(1, 1)
>>> c2 = C(1, 2); c2
C(1, 2)

For set union we get the first-seen value:

>>> {c1} | {c2}
{C(1, 1)}

For dict union we get the first-seen key and the last-seen value:

>>> {c1: 'a'} | {c2: 'b'}
{C(1, 1): 'b'}

But similarly for dict unpack (and .update(); code left as an exercise to the reader):

>>> {**{c1: 'a'}, **{c2: 'b'}}
{C(1, 1): 'b'}

So the union of two dicts may contain .items() elements that were not in either of the inputs. Honestly, I've never noticed this before, as the only time I create equivalent objects with meaningfully distinct identities is to use with sets. I just figured I'd try it out after seeing suggestions that the dict union operands were transposed from set union.

As a somewhat simpler example:

>>> f = {False: False}
>>> z = {0: 0}
>>> f | z
{False: 0}
>>> {**f, **z}
{False: 0}
>>> f.update(z); f
{False: 0}

Though these hairier cases aren't explicitly addressed, the conflict behavior is covered in the Rationale and Reference Implementation sections of the PEP. All of the above examples share code (`dict_update_arg`), and that's definitely intentional. I for one think it would be confusing (and probably a bug) if one of the examples above gave a different key-value pair! I find it makes more sense if you see a set as valueless keys (rather than keyless values).

That's a much simpler example. And of course:

>>> z[False] = False
>>> z
{0: False}

So the precedent is well established that the key doesn't get updated with the value. No further questions, yer honour ;)

New changeset d0ca9bd93bb9d8d4aa9bbe939ca7fd54ac870c8f by Brandt Bucher in branch 'master': bpo-36144: Document PEP 584 (GH-18659)

@Brandt: you have some more followup PRs planned, right? Let's keep this issue open until you've done all of those.

Yep.
I'm currently working on OrderedDict, defaultdict, and MappingProxyType. My brother is looking to make his first contribution, so he'll be taking care of ChainMap.

What is ChainMap going to do? Normally, the left-most argument to ChainMap is the "top level" dict, but in a regular union scenario, the last value wins. Seems like layering the right-hand side's dict on top of the left-hand side's would match dict union semantics best, but it feels... wrong, given ChainMap's normal left-to-right precedence. And top-mostness affects which dict receives all writes, so if chain1 |= chain2 operates with dict-like precedence (chain2 layers over chain1), then that also means the target of writes/deletions/etc. changes to what was on top in chain2.

The plan is to follow dict's semantics. The |= operator will basically delegate to the first map in the chain. The | operator will create a new ChainMap where the first map is the merged result of the old first map, and the others are the same. So, basically update / copy-and-update, respectively.

Sorry, I think I need examples to grok this in the general case. ChainMap unioned with dict makes sense to me (it's equivalent to update or copy-and-update on the top-level dict in the ChainMap). But ChainMap unioned with another ChainMap is less clear. Could you give examples of what the expected end result is for:

d1 = {'a': 1, 'b': 2}
d2 = {'b': 3, 'c': 4}
d3 = {'a': 5, 'd': 6}
d4 = {'d': 7, 'e': 8}
cm1 = ChainMap(d1, d2)
cm2 = ChainMap(d3, d4)

followed by either:

cm3 = cm1 | cm2

or

cm1 |= cm2

? As in, what is the precise state of the ChainMap cm3 or the mutated cm1, referencing d1, d2, d3 and d4 when they are still incorporated by reference in the chain?
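For reference, the ChainMap precedence being debated above works like this: lookups take the left-most map that has the key, and writes always land in the first map:

```python
from collections import ChainMap

d1 = {'a': 1, 'b': 2}
d2 = {'b': 3, 'c': 4}
cm = ChainMap(d1, d2)

assert cm['b'] == 2        # left-most map wins on overlap
assert cm['c'] == 4        # falls through to later maps

cm['c'] = 99               # writes always go to maps[0]
assert d1 == {'a': 1, 'b': 2, 'c': 99}
assert d2 == {'b': 3, 'c': 4}   # later maps are untouched
```

This is exactly the opposite precedence from dict merging, where the right-hand operand's value wins, which is why the union semantics for ChainMap are contentious.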
My impression from what you said is that the plan would be for the updated cm1 to preserve references to d1 and d2 only, with the contents of cm2 (d3 and d4) effectively flattened and applied as an in-place update to d1, with an end result equivalent to having done:

cm1 = ChainMap(d1, d2)
d1 |= d4
d1 |= d3

(except the key ordering would actually follow d3 first, and d4 second), while cm3 would effectively be equivalent to having done (note ordering):

cm3 = ChainMap(d1 | d4 | d3, d2)

though again, key ordering would be based on d1, then d3, then d4, not quite matching the union behavior. And a reference to d2 would be preserved in the final result, but not any other original dict. Is that correct?

If so, it seems like it's wasting ChainMap's key feature (lazy accumulation of maps), where:

cm1 |= cm2

could be equivalent to either:

cm1.maps += cm2.maps

though that means cm1 wins overlaps, where normal union would have cm2 win; or, to hew closer to normal union behavior, make it equivalent to:

cm1.maps[:0] = cm2.maps

prepending all of cm2's maps to have the same duplicate-handling rules as regular dicts (right side wins), at the expense of changing which map cm1 uses as the target for writes and deletes. In either case it would hew to the spirit of ChainMap, making dict "union"-ing an essentially free operation, in exchange for increasing the costs of lookups that don't hit the top dict.

I think for `|=` the only choice is for it to be essentially an alias to `.update()`. So that means `cm |= other` becomes `cm.maps[0].update(other)`. For `|` we are breaking new ground, and we could indeed make `cm | other` do something like `ChainMap(other, *cm.maps)`. I've not used ChainMap much (though I've seen some code that uses it), so I'm probably not the best judge of whether this is a good feature to have. Note that `other | cm` will just do whatever `other.__or__` does, since ChainMap isn't a true subclass of dict, so it will not fall back to `cm.__ror__`.
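The claim that `cm |= other` can only mean `cm.maps[0].update(other)` follows from how ChainMap already writes; a quick check that the inherited update only ever touches the first map:

```python
from collections import ChainMap

d1 = {'a': 1}
d2 = {'b': 2}
cm = ChainMap(d1, d2)

cm.update({'b': 9, 'c': 3})   # inherited MutableMapping.update

# only the first map is mutated...
assert d1 == {'a': 1, 'b': 9, 'c': 3}
assert d2 == {'b': 2}

# ...and the new 'b' in maps[0] now shadows d2's value
assert cm['b'] == 9
```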
Basically ChainMap will not get control in this case. Other thoughts:

- Maybe `cm1 | cm2` (both ChainMaps) ought to return `ChainMap(*cm2.maps, *cm1.maps)`?
- These semantics make `|=` behave rather differently from `|`. Is that okay? If not, which of them should change, and how?

> I think for `|=` the only choice is for it to be essentially an alias to `.update()`. So that means `cm |= other` becomes `cm.maps[0].update(other)`.

Agreed.

> These semantics make `|=` behave rather differently from `|`. Is that okay? If not, which of them should change, and how?

I don't like this. Let me try to explain why: so far (and to the best of my knowledge), setting and updating values on a ChainMap works exactly the same as it does for dict, with all of the same semantics (the docs themselves even say that "all of the usual dictionary methods are supported"... which now could be interpreted as meaning | and |= as well). It's only when deleting or using the new interfaces that things get more specialized. But that doesn't really apply here. Having different (or worse, inconsistent) behavior for these operators, I feel, would be more confusing than helpful. Remember, a major goal of this proposal is to aid in duck typing.

So, Josh's understanding of my intended semantics is correct. I propose that, for:

d1 = {'a': 1, 'b': 2}
d2 = {'b': 3, 'c': 4}
d3 = {'a': 5, 'd': 6}
d4 = {'d': 7, 'e': 8}
cm1 = ChainMap(d1, d2)
cm2 = ChainMap(d3, d4)
cm3 = cm1 | cm2

cm3 gets a value of:

ChainMap(d1 | d4 | d3, d2)  # Or, equivalently: ChainMap(d1 | dict(cm2), d2)

And:

cm1 |= cm2

is equivalent to:

d1 |= cm2

I don't want to change which map is "first", and I think changing the winning behavior from that of dict will create more problems than it solves. We only need to look at how ChainMap handles the update method... it keeps the same exact behavior, rather than trying to be lazy or reversed or something.
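The proposed cm1 | cm2 result can be spelled out concretely with the same d1..d4 (this sketch assumes Python 3.9+ for dict's | operator; dict(cm2) flattens cm2 under ChainMap's usual left-most-wins rule):

```python
from collections import ChainMap

d1 = {'a': 1, 'b': 2}
d2 = {'b': 3, 'c': 4}
d3 = {'a': 5, 'd': 6}
d4 = {'d': 7, 'e': 8}

cm1 = ChainMap(d1, d2)
cm2 = ChainMap(d3, d4)

# flattening cm2 applies ChainMap's left-most-wins rule
assert dict(cm2) == {'a': 5, 'd': 6, 'e': 8}

# the proposed copy-and-update semantics for cm1 | cm2
cm3 = ChainMap(d1 | dict(cm2), d2)

assert cm3['a'] == 5   # cm2 wins the overlap, as with dict |
assert cm3['b'] == 2   # d1's value, still shadowing d2's b
assert cm3['c'] == 4   # still found lazily in d2
assert cm3['d'] == 6 and cm3['e'] == 8
assert cm1.maps == [d1, d2]   # cm1 itself is unchanged
```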
If we *are* deciding to do something different, then I think it should have no relationship to PEP 584, which reasons out a carefully considered merge operation for dict, not ChainMap. But it would also probably need a different operator, and be able to stand on its own merits.

I had just come to a different conclusion. Maybe ChainMap should just not grow `|` and `|=` operators? That way there can be no confusion. `dict() | ChainMap()` and `ChainMap() | dict()` will fail because ChainMap doesn't inherit from dict. (Note that in your last message, `d1 |= cm2` will fail for this reason. You can of course fix that with `d1 |= dict(cm2)`, although IIUC there's no reason one of the maps couldn't be some other [Mutable]Mapping.)

> Note that in your last message, `d1 |= cm2` will fail for this reason. You can of course fix that with `d1 |= dict(cm2)`, although IIUC there's no reason one of the maps couldn't be some other [Mutable]Mapping.

Mappings and iterables are fine for the in-place variant. :)

>>> from collections import ChainMap
>>> d = {}
>>> c = ChainMap({"r": 2, "d": 2})
>>> d |= c
>>> d
{'r': 2, 'd': 2}

I think it would be confusing to have `ChainMap | ChainMap` behave subtly differently than `dict | ChainMap`. It would be *especially* odd if it also differed subtly from `ChainMap | dict`. To recap:

+1 on adding the operators with dict semantics, +0 on no PEP 584 for ChainMap.
-0 on implementing them, but changing the winning behavior by concatenating the maps lists or something. This would probably make more sense to me as a `+` operator, honestly. :(
-1 for having the operators behave differently (other than performance shortcuts) for `cm | d`, `cm | cm`, `cm |= d`, `cm |= cm`.

OK, assuming `|=` gets the same semantics as update(), can you repeat once more (without motivation) what the specification for `cm | other` will be?
I believe that:

cm | other

should return the equivalent of:

ChainMap(cm.maps[0] | dict(other), *cm.maps[1:])

...however, I could also see the (similar):

ChainMap(other, *cm.maps)  # Note that `other` is the original reference here.

being okay as well. Maybe even better, now that I've written it out.

OK, that makes sense; it works similarly to ChainMap.copy(), which copies maps[0] and keeps links to the rest. So in particular `cm | {}` will do the same thing as cm.copy().

I'm not sure if the dict(other) cast is the best way to go about it. Maybe this would work?

def __or__(self, other):
    new = self.copy()
    new |= other  # OR new.update(other) ???
    return new

def __ior__(self, other):
    self.update(other)
    return self

Note that there is no ChainMap.update() definition -- it relies on MutableMapping.update(). I guess we need a __ror__ as well, in case there's some other mapping that doesn't implement __or__:

def __ror__(self, other):
    new = other.copy()
    new.update(self)
    return new

Note that this doesn't return a ChainMap but an instance of type(other). If other doesn't have a copy() method, it'll fail. As a refinement, __or__ and __ror__ should perhaps check whether the operation can possibly succeed and return NotImplemented instead of raising? (Based on the type of other only, not its contents.)

I didn't see your second reply, with `ChainMap(other, *cm.maps)`. I'm not so keen on that, because its special behavior can't be mimicked by `|=`.

> I'm not sure if the dict(other) cast is the best way to go about it. Maybe this would work?

Yeah, I was imagining something like that... I used the cast for brevity in my reply, but that probably wasn't helpful. Note that for __or__, we probably want to check the type of the argument (for either dict or ChainMap, or maybe just Mapping), to keep it from working on an iterable of key-value pairs.

> I guess we need a __ror__ as well, in case there's some other mapping that doesn't implement __or__:

Agreed.
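Putting the pieces of this exchange together, one possible runnable sketch looks like this (the class name is hypothetical, and __ror__ is simplified to return a plain dict rather than an instance of type(other)):

```python
from collections import ChainMap
from collections.abc import Mapping

class OrChainMap(ChainMap):
    """Sketch of the union operators being discussed; not CPython's code."""

    def __or__(self, other):
        if not isinstance(other, Mapping):
            return NotImplemented
        new = self.copy()          # copies maps[0], links the rest
        new.update(other)          # only mutates the copied maps[0]
        return new

    def __ror__(self, other):
        if not isinstance(other, Mapping):
            return NotImplemented
        new = dict(other)          # simplification: always returns a dict
        new.update(self)
        return new

    def __ior__(self, other):
        self.update(other)         # i.e. maps[0] receives the update
        return self

d1, d2 = {'a': 1}, {'b': 2}
cm = OrChainMap(d1, d2)

r = cm | {'b': 9, 'c': 3}
assert r['b'] == 9 and r['c'] == 3
assert d1 == {'a': 1}              # the originals are untouched

r2 = {'x': 0} | cm                 # dict defers to our __ror__
assert r2 == {'x': 0, 'a': 1, 'b': 2}
```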
Again, we can check for Mapping here to assure success for the copy() move.

> As a refinement, __or__ and __ror__ should perhaps check whether the operation can possibly succeed and return NotImplemented instead of raising? (Based on the type of other only, not its contents.)

Yup, see above. I think a check for Mapping should be fine. Just to clarify: if we decide to check isinstance(other, (ChainMap, dict)), '|' should probably be used. If we decide to check isinstance(other, Mapping), I think the copy/update methods should be used.

1.

def __or__(self, other):
    return self.__class__(self.maps[0] | other, *self.maps[1:])

def __ror__(self, other):
    return other | dict(self)

2.

def __or__(self, other):
    return self.__class__(other, *self.maps)

def __ror__(self, other):
    return self.__class__(*self.maps, other)

There are problems with both variants, so I think it may be better to not add this operator to ChainMap.

I think we're only seriously considering the first variant (although implemented slightly differently; see my last two messages). And __ror__ would probably change, returning the type of self. What are the "problems" with it, exactly? We seem to be in agreement that the update behavior is reasonable, even for ChainMaps.

We already have somewhat different semantics of `|` for Counter, and hence I think it's fine to give it the most useful semantics for ChainMap given that class's special behavior. I think we've come up with the right solution there. Let's stop the debate and put up a PR.

Sounds good, I'll have these up soon.

New changeset 57c9d1725689dde068a7fccaa7500772ecd16d2e by Brandt Bucher in branch 'master': bpo-36144: Implement defaultdict union (GH-18729)

Still waiting for ChainMap -- what else?

My brother will have a ChainMap PR up soon. I'm just finishing up MappingProxyType, myself. Probably both this weekend. Then I'll move on to OrderedDict, which looks like it could be tricky.
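A note on the defaultdict changeset referenced above: as implemented for 3.9, the | operator on a defaultdict carries the left operand's default_factory over to the result, so the merge stays a working defaultdict:

```python
from collections import defaultdict

dd = defaultdict(list, {'a': [1]})
merged = dd | {'b': [2]}           # requires Python 3.9+

assert isinstance(merged, defaultdict)
assert merged.default_factory is list   # factory carried over
assert merged == {'a': [1], 'b': [2]}

# the factory still works on the merged result
assert merged['missing'] == []
```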
I'll need to familiarize myself with the implementation better (unless there's somebody who is already familiar with it who wants to take over). It looks well-commented, though. I think we can pass on the http.cookies subclasses, since there don't appear to be any experts/maintainers for that module.

New changeset 4663f66f3554dd8e2ec130e40f6abb3c6a514775 by Brandt Bucher in branch 'master': bpo-36144: Update MappingProxyType with PEP 584's operators (#18814)

Issue 39857 just reminded me that we should update os._Environ as well (the type of os.environ and os.environb). I have another first-timer who will probably want to take it.

Once this issue is done, would you mind updating PEP 584? Currently, it only says "This PEP proposes adding merge (|) and update (|=) operators to the built-in dict class." It doesn't mention that other types like OrderedDict, MappingProxy or ChainMap are updated as well.

Yes, I can add a section explaining that after the PEP was accepted, we decided to add the operators to several non-dict mappings as well. I'm also going to add some explanation as to why Mapping/MutableMapping didn't grow them, too.

New changeset d648ef10c5c7659ed3c9f34d5c751dc55e2c6007 by Charles Burkland in branch 'master': bpo-36144: Update os.environ and os.environb for PEP 584 (#18911)

New changeset 6d674a1bf456945eb758e85c11484a9f1494f2b4 by Brandt Bucher in branch 'master': bpo-36144: OrderedDict Union (PEP 584) (#18967)

Three other MutableMappings we might want to update:
- shelve.Shelf
- weakref.WeakKeyDictionary
- weakref.WeakValueDictionary

Shelf is up in the air, since it doesn't look like it defines a copy() equivalent... I also have no experience with it. Since it's a MutableMapping subclass (not a dict subclass), we could in theory hold off on updating this until someone asks for it, without backward compatibility issues. I think the other two should be updated, though. I can coordinate PRs for them this week.
I definitely think we should leave Shelf alone; it's a toy class from a different era. It makes sense to update the weak dicts; hopefully the | and |= operators can be implemented in terms of other, more primitive operations, so we will have assurance that these classes' essential behavior is preserved.

New changeset f393b2c588559162dc2e77f8079a42e48558870a by Curtis Bucher in branch 'master': bpo-36144: Add PEP 584 operators to collections.ChainMap (#18832)

New changeset 25e580a73c163f472fdeb5489bebef85da21655c by Curtis Bucher in branch 'master': bpo-36144: Add union operators to WeakKeyDictionary (#19106)

New changeset 8f1ed21ecf57cc8b8095d9d1058af2b9b3ed0413 by Curtis Bucher in branch 'master': bpo-36144: Add union operators to WeakValueDictionary (#19127)

And... that's it! Big thanks to everybody who had a part in making this happen. I'm guessing this can be closed?

I guess we should keep this open until Raymond Hettinger has given feedback (where we have the option of changing to Brandt's proposal).

First off, thanks for adding the feature; it's much appreciated. But it'd be great if you guys could enable list merge for a dict whose values are lists; in its current form, I believe it handles only "key: value" merge. For example:

>>> d1 = {'spam': [1, 2, 3]}
>>> d2 = {'spam': [2, 3, 4]}
>>> d1 | d2
{'spam': [1, 2, 3, 4]}

Similar behavior was considered and ultimately rejected by the PEP as being too specialized. What you're asking for is subtly different, and even *more* specialized than that. I'd recommend just subclassing dict and overriding the operator, as the PEP suggests.
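The subclass-and-override route suggested here could look like this (the class name and the merge policy, concatenating while skipping values already present, are illustrative choices, not anything the PEP specifies):

```python
class ListMergeDict(dict):
    """dict whose | concatenates list values for common keys."""

    def __or__(self, other):
        merged = ListMergeDict(self)
        for key, value in other.items():
            if key in merged:
                # extend with items not already present, keeping order
                merged[key] = merged[key] + [v for v in value
                                             if v not in merged[key]]
            else:
                merged[key] = value
        return merged

d1 = ListMergeDict({'spam': [1, 2, 3]})
d2 = {'spam': [2, 3, 4]}
assert (d1 | d2) == {'spam': [1, 2, 3, 4]}
```

This matches the result the commenter asked for, while leaving the built-in dict's operator alone.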
https://bugs.python.org/issue36144
How to create pages dynamically in Gatsby using MDX

16 Jan 2020

In this post, we will be looking into how to create pages programmatically using MDX in Gatsby. To get up and running, we need to install a couple of plugins:

npm i gatsby-plugin-mdx @mdx-js/mdx @mdx-js/react

Then we need to configure gatsby-plugin-mdx inside gatsby-config.js:

plugins: [
  {
    resolve: 'gatsby-plugin-mdx',
    options: {
      defaultLayouts: {
        default: require.resolve('./src/components/Layout.js'),
      },
    },
  },
];

So first we need to resolve the plugin gatsby-plugin-mdx, because we also want to pass in an options object which defines what layout we want to use in our MDX files.

Note: require.resolve gives us the absolute path name.

As a result, any MDX files that we load will be loaded into the Layout.js template that we defined in the gatsby-config. Now that we have installed the plugin, it will look for MDX files in the pages or posts directory which we defined in gatsby-config. So to get the post pages into Gatsby, we are going to use another plugin, gatsby-source-filesystem:

npm i gatsby-source-filesystem

to get them into the data layer so that we can access them. The Gatsby source filesystem is a way to use local files as part of the GraphQL data layer. Once it is installed, we need to update the Gatsby config to resolve the source filesystem plugin:

plugins: [
  {
    resolve: 'gatsby-plugin-mdx',
    options: {
      defaultLayouts: {
        default: require.resolve('./src/components/Layout.js'),
      },
    },
  },
  {
    resolve: 'gatsby-source-filesystem',
    options: {
      name: 'posts',
      path: `${__dirname}/content/posts`,
    },
  },
];

As a result, it will load anything that it finds in the path /content/posts as part of the data layer, and because we have the Gatsby MDX plugin installed, it's going to look for MDX files and transform those into GraphQL nodes. The whole reason for using MDX is that we want to add some sort of interactivity in the markup-generated pages.
Now that we have added configuration to look for files in the system and transform them into GraphQL nodes, we need to generate those post files as pages programmatically using the Gatsby API createPages, by configuring that in gatsby-node.js. Gatsby has a number of available APIs that can be used to extend how Gatsby works; from gatsby-node.js you can export a function with the same name as one of the hooks that Gatsby looks for, and Gatsby will run those instructions at the build phase. In this case, we want to create pages, so we use exports.createPages, and because we are going to load data, we make the function async. Gatsby gives us a couple of utilities such as actions, a graphql helper and a reporter (which can be used if you want to put something in the console; it's Gatsby's internal kind of console.log):

exports.createPages = async ({ actions, graphql, reporter }) => {
  const result = await graphql(`
    query {
      allMdx {
        nodes {
          frontmatter {
            path
          }
        }
      }
    }
  `);

  if (result.errors) {
    reporter.panic('failed to create posts ', result.errors);
  }

  const pages = result.data.allMdx.nodes;

  pages.forEach((page) => {
    actions.createPage({
      path: page.frontmatter.path,
      component: require.resolve('./src/templates/postTemplate.js'),
      context: {
        pathSlug: page.frontmatter.path,
      },
    });
  });
};

In the createPages function, we use the graphql helper to fetch the nodes from the data layer by passing in a GraphQL query, as you can see in the snippet above. Then we create the pages using actions.createPage as we loop through the pages that came back as an array, generating them programmatically.

actions.createPage takes an options object as a parameter that has three properties: path, component and context. Path is what we have defined in the MDX frontmatter. Component takes in the path to the template you want to use for these pages. Below is a sample snippet used as a page template.
import { graphql } from 'gatsby';
import { MDXRenderer } from 'gatsby-plugin-mdx';
import React from 'react';
import Layout from '../components/Layout';

export const query = graphql`
  query($pathSlug: String!) {
    mdx(frontmatter: { path: { eq: $pathSlug } }) {
      frontmatter {
        title
        path
      }
      body
    }
  }
`;

const Post = ({ data: { mdx: post } }) => {
  const { title } = post.frontmatter;
  const { body } = post;
  return (
    <div>
      <Layout>
        <h1>{title}</h1>
        <MDXRenderer>{body}</MDXRenderer>
      </Layout>
    </div>
  );
};

export default Post;

Context takes in an object with pathSlug as its property, whose value is the page path. Once we finish adding the above, we can add interactivity to our MDX pages, which would look like this:

---
path: '/blog/hello-world'
date: '2020/01/01'
title: 'Hello World'
summary: 'hello world post'
---

import Counter from '../../../src/components/Counter';

Hello World

<Counter />

If you followed along with the post, you can find a starter repo here that shows the usage of MDX pages.
https://malikgabroun.com/blog/gatsby-create-pages-with-mdx/
External Database Connection

There are two types of database connection:

External: Used to connect to any database, such as your sales or contacts database.
Local: Connects to the current database your Atlassian application is using.

The Resources page lists all previously configured database connections. To set up a connection to an external database, you need to know the following information for the database:

- The JDBC URL
- Any required credential information (username and password)
- The driver class

To set up an external database connection and make the database available to scripts:

1. Navigate to ScriptRunner > Resources > Add New Item > Database Connection.
2. Provide a name for the connection in Pool Name.
3. Enter the JDBC URL of the database to which you wish to connect. For more information on sample JDBC URLs, see JDBC Driver Connection URL Strings.
4. Provide the Driver Class Name. Click Show examples to see a list of common driver classes.
5. Enter the username required to authenticate the database in User and the corresponding Password.
6. Optionally, enter a query into the SQL field to test receiving information from the database. Use Preview to test out different queries.
7. Again optionally, you can set advanced connection pool properties, such as the maximum pool size. This may be useful in several cases:
   - If you are using Data Center, by default the database resource will take 10 connections per node in your cluster.
   - If you would like to limit a database session to ten minutes, enter maxLifetime=600000 (ten minutes in milliseconds).
   - If you see connections growing, it could be that a prepared statement was not closed. Enter leakDetectionThreshold=2000 to report any connection that was borrowed from the pool for more than two seconds. If your database query is expected to take more than two seconds, then this is probably not a leak, and you should raise this value.
8. If the preview is successful, click Add.
Other Drivers

You might want to use these to create a custom field that allows users to pick from a row in a spreadsheet. In the example shown below, we are using a CSV driver. This makes available all CSV files in the directory provided in the JDBC URL. So in /tmp, we have a CSV file called devs.csv.

Local Database Connection

SQL can be used to run queries that are harder to achieve using the API, for example aggregate queries. If the preview is successful, click Add.

LDAP Connection

Adding an LDAP resource allows you to query your LDAP servers in a similar way to database connections. Use an LDAP resource to:

- Validate that a username provided in a custom field is a member of an LDAP group.
- Write a REST endpoint to list office addresses.
- In a Leavers workflow, use a post function to mark a user as having left the company.

To set up an LDAP connection and make the connection available to scripts:

1. Navigate to ScriptRunner > Resources > Add New Item > LDAP Connection.
2. Provide a name for the connection in Pool Name.
3. Enter the Host.
4. Optionally, check Use TLS to use TLS/SSL encryption.
5. Enter the Port the LDAP connection is using.
6. Enter the base dn into the Base field.
7. Add the User dn.
8. Enter the LDAP Password.
9. Click Add.

Clicking Preview validates that a successful connection and query can be made to the LDAP server.

Use LDAP Resources in Scripts

Having set up an LDAP connection, you can use it in a script as follows. This example uses the LDAP connection with the Pool Name corporate.

import com.onresolve.scriptrunner.ldap.LdapUtil
import org.springframework.ldap.core.AttributesMapper
import javax.naming.directory.SearchControls

def cnList = LdapUtil.withTemplate('corporate') { template ->
    template.search("", "(sn=Smi*)", SearchControls.SUBTREE_SCOPE, { attributes ->
        attributes.get('cn').get()
    } as AttributesMapper<String>)
}

// cnList now contains the list of common names of users whose surnames begin with "Smi"...
LdapUtil.withTemplate takes two arguments:

- The name of the connection, as defined by you in the Pool Name parameter when adding the connection (in this example, corporate).
- A closure. The closure receives an org.springframework.ldap.core.LdapOperations object as an argument.

See Spring LDAP for more information on querying. Where the documentation refers to an LdapTemplate, this is equivalent to the above-mentioned LdapOperations object.
https://scriptrunner.adaptavist.com/6.9.0/jira/resources.html
Introduction:

I have already explained how to add a header and how to navigate between different screens in React Native. The navigation header is an important part of a mobile application, and so is its style. It should follow a consistent design pattern on all screens of your app to make it attractive. React Navigation makes it easy to add style to the navigation header: we can change the color, tint color, or font easily using props. In this post, I will show you how we can customize the header of react-navigation with an example.

Change the color of the header:

To change the color of the header, we can use the headerStyle prop. It takes one style object, and backgroundColor in that object is used to change the header color. Let's take a look at the example snippet below:

<Stack.Screen
  name="HomeScreen"
  component={HomeScreen}
  options={{
    title: 'Home Page',
    headerStyle: {
      backgroundColor: '#3f51b5',
    },
  }}
/>

This screen is inside a Navigator. You can add headerStyle with different colors to different screens if you want. Normally, though, we use only one color on all screen headers, not different colors. For that, we can put the style in the Navigator; I will show this at the end of the article.

Changing header tint color and title style:

The tint color is used for the title and the back button. Other than that, we can change the title font family, title font weight, and other font-related settings. headerTintColor is used to change the tint color. headerTitleStyle takes props related to the title text style, like fontWeight for font weight. For example:

<Stack.Screen
  name="HomeScreen"
  component={HomeScreen}
  options={{
    title: 'Home Page',
    headerStyle: {
      backgroundColor: '#f4511e',
    },
    headerTintColor: '#fff',
    headerTitleStyle: {
      fontFamily: 'Cochin',
      fontWeight: 'bold',
      fontSize: 20,
    },
  }}
/>

Using a common header style and color for all screens:

The above example applies the style only to specific screens.
That means that if we want the same style on all of our application screens, we need to copy-paste the same header and header title style onto each one, which creates repeated code. Instead, we can move the shared style settings to the Navigator under a prop called screenOptions, like below:

import React from 'react';
import 'react-native-gesture-handler';
import {NavigationContainer} from '@react-navigation/native';
import HomeScreen from './screens/HomeScreen';
import DetailScreen from './screens/DetailScreen';
import {createStackNavigator} from '@react-navigation/stack';

export default function App() {
  const Stack = createStackNavigator();
  return (
    <NavigationContainer>
      <Stack.Navigator
        initialRouteName="HomeScreen"
        screenOptions={{
          headerStyle: {
            backgroundColor: '#f4511e',
          },
          headerTintColor: '#fff',
          headerTitleStyle: {
            fontFamily: 'Cochin',
            fontWeight: 'bold',
            fontSize: 20,
          },
        }}>
        <Stack.Screen
          name="HomeScreen"
          component={HomeScreen}
          options={{
            title: 'Home Page',
          }}
        />
        <Stack.Screen
          name="DetailScreen"
          component={DetailScreen}
          options={({route}) => ({title: route.params.title})}
        />
      </Stack.Navigator>
    </NavigationContainer>
  );
}

We have two screens defined in this example: HomeScreen and DetailScreen. Putting the header style in the Navigator applies the same settings to both of these screens. If you want to change the style on all screens, you can do it in one common place. Also, if you add any more screens, the style will automatically apply to them as well. Putting the style in a common place makes the code easy to maintain. On an iOS device, the Home screen will look as below:
https://www.codevscolor.com/react-navigation-change-header-text-color-font
How To Train ML Models With Mislabeled Data

3 tips on how to train machine learning models efficiently when your data is noisy and mislabeled.

In this article, I would like to talk about 3 tricks that helped me train models efficiently and win a silver medal in a kaggle competition where the dataset was mislabeled and contained a significant amount of noise. By using those 3 tricks, I managed to deal with the noisy data and finish in 114th position out of 3900 teams in this competition.

Rule n° 1 in data science: Garbage In = Garbage Out. Mislabeled data is part of real-world data; not all datasets are clean. Most datasets tend to have some amount of noise, which can be challenging when training a machine learning model. The good news is that the Garbage In = Garbage Out rule can be overcome with some tricks that help your model adapt to the mislabeled data.

A brief introduction to the dataset: Cassava leaf disease prediction

It's a computer vision competition with a dataset of 21,367 labeled images of cassava plants. The aim of the competition was to classify each cassava image into four disease categories or a fifth category indicating a healthy leaf. After a quick exploratory data analysis, I realized that some of the images were mislabeled; consider the example of the 2 images below:

We can clearly see that the 1st image contains diseased leaves while the 2nd one has healthy leaves. Well, both images were labeled as 'healthy' in this dataset, which makes the model's task harder, since it has to extract and learn the features of both healthy and diseased leaves and assign them to the same class: healthy.

In the following section, I would like to talk about 3 tricks I found useful for dealing with noisy datasets:

1- Bi-Tempered loss function:

Picking the right loss function is critical in machine learning. It depends a lot on your data, task and metric.
In this case, we have a multi-class classification (5 classes) with categorical accuracy as the metric. So, the first loss function that comes to mind is categorical cross-entropy. However, we have a mislabeled dataset, and the cross-entropy loss is very sensitive to outliers: mislabeled images can stretch the decision boundaries and dominate the overall loss. To solve this problem, Google AI researchers introduced a "bi-tempered" generalization of the logistic loss, endowed with two tunable parameters that handle those situations well, which they call "temperatures" — t1, which characterizes boundedness, and t2 for tail-heaviness. It's basically a cross-entropy loss with 2 new tunable parameters, t1 and t2. The standard cross-entropy can be recovered by setting both t1 and t2 equal to 1.

So, what happens when we tune the t1 and t2 parameters? Let's understand what's happening here:

- With small-margin noise: the noise stretched the decision boundary in a heavy-tailed form. This was solved with the Bi-Tempered loss by tuning the t2 parameter from t2=1 to t2=4.
- With large-margin noise: the large noise stretched the decision boundary in a bounded way, covering more surface than the heavy tail in the case of small-margin noise. The Bi-Tempered loss solved this by tuning the t1 parameter from t1=1 to t1=0.2.
- With random noise: here, we can see both heavy tails and bounded decision boundaries, so both t1 and t2 are adjusted in the Bi-Tempered loss.

The best way to fine-tune the t1 and t2 parameters is by plotting your model's decision boundary and checking whether it is heavy-tailed, bounded or both, then tweaking t1 and t2 accordingly. If you are dealing with tabular data, you can use the plot_decision_regions() function to visualize your model's decision boundaries.
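To make the "boundedness" claim concrete, here is a small sketch of the tempered logarithm that underlies the bi-tempered loss (my own illustration, not Google's reference implementation): for 0 < t < 1 the function is bounded below by -1/(1-t), which is what keeps a single mislabeled example from dominating the total loss.

```python
import numpy as np

def log_t(x, t):
    """Tempered logarithm: reduces to np.log(x) as t -> 1.

    For 0 < t < 1 it is bounded below by -1/(1-t), so a loss
    built from it cannot blow up on a single noisy example.
    """
    if t == 1.0:
        return np.log(x)
    return (x ** (1.0 - t) - 1.0) / (1.0 - t)
```

For example, with t=0.5 the tempered log never goes below -2, no matter how small the predicted probability of the (possibly wrong) label is, whereas the ordinary log diverges to minus infinity.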
#Import package
from mlxtend.plotting import plot_decision_regions
import matplotlib.pyplot as plt

# Plot decision boundary
plot_decision_regions(X=x_test, y=y_test, clf=model, legend=2)
plt.show()

You can learn more about the Bi-Tempered loss in the Google AI blog and their github repository.

2- Self Distillation:

If you are already familiar with knowledge distillation, where knowledge transfer takes place from a teacher to a student model, self-distillation is a very similar concept. It was introduced in the paper: Be Your Own Teacher: Improve the Performance of Convolutional Neural Networks via Self Distillation. The idea is simple: you train your model and then you retrain it using itself as a teacher. The paper discusses a more advanced approach that includes several loss functions and some architecture modifications (additional bottleneck and fully connected layers). In this article, I'd like to introduce a much simpler approach. I first read about it in the first-place solution of the plant pathology competition on kaggle, where the winning team used self-distillation to deal with their noisy dataset. You can check the code in their github repository.

Self-distillation in 3 steps:

- 1- Split your dataset into k folds for cross-validation.
- 2- Train model 1 to produce the out-of-fold predictions.
- 3- After saving the out-of-fold predictions made by our model, we load them and blend them with the original labels. The blending coefficients are tunable; the original labels should have the higher coefficient.

The out-of-fold predictions here are class probabilities predicted by model 1:

- In this particular example we have a multiclass classification with 5 classes [0,1,2,3,4].
- The labels are one-hot encoded. Class 2 is represented as [0,0,1,0,0].
- Model 1 predicted class 2 correctly: [0.1, 0.1, 0.4, 0.1, 0.3], giving it a probability of 0.4, higher than the other classes. But it also gave class 4 a high probability of 0.3.
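The blending step described above can be sketched in a few lines. The 0.7/0.3 coefficients are assumed values for illustration; the only requirement stated above is that the original labels get the higher weight.

```python
import numpy as np

# one-hot label for class 2, and model 1's out-of-fold probabilities
y_onehot = np.array([0.0, 0.0, 1.0, 0.0, 0.0])
oof_pred = np.array([0.1, 0.1, 0.4, 0.1, 0.3])

alpha = 0.7  # assumed: original labels get the higher coefficient
soft_label = alpha * y_onehot + (1 - alpha) * oof_pred
# soft_label keeps class 2 dominant but preserves the signal on class 4
```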
Model 2 will then use this information to improve its predictions.

3- Ensemble learning:

Ensemble learning is well known to improve the quality of predictions in general. In the case of noisy datasets it can be very helpful, because each model has a different architecture and learns different patterns. I was planning to try the Vision Transformer models released by Google AI in the paper An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale, and this competition was the perfect place to try them and learn more about them, since they introduce a new concept in computer vision, different from the convolutional neural networks that dominate the field. In short, the ensemble of a vision transformer model with 2 different CNN architectures improves the prediction quality over the single models.

To sum up, you can train machine learning models with mislabeled data by using:

- The Bi-Tempered loss function, tuning its parameters t1 and t2 correctly.
- Self-distillation: train your model and retrain it again using itself as a teacher.
- Ensemble learning: ensemble the predictions of different models.

If you would like to learn more details about the model training process, check the summary of my approach on kaggle. The charts were made with the app draw.io

THE END
https://aminey.medium.com/how-to-train-ml-models-with-mislabeled-data-cf4bb353b3d9?source=user_profile---------1----------------------------
At Cloudflare we develop new products at a great pace. Their needs often challenge the architectural assumptions we made in the past. For example, years ago we decided to avoid using Linux's "conntrack" - the stateful firewall facility. This brought great benefits - it simplified our iptables firewall setup, sped up the system a bit and made the inbound packet path easier to understand. But eventually our needs changed. One of our new products had a reasonable need for it. But we weren't confident - can we just enable conntrack and move on? How does it actually work? I volunteered to help the team understand the dark corners of the "conntrack" subsystem.

What is conntrack?

"Conntrack" is a part of the Linux network stack, specifically part of the firewall subsystem. To put that into perspective: early firewalls were entirely stateless. They could express only basic logic, like: allow SYN packets to ports 80 and 443, and block everything else. The stateless design gave some basic network security, but was quickly deemed insufficient. You see, there are certain things that can't be expressed in a stateless way. The canonical example is the assessment of ACK packets - it's impossible to say whether an ACK packet is legitimate or part of a port scanning attempt without tracking the connection state. To fill such gaps, all the operating systems implemented connection tracking inside their firewalls. This tracking is usually implemented as a big table with at least 6 columns: protocol (usually TCP or UDP), source IP, source port, destination IP, destination port and connection state. On Linux this subsystem is called "conntrack" and is often enabled by default. Here's how the table looks on my laptop, inspected with the "conntrack -L" command:

The obvious question is how large this state tracking table can be. This setting is under "/proc/sys/net/nf_conntrack_max":

$ cat /proc/sys/net/nf_conntrack_max
262144

This is a global setting, but the limit is per container.
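Alongside the maximum, the kernel also exposes the current number of entries, so the fill level can be computed. A hedged shell sketch (the procfs paths in the comment are the standard locations on modern kernels, but may differ on older systems):

```shell
# conntrack_usage_pct CURRENT MAX -> prints the integer fill percentage
conntrack_usage_pct() {
    echo $(( $1 * 100 / $2 ))
}

# On a live system you would feed it the procfs values, e.g.:
#   conntrack_usage_pct "$(cat /proc/sys/net/netfilter/nf_conntrack_count)" \
#                       "$(cat /proc/sys/net/nf_conntrack_max)"
conntrack_usage_pct 131072 262144   # prints 50
```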
On my system each container, or "network namespace", can have up to 256K conntrack entries. What exactly happens when the number of concurrent connections exceeds the conntrack limit?

Testing conntrack is hard

In the past, testing conntrack was hard - it required a complex hardware or VM setup. Fortunately, these days we can use modern "user namespace" facilities which do permission magic, allowing an unprivileged user to feel like root. Using the tool "unshare" it's possible to create an isolated environment where we can precisely control the packets going through and experiment with iptables and conntrack without threatening the health of our host system. With appropriate parameters it's possible to create and manage a networking namespace, including access to namespaced iptables and conntrack, from an unprivileged user. This script is the heart of our test:

# Enable tun interface
ip tuntap add name tun0 mode tun
ip link set tun0 up
ip addr add 192.0.2.1 peer 192.0.2.2 dev tun0
ip route add 0.0.0.0/0 via 192.0.2.2 dev tun0

# Refer to conntrack at least once to ensure it's enabled
iptables -t raw -A PREROUTING -j CT

# Create a counter in mangle table
iptables -t mangle -A PREROUTING

# Make sure reverse traffic doesn't affect conntrack state
iptables -t raw -A OUTPUT -p tcp --sport 80 -j DROP

tcpdump -ni any -B 16384 -ttt &
...
./venv/bin/python3 send_syn.py

conntrack -L

# Show iptables counters
iptables -nvx -t raw -L PREROUTING
iptables -nvx -t mangle -L PREROUTING

This bash script is shortened for readability. See the full version here. The accompanying "send_syn.py" just sends 10 SYN packets over the "tun0" interface. Here is the source, but allow me to paste it here - showing off "scapy" is always fun:

tun = TunTapInterface("tun0", mode_tun=True)
tun.open()

for i in range(10000, 10000+10):
    ip = IP(src="198.18.0.2", dst="192.0.2.1")
    tcp = TCP(sport=i, dport=80, flags="S")
    send(ip/tcp, verbose=False, inter=0.01, socket=tun)

The bash script above contains a couple of gems.
Let's walk through them. First, please note that we can't just inject packets into the loopback interface using SOCK_RAW sockets. The Linux networking stack is a complex beast. The semantics of sending packets over a SOCK_RAW socket are different from delivering a packet over a real interface. We'll discuss this later, but for now, to avoid triggering unexpected behaviour, we will deliver packets over a tun/tap device, which better emulates a real interface.

Then we need to make sure that conntrack is active in the network namespace we wish to use for testing. Traditionally, just loading the kernel module would have done that, but in the brave new world of containers and network namespaces, a method had to be found to allow conntrack to be active in some containers and inactive in others. Hence this is tied to usage - rules referencing conntrack must exist in the namespace's iptables for conntrack to be active inside the container. As a side note, containers triggering the host to load kernel modules is an interesting subject.

After the "-t raw -A PREROUTING" rule, we added a "-t mangle -A PREROUTING" rule - but notice: it doesn't have any action! This syntax is allowed by iptables and is pretty useful to get iptables to report rule counters. We'll need these counters soon. A careful reader might suggest looking at "policy" counters in iptables to achieve our goal. Sadly, "policy" counters (increased for each packet entering a chain) work only if there is at least one rule inside the chain.

The rest of the steps are self-explanatory. We set up "tcpdump" in the background and send 10 SYN packets to 192.0.2.1:80 using the "scapy" Python library. Then we print the conntrack table and iptables counters. Let's see this script in action. Remember to run it under a networking namespace as fake root, with "unshare -Ur -n":

This is all nice. First we see a "tcpdump" listing showing 10 SYN packets. Then we see the conntrack table state, showing 10 created flows.
Finally, we see iptables counters in the two rules we created, each showing 10 packets processed.

Can the conntrack table fill up?

Given that the conntrack table is size-constrained, what exactly happens when it fills up? Let's check it out. First, we need to drop the conntrack size. As mentioned, it's controlled by a global toggle - it's necessary to tune it on the host side. Let's reduce the table size to 7 entries and repeat our test:

This is getting interesting. We still see the 10 inbound SYN packets. We still see that the "-t raw PREROUTING" table received 10 packets, but this is where the similarities end. The "-t mangle PREROUTING" table saw only 7 packets. Where did the three missing SYN packets go? It turns out they went where all the dead packets go. They were hard dropped. Conntrack on overfill does exactly that. It even complains in "dmesg":

This is confirmed by our iptables counters. Let's review the famous iptables diagram:

As we can see, "-t raw PREROUTING" happens before conntrack, while "-t mangle PREROUTING" is just after it. This is why we see 10 and 7 packets reported by our iptables counters.

Let me emphasize the gravity of our discovery. We showed three completely valid SYN packets being implicitly dropped by "conntrack". There is no explicit "-j DROP" iptables rule. There is no configuration to be toggled. Just the fact of using "conntrack" means that, when it's full, packets creating new flows will be dropped. No questions asked. This is the dark side of using conntrack. If you use it, you absolutely must make sure it doesn't get filled.

We could end our investigation here, but there are a couple of interesting caveats.

Strict vs loose

Conntrack supports a "strict" and a "loose" mode, as configured by the "nf_conntrack_tcp_loose" toggle:

$ cat /proc/sys/net/netfilter/nf_conntrack_tcp_loose
1

By default, it's set to "loose", which means that stray ACK packets for unseen TCP flows will create new flow entries in the table.
We can generalize: "conntrack" will implicitly drop all packets that create a new flow, whether that's a SYN or just a stray ACK.

What happens when we set "nf_conntrack_tcp_loose=0"? This is a subject for another blog post, but suffice to say - it's a mess. First, this setting is not settable in the network namespace scope - although it should be. To test it you need to be in the root network namespace. Then, due to twisted logic, the ACK will be dropped on a full conntrack table, even though in this case it doesn't create a flow. If the table is not full, the ACK packet will pass through it, having "-ctstate INVALID" from the "mangle" table onward.

When doesn't a conntrack entry get created?

There are important situations in which a conntrack entry is not created. For example, we could replace these lines in our script:

# Make sure reverse traffic doesn't affect conntrack state
iptables -t raw -A OUTPUT -p tcp --sport 80 -j DROP

With these:

# Make sure inbound SYN packets don't go to networking stack
iptables -A INPUT -j DROP

Naively, we could think that dropping SYN packets past the conntrack layer would not interfere with the created flows. This is not correct. In spite of these SYN packets having been seen by conntrack, no flow state is created for them. Packets hitting "-j DROP" will not create new conntrack flows. Pretty magical, isn't it?

Full conntrack causes EPERM

Recently we hit a case where a "sendto()" syscall on a UDP socket from one of our applications was erroring with EPERM. This is pretty weird, and not documented in the man page. My colleague had no doubts: I'll save you the gruesome details, but indeed, a full conntrack table will do that to your new UDP flows - you will get EPERM. Beware. Funnily enough, it's possible to get EPERM if an outbound packet is dropped on the OUTPUT firewall in other ways.
For example:

marek:~$ sudo iptables -I OUTPUT -p udp --dport 53 --dst 192.0.2.8 -j DROP
marek:~$ strace -e trace=write nc -vu 192.0.2.8 53
write(3, "X", 1) = -1 EPERM (Operation not permitted)
+++ exited with 1 +++

If you ever receive EPERM from "sendto()", you might want to treat it as a transient error if you suspect a filled conntrack table, or as a permanent error if you blame the iptables configuration. This is also why we can't send our SYN packets directly using SOCK_RAW sockets in our test. Let's see what happens on conntrack overfill with the standard "hping3" tool:

$ hping3 -S -i u10000 -c 10 --spoof 192.18.0.2 192.0.2.1 -p 80 -I lo
HPING 192.0.2.1 (lo 192.0.2.1): S set, 40 headers + 0 data bytes
[send_ip] sendto: Operation not permitted

"send()" even on a SOCK_RAW socket fails with EPERM when the conntrack table is full.

Full conntrack can happen on a SYN flood

There is one more caveat. During a SYN flood, conntrack entries will totally be created for the spoofed flows. Take a look at the second test case we prepared, this time correctly listening on port 80 and sending SYN+ACK:

We can see 7 SYN+ACKs flying out of the port 80 listening socket. The final three SYNs go nowhere, as they are dropped by conntrack. This has important implications. If you use conntrack on publicly accessible ports, during a SYN flood mitigation technologies like SYN Cookies won't help. You are still at risk of running out of conntrack space and therefore affecting legitimate connections. For this reason, as a general rule, consider avoiding conntrack on inbound connections (-j NOTRACK). Alternatively, set some reasonable rate limits at the iptables layer, doing "-j DROP". This will work well and won't create new flows, as we discussed above. The best method, though, would be to trigger SYN Cookies from a layer before conntrack, like XDP. But this is a subject for another time.

Summary

Over the years Linux conntrack has gone through many changes and has improved a lot.
While performance used to be a major concern, these days it's considered to be very fast. Dark corners remain. Correctly applying conntrack is tricky. In this blog post we showed how it's possible to test parts of conntrack with "unshare" and a series of scripts. We showed the behaviour when the conntrack table gets filled - packets might implicitly be dropped. Finally, we mentioned the curious case of SYN floods where incorrectly applied conntrack may cause harm. Stay tuned for more horror stories as we dig deeper and deeper into the Linux networking stack guts.
https://blog.cloudflare.com/conntrack-tales-one-thousand-and-one-flows/
From: Michael Glassford (glassfordm_at_[hidden])
Date: 2004-06-29 10:39:12

Christopher Currie wrote:
> Michael Glassford wrote:
>> Christopher Currie wrote:
>>> TryLock: What would be the semantics of l(m, NO_LOCK, b)? In other
>>> words, if you're not going to lock on construction, why specify a
>>> blocking policy?
>>
>> The only reason is so that if you write l(m, LOCK, ...) you could also
>> specify the blocking parameter. An expansion of Vladimir Batov's idea of
>> using structs rather than enums could help here:
>>
>> struct nolock_t {};
>> nolock_t nolock;
>>
>> struct lock_t {};
>> lock_t lock;
>>
>> class TryLock
>> {
>>     TryLock(TryMutex m, nolock_t s)
>>     {...}
>>
>>     TryLock(TryMutex m, lock_t s, blocking_t b)
>>     {...}
>> }
>
> An interesting syntax, I can see how it does make explicit whether you
> are locking or not. Personally, I dislike having to type an extra
> argument when it's the only choice; IMO if you're specifying a blocking
> parameter, the locking is implied, and therefore superfluous.

That was an oversight on my part. Though in the case of read/write locks, you do need to specify both the lock type (read/write) and whether it is blocking or not.

> I was thinking something like (going back to enums for a moment):
>
> enum { unlocked = 0, locked } lock_init;
> enum { nonblocking = 0, blocking } blocking_action;
>
> class TryLock
> {
> public:
>     TryLock( TryMutex m, lock_init s = locked ) { ... }
>
>     TryLock( TryMutex m, blocking_action b ) { ... }
> };
>
> Also, going back to the struct technique, do the struct instances need
> to be in an anonymous namespace to prevent ODR violations? Just trying
> to get my head around the concept.

Probably they should be in a separate namespace, though possibly not anonymous.

Mike

Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
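[Editorial note: the tag-struct technique under discussion can be sketched as a self-contained, compilable example. All names are illustrative, not Boost's actual API, and the lock bodies are stubbed out.]

```cpp
#include <cassert>

// Tag types let the constructor signature state intent explicitly.
struct nolock_t {};
inline constexpr nolock_t nolock{};

struct lock_t {};
inline constexpr lock_t lock{};

enum class blocking_action { nonblocking, blocking };

struct TryMutex {};  // stand-in for a real mutex type

class TryLock {
public:
    // construct without acquiring the mutex
    TryLock(TryMutex&, nolock_t) : locked_(false) {}
    // construct and acquire, with an explicit blocking policy
    TryLock(TryMutex&, lock_t, blocking_action) : locked_(true) {}
    bool locked() const { return locked_; }
private:
    bool locked_;
};
```

With this shape, `TryLock l(m, nolock);` and `TryLock l(m, lock, blocking_action::blocking);` read unambiguously at the call site, which is the point Glassford makes about the blocking parameter only appearing together with an explicit lock request.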
https://lists.boost.org/Archives/boost/2004/06/67107.php
========================================================================
GECCO - Generic Environment for Context-Aware Correction of Orthography
========================================================================

by Maarten van Gompel
Centre for Language and Speech Technology, Radboud University Nijmegen
Sponsored by Revisely ()
Licensed under the GNU Public License v3

Gecco is a generic, modular and distributed framework for spelling correction, aimed at building complete spelling correction systems that can be distributed over multiple servers. Given an input text, Gecco will add various suggestions for correction. The system can be invoked from the command line, as a Python binding, as a RESTful webservice, or through the web application (two interfaces).

Modules:

Generic built-in modules:

- Confusible Module
  - A confusible module is able to discern which version of an often-confused word is correct given the context. For example, the words "then" and "than" are commonly confused in English.
  - Your configuration should specify between which confusibles the module disambiguates.
  - The module is implemented using the IGTree classifier (a k-Nearest Neighbour approximation) in Timbl.
- Suffix Confusible Module
  - A variant of the confusible module that checks commonly confused morphological suffixes, rather than words.
  - Your configuration should specify between which suffixes the module disambiguates.
  - The module is implemented using the IGTree classifier (a k-Nearest Neighbour approximation) in Timbl.
- Language Model Module
  - A language model predicts what words are likely to follow others, similar to predictive typing applications commonly found on smartphones.
  - The module is implemented using the IGTree classifier (a k-Nearest Neighbour approximation) in Timbl.
- Aspell Module
  - Aspell is open-source lexicon-based software for spelling correction. This module enables aspell to be used from gecco. This is not a context-sensitive method.
- Hunspell Module
  - Hunspell is open-source lexicon-based software for spelling correction. This module enables hunspell to be used from gecco. This is not a context-sensitive method.
- Lexicon Module
  - The lexicon module enables you to automatically generate a lexicon from corpus data and use it. This is not a context-sensitive method.
  - Typed words are matched against the lexicon and the module will offer suggestions within a certain Levenshtein distance.
- Errorlist Module
  - The errorlist module is a very simple module that checks whether a word is in a known error list, and if so, provides the suggestions from that list. This is not a context-sensitive method.
- Split Module
  - The split module detects words that are split but should be written together.
  - Implemented using Colibri Core.
- Runon Module
  - The runon module detects words that are written as one but should be split.
  - Implemented using Colibri Core.
- Punctuation & Recase Module
  - The punctuation & recase module attempts to detect missing punctuation, superfluous punctuation, and missing capitals.
  - The module is implemented using the IGTree classifier (a k-Nearest Neighbour approximation) in Timbl.
Modules suggested but not implemented yet:

- Language Detection Module (not written yet, option for later)
- Sound-alike Module (not written yet, option for later)

Features

- Easily extendible by adding modules using the gecco module API
- Language independent
- Built-in training pipeline (given corpus input): create models from sources
- Built-in testing pipeline (given an error-annotated test corpus): returns a report of evaluation metrics per module
- Distributed, multithreaded & scalable:
  - Load balancing: backend servers can run on multiple hosts; the master process distributes amongst these
  - Multithreaded: modules can be invoked in parallel, and module servers themselves may be multithreaded too
- Input and output is FoLiA XML ()
- Automatic input conversion from plain text using ucto

Gecco is the successor of Valkuil.net and Fowlt.net.

Installation

Gecco relies on a large number of dependencies, including but not limited to:

- Generic:
  - python 3.3 or higher
  - PyNLPl, needed for FoLiA support ()
  - python-ucto & ucto (in turn depending on libfolia, ticcutils)
- Module-specific:
  - Timbl (mandatory)
  - Colibri Core (mandatory)
  - For the Aspell Module: (optional)
  - For the Hunspell Module: (optional)
    - Hunspell
    - PyHunspell (not supported out of the box on Mac OS X)
- Webservice: (optional)
  - CLAM

To install Gecco, we strongly recommend you use our LaMachine distribution, which can be obtained from . LaMachine includes Gecco and can be run in multiple ways: as a virtual machine, as a docker app, or as a compilation script setting up a Python virtual environment.

Gecco uses memory-based technologies and, depending on the models you train, may take up considerable memory. We therefore recommend at least 16GB RAM; training may require even more. For various modules, model size may be reduced by increasing frequency thresholds, but this will come at the cost of reduced accuracy.

Gecco will only run on POSIX-compliant operating systems (i.e.
Linux, BSD, Mac OS X), not on Windows.

Configuration

To build an actual spelling correction system, you need to have corpus sources and create a gecco configuration that enables the modules you desire with the parameters you want. A Gecco system consists of a configuration, either in the form of a simple Python script or an external YAML configuration file.

Example YAML configuration:

name: fowlt
path: /path/to/fowlt
language: en
modules:
    - module: gecco.modules.confusibles.TIMBLWordConfusibleModule
      id: confusibles
      source:
          - train.txt
      model:
          - confusible.model
      confusibles: [then,than]

To list all available modules and the parameters they may take, run gecco --helpmodules. Alternatively, the configuration can be done in Python directly, in which case the script will be the tool that exposes all functionality:

from gecco import Corrector
from gecco.modules.confusibles import TIMBLWordConfusibleModule

corrector = Corrector(id="fowlt", root="/path/to/fowlt/")
corrector.append( TIMBLWordConfusibleModule("thenthan", source="train.txt", test_crossvalidate=True, test=0.1, tune=0.1, model="confusibles.model", confusible=('then','than')) )
corrector.append( TIMBLWordConfusibleModule("its", source="train.txt", test_crossvalidate=True, test=0.1, tune=0.1, model="confusibles.model", confusible=('its',"it's")) )
corrector.append( TIMBLWordConfusibleModule("errorlist", source="errorlist.txt", model="errorlist.model", servers=[("blah",1234),("blah2",1234)]) )
corrector.append( TIMBLWordConfusibleModule("lexicon", source=["lexicon.txt","lexicon2.txt"], model=["lexicon.model","lexicon2.model"], servers=[("blah",1235)]) )
corrector.main()

It is recommended to adopt a file/directory structure as described below. If you plan on using multiple hosts, you should store it on a shared network drive so all hosts can access the models:

- yourconfiguration.yml
- sources/
- models/

An example spelling correction system for English is provided with Gecco and resides in the example/ directory.
Server setup

When gecco <yourconfig.yml> run <input.folia.xml> is executed to process a given FoLiA or plaintext document, it starts a master process that will invoke all the modules, which may be distributed over multiple servers. If multiple server instances of the same module are available, the load will be distributed over them. Output will be delivered in the FoLiA XML format and will contain suggestions for correction.

To start module servers on a host, issue gecco <yourconfig.yml> startservers. You can optionally specify which servers you want to start, if you do not want to start all of them. You can start servers multiple times, either on the same or on multiple hosts. The master process will distribute the load amongst all servers. To stop the servers, run gecco <yourconfig.yml> stopservers on each host that has servers running. A list of all running servers can be obtained with gecco <yourconfig.yml> listservers.

Modules can also run locally within the master process rather than as servers, either by adding local: true in the configuration, or by adding the --local option when starting a run. This has a significant negative impact on performance, however, and should therefore be avoided.

Architecture

Command line usage

Invoke all gecco functionality through a single command line tool:

$ gecco myconfig.yml [subcommand]

or

$ myspellingcorrector.py [subcommand]

Syntax:

usage: gecco [-h] {run,startservers,stopservers,startserver,train,evaluate,reset} ...

Gecco is a generic, scalable and modular spelling correction framework

Commands:
  {run,startservers,stopservers,startserver,train,evaluate,reset}
    run           Run the spelling corrector on the specified input file
    startservers  Starts all the module servers that are configured to run on the current host. Issue once for each host.
    stopservers   Stops all the module servers that are configured to run on the current host. Issue once for each host.
    listservers   Lists all the module servers on all hosts
    startserver   Start one module's server on the specified port, use 'startservers' instead
    train         Train modules
    evaluate      Runs the spelling corrector on input data and compares it to reference data, produces an evaluation report
    reset         Reset modules, deletes all trained models that have sources. Issue prior to train if you want to start anew.

Vital documentation regarding all modules and the settings they take can be obtained through:

$ gecco --helpmodules

Gecco as a webservice

RESTful webservice access will be available through CLAM. We are still working on better integration of this in Gecco. For now, an example implementation of this can be seen here:

Gecco as a web-application

A web-application will eventually be available, modelled after Valkuil.net/Fowlt.net.
https://pypi.org/project/Gecco/
CC-MAIN-2022-40
en
refinedweb
I am using Python to write chunks of text to files in a single operation:

open(file, 'w').write(text)

If the script is interrupted so a file write does not complete, I want to have no file rather than a partially complete file. Can this be done?

Write data to a temporary file and, when the data has been successfully written, rename the file to the correct destination file, e.g.:

f = open(tmpFile, 'w')
f.write(text)
# make sure that all data is on disk
# see
f.flush()
os.fsync(f.fileno())
f.close()
os.rename(tmpFile, myFile)

According to the docs, the operation may fail on some Unix flavors if src and dst are on different filesystems.

Note:
- It may not be an atomic operation if the src and dest locations are not on the same filesystem.
- The os.fsync step may be skipped if performance/responsiveness is more important than data integrity in cases like power failure, system crash, etc.

A simple snippet that implements atomic writing using Python tempfile:

with open_atomic('test.txt', 'w') as f:
    f.write("huzza")

or even reading and writing to and from the same file:

with open('test.txt', 'r') as src:
    with open_atomic('test.txt', 'w') as dst:
        for line in src:
            dst.write(line)

using two simple context managers:

import os
import tempfile as tmp
from contextlib import contextmanager

@contextmanager
def tempfile(suffix='', dir=None):
    """ Context for temporary file.

    Will find a free temporary filename upon entering and will try to
    delete the file on leaving, even in case of an exception.

    Parameters
    ----------
    suffix : string
        optional file suffix
    dir : string
        optional directory to save temporary file in
    """
    tf = tmp.NamedTemporaryFile(delete=False, suffix=suffix, dir=dir)
    tf.file.close()
    try:
        yield tf.name
    finally:
        try:
            os.remove(tf.name)
        except OSError as e:
            if e.errno == 2:
                pass
            else:
                raise

@contextmanager
def open_atomic(filepath, *args, **kwargs):
    """ Open temporary file object that atomically moves to destination upon exiting.
    Allows reading and writing to and from the same filename.

    The file will not be moved to destination in case of an exception.

    Parameters
    ----------
    filepath : string
        the file path to be opened
    fsync : bool
        whether to force write the file to disk
    *args : mixed
        Any valid arguments for :code:`open`
    **kwargs : mixed
        Any valid keyword arguments for :code:`open`
    """
    fsync = kwargs.pop('fsync', False)  # pop, so it isn't passed on to open()
    with tempfile(dir=os.path.dirname(os.path.abspath(filepath))) as tmppath:
        with open(tmppath, *args, **kwargs) as file:
            try:
                yield file
            finally:
                if fsync:
                    file.flush()
                    os.fsync(file.fileno())
        os.rename(tmppath, filepath)

Since it is very easy to mess up the details, I recommend using a tiny library for that. The advantage of a library is that it takes care of all these nitty-gritty details, and it is being reviewed and improved by a community. One such library is python-atomicwrites by untitaker, which even has proper Windows support. From the README:

from atomicwrites import atomic_write

with atomic_write('foo.txt', overwrite=True) as f:
    f.write('Hello world.')
    # "foo.txt" doesn't exist yet.
# Now it does.

Installation via pip:

pip install atomicwrites

I'm using this code to atomically replace/write a file:

import os
from contextlib import contextmanager

@contextmanager
def atomic_write(filepath, binary=False, fsync=False):
    """ Writeable file object that atomically updates a file (using a temporary file).

    :param filepath: the file path to be opened
    :param binary: whether to open the file in a binary mode instead of textual
    :param fsync: whether to force write the file to disk
    """
    tmppath = filepath + '~'
    while os.path.isfile(tmppath):
        tmppath += '~'
    try:
        with open(tmppath, 'wb' if binary else 'w') as file:
            yield file
            if fsync:
                file.flush()
                os.fsync(file.fileno())
        os.rename(tmppath, filepath)
    finally:
        try:
            os.remove(tmppath)
        except (IOError, OSError):
            pass

Usage:

with atomic_write('path/to/file') as f:
    f.write("allons-y!\n")

It's based on this recipe.
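On Python 3.3 and newer, the same write-then-rename pattern can use os.replace, which (unlike os.rename) also atomically overwrites an existing destination on Windows. A minimal sketch; the function name and error handling here are mine, not taken from any of the answers above:

```python
import os
import tempfile


def atomic_write_text(path, text):
    # Write to a temp file in the destination's directory (same filesystem),
    # then atomically replace the destination with os.replace (Python 3.3+).
    dirpath = os.path.dirname(os.path.abspath(path))
    fd, tmppath = tempfile.mkstemp(dir=dirpath)
    try:
        with os.fdopen(fd, "w") as f:
            f.write(text)
            f.flush()
            os.fsync(f.fileno())  # make sure the data is on disk before renaming
        os.replace(tmppath, path)
    except BaseException:
        os.unlink(tmppath)  # clean up the temp file on any failure
        raise
```

Note that tempfile.mkstemp creates the file with restrictive (0600) permissions, so a full implementation might also copy the destination's mode before replacing.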
Just link the file after you're done:

with tempfile.NamedTemporaryFile(mode="w") as f:
    f.write(...)
    os.link(f.name, final_filename)

If you want to get fancy:

@contextlib.contextmanager
def open_write_atomic(filename: str, **kwargs):
    kwargs['mode'] = 'w'
    with tempfile.NamedTemporaryFile(**kwargs) as f:
        yield f
        os.link(f.name, filename)

Answers on this page are quite old, there are now libraries that do this for you. In particular safer is a library designed to help prevent programmer error from corrupting files, socket connections, or generalized streams. It's quite flexible and amongst other things it has the option to use either memory or temporary files; you can even keep the temp files in case of failure. Their example is just what you want:

# dangerous
with open(filename, 'w') as fp:
    json.dump(data, fp)
# If an exception is raised, the file is empty or partly written

# safer
with safer.open(filename, 'w') as fp:
    json.dump(data, fp)
# If an exception is raised, the file is unchanged.

It's in PyPI, just install it using pip install --user safer or get the latest at

Atomic solution for Windows to loop over a folder and rename files. Tested and automatable; you can increase the randomness to minimize the risk of ever generating the same file name twice. The random library is used: the random.choice method for letter combinations, and str(random.randrange(31, 9999999, 2)) for the digits. You can vary the digit range as you want.

import os
import random

path = "C:\\Users\\ANTRAS\\Desktop\\NUOTRAUKA\\"

def renamefiles():
    files = os.listdir(path)
    i = 1
    for file in files:
        os.rename(os.path.join(path, file),
                  os.path.join(path, random.choice('ABCDEFGHIJKL') + str(i)
                               + str(random.randrange(31, 9999999, 2)) + '.jpg'))
        i = i + 1

for x in range(30):
    renamefiles()
https://techstalking.com/programming/python/how-to-make-file-creation-an-atomic-operation/
DI-hard

Simple, predictable dependency injection

Features:
- clear separation between wiring and application
- minimal lock-in
- control over component lifetimes
- module hierarchy with private and public components

Disclaimer: Despite (now) being written in Typescript, the dependency injection is not typesafe.

Example

const di = require("di-hard")
const container = di.createContainer("example")
container.registerFactory("myService", myServiceFactory)
container.registerFactory("myDependency", myDependencyFactory)
const myService = container.resolve("myService")
console.log(myService.getInfo()) // "(service - dependencies: (dependency))"

Terminology

- component - a reusable piece of your application
- registration - something which can be registered with the DI container (an instance, factory or module)
- instance - a static value, or a value created by a factory
- factory - a function which creates a (potentially cachable) instance
- module - a namespaced set of registrations
- register - map an ID to a registration
- resolution - the process of mapping an ID to an instance (either externally or through injection)
- injection - the process of automatically resolving dependencies for a factory
- resolver - object which performs resolution (a Proxy)
- container - object which contains registrations and cached instances
- lifetime - how long a cached instance is expected to live
- scope - which registrations are able to be resolved from a given context
- visibility - whether a registration is private (visible only to other registrations in the same module) or public (shares parent module's visibility)

Concepts

The purpose of this library is to enable easy creation of your application's components, without having to worry about those components' dependencies.

Components

Components are the individual pieces of your application. They can be functions, objects, classes, strings... whatever you want. Below is an example of a component with two dependencies.
Here, we have a factory function, which takes an object with named dependencies, and returns an instance of the component (in this case, an object). Notice that there is no direct dependency on the DI library here. The only requirement is that factory functions take their dependencies as named arguments (an object). This makes the code portable, easy to test, and possible to wire together manually.

// my-component.js
const factory = ({dep1, dep2}) => {
  const instance = () => {
    return dep1 + dep2
  }
  return instance
}

Registration, resolution and injection

When we have enough components in our application, and the dependency tree starts to get deeper (e.g. A depends on B which depends on C etc.), wiring these components together can become tedious. What we can do instead is register all our component factories with a 'container', and let it be responsible for creating our components (and their dependencies) for us. Here is how we would register the component factories for the above example, and manually resolve (create) an instance of the component, with its dependencies injected.

const di = require("di-hard")
const container = di.createContainer("example")
// register all component factories
container.registerFactory("myComponent", myComponentFactory)
container.registerFactory("dep1", dep1Factory)
container.registerFactory("dep2", dep2Factory)
// resolve an instance of the component
const myInstance = container.resolve("myComponent")
myInstance()

Here we've called container.resolve() directly to manually resolve an instance of myComponent. But how do dep1 and dep2 get resolved? Let's have a look at myComponent again, but slightly re-write it to illustrate what's happening.

// my-component.js
const factory = (resolver) => {
  const dep1 = resolver.dep1 // resolved when this property is accessed
  const instance = () => {
    const dep2 = resolver.dep2 // lazily resolve dep2
    return dep1 + dep2
  }
  return instance
}

Here we can see that what actually gets injected is a 'resolver' object. Accessing properties on the resolver is what triggers resolution of dependencies.

Lifetimes

A lifetime is a statement about how long a container caches a reference to an instance created by a factory.
There are two lifetimes currently supported:

- Transient
  - no reference cached
  - one instance created per resolution
- Registration
  - reference cached in the container in which the factory was registered
  - one instance per registration container

Transient is the default lifetime. To explicitly set a lifetime, use the registerFactory function like so:

container.registerFactory("myService", myServiceFactory, {lifetime: di.Lifetime.Registration})

If you have only stateless components, there is very little difference between the two. Generally, you can think of Transient as being for stateless components, and Registration for stateful, though Registration can also be used to avoid creating multiple instances unnecessarily.

Child containers

Sometimes you don't want a component to live for the entire life of your application, but also don't want to create a new instance every time it's resolved. For example, in an HTTP server, you might want to create a component which holds some data associated with a request. For this, we can use child containers. Create one like so:

const di = require("di-hard")
const express = require("express")
const app = express()
const container = di.createContainer("app")

Now, each request can register its own components and have its own instances. It is expected that a child container will have a shorter lifetime than its parent. Here, for example, the request container will live only as long as the request, but the application container will live for the lifetime of the application. With that in mind, there are two simple rules which dictate how instances get resolved when using child containers:

- instances get resolved in the container they were registered in
- a container can only resolve dependencies from itself or a parent container

This means nothing can depend on something with a shorter lifetime. For example, no components in the application container could depend on a component in the request container. However, components in the request container can depend on instances from the application container.

Modules

So far we've registered all components into the same namespace.
As your application grows, you might want to group related components together into namespaces to avoid ID clashes. For this, we use modules. It's possible to namespace groups of components by creating a hierarchy of modules.

const factory = ({
  // use nested object deconstruction to inject components from modules
  my_module: {dependency},
}) => {
  // ...
}

const dependencyDefinition = "dependencyInstance"
const container = di.createContainer("app")

// use shorthand dot-notation to externally resolve components inside modules
const instance = container.resolve("my_module.dependency")

Visibility

You can control which components are visible to other components by using visibility modifiers. Anything registered with the container (a component or a module) can be either Public or Private. Private components are only visible to other components in the same module. Public components share their parent module's visibility. Visibility defaults to Private. The top-level (default) module is Public (otherwise you wouldn't be able to resolve anything!). In the example below, it would be possible to inject my_module.public_api into my_component, and my_module.internal into my_module.public_api, but trying to inject my_module.internal into my_component would throw an error.

const container = di.createContainer("app")

API

const di = require("di-hard")

DI

di.createContainer(containerName: string) -> Container
di.Lifetime.Transient
di.Lifetime.Registration
di.Visibility.Public
di.Visibility.Private

Container

Services and values can be registered with a container. It is responsible for resolving an ID to an instance.
container.registerFactory(id: string, factory: function, options: {lifetime, visibility}) -> registrationApi
container.registerValue(id: string, value: any, options: {visibility}) -> registrationApi
container.registerSubmodule(id: string, options: {visibility}) -> registrationApi
container.resolve(id: string) -> componentInstance
container.child(containerName: string) -> container

The registerX functions are chainable, for example:

container
  .registerValue("id1", "value1")
  .registerFactory("id2", factory2)

Factory functions

Factory functions are used to create the component instances.

factory(resolver) -> componentInstance

Resolver

The resolver gets injected into each factory function. It is a Proxy which resolves component instances on property lookup.

resolver[id] -> componentInstance

Further reading

Inversion of Control Containers and the Dependency Injection pattern - Martin Fowler
Dependency Injection in Node.js - 2016 edition
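As a rough illustration of the resolver concept described above, a container with lazy, Proxy-based resolution can be sketched in a few lines. This is not di-hard's actual implementation (it omits modules, lifetimes and visibility entirely); all names here are mine:

```javascript
// Minimal sketch: a container whose resolver is a Proxy that invokes the
// registered factory the first time an id is accessed, then caches it.
function createContainer() {
  const factories = {};
  const cache = {};
  const resolver = new Proxy({}, {
    get(_, id) {
      if (!(id in cache)) {
        // inject the resolver itself, so factories can lazily pull dependencies
        cache[id] = factories[id](resolver);
      }
      return cache[id];
    },
  });
  return {
    registerFactory(id, factory) { factories[id] = factory; },
    resolve(id) { return resolver[id]; },
  };
}
```

Usage of the sketch: registering `dep` and a `sum` factory that reads `resolver.dep`, then resolving `sum`, triggers both factories exactly once.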
https://www.npmjs.com/package/di-hard
Borislav Hadzhiev

Last updated: Mar 6, 2022

To re-export values from another file in TypeScript, make sure to export the named exports as export {myFunction, myConstant} from './another-file' and the default export as export {default} from './another-file'. The values can be imported from the file that re-exported them.

Here is an example of a file that has 2 named exports and a default export.

// 👇️ named export
export function getEmployee() {
  return { id: 1, salary: 100 };
}

// 👇️ named export
export const greeting = 'hello world';

// 👇️ default export
export default function sum(a: number, b: number) {
  return a + b;
}

Here is how you would re-export the exported members of another-file.ts from a file called index.ts.

export { getEmployee, greeting, default } from './another-file';

The example above directly re-exports the 2 named exports and the default export. Note that we can't use getEmployee, greeting and the default export in the index.ts file, because we haven't imported them; we directly re-exported them. If you have to use the values in the file, you would also have to import them.

// 👇️ import (only if you need to use them in index.ts)
import sum, { getEmployee, greeting } from './another-file';

// 👇️ re-export
export { getEmployee, greeting, default } from './another-file';

console.log(sum(10, 15));
console.log(getEmployee());
console.log(greeting);

The syntax for re-exporting members of another module is:

// 👇️ re-export NAMED exports
export { getEmployee, greeting } from './another-file';

// 👇️ re-export DEFAULT export
export { default } from './another-file';

The two lines from the example above can be combined into a single line if you're re-exporting members of the same file.

// 👇️ re-export NAMED exports
export { getEmployee, greeting } from './first-file';

// 👇️ re-export default export
export { default } from './second-file';

You could then import the re-exported members from the same module.
import sum, { getEmployee, greeting } from './index';

console.log(sum(100, 50));
console.log(getEmployee());
console.log(greeting);

The pattern you often see is re-exporting members of different files from a file called index.ts. The name index.ts is important, because you don't have to explicitly specify the name index when importing. For example, assuming that third-file.ts and index.ts are located in the same directory, I could import from index.ts like so:

import sum, { getEmployee, greeting } from './'; // 👈️ implicit

console.log(sum(100, 50));
console.log(getEmployee());
console.log(greeting);

This is useful when you group your code in directories with descriptive names, because you would be importing from ./utils, rather than ./utils/index or ./utils/nested1, ./utils/nested2, etc. Many of the files you use might make use of multiple utility functions that have been extracted into separate files, and you might not want to have 5 lines of imports just for utility functions or constants; this is when re-exporting from an index.ts file comes in handy.
https://bobbyhadz.com/blog/typescript-export-from-another-file
Stokes Equations¶

A simple example of a saddle-point system; we will use the Stokes equations to demonstrate some of the ways we can do field-splitting with matrix-free operators. We set up the problem as a lid-driven cavity. As ever, we import firedrake and define a mesh.:

from firedrake import *

N = 64

M = UnitSquareMesh(N, N)

V = VectorFunctionSpace(M, "CG", 2)
W = FunctionSpace(M, "CG", 1)
Z = V * W

u, p = TrialFunctions(Z)
v, q = TestFunctions(Z)

a = (inner(grad(u), grad(v)) - inner(p, div(v)) + inner(div(u), q))*dx

L = inner(Constant((0, 0)), v) * dx

The boundary conditions are defined on the velocity space. Zero Dirichlet conditions on the bottom and side walls, a constant \(u = (1, 0)\) condition on the lid.:

bcs = [DirichletBC(Z.sub(0), Constant((1, 0)), (4,)),
       DirichletBC(Z.sub(0), Constant((0, 0)), (1, 2, 3))]

up = Function(Z)

Since we do not specify boundary conditions on the pressure space, it is only defined up to a constant. We will remove this component of the solution in the solver by providing the appropriate nullspace.:

nullspace = MixedVectorSpaceBasis(
    Z, [Z.sub(0), VectorSpaceBasis(constant=True)])

First up, we will solve the problem directly. For this to work, the sparse direct solver MUMPS must be installed. Hence this solve is wrapped in a try/except block, so that an error is not raised in the case that it is not; to do this we must import PETSc:

from firedrake.petsc import PETSc

To factor the matrix from this mixed system, we must specify a mat_type of aij to the solve call.:

try:
    solve(a == L, up, bcs=bcs, nullspace=nullspace,
          solver_parameters={"ksp_type": "gmres",
                             "mat_type": "aij",
                             "pc_type": "lu",
                             "pc_factor_mat_solver_type": "mumps"})
except PETSc.Error as e:
    if e.ierr == 92:
        warning("MUMPS not installed, skipping direct solve")
    else:
        raise e

Now we'll use a Schur complement preconditioner using unassembled matrices. We can do all of this purely by changing the solver options.
We'll define the parameters separately to run through the options.:

parameters = {

First up we select the unassembled matrix type:

    "mat_type": "matfree",

Now we configure the solver, using GMRES using the diagonal part of the Schur complement factorisation to approximate the inverse. We'll also monitor the convergence of the residual, and ask PETSc to view the configured Krylov solver object.:

    "ksp_type": "gmres",
    "ksp_monitor_true_residual": None,
    "ksp_view": None,
    "pc_type": "fieldsplit",
    "pc_fieldsplit_type": "schur",
    "pc_fieldsplit_schur_fact_type": "diag",

Next we configure the solvers for the blocks. For the velocity block, we use an AssembledPC and approximate the inverse of the vector laplacian using a single multigrid V-cycle.:

    "fieldsplit_0_ksp_type": "preonly",
    "fieldsplit_0_pc_type": "python",
    "fieldsplit_0_pc_python_type": "firedrake.AssembledPC",
    "fieldsplit_0_assembled_pc_type": "hypre",

For the Schur complement block, we approximate the inverse of the Schur complement with a pressure mass inverse. For constant viscosity this works well. For variable, but low-contrast viscosity, one should use a viscosity-weighted mass-matrix. This is achievable by passing a dictionary with "mu" associated with the viscosity into solve. The MassInvPC will choose a default value of 1.0 if not set. For high viscosity contrasts, this preconditioner is mesh-dependent and should be replaced by some form of approximate commutator.:

    "fieldsplit_1_ksp_type": "preonly",
    "fieldsplit_1_pc_type": "python",
    "fieldsplit_1_pc_python_type": "firedrake.MassInvPC",

The mass inverse is dense, and therefore approximated with a Krylov iteration, which we configure now:

    "fieldsplit_1_Mp_ksp_type": "preonly",
    "fieldsplit_1_Mp_pc_type": "ilu"
}

Having set up the parameters, we can now go ahead and solve the problem.:

up.assign(0)
solve(a == L, up, bcs=bcs, nullspace=nullspace, solver_parameters=parameters)

Last, but not least, we'll write the solution to a file for later visualisation.
We split the function into its velocity and pressure parts and give them reasonable names, then write them to a paraview file.:

u, p = up.split()
u.rename("Velocity")
p.rename("Pressure")

File("stokes.pvd").write(u, p)

By default, the mass matrix is assembled in the MassInvPC preconditioner; however, this can be controlled using a mat_type argument. To do this, we must specify the mat_type inside the preconditioner. We can use the previous set of parameters and just modify them slightly.

parameters["fieldsplit_1_Mp_mat_type"] = "matfree"

With an unassembled matrix, of course, we are not able to use standard preconditioners, so for this example, we will just invert the mass matrix using unpreconditioned conjugate gradients.

parameters["fieldsplit_1_Mp_ksp_type"] = "cg"
parameters["fieldsplit_1_Mp_pc_type"] = "none"

up.assign(0)
solve(a == L, up, bcs=bcs, nullspace=nullspace, solver_parameters=parameters)

A runnable python script implementing this demo file is available here.
https://www.firedrakeproject.org/demos/stokes.py.html
#include <iostream>
#include <functional>

using namespace std;

function<int(int, int)> getTheLambda() {
    int x = 7;
    // will return an undefined value because x is going to be removed from
    // the stack when this function exits.
    return [&x](int a, int b) { return x + a + b; };
}

int main(int argc, char* argv[]) {
    auto m = [](int y) { cout << "lambda value: " << y << endl; };
    // call it
    m(3);

    auto l = getTheLambda();
    cout << "Lambda Result: " << l(1, 2) << endl;
}
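The dangling-reference problem in getTheLambda goes away if the lambda captures x by value instead of by reference, since the closure then owns its own copy of x. A small sketch; the function name here is mine:

```cpp
#include <functional>

// Capturing x by value copies it into the closure, so the returned
// lambda remains valid after makeAdder's stack frame is gone.
std::function<int(int, int)> makeAdder() {
    int x = 7;
    return [x](int a, int b) { return x + a + b; };
}
```

Unlike the by-reference version, calling makeAdder()(1, 2) here reliably yields 10.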
https://allmybrain.com/2013/03/06/fun-with-c11-lambdas/
netCDF4 via h5py Project description A Python interface for the netCDF4 file-format that reads and writes local or remote HDF5 files directly via h5py or h5pyd, without relying on the Unidata netCDF library. Why h5netcdf? It has one less binary dependency (netCDF C). If you already have h5py installed, reading netCDF4 with h5netcdf may be much easier than installing netCDF4-Python. We’ve seen occasional reports of better performance with h5py than netCDF4-python, though in many cases performance is identical. For one workflow, h5netcdf was reported to be almost 4x faster than netCDF4-python. Anecdotally, HDF5 users seem to be unexcited about switching to netCDF – hopefully this will convince them that netCDF4 is actually quite sane! Finally, side-stepping the netCDF C library (and Cython bindings to it) gives us an easier way to identify the source of performance issues and bugs in the netCDF libraries/specification. Install Ensure you have a recent version of h5py installed (I recommend using conda or the community effort conda-forge). At least version 2.1 is required (for dimension scales); versions 2.3 and newer have been verified to work, though some tests only pass on h5py 2.6. Then: $ pip install h5netcdf Or if you are already using conda: $ conda install h5netcdf Usage h5netcdf has two APIs, a new API and a legacy API. Both interfaces currently reproduce most of the features of the netCDF interface, with the notable exception of support for operations that rename or delete existing objects. We simply haven’t gotten around to implementing this yet. Patches would be very welcome. New API The new API supports direct hierarchical access of variables and groups. Its design is an adaptation of h5py to the netCDF data model. 
For example:

import h5netcdf
import numpy as np

with h5netcdf.File('mydata.nc', 'w') as f:
    # set dimensions with a dictionary
    f.dimensions = {'x': 5}
    # and update them with a dict-like interface
    # f.dimensions['x'] = 5
    # f.dimensions.update({'x': 5})

    v = f.create_variable('hello', ('x',), float)
    v[:] = np.ones(5)

    # you don't need to create groups first
    # you also don't need to create dimensions first if you supply data
    # with the new variable
    v = f.create_variable('/grouped/data', ('y',), data=np.arange(10))

    # access and modify attributes with a dict-like interface
    v.attrs['foo'] = 'bar'

    # you can access variables and groups directly using hierarchical
    # keys like h5py
    print(f['/grouped/data'])

    # add an unlimited dimension
    f.dimensions['z'] = None
    # explicitly resize a dimension and all variables using it
    f.resize_dimension('z', 3)

Notes:

- Automatic resizing of unlimited dimensions with array indexing is not available. Dimensions need to be manually resized with Group.resize_dimension(dimension, size).
- Arrays are returned padded with fillvalue (taken from the underlying hdf5 dataset) up to the current size of the variable's dimensions. The behaviour is equivalent to netCDF4-python's Dataset.set_auto_mask(False).

Legacy API

The legacy API is designed for compatibility with netCDF4-python. To use it, import h5netcdf.legacyapi:

import h5netcdf.legacyapi as netCDF4
# everything here would also work with this instead:
# import netCDF4
import numpy as np

with netCDF4.Dataset('mydata.nc', 'w') as ds:
    ds.createDimension('x', 5)
    v = ds.createVariable('hello', float, ('x',))
    v[:] = np.ones(5)

    g = ds.createGroup('grouped')
    g.createDimension('y', 10)
    g.createVariable('data', 'i8', ('y',))
    v = g['data']
    v[:] = np.arange(10)
    v.foo = 'bar'

    print(ds.groups['grouped'].variables['data'])

The legacy API is designed to be easy to try-out for netCDF4-python users, but it is not an exact match.
Here is an incomplete list of functionality we don't include:

- Utility functions chartostring, num2date, etc., that are not directly necessary for writing netCDF files.
- h5netcdf variables do not support automatic masking or scaling (e.g., of values matching the _FillValue attribute). We prefer to leave this functionality to client libraries (e.g., xarray), which can implement their exact desired scaling behavior. Nevertheless, arrays are returned padded with fillvalue (taken from the underlying hdf5 dataset) up to the current size of the variable's dimensions. The behaviour is equivalent to netCDF4-python's Dataset.set_auto_mask(False).

Invalid netCDF files

h5py implements some features that do not (yet) result in valid netCDF files:

- Data types:
  - Booleans
  - Complex values
  - Non-string variable length types
  - Enum types
  - Reference types
- Arbitrary filters:
  - Scale-offset filters

By default [1], h5netcdf will not allow writing files using any of these features, as files with such features are not readable by other netCDF tools. However, these are still valid HDF5 files. If you don't care about netCDF compatibility, you can use these features by setting invalid_netcdf=True when creating a file:

# avoid the .nc extension for non-netcdf files
f = h5netcdf.File('mydata.h5', invalid_netcdf=True)
...

# works with the legacy API, too, though compression options are not exposed
ds = h5netcdf.legacyapi.Dataset('mydata.h5', invalid_netcdf=True)
...

In such cases the _NCProperties attribute will not be saved to the file, or it will be removed from an existing file. A warning will be issued if the file has a .nc extension.

Footnotes

Decoding variable length strings

h5py 3.0 introduced new behavior for handling variable length strings. Instead of being automatically decoded with UTF-8 into NumPy arrays of str, they are returned as arrays of bytes. The legacy API preserves the old behavior of h5py (which matches netCDF4), and automatically decodes strings. The new API matches h5py behavior.
Explicitly set decode_vlen_strings=True in the h5netcdf.File constructor to opt-in to automatic decoding.

Datasets with missing dimension scales

By default [2] h5netcdf raises a ValueError if variables with no dimension scale associated with one of their axes are accessed. You can set phony_dims='sort' when opening a file to let h5netcdf invent phony dimensions according to netCDF behaviour.

# mimic netCDF-behaviour for non-netcdf files
f = h5netcdf.File('mydata.h5', mode='r', phony_dims='sort')
...

Note that this iterates once over the whole group hierarchy, which affects performance if you rely on lazy group access. You can set phony_dims='access' instead to defer phony dimension creation to group access time. The created phony dimension naming will differ from netCDF behaviour.

f = h5netcdf.File('mydata.h5', mode='r', phony_dims='access')
...

Footnotes

Track Order

In h5netcdf version 0.12.0 and earlier, order tracking was disabled in HDF5 files. As this is a requirement for the current netCDF4 standard, it was enabled without deprecation as of version 0.13.0 [*]. However, in version 0.13.1 this was reverted due to an upstream bug in h5py, a core dependency of h5netcdf. Datasets created with h5netcdf version 0.12.0 that are opened with newer versions of h5netcdf will continue to have order tracking disabled.
https://pypi.org/project/h5netcdf/
CC-MAIN-2022-40
en
refinedweb
All the code for the challenges I am doing is here: leetcode-jan-2021

Just a quick note to say I am doing this series in the hope that the habit of writing about the problems will help me finish the 30 days. I am also improving my Python knowledge while doing this, so I am sorry if the code looks terrible! I limit my time each day, so the solutions are just my best effort and not the best (or even particularly good) solutions.

The Problem

You are given an array of distinct integers arr and an array of integer arrays pieces, where the integers in pieces are distinct. Your goal is to form arr by concatenating the arrays in pieces in any order. However, you are not allowed to reorder the integers in each array pieces[i]. Return true if it is possible to form the array arr from pieces. Otherwise, return false.

My Tests

```python
import pytest

from .Day1 import Solution

s = Solution()


@pytest.mark.parametrize(
    "arr,pieces", [([49, 18, 16], [[16, 18, 49]]), ([1, 3, 5, 7], [[2, 4, 6, 8]])]
)
def test_cannot_form_array(arr, pieces):
    assert not s.canFormArray(arr, pieces)


@pytest.mark.parametrize(
    "arr,pieces",
    [
        ([], []),
        ([85], [[85]]),
        ([15, 88], [[88], [15]]),
        ([91, 4, 64, 78], [[78], [4, 64], [91]]),
    ],
)
def test_can_form_array(arr, pieces):
    assert s.canFormArray(arr, pieces)
```

My Solution

```python
from typing import List


class Solution:
    def canFormArray(self, arr: List[int], pieces: List[List[int]]) -> bool:
        index = 0
        new_index = index
        pieces_m = pieces
        while index < len(arr):
            curr = arr[index]
            for a in pieces_m:
                if a[0] == curr and a == arr[index : index + len(a)]:
                    new_index += len(a)
                    pieces_m.remove(a)
            if new_index == index:
                return False
            else:
                index = new_index
        return True
```

Analysis

My Commentary

The performance wasn't terrible, I guess, but memory usage was pretty bad. The way I thought about it was: loop over the array and look to see if any of the pieces start with that value. If one does, check the next segment for a match. If there's a match, bump the index to the end of the segment.
If there is ever no match, just short-circuit out. I am a bit new to Python, but I think one place where I may have paid a price in memory was arr[index : index + len(a)]. I need to research and see if that creates a new list each time. I stopped at the easiest solution that day, as I actually ended up doing two solutions (the bonus one for that week also) and I went over my allocated time for the evening. Thinking about it after writing here, like most other problems, I could have made this faster with a dict (HashMap). Might give that a shot after the challenge is over.
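For the record, the dict (HashMap) idea mentioned above could look something like the sketch below. This is not from the original post (the function name and structure are my own): it maps each piece's first element to the whole piece, so finding the candidate piece is a constant-time lookup instead of a scan over all pieces.

```python
from typing import List


def can_form_array(arr: List[int], pieces: List[List[int]]) -> bool:
    # All integers are distinct, so a piece is uniquely identified
    # by its first element.
    starts = {piece[0]: piece for piece in pieces}
    i = 0
    while i < len(arr):
        piece = starts.get(arr[i])
        # Either no piece starts with arr[i], or the piece does not
        # match the next segment of arr in order.
        if piece is None or arr[i:i + len(piece)] != piece:
            return False
        i += len(piece)
    return True
```

Note that the slice arr[i:i + len(piece)] still copies a sublist each time; comparing element by element would avoid even that.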
https://dev.to/ruarfff/day1-check-array-formation-through-concatenation-53fd
In the code below, inside MyComponent I render the Home component and also pass the prop count to it. One can then use MyComponent's count inside Home. The Home component is independent of MyComponent; any component can pass props to Home.

```jsx
import React, { useState } from "react";

export default function MyComponent() {
  const [count, setCount] = useState(0);
  return (
    <div>
      <Home count={count} /> {/* passing props */}
    </div>
  );
}
```

The snippet below uses the passed props; one can either destructure the object or accept the single props object.

```jsx
import React from "react";

export default function Home({ count }) {
  return <div>{count}</div>;
}
```

or

```jsx
import React from "react";

export default function Home(props) {
  return <div>{props.count}</div>;
}
```

The Home component will only display 0 on screen, since I'm not updating the value of count.
https://dev.to/aasthapandey/passing-props-to-component-in-react-5cja
NAME
    xcb_get_selection_owner - Gets the owner of a selection

SYNOPSIS
    #include <xcb/xproto.h>

Request function
    xcb_get_selection_owner_cookie_t
    xcb_get_selection_owner(xcb_connection_t *conn, xcb_atom_t selection);

Reply datastructure
    typedef struct xcb_get_selection_owner_reply_t {
        uint8_t      response_type;
        uint8_t      pad0;
        uint16_t     sequence;
        uint32_t     length;
        xcb_window_t owner;
    } xcb_get_selection_owner_reply_t;

Reply function
    xcb_get_selection_owner_reply_t *
    xcb_get_selection_owner_reply(xcb_connection_t *conn,
                                  xcb_get_selection_owner_cookie_t cookie,
                                  xcb_generic_error_t **e);

REQUEST ARGUMENTS
    conn
        The XCB connection.
    selection
        The selection whose owner is requested.

REPLY FIELDS
    response_type
        The type of this reply, in this case XCB_GET_SELECTION_OWNER.
    owner
        The current selection owner window.

DESCRIPTION
    Gets the owner of the specified selection.

    TODO: briefly explain what a selection is.

RETURN VALUE
    Returns an xcb_get_selection_owner_cookie_t. Errors have to be handled
    when calling the reply function xcb_get_selection_owner_reply. If you
    want to handle errors in the event loop instead, use
    xcb_get_selection_owner_unchecked. See xcb-requests(3) for details.

ERRORS
    xcb_atom_error_t
        selection does not refer to a valid atom.

SEE ALSO
    xcb-requests(3), xcb_set_selection_owner(3)

AUTHOR
    Generated from xproto.xml. Contact xcb@lists.freedesktop.org for
    corrections and improvements.
https://manpages.debian.org/bullseye/libxcb-doc/xcb_get_selection_owner_unchecked.3.en.html
How can a grid plot be created with the Bokeh library in Python?

Bokeh is a Python package that helps in data visualization. It is an open-source project. Bokeh renders its plots using HTML and JavaScript, which makes it useful when working with web-based dashboards. Its dependencies include:

- Numpy
- Pillow
- Jinja2
- Packaging
- Pyyaml
- Six
- Tornado
- Python-dateutil

Installation of Bokeh on the Windows command prompt:

    pip3 install bokeh

Installation of Bokeh on the Anaconda prompt:

    conda install bokeh

Example

    import numpy as np
    from bokeh.plotting import figure, output_file, show

    N = 420
    x = np.linspace(0, 14, N)
    y = np.linspace(0, 14, N)
    x1, y1 = np.meshgrid(x, y)
    d = np.sin(x1) * np.cos(y1)
    p = figure(tooltips=[("x", "$x"), ("y", "$y"), ("value", "@image")])
    p.x_range.range_padding = p.y_range.range_padding = 0
    p.image(image=[d], x=0, y=0, dw=11, dh=11, palette="Spectral11", level="image")
    p.grid.grid_line_width = 0.6
    output_file("gridplot.html", title="grid plot example")
    show(p)

Output

[An image plot rendered to gridplot.html, with grid lines drawn over it.]

Explanation

- The required packages are imported and aliased.
- The data is defined using the NumPy library.
- The 'figure' function is called, with hover tooltips configured for the x and y coordinates and the image value.
- The 'image' function present in Bokeh is called, along with the data.
- The 'output_file' function is called to specify the name of the HTML file that will be generated.
- The 'show' function is used to display the plot.
https://www.tutorialspoint.com/how-can-grid-plot-in-bokeh-library-be-created-with-python
Hi.

Edit: I am using a shell sort procedure, and duplicates' ranks are arbitrarily chosen, based on which came first in the original array.

views: 1541 answers: 6

Well, there's a trivial n^2 solution. In Python:

    newArray = sorted(oldArray)
    blankArray = [0] * len(oldArray)

    for i in xrange(len(newArray)):
        dex = oldArray.index(newArray[i])
        blankArray[dex] = i

Depending on how large your list is, this may work. If your list is very long, you'll need to do some strange parallel-array sorting, which doesn't look like much fun and is a quick way to introduce extra bugs into your code. Also note that the above code assumes unique values in oldArray. If that's not the case, you'll need to do some post-processing to resolve tied values.

Since you're using C++, I would do it something like this. The SortIntPointers function can be any sort algorithm; the important part is that it sorts the array of pointers based on the ints they point to. Once that is done, you can go through the array of pointers and assign their sorted index, which will end up in the original position in the original array.

    int* intArray;  // set somewhere else
    int arrayLen;   // set somewhere else
    int** pintArray = new int*[arrayLen];
    for(int i = 0; i < arrayLen; ++i)
    {
        pintArray[i] = &intArray[i];
    }

    // This function sorts the pointers according to the values they
    // point to. In effect, it sorts intArray without losing the positional
    // information.
    SortIntPointers(pintArray, arrayLen);

    // Dereference the pointers and assign their sorted position.
    for(int i = 0; i < arrayLen; ++i)
    {
        *pintArray[i] = i;
    }

Hopefully that's clear enough.

Create a new array with increasing values from 0 to n-1 (where n is the length of the array you want to sort). Then sort the new array based on the values in the old array indexed by the values in the new array.
For example, if you use bubble sort (easy to explain), then instead of comparing the values in the new array, you compare the values in the old array at the position indexed by a value in the new array:

    function bubbleRank(A){
        var B = new Array();
        for(var i = 0; i < A.length; i++){
            B[i] = i;
        }
        do{
            var swapped = false;
            for(var i = 0; i < A.length - 1; i++){
                if(A[B[i]] > A[B[i+1]]){
                    var temp = B[i];
                    B[i] = B[i+1];
                    B[i+1] = temp;
                    swapped = true;
                }
            }
        }while(swapped);
        return B;
    }

Despite my best efforts, I haven't been able to implement a sort algorithm that sorts an array of pointers by the values they point to. Could someone please tell me what's going wrong? The current example won't compile, and I've changed it around so much that it doesn't really matter. I'd very much appreciate it if someone could help me fix this!

    void SortArray( int ** pArray, int ArrayLength )
    {
        int i, j, flag = 1;  // set flag to 1 to begin initial pass
        int * temp;          // holding variable orig with no *
        for(i = 1; (i <= ArrayLength) && flag; i++)
        {
            flag = 0;
            for (j = 0; j < (ArrayLength - 1); j++)
            {
                if (*pArray[j+1] > *pArray[j])  // ascending order simply changes to <
                {
                    &temp = &pArray[j];         // swap elements
                    &pArray[j] = &pArray[j+1];  // the problem lies somewhere in here
                    &pArray[j+1] = &temp;
                    flag = 1;                   // indicates that a swap occurred.
                }
            }
        }
    };

Thanks in advance.
OK, here is my attempt in C++:

    #include <iostream>
    #include <algorithm>

    struct mycomparison
    {
        bool operator() (int* lhs, int* rhs) { return (*lhs) < (*rhs); }
    };

    int main(int argc, char* argv[])
    {
        int myarray[] = {1, 3, 6, 2, 4, 9, 5, 12, 10};
        const size_t size = sizeof(myarray) / sizeof(myarray[0]);
        int* arrayofpointers[size];
        for(size_t i = 0; i < size; ++i)
        {
            arrayofpointers[i] = myarray + i;
        }
        std::sort(arrayofpointers, arrayofpointers + size, mycomparison());
        for(size_t i = 0; i < size; ++i)
        {
            *arrayofpointers[i] = i + 1;
        }
        for(size_t i = 0; i < size; ++i)
        {
            std::cout << myarray[i] << " ";
        }
        std::cout << std::endl;
        return 0;
    }

Parallel sorting of a vector using boost::lambda:

    std::vector<int> intVector;
    std::vector<int> rank;

    // set up values according to your example...
    intVector.push_back( 1 );
    intVector.push_back( 3 );
    intVector.push_back( 4 );
    intVector.push_back( 9 );
    intVector.push_back( 6 );
    for( int i = 0; i < intVector.size(); ++i )
    {
        rank.push_back( i );
    }

    using namespace boost::lambda;
    std::sort( rank.begin(), rank.end(),
               var( intVector )[ _1 ] < var( intVector )[ _2 ] );

    // ... and because you wanted to replace the values of the original with
    // their rank
    intVector = rank;

Note: I used vectors instead of arrays because it is clearer/easier; also, I used C-style indexing, which starts counting from 0, not 1.
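Pulling the answers above together: the common trick is to sort a list of indices (or pointers) by the values they refer to, then invert that permutation to get each element's rank. Here is a short Python sketch of that idea (the function name rank is mine, not from any answer in the thread); as in the question, ties get an arbitrary order:

```python
def rank(values):
    # Indices sorted by the value they refer to -- the "array of
    # pointers" idea from the C++ answers.
    order = sorted(range(len(values)), key=lambda i: values[i])
    # Invert the permutation: the element at position order[r]
    # receives rank r + 1 (1-based, like the C++ example).
    ranks = [0] * len(values)
    for r, i in enumerate(order):
        ranks[i] = r + 1
    return ranks
```

For example, rank([1, 3, 4, 9, 6]) gives [1, 2, 3, 5, 4]: the 9 is the largest value, so it receives rank 5 even though 6 comes after it in the array.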
http://ansaurus.com/question/13473-how-does-one-rank-an-array-sort-by-value-with-a-twist
Allows the user to select the number of human players. Validates input and returns a matching tuple of players.

src/t/i/tictascii-0.0.3/tictascii/tests.py

tictascii (Download)

    def get_participating_players(raw_input=raw_input):
        """
        Allows the user to select number of human players.
        Validates input and returns a matching tuple of players.
        """
        no_players = 0
        while no_players != 1 and no_players != 2:
            inp = raw_input("Single player or multiplayer? (1/2): ")
            try:
                no_players = int(inp)
            except ValueError:
                print "Invalid input - please try again"
        if no_players == 1:
            return (HumanPlayer('X'), ComputerPlayer('O'))
        else:
            return (HumanPlayer('X'), HumanPlayer('O'))

    from ticlib.players import HumanPlayer, ComputerPlayer
    from ticlib.base import Tournament, Board
    from ticli import get_participating_players
    import unittest

    def test_getting_1_players(self):
        player_collection = get_participating_players(one_raw_input)
        self.assertNotEqual(player_collection, None)

    def test_getting_2_players(self):
        player_collection = get_participating_players(two_raw_input)
        self.assertNotEqual(player_collection, None)

    def test_if_1_player_have_correct_marks(self):
        player_collection = get_participating_players(one_raw_input)

    def test_if_2_player_have_correct_marks(self):
        player_collection = get_participating_players(two_raw_input)
        player_1 = player_collection[0]
        player_2 = player_collection[1]
        self.assertEquals('X', player_1.marker)
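The tests above work because get_participating_players takes the input function as a parameter, so a fake can be injected in place of the real keyboard. A self-contained Python 3 sketch of that dependency-injection pattern (the names get_player_count and make_reader are mine, not from tictascii):

```python
def get_player_count(read=input):
    """Prompt until the user enters 1 or 2, then return that number."""
    while True:
        try:
            n = int(read("Single player or multiplayer? (1/2): "))
        except ValueError:
            continue  # non-numeric input: ask again
        if n in (1, 2):
            return n


def make_reader(*answers):
    """Build a fake input function that returns canned answers in order."""
    it = iter(answers)
    return lambda prompt: next(it)
```

For example, get_player_count(make_reader("x", "3", "2")) skips the two invalid entries and returns 2, with no real keyboard interaction needed.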
http://nullege.com/codes/search/ticli.get_participating_players
java.util.Scanner

java.util.Scanner is a class in the Java API used to create a Scanner object, an extremely versatile object that you can use to input alphanumeric characters from several input sources and convert them to binary data. The following three constructors allow you to scan data from the standard input object (System.in), from string objects (especially ones provided by an input dialog box) and from external text files.

Three Constructors of java.util.Scanner

    public Scanner( InputStream source )
    // Creates a scanner that scans values from the input stream.

    public Scanner( String source )
    // Creates a scanner that scans values from the string.

    public Scanner( File source )
    // Creates a scanner that scans values from the external file.

Here are some of the many features of Scanner objects.

Some Features of java.util.Scanner

- They can scan data of all the primitive data types (e.g. int, double, etc.) except char.
- They can scan strings.
- They easily scan multiple tokens from a single input source.
- They can change the characters used to delimit tokens in the input.
- They can specify the precise pattern of legal input.
- They generate an InputMismatchException if the input does not conform to the expected data type, allowing you to use try and catch constructs to validate the user's input.

Scanning from the Standard Input Object

The first Scanner constructor allows you to create a scanner for the standard input object System.in, which is an object of class InputStream.

Example

The following Java code reads two floating-point values from the standard input object. [Figure: the user's interaction at the prompt, and a picture of memory after the values are scanned into weight and height.]

    double weight, height;  // patient stats
    Scanner in = new Scanner( System.in );
    System.out.print( "Enter your weight and height: " );
    weight = in.nextDouble( );
    height = in.nextDouble( );

Scanning from a String

The second Scanner constructor allows you to create a scanner for a string, which you can use to scan data input from a dialog box.

Example

This Java code gets its input from an input dialog box:

    import javax.swing.JOptionPane;

    String prompt = "Enter your weight and height: ";
    String input = JOptionPane.showInputDialog( prompt );
    Scanner in = new Scanner( input );
    double weight = in.nextDouble( );
    double height = in.nextDouble( );

Scanning from an External Text File

The third Scanner constructor allows you to create a scanner that gets its input from a file of text residing on the external disk.

Example

Suppose you have an external file named data.txt that contains the values you wish to input. The following Java code opens the file as a scanner and inputs the two values. It assumes that the data file resides in the same folder as the application's class file.

    import java.io.File;

    Scanner in = new Scanner( new File( "data.txt" ) );
    double weight = in.nextDouble( );
    double height = in.nextDouble( );

Scanning Primitive Data and Strings

java.util.Scanner has methods to scan all the primitive data types except char. In addition, it has two methods that scan strings.

Scanner Methods that Scan Primitive Data

    public double nextDouble( )    // scans floating-point data
    public float nextFloat( )      // scans floating-point data
    public byte nextByte( )        // scans integer data
    public int nextInt( )          // scans integer data
    public long nextLong( )        // scans integer data
    public short nextShort( )      // scans integer data
    public boolean nextBoolean( )  // scans true and false

Scanner Methods that Scan Strings

    Method       What it Does
    next( )      Skips leading white space, collects alphanumeric characters
                 until encountering trailing white space, and returns the
                 characters collected as a string.
    nextLine( )  Collects all characters (visible and invisible) until
                 encountering the next newline character and returns the
                 characters collected as a string.

To scan a primitive value, call the appropriate method within the Scanner object and store the scanned value into a variable of the same primitive data type.

Example

    Scanner in = new Scanner( System.in );
    double salary = in.nextDouble( );
    boolean isMarried = in.nextBoolean( );
    int numberChildren = in.nextInt( );

To scan a string, you must declare a reference variable for the String object that is to be returned. You don't need to initialize it; the object is built and returned by the method.

Example

    Scanner in = new Scanner( System.in );
    System.out.print( "Enter name and address: " );
    String firstName = in.next( );
    String lastName = in.next( );
    String address = in.nextLine( );

Suppose that, in response to the prompt, the user enters the underlined text shown below:

    Enter name and address: Herb Green 50 Maple St.

The input is read in three parts. The first call to next reads Herb and stops because of the trailing space; the string reference is placed into firstName. The second call to next reads Green and places it into a string object referenced by lastName. Finally, the call to nextLine reads the entire rest of the line, including its embedded spaces; it scans characters until reaching the newline character and places the scanned string, 50 Maple St., into a string object referenced by address.

Specifying Token Delimiters

By default a Scanner object expects tokens to be delimited by white space (i.e. spaces, horizontal tabs and newline characters). You can change the delimiter by calling these Scanner methods:

    Scanner useDelimiter( String pattern )
    // Sets the scanner's token delimiter to the characters specified
    // by the 'pattern' string.

    Scanner reset( )
    // Restores the scanner's token delimiter to its default.

Although these methods return a Scanner object, it's the same as the altered scanner, so there's no reason to save it.

Example

The code below tells the scanner to use a single comma as a token delimiter. With that delimiter, the input string

    Herb Green,50 Maple St,Kansas City

would be divided into these three tokens: Herb Green, 50 Maple St and Kansas City.

    Scanner in = new Scanner( System.in );
    in.useDelimiter( "," );

A pattern matches a set of string literals. It looks very much like a string literal itself, but some of its characters are metacharacters that represent different combinations of characters. A full explanation of patterns is an entire topic in itself. Following are some simple, but useful, token-delimiting patterns.

    Pattern           What it Means
    "character"       character is any character literal (including an escape
                      sequence). The pattern matches that character.
    "[characters]"    characters is a sequence of characters; [ and ] are
                      metacharacters. The pattern matches any single character
                      in the sequence.
    "[characters]+"   + is a metacharacter. The pattern matches one or more
                      occurrences of each of the characters.

Examples

    This pattern   Matches
    " "            A single space
    "\n"           A single newline character
    ","            A single comma
    "[, ]"         Any single comma or space
    "[ \n]"        Any single space or newline character
    "[/-]"         Any single slash or hyphen
    "[.,;]"        Any single period, comma or semi-colon
    "[ ]+"         One or more spaces
    "[, ]+"        Any combination of one or more commas or spaces

Making the Newline Character Portable

The pattern "\n" stands for the newline character, which, unfortunately, is not standard across operating systems. For example, a new line is marked in Microsoft Windows with a combination of two Unicode characters, \u000d followed by \u000a, whereas MacOS uses a single character. This can create problems when trying to move code from one platform to another. To avoid this problem, use the static method lineSeparator found in the API class java.lang.System:

    static String lineSeparator( )
    // Returns the system-dependent line separator as a string.

You can use it by itself or use concatenation to embed it within a more complicated pattern.

Examples

    This pattern                            Matches
    System.lineSeparator( )                 A single newline character
    "[," + System.lineSeparator( ) + "]"    A single comma or a single newline
                                            character
    "[ " + System.lineSeparator( ) + "]+"   A combination of one or more spaces
                                            and newline characters

Specifying Input Patterns

You can also use a pattern, along with the following method, to specify the precise format of legal input:

    String next( String pattern )
    // Returns the next token if it matches the pattern.

If the next input token doesn't match the pattern, then the method throws an InputMismatchException.

Example

This Java code reads a vehicle's license plate number consisting of three digits followed by three upper-case letters and terminated by the system-dependent line separator.

    Scanner in = new Scanner( System.in );
    in.useDelimiter( System.lineSeparator( ) );
    String input = in.next( "[0-9]{3}[A-Z]{3}" );

For a more detailed description of Java patterns, you should refer to the API specification for class java.util.regex.Pattern.

Input Validation

Generally, Scanner methods throw an InputMismatchException if the next input token does not conform to what is expected.

Example

The following Java code, if the user were to enter the input $,5.50, would halt with the run-time error java.util.InputMismatchException.

    Scanner in = new Scanner( System.in );
    System.out.print( "Enter salary: " );
    double salary = in.nextDouble( );

This behavior enables you, by using Java's try and catch constructs, to control what happens in such situations so that your program responds to bad input in a user-friendly manner.

Example

The following Java code uses a loop to trap user input errors. The loop continues to cycle so long as the user's input is not a valid floating-point number. If the user enters valid input, the loop quits.

    import java.util.InputMismatchException;

    double salary;
    Scanner in = new Scanner( System.in );
    in.useDelimiter( System.lineSeparator( ) );
    boolean inputOK = false;                 // set flag to indicate bad input
    while ( !inputOK )                       // while flag indicates bad input
    {
        try
        {
            System.out.println( "Enter salary: " );
            salary = in.nextDouble( );       // may throw exception
            inputOK = true;                  // no exception thrown
        }
        catch ( InputMismatchException ex )  // exception thrown
        {
            System.out.println( "Bad input. Try again." );
            inputOK = false;
            in.nextLine( );                  // drop line separator in input
        }
    }

Exercises

In each problem below, the first two statements are correct; there is something wrong with the method call in the third statement. Circle what's wrong and explain. None of them is correct.

1. Scanner input = new Scanner( System.in );
   double x = input.next( );

2. Scanner input = new Scanner( System.in );
   int m = input.nextDouble( );

3. Scanner input = new Scanner( System.in );
   int m = nextInt( );

4. Scanner input = new Scanner( System.in );
   int m = input.nextInt( );

5. Scanner input = new Scanner( System.in );
   String line = input.next;

6. Scanner input = new Scanner( System.in );
   String line = input.nextLine( );

Enter the application given below into jGRASP, save it to a file and compile it. For each of the exercises that follow, run the application, enter the input given and explain why the program works or doesn't work.

    import java.util.Scanner;

    public class Scanning
    {
        public static void main( String [] args )
        {
            // declare data
            String name;    // child's first name
            double height;  // child's height
            int age;        // child's age

            // build Scanner
            Scanner input = new Scanner( System.in );

            // prompt for and read data
            System.out.println( "Name? Age? Height?" );
            name = input.next( );
            age = input.nextInt( );
            height = input.nextDouble( );

            // print results
            System.out.println( "Name: " + name );
            System.out.println( "Age: " + age );
            System.out.println( "Height: " + height );
        }
    }

7. Tom
8. Tom Jones
9. Tom
10. Tom

11. Modify application Scanning given above so that it will read a child's name that has embedded spaces.

12. Modify application Scanning given above so that it reads all three values from a single JOptionPane input dialog.

If you modified application Scanning according to the previous exercises, then retrieve the original shown above. Add the following after the line that constructs the Scanner:

    input.useDelimiter( "," );

For each of the following exercises, run the application, enter the input given and explain why the program works or doesn't work.

13. Tom Jones,,.75
14. Tom Jones,,.75,

Continuing with the Scanning application from exercises 13 and 14, change the useDelimiter line to:

    input.useDelimiter( "[," + System.lineSeparator( ) + "]" );

For the following exercise, run the application, enter the input given and explain why the program works or doesn't work.

15. Tom Jones,,.75

For each of the following exercises: (a) write the Java statements to scan the described values from the standard input object; (b) write the Java statements to input and scan the described values from a single JOptionPane input dialog. In each case, you may have to change the scanner delimiter to something appropriate for the situation.

16. Input an int value into a variable named score.
17. Input a person's height (a double), weight (a double) and age (an int). Declare appropriate variables for the values.
18. Input a string and store its reference into phone. The string has no embedded spaces.
19. Input a string and store its reference into title. The title may have embedded spaces and is followed by the newline character.
20. Input a company's department name (a string with no embedded spaces) and its budget (a double).
21. Input a company's department name (this time allowing the department name to have embedded spaces, such as INFORMATION SYSTEMS) and its budget (a double).
22. Input a person's name (a string that may have embedded spaces), height (a double), weight (a double) and age (an int). Declare appropriate variables for the values.
23. Scan a person's name (a string that may have embedded spaces), address (another string that may have embedded spaces), marital status (true or false) and number of tax exemptions (an int). Declare appropriate variables for the values.
24. Scan hours, minutes and seconds in the form hh:mm:ss. Read the values into three int variables.
Trail: Learning the Java Language Lesson: Language Basics Section: Variables 1 of 5 2/18/2013 10:49 AM Trail: Learning the Java Language Lesson: Language Basics Section: Variables Primitive Data Types The Java programming language is statically-typed, which means that all variables CS 106 Introduction to Computer Science I CS 106 Introduction to Computer Science I 01 / 21 / 2014 Instructor: Michael Eckmann Today s Topics Introduction Homework assignment Review the syllabus Review the policies on academic dishonesty and improper PRIMITIVE DATA TYPE JAVA PRIMITIVE DATA TYPE Description Not everything in Java is an object. There is a special group of data types (also known as primitive types) that will be used quite often in programming. For performance Building Java Programs Building Java Programs Chapter 4 Lecture 4-1: Scanner; if/else reading: 3.3 3.4, 4.1 Interactive Programs with Scanner reading: 3.3-3.4 1 Interactive programs We have written programs that print console 6.1. Example: A Tip Calculator 6-1 Chapter 6. Transition to Java Not all programming languages are created equal. Each is designed by its creator to achieve a particular purpose, which can range from highly focused languages designed for Install Java Development Kit (JDK) 1.8 CS 259: Data Structures with Java Hello World with the IntelliJ IDE Instructor: Joel Castellanos e-mail: joel.unm.edu Web: Office: Farris Engineering Center 319 8/19/2015 Install Crash Course in Java Crash Course in Java Based on notes from D. Hollinger Based in part on notes from J.J. Johns also: Java in a Nutshell Java Network Programming and Distributed Computing Netprog 2002 Java Intro 1 What is 1.00 Lecture 23. 
Streams 1.00 Lecture 23 Input/Output Introduction to Streams Exceptions Reading for next time: Big Java 19.3-19.4 Streams Java can communicate with the outside world using streams Picture a pipe feeding data into Computer Programming I Computer Programming I COP 2210 Syllabus Spring Semester 2012 Instructor: Greg Shaw Office: ECS 313 (Engineering and Computer Science Bldg) Office Hours: Tuesday: 2:50 4:50, 7:45 8:30 Thursday: 2:50 4:50, 13 File Output and Input SCIENTIFIC PROGRAMMING -1 13 File Output and Input 13.1 Introduction To make programs really useful we have to be able to input and output data in large machinereadable amounts, in particular we have to Part I. Multiple Choice Questions (2 points each): Part I. Multiple Choice Questions (2 points each): 1. Which of the following is NOT a key component of object oriented programming? (a) Inheritance (b) Encapsulation (c) Polymorphism (d) Parallelism ****** Java Review (Essentials of Java for Hadoop) Java Review (Essentials of Java for Hadoop) Have You Joined Our LinkedIn Group? What is Java? Java JRE - Java is not just a programming language but it is a complete platform for object oriented programming. A First Book of C++ Chapter 2 Data Types, Declarations, and Displays A First Book of C++ Chapter 2 Data Types, Declarations, and Displays Objectives In this chapter, you will learn about: Data Types Arithmetic Operators Variables and Declarations Common Programming Errors CS 1302 Ch 19, Binary I/O CS 1302 Ch 19, Binary I/O Sections Pages Review Questions Programming Exercises 19.1-19.4.1, 19.6-19.6 710-715, 724-729 Liang s Site any Sections 19.1 Introduction 1. 
An important part of programming Unit 2: Libraries, Packages, Components, Types, and Variables Unit 2: Libraries, Packages, Components, Types, and Variables Preview of Coming Attractions In this unit be sure to look for different types of libraries packages import command primitive types and component Introduction to Programming Introduction to Programming Lecturer: Steve Maybank Department of Computer Science and Information Systems sjmaybank@dcs.bbk.ac.uk Spring 2015 Week 2b: Review of Week 1, Variables 16 January 2015 Birkbeck Interactive Programs and Graphics in Java Interactive Programs and Graphics in Java Alark Joshi Slide credits: Sami Rollins Announcements Lab 1 is due today Questions/concerns? SVN - Subversion Versioning and revision control system 1. allows IMDB Data Set Topics: Parsing Input using Scanner class. Atul Prakash IMDB Data Set Topics: Parsing Input using Scanner class Atul Prakash IMDB Data Set Consists of several files: movies.list: contains actors.list: contains aka-titles.list: Programming in Java. What is in This Chapter? Chapter 1 Chapter 1 Programming in Java What is in This Chapter? This first chapter introduces you to programming JAVA applications. It assumes that you are already familiar with programming and that you have taken) Comments are areas of text ignored by the Java compiler. 1) Java applications a) A java application has a special method, called the main method, which is where the program will start. i) Each class can only have one main method. ii) You can tell eclipse which
http://docplayer.net/18041960-Java-util-scanner-here-are-some-of-the-many-features-of-scanner-objects-some-features-of-java-util-scanner.html
CC-MAIN-2018-34
en
refinedweb
gsl man page gsl — GNU Scientific Library Synopsis #include <gsl/...> Description The GNU Scientific Library (GSL) is a collection of routines for numerical computing. The routines are written from scratch by the GSL team in C, and present a modern Applications Programming Interface (API) for C programmers, allowing wrappers to be written for very high level languages. The library covers the following areas: Complex Numbers, Roots of Polynomials, Special Functions, Vectors and Matrices, Permutations, Combinations. For more information please consult the GSL Reference Manual, which is available as an info file. You can read it online using the shell command info gsl-ref (if the library is installed). Please report any bugs to bug-gsl@gnu.org. Referenced By gdl(1), gsl-histogram(1), gsl-randist(1). GNU Scientific Library GSL Team
https://www.mankier.com/3/gsl
CC-MAIN-2018-34
en
refinedweb
Hi! In fact, we changed our minds. Apparently the Arduino is very bad at recognizing images, so what we will do is analyse the QR codes with a mobile app and then send the information to the Arduino card. It will be much easier for us! Now we just have to find out how to send the information to the Arduino... Anyway, thanks a lot for all your precise and great answers, much appreciated!

Serial.println(sizeof(bool));
Serial.println(sizeof(boolean));
Serial.println(sizeof(unsigned char));
Serial.println(sizeof(unsigned int));
Serial.println(sizeof(unsigned long));

#include <cstdio>

int main()
{
    unsigned int line, bit; // instead of unsigned int, use unsigned long on the Arduino (32 bits)

    /* The bits of a QR code could be stored in the array elements in various
       ways. I decided to store them in the high-order bits of the elements
       because I felt it made converting the hex numbers into a QR image by
       hand easier */
    unsigned int qr[21] =
    {
        0xFEBBF800, 0x82320800, 0xBAD2E800, 0xBACAE800, 0xBA92E800,
        0x827A0800, 0xFEABF800, 0x00180000, 0xF2FCE800, 0xA9FFE800,
        0x26951800, 0x3C915000, 0xF3AE0800, 0x00F32800, 0xFE3F8000,
        0x82457800, 0xBA286000, 0xBA827000, 0xBAC92000, 0x82D78800,
        0xFEB72000
    };
    // Each element stores a 21-bit row of the QR code in its highest-order
    // bits, i.e. the lowest 11 bits of the 32-bit elements are zero

    for (int irow = 0; irow < 21; ++irow) // process each row in the QR code
    {
        line = qr[irow]; // copy the row
        for (int icol = 0; icol < 21; ++icol) // process each bit in the QR code row
        {
            bit = line & 0x80000000; // get the highest-order bit
            line = line << 1;        // shift all the bits left by one
            // Display 'X' if the highest-order bit was 1 and 'O' if it was 0
            if (bit != 0)
            {
                printf("X");
            }
            else
            {
                printf("O");
            }
        }
        printf("\n");
    }
    return 0;
}

$ g++ -o main *.cpp
$ ./main
XXXXXXXOXOXXXOXXXXXXX
XOOOOOXOOOXXOOXOOOOOX
XOXXXOXOXXOXOOXOXXXOX
XOXXXOXOXXOOXOXOXXXOX
XOXXXOXOXOOXOOXOXXXOX
XOOOOOXOOXXXXOXOOOOOX
XXXXXXXOXOXOXOXXXXXXX
OOOOOOOOOOOXXOOOOOOOO
XXXXOOXOXXXXXXOOXXXOX
XOXOXOOXXXXXXXXXXXXOX
OOXOOXXOXOOXOXOXOOOXX
OOXXXXOOXOOXOOOXOXOXO
XXXXOOXXXOXOXXXOOOOOX
OOOOOOOOXXXXOOXXOOXOX
XXXXXXXOOOXXXXXXXOOOO
XOOOOOXOOXOOOXOXOXXXX
XOXXXOXOOOXOXOOOOXXOO
XOXXXOXOXOOOOOXOOXXXO
XOXXXOXOXXOOXOOXOOXOO
XOOOOOXOXXOXOXXXXOOOX
XXXXXXXOXOXXOXXXOOXOO

Hi, just a question: in our project, the QR codes are useful because they allow us to associate each book with a subject. There are 6 books and 6 QR codes, so we think that we could put the 6 QR codes in the Arduino and, with the camera, compare the "basic" QR codes with the recognized QR codes. We think it would be easier than translating a QR code from nothing. How should we code this? Thanks again for everything you did for us, much appreciated!

Thanks again for your reply, sorry if we came across as ungrateful. The thing is that it's the first time we have worked with Arduino, and we have limited knowledge about this stuff, so your explanations, while useful, are still pretty complex for us. We understand what the process should be, but it's when we have to write the code that we have some problems.
https://forum.arduino.cc/index.php?PHPSESSID=b1lkqgeo1hqgg3keb0k1drf2g6&topic=522230.15
CC-MAIN-2018-34
en
refinedweb
gd_move man page gd_move — move a Dirfile entry between format specification fragments Synopsis #include <getdata.h> int gd_move(DIRFILE *dirfile, const char *field_code, int new_fragment, unsigned int flags); Description The gd_move() function transfers the field or alias specified by field_code, which should not have a representation suffix, defined in the dirfile specified by dirfile from its current format specification fragment to the fragment indexed by new_fragment. If the field is already defined in the fragment indexed by new_fragment, this function does nothing and returns no error. If the new fragment has different affixes, the field will be renamed as part of the move. See gd_rename(3) for details on field renaming. The field is closed before moving, resulting in its I/O pointer being reset to the beginning-of-field. The flags parameter should be zero or more of the following flags, bitwise or'd together: - GD_REN_DANGLE By default, if the move results in a change of name for the field due to differing fragment affixes, ALIAS entries pointing to this field will be updated with the field's new name. Specifying this flag prohibits this behaviour, turning these aliases into dangling aliases. If moving the field doesn't rename it, this flag is ignored. - GD_REN_DATA If field_code specifies a RAW field, the binary file associated with the field will be translated to account for the possibly different encoding, endianness, and frame offset of the new format specification fragment. It will also be moved to a new directory, if necessary. If this flag is not specified, no changes will be made to the binary file. If field_code specifies a field of type other than RAW, this flag is ignored. If the binary file is translated, and the frame offset of the destination fragment is larger than that of the source fragment, this will result in permanent deletion of data from the database. 
If the new frame offset is smaller than the old frame offset, the binary file will be padded at the front with zeroes. - GD_REN_FORCE Skip updating entries which would be invalid (see gd_rename(3) for details). By default, an invalid field causes the move to fail. If moving the field doesn't rename it, this flag is ignored. - GD_REN_UPDB If moving the field renames it, update entries which use this field as an input to account for the new name (see gd_rename(3)). If moving the field doesn't rename it, this flag is ignored. Return Value On success, gd_move() returns zero. On error, a negative-valued error code is returned. Possible error codes are: - GD_E_BAD_CODE The field specified by field_code was not found in the database, or else the move resulted in the field being renamed and the resultant metadata update tried to change a field code into something prohibited by a fragment's affixes. - GD_E_BAD_DIRFILE The supplied dirfile was invalid. - GD_E_BAD_FIELD_TYPE An attempt was made to move the immutable INDEX field. - GD_E_BAD_INDEX The new_fragment argument did not index a valid format specification fragment. - GD_E_IO An I/O error occurred while attempting to translate a binary file. - GD_E_PROTECTED The metadata of the source or destination format specification fragments was protected from change, or the binary data of the source or destination fragments was protected from change and binary file translation was requested. - GD_E_UNKNOWN_ENCODING The encoding scheme of the source or destination fragment is unknown. - GD_E_UNSUPPORTED The encoding scheme of the source or destination fragment does not support binary file translation. The error code is also stored in the DIRFILE object and may be retrieved after this function returns by calling gd_error(3). A descriptive error string for the error may be obtained by calling gd_error_string(3). Notes A binary file translation occurs out-of-place. As a result, sufficient space must be present on the filesystem for both the binary file before translation and the binary file after translation. History The dirfile_move() function appeared in GetData-0.5.0. It had no flags parameter. 
In place of flags was int move_data. Passing a non-zero value for this parameter had the same effect as the GD_REN_DATA flag does now. In GetData-0.7.0, this function was renamed to gd_move(). In all GetData-0.8.x releases, passing an alias name to this function would move the target of the alias. To move an alias itself, a separate function, gd_move_alias() was available. In GetData-0.9.0, gd_move_alias() was removed. Also in this release, the move_data parameter was replaced with the flags parameter. In GetData-0.10.0, the error return from this function changed from -1 to a negative-valued error code. See Also gd_metaflush(3), gd_open(3), gd_error(3), gd_error_string(3), dirfile(5), dirfile-format(5) Referenced By dirfile-encoding(5), gd_alter_entry(3), gd_getdata(3).
https://www.mankier.com/3/gd_move
CC-MAIN-2018-34
en
refinedweb