Routes
In any web framework, "routes" are among the core elements of a website. Rendering the right content when a user hits a particular URL makes up a large part of what happens in web development.
Reaction uses the React Router package for routing. To get started with routing in Reaction, here are two important elements to understand:
- Reaction stores all its Routes in the "Registry" in the database. This allows packages to dynamically add routes along with their functionality, and even override or remove existing routes.
- The customized version of React Router is available globally as Reaction.Router.
For more in-depth coverage, consult the main Reaction documentation on Routing and the React Router documentation.
But we are going to keep things simple and just add a single new route which will be available to anybody. Bee's Knees wants to add the ubiquitous "About" page to their site and show an essentially static page there. (Management of static pages is coming in an upcoming version of RC, but this still makes an excellent simple example.)
So the first thing we want to do is add the route to the Registry, which we do by adding an entry under the registry key in our register.js file. The entry will look like this (placed after the autoEnable: true entry):
registry: [
  {
    route: "/about",
    name: "about",
    template: "aboutUs",
    workflow: "coreWorkflow"
  }
],
The route entry is the URL pattern that will be matched against the user's URL. (For how to include parameters in the route, please see the RC documentation or the React Router documentation.) The name is the string by which you will refer to this route in other parts of the application. The template is the template that will be rendered when the route is visited, and the workflow defines which workflow this route is attached to. In our case, there is no real workflow around an About page, so we use the default "coreWorkflow".
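As an aside, parameterized routes (mentioned above) work by capturing named path segments. Here is a minimal, framework-free sketch of the idea; the matchRoute helper below is purely illustrative and is not part of Reaction or React Router:

```javascript
// Minimal sketch of path-parameter matching, in the spirit of
// React Router patterns such as "/product/:handle".
function matchRoute(pattern, path) {
  const patternParts = pattern.split("/").filter(Boolean);
  const pathParts = path.split("/").filter(Boolean);
  if (patternParts.length !== pathParts.length) return null;

  const params = {};
  for (let i = 0; i < patternParts.length; i++) {
    if (patternParts[i].startsWith(":")) {
      // ":handle" captures the corresponding URL segment by name
      params[patternParts[i].slice(1)] = pathParts[i];
    } else if (patternParts[i] !== pathParts[i]) {
      return null; // static segment mismatch
    }
  }
  return params;
}

console.log(matchRoute("/product/:handle", "/product/red-shirt")); // → { handle: 'red-shirt' }
console.log(matchRoute("/product/:handle", "/about")); // → null
```

A real router also handles wildcards, query strings and route ranking, but the capture mechanism is essentially this.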
To allow users to visit our new route we need to give them permissions. Since we are fine with everyone viewing our About page, we will add this permission to our "defaultRoles" and "defaultVisitorRoles" (the roles granted when a new user is created).
To do this we are going to create a new file called init.js in the server directory and add that file to our imports. Then we will add a function that looks like this:
function addRolesToVisitors() {
  // Add the about permission to all default roles since it's available to all
  Logger.info("::: Adding about route permissions to default roles");
  const shop = Shops.findOne(Reaction.getShopId());
  Shops.update(shop._id, { $addToSet: { "defaultVisitorRole": "about" } });
  Shops.update(shop._id, { $addToSet: { "defaultRoles": "about" } });
}
Then let's add another Hook Event to call that code.
/**
 * Hook to make additional configuration changes
 */
Hooks.Events.add("afterCoreInit", () => {
  addRolesToVisitors();
});
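Conceptually, Hooks.Events is a registry of named callback lists: add stores a callback under an event name, and the core fires everything registered under that name at the matching point in the lifecycle. A minimal, framework-free sketch of the idea (the Hooks object below is illustrative, not Reaction's implementation):

```javascript
// Toy event-hook registry illustrating the add/run pattern.
const Hooks = {
  _events: {},
  add(name, callback) {
    // Store the callback under the event name.
    (this._events[name] = this._events[name] || []).push(callback);
  },
  run(name, ...args) {
    // Fire every callback registered under this event name, in order.
    return (this._events[name] || []).map((cb) => cb(...args));
  },
};

const log = [];
Hooks.add("afterCoreInit", () => log.push("roles added"));
Hooks.add("afterCoreInit", () => log.push("cache warmed"));
Hooks.run("afterCoreInit");

console.log(log); // logs both messages in registration order
```

The appeal of the pattern is that packages can attach behavior to well-known lifecycle points without the core knowing about them in advance.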
Now, as usual, you will need to reset for this change to take effect. In addition, changes to defaultRoles/defaultVisitorRoles do not change existing users, so you will need to clear your cache or use Private/Incognito mode so that a new user is created.
Using Route "Hooks"
It's common to want to run some code when a user visits a certain route, for things such as site tracking/metrics. You can do this with a route "hook".
We can do this using the Hooks API provided by Reaction. For any route you can add an arbitrary callback. (Note that routing is done on the client side, so the hook needs to be added there.) So we are going to add a new init.js file in our client directory and add the import for it to index.js. Then we can add this code:
import { Router, Logger } from "/client/api";

// create a function to do something on the product detail page
function logSomeStuff() {
  Logger.warn("We're arriving at the product page!");
}

// add that to the product detail page onEnter hook
Router.Hooks.onEnter("product", logSomeStuff);
Now every time the user enters the "product" route, the function logSomeStuff will run. If you want to see a list of the routes currently loaded on the client, you can type ReactionRouter._routes in the browser console.
I started python -m SimpleHTTPServer on one computer on the LAN and used wget to download PHP files from it to another. As far as I can see, they were downloaded correctly: I got the PHP sources instead of the rendered HTML. Why? Is this because this server doesn't execute PHP? While downloading I was worried that I would get just the rendered HTML output instead of the PHP source...
Yep, a simple server like Python's doesn't execute PHP. Even something like Apache wouldn't execute PHP either, unless you specifically told it to (which involves installing mod_php).
Technically, as far as the web server is concerned, everything is just a downloadable file unless you (the configurator) tell it otherwise.
The SimpleHTTPServer module just serves files. To execute the PHP files you need one of the "big" HTTP servers, for example Apache or lighttpd, with the proper module (mod_php) or CGI to run the PHP code and give you its HTML output.
This may work :)
#!/usr/bin/env python
from BaseHTTPServer import HTTPServer
from CGIHTTPServer import CGIHTTPRequestHandler
serve = HTTPServer(("",8080),CGIHTTPRequestHandler)
serve.serve_forever()
php file:
#!/usr/bin/php
<? phpinfo(); ?>
don't forget to chmod +x the php file
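On Python 3 the same trick looks like the sketch below (SimpleHTTPServer, BaseHTTPServer and CGIHTTPServer were merged into http.server). One important detail: CGIHTTPRequestHandler only executes scripts that live in a cgi-bin (or htbin) directory under the directory you start the server from, so the PHP file from the answer above would go in ./cgi-bin/:

```python
#!/usr/bin/env python3
# Python 3 port of the snippet above.  CGIHTTPRequestHandler only
# executes scripts found under the directories listed below (relative
# to where the server is started); anything else is served as a file.
from http.server import HTTPServer, CGIHTTPRequestHandler

print(CGIHTTPRequestHandler.cgi_directories)  # → ['/cgi-bin', '/htbin']

# Port 0 lets the OS pick a free port; the original answer used 8080.
server = HTTPServer(("", 0), CGIHTTPRequestHandler)
print("serving on port", server.server_address[1])
# server.serve_forever()  # uncomment to actually handle requests
server.server_close()
```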
In this article, I am going to show you how to use an Excel workbook as your data source and fetch data based on your SQL query. In my first step, I will show you how to write a query using SQL syntax and next, I will show you how to fetch and bind data in your DataGrid.
When I started reading an Excel workbook by writing traditional VBA code, I found it complicated to read even one cell value. Even after writing working code, you may be trapped by the COM memory-leak issue. If your Excel file has a huge amount of data to read and quite a few sheets, then you can expect a nice popup window showing up with "You are running out of virtual memory." Even after killing several objects during this process, you cannot be sure that your objects will be released immediately. Finally, I found that SQL can reduce code complexity, improve performance, and avoid the memory-leak issue. I am assuming that those of you who are reading this article have basic knowledge of ADO.NET and Microsoft Excel.
Getting data from any data store using ADO.NET is very simple. Before writing real code, let's create a Windows application (though you can use a Web application as well) and add the following line at the top of the target form. I am going to use the OLEDB API to access the Excel data. OLEDB uses the JET engine to execute the query and fetch data from Excel.
using System.Data.OleDb;
Please make a note that I am using Office 2003 and Visual Studio 2003.
First, we will establish connection to our data source. It is very similar to connecting to the SQL Server or Oracle.
OleDbConnection con = new OleDbConnection(
"provider=Microsoft.Jet.OLEDB.4.0;data source=" +
"File Name with Complete Path" +";Extended Properties=Excel 8.0;");
Writing a query against Excel is very similar to writing a query against any other traditional data store like SQL Server, Oracle, etc. However, there are a few differences. First, you have to specify your sheet name instead of your table name. Next, you have to give starting and ending cell references. Look at the following code carefully:
SELECT * FROM [42560035$A1:F500]
Here 42560035 is the worksheet name, followed by a $ sign, and the whole sheet-and-range reference is placed inside [] brackets.
This was a very simple query. Here is the more complicated one:
SELECT * FROM [42560030$A21:F500]
WHERE [Period_End Date] = #3/2/2007#
ORDER BY [Date_Incurred]
In place of *, which returns all columns, you can specify the exact column name(s) you are looking for. Here is an example:
SELECT [Associate Name] as Associate,[Amount] as Amount
FROM [42560030$A21:F500]
WHERE [Period_End Date] = #3/2/2007# ORDER BY [Date_Incurred]
Please make sure that you place your column name inside [] brackets if the column name contains white space. Otherwise, the JET engine will throw an exception. For consistency, place all column names inside [] brackets. One more interesting point: if you observe the [Period_End Date] column name carefully, there is an underscore (_) between Period and End. This is because in my Excel sheet I wrote Period on one line and End Date on the next line. Please note that when I say the next line, I mean a new line within the same cell, not the next cell. The next step is to build our data adapter.
StringBuilder stbQuery = new StringBuilder();
stbQuery.Append("SELECT * FROM [42560035$A1:F500]");
OleDbDataAdapter adp = new OleDbDataAdapter(stbQuery.ToString(), con);
DataSet dsXLS = new DataSet();
adp.Fill(dsXLS);
After filling the data, now is the time to see it. For that, we can use a DataGrid, or you can write your dataset to an XML file by using the WriteXml method. In this example, I will be using a DataGrid to display my data.
DataView dvEmp = new DataView(dsXLS.Tables[0]);
dataGrid1.DataSource = dvEmp;
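One last tip: if the first row of your range is not a header row, or a column mixes numbers and text, two documented Jet extended properties help. HDR controls whether the first row is treated as column names, and IMEX=1 tells the driver to return intermixed columns as text. A variant of the connection string used earlier (the file path here is only an example):

```
provider=Microsoft.Jet.OLEDB.4.0;data source=C:\Data\Book1.xls;Extended Properties="Excel 8.0;HDR=Yes;IMEX=1"
```

Note that the whole Extended Properties value must be quoted when it contains more than one setting.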
That is all.
In my next article, I am going to show you how to post data to Excel.
A collection of curated extensions for discerning React developers.
Food Truck icon by @reverentgeek
Only the cleanest, freshest set of React snippets in the market. Everything you
need with no snippet bloat. Created by one of the nicest guys on the planet.
All the best ES6 snippets at your finger tips. Why type import when you can
type imp? It's three letters shorter. That's called productivity.
Your code looks a little shady. Tighten it up with Prettier. Not sure if you
need that semicolon? Don't fret - Prettier knows.
Add this to your User Preferences file for that premium "format on save"
experience.
"editor.formatOnSave": true
Your JavaScript is so solid, you don't need a linter. But if you did, it would
be ESLint and you would use this extension.
You can force Prettier to respect your ESLint rules by adding the following line
to your User Settings...
"prettier.eslintIntegration": true
Stop guessing at your npm packages. You never get it right anyway. Is it leftpad or
left-pad? Not your problem anymore. This extension provides intellisense for
your imports.
Still importing components manually? Why? Just use the component in your JSX and
this extension will import it for you.
You know how you copy/paste in some CSS to a React component, then you gotta fix
it cause CSS isn't valid JavaScript? Never. again.
Do you need a class or a pure component? Nobody ever knows until they make a
component and then realize this thing isn't nearly as dumb as they thought it
was going to be and NOW they need a class. REFACTORING SUCKS. Converting pure
components to classes is a breeze with this extension.
IEnumerable code

[Screenshot omitted: a LINQ query over the Employee table written against IEnumerable, with Take(3) applied afterwards.]

SQL statement after execution of the above query

After the IEnumerable query executes, the SQL sent to the server selects the rows without any TOP clause; the Take(3) filter runs later, in memory on the client.

IQueryable code

[Screenshot omitted: the same query written against IQueryable.]

SQL statement after execution of the preceding query

With IQueryable, the Take(3) filter is folded into the generated SQL itself, so the statement includes a TOP clause.

In both syntaxes I am accessing the data from the Employee table and then taking only 3 rows from that data.
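To make the contrast concrete, the two generated statements have roughly these shapes (illustrative SQL only: the exact column list and quoting depend on the provider, and EmployeeID and Name are assumed column names):

```sql
-- IEnumerable: Take(3) runs client-side, so the full result set is selected
SELECT [EmployeeID], [Name] FROM [dbo].[Employee]

-- IQueryable: Take(3) is translated into the statement itself
SELECT TOP (3) [EmployeeID], [Name] FROM [dbo].[Employee]
```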
Differences
IEnumerable
IEnumerable exists in the System.Collections namespace.
IEnumerable is suitable for querying data from in-memory collections like List, Array and so on.
While querying data from a database, IEnumerable executes the select query on the server side, loads the data into memory on the client side, and then filters it.
IEnumerable is beneficial for LINQ to Object and LINQ to XML queries.
IQueryable
IQueryable exists in the System.Linq Namespace.
IQueryable is suitable for querying data from out-memory (like remote database, service) collections.
While querying data from a database, IQueryable executes the select query on the server side with all the filters applied there.
IQueryable is beneficial for LINQ to SQL queries.
Create Related Attributes - (Calculations??)
DanielleW, Mar 21, 2012 5:10 AM
Hello,
Is it possible to create fields that are related? I would like to add some kind of risk matrix to our Change process, but can't work out how to calculate the outcome. So, for example, a user would select Business Impact = Low and then Likelihood of Failure = High, and the overall risk of the change would be calculated as Medium. Has anybody completed something like this?
Kind regards,
Danielle
1. Re: Create Related Attributes - (Calculations??) Carl.Simpson, Mar 21, 2012 10:49 AM (in response to DanielleW)
A calculation would probably do what you want. It would probably consist of setting a value to zero and then using a bunch of if-then statements to add or subtract from the value. Then maybe another set of if-thens to check the value and return a description if the number alone isn't enough. You will see many examples of Boo calculations in the calculations section.
The Boo web site was recently "upgraded" and sadly many examples no longer exist. It wasn't great before and it's worse now.
2. Re: Create Related Attributes - (Calculations??) dmshimself, Mar 25, 2012 4:55 AM (in response to DanielleW)
Yes I've done this sort of thing, but it isn't for the faint hearted. You need to construct some careful BOO calculations to do what you need.
So you program up the algorithm you need in Boo and then pull in actual values from reference lists like Business Impact, Likelihood of Failure and so on. Here is an example, where several reference lists are used and each entry in a reference list has a score to add in.
This calculation returns the Nth entry in a Request Assessment list (High, Medium, Low), but you might be happy just to return a number.
import System

static def GetAttributeValue(Change):
    Value = Change._ComplexityRating._Rating + Change._FrequencyRating._Rating + Change._PotentialImpactRating._Rating + Change._Numberofclientsaffectedr._Rating + Change._EffortRating1._Rating + Change._EstimatedCostRating1._Rating + Change._Linkagestoothersystemsra._Rating + Change._LinkagestootherchangesRa._Rating
    if Value >= 24:
        return Change.GetRankedObject("ChangeManagement._RequestAssessment", 3)
    if Value >= 16 and Change._ExpediteChange._ExpediteChange == "Yes":
        return Change.GetRankedObject("ChangeManagement._RequestAssessment", 3)
    if Value >= 16:
        return Change.GetRankedObject("ChangeManagement._RequestAssessment", 2)
    if Change._ExpediteChange._ExpediteChange == "Yes":
        return Change.GetRankedObject("ChangeManagement._RequestAssessment", 2)
    else:
        return Change.GetRankedObject("ChangeManagement._RequestAssessment", 1)
3. Re: Create Related Attributes - (Calculations??) Carl.Simpson, Mar 26, 2012 8:15 AM (in response to dmshimself)
Not for faint hearted but definitely worth the effort. We use calculations a lot. They allow you to do really cool stuff that the Service Desk doesn't normally doesn't do. Once you start using calculations you will find many more uses and it really help customize Service Desk way beyond what you thought was possible.
However, there is also a dark side. LANDesk support does not support calculations, meaning your only help is here, .NET web sites, and Boo web sites. Currently the Boo web site is a mess and in need of serious work. The examples here in the community show a lot of useful bits of information but certainly not everything. The other hassle is that, unlike most IDEs, Service Desk has no way to step through or follow what is happening. Typically you can step through your code one line at a time and then look at the variables to see what is going on. With Service Desk, none of that exists.
This is what I typically do:
- Insert a test at the trouble spot to return a value
- Add a return command to ignore the rest of my calculation
- Save the changes in the object
- Do an IISRESET
- Load your incident, change, problem, etc
- Make a change
- Check results
- See error
- Exit incident, change, problem, etc
- Load object again
- Edit calculation
- Repeat step 1 till it works
I can sometimes use a query with a shortcut in the design tab to make checking calculations easier. In the end, though, working with calculations can be very frustrating. I know there is an Enhancement Request to add debugging abilities for calculations. I would recommend everyone vote on that. There is so little official documentation of Boo that I am tempted to write a manual myself. Until I get annoyed enough to research Boo to death, the community is your best source of information.
I bet after reading all this you're wondering if the effort is worth it. Yes
4. Re: Create Related Attributes - (Calculations??) hewy06, Mar 26, 2012 9:12 AM (in response to Carl.Simpson)
Carl/Dave
Any suggestions on where I would start if I wanted to "attempt" to teach myself Boo?? Reading previous posts, you guys and many others are doing impressive things with calculations and I have no idea where to start. Frankly, reading Dave's example in this thread was like reading a foreign language!!
Is it a case of working through the BOO website (though you've put me off now saying it's a mess) and the labs on it? And just to throw a spanner in the works the last thing I programmed in was BASIC during GCSE ICT!!!!!!
Cheers for any advice
Helen
5. Re: Create Related Attributes - (Calculations??) Carl.Simpson, Jul 25, 2016 11:25 AM (in response to hewy06)
BASIC is a good start, Visual Basic would be better.
Calculation formula writing: Further reading on the Boo scripting language and .NET Framework
Boo has an unusual style; they call it duck typing. Once you get used to it things are not so bad. Basically, in an if/then statement the then is replaced with a colon. In Visual Basic you use Begin and End to delimit many things, like a FOR/NEXT loop. In Boo they use indentation, and more specifically spaces, to mark sections of code. Spacing is very crucial to Boo, but otherwise it's a standard top-down approach. You can use many of the C variations, but the less confusing Basic formats (V = V + 1) also work. There are a whole bunch of functions available to you compared to BASIC, so in a way things are much better.
6. Re: Create Related Attributes - (Calculations??) hewy06, Mar 27, 2012 2:07 AM (in response to Carl.Simpson)
Thanks Carl, much appreciated.
H
7. Re: Create Related Attributes - (Calculations??) DanielleW, Jul 4, 2012 7:25 AM (in response to dmshimself)
Hi Dave,
Sorry it has taken me a long time to muster up the guts to have a go at this. I think I am getting my head around Boo slowly. I was just wondering: when you were adding together the ratings in the example, what data types are the rating attributes?
Thanks,
Danielle
8. Re: Create Related Attributes - (Calculations??) dmshimself, Jul 4, 2012 2:22 PM (in response to DanielleW)
Mine were all Int16. Stu's recent articles on calculations would be a great set to go through too. The first one is here, but you may have already found it.
This article presents a DataGrid control which is built with no MFC. It can be used in SDK or MFC Win32 applications. This source code is also compiled with GNU compiler and has shown to be stable.
You can find various grid controls all over the Internet, some free and some not. Also, there is an article by Chris Maunder about a grid control which can be used on different platforms (ATL and MFC version). Grid controls are very useful for representing two-dimensional tabular data. They are often used in accounting applications. Grid controls must be designed for quick data-referencing and modification. The grid control presented in this article supports up to 32000 rows and 1000 columns. Also, there can be up to 20 grid controls created at the same time. These values can be changed in the DataGrid header file but there is always a memory limit.
To use the DataGrid control a header file must be included in the project.
#include "DataGrid.h"
Next, create an instance of the CDataGrid class and call the Create() method.
// hParentWnd is declared somewhere else
//
CDataGrid dataGrid;
int numCols = 5;
RECT rect = {0,0,500,300};
dataGrid.Create( rect, hParentWnd, numCols );
Use SetColumnInfo() method to describe the DataGrid control columns.
int colIndex = 0;
char colText[] = "Column1";
int colSize = 120;
UINT txtAlign = DGTA_LEFT;
dataGrid.SetColumnInfo( colIndex, colText, colSize, txtAlign );
To add items to the DataGrid control, call InsertItem() method.
char itemText[] = "Item1";
dataGrid.InsertItem( itemText, txtAlign );
To describe subitems, use SetItemInfo() method.
int rowIndex = 0;
int columnIndex = 0;
char subitemText[] = "Subitem1";
bool readOnly = false;
dataGrid.SetItemInfo( rowIndex, columnIndex, subitemText, txtAlign, readOnly );
The DataGrid control sends notification messages through the WM_COMMAND message. These notifications are:
This is the basic use of this control. See the demo project as an example. The DataGrid control supports the following:
My goal was to try to develop a grid control that supports most of the things that the MFC CListCtrl control does, and possibly some more, while being just as efficient. Its GUI is designed to be very similar to the previously mentioned control.
EF6 Onwards Only - The features, APIs, etc. discussed in this page were introduced in Entity Framework 6. If you are using an earlier version, some or all of the information does not apply.
EF6 introduced support for asynchronous query and save using the async and await keywords that were introduced in .NET 4.5. While not all applications may benefit from asynchrony, it can be used to improve client responsiveness and server scalability when handling long-running, network or IO-bound tasks.
The following topics are covered on this page:
The purpose of this walkthrough is to introduce the async concepts in a way that makes it easy to observe the difference between asynchronous and synchronous program execution. This walkthrough is not intended to illustrate any of the key scenarios where async programming provides benefits.
Async programming is primarily focused on freeing up the current managed thread (thread running .NET code) to do other work while it waits for an operation that does not require any compute time from a managed thread. For example, whilst the database engine is processing a query there is nothing to be done by .NET code.
In client applications (WinForms, WPF, etc.) the current thread can be used to keep the UI responsive while the async operation is performed. In server applications (ASP.NET etc.) the thread can be used to process other incoming requests - this can reduce memory usage and/or increase throughput of the server.
In most applications using async will have no noticeable benefits and even could be detrimental. Use tests, profiling and common sense to measure the impact of async in your particular scenario before committing to it.
Here are some more resources to learn about async:
We’ll be using the Code First workflow to create our model and generate the database, however the asynchronous functionality will work with all EF models including those created with the EF Designer.
using System.Collections.Generic;
using System.Data.Entity;

namespace AsyncDemo
{
    public class BloggingContext : DbContext
    {
        public DbSet<Blog> Blogs { get; set; }
    }

    public class Blog
    {
        public int BlogId { get; set; }
        public string Name { get; set; }
    }
}

Now that we have an EF model, let's write some code that uses it to perform some data access.
using System;
using System.Linq;

namespace AsyncDemo
{
    class Program
    {
        static void Main(string[] args)
        {
            PerformDatabaseOperations();

            Console.WriteLine();
            Console.WriteLine("Quote of the day");
            Console.WriteLine("  Don't worry about the world coming to an end today...");
            Console.WriteLine("  It's already tomorrow in Australia.");
            Console.WriteLine();
            Console.WriteLine("Press any key to exit...");
            Console.ReadKey();
        }

        public static void PerformDatabaseOperations()
        {
            using (var db = new BloggingContext())
            {
                // Create a new blog and save it
                db.Blogs.Add(new Blog
                {
                    Name = "Test Blog #" + (db.Blogs.Count() + 1)
                });
                db.SaveChanges();

                // Query for all blogs ordered by name
                var blogs = (from b in db.Blogs
                             orderby b.Name
                             select b).ToList();

                // Write all blogs out to Console
                Console.WriteLine();
                Console.WriteLine("All blogs:");
                foreach (var blog in blogs)
                {
                    Console.WriteLine("  " + blog.Name);
                }
            }
        }
    }
}
This code calls the PerformDatabaseOperations method which saves a new Blog to the database and then retrieves all Blogs from the database and prints them to the Console. After this, the program writes a quote of the day to the Console.
Since the code is syncronous, we can observe the following execution flow when we run the program:
Now that we have our program up and running, we can begin making use of the new async and await keywords. We've made the following changes to Program.cs
For a comprehensive list of available extension methods in the System.Data.Entity namespace, refer to the QueryableExtensions class. You’ll also need to add “using System.Data.Entity” to your using statements.
using System; using System.Data.Entity; using System.Linq; using System.Threading.Tasks; namespace AsyncDemo { class Program { static void Main(string[] args) { var task = PerformDatabaseOperations(); Console.WriteLine("Quote of the day"); Console.WriteLine(" Don't worry about the world coming to an end today... "); Console.WriteLine(" It's already tomorrow in Australia."); task.Wait(); Console.WriteLine(); Console.WriteLine("Press any key to exit..."); Console.ReadKey(); } public static async Task PerformDatabaseOperations() { using (var db = new BloggingContext()) { // Create a new blog and save it db.Blogs.Add(new Blog { Name = "Test Blog #" + (db.Blogs.Count() + 1) }); Console.WriteLine("Calling SaveChanges."); await db.SaveChangesAsync(); Console.WriteLine("SaveChanges completed."); // Query for all blogs ordered by name Console.WriteLine("Executing query."); var blogs = await (from b in db.Blogs orderby b.Name select b).ToListAsync(); // Write all blogs out to Console Console.WriteLine("Query completed with following results:"); foreach (var blog in blogs) { Console.WriteLine(" - " + blog.Name); } } } } }
Now that the code is asyncronous, we can observe a different execution flow when we run the program:
We now saw how easy it is to make use of EF’s asynchronous methods. Although the advantages of async may not be very apparent with a simple console app, these same strategies can be applied in situations where long-running or network-bound activities might otherwise block the application, or cause a large number of threads to increase the memory footprint.
![CDATA[ Third party scripts and code linked to or referenced from this website are licensed to you by the parties that own such code, not by Microsoft. See ASP.NET Ajax CDN Terms of Use –. ]]> | https://msdn.microsoft.com/en-us/data/jj819165 | CC-MAIN-2016-30 | refinedweb | 862 | 58.79 |
The distance between the Earth and Mars depends on their relative positions in their orbits and varies quite a bit over time. This post will show how to compute the approximate distance over time. We’re primarily interested in Earth and Mars, though this shows how to calculate the distance between any two planets.
The planets have elliptical orbits with the sun at one focus, but these ellipses are nearly circles centered at the sun. We’ll assume the orbits are perfectly circular and lie in the same plane. (Now that Pluto is not classified as a planet, we can say without qualification that the planets have nearly circular orbits. Pluto’s orbit is much more elliptical than any of the planets.)
We can work in astronomical units (AUs) so that the distance from the Earth to the sun is 1. We can also work in units of years so that the period is also 1. Then we could describe the position of the Earth at time t as exp(2πit).
Mars has a larger orbit and a longer period. By Kepler’s third law, the size of the orbit and the period are related: the square of the period is proportional to the cube of the radius. Because we’re working in AUs and years, the proportionality constant is 1. If we denote the radius of Mars’ orbit by r, then its orbit can be described by
r exp(2πi r^(-3/2) t)
Here we pick our initial time so that at t = 0 the two planets are aligned.
The distance between the planets is just the absolute value of the difference between their positions:
| exp(2πit) - r exp(2πi r^(-3/2) t) |
The following code computes and plots the distance from Earth to Mars over time.
from scipy import exp, pi, absolute, linspace import matplotlib.pyplot as plt def earth(t): return exp(2*pi*1j*t) def mars(t): r = 1.524 # semi-major axis of Mars orbit in AU return r*exp(2*pi*1j*(r**-1.5*t)) def distance(t): return absolute(earth(t) - mars(t)) x = linspace(0, 20, 1000) plt.plot(x, distance(x)) plt.xlabel("Time in years") plt.ylabel("Distance in AU") plt.ylim(0, 3) plt.show()
And the output looks like this:
Notice that the distance varies from about 0.5 to about 2.5. That’s because the radius of Mars’ orbit is about 1.5 AU. So when the planets are exactly in phase, they are 0.5 AU apart and when they’re exactly out of phase they are 2.5 AU apart. In other words the distance ranges from 1.5 – 1 to 1.5 + 1.
The distance function seems to be periodic with period about 2 years. We can do a little calculation by hand to show that is the case and find the period exactly.
The distance squared is the distance times its complex conjugate. If we let ω = r^(-3/2) then the distance squared is
d^2(t) = (exp(2πit) - r exp(2πiωt)) (exp(-2πit) - r exp(-2πiωt))
which simplifies to
1 + r^2 - 2r cos(2π(1 - ω)t)
and so the (squared) distance is periodic with period 1/(1 - ω) ≈ 2.13.
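We can sanity-check that period numerically with a few lines of plain Python (no SciPy needed):

```python
import math

r = 1.524          # semi-major axis of Mars' orbit in AU
omega = r ** -1.5  # Mars' angular frequency relative to Earth's (Kepler's third law)
synodic_period = 1 / (1 - omega)

print(round(synodic_period, 2))  # → 2.13
```

This is the synodic period of Mars as seen from Earth, which agrees with the roughly 26-month spacing of Mars launch windows.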
Notice that the plot of distance looks more angular at the minima and more rounded near the maxima. Said another way, the distance changes more rapidly when the planets leave their nearest approach than their furthest approach. You can prove this by taking the square root of d^2(t) and computing its derivative.
Let f(t) = 1 + r^2 - 2r cos(2π(1 - ω)t). By the chain rule, the derivative of the square root of f(t) is (1/2) f(t)^(-1/2) f'(t). Near a maximum or a minimum, f'(t) takes on the same values. But the term f(t)^(-1/2) is largest when f(t) is smallest and vice versa because of the negative exponent.
A tool that converts ASS/SSA subtitles to SRT format
Project description
asstosrt is a tool which converts Advanced SubStation Alpha (ASS/SSA) subtitle files to SubRip (SRT) files. Many old devices only support SubRip.
Usage
Install asstosrt.
# pip install asstosrt
chardet is suggested, which provides automatic charset detection.
# pip install chardet
Then cd into the directory of ASS/SSA files. Run asstosrt.
$ asstosrt
Done. All converted SRT files will be written to the current directory.
Run with --help to see more.
$ asstosrt --help
More Examples
Specify input and output encoding, output directory:
$ asstosrt -e utf-8 -s utf-16be -o /to/some/path/
Convert to Simplified Chinese (using langconv):
$ asstosrt -t zh-hans -s gb18030 /path/to/some.big5.ass
Convert to Traditional Chinese (Using OpenCC):
# pip install pyopencc
$ asstosrt -c zhs2zht.ini
Only keep first line for each dialogue and delete all effects:
$ asstosrt --only-first-line --no-effact
Used as a Library
You can use asstosrt in your program easily.
import asstosrt

ass_file = open('example.ass')
srt_str = asstosrt.convert(ass_file)
License
MIT License
Project details
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/asstosrt/ | CC-MAIN-2022-27 | refinedweb | 199 | 50.12 |
] by
- Merge pull request #1611 from thusoy/patch-1 Fix broken sphinx reference …
- 13:10 Changeset [751dc0a]] by
- Fixed #19298 -- Added MultiValueField.deepcopy Thanks nick.phillips …
- 10:42 LittleEasyImprovements edited by
- 10:17 Changeset [d5d0e03]] by
- Fixed #20978 -- Made deletion.SET_NULL more friendly for …
- 07:12 Changeset [d59f1993]] by
- Fixed #21033 -- Fixed uploaded filenames not always being truncated to 255 …
- 16:08 Changeset [d6e222f] by
- Merge pull request #1607 from Diskun/master Fixed a little mistake in …
- 16:02 Changeset [522d3d6] by
- Fixed a little mistake in Django 1.7 release notes
- 14:01 Changeset [df2fd4e]] by
- Refactored code and tests that relied on django.utils.tzinfo. Refs …
- 13:32 Changeset [ec2778b]] by
- Fixed #19885 -- cleaned up the django.test namespace * override_settings …
- 12:19 Changeset [a52cc1c0]] by
- Fixed #20707 -- Added explicit quota assignment to Oracle test user To …
- 06:03 Changeset [7c6f2dd] by
- Simplify FilterExpression.args_check
- 06:03 Changeset [5e1c7d4]] by
- Fix #20745: Don't silence TypeError raised inside templates. Thanks to …
- 12:53 Ticket #21038 (Documentation in PDF) closed by
- fixed: Lovely, it looks much better now.
- 12:20 Changeset [9d11522]] by
- Further hardening. Refs #18766.
- 10:41 Changeset [0035a0c]stable/1.6.x by
- [1.6.x] Hardened the test introduced in ded11aa6. Refs #18766. Inputs …
- 10:40 Changeset [1a1e147] by
- Hardened the test introduced in ded11aa6. Refs #18766. Inputs acceptable …
- 08:05 Changeset [96fd555] by
- Removed a ton of unused local vars
- 07:42 Changeset [0ee8aa5c] by
- Removed an unused local var
- 02:23 Changeset [fa7bc246]] by
- Fixed #16869 -- BaseGenericInlineFormSet.save_new should use form's save() …
- 15:36 Changeset [c348550]] by
- Merge pull request #1594 from garrypolley/patch-1 adding myself to …
- 11:16 Changeset [6d2be87]] by
- Fixed #20005 -- Documented that Oracle databases need execute permission …
- 10:47 Changeset [a44cbca] by
- Added a note about LTS releases.
- 10:41 Changeset [8ef060e0]] by
- Fixed #20530 -- Properly decoded non-ASCII query strings on Python 3. …
- 09:38 Changeset [ff494494] by
- Merge pull request #1592 from …
- 09:33 Changeset [2223b83a] by
- Improved docs for contrib.admin.options.ModelAdmin.response_* Added …
- 09:32 Changeset [476b076] by
- Oops :(
- 09:31 Changeset [7bb62793]] by
- RunSQL migration operation and alpha SeparateDatabaseAndState op'n.
- 09:01 Changeset [9079436b] by
- Add docs for response_add, response_change and response_delete
- 08:59 Changeset [73de9dd] by
- Add response_delete and render_delete_form methods to ModelAdmin …
- 08:45 Changeset [fac5735]stable/1.6.x by
- [1.6.x] Fixed #20557 -- Properly decoded non-ASCII cookies on Python 3. …
- 08:43 Changeset [f5add47]] by
- Fixed #20557 -- Properly decoded non-ASCII cookies on Python 3. Thanks …
- 08:25 Changeset [ae7f9afa] by
- Minor cleanup in the WSGI handler.
- 08:24 Ticket #21065 (Internally choosing how to process a template is inconsistent) created by
- 08:03 Changeset [4e88d106] by
- Refactored the unmangling of the WSGI environ.
- 08:03 Changeset [636860fb]] by
- Fixed regression introduced by a962286, changed ugettext to ugettext_lazy.
Note: See TracTimeline for information about the timeline view. | https://code.djangoproject.com/timeline?from=2013-09-10T06%3A52%3A13-07%3A00&precision=second | CC-MAIN-2014-10 | refinedweb | 484 | 55.95 |
Roll your own IRC bot
From HaskellWiki
Revision as of 00:04, 5 October 2006
to a Haskell program. We first connect to the server, then set the buffering on the socket off. Once we've got a socket, we can then just read and print any data we receive. Put this code in the module
join a channel, and start listening for commands.
While we're here we can tidy up the main function a little, by using bracket to delimit the connection, shutdown and main loop phases of the program - a useful technique. We can also make the code a bit more robust, by wrapping the main loop in an exception handler, using catch:
main :: IO ()
main = bracket connect disconnect loop
  where
    disconnect = hClose . socket
    loop st    = catch (runReaderT run st) (const $ return ())
bracket takes three arguments: a function to connect to the server, a function to disconnect, and the main loop; therefore:
- Use forkIO to add a command line interface, and you've got yourself an IRC client with 4 more lines of code.
- Port some commands from Lambdabot.
Author: Don Stewart | http://www.haskell.org/haskellwiki/index.php?title=Roll_your_own_IRC_bot&diff=6492&oldid=6477 | CC-MAIN-2014-42 | refinedweb | 167 | 70.23 |
This project is meant for those who want to use a basic analog stick from an Arduino parts kit as a sensor to detect items like car keys or other weighted objects that can dangle from the analog stick. While this project does not aim to give full analog control to the Raspberry Pi, this will be enough to get an introduction to Internet of Things (IoT) devices.
Step 1: Interfacing the Analog Stick With the Arduino
Although this may vary between different models, analog sticks usually have the following:
- +5V - Input for 5 volts
- GND - connected to Ground
- VRx - how far the stick is pushed horizontally, outputs a value ranging from 0 (pushed all the way to the left), and 1023 (pushed all the way to the right)
- VRy - how far the stick is pushed vertically, outputs a value ranging from 0 (pushed all the way down), and 1023 (pushed all the way up)
- SW - the little button switch that sends a signal when the stick is depressed
For this tutorial, we will have +5V and GND connected to the 5V power and Ground on the Arduino, and have VRx and VRy connected to pin A0 and A1, respectively. This tutorial will not be using the switch. We also have a wire going from Digital Pin 10 on the Arduino to GPIO pin 12 on the Raspberry Pi. Ideally, one would want to use a GPIO header for the Raspberry Pi.
On the Arduino, we want to have a small piece of code running that reads the analog values from the stick, and sends a High signal when the stick reaches a certain threshold. In main.ino, we want it to write a HIGH signal on Digital Pin 10 if the stick is pushed 64 units from the center of the stick. If not, we write a LOW signal. This "deadzone" is meant to reduce false positives, since the stick can be fairly sensitive. Compile and upload the code to the Arduino using the Arduino code editor, or whatever you prefer to use to upload code to the Arduino.
// main.ino
void setup() {
  Serial.begin(9600);
  pinMode(10, OUTPUT);
}

void loop() {
  // read values from the analog stick
  int x = analogRead(A0);
  int y = analogRead(A1);

  // send a HIGH signal if the stick is pushed more than 64 units
  // (around 6%) from the center on either axis...
  if (x > 576 || x < 448 || y > 576 || y < 448) {
    digitalWrite(10, HIGH);
  }
  // ...otherwise, send a LOW signal
  else {
    digitalWrite(10, LOW);
  }
  delay(1000);
}
Step 2: Setting Up the Raspberry Pi
As mentioned previously, there is a wire attached to GPIO Pin 12 on the Raspberry Pi. For more information on how to hook up wires to the GPIO pins on your Raspberry Pi, see this Instructable. Note that this could vary depending on which model you have.
This tutorial uses Raspbian on the Raspberry Pi. Other Linux distributions may work, as long as it runs Python.
This tutorial also assumes that the Raspberry Pi is connected to the internet. Please connect it to the internet and get the IP address with ifconfig before continuing as well.
Before we continue, we have to install the following dependencies on our Pi. In the command line type in the line pip install flask flask-wtf flask-restful requests and press Enter. (The json module ships with Python's standard library, so it doesn't need to be installed.)
Somewhere in our Raspberry Pi, we want to add some files and folders for this project. In your project folder, create the following files and folders:
/project
    iotapp.py
    gpio_submit.py
    /appfolder
        __init__.py
        routes.py
        /templates
            index.html
Step 3: Creating the Flask Backend
Now that we have created all of these files, let's add some code and get it working.
Starting with routes.py, we'll begin getting the big part of the backend done:
from appfolder import appFlask
from flask import render_template, redirect
data = {'timestamp':['test_date'], 'keys':[True]}
@appFlask.route('/')
@appFlask.route('/index')
def index():
    print(data['timestamp'][-1], data['keys'][-1])
    return render_template('index.html', title='Home', dict=data, keys=data['keys'][-1])
We have data as a dictionary in order to store information. More on this later.
In the following lines, we define what happens when someone wants to visit the site that's being hosted on the Pi, with the format of either pi ip:5000/ or pi ip:5000/index . This directs the user to the main homepage. Using render_template, we supply the arguments dict and keys, which will be used in the template as variables.
Now, on to our template index.html
<html>
<head>
<title>Cool IoT template</title>
</head>
<body>
<h1>Is the keyholder occupied?</h1>
<p>
{% if keys == True %}
Yes.
{% else %}
No.
{% endif %}
</p>
<p>
Latest Change: {{ dict.timestamp[-1] }}
</p>
</body>
</html>
Within our first <p> tag we have a conditional statement. We check the value of keys and display a message depending on whether or not the statement is true. These conditionals are enclosed with {% and %}.
Within our second <p> tag we reference another variable. If we want to take a value from the dictionary we have as an argument, we can just reference keys with dictionary.key. Indexing lists also works as it normally does with Python. With dict.timestamp[-1], we're looking at the last object in the list for the key timestamp in the dictionary dict.
Step 4: Setting Up the REST API on the Backend
In order for our app to find out when there are keys present, we run a simple python script called gpio_submit.py in the background. When it detects a change, it sends a POST request to the Flask server on the Pi. Right now, in routes.py, we don't have this behavior, so we're going to add some lines in this file:
from flask_restful import Resource, Api, reqparse, abort
api = Api(appFlask)
parser = reqparse.RequestParser()
parser.add_argument('timestamp')
parser.add_argument('keys')

def str_to_bool(str):
    if str.lower() == 'true':
        return True
    else:
        return False

class Submit(Resource):

    def get(self):
        return 404

    def post(self):
        args = parser.parse_args()

        if (args['timestamp'] == None) or (args['keys'] == None):
            return '401 unauthorized', 401
        print(args['timestamp'], args['keys'])

        data['timestamp'].append(args['timestamp'])
        data['keys'].append(str_to_bool(args['keys']))

        return data['timestamp'], 201
api.add_resource(Submit, '/submit')
There's a lot to take in, so let's look at this step by step:
- We begin by instantiating the api, and supply all of the arguments needed
- We also have a helper function in place to convert a string to a boolean object. It checks to see if the lowercase string is equal to 'true', and returns True if this is the case, else it returns False
- We have a new class called Submit, taking Resource as an argument. This will allow us to let this class work with our Flask server
- In this class, we have a function for post. This will check to see if both of the arguments are present, and if it is it will add the values to the dictionary we established before
- In the last line, we register the Submit class as an API resource at the /submit endpoint
So, in the end, when we do a POST request on pi ip:5000/submit with the appropriate arguments, it will add the data to the dictionary.
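Stripped of Flask, the bookkeeping the endpoint performs can be sketched in plain Python (the argument values here are made up):

```python
data = {'timestamp': ['test_date'], 'keys': [True]}

def str_to_bool(s):
    return s.lower() == 'true'

def handle_post(args):
    # Mirrors Submit.post() above: reject missing fields,
    # otherwise append to the in-memory store.
    if args.get('timestamp') is None or args.get('keys') is None:
        return '401 unauthorized', 401
    data['timestamp'].append(args['timestamp'])
    data['keys'].append(str_to_bool(args['keys']))
    return data['timestamp'], 201

body, status = handle_post({'timestamp': '2019-01-01', 'keys': 'True'})
print(status)        # 201
print(data['keys'])  # [True, True]
```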
Step 5: Creating a Helper Tool to Send Information to the Flask Server
Now that we have that together, we want the Raspberry Pi to look at the signal on GPIO pin 12, and send a POST request to the server when the signal changes. In gpio_submit.py, add the following:
import RPi.GPIO as GPIO
from time import sleep
import json
import requests
from datetime import datetime
uri = 'http://127.0.0.1:5000/submit'
GPIO.setmode(GPIO.BCM)
GPIO.setup(12, GPIO.IN)
This contains all of our initital items, including imports, the url for the server, and setting up the GPIO pins to receive an input on pin 12. Since this is most likely running on the Raspberry Pi with the Flask server, we use the IP address 127.0.0.1 as this is the IP address for anything local. If the Flask server is somewhere else, change the IP address to the IP for the Flask server.
def submit_data(uri, keys):
    data = {'timestamp': datetime.utcnow(), 'keys': str(keys)}
    r = requests.post(uri, data=data)
This is a helper function we created in order to post data to the server. This will be used whenever there's a change in the signal we receive on GPIO pin 12
keys = GPIO.input(12)
if keys == True:
    submit_data(uri, True)
else:
    submit_data(uri, False)
After we have everything established, we run this bit of code. This will initialize keys to whatever value we are receiving on pin 12, and send that information to the server.
try:
    while(1):
        print(GPIO.input(12))
        if (keys == True):
            if GPIO.input(12) == False:
                print("keys no longer present")
                submit_data(uri, False)
                keys = False
        else:
            if GPIO.input(12) == True:
                print("keys present")
                submit_data(uri, True)
                keys = True
        sleep(5)
except KeyboardInterrupt:
    GPIO.cleanup()
Over here we have an infinite while loop running within a try statement. This will continue running until there's a keyboard interrupt, as in you press ctrl+c while this is running. In this while loop, this will check pin 12 to see if there has been a change since the last iteration of the while loop. If this is the case, it sends this information to the Flask server, and updates the value of keys. In the end of this while loop, it sleeps for 5 seconds. This can be easily changed, or even removed entirely.
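The change-detection idea, separated from the GPIO specifics, looks like this (with a list of simulated readings standing in for GPIO.input):

```python
def detect_changes(readings):
    """Yield (index, new_state) whenever the input flips,
    mirroring the polling loop above."""
    state = readings[0]
    for i, value in enumerate(readings[1:], start=1):
        if value != state:
            state = value
            yield i, value

# keys removed at step 2, put back at step 4:
events = list(detect_changes([True, True, False, False, True]))
print(events)  # [(2, False), (4, True)]
```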
Step 6: Running the Code
Now that we have everything established, we will run the code.
For our flask app, we will have to do something first. In the directory of iotapp.py, type and execute the following:
export FLASK_APP=iotapp.py
This will set the file for our Flask web application when it runs.
Now, type in python -m flask run --host=0.0.0.0. This will run the Flask app, and by setting the host to 0.0.0.0, this will allow other devices on the same network to access the Pi if they have the IP address for the Pi.
In a separate terminal window, ssh connection, or screen, go to the directory for gpio_submit.py and type in and execute python gpio_submit.py.
Using a device on the same network, connect to the IP on the Raspberry Pi on port 5000 and see your fruits of labor. Place your keys or some sort of weight on the analog stick, refresh the page, and confirm that everything's working. Position the components in a desirable way, and then have fun with your new keyholder.
Saturday, May 28, 2011
stackoverflow - 20k rep
Four months after crossing the 10k milestone, I've now achieved a reputation of 20k on stackoverflow! The following table shows some interesting stats on my time at SO. As I mentioned in my previous post, it's hard juggling work and stackoverflow. The only times I get to use it are during the weekends and in the evenings after work. Most of the questions have already been answered by then! However, I still like browsing the questions and checking out the answers. 30k, here I come!
Posted by Fahd Shariff at 12:29 PM 0 comments
Labels: stackoverflow
Saturday, May 14, 2011
Lazily Instantiate a Final Field
Java only allows you to instantiate
final fields in your constructor, like this:
public class Test {
  private final Connection conn;

  public Test() {
    conn = new Connection();
  }

  public Connection getConnection() {
    return conn;
  }
}

Now, let's say that this field is expensive to create and so you would like to instantiate it lazily. We would like to be able to do something like the following (which won't compile because the final field has not been initialised in the constructor):
// does not compile!
public class Test {
  private final Connection conn;

  public Test() {
  }

  public Connection getConnection() {
    if (conn == null)
      conn = new Connection();
    return conn;
  }
}

So, there is no way to lazily instantiate a final field. However, with a bit of work, you can do it using Memoisation (with Callables). Simply wrap your field in a final
Memo as shown below:
public class Memo<T> {
  private T result;
  private final Callable<T> callable;
  private boolean established;

  public Memo(final Callable<T> callable) {
    this.callable = callable;
  }

  public T get() {
    if (!established) {
      try {
        result = callable.call();
        established = true;
      } catch (Exception e) {
        throw new RuntimeException("Failed to get value of memo", e);
      }
    }
    return result;
  }
}

public class Test {
  private final Memo<Connection> conn;

  public Test() {
    conn = new Memo<Connection>(new Callable<Connection>() {
      public Connection call() throws Exception {
        return new Connection();
      }
    });
  }

  public Connection getConnection() {
    return conn.get();
  }
}
Posted by Fahd Shariff at 3:42 PM 0 comments
Links to this post
Labels: Java, programming | https://fahdshariff.blogspot.com/2011/05/ | CC-MAIN-2019-43 | refinedweb | 349 | 50.57 |
In article <address@hidden>, address@hidden (Kim F. Storm) writes:

> Kenichi Handa <address@hidden> writes:
>> When a font-related attribute of the default face is
>> changed, set_font_frame_param (xfaces.c) is called. So,
>> perhaps calling it or simulating what it does is the right
>> thing.

> Well, the Fmodify_frame_parameters call in that function
> will cause a call to x_set_font.
> x_set_font calls recompute_basic_faces which calls
> realize_basic_faces, which calls realize_default_face.

Ummm.

> To me, it sound like realize_default_face would be a good
> place to fix this, but I'm not an expert on this.

It seems that there's no expert other than Gerd. Anyway, I've just tried this patch along your idea.

*** xfaces.c	13 Apr 2006 09:46:44 +0900	1.345
--- xfaces.c	10 May 2006 13:06:28 +0900
***************
*** 7072,7077 ****
--- 7072,7087 ----
      check_lface (lface);
      bcopy (XVECTOR (lface)->contents, attrs, sizeof attrs);
      face = realize_face (c, attrs, 0, NULL, DEFAULT_FACE_ID);
+
+ #ifdef HAVE_WINDOW_SYSTEM
+ #ifdef HAVE_X_WINDOWS
+   if (face->font != FRAME_FONT (f))
+     /* As the font specified for the frame was not acceptable as a
+        font for the default face (perhaps because auto-scaled fonts
+        are rejected), we must adjust the frame font.  */
+     x_set_font (f, build_string (face->font_name), Qnil);
+ #endif /* HAVE_X_WINDOWS */
+ #endif /* HAVE_WINDOW_SYSTEM */
      return 1;
  }

It seems that it works well. I tried:

% emacs -fn -*-terminus-medium-r-*-*-17-*-*-*-*-*-iso8859-1

and did M-x ses-mode. Column alignment seems to be correct now. But, I really don't know if it's ok to call recompute_basic_faces recursively. Could you also test and verify it?

---
Kenichi Handa
address@hidden
Coffeehouse Thread (10 posts)
Forum Read Only
This forum has been made read only by the site admins. No new threads or comments can be added.
when are we going to see how things are for Orcas?
Hi guys,
I just wanted to know if C9 is going to give us some view as to how things are going for Visual Studio Orcas and .Net framework 3.0? How is the C# team and vb.net team comming along? what is the new cool stuff they are doing?
Its been awhile since C9 gave us a view of what is happening there. So when is it going to happen? any time soon?
We just released a new tech preview for Linq (due to ship in C# 3.0 and VB9).
yag
Please stay, don't leave...we like softies around here
Man... You miss 1 day on the Linq forum and look what happens. Cheers.
I know that it sounds like an anomaly in an OOP language... but I would really like to see an 'include' directive, which can pull code in from a .h file, and be able to provide reference to it in the IDE.
There are plenty of places where using an object is not a suitable way to re-use code, and an 'include' directive is more applicable.
eg. -- Using directives... you can't 'objectify' these, and so need to type them into every class that needs them.
Far easier would be to use a header file to store these as re-usable elements... something along these lines perhaps... ??
MyHeaders.h
================================
include UsingCommon
{
using System;
using System.Collections;
using System.ComponentModel;
using System.Configuration;
}
include UsingData
{
using System.Data;
using System.Data.Common;
}
include UsingText
{
using System.Text;
using System.Text.RegularExpressions;
}
include UsingWeb
{
using System.Web;
using System.Web.Security;
using System.Web.Services;
using System.Web.SessionState;
using System.Web.UI;
using System.Web.UI.HtmlControls;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
}
include UsingXml
{
using System.Xml;
using System.Xml.XPath;
using System.Xml.Xsl;
}
MyClass.cs
=====================================
include UsingCommon;
include UsingWeb;
Not sure I am sold on that yet. That will tend to be "using" more namespaces than you actually need. Not sure.
Yeah, what would be the point in doing that? The compiler will pretty quickly tell you what is missing.
Start putting together a large project with hundreds of classes... and you'll soon see why an include directive would be handy.
Have worked with ~100 and never found that an issue. If you really need that sort of thing, you can define snippets. I never really cared for rats nest of *.h files in windows. Different strokes.
...and then you find you need to make a change.
You can change the header file, or you go find/edit every instance. | https://channel9.msdn.com/Forums/Coffeehouse/186943-when-are-we-going-to-see-how-things-are-for-Orcas | CC-MAIN-2018-09 | refinedweb | 494 | 71.61 |
public class Foo : IFoo
{
    public List<string> TmpFields = new List<string>();
    public List<string> GroupFields;

    public Foo()
    {
        Loaded = true;
        Query = "";
    }
    :
}
After the WTF?!?! caused by the public members, and one being initialized while the other is not, I was pulling my hair out trying to figure out where the hell Loaded and Query got implemented because Foo inherits from IFoo.
Well, I finally managed to look at the code in a real text editor rather than notepad and jumped to the definitions. Apparently somebody decided to go against convention and declare this:
public abstract class IFoo
{
    public bool Loaded;
    public string Query;
    :
}
WTF?!?! If it's not an interface, don't use the naming convention for interfaces! Forgive me if I'm having fantasies of 2x4's and dark alleys. I don't understand how I consistently get stack ranked below this guy every review season.
| http://www.dreamincode.net/forums/topic/379428-and-the-nightmare-continues/ | CC-MAIN-2017-47 | refinedweb | 146 | 59.94 |
ext3 or ReiserFS? Hans Reiser Says Red Hat's Move Is Understandable
Red Hat's Decision is Conservative, Not Radical
Red Hat's decision to employ ext3 as the default filesystem in its upcoming release has sparked considerable interest among technically savvy Linux users. But it is not the only, nor in many ways the best, of the journaling filesystems available to users of modern Linux kernels. Yet it has attributes that make it an attractive first step for a large distribution, chief among them backward compatability.
A brief and incomplete explanation for those who have not followed journaling filesystem development:
The traditional Linux filesystem, ext2, is ideally suited for fairly small files on fairly small drives. As the size of drives has grown, and the size of files has, too, performance has suffered. Some of this is in gaining access to data on the drive, as wasted space -- "slack" -- and fragmentation have grown. Some comes in the filesystem's recovery time in the event of power failure or other improper shutdown. Enduring a filesystem check by e2fsck on a one-gigabyte drive is easy; the same check on a 40-gigabyte drive can be very time consuming. Some comes in the bitmap method of keeping track of the filesystem -- satisfactory for small drives with few files, it's inefficient with the large drives commonly employed today. Hence, journaling filesystems.
These keep track of the state of the drive in a file, called a journal, so that restarting after an improper shutdown requires reference to that lone file for restoration of the filesystem's state instead of a scan of the entire drive. Additionally, depending on their design, journaling filesystems make more efficient use of drive space and make data reads and writes faster over a wide variety of file sizes. To top it off, journaling filesystems offer what amounts to dynamic space allocation, meaning that the system administrator needn't guess at appropriate partition sizes at the time of installation, and they offer the potential of spanning drives in a single logical volume. A journaling filesystem is something that becomes essential as programs and their data files (and the drives that hold them) grow huge.
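The recovery idea can be illustrated with a toy write-ahead journal (purely illustrative; no real filesystem lays out data this way):

```python
journal = []   # the on-disk journal: operations recorded before applying
fs = {}        # the "filesystem" itself

def write(path, contents):
    journal.append((path, contents))  # 1. log the intent
    fs[path] = contents               # 2. apply the change

write("/etc/motd", "hello")
write("/var/log/app", "42")

# Crash: the applied state is lost, but the journal survives.
fs = {}

# Recovery replays the journal instead of scanning the whole disk.
for path, contents in journal:
    fs[path] = contents

print(fs)  # {'/etc/motd': 'hello', '/var/log/app': '42'}
```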
Linux does not have a journaling filesystem. It has four: ext3, ReiserFS, XFS, and JFS. Well, three and a half: ext3 is arguably the half, since it is essentially ext2 with a journal added.
Red Hat's adoption of ext3 is a first, tentative step toward a journaling filesystem. When the company's plans became known with its release of the second beta of its upcoming release, Michael K. Johnson, chief of the company's kernel hackers, was quick to provide a rationale.
"Why do you want to migrate from ext2 to ext3? Four main reasons: availability, data integrity, speed, and easy transition," he wrote. Availability, he pointed out, involves quick recovery from a system interruption rather than enduring e2fsck taking the long way around. The journaling provided by ext3 makes avoiding data corruption likelier. "Despite writing some data more than once, ext3 is often faster (higher throughput) than ext2 because ext3's journaling optimizes hard drive head motion," he wrote. Perhaps the determining factor, though, was Johnson's fourth reason.
"It is easy to change from ext2 to ext3 and gain the benefits of a robust journaling filesystem, without reformatting," he said. "That's right, no need to do a long, tedious, and error-prone backup, reformat, restore operation in order to experience the advantages of ext3."
Johnson said that Red Hat's choice was not meant to disparage any of the other new filesystems, but instead was the most sensible one for the biggest commercial distribution right now. Indeed, the developers of the various journaling filesystems, too, have gone to considerable lengths to avoid a holy war of the kind that erupts frequently among backers of different projects that perform similar functions.
"I personally think filesystems should be rewritten from scratch every 5 years, but there are lots of people who think quite differently on this," said Hans Reiser, for whom the Reiser filesystem is named, in an email interview yesterday. "Reiser4 is going to have a completely new core engine, and quite a lot of people think that we should just make lots of tweaks to what we have instead. It is extremely expensive, risky, and just plain hard work, for us to do that core engine rewrite, and yet I think it just has to be done. I could give you lots of logical reasons why we are doing it, but those aren't the real reasons why we rewrite when other filesystems don't. People just have different styles, and fortunately both styles work in their way, each with different effects and benefits."
While pointing to benchmarks that demonstrate a substantial speed increase when using the Reiser filesystem as opposed to ext3, Reiser said there's sense in Red Hat's more circumspect approach.
"ext3 is in its way an excellent filesystem written by very talented programmers, and the upgrade path is surely easy for users and distro alike," he said. "The upgrade path issue really makes it a conservative rather than crazy decision for RedHat; I can easily understand their decision..
"Reiser4 is designed to be highly extensible thanks to DARPA's funding us to do plugins. There are lots of semantic enhancements like inheritance and auditing coming down the pipe in version 4. We want, in our small way, to help make Linux not just another Unix, but something novel and cutting edge. This is the main reason users should find Reiser4 of interest. Not every distro is attracted to pushing past traditions though, and the beauty of Linux is that users get to choose what distro they need. I think that Microsoft is going to heat up the race for semantic innovation in the filesystem namespace in the next few years. We are going to try to innovate faster than Microsoft in the filesytem namespace enrichment arena, and I hope you will wish us luck in it."
There is an enormous body of highly technical literature explaining not just the superiority but the inevitability of journaling filesystems. While not entirely one of these in the strictest sense, ext3 provides a painless way for nontechnical users to enjoy some of the benefits of the new high-power systems, while keeping one hand on all that is familiar. But it seems clear that, as storage, code, and user data grow in size, and as flexibility in storage options grows, today's cutting edge will be in universal use tomorrow, and ext2 and its derivatives will take a place in history -- an honored place, to be sure, but history nonetheless. For now, users have the choice to dive in head first, dip their toes, or remain entirely ashore. | http://www.linuxplanet.com/linuxplanet/reports/3726/1/ | CC-MAIN-2017-43 | refinedweb | 1,120 | 57.71 |
SYSPAGE_ENTRY()
Return an entry from the system page
Synopsis:
#include <sys/syspage.h>

#define SYSPAGE_ENTRY( entry ) ...
Since:
BlackBerry 10.0.0
Arguments:
- entry
- The entry to get; see below.
Library:
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
Description:
The SYSPAGE_ENTRY() macro returns a pointer to the specified entry in the system page. The entries include fields such as:
- uint64_t cycles_per_sec — the number of CPU clock cycles per second for this system. For more information, see ClockCycles().
Returns:
A pointer to the structure for the given entry.
Examples:
#include <stdio.h>
#include <stdlib.h>
#include <sys/syspage.h>

int main( void )
{
    printf( "This system has %lld cycles/sec.\n",
            SYSPAGE_ENTRY( qtime )->cycles_per_sec );

    return EXIT_SUCCESS;
}
Classification:
Caveats:
SYSPAGE_ENTRY() is a macro.
Last modified: 2014-06-24
A skeleton version of the programs will appear in your cs35/labs/07 directory when you run update35. The program handin35 will only submit files in this directory.
I encourage you to work with a partner on this lab and the
entire project.
This lab is the second part of a larger project to build a web browser with a search engine for a limited portion of the web. In the first part of this project you summarized the content of a web page by creating a binary search tree that contained the word frequencies of all words that appeared in the page that were not in a given file of words to ignore.
In the second part of this project, you will read in a file of URLs
and a file of words to ignore, convert the URLs into filenames, and
summarize the content of each file as in the previous lab. Then you
will repeatedly prompt the user for a search query. You will
determine how often each of the query words appear in the summarized
web pages. For example, suppose a user enters the query "computer
science department". For each web page, your program should
determine the sum of the number of times the individual words
"computer", "science", and "department"
appear in a page. You will convert this sum into a priority and
associate it with the URL and store them in a priority queue. Once
all the pages have been checked, you will report the relevant pages in
priority order.
Your program will take two command-line arguments: the filename of a list of URLs stored in the Computer Science domain and the filename of a list of words to ignore during the analysis. For example:
% ./part2 urls.txt ignore.txt
This will output:
Web Search Engine
Summarized 13 URLs
Enter a query or type 'quit' to end
Search for: natural language processing
Relevant pages:
: 17
: 4
: 2
: 2
: 1
Search for: Billings Montana
No relevant pages found
Search for: developmental robotics
Relevant pages:
: 16
: 2
Search for: maze
Relevant pages:
: 38
: 33
: 1
: 1
Search for: analysis of algorithms
Relevant pages:
: 8
: 6
: 6
: 4
: 4
: 4
: 3
: 2
: 1
: 1
Search for: quit
Goodbye
Notice that only web pages that contain at least one word in common
with the query are shown, and that for some queries there may be no
relevant pages in our limited search domain.
A number of classes are provided for you. Classes, programs or files from the previous lab that you need to copy into the current lab directory are underlined below. Classes, programs or files that you have to complete for this lab appear in bold below.
URL:  file: /home/userName/public_html/index.html or index.php
URL:  file: /home/userName/public_html/dirName/.../fileName

Look at readFile.cpp to see a demonstration of how to read in a file one line at a time. Look at the C++ string reference to see useful methods for manipulating strings.
char line[100];
cin.getline(line, 100);
string query = (const char *)line;

The above code assumes that the user will never enter a line longer than 100 characters. Once you have the entire query as a string, you can then use C++ string methods to break it up into individual words.
#include <fstream>
#include <string>
using namespace std;

bool file_exists(string filename){
  ifstream inp;
  inp.open(filename.c_str(), ifstream::in);
  inp.close();
  return !inp.fail();
}

Here's some starter code for removeMin in the PQ. You still have to implement bubbleDown.
template <typename KEY, typename VALUE>
KVPair<KEY,VALUE> HeapPQ<KEY,VALUE>::removeMin() {
  if (isEmpty()){
    throw runtime_error("priority queue is empty, cannot getMin");
  }
  KVPair<KEY,VALUE> kv = *array[1];
  delete array[1];
  array[1] = NULL;
  swap(n, 1);
  n--;
  bubbleDown(1);
  return kv;
}
Version history
Version 2.3.0, released 2021-09-06
- Commit ac367e2: feat: Regenerate all APIs to support self-signed JWTs
Version 2.2.0, released 2021-05-26
No API surface changes; just dependency updates.
Version 2.1.0, released 2020-11-11
- Commit 9e2dd7a: feat: added support for span kind (clients can now specify the span kind of spans)
- Commit f83bdf1: fix: Apply timeouts to RPCs without retry
- Commit ec6af90: chore: set Ruby namespace in proto options
- Commit 947a573: docs: Regenerate all clients with more explicit documentation
Version 2.0.0, released 2020-03-18
No API surface changes compared with 2.0.0-beta01, just dependency and implementation changes.
Version 2.0.0-beta01, released 2020-02-19
This is the first prerelease targeting GAX v3. Please see the breaking changes guide for details of changes to both GAX and code generation.
Version 1.1.0, released 2019-12-09
- Commit 8f268e0: Some retry settings are now obsolete, and will be removed from the next major version
- Commit 50658e2: Adds resource name Format methods
Version 1.0.0, released 2019-07-10
Initial GA release. | https://cloud.google.com/dotnet/docs/reference/Google.Cloud.Trace.V2/latest/history | CC-MAIN-2022-05 | refinedweb | 187 | 57.27 |
DHCP Address Reservation
I was wondering whether this config would work or not. I have a printer that needs to be configured with a static IP address and it can't be done on the printer itself but instead via DHCP address reservation. This is what I configured on my router which acts as a DHCP server for a small office.
ip dhcp pool Auckland-Main
import all
network 10.210.3.0 255.255.255.128
domain-name mediamonitors.com.au
dns-server 10.209.3.7 10.95.3.10
default-router 10.210.3.1
ip dhcp pool Auckland-Static
import all
host 10.210.3.32 /25
client-identifier xxxx.xxxx.xxxx
client-name ABCD
What happens is that when the printer is turned on, it will pick up a DHCP address from the Auckland-Main pool, but when I do "show ip dhcp bind", the router actually shows two IP addresses. One is from Auckland-Main and the other is from Auckland-Static.
Thanks in advance for your help.
Vincent..
Instead of leaving it to DHCP (which will sometimes change your printer's IP), you can create a static binding on the DHCP server (router) to make the IP address of the printer constant at all times. I think this is the only way to go about this, unless you can manually configure the static IP.
use this URL for reference:
Hope this helps.. all the best.. rate replies if found useful..
Raj | https://supportforums.cisco.com/discussion/9967451/dhcp-address-reservation | CC-MAIN-2017-34 | refinedweb | 245 | 75.71 |
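For reference, one detail that often matters with manual bindings like the one above: on Ethernet, IOS usually expects the client-identifier to be the MAC address prefixed with the media-type byte 01. So if the printer's MAC were 0011.2233.4455, the binding would look something like this (a sketch only; the pool name and mask just mirror the original post):

```
ip dhcp pool Auckland-Static
 host 10.210.3.32 255.255.255.128
 client-identifier 0100.1122.3344.55
```

If the identifier does not match what the client actually sends, the reservation is never used and the client falls back to the dynamic pool, which would produce exactly the two bindings described above.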
RAM problem in script execution
Hi experts!
I wrote the code below. After 1-2 hours of execution time the RAM of my laptop (8 GB) fills up and the system crashes:
from scipy.stats import uniform
import numpy as np
import os          # needed for os.makedirs below

cant_de_cadenas = [400, 600, 700, 800]

os.makedirs('directory')
np.savetxt(...)    # here 'cant_de_cadenas' is saved as .csv in folder 'directory'

A1 = np.array([])
...
A6 = np.array([])

import time
for N in cant_de_cadenas:
    start = time.clock()
    B1 = np.array([])
    ...
    B6 = np.array([])
    for u in srange(100):
        array_1 = uniform.rvs(loc=-2, scale=4, size=N)
        array_2 = uniform.rvs(loc=-2, scale=4, size=N)
        array_3 = uniform.rvs(loc=-1, scale=7, size=N)
        array_4 = 1/np.tan(array_3)
        array_5 = uniform.rvs(loc=-1, scale=1, size=N)
        array_6 = function(array_5)
        array_7 = function(array_6, array_5 and array_4)
        array_8 = function(array_1 and array_7)
        array_9 = function(array_2 and array_7)
        M = np.zeros([N+4, N+4])
        for j in srange(N+4):
            if j > 0:
                # two arrays (C and D) with len=j-1 are created
                for k in srange(j):
                    if C[k] <= 0 and D[k] <= 0:
                        M[j,k] = 1
            if j+1 < N+4:
                # two arrays (C and D) with len=((N+4)-j-1) are created
                for k in srange((N+4)-j-1):
                    if C[k] <= 0 and D[k] <= 0:
                        M[j,k+j+1] = 1
        # an algorithm with matrix M is executed and values 'b1' and 'b2' are generated
        M_hor = M.copy()
        # some values in M_hor are changed, an algorithm with matrix M_hor is executed and values 'b3' and 'b4' are generated
        M_ver = M.copy()
        # some values in M_ver are changed, an algorithm with matrix M_ver is executed and values 'b5' and 'b6' are generated
        B1 = np.append(B1, b1)
        ...
        B6 = np.append(B6, b6)
    A1 = np.append(A1, sum(B1)/100)
    ...
    A6 = np.append(A6, sum(B6)/100)
As you can see, len(A1) = ... = len(A6) = len(cant_de_cadenas), because An holds the average values over the 100 repetitions collected in Bn.

While many arrays are created (array_1, array_2, etc., each of length N), in each of the 100 cycles of 'for u in srange(100)' these arrays are overwritten (with the same name). The same applies to the B1,...,B6 arrays: in each of the len(cant_de_cadenas) cycles of the outer 'for N in cant_de_cadenas' loop these arrays are overwritten (with the same name).

I tried with gc and gc.collect() and nothing works! In addition, I can't use the memory_profiler module in Sage.

What am I doing wrong? Why does the memory fill up while running (it starts at 10% of RAM used and within 1-2 hours it is totally full)?
Please help me, I'm totally stuck!
Thanks a lot! | https://ask.sagemath.org/question/10462/ram-problem-in-script-execution/ | CC-MAIN-2020-40 | refinedweb | 452 | 59.5 |
This notebook is an element of the risk-engineering.org courseware. It can be distributed under the terms of the Creative Commons Attribution-ShareAlike licence.
Author: Eric Marsden [email protected]
This notebook contains an introduction to use of Python, SciPy, SymPy and the SALib library for sensitivity analysis. It uses some Python 3 features. Consult the accompanying lecture slides for details of the applications of sensitivity analysis and some intuition and theory of the technique.
import numpy import scipy.stats import pandas import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D %matplotlib inline %config InlineBackend.figure_formats=['svg']
The Rosenbrock function is a classic in uncertainty analysis and sensitivity analysis.
def rosenbrock(x1, x2): return 100 * (x2 - x1**2)**2 + (1 - x1)**2
Since we are lucky enough to be working in a small number of dimensions, let’s plot the function over the domain $[-2, 2]^2$ to get a feel for its shape.
N = 100 fig = plt.figure() ax = fig.gca(projection='3d') x = numpy.linspace(-2, 2, N) y = numpy.linspace(-2, 2, N) X, Y = numpy.meshgrid(x, y) ax.plot_surface(X, Y, rosenbrock(X, Y), alpha=0.8) ax.set_facecolor("white") plt.xlabel(r"$x_1$") plt.ylabel(r"$x_2$");
We can undertake a local sensitivity analysis by calculating the local derivatives of the Rosenbrock function, with respect to the two input parameters. The local derivatives can be estimated numerically, or calculated analytically (if you know the analytical form of the function you are interested in, and if the function is not excessively difficult to differentiate).
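As a quick aside, the "estimated numerically" route is just finite differences; a sketch (the step size h = 1e-6 is an arbitrary choice):

```python
def rosenbrock(x1, x2):
    return 100 * (x2 - x1**2)**2 + (1 - x1)**2

def local_gradient(f, x1, x2, h=1e-6):
    # Central differences approximate the two partial derivatives.
    d1 = (f(x1 + h, x2) - f(x1 - h, x2)) / (2 * h)
    d2 = (f(x1, x2 + h) - f(x1, x2 - h)) / (2 * h)
    return d1, d2

# Near the flat region around (0, 0) the gradient is small...
print(local_gradient(rosenbrock, 0.0, 0.0))    # close to (-2, 0)
# ...while at (-2, -2) the function is much steeper.
print(local_gradient(rosenbrock, -2.0, -2.0))  # close to (-4806, -1200)
```

For reference, the exact partial derivatives at (-2, -2) work out to -4806 and -1200, so the central-difference estimates can be checked against them.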
In this case our Rosenbrock function is easy to differentiate by hand, but let us demonstrate the use of the SymPy package to do symbolic differentiation with the computer. You may need to install the SymPy package for your Python installation.
You may be interested in the minireference.com tutorial on SymPy.
import sympy from sympy.interactive import printing printing.init_printing(use_latex='mathjax')
x1 = sympy.Symbol('x1') x2 = sympy.Symbol('x2') rosen = 100 * (x2 - x1**2)**2 + (1 - x1)**2 d1 = sympy.diff(rosen, x1) d1
d2 = sympy.diff(rosen, x2) d2
The rosenbrock function looks pretty flat around $(0, 0)$; let’s check the local sensitivity in that location. First check $\frac{∂f}{∂x_1}(0, 0)$, then $\frac{∂f}{∂x_2}(0, 0)$
d1.subs({x1: 0, x2: 0})
d2.subs({x1: 0, x2: 0})
The function looks much steeper (higher local sensitivity) around $(-2, -2)$; let’s check that numerically.
d1.subs({x1: -2, x2: -2})
d2.subs({x1: -2, x2: -2})
At $(-2, 2)$ the sensitivity should be somewhere in between these two points.
d1.subs({x1: -2, x2: 2})
d2.subs({x1: -2, x2: 2})
We can use SciPy’s optimization functionality to find the minimum of the Rosenbrock function on the domain $[-2, 2]^2$, then check that (as we expect) the local sensitivity at the minimum is zero. The last argument [2, 2] to the function scipy.optimize.fmin is the starting point of the optimization search.
import scipy scipy.optimize.fmin(lambda x: rosenbrock(x[0], x[1]), [2, 2])
Optimization terminated successfully. Current function value: 0.000000 Iterations: 62 Function evaluations: 119
array([0.99998292, 0.99996512])
The subs function in SymPy does variable substitution; it allows you to evaluate an expression with given values for the variables (x1 and x2 in this case).
d1.subs({x1: 1, x2: 1})
d2.subs({x1: 1, x2: 1})
We can use the SALib library (available for download from) to undertake a global sensitivity analysis, using the Sobol’ method. You may need to install this library. If you’re using Linux, a command that may work is
sudo pip install SALib
or if you’re using Python version 3:
sudo pip3 install SALib
# this will fail if you haven’t installed the SALib library properly from SALib.sample import saltelli from SALib.analyze import sobol
N = 1000 problem = { 'num_vars': 2, 'names': ['x1', 'x2'], 'bounds': [[-2, 2], [-2, 2]] } sample = saltelli.sample(problem, N) Y = numpy.empty([sample.shape[0]]) for i in range(len(Y)): x = sample[i] Y[i] = rosenbrock(x[0], x[1]) sobol.analyze(problem, Y)
{'S1': array([0.49911118, 0.30018188]), 'S1_conf': array([0.08237072, 0.05299655]), 'S2': array([[ nan, 0.21626586], [ nan, nan]]), 'S2_conf': array([[ nan, 0.17465001], [ nan, nan]]), 'ST': array([0.707303 , 0.51033275]), 'ST_conf': array([0.07999571, 0.07563263])}
S1 contains the first-order sensitivity indices, which tell us how much $x_1$ and $x_2$ each contribute to the overall output variability of the rosenbrock function over the domain $[-2, 2]^2$.
Interpretation: we note that $x_1$ (whose sensitivity index is around 0.5) contributes to roughly half of total output uncertainty, and is roughly two times more influential (or sensitive) over this domain than $x_2$.
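As a rough cross-check on the first-order index of $x_1$ (not part of SALib; the sample size and the 40-bin estimator are arbitrary choices), $S_1$ can also be estimated directly from its definition $\mathrm{Var}(E[Y \mid X_1]) / \mathrm{Var}(Y)$:

```python
import numpy

def rosenbrock(x1, x2):
    return 100 * (x2 - x1**2)**2 + (1 - x1)**2

rng = numpy.random.RandomState(0)
n = 200000
x1 = rng.uniform(-2, 2, n)
x2 = rng.uniform(-2, 2, n)
y = rosenbrock(x1, x2)

# S1 for x1 is Var(E[Y|X1]) / Var(Y); estimate the conditional
# expectation by averaging y within narrow bins of x1.
edges = numpy.linspace(-2, 2, 41)
which = numpy.digitize(x1, edges[1:-1])   # bin index 0..39 for each sample
cond_means = numpy.array([y[which == k].mean() for k in range(40)])
S1_x1 = cond_means.var() / y.var()
print(S1_x1)  # close to the 0.499 reported by SALib above
```

With the seed fixed, this lands near 0.50, consistent with SALib's estimate.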
ST contains the total indices, which include the interaction effects with other variables. The total sensitivity of $x_1$ (around 0.7) indicates that a significant amount (around 20%) of our total output uncertainty is due to the interaction of $x_1$ with other input variables. Since there are only two input variables, we know that this interaction effect must be with $x_2$.
Some sensitivity analysis methods are also able to provide second and third order sensitivity indices. A second order index $s_{i,j}$ tells you the level of interaction effects between $x_i$ and $x_j$ (interaction effects are greater than zero when your function is non-linear: the sensitivity of parameter $i$ may then depend on the value of parameter $j$). A third order index $s_{i,j,k}$ tells you the level of interaction between three parameters $x_i$, $x_j$ and $x_k$.
The Ishigami function is a well-known test function for uncertainty analysis and sensitivity analysis (it is highly non-linear). Its definition is given below.
def ishigami(x1, x2, x3) -> float: return numpy.sin(x1) + 7*numpy.sin(x2)**2 + 0.1 * x3**4 * numpy.sin(x1)
Task: undertake a global sensitivity analysis of the Ishigami function over the domain $[-\pi, \pi]^3$ and estimate the first-order and total sensitivity indices. | https://nbviewer.jupyter.org/urls/risk-engineering.org/static/sensitivity-analysis.ipynb | CC-MAIN-2018-26 | refinedweb | 1,010 | 51.24 |
There are a couple of ways in which you can use the "import" statement.
(a) Use a specific class name in the "import" statement:

import myPackage.myClass; // myPackage-Package, myClass-Class

(b) Use a wildcard in the "import" statement:

import myPackage.*; // import all the classes
The second approach is simpler, because it imports all the classes in the specified package. There are no disadvantages to using the wildcard "import" statement with respect to compile time, run time, code size, etc.

The only situation where a plain "import" cannot help is when we want to use two classes of the same name from two different packages. In that case, you can refer to the classes using the full qualifier, e.g.
myPackage.myClass.myMethod().
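A classic real-world instance of this collision is List, which exists in both java.util and java.awt; a small sketch (the class name ListDemo is made up):

```java
import java.util.*;
import java.awt.*;

public class ListDemo {
    public static void main(String[] args) {
        // "List names;" would be ambiguous here, since both wildcard
        // imports bring a class named List into scope, so the full
        // qualifier resolves it:
        java.util.List<String> names = new java.util.ArrayList<String>();
        names.add("hello");
        System.out.println(names.size()); // prints 1
    }
}
```

Had we written just List names; the compiler would reject the declaration as ambiguous.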
Custom controls in a Formview
Discussion in 'ASP .Net Building Controls' started by Victor:
The purpose of this document is to describe the reasoning behind the inclusion of the typename keyword in standard C++, and explain where, when, and how it can and can't be used.
Note: This page is correct (AFAIK) for C++98/03. The rules have been loosened in C++11.
Table of contents
- A secondary use
- The real reason for typename
- Some definitions
- The problem
- The rules
A secondary use
There is a use of typename that is entirely distinct from the main focus of this discussion. I will present it first because it is easy. It seems to me that someone said "hey, since we're adding typename anyway, why not make it do this" and people said "that's a good idea."
Most older C++ books, when discussing templates, use syntax such as the following:
template <class T> ...
I know when I was starting to learn templates, at first I was a little thrown by the fact that T was prefaced by class, and yet it was possible to instantiate that template with primitive types such as int. The confusion was very short-lived, but the use of class in that context never seemed to fit entirely right. Fortunately for my sensibilities, it is also possible to use typename:
template <typename T> ...
This means exactly the same thing as the previous instance. The typename and class keywords can be used interchangeably to state that a template parameter is a type variable (as opposed to a non-type template parameter).
I personally like to use typename in this context because I think it's ever-so-slightly clearer. And maybe not so much "clearer" as just conceptually nicer. (I think that good names for things are very important.) Some C++ programmers share my view, and use typename for templates. (However, later we will see how it's possible that this decision can hurt readability.) Some programmers make a distinction between templates that are fully generic (such as the STL containers) and more special purpose ones that can only take certain classes, and use typename for the former category and class for the latter. Others use class exclusively. This is just a style choice.
However, while I use typename in real code, I will stick to class in this document to reduce confusion with the other use of typename.
The real reason for typename
This discussion I think follows fairly closely appendix B from the book C++ Template Metaprogramming: Concepts, Tools, and Techniques from Boost and Beyond by David Abrahams and Aleksey Gurtovoy, though I don't have it in front of me now. If there are any deficiencies in my discussion of the issues, that book contains the clearest description of them that I've seen.
Some definitions
There are two key concepts needed to understand the description of typename, and they are qualified and dependent names.
Qualified and unqualified names
A qualified name is one that specifies a scope. For instance, in the following C++ program, the references to cout and endl are qualified names:
#include <iostream>

int main() {
    std::cout << "Hello world!" << std::endl;
}
In both cases, the use of cout and endl began with std::.
Had I decided to bring cout and endl into scope with a using declaration or directive*, and used just "cout" by itself, they would have been unqualified names, because they would lack the std::.
(* Remember, a using declaration is like using std::cout;, and actually introduces the name cout into the scope that the using appears in. A using directive is of the form using namespace std; and makes names visible but doesn't introduce anything. [12/23/07 -- I'm not sure this is true. Just a warning.])
Note, however, that if I had brought them into scope with using but still used std::cout, it remains a qualified name. The qualified-ness of a name has nothing to do with what scope it's used in, what names are visible at that point of the program etc.; it is solely a statement about the name that was used to reference the entity in question. (Also note that there's nothing special about std, or indeed about namespaces at all. vector<int>::iterator is a nested name as well.)
Dependent and non-dependent names
A dependent name is a name that depends on a template parameter. Suppose we have the following declaration (not legal C++):
template <class T>
class MyClass {
    int i;
    vector<int> vi;
    vector<int>::iterator vitr;

    T t;
    vector<T> vt;
    vector<T>::iterator viter;
};
The types of the first three declarations are known at the time of the template declaration. However, the types of the second set of three declarations are not known until the point of instantiation, because they depend on the template parameter T.
The names T, vector<T>, and vector<T>::iterator are called dependent names, and the types they name are dependent types. The names used in the first three declarations are called non-dependent names, and the types are non-dependent types.
The final complication in what's considered dependent is that typedefs transfer the quality of being dependent. For instance:
typedef T another_name_for_T;
another_name_for_T is still considered a dependent name despite the type variable T from the template declaration not appearing.
Note: If you're know some advanced type theory, note that C++'s notion of a dependent name has almost nothing to do with type theorists' dependent types.
Some other issues of wording
Note that while there is a notion of a dependent type, there is not a notion of a qualified type. A type can be unqualified in one instance, and qualified the next; the qualification is a property of a particular naming of a type, not of the type itself. (Indeed, when a type is first defined, it is always unqualified.)
However, it will be useful to refer to a qualified type; what I mean by this is a qualified name that refers to a type. I will switch back to the more precise wording when I talk about the rules of typename.
The problem
So now we can consider the following example:
template <class T>
void foo() {
    T::iterator * iter;
    ...
}
What did the programmer intend this bit of code to do? Probably, what the programmer intended was for there to be a class that defined a nested type called iterator:
class ContainsAType {
    class iterator { ... };
    ...
};
and for foo to be called with an instantiation of T being that type:
foo<ContainsAType>();
In that case, line 3 would be a declaration of a variable called iter that would be a pointer to an object of type T::iterator (in the case of ContainsAType, int*, making iter a double-indirection pointer to an int). So far so good.
However, what the programmer didn't expect is for someone else to come up and declare the following class:
class ContainsAValue {
    static int iterator;
};
and call foo instantiated with it:
foo<ContainsAValue>();
In this case, line 3 becomes a statement that evaluates an expression which is the product of two things: a variable called iter (which may be undeclared or may be a name of a global) and the static variable T::iterator.
Uh oh! The same series of tokens can be parsed in two entirely different ways, and there's no way to disambiguate them until instantiation. C++ frowns on this situation. Rather than delaying interpretation of the tokens until instantiation, they change the language:
Before a qualified dependent type, you need typename
To be legal, assuming the programmer intended line 3 as a declaration, they would have to write
template <class T>
void foo() {
    typename T::iterator * iter;
    ...
}
Without typename, there is a C++ parsing rule that says that qualified dependent names should be parsed as non-types even if it leads to a syntax error. Thus if there was a variable called iter in scope, the example would be legal; it would just be interpreted as multiplication. Then when the programmer instantiated foo with ContainsAType, there would be an error because you can't multiply something by a type.
typename states that the name that follows should be treated as a type. Otherwise, names are interpreted to refer to non-types.
This rule holds even if it doesn't make sense to refer to a non-type. For instance, suppose we were to do something more typical and declare an iterator instead of a pointer to an iterator:
template <class T>
void foo() {
    typename T::iterator iter;
    ...
}
Even in this case, typename is required, and omitting it will cause a compile error. As another example, typedefs also require its use:
template <class T>
void foo() {
    typedef typename T::iterator iterator_type;
    ...
}
The rules
Here, in excruciating detail, are the rules for the use of typename. Unfortunately, due to something which is hopefully not-contagious apparently affecting the standards committee, they are pretty complicated.
- typename is prohibited in each of the following scenarios:
- Outside of a template definition. (Be aware: an explicit template specialization (more commonly called a total specialization, to contrast with partial specializations) is not itself a template, because there are no missing template parameters! Thus typename is always prohibited in a total specialization.)
- Before an unqualified type, like int or my_thingy_t.
- When naming a base class. For example, template <class C> class my_class : C::some_base_type { ... }; may not have a typename before C::some_base_type.
- In a constructor initialization list.
- typename is mandatory before a qualified, dependent name which refers to a type (unless that name is naming a base class, or in an initialization list).
- typename is optional in other scenarios. (In other words, it is optional before a qualified but non-dependent name used within a template, except again when naming a base class or in an initialization list.)
Again, these rules are for standard C++98/03. C++11 loosens the restrictions. I will update this page after I figure out what they are. | http://pages.cs.wisc.edu/~driscoll/typename.html | CC-MAIN-2018-09 | refinedweb | 1,665 | 60.65 |
To deploy a DeepSee solution, the docs recommend that you define a namespace on the reporting (mirror) server, and "define mappings to access the application data, application code, DeepSee cube definitions, and DeepSee data on this server". ()
This implies that for an ideal deployment architecture, globals should be split into four separate databases (app data, app code, DS cubes, DS data). How exactly should the DeepSee-related globals be split? A table in the docs outlines all of the key globals (), but is there an optimal way to split those into 'code' and 'data'?
Thanks,
Steve
No worries - I've just converted the comment to an answer. | https://community.intersystems.com/post/ds-deployment-architecture-which-globals-should-be-mapped-where | CC-MAIN-2019-26 | refinedweb | 107 | 58.62 |
Java code for authenticating into an SMTP server with Auth and TLS turned on
By apanicker on Sep 07, 2006
After a long search I came across this sample Java code for sending email into an SMTP server which required authentication and secure (TLS) connection. Hence I thought, I will re-publish it. I found this piece of code from Java developer forums.....I could not trace back the link... Thanks to good soul who published it. I thought of re-publishing it due its rarity.
I have used Java Mail 1.4.
------------------------------- Java code ---------------------------
import javax.mail.*;
import javax.mail.internet.*;
import java.util.*;
public class Main
{
String d_email = "ADDRESS@gmail.com",
d_password = "PASSWORD",
d_host = "smtp.gmail.com",
d_port = "465",
m_to = "EMAIL ADDRESS",
m_subject = "Testing",
m_text = "Hey, this is the testing email.";
public Main()
{
Properties props = new Properties();
props.put("mail.smtp.user", d_email);
props.put("mail.smtp.host", d_host);
props.put("mail.smtp.port", d_port);
props.put("mail.smtp.starttls.enable", "true");
props.put("mail.smtp.auth", "true");
props.put("mail.smtp.socketFactory.port", d_port);
props.put("mail.smtp.socketFactory.class", "javax.net.ssl.SSLSocketFactory");
props.put("mail.smtp.socketFactory.fallback", "false");
SecurityManager security = System.getSecurityManager();
try
{
Authenticator auth = new SMTPAuthenticator();
Session session = Session.getInstance(props, auth);
//session.setDebug(true);

MimeMessage msg = new MimeMessage(session);
msg.setText(m_text);
msg.setSubject(m_subject);
msg.setFrom(new InternetAddress(d_email));
msg.addRecipient(Message.RecipientType.TO, new InternetAddress(m_to));
Transport.send(msg);
}
catch (Exception mex)
{
mex.printStackTrace();
}
}

public static void main(String[] args)
{
Main blah = new Main();
}
private class SMTPAuthenticator extends javax.mail.Authenticator
{
public PasswordAuthentication getPasswordAuthentication()
{
return new PasswordAuthentication(d_email, d_password);
}
}
}
Posted by mert on May 09, 2007 at 10:49 AM CDT #
Posted by Shilpa on May 10, 2007 at 07:10 PM CDT #
Thanks a tonne!!! I was in a bad mess as i wan't able to send mail using my gmail account. Now, its all working thanks to you! guys, just remember there might a prob of ambiguity when using the above methods as they are in java.net too. Just give the full path to remove the ambiguity... as simple as that! Ciao'
Posted by Amar on November 29, 2007 at 03:09 AM CST #
Thanks
It's really help code.
Posted by Le Phuoc Canh on March 20, 2008 at 02:50 PM CDT #
it works great!
Thank you guys,
you saved me a lot of time.
Posted by Ralph R. on September 27, 2008 at 10:03 PM CDT #
Thanks a ton!!
This was required and would serve as a nice tutorial.
Posted by hKansal on December 10, 2008 at 05:08 PM CST #
thank you. really.
Posted by david on January 11, 2009 at 08:00 AM CST #
This code gives me the following exception:
javax.mail.SendFailedException: Sending failed;
nested exception is:
class javax.mail.MessagingException: Could not connect to SMTP host: smtp.gmail.com, port: 465;
nested exception is:
java.net.ConnectException: Connection refused: connect
at javax.mail.Transport.send0(Transport.java:218)
at javax.mail.Transport.send(Transport.java:80)
Posted by mukunda on August 12, 2009 at 10:18 PM CDT #
Hi,
may be you have to use port 25 (TLS) and not the port 465?
Mirco
Posted by Mirco on September 08, 2009 at 03:25 AM CDT #
thanks for this snippet - you are top ranking on google!
Posted by mtraut on September 15, 2009 at 07:06 PM CDT #
much appreciated, excellent snippet
Posted by audioworm on October 17, 2009 at 07:20 AM CDT #
Thanks a lot. That works for me too..
Posted by zawoad on December 19, 2009 at 05:41 PM CST #
send me java code using gmail smtp IP for sending mail.Send me code Urgently
Posted by kiran kashid on January 19, 2010 at 03:30 AM CST #
Hi i am Subha . i am getting the following exception . javax.mail.MessagingException: Could not connect to SMTP host: smtp.gmail.com, port: 465;
nested exception is:
java.net.SocketException: Invalid argument: connect
at com.sun.mail.smtp.SMTPTransport.openServer(SMTPTransport.java:1934)
at com.sun.mail.smtp.SMTPTransport.protocolConnect(SMTPTransport.java:638)
please point out the problem .
Posted by Subha on March 30, 2011 at 08:39 PM CDT # | https://blogs.oracle.com/apanicker/entry/java_code_for_smtp_server | CC-MAIN-2015-22 | refinedweb | 607 | 60.31 |
An 8x8 grid of virtual LEDs implemented in Pygame.
Project description
The ledgrid module contains a single LEDGrid class which aims to be as identical as possible to the public interface of the Raspberry Pi Sense HAT library, such that it is useful in mocking up software for it and other such devices.
Many existing Sense HAT LED demos and software will work using the following import statement:
from ledgrid import LEDGrid as SenseHat
However, the internal implementation is rather simplified and uses pygame instead of the hardware HAT.
It requires pygame to be installed (not currently available through pypi), an additional optional dependency is PIL (i.e. Pillow) which is required by some features (notably scrolling text with the show_message method).
It supports every Python version since 2.7. It is contained in only one Python file, so it can be easily copied into your project if you don’t want to use pypi. (The copyright and license notice must be retained.)
At the bottom of the ledgrid.py file are some examples of usage. You can run these examples using:
python3 -m ledgrid
Or if you have the file locally:
python3 ledgrid.py
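To get a feel for the pixel-call shape the module mimics, here is a toy in-memory stand-in. This is not the real LEDGrid class, just an illustration; the method names follow the Sense HAT API it emulates, which also offers set_pixels, show_message, and more.

```python
class ToyGrid:
    """Minimal 8x8 pixel buffer with Sense HAT-style accessors."""

    def __init__(self):
        self._pixels = [[0, 0, 0] for _ in range(64)]

    def set_pixel(self, x, y, r, g, b):
        if not (0 <= x <= 7 and 0 <= y <= 7):
            raise ValueError("x and y must be between 0 and 7")
        self._pixels[y * 8 + x] = [r, g, b]

    def get_pixel(self, x, y):
        return self._pixels[y * 8 + x]

    def clear(self, colour=(0, 0, 0)):
        # Fill the whole grid with one colour (defaults to off/black).
        self._pixels = [list(colour) for _ in range(64)]

grid = ToyGrid()
grid.set_pixel(3, 0, 255, 0, 0)   # a red pixel near the top-left corner
```

Code written against this kind of interface can be pointed at the real hardware library, at ledgrid, or at a stub like this one without changes.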
Many thanks to Richard Hayler. The LED class, graphics, etc are based on his 8x8GridDraw.
I've been writing a simple tic-tac-toe game to get a grounding in Python and AI (minimax algorithm). I've got something working but I'm stuck on the AI part.
I *think* that I understand the theory of minimax in this case...given the current game state...you run through every possible outcome...if you hit a leaf that results in a win you add +1 to the root node that represents the next player choice...if the leaf results in a loss you subtract 1...then you play out the root node with the highest value? Is that correct?
I know this probably requires recursion but that has never been something I've been good at.
My code is pretty simple...I keep an unordered set of human moves made and a set of ai moves made. When a move is made by either the human or ai, it gets added to the appropriate set. I check each set at the end of the turn for a win or a tie. I've gotten it to the point where it randomly plays against itself (given a human set and an ai set with the list of moves already made) but I'm not quite sure where to go from there.
I hate doing this but I've been on this for hours...can someone give me a shove in the right direction? The code listing is below. Calling 'playcycle()' runs the game with prompts for human input and randomly chose 'computer' moves. Calling 'playoutaiall()' plays 10 random games and prints out the result for each.
import random

#checks a player set (either human or ai) for a possible win
def checkforwin(moves):
    if '1' in moves:
        if '2' in moves and '3' in moves:
            return True
        if '4' in moves and '7' in moves:
            return True
        if '5' in moves and '9' in moves:
            return True
    if '9' in moves:
        if '7' in moves and '8' in moves:
            return True
        if '3' in moves and '6' in moves:
            return True
    if '4' in moves and '5' in moves and '6' in moves:
        return True
    if '2' in moves and '5' in moves and '8' in moves:
        return True
    if '7' in moves and '5' in moves and '3' in moves:
        return True
    return False

#given human and ai move sets, returns remaining valid moves
def getmovesleft(playermoves, aimoves):
    allmoves = set('123456789')
    return allmoves - (playermoves | aimoves)

#given human and ai move sets, prints board state
def printboard(playermoves, aimoves):
    count = 1
    while (count < 10):
        if str(count) in playermoves:
            print('x', end="")
        elif str(count) in aimoves:
            print('o', end="")
        else:
            print(str(count), end="")
        if count % 3 == 0:
            print('\n', end="")
        count = count + 1

#returns whether a particular move is valid, given human and ai moves sets
def isvalidmove(newmove, playermoves, aimoves):
    return newmove in getmovesleft(playermoves, aimoves)

#given a set of remaining valid moves, returns one at a random
def generateaimove(aimovesleft):
    return random.sample(aimovesleft, 1)

#given human and player move sets, gets a valid move from the human player and returns it
def makehumanplayermove(playermoves, aimoves):
    print('It is your move! \nCurrent board state...')
    printboard(playermoves, aimoves)
    movesleft = getmovesleft(playermoves, aimoves)
    while True:
        print('Possible moves', movesleft)
        move = input('Enter a move: ')
        if isvalidmove(move, playermoves, aimoves) == True:
            return move

#given sets of human and ai moves already made, returns a valid move
def makeaiplayermove(playermoves, aimoves):
    move = generateaimove(getmovesleft(playermoves, aimoves))
    return move

#given a player move and sets of existing (already played) human and ai moves, play out one turn (or cycle)
def singlecycle(pmove, playermoves, aimoves):
    playermoves.update(pmove)
    if checkforwin(playermoves) == True:
        return 'pwin'
    if len(getmovesleft(playermoves, aimoves)) == 0:
        return 'tie'
    aimove = makeaiplayermove(playermoves, aimoves)
    aimoves.update(aimove)
    if checkforwin(aimoves) == True:
        return 'aiwin'
    if len(getmovesleft(playermoves, aimoves)) == 0:
        return 'tie'
    return 'continue'

#given an end of game result, print it out for the human's benefit
def printgameresult(endresult):
    if endresult == 'pwin':
        print('The human won!')
    elif endresult == 'aiwin':
        print('The AI won. No suprise there. Stupid human...')
    elif endresult == 'tie':
        print('A tie game? \nHow boring!')

#given a list of existing player and ai move sets, play out a complete game at random (no human input)
def playoutai(playermoves, aimoves):
    while True:
        randomplayermove = makeaiplayermove(playermoves, aimoves)
        cycleresult = singlecycle(randomplayermove, playermoves, aimoves)
        if cycleresult != 'continue':
            printboard(playermoves, aimoves)
            print('ai moves:', aimoves)
            print('player moves:', playermoves)
            printgameresult(cycleresult)
            break

#play out 10 complete games
def playoutaiall():
    count = 0
    while count < 10:
        player = set()
        ai = set()
        playoutai(player, ai)
        count = count + 1

#play out a complete game with human input
def playcycle():
    playermoves = set()
    aimoves = set()
    print('Welcome to a new game!')
    while True:
        move = makehumanplayermove(playermoves, aimoves)
        cycleresult = singlecycle(move, playermoves, aimoves)
        if cycleresult != 'continue':
            printgameresult(cycleresult)
            break
    #end of current game
    print('End board state...')
    printboard(playermoves, aimoves)
I think all my base logic is there for AI but I'm not sure where to go from here. Any help would be much appreciated.
Thanks,
John
Edited by timothyjlaird, 15 September 2013 - 01:46 PM. | http://www.gamedev.net/topic/647870-asking-for-tic-tac-toe-ai-help/?k=880ea6a14ea49e853634fbdc5015a024&setlanguage=1&langurlbits=topic/647870-asking-for-tic-tac-toe-ai-help/&langid=1 | CC-MAIN-2014-10 | refinedweb | 850 | 58.11 |
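A nudge in the direction asked for above: the recursion bottoms out at terminal boards (win, loss, or full board) and otherwise scores every legal move by recursing, with the AI taking the maximum and the human assumed to take the minimum. Below is a compact, illustrative sketch. It uses a 9-cell list for the board rather than the move sets in the post, and all names are made up for the example:

```python
WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
        (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
        (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'x' or 'o' if that side has three in a row, else None."""
    for a, b, c in WINS:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Score the position for the side to move and pick a move.

    'o' (the AI) maximizes and 'x' (the human) minimizes:
    +1 means an AI win, -1 a human win, 0 a tie.
    Returns (score, move_index)."""
    w = winner(board)
    if w is not None:
        return (1 if w == 'o' else -1), None
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0, None                     # full board: tie
    best = None
    for m in moves:
        board[m] = player                  # try the move...
        score, _ = minimax(board, 'o' if player == 'x' else 'x')
        board[m] = ' '                     # ...then undo it
        if (best is None
                or (player == 'o' and score > best[0])
                or (player == 'x' and score < best[0])):
            best = (score, m)
    return best
```

For example, on the board ['o', 'o', ' ', 'x', 'x', ' ', ' ', ' ', ' '] with the AI to move, this returns (1, 2): the AI completes the top row. Scores bubble up from the leaves, so there is no need to add or subtract counts at the root; each node simply takes the best child score for the side to move.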
So..?

Traceback (most recent call last):
File "/usr/bin/econnman-bin", line 18, in <module>
import efl.evas as evas
ImportError: /usr/lib/libemile.so.1: undefined symbol: SSLv3_server_method
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/bin/econnman-bin", line 45, in <module>
import elementary as elm
ImportError: No module named 'elementary'
error: failed to prepare transaction (could not satisfy dependencies)
:: virtualbox-guest-utils: installing virtualbox-guest-dkms (5.0.14-4) breaks dependency 'virtualbox-guest-modules'
warning: directory permissions differ on /var/lib/syslog-ng/
filesystem: 700 package: 755
All fine. Only a message:
warning: directory permissions differ on /var/lib/syslog-ng/
filesystem: 700 package: 755
Good work!
It happens even in some update in my system; i don't know what it means, but i never had a problem...
You have more restrictive access permissions than the package would set. AFAIK: If everything works, this is perfectly fine.
sudo chmod 755 /var/lib/syslog-ng/
tput: No value for $TERM and no -T specified
If xfce is built against gtk+ 2.18, this bug surfaces. A patch is already in the Terminal git tree, and I notified Pat. I suppose that on May 6 the bug surfaced when xfce was recompiled. But this patch needs to be applied and Terminal recompiled. Too bad I took so long at getting around to this. I knew about this problem for a few weeks when I built xfce 4.6.1 on another distro.
Hi, I get this error after the update (and obviously during the update itself) on xfce edition when at the end of the update it's supposed to output "Descriptions" (french locale) :
tput: No value for $TERM and no -T specified
I had two manjaro xfce installed so I tried successfully to reproduce the error.
1-try reinstalling a package with pamac before update (just to see if the error outputs at the end) -> no tput error
2-updating -> tput error
3-try reinstalling -> tput error
I saw topics about solving this error but still it appears it's related to the last update*.
*and from what I read from a 2010 topic, it may also be related to xfce/gtk2:
Kernel: 4.1.18-2-MANJARO x86_64 (64 bit gcc: 5.3.0) Desktop: Xfce 4.12.3 (Gtk 2.24.28)
Same DSL connection.
Distro: ManjaroLinux 15.12 Capella
Machine: System: Dell (portable) product: Latitude E6410 v: 0001
sudo pacman-mirrors -g
and:
sudo pacman -Syu
sudo pacman -Syy
fixed it. ^-^
sudo chmod 755 /var/lib/syslog-ng/
Ok, thanks, but can you explain the issue and what changes applying the command?
I don't know, it seems a general tendency that newer packages have 755 permission. I did it many times, never had any problems. If you get problems, you can revert the command by sudo chmod 700 ...
Linux Perseus 4.4.5-1-MANJARO #1 SMP PREEMPT Thu Mar 10 20:58:34 UTC 2016 x86_64 GNU/Linux
Graphics: Card-1: Intel 2nd Generation Core Processor Family Integrated Graphics Controller
Card-2: NVIDIA GF108M [GeForce GT 540M]
Display Server: X.Org 1.17.4 driver: intel Resolution: 1366x768@60.06hz
GLX Renderer: Mesa DRI Intel Sandybridge Mobile GLX Version: 3.0 Mesa 11.1.2
Having problems here... "error: key "8DB9F8C18DF53602" could not be looked up remotely". I'm KDE 15.12, this is my first upgrade (installed immediately prior to update). I'm hoping someone can point me in the right direction to resolving this issue as it's obviously preventing the upgrade from installing. Many thanks for any help you can offer :)
Try reinstalling the packages archlinux-keyring and manjaro-keyring.
tput: No value for $TERM and no -T specified
Very smooth update. 89 packages, running well, and upgrade to newer kernel. Everything seems great! Many thanks Manjaro Team ;D
But: videos in VLC do not scale with the window size: i.e. when I put VLC to full screen, the video stays the original smaller size. Scaling worked before the update. Downgrade to previous version of VLC did not resolve the issue. Still not sure what the problem is.
Does anyone have any ideas?
sudo /usr/lib/vlc/vlc-cache-gen -f /usr/lib/vlc/plugins/
tput: No value for $TERM and no -T specified
It was showing up in the details of package installs when using the pamac GUI on XFCE. I tracked this down to the pacli upgrade. I downgraded to pacli 0.7-1 and the message went away. Solved for now, but I was wondering if this is a bug, and how I would go about properly reporting it.
uname -r
4.5.0-1-MANJARO
Why is the kernel not tagged RC?
[2016-03-11 08:09] [ALPM] upgraded sg3_utils (1.41-1 -> 1.42-1)
[2016-03-11 08:09] [ALPM-SCRIPTLET]
[2016-03-11 08:09] [ALPM-SCRIPTLET] (gconftool-2:13902): GConf-WARNING **: Client failed to connect to the D-BUS daemon:
[2016-03-11 08:09] [ALPM-SCRIPTLET] Unable to autolaunch a dbus-daemon without a $DISPLAY for X11
[2016-03-11 08:09] [ALPM] upgraded tomboy (1.14.1-2 -> 1.15.4-1)
[2016-03-11 08:09] [ALPM-SCRIPTLET]
[2016-03-11 08:09] [ALPM-SCRIPTLET] (gconftool-2:13914): GConf-WARNING **: Client failed to connect to the D-BUS daemon:
[2016-03-11 08:09] [ALPM-SCRIPTLET] Unable to autolaunch a dbus-daemon without a $DISPLAY for X11
EXEC=tomboy
EXEC=tomboy --search
Why don't you read the error message, @hitman2?
tput: No value for $TERM and no -T specified
Having the same issue on LXQT using octopi.
I reported this but meanwhile I found a fix: installing a new kernel. Do this:
error: failed to start transaction (could not lock the database)
error: unable to lock the database: File exists
if you are sure that the package manager is not running, you can remove /var/lib/pacman/db.lck
-> The process failed!
sudo rm /var/lib/pacman/db.lck
or just restart your PC.
Thanks, now it is installed, but I have one more problem.
how can i change from one kernel to another?
If you have multiple kernels installed you can make a choice in grub menu. Just go to advanced options and select what kernel you want to use.
114 packages... seems like everything went smooth!
Thanks.
Smooth for my XFCE-based laptop but not so in KDE :-[ ... this newest update (somehow) triggered my old KDE bug: sudden disappearance of window decorations. I need to log off and on to make it right, since the "killall plasma" routine never works anymore.
I ran into problems with Ruby, caused by the openssl update.
Ruby was not working completely, so i did this:
Ruby worked again and i could work with rails. But if i try to connect with any service that requires ssl i get this:
SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed
any fixes?
tput: No value for $TERM and no -T specified
I reported this but meanwhile I found fix:
For anyone having that problem, what does "echo $TERM" output?
Also, the manjaro-base-skel package got updated recently with a new .bash_profile and .bashrc; create another admin user with manjaro settings manager.
Logon with that user and try the tests again.
The output of "echo $TERM" is xterm.
I created a new user, installed the pacli package again and tried to install some package to see if the problem is fixed. The result is the same error, so the only workaround for now is to remove pacli. I never need this interactive interface for pacman and I don't experience any problem if I remove it.
I can confirm this issue:
Mouse pointer is invisible over gnome-shell overview and top bar
From Bugzilla, it looks like the attempts to fix are happening in Fedora GNOME 23, so maybe these fixes
System: Host: manjaro-linux Kernel: 4.4.5-1-MANJARO x86_64 (64 bit gcc: 5.3.0)
Desktop: Xfce 4.12.3 (Gtk 2.24.28)
Distro: ManjaroLinux 15.12 Capella
Machine: System: Dell product: Inspiron 3531 v: A01
Mobo: Dell model: 0NJFRV v: A00 Bios: Dell v: A01 date: 05/06/2014
CPU: Quad core Intel Pentium N3530 (-MCP-) cache: 1024 KB
flags: (lm nx sse sse2 sse3 sse4_1 sse4_2 ssse3 vmx) bmips: 17314
clock speeds: max: 2582 MHz 1: 938 MHz 2: 1483 MHz 3: 706 MHz
4: 977 MHz
Graphics: Card: Intel Atom Processor Z36xxx/Z37xxx Series Graphics & Display
bus-ID: 00:02.0
Display Server: X.Org 1.17.4 driver: intel
Resolution: 1366x768@60.04hz
GLX Renderer: Mesa DRI Intel Bay Trail
GLX Version: 3.0 Mesa 11.1.2 Direct Rendering: Yes
Audio: Card Intel Atom Processor Z36xxx/Z37xxx Series High Definition Audio Controller
driver: snd_hda_intel bus-ID: 00:1b.0
Sound: Advanced Linux Sound Architecture v: k4.4.5-1-MANJARO
Network: Card: Qualcomm Atheros AR9485 Wireless Network Adapter
driver: ath9k bus-ID: 02:00.0
IF: wlp2s0 state: up
Drives: HDD Total Size: 500.1GB (2.3% used)
ID-1: /dev/sda model: ST500LT012 size: 500.1GB
Partition: ID-1: / size: 106G used: 6.9G (7%) fs: ext4 dev: /dev/sda7
ID-2: swap-1 size: 4.17GB used: 0.00GB (0%) fs: swap dev: /dev/sda5
Sensors: System Temperatures: cpu: 48.0C mobo: N/A
Fan Speeds (in rpm): cpu: N/A
Info: Processes: 151 Uptime: 1 min Memory: 319.3/3843.5MB
Init: systemd Gcc sys: 5.3.0
Client: Shell (bash 4.3.421) inxi: 2.2.35
Hi
Congrats to Manjaro Team!!!!
I've found a graphical bug in XFCE using pamac (gui)
It only happens when it's refreshing the repo DBs and I click on "details": it freezes on "updating multilib".
If I don't open "details", it doesn't freeze and finishes the refresh correctly.
I add the screenshot.
Hope my comment to be useful.
Sorry for my terrible English!
I can confirm this to be true for me as well.
Thank you @Arcano and @iKreate! I have just opened an issue at pamac's github about that.
System: Host: h61h2-manj Kernel: 4.1.19-1-MANJARO i686 (32 bit gcc: 5.3.0)
Started the update from Update Manager which indicated about 780 updates.
Desktop: Xfce 4.12.3 (Gtk 2.24.28) Distro: ManjaroLinux 15.12 Capella
Machine: Mobo: ECS model: H61H2-CM v: 1.0 Bios: American Megatrends v: 4.6.4 date: 10/21/2011
CPU: Dual core Intel Pentium G620 (-MCP-) cache: 3072 KB
flags: (lm nx pae sse sse2 sse3 sse4_1 sse4_2 ssse3 vmx) bmips: 10381
clock speeds: max: 2600 MHz 1: 1637 MHz 2: 1731 MHz
. . .
Info: Processes: 153 Uptime: 4:15 Memory: 1331.9/1958.1MB Init: systemd Gcc sys: 5.3.0
Client: Shell (xfce4-terminal) inxi: 2.2.35
error: failed to commit transaction (unexpected error)
:P Probably tired.
Errors occurred, no packages were upgraded.
sudo pacman -Syuu
Somewhere along in here, I started the update again from the Update-Manager and it ran to completion.
sudo pacman-mirrors -g
sudo pacman -Syuu
sudo pacman -Sy archlinux-keyring manjaro-keyring
error: required key missing from keyring
error: failed to commit transaction (unexpected error)
Errors occurred, no packages were upgraded.
sudo pacman-key --refresh-keys
sudo pacman -Sy archlinux-keyring manjaro-keyring
sudo pacman-key --refresh-keys
keyserver hkp://ipv4.pool.sks-keyservers.net:11371
sudo keyserver hkp://ipv4.pool.sks-keyservers.net:11371
pacman-key --populate archlinux
sudo pacman-key --populate archlinux
sudo pacman-key --refresh-keys
su
sudo pacman-mirrors -g
sudo su
Hi,
I did an update of the mirrors; still nothing to do...
Any thoughs?
Thanks
Melissa
I've found a graphical bug in XFCE using pamac (gui).
Its only happen when it's refreshing repos db and I click on "details": freezes on "updating multilib".
The fix just arrived in unstable. Should be fine with pamac 3.2.1 :)
Update from kernel 4.1.18-2 worked. BUT, could not downgrade catalyst driver to version 15.200-0.1 as was able to do from kernel 4.1.16-1 to k4.1.18-2.
Get the error "Failed to start virtual console" when the kernel is loading.
Have restored to image for kernel 4.1.18-2
Glad to hear the kernel upgrade went smoothly without any fatal errors... but how come you feel the need to downgrade the catalyst drivers?
I am currently using 15.201.1151 with the 4.4.5-1 kernel and it seems to be holding up fine.
The fix just arrived in unstable. Should be fine with pamac 3.2.1 :) | https://classicforum.manjaro.org/index.php?action=printpage;topic=32002.0 | CC-MAIN-2019-51 | refinedweb | 2,208 | 68.06 |
Instant Messaging with ExtJs
Bajet $250-750 USD
Hi everybody,
I'm looking for an Extjs developer who have a good experience with this tool in order to make a very simple instant messaging.
This apps is for a french community website, but we will communicate with my poor english ;-) and you can develop in english, i will translate
This apps will be have :
- a contact list,
- a public room for all users
- and a private room.
In message we can add smiley and chose colors.
In private room a short profil will be display with a member picture.
I have already started a draft but i don't have time and it is important that this project be developed properly with class, namespace,... and other stuf to have a good code.
you can see a capture of my draft.
The php / mysql part will be made by me and return xml file.
The budget for this project is between $500-1000
9 pekerja bebas membida secara purata $603 untuk pekerjaan ini
i am experienced with extjs.. check pm to see my previous example of extjs projects
Hi I am very experienced Java Architect/Java Developer. Have good experience with [url removed, login to view] and DWR. Sincerely yours, Iskandar Zaynutdinov | https://www.my.freelancer.com/projects/php-xml/instant-messaging-with-extjs/ | CC-MAIN-2018-05 | refinedweb | 211 | 62.48 |
Beginning with SQL Server 2005, the components required to develop basic CLR database objects are installed with SQL Server. The sample stored procedure, in C#:

public class HelloWorldProc
{
    [Microsoft.SqlServer.Server.SqlProcedure]
    public static void HelloWorld(out string text)
    {
        SqlContext.Pipe.Send("Hello world!" + Environment.NewLine);
        text = "Hello world!";
    }
}
This simple program contains a single static method on a public class. This method uses two new classes, SqlContext and SqlPipe, for creating managed database objects to output a simple text message. The method also assigns the string "Hello world!" as the value of an out parameter.
The ability to execute common language runtime (CLR) code is set to OFF by default in SQL Server. The CLR code can be enabled by using the sp_configure system stored procedure. For more information, see Enabling CLR Integration.
When you are finished running the sample stored procedure, you can remove the procedure and the assembly from your test database.
First, remove the procedure using the drop procedure command.
Once the procedure has been dropped, you can remove the assembly containing your sample code. | http://msdn.microsoft.com/en-us/library/ms131052(v=sql.105).aspx | CC-MAIN-2014-15 | refinedweb | 165 | 59.9 |
Hi all, bit of a c++ newbie and hoping you might be able to help me with a problem.
I'm trying to calculate the elapsed time between two dates using difftime, however I keep getting an elapsed time of 0. Can anyone see where I'm going wrong? Any help greatfully recieved!
Cheers.
#include <iostream>
#include <time.h>

using namespace std;

int main ()
{
    time_t rawtime, rawtime2;
    struct tm * timeinfo;
    struct tm * timeinfo2;
    int year = 2010, month = 9, day = 23;
    double dif;

    time ( &rawtime );
    timeinfo = localtime ( &rawtime );
    timeinfo->tm_year = year - 1900;
    timeinfo->tm_mon = month - 1;
    timeinfo->tm_mday = day;
    timeinfo->tm_min = 30;
    mktime ( timeinfo );
    cout << "First date " << asctime (timeinfo) << endl;

    time ( &rawtime2 );
    timeinfo2 = localtime ( &rawtime2 );
    mktime( timeinfo2 );
    cout << "Second date " << asctime (timeinfo2) << endl;

    dif = difftime(rawtime2, rawtime);
    cout << "The time difference between the two dates is " << dif << endl;
}
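Two things go wrong in the snippet above: the return value of mktime(timeinfo) is thrown away, so rawtime still holds the current time captured by the earlier time(&rawtime) call, and both timeinfo pointers alias the single static buffer that localtime() returns. Because both rawtime and rawtime2 end up holding "now", difftime reports 0; capturing mktime's result (rawtime = mktime(timeinfo);) is the key fix. The intended computation, sketched with Python's analogous time API purely for illustration:

```python
import time

# Build 2010-09-23 00:30 local time, mirroring the tm fields set in the C code.
# The 9-tuple is (year, month, mday, hour, min, sec, wday, yday, isdst);
# isdst = -1 lets mktime decide about daylight saving, as in C.
then = time.mktime((2010, 9, 23, 0, 30, 0, 0, 0, -1))  # the value the C code discards

now = time.mktime(time.localtime())

# This difference in seconds is what difftime(rawtime2, rawtime) should yield.
diff = now - then
```

The same pattern in C is simply `rawtime = mktime(timeinfo);` followed by `difftime(rawtime2, rawtime)`.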
#include <stdio.h>

int fseek(FILE *stream, long offset, int whence);
int fseeko(FILE *stream, off_t offset, int whence);
The fseek() function sets the file-position indicator for the stream pointed to by stream. The fseeko() function is identical to fseek() except for the type of offset.
The new position, measured in bytes from the beginning of the file, is obtained by adding offset to the position specified by whence, whose values are defined in <stdio.h> as follows:
SEEK_SET
Set position equal to offset bytes.

SEEK_CUR
Set position to current location plus offset.

SEEK_END
Set position to EOF plus offset.
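The same three whence modes are mirrored one-to-one in higher-level runtimes, which makes the offset arithmetic easy to demonstrate. As an illustration only (not part of this manual page), Python's io layer uses identically named constants:

```python
import io
import tempfile

f = tempfile.TemporaryFile()
f.write(b"abcdefgh")            # 8 bytes, positions 0..7

f.seek(2, io.SEEK_SET)          # SEEK_SET: 2 bytes from the beginning
pos_after_set = f.tell()        # position is now 2

f.seek(3, io.SEEK_CUR)          # SEEK_CUR: current location plus 3
pos_after_cur = f.tell()        # position is now 5

f.seek(-1, io.SEEK_END)         # SEEK_END: EOF (8) plus -1
tail = f.read()                 # reads the final byte, b"h"
```

A negative offset is legal with SEEK_CUR and SEEK_END as long as the resulting position is not negative, matching the EINVAL condition described below.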
If the stream is to be used with wide character input/output functions, offset must either be 0 or a value returned by an earlier call to ftell(3C) on the same stream and whence must be SEEK_SET.
A successful call to fseek() clears the end-of-file indicator for the stream and undoes any effects of ungetc(3C) and ungetwc(3C) on the same stream. After an fseek() call, the next operation on an update stream may be either input or output.
If the most recent operation, other than ftell(3C), on a given stream is fflush(3C), the file offset in the underlying open file description is adjusted to reflect the location specified by fseek(). The value of the file offset returned by fseek() on devices which are incapable of seeking is undefined.
The fseek() and fseeko() functions return 0 on success; otherwise, they return −1 and set errno to indicate the error.
The fseek() and fseeko() functions will fail if, either the stream is unbuffered or the stream's buffer needed to be flushed, and the call to fseek() or fseeko() causes an underlying lseek(2) or write(2) to be invoked:

EAGAIN
The O_NONBLOCK flag is set for the file descriptor and the process would be delayed in the write operation.

EBADF
The file descriptor underlying the stream file is not open for writing or the stream's buffer needed to be flushed and the file is not open.

EFBIG
An attempt was made to write a file that exceeds the maximum file size.

EINVAL
The whence argument is invalid. The resulting file-position indicator would be set to a negative value.

EIO
A physical I/O error has occurred; or the process is a member of a background process group attempting to perform a write(2) operation to its controlling terminal, TOSTOP is set, the process is neither ignoring nor blocking SIGTTOU, and the process group of the process is orphaned.

ENOSPC
There was no free space remaining on the device containing the file.

ENXIO
A request was made of a non-existent device, or the request was outside the capabilities of the device.

ESPIPE
The file descriptor underlying stream is associated with a pipe or FIFO.

EPIPE
An attempt was made to write to a pipe or FIFO that is not open for reading by any process. A SIGPIPE signal will also be sent to the calling thread.

The fseek() function will fail if:

EOVERFLOW
The resulting file offset would be a value which cannot be represented correctly in an object of type long.

The fseeko() function will fail if:

EOVERFLOW
The resulting file offset would be a value which cannot be represented correctly in an object of type off_t.
Although on the UNIX system an offset returned by ftell() or ftello() (see ftell(3C)) is measured in bytes, and it is permissible to seek to positions relative to that offset, portability to non-UNIX systems requires that an offset be used by fseek() directly. Arithmetic may not meaningfully be performed on such an offset, which is not necessarily measured in bytes.
The fseeko() function has a transitional interface for 64-bit file offsets. See lf64(5).
See attributes(5) for descriptions of the following attributes:
getrlimit(2), ulimit(2), ftell(3C), rewind(3C), ungetc(3C), ungetwc(3C), attributes(5), lf64(5), standards(5) | https://docs.oracle.com/cd/E36784_01/html/E36874/fseeko-3c.html | CC-MAIN-2018-09 | refinedweb | 618 | 58.82 |
Alarm::Concurrent - Allow multiple, concurrent alarms.
This module is an attempt to enhance Perl's built-in alarm/$SIG{ALRM} functionality.
This function, and its associated signal handler, allow you to arrange for your program to receive a SIGALRM signal, which you can then catch and deal with appropriately.
Unfortunately, due to the nature of the design of these signals (at the OS level), you can only have one alarm and handler active at any given time. That's where this module comes in.
This module allows you to define multiple alarms, each with an associated handler. These alarms are sequenced (in a queue) but concurrent, which means that their order is preserved but they always go off as their set time expires, regardless of the state of the other alarms. (If you'd like to have the alarms only go off in the order you set them, see Alarm::Queued.)
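The underlying trick, multiplexing the single OS alarm across a queue of logical alarms, can be sketched in a few lines. The sketch below uses Python's signal module purely for illustration (Unix-only, and not the Perl implementation): the one real timer is always armed for the earliest pending deadline, and the shared handler fires every callback whose time has come before re-arming.

```python
import heapq
import signal
import time

timers = []   # min-heap of (deadline, sequence, callback)
_seq = 0      # tie-breaker so callbacks are never compared directly

def _arm():
    # Point the single OS timer at the earliest pending deadline.
    if timers:
        delay = max(timers[0][0] - time.monotonic(), 0.0001)
        signal.setitimer(signal.ITIMER_REAL, delay)

def _on_alarm(signum, frame):
    now = time.monotonic()
    while timers and timers[0][0] <= now:
        _, _, callback = heapq.heappop(timers)
        callback()
    _arm()    # re-arm for the next deadline, if any remain

def set_alarm(seconds, callback):
    global _seq
    _seq += 1
    heapq.heappush(timers, (time.monotonic() + seconds, _seq, callback))
    _arm()

signal.signal(signal.SIGALRM, _on_alarm)
```

Each set_alarm() call is independent: adding a new alarm never cancels an existing one, it only moves the real timer earlier if the new deadline is the soonest. This is the property that a plain alarm()/$SIG{ALRM} pair lacks.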
To set an alarm, call the setalarm() function with the set time of the alarm and a reference to the subroutine to be called when the alarm goes off. You can then go on with your program and the alarm will be called after the set time has passed.
It is also possible to set an alarm that does not have a handler associated with it using Alarm::Concurrent::alarm(). (This function can also be imported into your namespace, in which case it will replace Perl's built-in alarm for your package only.) If an alarm that does not have a handler associated with it goes off, the default handler, pointed to by $Alarm::Concurrent::DEFAULT_HANDLER, is called. You can change the default handler by assigning to this variable. The default $Alarm::Concurrent::DEFAULT_HANDLER simply dies with the message "Alarm clock!\n".
No methods are exported by default but you can import any of the functions in the FUNCTIONS section. You can also import the special tag :ALL which will import all the functions in the FUNCTIONS section (except Alarm::Concurrent::restore()).

If you import the special tag :OVERRIDE, this module will override Perl's built-in alarm function for every namespace and it will take over Perl's magic %SIG variable, changing any attempts to read or write $SIG{ALRM} into calls to gethandler() and sethandler(), respectively (reading and writing to other keys in %SIG is unaffected). This can be useful when you are calling code that tries to set its own alarm "the old fashioned way." It can also, however, be dangerous. Overriding alarm is documented and should be stable but taking over %SIG is more risky (see CAVEATS).

Note that if you do not override alarm and %SIG, any code you use that sets "legacy alarms" will disable all of your concurrent alarms. You can call Alarm::Concurrent::restore() to reinstall the Alarm::Concurrent handler. This function can not be imported.
The following functions are available for use.
Sets a new alarm and associates a handler with it. The handler is called when the specified number of seconds have elapsed. See DESCRIPTION for more information.
Clears one or more previously set alarms. The index is an array index, with 0 being the currently active alarm and -1 being the last (most recent) alarm that was set.
INDEX defaults to 0 and LENGTH defaults to 1.
Creates a new alarm with no handler.
A handler can later be set for it via sethandler() or $SIG{ALRM}, if overridden.
For the most part, this function behaves exactly like Perl's built-in alarm function, except that it sets up a concurrent alarm instead. Thus, each call to alarm does not disable previous alarms unless called with a set time of 0. Calling alarm() with a set time of 0 will disable the last alarm set. If SECONDS is not specified, the value stored in $_ is used.
Sets a handler for the alarm found at INDEX in the queue. This is an array index, so negative values may be used to indicate position relative to the end of the queue.
If INDEX is not specified, the handler is set for the last alarm in the queue that doesn't have one associated with it. This means that if you set multiple alarms using alarm(), you should arrange their respective sethandler()'s in the opposite order.
Returns the handler for the alarm found at INDEX in the queue. This is an array index, so negative values may be used.
If INDEX is not specified, returns the handler for the currently active alarm.
This function reinstalls the Alarm::Concurrent alarm handler if it has been replaced by a "legacy alarm handler." If FLAG is present and true, restore() will save the current handler by setting it as a new concurrent alarm (as if you had called setalarm() for it). This function may not be imported.

Note: Do not call this function if you have imported the :OVERRIDE symbol. It can have unpredictable results.
%SIG is Perl magic and should probably not be messed with, though I have not witnessed any problems in the (admittedly limited) testing I've done. I would be interested to hear from anyone who performs extensive testing, with different versions of Perl, of the reliability of doing this.
Moreover, since there is no way to just take over $SIG{ALRM}, the entire magic hash is usurped and any other %SIG accesses are simply passed through to the original magic hash. This means that if there are any problems, they will most likely affect all other signal handlers you have defined, including $SIG{__WARN__} and $SIG{__DIE__} and others. In other words, if you're going to use the :OVERRIDE option, you do so at your own risk (and you'd better be pretty damn sure of yourself, too).
$DEFAULT_HANDLER simply dies with the message "Alarm clock!\n".
The Alarm::Concurrent alarm handling routine does quite a bit.
You have been warned.
Written by Cory Johns (c) 2001. | http://search.cpan.org/dist/libalarm/lib/Alarm/Concurrent.pm | CC-MAIN-2018-05 | refinedweb | 990 | 71.75 |
Get configuration-defined string values
#include <unistd.h>

size_t confstr( int name, char * buf, size_t len );
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
The confstr() function lets applications get or set configuration-defined string values.
To find out the length of a configuration-defined value, call confstr() with buf set to NULL and len set to 0.
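For a quick way to see what a given configuration name returns, note that Python wraps the same C call as os.confstr(). This is an illustration only (Unix systems), not part of the QNX C API:

```python
import os

# os.confstr_names maps symbolic names to the integer constants that
# are passed to the underlying C confstr() call.
assert "CS_PATH" in os.confstr_names

# CS_PATH: a PATH value that locates the standard utilities.
path = os.confstr("CS_PATH")
```

The "pass NULL with len 0 to learn the length" idiom is unnecessary here, since the wrapper sizes the buffer itself and returns the whole string.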
POSIX 1003.1
The confstr() function is part of a draft standard; its interface and/or behavior may change in the future.
pathconf(), sysconf()
getconf, setconf in the Utilities Reference
“Configuration strings” in the Configuring Your Environment chapter of the Neutrino User's Guide | https://www.qnx.com/developers/docs/6.4.1/neutrino/lib_ref/c/confstr.html | CC-MAIN-2019-22 | refinedweb | 121 | 56.96 |
import "github.com/SteelPangolin/go-genderize"
Package genderize provides a client for the Genderize.io web service.
Gender string constants.
Version of this library. Used to form the default user agent string.
Client for the Genderize API.
NewClient constructs a Genderize client from a Config.
Get gender info for names with optional country and language IDs.
Client with custom API key and user agent, query with language and country IDs.
Code:
client, err := NewClient(Config{
    UserAgent: "GoGenderizeDocs/0.0",
    // Note that you'll need to use your own API key.
    APIKey: "",
})
if err != nil {
    panic(err)
}

responses, err := client.Get(Query{
    Names:      []string{"Kim"},
    CountryID:  "dk",
    LanguageID: "da",
})
if err != nil {
    panic(err)
}

for _, response := range responses {
    fmt.Printf("%s: %s\n", response.Name, response.Gender)
}
Output:
Kim: male
Config for a Genderize client.
A Query is a list of names with optional country and language IDs.
type RateLimit struct {
    // The number of names allotted for the current time window.
    Limit int64
    // The number of names left in the current time window.
    Remaining int64
    // Seconds remaining until a new time window opens.
    Reset int64
}
RateLimit holds info on API quotas from rate limit headers. See for details.
type Response struct {
    Name string
    // Gender can be "male", "female", or empty,
    // in which case Probability and Count should be ignored.
    Gender      string
    Probability float64
    Count       int64
}
A Response is a name with gender and probability information attached.
Get gender info for names, using the default client and country/language IDs.
A ServerError contains a message from the Genderize API server.
func (serverError ServerError) Error() string
Error returns the error message.
Package genderize imports 4 packages. Updated 2018-05-15.
Re: Asynchronous Sockets and the I/O Completion Port Model
From: Steve Lutz (slutzNOSPAM_at_comcast.net)
Date: 03/22/04
- Next message: Mohamoss: "RE: pls help: Datagrid Problem"
- Previous message: SS: "Re: Web browser control in Win form..."
- In reply to: Paul Ingles: "Asynchronous Sockets and the I/O Completion Port Model"
- Messages sorted by: [ date ] [ thread ]
Date: Mon, 22 Mar 2004 07:26:07 -0500
Hi Paul,
Have you looked at the TcpClient and TcpServer (in System.Net.Sockets
namespace) classes in .NET? I found these tremendously usefull for writing
Client/Server applications.
Steve
"Paul Ingles" <paul@REMOVECAPSooAbaloAo-dot-com> wrote in message
news:OxtD8s0DEHA.2768@tk2msftngp13.phx.gbl...
> I'm looking to build a TCP based service that will listen for connections
> from a Flash client and I was wondering whether someone could check over
my
> thoughts here and let me know.
>
> It will handle XML messages that are sent by connected clients, process
them
> and then return XML back providing basic instant messaging.
>
> Although it will be providing 2 separate kinds of services, I'd like to
have
> it in one service because of firewall port restrictions.
>
> Broadly messages will be either:
>
> 1) An indication that someone has viewed a web page and indicating they're
> connected to the site, this registers their details to the TCP server when
> they logged in on the site. This connection is also then used to send
> messages to the clients indicating someone would like to start a chat
(which
> then behind the scenes launches another window with the actual IM Flash
> app). Since the flash app will be re-loaded between each page request and
> thus re-start a socket connection this falls into using a thread pool
model
> best. I also estimate it to handle around
>
> 2) Longer lasting connections that handle the IM messages, these contain
> details about the chat session ID so that messages can then be forwarded
to
> the other clients by the server. Although these are long-lived, I'm
guessing
> that the chances of them being high throughput is pretty small so again a
> thread pool model server may be best.
>
> My question is based around the best way to implement the server, I've
been
> reading Network Programming for Microsoft Windows (2nd Ed) by Jones and
> Ohlund as a starting point. They seem to favour the asynchronous model
quite
> strongly, and point out that the .NET Socket class when used on NT-based
> systems using the I/O completion ports model (which in their tests
provided
> the highest throughput and minimal CPU usage).
>
> How does the .NET Async model determine the number of concurrent sockets
it
> can maintain? From what I'd been reading it's necessary to post enough
> Accept/Receive etc. calls that can then be consumed and used instantly, am
I
> right to assume that the .NET Socket class does this itself internally? Is
> it just a case that it accepts as many as it can?
>
> At the moment I have some Asynch Socket code that uses the
ManualResetEvent
> class as a member field of the server's class that is used as follows:
>
> -----
> Console.WriteLine("Waiting for a connection...");
>
> listener.BeginAccept(
> new AsyncCallback(AcceptCallback),
> listener );
>
> // Wait until a connection is made before continuing.
> allDone.WaitOne();
> -----
>
> allDone.Set() is then called at the beginning of the AcceptCallback method
> causing the above code to be executed again, wait for another connection
and
> so on.
>
> Since I'm fairly new to socket programming I was wondering if someone
could
> let me know whether I've misunderstood anything, or whether there's
anything
> else important that I haven't considered.
>
> In terms of processing, I'm hoping it should be reasonably well
performing.
> It'll use .NET's XML Serialisation support to parse the messages, and a
> Hashtable will be used to locate connected Sockets quickly so that with a
> couple of connections active it can find the associated client and send
> messages on. Incidentally, once I have it running as a simple
> console/windows app I'll be modifying it to then run as a Windows service.
>
> I really appreciate your time if you've read down this far, as I said, I'm
> really looking to get some kind of second eye to see if I've not taken
> something into account.
>
> Thanks for any suggestions/comments,
> Paul
>
>
note in the middle of the page says "...you can download the complete ActionScript libraries from"
when in fact some libraries come straight from Macromedia.
Therefore, the following HAS BEEN ADDED to the end of the note:
"However, some library files, such as RecordSet.as and DataGlue.as must be downloaded from Macromedia's web site as part of
the Flash Remoting Components, available at"
The first occurrence was In paragraph 4 (the short paragraph beginning with "After defining...")
The second occurrence was in line 2 of the subsequent example code.
The first instance NOW READS:
setTransform ()
The second occurrence NOW READS:
my_color.setTransform(myColorTransform);
reads
"....23 dollars and 40 cents is formatted as $24.40, not $23.4."
should be:
"....23 dollars and 40 cents is formatted as $23.40, not $23.4."
return (Number(a.val) > Number(b.val))
NOW READS:
return (Number(a.val) > Number(b.val));
Grammar error:
...contained within the object but lying outside the its visible area.
should be:
...contained within the object but lying outside its visible area.
The code was decribed as :
String.prototype.simpleReplace = function (search, replace, working) {
The correct is
String.prototype.simpleReplace = function(search, replace, matchCase) {
The previous URL was:
NOW READS:
And the URL:
NOW READS:
Last paragraph dicribing alternatives to instantiation of classes unable to instatiate with constructor
"new".
"... if you want to create a new text field at authoring time? You might be tempted....."
Should be
"... if you want to create a new text field at run time? You might be tempted....."
There's a typo in the book
for (var i = 0; i < _global.SoundQueue_array.length; i++) {
startNextInQueue[i].onSoundComplete = startNextInQueue;
}
should be:
for (var i = 0; i < _global.SoundQueue_array.length; i++) {
SoundQueue_array[i].onSoundComplete = startNextInQueue;
}
There were six occurrences of the word "myCam".
All six instances HAVE BEEN CHANGED to:
"myCam_cam"
var position = subscribe_ns.duration / streamLength * 100;
should be:
var position = subscribe_ns.time / streamLength * 100;
print $res->headers_as_string, "
", $res->content;(
Should Be:
print $res->headers_as_string, "
", $res->content;
The last "if" statement of the example:
if (this.loadingObj.isLoaded){
clearInterval(this.interval);
}
NOW READS:
if (this.monitored.isLoaded){
clearInterval(this.interval);
}
my_xml= new XML("<abc><a>eh</a><b>bee</b><c>cee</c>");
should be:
my_xml= new XML("<abc><a>eh</a><b>bee</b><c>cee</c></abc>");
"...in which the service object is set when calling getService()"
NOW READS:
"response object" instead of "service object"
© 2016, O’Reilly Media, Inc.
Trouble to create a list without duplicates
edited May 19 in OpenSesame
Hey guys,
Maybe I made a very naive mistake, but when I try to create a list without duplicates it doesn't work and I can't figure out why... while the same piece of code works perfectly in a Python interpreter.
Here is my code:

import random

for item in range(1, var.Level + 1):
    var.r = random.randint(1, 9)
    if var.r not in var.x:  # <-- Here is my problem. It seems that the test is not done
        var.x.append(str(var.r))
When I print var.x sometimes I have duplicates.
Does someone has an idea about this problem ?
Thanks in advance,
Joan
Hi Joan,
I can't tell you what is wrong, because you haven't added the part of the code in which you define var.x. The code itself looks alright, I think. An easier way to create a list without duplicates is using `sets`
For example:
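Something along these lines (sticking to the names from your script — `level` stands in for `var.Level`, and the values come from the same 1–9 range):

```python
import random

level = 5            # stands in for var.Level
x = set()            # a set silently drops duplicates on insertion
while len(x) < level:
    x.add(random.randint(1, 9))
x = [str(r) for r in x]   # back to a list of strings, like var.x
print(x)

# Or, shorter: sample without replacement directly.
x = [str(r) for r in random.sample(range(1, 10), level)]
print(x)
```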
Hope this helps,
Eduard
Hi eduard,
Thanks for the quick response.
Well, this is because I defined the variable x at the beggining of the experiment. I have an inline script where I define all the variables.
But thanks for the trick, I'll test that and let you know if it is good :)
Joan | https://forum.cogsci.nl/discussion/7205/trouble-to-create-a-list-without-duplicates | CC-MAIN-2021-25 | refinedweb | 223 | 84.78 |
Type: Posts; User: mgau0047
I have 2 lists one for JuniorStudents and one for SeniorStudents. In the JuniorStudents I have a method for promoting a juniorstudent to a seniorStudent. I am first deleting the juniorstudent and...
I have a program which contains JuniorStudents and SeniorStudents. I've made seperate classes and in each class there is a list. Now when the student is year 6 or year 7 I shall promote it to senior...
Yes that is what I wanted to do. Can you please give me an example of how can I do it because I didn't understand well with words
Also I am getting an authorize access exception in the Read method. Do you know why this is so?
Ok thanks but do you think the Write method is implemented correctly?
Hi I have the following code which adds student details and their marks to a list and then save it to a text file. I am not implementing the WriteAll() method correctly. Could someone help me in...
Can you show me how can I do that pls becuase I haven't done it before
I did it now. Can you pls help me in fixing the problem now pls?
AVLProcess ap = new AVLProcess();
AVLTree at = ap.ReadInputFile("E:\\Java files txt\\input.txt", "E:\\Java files txt\\output2.txt");
String input = JOptionPane.showInputDialog(null, "Please select...
I can't do it sry. Pls help me solving the problem because I'm fed up of trying different solutions
AVLProcess ap = new AVLProcess();
AVLTree at = ap.ReadInputFile("E:\\Java files txt\\input.txt", "E:\\Java files txt\\output2.txt");
String input = JOptionPane.showInputDialog(null, "Please select...
How do I post it pls because I don't know. Forgive me for this time only pls
Sorry I will not do it again. Cam you pls help me but?
I don't want this type of answer. I want my answer to the question regardless of the tagging. Can you give it?
Hi all I did a program containing text files and now I want to make it on a cd. When I did so an error occured when I ran the program from my cd telling me that access is denied. My program is on...
Solved it thanks anyways
Hi i'm writing this code for an AVL tree. The program is working fine but it is not displaying the left and right child. Can you help me in fixing the problem. Here is my code:
package testavltree;...
Yes exactly it is an AVL tree. I am trying to insert nodes into the array and I have an error in the insert method that I have. I want to implement that one in the brackets but I don't know how.Can...
This is the node class:
public class Node {
public int value;
public Node leftNode;
public Node rightNode;
public Node() {
Can you give me an example of how I can do that because I've never done it
Error: Method AVLInsert in class testavltree.AVLInsert cannot be applied to given types;
required: testavltree.Node, testavltree.Node
found: int
reason: actual and formal argument lists differ in...
Hi I have the following code and I am getting an error (line in comments). Could someone arrange it for me pls
public static void AvlInsert(Node nd, Node newNode) {
if (newNode.value <...
Can you check if these rotations methods are correct pls and if the balance method return the right root. I arranged them a bit
public Node balance(Node node) {
if...
So in my code could you please tell me where I should do this
So, in my program where should I apply the balance method pls if not on the root? Which subtrees? | http://forums.codeguru.com/search.php?s=eabb4f3d6ab62225c7bd56930b60dfc9&searchid=2753545 | CC-MAIN-2014-15 | refinedweb | 624 | 76.52 |
On Thu, 2018-02-22 at 07:54 +0100, Niels Möller wrote: > ni...@lysator.liu.se (Niels Möller) writes: > > > > 2. Delete the old aes_* interface, in favor of aes128_, aes192_* > > > and > > > aes256_*. > > > > I've now made a branch for this, delete-old-aes. > > And it seems building gnutls with this branch fails, see > > > aes-padlock.c: In function 'padlock_aes_cipher_setkey': > aes-padlock.c:65:17: error: storage size of 'nc' isn't known > struct aes_ctx nc; > ^~ > > It's great to have that ci job set up.
Thanks for bringing that up. I have a quick fix for that, although I no longer have such systems for checking. I dropped AES-192 accelerated support as part of that patch as well.

How widely used are these macros? Searching debian code seems to show gnutls (in fips140 drbg code), stoken, qemu, rdup, filezilla, pike, cmake, uanytun, haskell-bindings-nettle, libarchive, anytun, and mosh. That seems to be quite a popular API and removing it would break those projects. Why not keep it backwards compatible and mark it as deprecated with a macro (copied from gnutls)?

#ifdef __GNUC__
# define _GNUTLS_GCC_VERSION (__GNUC__ * 10000 + __GNUC_MINOR__ * 100 + __GNUC_PATCHLEVEL__)
# if _GNUTLS_GCC_VERSION >= 30100
#  define _GNUTLS_GCC_ATTR_DEPRECATED __attribute__ ((__deprecated__))
# endif
#endif

#ifndef _GNUTLS_GCC_ATTR_DEPRECATED
#define _GNUTLS_GCC_ATTR_DEPRECATED
#endif

regards,
Nikos

_______________________________________________
nettle-bugs mailing list
nettle-bugs@lists.lysator.liu.se
QDomAttr
The QDomAttr class represents one attribute of a QDomElement. More...
#include <QDomAttr>
Note: All functions in this class are reentrant.
Public Functions
Detailed Description
The QDomAttr class represents one attribute of a QDomElement. The attribute's value is returned by value() and set with setValue(). The node this attribute is attached to (if any) is returned by ownerElement().
For further information about the Document Object Model see and. For a more general introduction of the DOM implementation see the QDomDocument documentation.
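The Attr concepts described here — the attribute's name, its value, and its owner element — are common to every DOM implementation, not just Qt's. For readers who want to experiment without a Qt build, a quick illustration with Python's standard-library DOM (not Qt-specific; shown only to demonstrate the shared-node behaviour):

```python
from xml.dom.minidom import parseString

doc = parseString('<link href="http://example.com/"/>')
elem = doc.documentElement
attr = elem.getAttributeNode("href")   # the Attr node, analogous to QDomAttr
print(attr.name, attr.value)

# Writing through the Attr node updates the owning element, because the
# attribute object and the element refer to the same underlying node --
# the same "shared data" behaviour QDomAttr documents for its copies.
attr.value = "http://example.org/"
print(elem.getAttribute("href"))       # -> http://example.org/
print(attr.ownerElement.tagName)       # -> link
```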
Member Function Documentation
QDomAttr::QDomAttr ()
Constructs an empty attribute.
QDomAttr::QDomAttr ( const QDomAttr & x )
Constructs a copy of x.
The data of the copy is shared (shallow copy): modifying one node will also change the other. If you want to make a deep copy, use cloneNode().
On Friday 18 July 2008, Evgeniy Polyakov wrote:
> Hi.
>
> On Fri, Jul 18, 2008 at 08:04:44PM +0300, Octavian Purdila (opurdila@ixiacom.com) wrote:
> > Suppose we have 20 packets in the socket queue and the pipe is empty and
> > the application calls splice(sock, pipe, 17, flags=0).
> >
> > Then, tcp_splice_read will be called, which in turn calls tcp_read_sock.
> >
> > tcp_read_sock will loop until all the 17 bytes will be read from the
> > socket. tcp_read_sock calls skb_splice_bits which calls splice_to_pipe.
>
> How come?
> spd_fill_page() should fail when it will be called for the 17'th skb and
> all reading from the socket will return, and thus can be sent to the
> file.

spd_fill_page() works with the splice_pipe_descriptor declared in
skb_splice_bits, thus spd_fill_page does not have visibility across multiple
skb_splice_bits calls.

> > Now while skb_splice_bits is carefull to only put a maximum of
> > PIPE_BUFFERS during its iteration, due to the looping in tcp_read_sock,
> > we will end up with 17 calls to splice_to_pipe. Thus on the 17th call,
> > splice_to_pipe will block.
>
> Where exactly?
> Why
> tcp_splice_data_recv()->skb_splice_bits()->__skb_splice_bits()->spd_fill_page()
> callchain does not return error and that pipe is full?

Ok, let me try to move through the function calls:

tcp_read_sock
  ... -> skb_splice_bits -> spd_fill_page; on return (spd->nr_page is 1 and pipe->nrbufs is 1)
  ... -> skb_splice_bits -> spd_fill_page; on return (spd->nr_page is 1 and pipe->nrbufs is 2)
  ... -> skb_splice_bits -> spd_fill_page; on return (spd->nr_page is 1 and pipe->nrbufs is 3)

...and so on until pipe->nrbufs is 16. At that point, we will block in
pipe_wait, inside splice_to_pipe.

Thanks,
tavi
Hi.
In this series of posts, we will describe how a Haskell web application can be developed using reflex-platform.

reflex-platform offers the reflex and reflex-dom packages. The reflex package is the Haskell implementation of Functional Reactive Programming (FRP). The reflex-dom library contains a large number of functions, classes, and types used when dealing with the DOM. The packages are separated because it is possible to use the FRP approach not only for web development. We will develop the Todo List application that allows carrying out various manipulations on the task list.
Understanding this series of articles requires some knowledge of the Haskell programming language; it will also be useful to get an idea of functional reactive programming first.

I won't provide a detailed description of the FRP approach. The only thing worth mentioning is the two basic polymorphic types the approach is based on:

- Behavior a is a reactive time-dependent variable. It is a certain container that holds a value during its entire life cycle.
- Event a is an event that occurs in the system. It carries information that can only be retrieved when the event fires.

The reflex package also offers another new type:

- Dynamic a is the combination of Behavior a and Event a, i.e. this is a container that always holds a certain value and, similarly to an event and unlike Behavior a, it can notify about its change.
reflex deals with the notion of a frame, i.e. a minimum time unit. A frame starts together with the occurred event and lasts until the data processing in this event stops. An event can produce other events generated, for instance, by filtering, mapping, etc. In this case, these dependent events will also belong to the same frame.
First of all, we will need to install the nix package manager. The installation procedure is described here. If you want to find out more about Nix and get familiar with it, check out our blog posts.

It makes sense to configure the nix cache to speed up the build. If you don't use NixOS, add the following lines to /etc/nix/nix.conf:
binary-caches = https://cache.nixos.org https://nixcache.reflex-frp.org
binary-cache-public-keys = cache.nixos.org-1:6NCHdD59X431o0gWypbMrAURkbJ16ZPMQFGspcDShjY= ryantrinkle.com-1:JJiAKaRv9mWgpVAz8dwewnZe0AzzEAzPkagE9SP5NWI=
binary-caches-parallel-connections = 40
If you use NixOS, add the following options to your system configuration:

nix.binaryCaches = [ "https://cache.nixos.org" "https://nixcache.reflex-frp.org" ];
nix.binaryCachePublicKeys = [ "ryantrinkle.com-1:JJiAKaRv9mWgpVAz8dwewnZe0AzzEAzPkagE9SP5NWI=" ];
In this tutorial, we will use the standard structure consisting of three packages:

- todo-client is the client part;
- todo-server is the server part;
- todo-common contains shared modules used by the server and the client (for instance, API types).
After that, it is necessary to prepare the development environment. Follow the steps described in the documentation:

- create the project directory todo-app;
- create the packages todo-common (library), todo-server (executable), and todo-client (executable) in todo-app;
- configure nix (file default.nix in directory todo-app);
- add the option useWarp = true;;
- configure the cabal build (files cabal.project and cabal-ghcjs.project).
At the moment of publication of this post, default.nix will look something like this:

(import (builtins.fetchGit "https://github.com/reflex-frp/reflex-platform") {}).project ({ pkgs, ... }: {
  useWarp = true;

  packages = {
    todo-common = ./todo-common;
    todo-server = ./todo-server;
    todo-client = ./todo-client;
  };

  shells = {
    ghc = ["todo-common" "todo-server" "todo-client"];
    ghcjs = ["todo-common" "todo-client"];
  };
})
Note: the documentation suggests cloning the reflex-platform repository manually. In this example, we used nix tools to get it from the repository.
During client development, it is convenient to use the ghcid tool, which automatically updates and relaunches the application after the source code changes.

To make sure that everything is working as intended, add the following code to todo-client/src/Main.hs:

{-# LANGUAGE OverloadedStrings #-}

module Main where

import Reflex.Dom

main :: IO ()
main = mainWidget $ el "h1" $ text "Hello, reflex!"
The development is carried out in nix-shell, which is why you have to open this shell at the very beginning:

$ nix-shell . -A shells.ghc

To start through ghcid, type in the following command:

$ ghcid --command 'cabal new-repl todo-client' --test 'Main.main'
If everything is working, you'll see Hello, reflex! at localhost:3003. The port number is searched for in the JSADDLE_WARP_PORT environment variable. If this variable is not set, the value 3003 is used by default.
You might have noticed that we used plain GHC instead of GHCJS during the build. This is possible because we use the jsaddle and jsaddle-warp packages. The jsaddle package offers a JS interface for GHC and GHCJS. Using the jsaddle-warp package, we can start a server that will update the DOM over web-sockets and act as a JS engine. Just to this end, we set the flag useWarp = true;, otherwise the jsaddle-webkit2gtk package would have been used by default and we would see a desktop application on start. It's worth mentioning that there are also such interfaces as jsaddle-wkwebview (for iOS applications) and jsaddle-clib (for Android applications).
Let’s get down to development!
Add the following code to todo-client/src/Main.hs.

{-# LANGUAGE MonoLocalBinds #-}
{-# LANGUAGE OverloadedStrings #-}

module Main where

import Reflex.Dom

main :: IO ()
main = mainWidgetWithHead headWidget rootWidget

headWidget :: MonadWidget t m => m ()
headWidget = blank

rootWidget :: MonadWidget t m => m ()
rootWidget = blank
We can say that the function mainWidgetWithHead is the <html> element of the page. It accepts two parameters – head and body. There are also the functions mainWidget and mainWidgetWithCss. The first function accepts only a widget with the body element. The second one accepts styles, which are added to the style element, as the first argument, and the body element as the second argument.
Any HTML element or an element group will be designated as a widget. A widget can have its own event network and produce some HTML code. As a matter of fact, any function generating a result of a type belonging to the type classes responsible for DOM building can be called a widget.

The function blank is equal to pure (): it performs nothing, doesn't change the DOM in any way, and does not influence the event network.
Now let's describe the <head> element of our page.

headWidget :: MonadWidget t m => m ()
headWidget = do
  elAttr "meta" ("charset" =: "utf-8") blank
  elAttr "meta"
    (  "name" =: "viewport"
    <> "content" =: "width=device-width, initial-scale=1, shrink-to-fit=no"
    ) blank
  elAttr "link"
    (  "href" =: ""
    <> "rel" =: "stylesheet"
    <> "integrity" =: "sha384-Vkoo8x4CGsO3+Hhxv8T/Q5PaXtkKtu6ug5TOeNV6gBiFeWPGFN9MuhOf23Q9Ifjh"
    <> "crossorigin" =: "anonymous"
    ) blank
  el "title" $ text "TODO App"
This function generates the following content of the head element:

<meta charset="utf-8">
<meta content="width=device-width, initial-scale=1, shrink-to-fit=no" name="viewport">
<link crossorigin="anonymous" href="" integrity="sha384-Vkoo8x4CGsO3+Hhxv8T/Q5PaXtkKtu6ug5TOeNV6gBiFeWPGFN9MuhOf23Q9Ifjh" rel="stylesheet">
<title>TODO App</title>
MonadWidget class allows building or rebuilding the
DOM and defining the network of events that occur on the page.
elAttr function looks as follows:
elAttr :: forall t m a. DomBuilder t m => Text -> Map Text Text -> m a -> m a
It takes the tag name, attributes, and content of the elements. This function, as well as the whole set of
DOM building functions, returns what is returned by its internal widget. In this case, our elements are empty, which is why we use
blank. This is one of the most frequent uses of this function, when it is necessary to create an empty element body.
el function is used in the same way. Its input parameters include only the tag name and content. In other words, this is a simplified version of
elAttr function without attributes. Another function we use here is
text. Its task is to display text on the page. This function displays all possible control characters, words, and tags, which is why exactly the text passed to this function will be displayed. Function
elDynHtml is used to embed an HTML chunk.
It has to be said that in the example above the use of
MonadWidget is redundant because this part builds an immutable
DOM area. As stated before,
MonadWidget allows building or rebuilding
DOM, as well as defining the network of events. The functions we are using in this case require only the availability of
DomBuilder class, and here, indeed, we could write only this constraint. However, in general, there are far more constraints on the monad, which may hamper and slow down the development if we write only the classes we need at the moment. This is where we need
MonadWidget class that looks like some sort of a multitool. For those who are curious, we give the list of all classes working as the
MonadWidget superclasses:
type MonadWidgetConstraints t m =
  ( DomBuilder t m
  , DomBuilderSpace m ~ GhcjsDomSpace
  , MonadFix m
  , MonadHold t m
  , MonadSample t (Performable m)
  , MonadReflexCreateTrigger t m
  , PostBuild t m
  , PerformEvent t m
  , MonadIO m
  , MonadIO (Performable m)
#ifndef ghcjs_HOST_OS
  , DOM.MonadJSM m
  , DOM.MonadJSM (Performable m)
#endif
  , TriggerEvent t m
  , HasJSContext m
  , HasJSContext (Performable m)
  , HasDocument m
  , MonadRef m
  , Ref m ~ Ref IO
  , MonadRef (Performable m)
  , Ref (Performable m) ~ Ref IO
  )

class MonadWidgetConstraints t m => MonadWidget t m
Now let's move to the page element body, after defining the data type we will use for our task:

newtype Todo = Todo
  { todoText :: Text
  }

newTodo :: Text -> Todo
newTodo todoText = Todo {..}
The body will have the following structure:

rootWidget :: MonadWidget t m => m ()
rootWidget =
  divClass "container" $ do
    elClass "h2" "text-center mt-3" $ text "Todos"
    newTodoEv <- newTodoForm
    todosDyn <- foldDyn (:) [] newTodoEv
    delimiter
    todoListWidget todosDyn
The input of the
elClass function includes the tag name, class(es) and content.
divClass is the shorter version of
elClass "div".
All functions mentioned are responsible for visual presentation and bear no logic, as opposed to
foldDyn function. It is defined in
reflex package and has the following signature:
foldDyn :: (Reflex t, MonadHold t m, MonadFix m) => (a -> b -> b) -> b -> Event t a -> m (Dynamic t b)
It looks like
foldr :: (a -> b -> b) -> b -> [a] -> b and actually plays the same role but uses an event instead of a list. The resulting value is wrapped in
Dynamic container because it will be updated with each event. The updating procedure is set by the parameter function with the input consisting of the value from the occurred event and the current value from
Dynamic. These values are used to form a new value to be stored in
Dynamic. The update will take place each time the event occurs.
In our example,
foldDyn will update the dynamic task list (which is initially empty) as soon as a new task is added from the input form. New tasks will be added to the beginning of the list because we use the function
(:).
Function
newTodoForm builds the part of
DOM containing the task description input form and returns the event bringing the new
Todo. The occurrence of this event will start the task list update.
newTodoForm :: MonadWidget t m => m (Event t Todo)
newTodoForm = rowWrapper $
  el "form" $
    divClass "input-group" $ do
      iEl <- inputElement $ def
        & initialAttributes .~
          (  "type" =: "text"
          <> "class" =: "form-control"
          <> "placeholder" =: "Todo"
          )
      let newTodoDyn = newTodo <$> value iEl
          btnAttr = "class" =: "btn btn-outline-secondary"
                 <> "type" =: "button"
      (btnEl, _) <- divClass "input-group-append" $
        elAttr' "button" btnAttr $
          text "Add new entry"
      pure $ tagPromptlyDyn newTodoDyn $ domEvent Click btnEl
The first innovation we see here is the
inputElement function. Its name speaks for itself, as it adds an
input element. It takes on
InputElementConfig type as the parameter. It has a lot of fields, inherits several different classes, but adding the required attributes to this tag is the most interesting in this case. This can be done using
initialAttributes lens. Function
value is a method of
HasValue class returning the value existing in this
input. For the
InputElement type, it has the type of
Dynamic t Text. This value will be updated after each change in the
input field.
The next change we can notice here is the use of
elAttr' function. The difference between the functions with a stroke and the functions without one for
DOM building is that these functions additionally return the very page element we can manipulate. In our case, we need it to obtain the event of clicking on this element.
domEvent function serves this purpose. This function assumes the name of the event – in our case,
Click – and the element the event is related to. The function has the following signature:
domEvent :: EventName eventName -> target -> Event t (DomEventType target eventName)
Its return type depends on the event type and the element type. In our case, this is
().
The next function we see is
tagPromptlyDyn. Its type is as follows:
tagPromptlyDyn :: Reflex t => Dynamic t a -> Event t b -> Event t a
If the event is triggered, the task of this function will be to place the value presently existing inside
Dynamic into the event. That is, the event resulting from function
tagPromptlyDyn valDyn btnEv occurs simultaneously with
btnEv but carries the value held by
valDyn. In our example, this event will occur after a button click and carry the value from the text field.
Now it has to be mentioned that functions containing the word
promptly in their name are potentially dangerous as they can call cycles in the event networks. On the surface, this will look as if the application got hung up. Where possible,
tagPromptlyDyn valDyn btnEv call should be replaced with
tag (current valDyn) btnEv. Function
current receives
Behavior from
Dynamic. These calls are not always interchangeable. If a
Dynamic update and an
Event event in
tagPromplyDyn occur at the same moment, i.e. in one frame, the output event will contain the data which
Dynamic obtained in this frame. If we use
tag (current valDyn) btnEv, the output event will contain the data the initial
current valDyn, i.e.
Behavior, had in the previous frame.
Now we've come to another difference between
Behavior and
Dynamic: if a
Behavior and a
Dynamic are updated within one frame, the
Dynamic is updated in that frame, while the
Behavior takes on the new value in the next one. In other words, if updates arrive at some time
t1 and some later time
t2, the
Dynamic holds the value brought by the event at
t1 during the period
[t1, t2), while the
Behavior holds that value during
(t1, t2].
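The timing rule above can be illustrated with a toy pure model. To be clear, this is not reflex itself, just a hypothetical simulation in which a "frame" is an integer index and a single update from "old" to "new" happens at frame 1:

```haskell
-- A simplified, hypothetical model of the frame semantics described above.
type Frame = Int

-- A Dynamic already shows the new value in the frame of its own update:
-- it holds "new" on [t1, ...), here with t1 = 1.
dynAt :: a -> a -> Frame -> a
dynAt old new t = if t >= 1 then new else old

-- The corresponding Behavior lags one frame: it holds "new" on (t1, ...].
behAt :: a -> a -> Frame -> a
behAt old new t = if t > 1 then new else old

main :: IO ()
main = do
  -- At the update frame itself (t = 1), the Dynamic sees the new value
  -- while the Behavior still reports the old one:
  putStrLn (dynAt "old" "new" 1)  -- new
  putStrLn (behAt "old" "new" 1)  -- old
  putStrLn (behAt "old" "new" 2)  -- new
```

This is also why tag (current valDyn) btnEv and tagPromptlyDyn valDyn btnEv differ only when the update and the event land in the same frame.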
The
todoListWidget function displays the entire
Todo list:

todoListWidget :: MonadWidget t m => Dynamic t [Todo] -> m ()
todoListWidget todosDyn = rowWrapper $
  void $ simpleList todosDyn todoWidget
Here we meet the function
simpleList. It has the following signature:
simpleList
  :: (Adjustable t m, MonadHold t m, PostBuild t m, MonadFix m)
  => Dynamic t [v] -> (Dynamic t v -> m a) -> m (Dynamic t [a])
This function is part of the
reflex package. In our case, it is used to lay out repeated elements in the
DOM, where the
div elements will be listed one after another. It takes a
Dynamic list that changes over time and a function used to process each element separately. Here this is just a widget used to display one element of the list:
todoWidget :: MonadWidget t m => Dynamic t Todo -> m ()
todoWidget todoDyn =
  divClass "d-flex border-bottom" $
    divClass "p-2 flex-grow-1 my-auto" $
      dynText $ todoText <$> todoDyn
The
dynText function differs from
text in that its input is text wrapped in
Dynamic. If a list element changes, this value will also be updated in the
DOM.
We also used two more functions not mentioned before:
rowWrapper and
delimiter. The first is a wrapper widget. It contains nothing new and looks as follows:

rowWrapper :: MonadWidget t m => m a -> m a
rowWrapper ma =
  divClass "row justify-content-md-center" $
    divClass "col-6" ma
The
delimiter function just adds a delimiting element:

delimiter :: MonadWidget t m => m ()
delimiter = rowWrapper $
  divClass "border-top mt-3" blank
The result we obtained can be viewed in our repository.
This is all you need to build a simple, still-incomplete
Todo application. In this part, we described the environment configuration and began developing the application. In the next part, we'll add operations on the list elements.
The creator of Vapor, a web framework for Swift, explains why you should consider using Swift for your next server-side project. You will learn about what makes Swift a great server-side language, what you can create, and how to deploy your first Swift web app.
Introduction
Two years ago I was working at a startup here in New York as both an iOS developer and a backend developer. As I was working there, I was constantly jumping between using Swift for iOS and a scripted language for the backend, and I kept wishing that I could use Swift on the backend. Perhaps due to some divine intervention, this was right when Swift was open-sourced and more importantly, a Linux compatible compiler was released.
I was excited about that and later I created Vapor. Due to some generous sponsorship we’re able to work full time on it with four developers.
I want to talk first about using Swift as a backend since many of you out there are iOS developers. Then I want to talk about getting started with Vapor, and what it looks like to use it and to deploy it.
Backend Options
Why use a Swift web framework for your backend? There are a lot of great options to use for your backend, but from the perspective of a Swift developer, they all come with tradeoffs.
“Developer happiness” here is an amalgamation of how easy it is to get started, how easy it is to use, and how easy it is to maintain. By “functionality,” I mean how much you can get done with this backend.
Built-in Frameworks
These are things that are built into the platform itself, e.g. CloudKit or Game Center. They’re easy to set up and they’re going to feel native to use them because they’re part of the platform. They’ll do the basics but they’re not going to do everything you need. If you need a complex backend, you’re probably going to want something more.
They’re also not cross-platform. If you want your idea to end up supporting Android or the web, you’ll have to find something else for those platforms.
BaaS
Backends as a service, like Firebase or Parse (which is no longer available as a service but it exemplifies this category well), are great because they’re cross-platform. If you want to roll out to Android or the Web, you can do that with these services. The problem is that it’s not exactly easy to be DRY (Don’t Repeat Yourself) with these.
Get more development news like this
If you have an iOS app and you’re using a backend service, you have some backend logic in this app; e.g. interacting with a database or doing user auth. If you roll out a web version of the front end, you also create the backend logic there.
Now we have backend logic in two places, which can be a problem. What happens if you need to change the structure of your database? You have to change that in two places. If you build an Android app that exacerbates the problem further. You want for your backend logic to be in one place, on the backend, and the communication between your frontend and your backend should be an abstraction layer, an API. That’s not easy to do with these.
Traditional Web Frameworks
Let’s break out the big guns: the traditional web frameworks like Express.js for Node and Ruby on Rails. These can do anything. You can execute arbitrary code on the server with these, and they can work with any platform that supports http. You can use JSON, Protobuf, etc.
These frameworks carry two problems for Swift developers:
- We have to worry about deploying and maintaining our deployments, that can be a huge pain
- We’re not using Swift anymore. We might have to learn JavaScript or Ruby. We might need to get a new IDE for those. We might even need to install some sort of a VM on our computer to run them.
Server-Side Swift
With the current solutions we have this unfortunate relation between how much you can do with the backend and how easy they are to use.
With server-side Swift, we have all of the functionality of a framework – they are web frameworks themselves. But now we get to use the same development environment. We’re still using Swift and Xcode, and we can continue to use all of the skills that we’ve built over time on Xcode and Swift. That’s a huge win.
The other big advantage is that it’s easy to be DRY with server-side Swift. If your models are written in Swift you can break those out into separate packages and share those between your frontend and your backend. This allows you to have the compiler type-check the communication between your frontend and your backend. Anyone who has ever done any JSON parsing in an iOS app will know that’s the code that you don’t like to write.
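As a rough sketch of this DRY pattern (the type and field names here are hypothetical, not from Vapor itself), a model living in a shared package might look like this, with both the iOS app and the Vapor backend importing the same type so the compiler checks the API contract on both sides:

```swift
import Foundation

// Hypothetical shared model: defined once, used by both client and server.
struct TodoItem: Codable {
    let id: Int
    let title: String
    let done: Bool
}

// Client side: decode an API response into the shared type instead of
// hand-parsing JSON dictionaries.
let json = Data("""
{"id": 1, "title": "Ship it", "done": false}
""".utf8)

do {
    let todo = try JSONDecoder().decode(TodoItem.self, from: json)
    print(todo.title) // Ship it
} catch {
    print("decoding failed: \(error)")
}
```

If the database schema changes, the shared struct changes in one place, and every frontend that uses it fails to compile until it is updated.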
Getting Started
The Vapor toolbox
This is available on Mac OS and Ubuntu. It’s easy to install on both platforms, and Ubuntu will even install Swift for you.
This will help you do anything that you would commonly want to do with a Vapor project, e.g. create a new project, build, run, and test your projects, generate Xcode files, and even deploy.
Starting a New Vapor Project
$ vapor new Hello --api creates a new project. You put the name of the project and then pass the API flag to create a backend. You can also create a frontend application that would be an alternative to something like Angular, but we’ll stick with the backend example for now.
$ vapor xcode pulls in the dependencies using SPM and then it’ll even open Xcode for you.
Now we’re back in familiar Xcode, with our project files on the left and some of the example code included with the API template in the middle. But instead of opening an iOS simulator or a Mac OS app when you hit the play button, it’ll open up the console and display “the server is starting on local host at some port.” If we go to Safari and visit that address we can see our website. We didn’t need to install any VM’s, Linux, or anything on our computer – it just works right in Xcode.
In Xcode we have all of the tools available to us that we know and love, e.g. Breakpoints. If we set a breakpoint and then go back to that page and visit it, the debugger will break out and we can look at the request that we got and debug it. You can see the user agent is set to Safari which is what we visited this page with.
That is the development environment: Swift, Xcode, and the Vapor Toolbox.
Using Vapor
What is it like to actually use Vapor? I want to show you some new APIs, some of which are still in alpha, so even if you’ve used Vapor before, this might be some new stuff.
Routing
Routing is taking some identifying information from an http request and routing it your business logic. Some examples are:
GET /usersreturns a list of users
DELETE /todos/5deletes todo 5
import Vapor

let app = Application()
let router = try app.make(Router.self)

router.get("users") { req in
    return try User.all()
}

try app.run()
First, we want to import Vapor. Next we’re going to create an application, which is a service container and it holds your configuration. This is useful for creating things that you need in your application while you’re using Vapor.
Next we ask the app to make a router. The router is not a concrete type, it’s a protocol, and it declares all of the things that you need to be a router. This is important for two reasons:
- It makes it easy to swap out your components: If you want to use a different router, perhaps your backend would be more efficient with a different algorithm, you might want to use that router.
- For testing: If our application uses a router protocol to register all of it’s routes, then we could devise a unit test that goes through and puts in a test router that adds all the routes registered to an array. Then we can give it that test router and assert that that array contained what we expected.
We then use the router that we got and since there’s no custom configuration we’re going to get the default
try router. We’re going to register using
.get("users"). We give it a closure that accepts a request and returns a response. We’re using Fluent to get all users.
Then we run the app – that’s a full application.
Fluent ORM
Fluent is an ORM created by the core team. It lets you fetch and persist data, and create and migrate schema, similar to what Core Data would do for you. It also supports querying and advanced querying, such as computed fields, joins, aggregates, etc, all with a nice Swift syntax. You can do transactions: saving a bunch of things but if one of them fails you want to undo everything. It also supports raw querying.
Fluent supports both SQL and NoSQL, like MySQL, MongoDB, and Postgres. Sometimes you might want to get access to those layers underneath, for example if you wanted to select the version of MySQL you’re using. In those cases, Fluent makes it easy to get out of your way and let you access that layer.
Fetch and Persist Data
This is what it looks like to fetch data using Fluent:
import Fluent

let res = try CartoonCharacter.makeQuery()
    .filter("age" > 60)
    .filter("catchPhrase" == "Wubba lubba dub-dub!")
    .all()

print(res) // [CartoonCharacter]
We have a cartoon character model. Since it conforms to model we can call
makeQuery on that and get a query for cartoon characters. We can then add filters to that using the convenience operator syntax. And then call
.all and we get an array of cartoon characters, it’s very simple.
model
This is what the model looks like:
import Fluent

final class CartoonCharacter: Model {
    var name: String
    var age: Int
    var catchPhrase: String

    let storage = Storage()

    init(row: Row) throws {
        name = try row.get("name")
        age = try row.get("age")
        catchPhrase = try row.get("catchPhrase")
    }

    func makeRow() throws -> Row {
        var row = Row()
        try row.set("name", name)
        try row.set("age", age)
        try row.set("catchPhrase", catchPhrase)
        return row
    }
}
We have a basic Swift class. I'm declaring it final so I don't have to worry about required initializers, but that's not required. We have three basic properties: a name string, an age, and a catch phrase. We also put a storage object on it, which allows Fluent to maintain some internal state on your object. Then we have the parsing and serialization code for reading our object from the database and writing it back.
This is fairly easy to implement but luckily with Swift 4 and Codable, we don’t need that anymore.
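As a sketch of what that simplification looks like (the exact Fluent conformances differ across versions; this shows only the Codable part), the same model could shrink to:

```swift
import Foundation

// With Codable, the row parsing/serialization shown above is synthesized
// by the compiler; no hand-written init(row:) or makeRow() is needed.
final class CartoonCharacter: Codable {
    var name: String
    var age: Int
    var catchPhrase: String

    init(name: String, age: Int, catchPhrase: String) {
        self.name = name
        self.age = age
        self.catchPhrase = catchPhrase
    }
}
```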
Leaf
Leaf is a templating language that the Vapor core team wrote. It allows you to render data to HTML, which is useful if you’re creating a website with a lot of HTML pages. Even if you’re creating a backend, this can be useful for generating emails.
This supports expressions and
for and
if-else sugar that is very similar to what you get in Swift. If you’re used to Swift you’ll feel right at home in Leaf. It supports nested templates which I’ll show you and lazy and async. Lazy and async is cool, and it’s not out yet but it’s in the next release. I don’t think any other templating language has this.
Let’s imagine we have, in our template, a block of code and an
if block: If this resolves to true we’re going to display what is in that block. Imagine we’re looping over some users from the database. What happens if that if block resolves to false and that never gets displayed? Since Leaf is lazy, it’ll never even make that database call in the first place. Which can make big applications more performant.
Rendering Data to HTML
This is what it looks like to render data to HTML:
import Vapor

let app = Application()
let router = try app.make(Router.self)
let view = try app.make(ViewRenderer.self)

router.get("hello") { req in
    return try view.make("welcome", [
        "name": "Vapor"
    ])
}

try app.run()
This is the example from earlier but now I’ve added in a request to the app to create a
ViewRenderer. We use that
ViewRenderer to make a view named
welcome and we give it a
string-to-string context with name equal to
Vapor.
In
welcome.leaf we have simple echo syntax and we echo out the name, which will resolve to Hello Vapor:
<h1>Hello, #(name)!</h1>
Nested Templates
This is what nested templates look like. The base template is expecting a title and some content:
<html>
  <head>#(title)</head>
  <body>
    <div class="container">
      #(content)
    </div>
  </body>
</html>
Then we export the title, export the content, and then import that base template:
#export("title") { It works }

#export("content") {
  <div class="welcome">
    <img src="/images/it-works.png">
  </div>
}

#import("base")
The base template is itself a normal template: we could render it directly with a string-to-string dictionary and plug in values. Many templating languages use an import/base/extend type of metaphor in which base templates can't be used as regular templates; Leaf doesn't have that limitation.
If you create a new Vapor app with the web flag, this is the page that you get.
Deploying
Heroku is a popular way to deploy your app. You can use the Heroku command in the CLI tool to do this with just two commands. The first time you need to init your project for Heroku; the second time you can push it.
$ vapor heroku init $ vapor heroku push
Bluemix is a another great option. IBM created Kitura, which is another server-side Swift web framework. We’ve worked with them to make sure that Vapor runs well on Bluemix as well.
Vapor CLOUD
Another deployment option, built by the Vapor core team, is Vapor Cloud. This loops back to maintenance being one of the hard parts about using a web framework for your backend. Our goal was to make deploying your code and maintaining those deployments as easy as possible. Deploying your code can at best be a chore and at worst be utterly arcane. We wanted to change that and make deploying a web framework as easy as using a built-in backend.
We built Vapor Cloud on top of AWS. Since we know we’re running Vapor projects on there, we can do smart things, like have Swift pre-installed, which a lot of hosting solutions don’t have.
Another feature we’re working on now intelligently analyzes the configuration of your Vapor setup and can see things like if you’re using a MySQL database, we help you get a MySQL database set up and automatically inject the things you need for MySQL. No more worrying about the host name or MySQL’s port again.
We have a command line tool for this similar to Heroku. This is in open beta right now; if you want to try this out, it’s free to use during the beta phase.
It’s one command to deploy:
$ vapor cloud deploy. If it’s your first time deploying, it’ll walk you through the process of getting set up. And we have a dashboard as well. This was all written in Vapor, both the API and the front end.
To sum up, with Vapor and Vapor Cloud together, we hope to combine the functionality of a web framework with the simplicity of a built in backend.
Links:
About the content
This talk was delivered live in September 2017 at try! Swift NYC. The video was recorded, produced, and transcribed by Realm, and is published here with the permission of the conference organizers. | https://academy.realm.io/posts/try-swift-nyc-tanner-nelson-server-side-swift-using-vapor/ | CC-MAIN-2018-13 | refinedweb | 2,739 | 72.16 |
[Today's post is contributed by The Configuration Manager Writing Team]
Have you noticed that the Configuration Manager documentation on TechNet now supports community content? If you scroll down to the bottom of a topic, you'll see the community content area where you can add your own information that relates to that particular topic, and you can read information that other customers have provided. When you view a topic that currently has no community content published, it looks similar to the picture below:
Click the ? button and you'll find a short description of this feature with some additional links.
We already have an example of good community content added to How to Configure Windows Server 2008 for Site Systems.
Don't use community content to ask technical questions; instead, use the TechNet forums for those.
If you want clarification about something that's documented, or want to report a problem with the documentation, send an e-mail to the documentation team (SMSDocs@Microsoft.com) rather than using community content so that we can provide you with a personalized reply. We can't contact you as a result of community content you post, because although this requires that you log in, we do not have access to your e-mail address. The same is true when customers enter feedback comments using the Click to Rate and Give Feedback popup at the top of each topic. Sometimes customers ask us to reply to their questions, or the information they've provided needs clarification. Without an e-mail address we can't action the comment. This is sometimes really frustrating for us! As an example, "It doesn't work" doesn't tell us whether that person found an error in the documentation (and if so, where) or they have a technical support problem (which might be a problem with the product or simply a misconfiguration).
We hope to add the SMSDocs@Microsoft.com feedback link to our topics in the near future so that customers who want a response from us will use this method to communicate directly with the documentation team.
In the meantime, we are watching to see what great additional information you can share with other customers by using the community content feature.
--The Configuration Manager Writing Team
This posting is provided "AS IS" with no warranties, and confers no rights.
[Rob Stack from our UA team has provided today's post]
SQL Reporting Services in Configuration Manager 2007 R2 offers a number of advantages over traditional Configuration Manager reporting methods. One of the major advantages of this new feature gives users who may have little knowledge of SQL Server the ability to easily create reports using SQL Server Report Builder. Report builder allows the report author to drag and drop view items from the database to easily create ad-hoc reports. However, before the report author can create reports, you must define the views and objects within these views which will be presented. The items that will be presented to the report author are stored in a report model.
In this post, you'll discover how to create new SQL report models to enable report authors to create reports. The post covers only one possible scenario and you are encouraged to experiment with your own scenarios. Report Builder contains many more features than those listed in the post, such as the ability to create graphs and charts.
Creating a report model is a fairly lengthy procedure that is too long to be presented as a blog post, so you will find the tutorial attached to this post as a Word document. Please click the AuthoringReportModelsForCM2007R2.docx attachments link below to read the remainder of this post and feel free to post any comments or suggestions.
--Rob Stack
[Today's post is provided by Bhaskar Krishnan]
I started off writing a post on a rather powerful functionality provided in SQL Reporting Services called Data Driven Subscriptions, but then decided to expand it to include a tutorial on how to create Configuration Manager 2007 reports using SQL Reporting Services 2008 Report Builder 2.0. The report we create in this tutorial forms the basis for creating a data driven subscription which will be described in Part 2 of this post. This post attempts to highlight some of the powerful authoring functionalities in Report Builder 2.0 and how it can be leveraged to author Configuration Manager reports.
I have included the tutorial portion of this post as an attachment since it carries quite a few illustrative screenshots which would potentially make this post a bit too long. Please click the CreatingCM2007ReportsUsingReportsBuilder.docx attachments link below to read the remainder of this post and let me know your comments and queries. Also stay tuned for Part 2 of this series since it explores another neat functionality offered by SQL Reporting Services - Data driven subscriptions.
--Bhaskar Krishnan This posting is provided "AS IS" with no warranties, and confers no rights.
[Today's post is supplied by Carol Bailey]
DNS publishing was introduced in Configuration Manager 2007, and perhaps because of the vagueness in the term ("to publish" simply means to make available), we see a number of customer questions and confusions about this option - what it is and when it should be used. This post addresses the commonly asked questions and confusions that we've seen around this option.
First, let's confirm what DNS publishing does not do, so that we can eliminate the common confusions. DNS publishing in Configuration Manager Does NOT:
That's a long list of what DNS publishing in Configuration Manager doesn't do. So what does it do and what is it for? DNS publishing in Configuration Manager provides an optional, alternative service location method by which clients can find their default management point when this isn't possible with Active Directory Domain Services - perhaps because they are workgroup computers, or clients from another forest, or because the site is not publishing to Active Directory Domain Services.
There are two other methods that clients can use to find their default management point, so why add this new method? The other methods are to use WINS and the server locator point. One of the reasons for adding DNS publishing was for clients in native mode that couldn't use Active Directory Domain Services for service location. These clients cannot use WINS to locate their default management point (although they can use WINS to locate a manually added record for the server locator point, and for name resolution). Additionally, for native mode clients to use a server locator point, they must be configured with an option that weakens security so that they can use HTTP in addition to HTTPS. The other reasons included increased reliability and scalability. In large-scale networks, replication of WINS records or a non-joined up WINS solution can result in problems when you are relying on this method for service location. In comparison, DNS is better suited to highly distributed and more complex networks, which includes a disjointed namespace.
How DNS publishing works in Configuration Manager is that the client looks for a service location resource record (SRV RR) in DNS, in a particular domain, that contains its assigned site code. The SRV record can be created automatically by Configuration Manager (enable the option "Publish the default management point in DNS (intranet only)" on the Advanced tab of the site properties) or it can be created manually by the DNS administrator.
Configuration Manager 2007 supports RFC 2782 for service location records, which have the following format: _Service._Proto.Name TTL Class SRV Priority Weight Port Target. Within this record, the _Service field uses _mssms_mp_<sitecode>, where <sitecode> is the management point's site code (which is why you cannot use auto-site assignment, because you might have more than one site in a single domain). The Target field specifies the FQDN of the management point, which is why you must have an additional host record to resolve that name to an IP address.
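As an illustration (the site code ABC, domain, host name, and port shown here are hypothetical), the pair of records for a default management point could look like this in zone-file form:

```
; SRV record: clients assigned to site ABC look up this name
_mssms_mp_ABC._tcp.contoso.com. 3600 IN SRV 0 0 80 mp01.contoso.com.

; Host record so the Target FQDN in the SRV record resolves to an IP address
mp01.contoso.com.               3600 IN A   10.0.0.15
```

The first record is what the client queries for using its assigned site code; the second is the additional host record mentioned above, without which the Target FQDN cannot be resolved.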
How does the client know which DNS zone to use to look for this record? Because the client is configured with the domain suffix of its default management point - either by using the CCMSetup option DNSSUFFIX, or the UI option of "Specify or modify a DNS suffix for site assignment below" on the Advanced tab of the client properties. Yes, I know that this wording says it's used for site assignment, but it's inaccurate. However, the F1 help for this tab and option is accurate. Unfortunately, we didn't find this discrepancy until it was too late to change it. Site assignment uses Active Directory Domain Services or the server locator point, not management points. However, clients cannot be managed until they find their default management point in their successfully assigned site, so the net result is very similar.
Hopefully, by explaining how DNS publishing of the default management point works, you can now see why it doesn't do some of things on the Does Not list. Let's run through them one by one with an explanation. DNS publishing in Configuration Manager does not:
For more information about DNS publishing in Configuration Manager, and how service location works, see the Configuration Manager documentation library.
For customers already using DNS publishing of the default management point and wondering why the port field is not 80 or 443 as expected, see this blog post: Why is My Management Point Published in DNS with Port Number 79 - or No Port Number?
-- Carol Bailey
[Recently Ken Pan did an interview for the Chinese Configuration Manager R&D Team Blog. Our team members Jae Kim and Yumin Guo have graciously provided us with the English translation of an excerpt of that interview. The entire interview in Chinese can be read here.]
Answer: In the very beginning, SMS was not an independent product. In 1992, to promote Windows NT 3.1, a program manager in the Windows team had an idea to add a new feature to manage hardware assets, like hard disks, memory and etc., for computers in a domain. When the idea was demonstrated to a group of high-level managers, the managers found that it could be a better idea to create an independent product. That was the birth of SMS 1.0. However, during the product's infancy, the management team debated between keeping it as an independent product and merging it as a feature of Windows NT. The discussion repeatedly resurfaced every six months until the annual revenue of SMS 2.0 reached $100 million.
Question: SMS 2003 was a significant release. Revenue started to really take off at this point. What were the key success factors behind this?
Answer: Actually, the first breakthrough happened when we released SMS 2.0. At that time, most companies around the world encountered the Y2K scare. To address the looming problem, a lot of companies purchased SMS 2.0. Our product played a critical role to help those companies to avert disaster and ultimately realize huge cost savings. After deploying the product, they discovered that it also had much more useful functionality that improves IT management efficiencies. Since then, SMS has gradually been acknowledged as a critical piece of their IT management priorities by our customers and has become increasingly more popular.
For SMS 2003, the main reason behind the breakthrough was the success of software updates management. At that time, software viruses and worms were a big threat to IT security. Ensuring that every computer in the enterprise had the latest available patch became an important responsibility of IT professionals. WSUS (Windows Software Update Service) did not exist yet, so due to the software updates management capability in SMS 2003 it became the first choice of IT managers in the fight against software viruses and worms.
When we develop our product, customer requirements are always our top priority. We listen to what they say, understand their pain points, and think from their perspective when making decisions. By following these simple guiding principles, we're able to satisfy the most important requirements of our customers when releasing each version of our product. Solving the potentially costly Y2K issue, providing software updates management in SMS 2003, and introducing the popular OSD (operating system deployment) feature in Configuration Manager 2007 all contributed to continued success of the product.
Question: Can you tell us what the market share looked like with each of our releases?
Answer: Prior to the release of SMS 2.0, various competing products existed but no dominant player existed in the market. By the time SMS 2003 was released, the market was eventually dominated by few products such as SMS 2003, Altiris and LanDesk.
Question: What exciting features can customers expect in the next release of Configuration Manager?
Answer: In the next generation of Configuration Manager "User Centric" will be the principal theme. IT Administrators used to distribute software, apply software updates based on computers, while employees are likely to have several computers or mobile devices for their work. The new challenge for IT administrators is to build an infrastructure so that users can work on any of their computers or from anywhere to easily get the applications they need to do their job. The next version of Configuration Manager will help administrators to handle these challenges.
Question: How do you think the current economic downturn will impact Configuration Manager in the market?
Answer: There is a positive side and negative side. On the positive side, more and more customers will think of saving costs, which is one of the benefits of our product. On the negative side, customers may reduce their budget for upgrading to new operating systems. This will slow down the uptake of the new version of Configuration Manager. If we combine both the positive and negative, the overall result may not be significantly impacted but we'll have to wait for precise data to see the real effects.
Question: A consistent brand is important to have. So why did we change the brand of our product when going from SMS 2003 to System Center Configuration Manager 2007?
Answer: After becoming a part of the System Center brand, it would have been clumsy to say "System Center System Management Server," so we changed the name. Creating a unified brand which consists of a number of products is helpful to revenue and image building. Microsoft Office is the most successful example.
Question: Let's change gears a little. You've spent your entire career at Microsoft during which you have worked on the same product the entire time. Could you share with us both the highest and lowest moments of your career?
Answer: Most of the time it is enjoyable. This job gives me a lot of enjoyment. The most disappointing moment was when we released SMS 2.0. Back then, our quality control standards were not as good as they are today. It wasn't until SMS 2.0 SP2 was released that we achieved the kind of quality that our customers expect from us. Our team worked around the clock to test and fix the bugs. At the peak, hundreds of testers sat in a hall manually testing our product. After that, we continuously improved our process by increasing the proportion of automated testing.
Question: With such a long history in the same group, could you share with us any interesting tidbits that many customers would not be aware of?
Answer: Numbers can be interesting. Let's take a look at the following numbers.
[We got a reader request for information about the versions of SQL Server supported by Configuration Manager 2007. Martin Li has provided the answer in today's post.]
As part of the planning considerations for a Configuration Manager 2007 (ConfigMgr for short) deployment, we need to determine which version of SQL Server will be used to host the site database. Before the release of SQL Server 2008 and, later, SQL Server 2005 SP3, the only SQL Server version supported to host the ConfigMgr site database was SQL Server 2005 SP2.
With the release of SQL Server 2008, the SQL support scenarios became a little complex. Hopefully, this post will clear things up and help you to figure out the right deployment or upgrade strategy.
Following is an overview of the Configuration Manager 2007 support for the different SQL Server versions:
SQL Server 2005:
SQL Server 2008:
If you plan to use SQL Server 2005 for your site database implementation, the story is simple and easy. ConfigMgr RTM and SP1 can use SQL 2005 SP2 to host the site database with their original release - no hotfix is required. The only exception is ConfigMgr R2, where the client status reporting feature requires hotfix 959975 to work with site database in a SQL named instance.
Now, let's talk about using SQL Server 2008 to host the ConfigMgr site database.
SQL 2008 was released in August 2008, several months after the release of ConfigMgr SP1. We started to test SQL 2008 Beta and RC (Release Candidate) while ConfigMgr SP1 was still under development. Several issues were identified and were fixed in SP1, including three issues in the site server setup program.
So, the ConfigMgr SP1 setup program works correctly with a SQL 2008 site database, but the ConfigMgr RTM setup program does not. That's why ConfigMgr RTM does not support a fresh install using a SQL 2008 site database.
In order to get ConfigMgr RTM to work with SQL 2008, you will need to have an existing ConfigMgr RTM site with a SQL 2005 site database. You install hotfix 955229 and upgrade SQL 2005 to SQL 2008. You can choose to install the hotfix either before or after you upgrade SQL. This single hotfix addresses a status summarizer issue, a distribution manager issue, and an admin console OSD driver management node issue.
The OSD driver management node issue was discovered after SP1 was released. So if your ConfigMgr SP1 site uses SQL 2008 to host the site database, you need hotfix 955262 to address the OSD driver management node issue. This applies to both a fresh SQL 2008 installation and one upgraded from SQL 2005, when used to host the ConfigMgr SP1 site database.
Finally, we had the ConfigMgr R2 release which brought us the lovely SQL reporting services (SRS) and a client status reporting tool, among other great features. The hotfix 957576 addresses a SQL reporting services problem where you see error messages about SRS in system status. This error does not affect the functionality of reports, although it generates false notifications.
Update to original post: Installing the hotfix 957576 resolves error ID 7404 appearing in status messages. If you see error ID 7403 in status messages, but you are not experiencing any problems with SQL Reporting Services in Configuration Manager, install the Cumulative Update 4 for SQL Server 2008 R2.
If your site database resides in a SQL named instance, the client status reporting feature requires hotfix 959975. This is true for both SQL 2005 and SQL 2008.
Please note that this blog post is not the official support statement. You may find the official support statement here.
--Martin Li | http://blogs.technet.com/b/configmgrteam/archive/2009/03.aspx | crawl-003 | refinedweb | 3,221 | 51.78 |
Collection of common interactive command line user interfaces, based on Inquirer.js
Project description
A collection of common interactive command line user interfaces.
Table of Contents
Goal and Philosophy
``whaaaaat`` strives to be an easily embeddable and beautiful command line interface for Python. ``whaaaaat`` wants to make it easy for existing Inquirer.js users to write immersive command line applications in Python. We are convinced that its feature-set is the most complete for building immersive CLI applications. We also hope that ``whaaaaat`` proves itself useful to Python users.
``whaaaaat`` should ease the process of:

- providing error feedback
- asking questions
- parsing input
- validating answers
- managing hierarchical prompts
Note: ``whaaaaat`` provides the user interface and the inquiry session flow. If you’re searching for a scaffolding utility, check out banana, ``whaaaaat``’s sister utility.
Documentation
Installation
Like most Python packages, whaaaaat is available on PyPI. Simply use pip to install the whaaaaat package:
pip install whaaaaat
Quickstart
Like Inquirer.js, using inquirer is structured into two simple steps:
- you define a list of questions and hand them to prompt
- prompt returns a list of answers
from __future__ import print_function, unicode_literals
from whaaaaat import prompt, print_json

questions = [
    {
        'type': 'input',
        'name': 'first_name',
        'message': 'What\'s your first name',
    }
]

answers = prompt(questions)
print_json(answers)  # use the answers as input for your app
A good starting point from here is probably the examples section.
Examples
Most of the examples intend to demonstrate a single question type or feature:
- bottom-bar.py
- expand.py
- list.py
- password.py
- when.py
- checkbox.py
- hierarchical.py
- pizza.py - demonstrate using different question types
- editor.py
- input.py
- rawlist.py
Question Types
questions is a list of questions. Each question has a type.
List - {type: 'list'}
Take type, name, message, choices[, default, filter] properties. (Note that default must be the choice index in the array or a choice value)
List prompt
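For example, a list question might be defined like this (the menu text is invented for illustration; the structure follows the Inquirer.js conventions described above):

```python
questions = [
    {
        'type': 'list',
        'name': 'action',
        'message': 'What do you want to do?',
        'choices': [
            'Order a pizza',
            'Make a reservation',
            'Ask for opening hours',
        ],
        'default': 0,  # the choice index in the array, or a choice value
    }
]
```

After answers = prompt(questions), the selection is available as answers['action'].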
Raw List - {type: 'rawlist'}
Take type, name, message, choices[, default, filter] properties. (Note that default must be the choice index in the array)
Raw list prompt
Expand - {type: 'expand'}
Take type, name, message, choices[, default] properties. (Note that default must be the choice index in the array. If the default key is not provided, then help will be used as the default choice)
Note that the choices object will take an extra parameter called key for the expand prompt. This parameter must be a single (lowercased) character. The h option is added by the prompt and shouldn’t be defined by the user.
See examples/expand.py for a running example.
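For instance, a choices list for an expand prompt, using the single-character key parameter described above, could look like this (the file name and actions are made up):

```python
questions = [
    {
        'type': 'expand',
        'name': 'conflict',
        'message': 'Conflict on file.js:',
        'choices': [
            {'key': 'y', 'name': 'Overwrite this file', 'value': 'overwrite'},
            {'key': 'a', 'name': 'Overwrite all files', 'value': 'overwrite_all'},
            {'key': 'x', 'name': 'Abort', 'value': 'abort'},
            # 'h' is reserved: the prompt adds the help option by itself.
        ],
    }
]
```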
Checkbox - {type: 'checkbox'}
Take type, name, message, choices[, filter, validate, default] properties. Choices whose disabled property is truthy will be unselectable; if disabled is a string, that string is displayed next to the choice, otherwise it’ll default to "Disabled". The disabled property can also be a synchronous function receiving the current answers as argument and returning a boolean or a string.
Checkbox prompt
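A sketch of a checkbox question using the disabled property described above (the menu itself is hypothetical):

```python
questions = [
    {
        'type': 'checkbox',
        'name': 'toppings',
        'message': 'Select toppings',
        'choices': [
            {'name': 'Cheese'},
            {'name': 'Olives'},
            # disabled may be a string, or a synchronous function that
            # receives the current answers and returns a boolean or a string:
            {'name': 'Pineapple',
             'disabled': lambda answers: 'Out of stock'},
        ],
    }
]
```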
Confirm - {type: 'confirm'}
Take type, name, message[, default] properties. default is expected to be a boolean if used.
Confirm prompt
Input - {type: 'input'}
Take type, name, message[, default, filter, validate] properties.
Input prompt
Password - {type: 'password'}
Take type, name, message[, default, filter, validate] properties.
Password prompt
Editor - {type: 'editor'}
Take type, name, message[, default, filter, validate] properties
Launches an instance of the user's preferred editor on a temporary file. Once the user exits their editor, the contents of the temporary file are read in as the result. The editor to use is determined by reading the $VISUAL or $EDITOR environment variables. If neither of those is present, notepad (on Windows) or vim (Linux or Mac) is used.
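The selection logic described above amounts to something like the following sketch (an illustration of the lookup order, not the package's actual code):

```python
import os
import platform

def resolve_editor():
    # $VISUAL takes precedence over $EDITOR; if neither is set,
    # fall back to notepad on Windows and vim elsewhere.
    for var in ('VISUAL', 'EDITOR'):
        editor = os.environ.get(var)
        if editor:
            return editor
    return 'notepad' if platform.system() == 'Windows' else 'vim'
```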
Question Properties
A question is a dictionary containing question related values:
- type: (String) Type of the prompt. Default: input. Possible values: input, confirm, list, rawlist, expand, checkbox, password, editor
- name: (String) The name to use when storing the answer in the answers hash. If the name contains periods, it will define a path in the answers hash.
- message: (String|Function) The question to print. If defined as a function, the first parameter will be the current inquirer session answers.
- default: (String|Number|Array|Function) Default value(s) to use if nothing is entered, or a function that returns the default value(s). If defined as a function, the first parameter will be the current inquirer session answers.
- choices: (Array|Function) Choices array or a function returning a choices array. If defined as a function, the first parameter will be the current inquirer session answers. Array values can be simple strings, or objects containing a name (to display in list), a value (to save in the answers hash) and a short (to display after selection) properties. The choices array can also contain a Separator.
- validate: (Function) Receive the user input and should return true if the value is valid, and an error message (String) otherwise. If false is returned, a default error message is provided.
- filter: (Function) Receive the user input and return the filtered value to be used inside the program. The value returned will be added to the Answers hash.
- when: (Function, Boolean) Receive the current user answers hash and should return true or false depending on whether or not this question should be asked. The value can also be a simple boolean.
- pageSize: (Number) Change the number of lines that will be rendered when using list, rawList, expand or checkbox.
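Putting several of these properties together, a question list using validate, filter and when could look like this (the field names and rules are invented for the example):

```python
questions = [
    {
        'type': 'input',
        'name': 'phone',
        'message': 'What is your phone number',
        # validate: return True if the value is OK, or an error message string
        'validate': lambda val: bool(val.isdigit()) or 'Digits only, please',
        # filter: transform the input before it is stored in the answers hash
        'filter': lambda val: val.strip(),
    },
    {
        'type': 'confirm',
        'name': 'send_sms',
        'message': 'Send a confirmation SMS',
        # when: only ask this question if a phone number was actually given
        'when': lambda answers: bool(answers.get('phone')),
        'default': False,
    },
]
```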
User Interfaces and Styles
TODO
Windows Platform
``whaaaaat`` is built on prompt_toolkit, which supports Windows.
Support
Most of the questions are probably related to using a question type or feature. Please lookup and study the appropriate examples.
Issue on Github TODO link
For many issues, for example common Python programming questions, Stack Overflow might be a good place to search for an answer. TODO link
Visit the finklabs slack channel for announcements and news. TODO link
Contributing
Unit tests: unit tests are written using pytest. Please add a unit test for every new feature or bug fix.
Documentation: add documentation for every API change. Feel free to send typo fixes and better docs!
We’re looking to offer good support for multiple prompts and environments. If you want to help, we’d like to keep a list of testers for each terminal/OS so we can contact you and get feedback before release. Let us know if you want to be added to the list.
Acknowledgments
Many thanks to our friends at Inquirer.js. We think they did a great job developing the tooling to support the nodejs technology.
License
Copyright (c) 2016-2017 Mark Fink (twitter: @markfink) Licensed under the MIT license.
Project details
Release history Release notifications | RSS feed
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/whaaaaat/ | CC-MAIN-2022-21 | refinedweb | 1,101 | 55.84 |
C Program to Check Whether a Number is Palindrome or Not
This program takes an integer from the user and reverses it. If the reversed integer is equal to the integer entered by the user, the number is a palindrome; otherwise it is not.
C Program to Check Palindrome Number
/* C program to check whether a number is palindrome or not */
#include <stdio.h>

int main()
{
    int n, reverse = 0, rem, temp;

    printf("Enter an integer: ");
    scanf("%d", &n);

    temp = n;
    while (temp != 0) {
        rem = temp % 10;
        reverse = reverse * 10 + rem;
        temp /= 10;
    }

    /* Checking if number entered by user and its reverse are equal. */
    if (reverse == n)
        printf("%d is a palindrome.", n);
    else
        printf("%d is not a palindrome.", n);

    return 0;
}
Output
Enter an integer: 12321 12321 is a palindrome.
Java program to find whether no. is palindrome or not
package testing;

import java.util.Scanner;

/**
 * Java program to check if number is palindrome or not.
 * A number is called palindrome if number and its reverse is equal
 * This Java program can also be used to reverse a number in Java
 */
public class NoClassDefFoundErrorDueToStaticInitFailure {

    public static void main(String args[]) {
        System.out.println("Please Enter a number : ");
        int palindrome = new Scanner(System.in).nextInt();

        if (isPalindrome(palindrome)) {
            System.out.println("Number : " + palindrome + " is a palindrome");
        } else {
            System.out.println("Number : " + palindrome + " is not a palindrome");
        }
    }

    /*
     * Java method to check if number is palindrome or not
     */
    public static boolean isPalindrome(int number) {
        int palindrome = number; // copied number into variable
        int reverse = 0;

        while (palindrome != 0) {
            int remainder = palindrome % 10;
            reverse = reverse * 10 + remainder;
            palindrome = palindrome / 10;
        }

        // if original and reverse of number is equal means
        // number is palindrome in Java
        if (number == reverse) {
            return true;
        }
        return false;
    }
}
Output:
Please Enter a number :
123
Number : 123 is not a palindrome
Please Enter a number :
121
Number : 121 is a palindrome
05-21-2019 03:32 PM
Hello,
I am trying to pass three data points from instruments in LabVIEW within a while loop to a Python script which will compute one output, and pass that output back to LabVIEW in real time. Since it is in a loop, I need this to be as efficient as possible.
I have seen a lot about LabPython, but it seems this is not being maintained for newer versions of Python. I also have seen some recommendations of passing the data out of LabVIEW to a .csv file, reading the data into the Python script from the .csv file, calling the Python script in LabVIEW, writing the output from Python to another .csv file, and then reading this output from the .csv file back into LabVIEW. This method seems very complicated and inefficient.
I have tried using the Python node in LabVIEW, and have it wired to accept the three channel inputs and spit out a single output, but cannot find any resources on how to handle the data on the Python side. The current Python script I'm using is set up to read data in from an input.csv and output to an output.csv for post-processing of data, but this isn't useful to me.
Thanks for the help in advance!
Solved! Go to Solution.
05-22-2019 05:31 PM
Hello,
The following KB might help you get started with a small snippet of Python script.
05-23-2019 07:00 AM
Malkolm,
The link you sent is the link to the forum I started. I have also looked around NI forums and have yet to find anything helpful.
Zach
05-23-2019 09:24 AM
Hi Zach,
Here's a link with a bunch of information about getting started with Python in LabVIEW. Hopefully it helps!
Here's the knowledgebase article linked there for your reference:
Cheers,
Nick
05-23-2019 02:22 PM
Nick,
Thanks! I had not seen the Enthought integration toolkit for Python before so that was interesting, but it still doesn't go into detail with Python code. I'm using a Python Node in LabVIEW so that is all set up, but I don't know how to take in the channels from LabVIEW in my Python script. I can send the data out to Python and receive it back from Python, but I do not know how to set up the Python code to accept the channel inputs from LabVIEW and then send the output to LabVIEW. Any experience with this? I have attached the part of my block diagram where I pass the data from four channels out to a Python script and receive one output back for reference.
05-24-2019 01:28 PM - edited 05-24-2019 01:29 PM
Hi Zach,
What do you mean when you say you can't take in the channels from LabVIEW in your python script? A little more context will help!
This doc shows the data types that the python node can handle:
Cheers,
Nick
05-24-2019 02:09 PM
Nick,
Sorry for the lack of context. I am new to LabVIEW and even newer to Python so my issue here is more with coding in Python. I'm not sure how to code the python script to read in three individual channel values (One RTD temperature value and two pressure values). A coworker of mine wrote the code to accept data in from a simple data file instead of directly reading the measured values from LabVIEW. If I had to write the one temperature and two pressure values to an input text file, read these into Python from the input text file, write the result back to an output text file, and read the result back into LabVIEW from the output text file each iteration of my while loop, my LabVIEW control method would be highly inefficient. I'm trying to find a more efficient way of passing the three measurements to Python and receiving back the result. You can see the three channels I have wired into the Python Node as inputs in the code snippet from my last message. There has to be a way to read in values from specific instrumentation channels on the DAQ to Python, but I simply don't know how. Does that help you at all?
Best,
Zach
05-28-2019 02:49 PM
My understanding is that you just use input parameters to pass data to the Python node, the same way that you would read data from any channel in LabVIEW but just send it to Python instead of an indicator.
Alternatively, this may be another way to go about it:
Cheers,
Nick
05-28-2019 09:16 PM
Here's an example that might help you get started.
The example only adds two numbers, but should show the basic idea.
Before this would run on my computer, I installed Python 3.6.8 (32-bit version, to match the LabVIEW version I used) from here: python-368.
The code in the python file is simply:
def MyFunction_Add(num1, num2):
    return num1 + num2
with the LabVIEW code looking like this:
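For the original three-channel use case, the same pattern extends naturally; the function name and formula below are placeholders for the real model:

```python
def compute_setpoint(temp_c, pressure1, pressure2):
    # Placeholder calculation -- substitute the actual control model here.
    return temp_c * (pressure1 + pressure2) / 2.0
```

In LabVIEW you would wire the three channel values to the node's input parameters and read the single return value, exactly as in the two-input example.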
05-29-2019 07:22 AM
cbutcher,
Thank you! This is perfect to help get me started. Thanks for taking the time to put this together.
Zach
Prevent neck and back pain by monitoring your sitting posture with Walabot’s distance sensor and an Android app.
Things used in this project
Hardware components
- Walabot
- Raspberry Pi 3 Model B
Why I built Posture Pal
Millions of people suffer from neck and back pain caused by poor sitting posture.
def set():
    global distance
    distance = request.args.get('distance')
    return jsonify(distance)

def status():
    return Response(json.dumps({'status': distance}))

if __name__ == '__main__':
    app.run(host='0.0.0.0')
distance = str(targets[].zPosCm)
r = requests.get(“” + distance)
Step 2. Start the Android app
Source code for the Android app is available at. Build it yourself or simply install the APK. Open the app.
Step 3: Calibrate
The Android app is used to set a reference posture for comparison and modify the sensitivity of the device.
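The comparison the app performs presumably reduces to checking the reported distance against the calibrated reference within a sensitivity band. A rough sketch (the reference value, tolerance and server address are all assumptions):

```python
import json
import urllib.request

REFERENCE_CM = 55.0   # distance captured during calibration (assumed)
TOLERANCE_CM = 5.0    # sensitivity setting from the app (assumed)

def is_slouching(distance_cm, reference_cm=REFERENCE_CM,
                 tolerance_cm=TOLERANCE_CM):
    # Anything outside the band around the reference counts as bad posture.
    return abs(distance_cm - reference_cm) > tolerance_cm

def fetch_distance(server='http://192.168.0.10:5000'):
    # Hypothetical client for the /status endpoint defined earlier.
    with urllib.request.urlopen(server + '/status') as resp:
        return float(json.loads(resp.read())['status'])
```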
I'm just trying to learn (re-learn) and I want to create a 2-dimensional grid of objects...
Hey everyone!
Long story short, I've dabbled in programming for many many years. I was most proficient with Blitzmax and a BASIC variant for PalmOS back in the day but I was never super serious. For example, I conceptually understand OOP and think it's very logical but in actual practice it used to give me a lot of trouble and I was more comfortable with procedural.
Anyway, it's been years since I've done anything, and Pythonista seemed like an attractive way to try to jump back into things (I have a soft spot for mobile platforms). There are very noticeable differences between Python and BASIC, and a lot of other stuff that's very similar overall. But I'm trying to slog through and do different things and there's one that I just can't seem to nail:
Initially I just wanted to create a grid (2D array, or however you'd prefer to call it) which could store an integer at each gridpoint and could be dynamically sized and created using loops (instead of typing out each list-item by hand). Something that could hypothetically be accessed with something like:
mygrid[2, 5] = 5
print mygrid[2, 5]
I actually basically managed to achieve that particular goal using numpy, with the following:
mygrid = numpy.zeros((heightvariable, widthvariable))
And from there I could change and access the integer using the first bit of code.
Next I wanted to take it a step further and create a similar grid, but instead of each gridpoint being an integer, I wanted to make each an instance of an object. I haven't had a clue how to do this and tutorials/info online hasn't seemed to help much so I've been trying to do it myself (while trying to get more comfortable with classes). To simplify it I've been starting out with trying to just create one row (list) of object instances and I haven't gotten anything working yet.
For reference: I'm loosely imagining/shooting for this as being a tilemap for [insert generic game here]. I'll be using the Scene module for output. And I'm looking for just the most barebones way to create a map object and then populate it with tiles which are arranged in rows/columns in a way that they can be individually accessed.
For additional reference: part of my problem I think has been syntax/code arrangement: I keep bumping into scope and argument issues, which I more or less understand but I'm not always sure how it wants me to resolve them. For example, in the following code:
class MyScene(scene.Scene):
    themap = Map()

class Map(object):
    # blahblahblah
I get the error 'Map' is not defined (NameError). So I tried creating my map instance in the update() or init() methods of the scene (so that they're created at the start) but when I later tried to USE the map instance, I got the error global name 'mymap' is not defined (NameError). So there's a scope issue where I have no clue where I'm supposed to create it and how I'm supposed to make it accessible at least in the rest of the class.
I apologize for putting so much in here. I realize it's a lot of basic stuff, I've been trying to read as much as I can but I think I'll find it more helpful being able to have an actual dialog with someone. I appreciate any help anyone can give!
Cheers,
Nate
You can think of a two-dimensional array as an array of arrays. You basically have a list of rows, and each row is a list of individual objects.
In Python, a list is the simplest way to represent a sequence of objects, so let's build a list of lists:
from pprint import pprint

def make_grid(num_rows, num_cols, value=None):
    grid = []
    for y in range(num_rows):
        row = []
        grid.append(row)
        for x in range(num_cols):
            row.append(value)
    return grid

grid = make_grid(6, 4)
grid[2][3] = 'hello'
grid[3][3] = 'world'
pprint(grid)
pprint means "pretty-print", and it's a function/module that prints nested structures in a more readable form, so it shows up in the console as an actual grid, and not just a long list.
The example above also shows how you would set objects in the grid, using the [y][x] notation (which is the same as row = grid[y]; row[x] = item).
I've written the example in a way that's hopefully easy to understand, even if you're not that familiar with Python syntax (yet!). There's actually a much more concise way to achieve the same thing using list comprehensions:
grid = [[None for col in range(4)] for row in range(6)]
Oh, and welcome! :)
Thank you so much for taking the time to respond, and thanks for the work you've done on this project!
I think I understand everything in your example, and I like that you can put any datatype into each gridpoint instead of just an integer like with my numpy solution.
I was going to ask you how to then actually add the objects to the new grid but then I decided I wanted to see if I can do it myself. I have a solution below which runs, I'd love to know if there's anything wrong with it conceptually that I just have managed to jump around:
import random
from pprint import *

def make_grid(height, width):  # 'Value' no longer needed
    grid = []
    for y in range(height):
        row = []
        grid.append(row)
        for x in range(width):
            row.append(MyClass(x))  # I'm not sure why this format works (as opposed to the usual instancename = ClassName() but again, it's what I found in an example.
    return grid

def print_grid(height, width, grid):
    # Doesn't currently print in a nice grid fashion but it correctly accesses
    # every object's 'randomnumber' value and prints the information.
    for y in range(height):
        for x in range(width):
            print grid[y][x].randomnumber,
        print ""
    # EDIT: added the comma to the first print command and the additional print
    # command with the empty quotes in the larger loop to format the grid
    # properly. I feel very clever now :)

class MyClass(object):
    randomnumber = None  # Initializing the variable.
    def __init__(self, number):
        self.number = number  # I have no clue what this line is for (or what self.number refers to) but this line was used in an example I found.
        self.randomnumber = random.randint(0, 9)

random.seed()
grid = make_grid(7, 7)
print_grid(7, 7, grid)  # the arguments make it possible to print just a portion of the grid.
# This print function no longer works currently since it won't let me access 'randomnumber' for the entire grid.
# pprint(grid.randomnumber)
Some notes on your questions in the comments:
I'm not sure why this format works (as opposed to the usual instancename = ClassName() but again, it's what I found in an example.
As you only need the object once, there's no need to store it in a variable. It works for the same reason that make_grid(7, 7) works (as opposed to height = 7; width = 7; grid = make_grid(height, width)). I hope that makes sense.
self.number = number # I have no clue what this line is for (or what self.number refers to) but this line was used in an example I found.
You're basically creating an attribute (instance variable) called number that gets the value that was passed when initializing the instance. In this case, self.number will be the column number in the grid (because you pass x to the constructor when making the grid). You could remove this entirely, as I don't see you using the number attribute anywhere else. You could then simply create your MyClass instances using MyClass() instead of MyClass(x).
One other important thing: Your randomnumber = None initialization doesn't do what you think it does. Put directly in the class's scope, this creates a class attribute, i.e. one that's the same for all instances of the class. The reason the random numbers are actually different for each instance in your example is that you also create an instance variable with the same name, self.randomnumber, so the class variable gets "overshadowed". You should remove randomnumber = None entirely.
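The difference is easy to demonstrate in isolation (a standalone snippet, separate from the grid code):

```python
import random

class MyClass(object):
    shared = 0  # class attribute: one value seen by every instance

    def __init__(self):
        self.randomnumber = random.randint(0, 9)  # instance attribute

a = MyClass()
b = MyClass()
MyClass.shared = 99  # visible through both a and b
a.shared = 1         # creates an instance attribute that shadows the class one
```

After the last line, a.shared is 1 while b.shared is still 99; that shadowing is exactly why each grid cell appeared to have its own randomnumber.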
# This print function no longer works currently since it won't let me access 'randomnumber' for the entire grid.
If you want to make pprint work nicely with your class, you can implement the "magic" __repr__ method. This basically defines how an object is printed in the console.
Example:
class MyClass(object):
    def __init__(self):
        self.randomnumber = random.randint(0, 9)

    def __repr__(self):
        return str(self.randomnumber)
Now, pprint basically acts as if you had plain numbers in your grid (but the MyClass instances are still there of course, and you could change the __repr__ method to make them distinguishable from regular numbers).
Again, thank you!
"As you only need the object once, there's no need to store it in a variable."
Since we're dealing with a list here, the list element number takes the place of the instance name, but is written without the equal sign (since it's part of an argument)? I was initially interpreting that line as the value of x being the name of the instance, but I see now that I was mis-reading it. And having eliminated the 'number' argument, I was able to delete 'x' without any effect on the program.
"initialization doesn't do what you think it does. [etc.]"
So this is where my understanding was clearly off again. I thought that variables listed in the "shell" of the class would be generated for all instances of the class, but then syntax-wise you had to use self.variable in order to refer to them for the current instance.
So on that topic (since I was confused about scope earlier as well:?
Yes, that's correct.
You have been more than generous with your time and knowledge and this brief conversation has seriously helped me more than the countless hours of reading and searching that I've been doing so far. After the stuff you just clarified and explained, I've already been able to display the grid in a scene and hook it up to a touch event to re-generate the grid. I could ask more questions but I think they should wait awhile while I poke around more :).
Thanks,
Nate | https://forum.omz-software.com/topic/3188/i-m-just-trying-to-learn-re-learn-and-i-wan-t-to-create-a-2-dimensional-grid-of-objects/1 | CC-MAIN-2021-49 | refinedweb | 1,781 | 68.5 |
Why use hitchstory instead of a unit testing framework?
NOTE: This argument is a work in progress and there are still several things that need amending / adding.
This is not an argument against the following:
- Testing in general
- Test driven development
- Low level testing (of classes and methods or otherwise)
- Property based testing (e.g. hypothesis).
- Mocking
- Doctests
These are all good things in the right context. This is purely about the xUnit framework approach to testing - py.test, nose, unittest2, jUnit, etc. whether they are used to write traditional “unit tests”, integration tests or end to end tests.
There are, broadly speaking, two types of code - algorithmic code and integration code. Here are several examples of the former:
- A sorting algorithm (e.g. timsort or quicksort).
- Code in a business application that determines prices from a set of rules
- A function to slugify a title (e.g. The Silver Duck -> the-silver-duck).
Here are several examples of the latter:
- A basic CRUD application
- A device driver
- A simple javascript widget
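To make the distinction concrete, the slugify function mentioned in the first list is a few lines of self-contained, declarative logic (one possible implementation):

```python
import re

def slugify(title):
    # Lowercase, collapse runs of non-alphanumerics to hyphens, trim the ends.
    return re.sub(r'[^a-z0-9]+', '-', title.lower()).strip('-')
```

slugify('The Silver Duck') produces 'the-silver-duck'; code like this can be pinned down exhaustively with small, direct tests.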
The unsuitability of low level testing of algorithmic code
Low level testing of algorithmic code looks a little like this example testing an ‘incrementor’ from the py.test home page:
def test_answer():
    assert inc(3) == 4
This is actually a good example of a clear test. The intent is obvious, the label is descriptive.
This works because the code is, despite being turing complete, declarative and simple. There is no issue with separation of concerns, and despite using a turing complete language, the code is not “too powerful” to inhibit understanding. The problems grow, however, as the code under test becomes more complex and less declarative.
High level testing of integration code
Ultimately it boils down to two programming principles for which hitchstory provides ‘rails’ to guide you:
- The rule of least power
- Separation of concerns
Hitchstory stories describe a sequence of events in which either a user or a user-system interacts with your code. This can be used to describe the functioning of any software system. It is not necessary to use turing complete code to describe a sequence of events; therefore, according to the rule of least power, you shouldn't use turing complete code to do it.
However, turing complete code is required to set up and mimic this set of events. This is what the hitchstory engine is used for, which must be written in turing complete python.
This divide between story definition and story execution also creates a natural barrier for the separation of concerns. Story definition goes in the stories while execution goes in the engine. Unit testing frameworks do not have any such natural barrier for separation of concerns.
Web developers may be familiar with this principle as it is expressed in web development frameworks where (intentionally less powerful) templating languages are used to render HTML, separated by a divide from more powerful ‘controller’ (or, in Django, ‘view’) code.
Other features which are not (and cannot) be duplicated in unit testing frameworks:
What does hitchstory do that unit tests can’t?
The hitchdev framework does come with a lot of useful testing tools which could just as easily be used with py.test if you so wish. | https://hitchdev.com/hitchstory/why-not/unit-test/ | CC-MAIN-2019-09 | refinedweb | 526 | 53.51 |
Introduction
If you ask any industry expert, what language should you learn for big data, they would definitely suggest you to start with Scala. Scala has gained a lot of recognition for itself and is used by the large number of companies. Scala and Spark are being used at Facebook, Pinterest, NetFlix, Conviva, TripAdvisor for Big Data and Machine Learning applications.
Still not convinced – look at this trend of number of job postings for Scala on Indeed:
But, learning a new language can be intimidating. To help you learn Scala from scratch, I have created this comprehensive guide. The guide is aimed at beginners and enables you to write simple codes in Apache Spark using Scala. I have kept the content simple to get you started.
By the end of this guide, you will have a thorough understanding of working with Apache Spark in Scala. Read on to learn one more language and add more skills to your resume.
Table of Contents
This guide is broadly divided into 2 parts. The first part is from section 1 to 14 where we discuss language Scala. Section 15 onwards is how we used Scala in Apache Spark.
- What is Scala?
- About Scala
- Installing Scala
- Prerequisites for Scala
- Choosing a development environment
- Scala Basics Terms
- Things to note about Scala
- Variable declaration in Scala
- Operations on variables
- The if-else expression in Scala
- Iteration in Scala
- Declare a simple function in Scala and call it by passing value
- Some Data Structures in Scala
- Write/Run codes in Scala using editor
- Advantages of using Scala for Apache Spark
- Comparing Scala, java, Python and R in Apache Spark
- Installing Apache Spark
- Working with RDD in Apache Spark using Scala
- Working with DataFrame in Apache Spark using Scala
- Building a Machine Learning Model
- Additional Resources
1. What is Scala
Scala is an acronym for “Scalable Language”. It is a general-purpose programming language designed for the programmers who want to write programs in a concise, elegant, and type-safe way. Scala enables programmers to be more productive. Scala is developed as an object-oriented and functional programming language.
If you write a code in Scala, you will see that the style is similar to a scripting language. Even though Scala is a new language, it has gained enough users and has a wide community support. It is one of the most user-friendly languages.
2. About Scala
The design of Scala started in 2001 in the programming methods laboratory at EPFL (École Polytechnique Fédérale de Lausanne). Scala made its first public appearance in January 2004 on the JVM platform and a few months later in June 2004, it was released on the .(dot)NET platform. The .(dot)NET support of Scala was officially dropped in 2012. A few more characteristics of Scala are:
2.1 Scala is pure Object-Oriented programming language
Scala is an object-oriented programming language. Everything in Scala is an object and any operations you perform is a method call. Scala, allow you to add new operations to existing classes with the help of implicit classes.
One of the advantages of Scala is that it makes it very easy to interact with Java code. You can also write a Java code inside Scala class. The Scala supports advanced component architectures through classes and traits.
2.2 Scala is a functional language
Scala is a programming language that has implemented major functional programming concepts. In Functional programming, every computation is treated as a mathematical function which avoids states and mutable data. The functional programming exhibits following characteristics:
- Power and flexibility
- Simplicity
- Suitable for parallel processing
Scala is not a pure functional language. Haskell is an example of a pure functional language. If you want to read more about functional programming, please refer to this article.
2.3 Scala is a compiler based language (and not interpreted)
Scala is a compiler based language which makes Scala execution very fast if you compare it with Python (which is an interpreted language). The compiler in Scala works in similar fashion as Java compiler. It gets the source code and generates Java byte-code that can be executed independently on any standard JVM (Java Virtual Machine). If you want to know more about the difference between complied vs interpreted language please refer this article.
There are more important points about Scala which I have not covered. Some of them are:
- Scala has thread based executors
- Scala is statically typed language
- Scala can execute Java code
- You can do concurrent and Synchronized processing in Scala
- Scala is JVM based languages
2.4 Companies using Scala
Scala is now big name. It is used by many companies to develop the commercial software. These are the following notable big companies which are using Scala as a programming alternative.
- Foursquare
- Netflix
- Tumblr
- The Guardian
- Precog
- Sony
- AirBnB
- Klout
- Apple
If you want to read more about how and when these companies started using Scala please refer this blog.
3. Installing Scala
Scala can be installed in any Unix or windows based system. Below are the steps to install for Ubuntu (14.04) for scala version 2.11.7. I am showing the steps for installing Scala (2.11.7) with Java version 7. It is necessary to install Java before installing Scala. You can also install latest version of Scala(2.12.1) as well.
Step 0: Open the terminal
Step 1: Install Java
$ sudo apt-add-repository ppa:webupd8team/java $ sudo apt-get update $ sudo apt-get install oracle-java7-installer
If you are asked to accept Java license terms, click on “Yes” and proceed. Once finished, let us check whether Java has installed successfully or not. To check the Java version and installation, you can type:
$ java -version
Step 2: Once Java is installed, we need to install Scala
$ cd ~/Downloads $ wget $ sudo dpkg -i scala-2.11.7.deb $ scala –version
This will show you the version of Scala installed
4. Prerequisites for Learning Scala.
5. Choosing a development environment
Once you have installed Scala, there are various options for choosing an environment. Here are the 3 most common options:
- Terminal / Shell based
- Notepad / Editor based
- IDE (Integrated development environment)
Choosing right environment depends on your preference and use case. I personally prefer writing a program on shell because it provides a lot of good features like suggestions for method call and you can also run your code while writing line by line.
Warming up: Running your first Scala program in Shell:
Let’s write a first program which adds two numbers.
6. Scala Basics Terms
Object: An entity that has state and behavior is known as an object. For example: table, person, car etc.
Class: A class can be defined as a blueprint or a template for creating different objects which defines its properties and behavior.
Method: It is a behavior of a class. A class can contain one or more than one method. For example: deposit can be considered a method of bank class.
Closure: Closure is any function that closes over the environment in which it’s defined. A closure returns value depends on the value of one or more variables which is declared outside this closure.
Traits: Traits are used to define object types by specifying the signature of the supported methods. It is like interface in java.
7. Things to note about Scala
- It is case sensitive
- If you are writing a program in Scala, you should save this program using “.scala”
- Scala execution starts from main() methods
- Any identifier name cannot begin with numbers. For example, variable name “123salary” is invalid.
- You can not use Scala reserved keywords for variable declarations or constant or any identifiers.
8. Variable declaration in Scala
In Scala, you can declare a variable using ‘var’ or ‘val’ keyword. The decision is based on whether it is a constant or a variable. If you use ‘var’ keyword, you define a variable as mutable variable. On the other hand, if you use ‘val’, you define it as immutable. Let’s first declare a variable using “var” and then using “val”.
8.1 Declare using var
var Var1 : String = "Ankit"
In the above Scala statement, you declare a mutable variable called “Var1” which takes a string value. You can also write the above statement without specifying the type of variable. Scala will automatically identify it. For example:
var Var1 = "Gupta"
8.2 Declare using val
val Var2 : String = "Ankit"
In the above Scala statement, we have declared an immutable variable “Var2” which takes a string “Ankit”. Try it for without specifying the type of variable. If you want to read about mutable and immutable please refer this link.
9. Operations on variables
You can perform various operations on variables. There are various kinds of operators defined in Scala. For example: Arithmetic Operators, Relational Operators, Logical Operators, Bitwise Operators, Assignment Operators.
Lets see “+” , “==” operators on two variables ‘Var4’, “Var5”. But, before that, let us first assign values to “Var4” and “Var5”.
scala> var Var4 = 2 Output: Var4: Int = 2 scala> var Var5 = 3 Output: Var5: Int = 3
Now, let us apply some operations using operators in Scala.
Apply ‘+’ operator
Var4+Var5 Output: res1: Int = 5
Apply “==” operator
Var4==Var5 Output: res2: Boolean = false
If you want to know complete list of operators in Scala refer this link:
10. The if-else expression in Scala
In Scala, if-else expression is used for conditional statements. You can write one or more conditions inside “if”. Let’s declare a variable called “Var3” with a value 1 and then compare “Var3” using if-else expression.
var Var3 =1 if (Var3 ==1){ println("True")}else{ println("False")} Output: True
In the above snippet, the condition evaluates to True and hence True will be printed in the output.
11. Iteration in Scala
Like most languages, Scala also has a FOR-loop which is the most widely used method for iteration. It has a simple syntax too.
for( a <- 1 to 10){ println( "Value of a: " + a ); } Output: Value of a: 1 Value of a: 2 Value of a: 3 Value of a: 4 Value of a: 5 Value of a: 6 Value of a: 7 Value of a: 8 Value of a: 9 Value of a: 10
Scala also supports “while” and “do while” loops. If you want to know how both work, please refer this link.
12. Declare a simple function in Scala and call it by passing value
You can define a function in Scala using “def” keyword. Let’s define a function called “mul2” which will take a number and multiply it by 10. You need to define the return type of function, if a function not returning any value you should use the “Unit” keyword.
In the below example, the function returns an integer value. Let’s define the function “mul2”:
def mul2(m: Int): Int = m * 10 Output: mul2: (m: Int)Int
Now let’s pass a value 2 into mul2
mul2(2) Output: res9: Int = 20
If you want to read more about the function, please refer this tutorial.
13. Few Data Structures in Scala
- Arrays
- Lists
- Sets
- Tuple
- Maps
- Option
13.1 Arrays in Scala
In Scala, an array is a collection of similar elements. It can contain duplicates. Arrays are also immutable in nature. Further, you can access elements of an array using an index:
Declaring Array in Scala
To declare any array in Scala, you can define it either using a new keyword or you can directly assign some values to an array.
Declare an array by assigning it some values
var name = Array("Faizan","Swati","Kavya", "Deepak", "Deepak") Output: name: Array[String] = Array(Faizan, Swati, Kavya, Deepak, Deepak)
In the above program, we have defined an array called name with 5 string values.
Declaring an array using “new” keywords
The following is the syntax for declaring an array variable using a new keyword.
var name:Array[String] = new Array[String](3) or var name = new Array[String](3) Output: name: Array[String] = Array(null, null, null)
Here you have declared an array of Strings called “name” that can hold up to three elements. You can also assign values to “name” by using an index.
scala> name(0) = "jal" scala> name(1) = "Faizy" scala> name(2) = "Expert in deep learning"
Let’s print contents of “name” array.
scala> name res3: Array[String] = Array(jal, Faizy, Expert in deep learning)
Accessing an array
You can access the element of an array by index. Lets access the first element of array “name”. By giving index 0. Index in Scala starts from 0.
name(0) Output: res11: String = jal
13.2 List in Scala
Lists are one of the most versatile data structure in Scala. Lists contain items of different types in Python, but in Scala the items all have the same type. Scala lists are immutable.
Here is a quick example to define a list and then access it.
Declaring List in Scala
You can define list simply by comma separated values inside the “List” method.
scala> val numbers = List(1, 2, 3, 4, 5, 1, 2, 3, 4, 5) numbers: List[Int] = List(1, 2, 3, 4, 5, 1, 2, 3, 4, 5)
You can also define multi dimensional list in Scala. Lets define a two dimensional list:
val number1 = List( List(1, 0, 0), List(0, 1, 0), List(0, 0, 1) ) number1: List[List[Int]] = List(List(1, 0, 0), List(0, 1, 0), List(0, 0, 1))
Accessing a list
Let’s get the third element of the list “numbers” . The index should 2 because index in Scala start from 0.
scala> numbers(2) res6: Int = 3
We have discussed two of the most used data Structures. You can learn more from this link.
14. Writing & Running a program in Scala using an editor
Let us start with a “Hello World!” program. It is a good simple way to understand how to write, compile and run codes in Scala. No prizes for telling the outcome of this code!
object HelloWorld { def main(args: Array[String]) { println("Hello, world!") } }
As mentioned before, if you are familiar with Java, it will be easier for you to understand Scala. If you know Java, you can easily see that the structure of above “HelloWorld” program is very similar to Java program.
This program contains a method “main” (not returning any value) which takes an argument – a string array through command line. Next, it calls a predefined method called “Println” and passes the argument “Hello, world!”.
You can define the main method as static in Java but in Scala, the static method is no longer available. Scala programmer can’t use static methods because they use singleton objects. To read more about singleton object you can refer this article.
14.1 Compile a Scala Program
To run any Scala program, you first need to compile it. “Scalac” is the compiler which takes source program as an argument and generates object files as output.
Let’s start compiling your “HelloWorld” program using the following steps:
1. For compiling it, you first need to paste this program into a text file then you need to save this program as HelloWorld.scala
2. Now you need change your working directory to the directory where your program is saved
3. After changing the directory you can compile the program by issuing the command.
scalac HelloWorld.scala
4. After compiling, you will get Helloworld.class as an output in the same directory. If you can see the file, you have successfully compiled the above program.
14.2 Running Scala Program
After compiling, you can now run the program using following command:
scala HelloWorld
You will get an output if the above command runs successfully. The program will print “Hello, world!”
15. Advantages of using Scala for Apache Spark
If you are working with Apache Spark then you would know that it has 4 different APIs support for different languages: Scala, Java, Python and R.
Each of these languages have their own unique advantages. But using Scala is more advantageous than other languages. These are the following reasons why Scala is taking over big data world.
- Working with Scala is more productive than working with Java
- Scala is faster than Python and R because it is compiled language
- Scala is a functional language
16. Comparing Scala, Java, Python and R APIs in Apache Spark
Let’s compare 4 major languages which are supported by Apache Spark API.
17. Install Apache Spark & some basic concepts about Apache Spark
To know the basics of Apache Spark and installation, please refer to my first article on Pyspark. I have introduced basic terminologies used in Apache Spark like big data, cluster computing, driver, worker, spark context, In-memory computation, lazy evaluation, DAG, memory hierarchy and Apache Spark architecture in the previous article.
As a quick refresher, I will be explaining some of the topics which are very useful to proceed further. If you are a beginner, then I strongly recommend you to go through my first article before proceeding further.
- Lazy operation: Operations which do not execute until we require results.
- Spark Context: holds a connection with Spark cluster manager.
- Driver and Worker: A driver is in charge of the process of running the main() function of an application and creating the SparkContext.
- In-memory computation: Keeping the data in RAM instead of Hard Disk for fast processing.
Spark has three data representations viz RDD, Dataframe, Dataset. To use Apache Spark functionality, we must use one of them for data manipulation. Let’s discuss each of them briefly:
- RDD: RDD (Resilient Distributed Database) is a collection of elements, that can be divided across multiple nodes in a cluster for parallel processing. It is also fault tolerant collection of elements, which means it can automatically recover from failures. RDD is immutable, we can create RDD once but can’t change it.
- Dataset: It is also a distributed collection of data. A Dataset can be constructed from JVM objects and then manipulated using functional transformations (map, flatMap, filter, etc.). As I have already discussed in my previous articles, dataset API is only available in Scala and Java. It is not available in Python and R.
- DataFrame: In Spark,.
- Transformation: Transformation refers to the operation applied on a RDD to create new RDD.
- Action: Actions refer to an operation which also apply on RDD that perform computation and send the result back to driver.
- Broadcast: We can use the Broadcast variable to save the copy of data across all node.
- Accumulator: In Accumulator, variables are used for aggregating the information.
18. Working with RDD in Apache Spark using Scala
First step to use RDD functionality is to create a RDD. In Apache Spark, RDD can be created by two different ways. One is from existing Source and second is from an external source.
So before moving further let’s open the Apache Spark Shell with Scala. Type the following command after switching into the home directory of Spark. It will also load the spark context as sc.
$ ./bin/spark-shell
After typing above command you can start programming of Apache Spark in Scala.
18.1 Creating a RDD from existing source
When you want to create a RDD from existing storage in driver program (which we would like to be parallelized). For example, converting an array to RDD, which is already created in a driver program.
val data = Array(1, 2, 3, 4, 5,6,7,8,9,10) val distData = sc.parallelize(data)
In the above program, I first created an array for 10 elements and then I created a distributed data called RDD from that array using “parallelize” method. SparkContext has a parallelize method, which is used for creating the Spark RDD from an iterable already present in driver program.
To see the content of any RDD we can use “collect” method. Let’s see the content of distData.
scala> distData.collect() Output: res1: Array[Int] = Array(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
18.2 Creating a RDD from External sources
You can create a RDD through external sources such as a shared file system, HDFS, HBase, or any data source offering a Hadoop Input Format. So let’s create a RDD from the text file:
The name of the text file is text.txt. and it has only 4 lines given below.
I love solving data mining problems.
I don’t like solving data mining problems.
I love solving data science problems.
I don’t like solving data science problems.
Let’s create the RDD by loading it.
val lines = sc.textFile("text.txt");
Now let’s see first two lines in it.
lines.take(2)
The output is received is as below:
Output: Array(I love solving data mining problems., I don't like solving data mining problems)
18.3 Transformations and Actions on RDD
18.3.1. Map Transformations
A map transformation is useful when we need to transform a RDD by applying a function to each element. So how can we use map transformation on ‘rdd’ in our case?
Let’s calculate the length (number of characters) of each line in “text.txt”
val Lenght = lines.map(s => s.length) Length.collect()
After applying above map operation, we get the following output:
Output: res6: Array[Int] = Array(36, 42, 37, 43)
18.3.2 Count Action
Let’s count the number of lines in RDD “lines”.
lines.count() res1: Long = 4
The above action on “lines1” will give 4 as the output.
18.3.3 Reduce Action
Let’s take the sum of total number of characters in text.txt.
val totalLength = Length.reduce((a, b) => a + b) totalLength: Int = 158
18.3.4 flatMap transformation and reduceByKey Action
Let’s calculate frequency of each word in “text.txt”
val counts = lines.flatMap(line => line.split(" ")).map(word => (word, 1)).reduceByKey(_ + _) counts.collect() Output: res6: Array[(String, Int)] = Array((solving,4), (mining,2), (don't,2), (love,2), (problems.,4), (data,4), (science,2), (I,4), (like,2))
18.3.5 filter Transformation
Let’s filter out the words in “text.txt” whose length is more than 5.
val lg5 = lines.flatMap(line => line.split(" ")).filter(_.length > 5) Output: res7: Array[String] = Array(solving, mining, problems., solving, mining, problems., solving, science, problems., solving, science, problems.)
19. Working with DataFrame in Apache Spark using Scala
A DataFrame in Apache Spark can be created in multiple ways:
- It can be created using different data formats. For example, by loading the data from JSON, CSV
- Programmatically specifying schema
Let’s create a DataFrame using a csv file and perform some analysis on that.
For reading a csv file in Apache Spark, we need to specify a new library in our Scala shell. To perform this action, first, we need to download Spark-csv package (Latest version) and extract this package into the home directory of Spark. Then, we need to open a PySpark shell and include the package ( I am using “spark-csv_2.10:1.3.0”).
$ ./bin/spark-shell --packages com.databricks:spark-csv_2.10:1.3.0
Now let’s load the csv file into a DataFrame df. You can download the file(train) from this link.
val df = sqlContext.read.format("com.databricks.spark.csv").option("header", "true").option("inferSchema", "true").load("train.csv")
19.1 Name of columns
Let’s see the name of columns in df by using “columns” method.
df.columns Output: res0: Array[String] = Array(User_ID, Product_ID, Gender, Age, Occupation, City_Category, Stay_In_Current_City_Years, Marital_Status, Product_Category_1, Product_Category_2, Product_Category_3, Purchase)
19.2 Number of observations
To see the number of observation in df you can apply “count” method.
df.count() Output: res1: Long = 550068
19.3 Print the columns datatype
You can use “printSchema” method on df. Let’s print the schema of df.
df.printSchema() Output: root |-- User_ID: integer (nullable = true) |-- Product_ID: string (nullable = true) |-- Gender: string (nullable = true) |-- Age: string (nullable = true) |-- Occupation: integer (nullable = true) |-- City_Category: string (nullable = true) |-- Stay_In_Current_City_Years: string (nullable = true) |-- Marital_Status: integer (nullable = true) |-- Product_Category_1: integer (nullable = true) |-- Product_Category_2: integer (nullable = true) |-- Product_Category_3: integer (nullable = true) |-- Purchase: integer (nullable = true)
19.4 Show first n rows
You can use “show” method on DataFrame. Let’s print the first 2 rows of df.
df.show(2) Output: +-------+----------+------+----+----------+-------------+--------------------------+--------------+------------------+------------------+------------------+--------+ |User_ID|Product_ID|Gender| Age|Occupation|City_Category|Stay_In_Current_City_Years|Marital_Status|Product_Category_1|Product_Category_2|Product_Category_3|Purchase| +-------+----------+------+----+----------+-------------+--------------------------+--------------+------------------+------------------+------------------+--------+ |1000001| P00069042| F|0-17| 10| A| 2| 0| 3| null| null| 8370| |1000001| P00248942| F|0-17| 10| A| 2| 0| 1| 6| 14| 15200| +-------+----------+------+----+----------+-------------+--------------------------+--------------+------------------+------------------+------------------+--------+ only showing top 2 rows
19.5 Subsetting or select columns
To select columns you can use “select” method. Let’s apply select on df for “Age” columns.
df.select("Age").show(10) Output: +-----+ | Age| +-----+ | 0-17| | 0-17| | 0-17| | 0-17| | 55+| |26-35| |46-50| |46-50| |46-50| |26-35| +-----+ only showing top 10 rows
19.6 Filter rows
To filter the rows you can use “filter” method. Let’s apply filter on “Purchase” column of df and get the purchase which is greater than 10000.
df.filter(df("Purchase") >= 10000).select("Purchase").show(10) +--------+ |Purchase| +--------+ | 15200| | 15227| | 19215| | 15854| | 15686| | 15665| | 13055| | 11788| | 19614| | 11927| +--------+ only showing top 10 rows
19.7 Group DataFrame
To groupby columns, you can use groupBy method on DataFrame. Let’s see the distribution on “Age” columns in df.
df.groupBy("Age").count().show()
Output: +-----+------+ | Age| count| +-----+------+ |51-55| 38501| |46-50| 45701| | 0-17| 15102| |36-45|110013| |26-35|219587| | 55+| 21504| |18-25| 99660| +-----+------+
19.8 Apply SQL queries on DataFrame
To apply queries on DataFrame You need to register DataFrame(df) as table. Let’s first register df as temporary table called (B_friday).
df.registerTempTable("B_friday")
Now you can apply SQL queries on “B_friday” table using sqlContext.sql. Lets select columns “Age” from the “B_friday” using SQL statement.
sqlContext.sql("select Age from B_friday").show(5) +----+ | Age| +----+ |0-17| |0-17| |0-17| |0-17| | 55+| +----+
20. Building a machine learning model
If you have come this far, you are in for a treat! I’ll complete this tutorial by building a machine learning model.
I will use only three dependent features and the independent variable in df1. Let’s create a DataFrame df1 which has only 4 columns (3 dependent and 1 target).
val df1 = df.select("User_ID","Occupation","Marital_Status","Purchase")
In above DataFrame df1 “User_ID”,”Occupation” and “Marital_Status” are features and “Purchase” is target column.
Let’s try to create a formula for Machine learning model like we do in R. First, we need to import RFormula. Then we need to specify the dependent and independent column inside this formula. We also have to specify the names for features column and label column.
import org.apache.spark.ml.feature.RFormula val formula = new RFormula().setFormula("Purchase ~ User_ID+Occupation+Marital_Status").setFeaturesCol("features").setLabelCol("label")
After creating the formula, we need to fit this formula on df1 and transform df1 through this formula. Let’s fit this formula.
val train = formula.fit(df1).transform(df1)
After applying the formula we can see that train dataset has 2 extra columns called features and label. These are the ones we have specified in the formula (featuresCol=”features” and labelCol=”label”)
20.1 Applying Linear Regression on train
After applying the RFormula and transforming the DataFrame, we now need to develop the machine learning model on this data. I want to apply a Linear Regression for this task. Let us import a Linear regression and apply on train. Before fitting the model, I am setting the hyperparameters.
import org.apache.spark.ml.regression.LinearRegression val lr = new LinearRegression().setMaxIter(10).setRegParam(0.3).setElasticNetParam(0.8) val lrModel = lr.fit(train)
You can also make predictions on unseen data. But I am not showing this here. Let’s print the coefficient and intercept for linear regression.
println(s"Coefficients: ${lrModel.coefficients} Intercept: ${lrModel.intercept}") Output: Coefficients: [0.015092115630330033,16.12117786898672,-10.520580986444338] Intercept: -5999.754797883323
Let’s summarize the model over the training set and print out some metrics.
val trainingSummary = lrModel.summary Now, See the residuals for train's first 10 rows. trainingSummary.residuals.show(10) +-------------------+ | residuals| +-------------------+ | -883.5877032522076| | 5946.412296747792| | -7831.587703252208| | -8196.587703252208| |-1381.3298625817588| | 5892.776223171599| | 10020.251134994305| | 6659.251134994305| | 6491.251134994305| |-1533.3392694181512| +-------------------+ only showing top 10 rows
Now, let’s see RMSE on train.
println(s"RMSE: ${trainingSummary.rootMeanSquaredError}") Output: RMSE: 5021.899441991144
Let’s repeat above procedure for taking the prediction on cross-validation set. Let’s read the train dataset again.
val train = sqlContext.read.format("com.databricks.spark.csv").option("header", "true").option("inferSchema", "true").load("train.csv")
Now, randomly divide the train in two part train_cv and test_cv
val splits = train.randomSplit(Array(0.7, 0.3)) val (train_cv,test_cv) = (splits(0), splits(1))
Now, Transform train_cv and test_cv using RFormula.
import org.apache.spark.ml.feature.RFormula val formula = new RFormula().setFormula("Purchase ~ User_ID+Occupation+Marital_Status").setFeaturesCol("features").setLabelCol("label")
val train_cv1 = formula.fit(train_cv).transform(train_cv) val test_cv1 = formula.fit(train_cv).transform(test_cv)
After transforming using RFormula, we can build a machine learning model and take the predictions. Let’s apply Linear Regression on training and testing data.
import org.apache.spark.ml.regression.LinearRegression val lr = new LinearRegression().setMaxIter(10).setRegParam(0.3).setElasticNetParam(0.8) val lrModel = lr.fit(train_cv1) val train_cv_pred = lrModel.transform(train_cv1) val test_cv_pred = lrModel.transform(test_cv1)
In train_cv_pred and test_cv_pred, you will find a new column for prediction.
21. Additional Resources
- In Scala there are some libraries which are specially written for Data Analysis purpose, refer this link.
- If you want to learn Scala programming refer this link.
- For quick introduction to the Spark API refer this link.
- For, Spark Programming Guide: refer this link.
- To learn about Datasets, and DataFrames, Spark SQL. Refer this link.
End Notes
In this article, I have provided a practical hands on guide for Scala. I introduced you to write basic programs using Scala, some important points about Scala and how companies are using Scala.
I then refreshed some of the basic concepts of Apache Spark which I have already covered in my PySpark article and built a machine learning model in Apache Spark using Scala. If you have any questions or doubts, feel free to post them in the comments section.
Python: Raise a Timeout exception when a request times out
Python Requests: Exercise-6 with Solution
Write a Python program to send a request to a web page and stop waiting for a response after a given number of seconds. If the request times out, raise a Timeout exception.
Sample Solution:
Python Code:
import requests

print("timeout = 0.001")
try:
    # The target URL was lost in extraction; the sample output below
    # shows it was https://github.com.
    r = requests.get('https://github.com', timeout=0.001)
    print(r.text)
except requests.exceptions.RequestException as e:
    print(e)

print("\ntimeout = 1.0")
try:
    r = requests.get('https://github.com', timeout=1.0)
    print("Connected....!")
except requests.exceptions.RequestException as e:
    print(e)
Sample Output:
timeout = 0.001
HTTPSConnectionPool(host='github.com', port=443): Max retries exceeded with url: / (Caused by ConnectTimeoutError(<urllib3.connection.VerifiedHTTPSConnection object at 0x0BD7B0D0>, 'Connection to github.com timed out. (connect timeout=0.001)'))

timeout = 1.0
Connected....!
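requests builds its timeout handling on top of socket-level timeouts. The underlying mechanism can be demonstrated with the standard library alone, with no network access needed (an illustrative sketch, not part of the original exercise):

```python
import socket

def recv_with_timeout(sock, timeout):
    """Return received bytes, or raise TimeoutError if nothing arrives in time."""
    sock.settimeout(timeout)
    try:
        return sock.recv(1024)
    except socket.timeout as e:  # alias of TimeoutError since Python 3.10
        raise TimeoutError(f"no data within {timeout}s") from e

# Connected socket pair whose peer never sends anything.
a, b = socket.socketpair()
try:
    recv_with_timeout(a, 0.05)
except TimeoutError as e:
    print("caught:", e)
finally:
    a.close()
    b.close()
```

Note that the exercise's generic `RequestException` handler also swallows timeouts, since `requests.exceptions.Timeout` is a subclass of it; catching the timeout class specifically lets you treat slow servers differently from unreachable ones.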
Hello everyone,
I'm trying to pass an argument i receive from the command line in a main class to another class where i've created an array. The argument will designate how big the array should be. If you can't already tell I'm brand new to Java and I'm experiencing some frustration in trying to overcome what should be a simple matter.
I stored the argument in a variable MAX in the main class, and in the second class i used the variable MAX again, thinking that the second class would recognize it from the first but obviously this is incorrect. Any help would be greatly appreciated and here's the code. I have first shown the main class, then the second class. Thanks in advance:
Code :
import java.io.*; public class MidTwo { public static void main (String[] args) throws IOException { int MAX; int i; BufferedReader in = new BufferedReader (new InputStreamReader(System.in)); MAX = Integer.parseInt(args[0]); if (MAX<=0) { System.out.println("Command line needs to be a positive integer"); return; } MyArray gment; gment = new MyArray(MAX); } } import java.io.*; public class MyArray { double num; String line; BufferedReader in = new BufferedReader(new InputStreamReader(System.in)); MyArray (int MAX) throws IOException { String line; double array[] = new double[MAX]; for (int i=0; i<MAX; i=i++) { System.out.println("what number?"); line = in.readLine(); num = Double.parseDouble(line); array[i] = num; { break; } } } }
Sam | http://www.javaprogrammingforums.com/%20object-oriented-programming/1617-passing-information-between-classes-printingthethread.html | CC-MAIN-2013-48 | refinedweb | 238 | 57.57 |
Query SQLAlchemy models with MongoDB syntax.
Query SQLAlchemy models using MongoDB style syntax.
Why?
The need arose for me to be able to pass complex database filters from client side JavaScript to a Python server. I started building some JSON style syntax to do so, then realized such a thing already existed. I’ve never seriously used MongoDB, but the syntax for querying lends itself pretty perfectly to this use case.
That sounds pretty dangerous…
It can be. When using this with any sort of user input, you’ll want to pass in a whitelist of attributes that are ok to query, otherwise you’ll open the possibility of leaked passwords and all sorts of other scary stuff.
So, can I actually use this for a serious project?
Maybe? There’s some decent test coverage, but this certainly isn’t a very mature project yet.
I’ll be pretty active in supporting this, so if you are using this and run into problems, I should be pretty quick to fix them.
How fast is it?
I’m sure my actual syntax parsing is inefficient and has loads of room for improvement, but the time it takes to parse should be minimal compared to the actual database query, so this shouldn’t slow your queries down too much.
Supported Operators
- $and
- $or
- $not
- $nor
- $in
- $nin
- $gt
- $gte
- $lt
- $lte
- $ne
- $mod
Custom operators added for convenience:
- $eq - Explicit equality check.
- $like - Search a text field for the given value.
Not yet supported, but would like to add:
- Index based relation queries. Album.tracks.0.track_id won’t work.
- $regex
Examples
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from mqlalchemy import apply_mql_filters
from myapp.mymodels import Album

# get your sqlalchemy db session here
db_engine = create_engine("sqlite+pysqlite:///mydb.sqlite")
DBSession = sessionmaker(bind=db_engine)
db_session = DBSession()

# define which fields of Album are ok to query
whitelist = ["album_id", "artist.name", "tracks.playlists.name"]

# Find all albums that are either by Led Zeppelin or have a track
# that can be found on the "Grunge" playlist.
filters = {
    "$or": [
        {"tracks.playlists.name": "Grunge"},
        {"artist.name": "Led Zeppelin"}
    ]
}
query = apply_mql_filters(db_session, Album, filters, whitelist)
matching_records = query.all()
For more, please see the included tests, as they’re probably the easiest way to get an idea of how the library can be used.
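To get a feel for the semantics of the supported operators without a database, here is a toy in-memory evaluator for a subset of them over plain dicts (my own illustration — not MQLAlchemy's code, which translates filters into SQLAlchemy expressions rather than evaluating them in Python):

```python
def matches(doc, query):
    """Check a dict against a small subset of MongoDB-style filters."""
    for key, cond in query.items():
        if key == "$and":
            if not all(matches(doc, q) for q in cond):
                return False
        elif key == "$or":
            if not any(matches(doc, q) for q in cond):
                return False
        elif isinstance(cond, dict):
            # Operator form, e.g. {"year": {"$gt": 1950}}.
            value = doc.get(key)
            for op, arg in cond.items():
                ok = (
                    (op == "$gt" and value > arg)
                    or (op == "$gte" and value >= arg)
                    or (op == "$lt" and value < arg)
                    or (op == "$lte" and value <= arg)
                    or (op == "$ne" and value != arg)
                    or (op == "$in" and value in arg)
                    or (op == "$eq" and value == arg)
                )
                if not ok:
                    return False
        else:
            # Implicit equality, e.g. {"artist": "Led Zeppelin"}.
            if doc.get(key) != cond:
                return False
    return True

album = {"artist": "Led Zeppelin", "year": 1971}
print(matches(album, {"$or": [{"artist": "Led Zeppelin"}, {"year": {"$gt": 1980}}]}))  # True
```

The real library does the analogous walk once per query and hands the resulting boolean expression tree to the database, which is why the parsing overhead is small relative to the query itself.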
Contributing
Submit a pull request and make sure to include an updated AUTHORS with your name along with an updated CHANGES.rst.
License
MIT
PoreMapper: Python package for mapping molecular pores
A quick and easy way to make blobs and pores
Why?
I work mostly with metal-organic cages and, in that field, the pore is often visualised with a software called VOIDOO (see here), which makes very pretty images. I wanted to recreate that, quickly (just for visualisation) and in Python (to interface with an stk molecule). I initially planned on performing cheap dynamics on what I termed a Blob inside the cage, allowing it to mould to the interior surface. However, that was overcomplicating the issue. Instead, just inflate a balloon inside the pore!
An example usage that highlights “why” is available as part of my YouTube tutorials:
How?
I will start by noting that a lot of the more frustrating parts of this code were taken directly from pyWindow, which was developed by Marcin Miklitz during his time in the Jelfs group. I thank him for saving me some time! Additionally, I must thank Austin Mroz, a postdoc in the Jelfs group, who shared with me the atomic radii from their recent work: STREUSEL (paper here). These radii are used for all Atoms in the Host class.
PoreMapper provides a Host class, which reads in XYZ coordinates and atom types: this is the cage of interest. The calculation is handled by the Inflater class, which takes only one argument: the bead_sigma or radius of the Beads that will be used to map the pore (i.e., resolution). The Inflater has two methods: get_inflated_blob and inflate_blob, which perform the same process, except get_inflated_blob only yields the final result, not each step.
The process begins with a Blob made up of Beads in an idealised spherical geometry with a radius of 0.1 Angstrom and a number of beads defined by the size of the Host. The Blob is placed at the centroid of the Host. Over 100 steps, the position of each Bead in the Blob is translated outwards in small increments until the maximum diameter of the Host is reached. If at any point a Bead is within contact distance (bead radii + atom radii) of a Host atom, it is no longer moved in following steps and is added to the Pore class (a subset of beads in the initial Blob). The output of the calculation is an InflationResult, which contains the Blob and Pore for further analysis.
A Blob provides pathways out of the molecule, which we analyse by clustering the points (MeanShift usage from sklearn) to find the windows of the Host. This process is currently limited (see below) and pyWindow is recommended! Regardless, the visualisation of the pathways, and collection of those coordinates, may be useful. This figure shows the inflation of the blob in CC3:
A Pore provides a visualisation of the inside of a Host, and the class provides analyses including get_mean/max_distance_to_com for pore radii, get_volume for pore volume and get_asphericity for pore shape. Again, the key for me was visualisation. This figure shows the inflation of the pore in CC3:
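The asphericity mentioned above is a standard gyration-tensor quantity; for a set of bead coordinates it can be computed as follows (a generic sketch of the textbook formula, not PoreMapper's actual implementation):

```python
import numpy as np

def asphericity(coords):
    """Asphericity b = l1 - (l2 + l3) / 2 from gyration-tensor eigenvalues.

    coords: (N, 3) array of bead positions; returns 0 for a perfectly
    isotropic (spherical) distribution and grows as the shape elongates.
    """
    coords = np.asarray(coords, dtype=float)
    centred = coords - coords.mean(axis=0)
    gyration = centred.T @ centred / len(coords)
    l3, l2, l1 = np.sort(np.linalg.eigvalsh(gyration))  # ascending order
    return l1 - 0.5 * (l2 + l3)
```

A cube of beads gives zero (all eigenvalues equal), while a line of beads gives a positive value equal to the variance along the line.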
Most importantly, PoreMapper is easy to use in a Python project! Here is a code example of running an analysis of a host from an XYZ file:
import pore_mapper as pm

# Host structure file is expected at {name}.xyz; 'cc3' is just an example name.
name = 'cc3'

# Read in host from xyz file.
host = pm.Host.init_from_xyz_file(path=f'{name}.xyz')
host = host.with_centroid([0., 0., 0.])

# Define calculator object.
calculator = pm.Inflater(bead_sigma=1.0)

# Run calculator on host object, analysing output.
final_result = calculator.get_inflated_blob(host=host)

# Analysis.
pore = final_result.pore
windows = pore.get_windows()
print(f'windows: {windows}, pore_volume: {pore.get_volume()}')

# Write final structures.
host.write_xyz_file(f'{name}_final.xyz')
pore.get_blob().write_xyz_file(f'{name}_blob_final.xyz')
pore.write_xyz_file(f'{name}_pore_final.xyz')
Examples and limitations.
A Blob provides pathways out of the molecule, and from these points, we provide window clustering/detection. However, the window calculation is the most costly part currently, and is not perfect. The figure below shows a metal-organic cage with four windows, where PoreMapper detects two. Additionally, you can see (black arrow) that the Pore has some imperfections based on beads being outside of the cavity.
The figure below shows some more examples of pores and blobs (purple on the right; all others have equivalent blobs and pores because there are no windows!) for multiple metal-organic cages. Importantly, to produce the coordinate files for this analysis takes a couple of seconds! Although I could spend ages making the figures pretty in pymol…
On the left, we see multiple cages with no windows, the one on the right is entirely nonporous. The volumes calculated for the first three are within ~200 Angstrom^3 of reported VOIDOO volumes (a paper on this), but I would suggest that changes in bead diameter and resolution could be the cause of this (warning of the sensitivity).
What next?
It is available on GitHub and can be installed through pip!
A series of examples are provided here. This package is small and after spending a short amount of time writing it and being very happy with the outcome, for visualisation at least, I decided that merging it with pyWindow (more code from the Jelfs group: pyWindow) would be best. This would include rewrite/clean-up/improvements/bug-fixes to pyWindow, which I hope to report on soon!
Please, test it, use it, break it and send me feedback!
Convert map object to numpy array in python 3
In Python 2 I could do the following:
import numpy as np

f = lambda x: x**2
seq = map(f, xrange(5))
seq = np.array(seq)
print seq
# prints: [ 0  1  4  9 16]
In Python 3 it does not work anymore:
import numpy as np

f = lambda x: x**2
seq = map(f, range(5))
seq = np.array(seq)
print(seq)
# prints: <map object at 0x10341e310>
How do I get the old behaviour (converting the map results to a numpy array)?
Edit: As @jonrsharpe pointed out in his answer, this could be fixed if I converted seq to a list first:
seq = np.array(list(seq))
but I would prefer to avoid the extra call to list.
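For what it's worth, numpy can also consume the iterator directly, without an intermediate list, via np.fromiter, at the cost of specifying the dtype up front (a possible alternative, not from the original thread):

```python
import numpy as np

f = lambda x: x**2
seq = np.fromiter(map(f, range(5)), dtype=np.int64)
print(seq)  # [ 0  1  4  9 16]
```

np.fromiter only handles 1-D arrays of a fixed scalar dtype, so np.array(list(...)) remains the general-purpose answer.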
CodeGuru Forums > Visual C++ & C++ Programming > Visual C++ Programming
Whoa, whoa, whoa ...
grannnn
March 28th, 1999, 07:00 PM
Good day guys!,
I'm quite surprised by the number of responses for my "little
brain exercise". I hope you won't get to a point of realizing
that you have just wasted your highly-paid time.
Nobody is supposed to go home or be hired. In the first place,
my only purpose for posting this problem is to gather the (as many as
possible) valid answers, and not the BEST solution.
Please leave the validation and testing of your suggestions for me.
Oh, you might want to ask were would I use this, ok, i'm just thinking
if I could include this problem in the battery test for our applicants
in a programmer trainee position. And I don't think you're interested
with the job-opening, are you?
Sorry if i've scratched a match in a dry wooden forest.
Thanks for doing your job as fee-free-gurus.
Bore
March 28th, 1999, 08:31 PM
Ok, I've allowed myself to be drawn into this conversation for the last time. Please, please, don't use this sort of question in an interview. You want to hire motivated, creative individuals. Whether they know this sort of trivia is, IMHO, irrelevant. I'd much rather hire someone with some fire in his belly than some know-it-all who's going to win the next round of Jeopardy. Sure, ask him/her what xor/xor/xor will do, and observe his/her reaction. If it's a blank stare, deduct points. If it's anything else, add points. Again, JMHO. Sorry!
grannnn
March 28th, 1999, 10:54 PM
Ok, the suggestion will be considered. Thanks.
But, I really don't intend to use it as an interview question,
I prefer including it (I bet you won't disagree) in a written exam.
Hans Wedemeyer
March 29th, 1999, 08:35 AM
As MS compilers allow inline assembly I thought it proper to include an example,
it's actually more within spec than any other solution that's been posted as it
truly only uses two variables and only two cpu registers.
About using it for interviews, IMHO YES, if the person is going to expected to
think and use the tools that are available. Would you use a sledg hammer to crack
nuts when a nutcracker is available ?
regards
Hans Wedemeyer
#include <stdio.h>
#include <stdlib.h>
void main( int argc, char* argv[] )
{
int a = 7;
int b = 9;
// did user supply new values ?
if ( argc > 1 && argc == 3 ) //expect two args a followed by b
{
a = atoi(argv[1]);
b = atoi(argv[2]);
}
printf("a=%d b=%d\n",a,b);
_asm {
mov eax,a
mov ebx,b
xchg eax,ebx
mov a,eax
mov b,ebx
}
printf("a=%d b=%d\n",a,b); // within spec. but limited to 2gig
}
Agenda
See also: IRC log
<mnot> Scribe: hugo
<mnot> Scribe: Hugo Haas
<mnot> ScribeNick: hugo
Jonathan: I sent a new proposal for i064
Mark: we'll take this up in this call
Jonathan: I sent a couple of issues that are not on the issues list
Mark: one is on the issues list, and the other one was a typo and it was dispatched to Marc
Umit: I would like more time to review the minutes
Mark: we'll approve them next week then
<scribe> ACTION: Marc Hadley to incorporate namespace policy into drafts and RDDL. [PENDING] [recorded in]
<scribe> ACTION: [DONE] Arun Gupta to iterate his testing document to categorize and reformat. Due 2005-10-10. [recorded in]
<scribe> ACTION: Editors to ensure we meet our charter with regard to backward compatibility warnings for WSDL 1.1, aligning it with the direction we took for SOAP 1.1 [PENDING] [recorded in]
<scribe> ACTION: [DONE] Jonathan Marsh to formulate a proposal for a migration guide. [recorded in]
<mnot>
<mnot>
Mark: it seems that the Group is happy with our comments
<mnot>
<abbie> hi, can u please add me for the roll call
Jonathan: no, there was no pushback
Hugo: what's the status of our discussion about wsoap:action granularity?
Jonathan: we haven't made a decision yet
Hugo: in case we don't adopt this proposal, we should make this WG aware of it
[ DaveH goes over his proposed issue ]
Hugo: I thought we had agreed not to talk about dispatching in the spec
DaveH: in that case, we should
maybe be a little more explicit
... I find this sentence in our spec very vague
... I'm happy with not getting into dispatching in the core, but the SOAP bining may be different in that regards
... we're not really taking on dispatching, but we're implying we are
Paul: it seems to me that you're talking about what WSDL does with the message
DaveH: my assumption about the outbound side is that the value of action will be used in outgoing messages
Paul: I don't think we should consider SOAP+WSDL as a big lump when it comes to action
DaveH: we say action is mandatory, but we don't say what it means
<inserted> ... how about its meaning, uniqueness, etc.
Hugo: action identifies the semantics of the message, and it is possible to have identical actions for multiple messages in the same operation if they have the same meaning
Mark: do people want to see this on our issues list?
Jonathan: it doesn't seem to harm
<uyalcina> +1 to Marc Hadley
Marc: I'd like to understand how the current draft is broken
DaveH: nowhere in this spec do we ever define what dispatching off of action means
Marc: I don't think we should go there
<pauld> WSDL position on dispatching is a rather unhappy compromise resulting from a lot of discussion and a minority opinion or two
Mark: we talked about raising the bar for accepting as issue
<RebeccaB> +1 to Marc's position that we don't need to go there
Mark: I would like to have issues
seconded by somebody
... does somebody want to second this issue?
Umit: given the history in WSDL, I don't think we should talk about this issue
RESOLUTION: proposed issue not accepted in the issues list
<mnot>
[ Jonathan describes the issue ]
<mnot> Minutes of i021 decision:
Mark: my concern is that we may
be reopening a previous issue (i021)
... it's not clear that an explicit decision was made at the time
Hugo: I thought we considered
wsoap:module, but ended up with UsingAddressing because we
wanted to go beyond SOAP
... is that what we want to reconsider?
Mark: my recollection is that we wanted to have a cross-WSDL versions mechanism
Marc: I think the minutes are pretty clear about defining UsingAddressing beyond SOAP
Jonathan: so how would you use it beyond SOAP?
Rebecca: how about if you use multiple bindings, e.g. multiple ports with different bindings?
Marc: does that mean that you want to highlight the use Addressing for our SOAP binding regardless of the underlying protocol?
Jonathan: yes
Marc: I think that we have some mentions of SOAPAction that may be HTTP specific
Jonathan: I'm assuming that it's
only applying to cases when SOAPAction makes sense
... if we leave it the way it is, it's not clear with WS-A binding is in use
Mark: is that a lie down on the road issue for you?
Jonathan: no, it's a spec consistency issue
<uyalcina> it is implicit in the context
Hugo: have you considered using having a marker for specifying what exact binding is in use?
Jonathan: no, I think that you can do that in WSDL in already, so we don't need to architect an extensibility point here
<marc> the location of UsingAddressing extension in the WSDL gives the necessary context to determine th ebinding in use
Mark: anyone seconding this issue?
[ silence ]
RESOLUTION: proposed issue not accepted in the issues list
<mnot>
Marc: I'd like to think about it
more
... it looks like a backwards compatibility feature
... but it may complicate the defaulting rules
<uyalcina> I prefer getting this into the issues list
Anish: I think we can put it on the issues list
<uyalcina> lets discuss next week
Anish: I am seconding it
RESOLUTION: Issue added to the issues list
<scribe> New proposal:
Mark: are people comfortable with this new text or do they want more time?
Anish, Hugo: we're OK
Marc: what's the point of the last paragraph?
Jonathan: letting people know that they may want to go and fix their action values
Marc: I'm OK to aprove it now
Mark: any objection to this proposal?
[ silence ]
RESOLUTION: i064 closed and resolved as proposed in
[ Hugo summarizes where we're at ]
<Marsh> +1 if it'sn not substantial
Mark: we wanted to make sure the proposal wasn't a substantial change
Tony: I agree it isn't
RESOLUTION: cr6 closed and resolved with
[ nobody objected to this resolution ]
Anish: if we have quotes in an HTTP header, are the quotes significant?
Mark: it's specified per header
Marc: I'd like to go and check the media type definition
<scribe> ACTION: Marc to come up with a proposal for cr8 [recorded in]
<mnot>
CR test cases:
[ Arun introduces the document ]
<inserted> -- Core 1. "none" URI (2.1) - REQUIRED
DaveH: I don't think that test 1 is a valid test as it's not tied to a requirement
Paul: I think that a "none" URI identifies a one-way message
DaveH: the "none" sort of turns a req-resp into a one-way
<mnot> ."
Marc: could we have an endpoint which always generates faults, except when the recipient is the "none" URI?
DaveH: that's a way indeed
Mark: we could have an HTTP
transport response, without a SOAP response
... we need to work with keeping in mind that we need to demonstrate features using these tests
Marc: the way I saw this is that you'd better use "none" in a one-way message becouse of the defaulting rule
Arun: so we're not going to do the the 1&2 sub-bullets as specified; we're going to use a request-response with a "none" URI which will degenerate into a one-way
-- Core 2. Endpoint Reference Infoset Representation (2.2) - REQUIRED
Mark: this seems to be a behavioral test of a reply
Arun: we can add more tests about how a FaultTo gets represented
Katy: is this text just to test the serialization of an EPR?
Mark: yes
DaveH: how do you get out the abstract properties from these?
Mark: it's implementation specific
Arun: I'll add some information about success criteria
Mark: we talked about using XPath for this
-- Core 3. Endpoint Reference Extensibility (2.5) - REQUIRED
<pauld> i can build test cases from these, and build XPath expressions to compare, however some complete example messages would be useful
Anish: if the test doesn't define
what these extensions mean, how do we know if the receiving end
saw them?
... for SOAP 1.2, we had defined a header whose function was to be echo'ed back
Mark: could you make some proposals around this?
<scribe> ACTION: Anish to propose meaningful EPR extensions for test 3. Endpoint Reference Extensibility (2.5) [recorded in]
Katy: it's not clear what we're testing here: the sending agent or the receiving one?
DaveH: I don't think that we could test the client side
Mark: I think that these differences will become more clear when we put those tests into Paul's framework
-- Core 4. XML Infoset Representation of Message Addressing Properties (3.2)
Arun: 2.1. may apply here
-- Core 5. wsa:To defaulting (3.2)
-- Core 6. wsa:ReplyTo defaulting (3.2)
DaveH: in that case, you don't need to talk about the client at all, it's a server test
-- Core 9. Formulating a normal Reply (3.3)
Arun: More tests can be added here
-- Core 5. wsa:To defaulting (3.2)
[ Going back as requested by Umit ]
Umit: you're talking about the reply message here, right?
Arun: that's correct
-- Core 10. Formulating a Fault Reply (3.3)
Katy: do we need the
isRefParam="true" in the EPR?
... there's a typo in sub-bullet 1
Arun: thank you
-- SOAP 1. SOAP 1.2 Feature interaction with Action (2.4)
-- SOAP 3. SOAP 1.2 Anonymous Address (3.5)
-- SOAP 5. SOAP 1.1 interaction with Action (4.2)
-- SOAP 6. SOAP 1.1 Anonymous Address (3.5)
-- SOAP 8. InvalidAddressingFailure Fault (5, 3.2)
Mark: I think that the next step is for Arun and Paul to integrate those in Paul's framework
Paul: we will need messages for inclusion in the framework
Mark: I'd like us to identify which features are not tested by those tests
<scribe> ACTION: Paul to take Arun's work and integrate it in his framework with Arun's help by 2005-10-17 [recorded in]
Paul: do you think that we have good coverage with those base on the list we discussed in Palo Alto 2 weeks ago?
Mark: have you guys changed the spec so that we have a section called creating a message from an EPR?
Tony: not yet, but soon
Mark: we will be considering
Paul's document in the coming weeks to make sure we understand
it and we have enough tests to test implementation in CR
... we have 3 concalls between now and Tokyo
... we're going to continue revising the test doc
... does that seem reasonable?
[ silence ]
ADJOURNED
Download and Install Bukkit
- Create a folder for Bukkit, anywhere it doesn't matter
- Head over to and download the latest version of Bukkit, note that if you want to use the latest version of Minecraft, you might need to download the Beta Build and save the .jar file to the folder.
- Rename the downloaded .jar file from craftbukkit-#.#.#-R#.#.jar to craftbukkit.jar
- Open Notepad, insert the following text:
java -Xms1024M -Xmx1024M -jar craftbukkit.jar -o true
PAUSE
- Save the file in the folder as run.bat
Run Bukkit
Double click run.bat
Test Bukkit
Open Minecraft, Multiplayer, Direct Connect, Type "localhost", Join Server
You can stop bukkit by typing stop in the bukkit command window
Download and Install Raspberry Juice
- Download raspberry juice from
- Save the raspberry juice .jar file to plugins folder in the bukkit folder
- Start up bukkit
If everything has gone to plan you should see a message when you start up bukkit that RaspberryJuice has been enabled.
Now, you can run the programs you created for Minecraft: Pi Edition on the full edition of Minecraft.
Minecraft Analogue Clock running using Bukkit and Raspberry Juice
Limitations
The raspberry juice plugin doesn't currently support all the functions of the Minecraft: Pi Edition API, see for a list of supported functions.
Really cool! How do you run your python files, though? I've been trying to find out for a long time!
Does not work!!!
cmd says:
unable to access jarfile craftbukkit.jar
I am getting an import error "ImportError: No module named 'connection'" And I have tried several things to try to run the programs but it doesn't work, help please.
Are you using Python 3? The python api library from mojang only works with Python 2.
I've set this up but I can't find any tutorials on how to use it :/ not sure where to put the python file and how to use it with the raspberry pi(I have bukkit and raspberry juice on the pi).
Any help would be very helpful :)
I want to use this plugin to control the GPIO on the raspberry pi :) so I can move the redstone logic to a breadboard.
You can use Bukkit in the same way you would use Minecraft: Pi edition.
If bukkit and juice is install on your Pi it should be pretty easy as the mcpi python libraries are installed by default with Raspbian.
Create a simple program:
from mcpi.minecraft import Minecraft
mc = Minecraft.create()
mc.postToChat("hi")
Save it anywhere on your Pi and run it! If your connect to Minecraft server on the Pi you should see the message.
Thanks for the reply :) got it working now.
I think the first thing I'm going to make is a script that allows you to type in /Python [File].
I'm new to MVC and trying to figure out how it works. I'm just experimenting with simple projects, and there are some things that are unclear.
Let's say I want to load an image depending on a value (true or false from a checkbutton) from my controller. I have no problem with receiving the value from my view, but how would I go about generating this HTML element?
A few ideas that come to mind:
IsVisible=false
ViewBag
View
Controller
Html.Helper
Build it like this:
public class viewModel
{
    public string imgSrc { get; set; }
}
public class myController : Controller
{
    public ActionResult myAction()
    {
        // 'switch' is a reserved word in C#, so the flag is renamed here;
        // in practice it would come from the posted checkbox value.
        bool isVisible = true;
        viewModel vm = new viewModel();
        if (isVisible)
            vm.imgSrc = "something.jpg";
        else
            vm.imgSrc = "somethingelse.jpg";
        return View(vm);
    }
}
@model viewModel
<img src="@Model.imgSrc" />
MVC is based on separation of concerns. Read about the design pattern. People often think MVC is a technology; it's not, it's a pattern.
As such it should be used correctly, with the correct parts doing what they are designed to do. So just because you can do something doesn't mean you should. In MVC the view's job (concern) is rendering the content. The controller should have no knowledge of this content; its job is routing and generating the Model. So all your HTML should be in the view (full stop).
Oh, just to clarify,
I meant that I've looked at and believe it's a more feasible model to start
with, rather than trying a complete W3C server-side XForms implementation.
The weaknesses of were that it's been stalled
quite some time ago at a stage where it was bound strictly to HTML and
doesn't seem to make obvious the XML validation, population, handling reuse
for different web clients.
SOAP and HTML are the two extremes which if we have a framework to cover,
then many of the other intermediate client variations will fall in easily.
-= Ivelin =-
-----Original Message-----
From: Ivanov, Ivelin [mailto:Ivelin_Ivanov@bmc.com]
Sent: Friday, February 15, 2002 9:30 AM
To: 'cocoon-dev@xml.apache.org'
Subject: RE: Cocoon, XForms, ExFormula, Chiba, Struts, etc
Thanks for the response fellows. I'm glad this is still considered a
feasible subject.
Yes, I've looked at XForms and:
1) It is what I believe we should be shooting for instead of full XForms
support.
Believe me, Struts is extremely useful when you need HTML Forms/Java
binding.
2) We could use plenty of Struts architecture to add the extra XML layer:
HTML Forms <-> XML/SAX <-> JavaBeans. Excellent tools are already available
for that: Castor (or JAXB), Xerces2 (XML Schema validation) and again Struts'
source base.
3) I've customized Struts for our project's needs quite a bit and have grown
to understand that it's a very well written tool, which demonstrates lots of
best practices.
To address Chris' well put concise requirements:
c1) Shouldn't be as hard if we reuse Struts knowledge.
- In struts-config.xml you describe a list of ActionForm names along with
their corresponding Java classes, e.g.
<form-bean
- To use this form in a JSP page, you'd write something like:
<html:form
Enter Account id here:
<html:text
<html:submit
</html:form>
- Very similar to C2, the "subscribeAction" string is a name mapped to an
Action class which has a perform method that takes (HttpRequest,
HttpResponse, SubscribeForm)
Struts takes it from here and generates the following HTML from the JSP:
<form name="subscribe" method="POST" action="/subscribeAction.do">
Enter Account id here:
<input type="text" name="accountId" maxlength="25" size="25" value="">
<input type="submit" name="action" value="button.ok"/>
</form>
How does Struts use that? Smart. Reflection. The names of the input tags are
simply pointers to JavaBean properties of the SubscribeForm. For each name
"someproperty", Struts will expect "getSomeproperty" and "setSomeproperty"
methods to be present in the Form. So all the SubscribeForm is:
public class SubscribeForm
{
    private String accountId_;

    public String getAccountId() { return accountId_; }
    public void setAccountId(String aid) { accountId_ = aid; }
    // ... so on
}
The name field can hold not just a simple property expression, but almost any
type of JavaBeans property you may want to access. For example:
name="user.address[2].country" translates to:
getUser().getAddress().elementAt(2).getCountry() for populating the original
value in the HTML, and
getUser().getAddress().elementAt(2).setCountry() for populating the JavaBean
when the form is submitted.
Of course the two lines above are simplified, but anyone with some
reflection experience will get the point.
XPath is obviously a more standard and powerful tool one could use to achieve
a similar effect.
Two excellent tools are available to do the reflection job for free: and
So, using a similar descriptor concept to struts-config.xml, one could
describe:
a) a form name that maps to a Java Bean class
b) using JAXB (or Castor) one can map the JavaBeans class to XML
representation
c) with Jaxen/SaxPath one can accomplish automated population/serialization
of (name=XpathExpression, value=String) to and from HTML POST <-> JavaBean
d) Xerces2 can validate the resulting JavaBean through JAXB SAX parsing
against an XML Schema.
At the end the only hard part would seem to be the xsl sheet which converts
an XML form into a client specific format, that will allow customization
like:
<input name="xpath1" value="123" onMouseOver="do1()"
onFocus="doSomethingElse"/>
Without the customization part, translating:
<cocoonform:input
to
<input name="xpath1" value="123">
using a generic cocoonform2html.xsl seems trivial.
--
Sorry for yet another long message. As Steffano cited Pascal, didn't have
time to make it shorter :)
Am I dreaming too much?
-= Ivelin =-
-----Original Message-----
From: Peter Velichko [mailto:peter@softline.kiev.ua]
Sent: Friday, February 15, 2002 6:38 AM
To: cocoon-dev@xml.apache.org
Subject: RE: Cocoon, XForms, ExFormula, Chiba, Struts, etc
Did anyone look at?
It is client-side and server-side form validator.
XForm uses Xerces XML Parser and have LGPL license.
-----Original Message-----
From: Torsten Curdt [mailto:tcurdt@dff.st]
Sent: Friday, February 15, 2002 1:49 PM
To: cocoon-dev@xml.apache.org
Subject: Re: Cocoon, XForms, ExFormula, Chiba, Struts, etc
<snip/>
> > Can we try to unify and finish the job started, so that Cocoon moves to
the
> > next level.
Would be cool...
> There isn't anything on the architecture of chiba available so I have
> no idea how closely related chiba to XForms support in Cocoon would
> be.
I've also tried to find some information about chiba... didn't yet have
the time to install and try it out.
Although I guess XForm is about to become a RC sooner or later it doesn't
yet go well with the current technologies available.
Looking into XUL from mozilla could be an interesting alternative...
(although they do not aim completely into the same direction!!)
I haven't had a look into the XForm spec since the exformula stuff. I hope
they fixed some stuff. In those days it still felt somehow immature. The spec
didn't say anything about how selected options in a multi-select option box
were supposed to look in the instance, e.g. :-/
I am back again a bit scept
ok
> I'm currently working on decoupling sitemap components from the input
> layer, making that plugable. I think it could help with b)
Sounds interesting... could already post some more information. We've
maybe implemented something similar...
> part.
As I remember c1) isn't yet possible with the current XSLT
spec/implementation. It's only possible with extension functions :-(ugly)
-- | http://mail-archives.apache.org/mod_mbox/cocoon-dev/200202.mbox/%3C85BCB1287CF1D41197570090279A6A0D03952D58@ES02-AUS.bmc.com%3E | CC-MAIN-2016-30 | refinedweb | 1,023 | 55.34 |
Introduction to Stream Control Transmission Protocol
Listing 1. echo_client.c
#define USE_SCTP

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#ifdef USE_SCTP
#include <netinet/sctp.h>
#endif

#define SIZE 1024
char buf[SIZE];
char *msg = "hello\n";

#define ECHO_PORT 2013

int main(int argc, char *argv[])
{
    int sockfd;
    int nread;
    struct sockaddr_in serv_addr;

    if (argc != 2) {
        fprintf(stderr, "usage: %s IPaddr\n", argv[0]);
        exit(1);
    }

    /* create endpoint using TCP or SCTP */
    sockfd = socket(AF_INET, SOCK_STREAM,
#ifdef USE_SCTP
                    IPPROTO_SCTP
#else
                    IPPROTO_TCP
#endif
                    );
    if (sockfd < 0) {
        perror("socket");
        exit(1);
    }

    /* connect to the echo server */
    memset(&serv_addr, 0, sizeof(serv_addr));
    serv_addr.sin_family = AF_INET;
    serv_addr.sin_port = htons(ECHO_PORT);
    if (inet_pton(AF_INET, argv[1], &serv_addr.sin_addr) <= 0) {
        fprintf(stderr, "invalid address: %s\n", argv[1]);
        exit(1);
    }
    if (connect(sockfd, (struct sockaddr *) &serv_addr,
                sizeof(serv_addr)) < 0) {
        perror("connect");
        exit(1);
    }

    /* write msg to server */
    write(sockfd, msg, strlen(msg) + 1);

    /* read the reply back */
    nread = read(sockfd, buf, SIZE);

    /* write reply to stdout */
    write(1, buf, nread);

    /* exit gracefully */
    close(sockfd);
    exit(0);
}
Jan Newmarch has written many books and papers about software engineering, network programming, user interfaces and artificial intelligence, and he is currently digging into the I
Excellent!
An excellent article concerning introduction to SCTP.
Very good!
/Best regards
J | http://www.linuxjournal.com/article/9748?page=0,1&quicktabs_1=0 | CC-MAIN-2017-47 | refinedweb | 167 | 51.04 |
Start playback.
#include <mmplayer/mmplayerclient.h>

int mm_player_play( mmplayer_hdl_t *hdl )
Start playback. This function changes the player status to STATUS_PLAYING. This status setting remains in effect until you pause or stop playback, or the end of the tracksession is reached and repeating is disabled.
The track that begins playing is the one selected as the "current" track in the active tracksession. When this track finishes playing, the player chooses a new track to play based on the shuffle and repeat mode settings. For more details, see mm_player_repeat() and mm_player_shuffle().
At any time during playback, you can seek to a new position in the current track by calling mm_player_seek() or change the current track by calling mm_player_jump().
Returns: 0 on success, -1 on failure
In this section, we will discuss about java classes and its structure. First of all learn: what is a class in java and then move on to its structural details.
Class: In the object oriented approach, a class defines a set of properties (variables) and methods. Through methods certain functionality is achieved. Methods act upon the variables and generate different outputs corresponding to the change in variables.
Structure for constructing a Java program:
package: In a Java source file you generally use the package statement at the head of the program. We use packages to group related classes, i.e., classes of the same category or related functionality. You have to save your Java files in the folder matching the package name mentioned in the Java program.
import : The import keyword is used at the beginning of a source file, after the package statement. You may need classes or interfaces defined in a particular package while programming, so you use an import statement. This specifies classes or entire Java packages which can later be referred to without including their package names in the reference. For example, if you have imported the util package (import java.util.*;) then you need not mention the package name while using any class of the util package.
Access Modifiers : Access modifiers are used to specify the visibility and accessibility of a class, member variables and methods. Java provides access modifiers such as public and private. These can also be used with member variables and methods to specify their accessibility.
java.lang.Object
oracle.javatools.util.Holder<T>
public class Holder<T>
A mutable holder class modeled on the JAX-WS 2.0 Holder class that simply provides a common way of providing an in/out parameter without the need to resort to untidy one length array parameters.
This can also be applied to cases where you need a return value back from a inner or local class. Take for example code that requires the use of the Runnable interface, the following code could be used:
final Holder<Integer> h = new Holder(3);
SwingUtilities.invokeAndWait(new Runnable() {
    public void run() {
        h.set(doSomethingClever());
    }
});
return 9 + h.get();
Ideally in the above example you would use an API that could make use of the
Future interface; but in many cases APIs have not been updated.
public T value
public Holder()
public Holder(T value)
public T get()
public void set(T value)
public java.lang.String toString()
toStringin class
java.lang.Object
24 July 2008 13:44 [Source: ICIS news]
(adds detail in paragraphs 4-6)
LONDON (ICIS news)--An initial monthly Europe butadiene (BD) contract for August has been settled down €15/tonne ($23/tonne) from July at €1,270/tonne on the back of a fall in naphtha costs and some relief in supply, said a buyer and seller on Thursday.
The settlement was on a free delivered (FD) basis and was confirmed by INEOS Olefins on the sell side.
“We have not yet seen an easing in the spot prices but this might come in August so we were prepared to give something away on pricing,” said the seller. No discounts are allowed for the MCP.
It was not immediately clear whether other buyers would follow the settlement, with more talks due this afternoon.
The majority of European BD is traded on a quarterly contract basis.
The third-quarter contract was agreed at €1,260/tonne FD NWE (northwest Europe).
($1 = €0.64)
Put platforms in a Python game with Pygame
In part six of this series on building a Python game from scratch, create some platforms for your characters to travel.
This is part 6
A platformer game needs platforms.
In Pygame, the platforms themselves are sprites, just like your playable sprite. That's important because having platforms that are objects makes it a lot easier for your player sprite to interact with them.
There are two major steps in creating platforms. First, you must code the objects, and then you must map out where you want the objects to appear.
Coding platform objects
To build a platform object, you create a class called
Platform. It's a sprite, just like your
Player sprite, with many of the same properties.
Your
Platform class needs to know a lot of information about what kind of platform you want, where it should appear in the game world, and what image it should contain. A lot of that information might not even exist yet, depending on how much you have planned out your game, but that's all right. Just as you didn't tell your Player sprite how fast to move until the end of the Movement article, you don't have to tell
Platform everything upfront.
Near the top of the script you've been writing in this series, create a new class. The first three lines in this code sample are for context, so add the code below the comment:
import pygame
import sys
import os
## new code below:

class Platform(pygame.sprite.Sprite):
    # x location, y location, img width, img height, img file
    def __init__(self, xloc, yloc, imgw, imgh, img):
        pygame.sprite.Sprite.__init__(self)
        self.image = pygame.image.load(os.path.join('images', img)).convert()
        self.image.convert_alpha()
        self.rect = self.image.get_rect()
        self.rect.y = yloc
        self.rect.x = xloc
When called, this class creates an object onscreen in some X and Y location, with some width and height, using some image file for texture. It's very similar to how players or enemies are drawn onscreen.
Types of platforms
The next step is to map out where all your platforms need to appear.
The tile method
There are a few different ways to implement a platform game world. In the original side-scroller games, such as Mario Super Bros. and Sonic the Hedgehog, the technique was to use "tiles," meaning that there were a few blocks to represent the ground and various platforms, and these blocks were used and reused to make a level. You have only eight or 12 different kinds of blocks, and you line them up onscreen to create the ground, floating platforms, and whatever else your game needs. Some people find this the easier way to make a game since you just have to make (or download) a small set of level assets to create many different levels. The code, however, requires a little more math.
supertux.png
The hand-painted method
Another method is to make each and every asset as one whole image. If you enjoy creating assets for your game world, this is a great excuse to spend time in a graphics application, building each and every part of your game world. This method requires less math, because all the platforms are whole, complete objects, and you tell Python where to place them onscreen.
Each method has advantages and disadvantages, and the code you must use is slightly different depending on the method you choose. I'll cover both so you can use one or the other, or even a mix of both, in your project.
Level mapping
Mapping out your game world is a vital part of level design and game programming in general. It does involve math, but nothing too difficult, and Python is good at math so it can help some.
You might find it helpful to design on paper first. Get a sheet of paper and draw a box to represent your game window. Draw platforms in the box, labeling each with its X and Y coordinates, as well as its intended width and height. The actual positions in the box don't have to be exact, as long as you keep the numbers realistic. For instance, if your screen is 720 pixels wide, then you can't fit eight platforms at 100 pixels each all on one screen.
Of course, not all platforms in your game have to fit in one screen-sized box, because your game will scroll as your player walks through it. So keep drawing your game world to the right of the first screen until the end of the level.
If you prefer a little more precision, you can use graph paper. This is especially helpful when designing a game with tiles because each grid square can represent one tile.
pygame_layout.png
Example of a level map.
Coordinates
You may have learned in school about the Cartesian coordinate system. What you learned applies to Pygame, except that in Pygame, your game world's coordinates place
0,0 in the top-left corner of your screen instead of in the middle, which is probably what you're used to from Geometry class.
pygame_coordinates.png
Example of coordinates in Pygame.
The X axis starts at 0 on the far left and increases infinitely to the right. The Y axis starts at 0 at the top of the screen and extends down.
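The flip between the two conventions is a single subtraction; a quick sketch (the 720-pixel window height is an assumption):

```python
worldy = 720  # assumed window height

def to_pygame_y(math_y):
    # classroom coordinates measure upward from the bottom;
    # Pygame's Y axis grows downward from the top
    return worldy - math_y

print(to_pygame_y(0), to_pygame_y(720))  # bottom and top of the screen
```

Larger Y values therefore mean positions lower on the screen.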
Image sizes
Mapping out a game world is meaningless if you don't know how big your players, enemies, and platforms are. You can find the dimensions of your platforms or tiles in a graphics program. In Krita, for example, click on the Image menu and select Properties. You can find the dimensions at the very top of the Properties window.
Alternately, you can create a simple Python script to tell you the dimensions of an image. Open a new text file and type this code into it:
#!/usr/bin/env python3
from PIL import Image
import os.path
import sys
if len(sys.argv) > 1:
print(sys.argv[1])
else:
sys.exit('Syntax: identify.py [filename]')
pic = sys.argv[1]
dim = Image.open(pic)
X = dim.size[0]
Y = dim.size[1]
print(X,Y)
Save the text file as
identify.py.
To set up this script, you must install an extra set of Python modules that contain the new keywords used in the script:
$ pip3 install Pillow --user
Once that is installed, run your script from within your game project directory:
$ python3 ./identify.py images/ground.png
(1080, 97)
The image size of the ground platform in this example is 1080 pixels wide and 97 high.
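As an aside, for PNG files specifically, the dimensions can be read with nothing but the standard library, because they sit at fixed offsets in the file header. This is a sketch, not a replacement for the Pillow script:

```python
import struct

def png_size(path):
    # A PNG file opens with an 8-byte signature; the IHDR chunk that follows
    # stores width and height as big-endian 32-bit integers at offsets 16-23.
    with open(path, 'rb') as f:
        header = f.read(24)
    if header[:8] != b'\x89PNG\r\n\x1a\n':
        raise ValueError('not a PNG file')
    width, height = struct.unpack('>II', header[16:24])
    return (width, height)
```

Called on the ground image above, this should report the same (1080, 97) that the Pillow script prints.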
Platform blocks
If you choose to draw each asset individually, you must create several platforms and any other elements you want to insert into your game world, each within its own file. In other words, you should have one file per asset, like this:
pygame_floating.png
One image file per object.
You can reuse each platform as many times as you want, just make sure that each file only contains one platform. You cannot use a file that contains everything, like this:
pygame_flattened.png
Your level cannot be one image file.
You might want your game to look like that when you've finished, but if you create your level in one big file, there is no way to distinguish a platform from the background, so either paint your objects in their own file or crop them from a large file and save individual copies.
Note: As with your other assets, you can use GIMP, Krita, MyPaint, or Inkscape to create your game assets.
Platforms appear on the screen at the start of each level, so you must add a
platform function in your
Level class. The special case here is the ground platform, which is important enough to be treated as its own platform group. By treating the ground as its own special kind of platform, you can choose whether it scrolls or whether it stands still while other platforms float over the top of it. It's up to you.
Add these two functions to your
Level class:
def ground(lvl,x,y,w,h):
    ground_list = pygame.sprite.Group()
    if lvl == 1:
        ground = Platform(x,y,w,h,'block-ground.png')
        ground_list.add(ground)
    if lvl == 2:
        print("Level " + str(lvl) )
    return ground_list

def platform( lvl ):
    plat_list = pygame.sprite.Group()
    if lvl == 1:
        plat = Platform(200, worldy-97-128, 285,67,'block-big.png')
        plat_list.add(plat)
        plat = Platform(500, worldy-97-320, 197,54,'block-small.png')
        plat_list.add(plat)
    if lvl == 2:
        print("Level " + str(lvl) )
    return plat_list
The
ground function requires an X and Y location so Pygame knows where to place the ground platform. It also requires the width and height of the platform so Pygame knows how far the ground extends in each direction. The function uses your
Platform class to generate an object onscreen, and then adds that object to the
ground_list group.
The
platform function is essentially the same, except that there are more platforms to list. In this example, there are only two, but you can have as many as you like. After entering one platform, you must add it to the
plat_list before listing another. If you don't add a platform to the group, then it won't appear in your game.
Tip: It can be difficult to think of your game world with 0 at the top, since the opposite is what happens in the real world; when figuring out how tall you are, you don't measure yourself from the sky down, you measure yourself from your feet to the top of your head.
If it's easier for you to build your game world from the "ground" up, it might help to express Y-axis values as negatives. For instance, you know that the bottom of your game world is the value of worldy. So worldy minus the height of the ground (97, in this example) is where your player is normally standing. If your character is 64 pixels tall, then the ground minus 128 is exactly twice as tall as your player. Effectively, a platform placed at 128 pixels is about two stories tall, relative to your player. A platform at -320 is three more stories. And so on.
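The tip's arithmetic, written out as plain Python (the 720-pixel window height is an assumption; the ground and player sizes match this article's examples):

```python
worldy = 720    # assumed window height
ground_h = 97   # height of the ground image from earlier
player_h = 64   # height of the player sprite

ground_top = worldy - ground_h         # Y where the player normally stands
two_stories = worldy - ground_h - 128  # a platform two player-heights up

print(ground_top, two_stories)  # larger numbers are lower on screen
```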
As you probably know by now, none of your classes and functions are worth much if you don't use them. Add this code to your setup section (the first line is just for context, so add the last two lines):
enemy_list = Level.bad( 1, eloc )
ground_list = Level.ground( 1,0,worldy-97,1080,97 )
plat_list = Level.platform( 1 )
And add these lines to your main loop (again, the first line is just for context):
enemy_list.draw(world) # refresh enemies
ground_list.draw(world) # refresh ground
plat_list.draw(world) # refresh platforms
Tiled platforms
Tiled game worlds are considered easier to make because you just have to draw a few blocks upfront and can use them over and over to create every platform in the game. There are even sets of tiles for you to use on sites like OpenGameArt.org.
The
Platform class is the same as the one provided in the previous sections.
The
ground and
platform in the
Level class, however, must use loops to calculate how many blocks to use to create each platform.
If you intend to have one solid ground in your game world, the ground is simple. You just "clone" your ground tile across the whole window. For instance, you could create a list of X and Y values to dictate where each tile should be placed, and then use a loop to take each value and draw one tile. This is just an example, so don't add this to your code:
# Do not add this to your code
gloc = [0,656,64,656,128,656,192,656,256,656,320,656,384,656]
If you look carefully, though, you can see all the Y values are always the same, and the X values increase steadily in increments of 64, which is the size of the tiles. That kind of repetition is exactly what computers are good at, so you can use a little bit of math logic to have the computer do all the calculations for you:
Add this to the setup part of your script:
gloc = []
tx = 64
ty = 64
i=0
while i <= (worldx/tx)+tx:
    gloc.append(i*tx)
    i=i+1
ground_list = Level.ground( 1,gloc,tx,ty )
Now, regardless of the size of your window, Python divides the width of the game world by the width of the tile and creates an array listing each X value. This doesn't calculate the Y value, but that never changes on flat ground anyway.
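To see what that loop actually produces, here is the same computation as plain Python, with an assumed 960-pixel window:

```python
worldx = 960  # assumed window width
tx = 64       # tile width

gloc = []
i = 0
while i <= (worldx / tx) + tx:
    gloc.append(i * tx)
    i = i + 1

print(gloc[:4])  # first few tile X positions: one per tile width
```

Each entry is exactly one tile width to the right of the previous one.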
To use the array in a function, use a
while loop that looks at each entry and adds a ground tile at the appropriate location:
This is nearly the same code as the
ground function for the block-style platformer, provided in a previous section above, aside from the
while loop.
For moving platforms, the principle is similar, but there are some tricks you can use to make your life easier.
Rather than mapping every platform by pixels, you can define a platform by its starting pixel (its X value), the height from the ground (its Y value), and how many tiles to draw. That way, you don't have to worry about the width and height of every platform.
The logic for this trick is a little more complex, so copy this code carefully. There is a
while loop inside of another
while loop because this function must look at all three values within each array entry to successfully construct a full platform. In this example, there are only three platforms defined as
ploc.append statements, but your game probably needs more, so define as many as you need. Of course, some won't appear yet because they're far offscreen, but they'll come into view once you implement scrolling.
def platform(lvl,tx,ty):
    plat_list = pygame.sprite.Group()
    ploc = []
    i=0
    if lvl == 1:
        ploc.append((200,worldy-ty-128,3))
        ploc.append((300,worldy-ty-256,3))
        ploc.append((500,worldy-ty-128,4))
        while i < len(ploc):
            j=0
            while j <= ploc[i][2]:
                plat = Platform((ploc[i][0]+(j*tx)),ploc[i][1],tx,ty,'tile.png')
                plat_list.add(plat)
                j=j+1
            i=i+1
    if lvl == 2:
        print("Level " + str(lvl) )
    return plat_list
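The nested-loop expansion can be checked without Pygame; here is the same arithmetic in plain Python (the platform tuples are made-up values):

```python
tx = 64  # assumed tile width

# each entry: (starting X, Y, number of additional tiles), as described above
ploc = [(200, 400, 3), (600, 300, 2)]

tiles = []
i = 0
while i < len(ploc):
    j = 0
    while j <= ploc[i][2]:  # the first tile plus the extra count
        tiles.append((ploc[i][0] + (j * tx), ploc[i][1]))
        j = j + 1
    i = i + 1

print(tiles)
```

Each entry expands into a horizontal run of tiles sharing one Y value.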
To get the platforms to appear in your game world, they must be in your main loop. If you haven't already done so, add these lines to your main loop (again, the first line is just for context):
enemy_list.draw(world) # refresh enemies
ground_list.draw(world) # refresh ground
plat_list.draw(world) # refresh platforms
Launch your game, and adjust the placement of your platforms as needed. Don't worry that you can't see the platforms that are spawned offscreen; you'll fix that soon.
Here is the game so far in a picture and in code:
pygame_platforms.jpg
Our Pygame platformer so far.
#!/usr/bin/env python3
# draw a world
# add a player and player control
# add player movement
# add enemy and basic collision
# add platform
# GNU All-Permissive License
# Copying and distribution of this file, with or without modification,
# are permitted in any medium without royalty provided the copyright
# notice and this notice are preserved. This file is offered as-is,
# without any warranty.
import pygame
import sys
import os
'''
Objects
'''

# Platform, Player, and Enemy classes and the Level functions, as defined in
# the sections above (including self.score = 1 in Player.__init__ and
# self.health -= 1 on enemy collision, plus Level.bad(lvl,eloc) which
# builds the enemy list), go here.
            if event.key == ord('q'):
                pygame.quit()
                sys.exit()
                main = False

    # world.fill(BLACK)
    world.blit(backdrop, backdropbox)
    player.update()
    player_list.draw(world)   # refresh player position
    enemy_list.draw(world)    # refresh enemies
    ground_list.draw(world)   # refresh ground
    plat_list.draw(world)     # refresh platforms

    for e in enemy_list:
        e.move()

    pygame.display.flip()
    clock.tick(fps)
6 Comments
Wow amazing info,l love python,in your opinion how does it takes to be a skilled python programmer?
Great question. I hope some other people jump in to comment, but here are my initial three thoughts:
0. Practise. To do is to learn. To learn programming, start programming. Reinvent the wheel if you have to; some of my first programs were for silly things (bulk renaming of files, generating thumbnails of images, and so on) that I could have found elsewhere, but I chose to re-create for my own edification.
1. Open source. It's the answer: it lets you learn from the code of others, and to leverage stuff that other people have built (like Pygame or Arcade, and even Python itself).
2. Know how to parse text. It seems trivial, but at least 50% of programming boils down to knowing how to compare and manipulate text.
3. Fearlessness. Just jump in. Don't get intimidated by "better" programmers than you.
Oh and a bonus one: keep reading my articles and the other great programming articles on opensource.com ;-)
Great article Seth!
In my spare time I've been reading, learning, implementing (repeat) a scrolling platform game and your article was a great read. I look forward to reading more from you. Making a game is truly satisfying and people like you that share the methods are doing great work. Thanks for sharing.
Jason
My pleasure, Jason. I find that making games is a fun way to learn programming, and the concepts are surprisingly universal. While not everyone creating games in Python today is destined to get a job making video games, the idea of, for instance, abstracting a group of active items into a list that then gets updated each loop, is something that can be broadly applied to any programming exercise (and that's just one arbitrary example).
Thanks for the article, Seth!
@ Ismail:
I noticed that you have build your question with "how does it takes to…" and not "what does it takes to…"
One essential point, in my opinion, is to understand and use the ins and outs specific to Python.
Thinking of this, I could mention here two books that may help:
Python Tricks - Dan Bader:
Powerful Python - Aaron Maxwell:...
How create angle platforms like Sonic Hedgehog? SuperTux not have angle platforms. | https://opensource.com/article/18/7/put-platforms-python-game?sc_cid=70160000001273HAAQ | CC-MAIN-2020-05 | refinedweb | 2,984 | 71.75 |
When Java came to life in 1995, the web went from an endless series of hyperlinks to a platform that delivered live content. Which is exactly why Java is on more than a billion computers in the world today. And it's cuz of those billion computers that we keep innovating in JDK 7 for the desktop and Java EE for the enterprise.
But it's 2009 and almost a third of Internet access today is through mobile devices. And the percentage of mobile Internet users is expected to surpass those using traditional computers in the next few years! So while the desktop, laptop, and enterprise computer remain important, there are so many new ways to access content on the web. And they were all on display at JavaOne, running the same apps across smartphones, smartbooks, netbooks, e-books, set-top boxes, TVs. So basically any device you chose can now run the same application! Check out Eric's keynote for the full story.
But there are two other key pieces to the Java story this year: first
Nandini showed the JavaFX authoring tool which lets you create graphical applications easily, and then you can send the app directly to a whole bunch of devices simultaneously. And last but not least -
the Java Store - the key to distribution for developers. Cuz if Java's gonna run on everything around us, and more and more developers are gonna write interesting apps for all those devices using the new tools, we're all gonna be looking for a handy way to get ahold of those apps.
Graduation is about accomplishment, but it's also about potential. So congrats to Java and the whole team (including Jeet, my charm school buddy Octavian, Eric, and the JavaOne peeps: Ash, Lizzi, Kim, Jen, Heidi...). And here's looking forward to seeing Java everywhere. The potential is unlimited!
I'm not green around the gills or even green with envy. I'm feeling Eco-Green! Today Sun was named to the Uptime Institute's Global Green 100 list. For three great green reasons:
Which means next year I expect to see our customer names on the Global Green 100 list too.
It was close to lunchtime when my iphone buzzed with the SMS: “Want some FUD?” I had to laugh; while my teenagers are specialists in the new lingo – this errant 'Fear, Uncertainty, Doubt' message from a co-worker was actually an abbreviation for "food".
But seeing FUD on my phone's screen reminded me of the months before Y2K – I was working in IT for a telco, and we were feverishly updating all our server equipment to ensure we wouldn't run into the dreaded short date format issues.
Scroll forward 9 years. Here we are, and IT shops are looking at their aging server and storage inventory – many acquired in '99 with Y2K budgets, many facing end-of-service-life, many not meeting current or projected performance demands, costing too much for power and cooling and taking up too much datacenter floorspace.
With the efficiency and consolidation options available today, it's easy to make the case that it's cheaper to move to a new server than stay on the old. So why does anyone hesitate in moving from their older systems? FUD – think of all the issues with moving to something new: painful learning curve, disruption, customized software, ISV apps. Will moving cause costly interruptions to business?
Sun offers two solutions to take the FUD out of datacenter upgrades:
Solaris 8 and 9 Containers are virtual environments for hosting Solaris 8 and 9 applications on a Solaris 10 box. They provide a Solaris 8 and 9 runtime environment with all the performance and quality improvements of the Solaris 10 OS (DTrace, ZFS, Solaris Resource Manager). Now you can upgrade hardware in one stage and your applications in another. Less pain, more time to plan. Containers are a "transition tool" to help port applications to Solaris 10 in comfortable stages (watch this great video with the great Joost Pronk in which he explains Solaris Containers).
And to go with our Containers we have our experts - Sun Professional Services. Our migration team analyzes your original Solaris 8 and Solaris 9 environments, creates a migration plan, and implements and tests solutions as stand-alone projects. Professional Services can easily test, implement and optimize future system and network architectures for our customers (like Barmer Ersatzkasse), while protecting their prior qualification efforts.
No worries. Sun, we take the FUD out of migration. Now if I could just get some lunch.
Check out the whitepaper and the webinar. And if you're really handling that much data in your MySQL database, you should consider an enterprise subscription plan for access to 7/24 expertise, knowledge and some additional tools that will help your database run better.
Yesterday I participated.:
ZFS Hybrid Storage Pools are storage stacks made from a mix of DRAM, Flash/SSD and SATA. ZFS manages this storage hierarchy as one transparent pool, optimizing it to leverage the best attributes of each device. This optimization means the best performance (at about 25% the cost of traditional storage) and the best energy efficiency possible. ZFS's optimizations yield 3.2 times faster read IOPS, 11% faster write IOPS and 2 times the raw capacity. ZFS not only optimizes for speed, it also constantly runs data integrity checks to prevent any data corruption. It's not only fast, it's good.
Storage Analytics: The 7000 class systems have a browser user interface (BUI) that radically simplifies administration tasks like configuration, maintenance (including hardware), checking shares (the 7000 line exports file systems as shares) and status (current usage of CPU, memory, storage, network, services, hardware, CIFS, NDMP, NFSv3 and v4, and iSCSI - it's pretty comprehensive and all on one page!) and, most wonderfully, DTrace analytics. In the storage world, robust analytics on workloads in production just haven't existed. Now an administrator is able to look at a problem in real time - all while systems continue running in production. Storage Analytics uses drill-down analysis - checking the higher level statistics first and then going into finer detail based on previous findings. So, for example, things are moving along smoothly and suddenly performance is bad. With Storage Analytics you can now ask: How many IOPS is the system doing? Which clients are causing a spike in IOPS? Let's say it's the CIFS protocol causing the problem; from that data point you'll then drill down and ask: Which Windows client is going crazy? Is it doing more reads or writes? Which file is it reading or writing to? Before, you would have been stopped at the second question. Now life is good. An administrator can quickly identify and diagnose system performance issues, and debug storage and network problems. Find it quick and fix it quick without shutting anything down. Pretty amazing. So far ahead of anything else available, you might even call it disruptive.
Sun doesn't stop at great open architecture, open storage appliances, revolutionary features like ZFS Hybrid Storage Pools, and get-it-no-where-else Storage Analytics. Sun follows up the 7000 class systems with great services. Our Professional Services is ready to help your storage migration with our Sun Unified Storage Data Migration Service. Sun's experts will migrate your storage systems quickly and securely saving you time and bringing you the full benefits of all the 7000 series features...
This week I watched with interest India's launch of their first lunar orbiter, the Chandrayaan-1. My favorite part of any launch is watching Ground Control go from absolute, deadly-serious silence to uncontrolled, jumping joy when their rocket leaves the tower and earth's atmosphere. The success of the mission is down to the knowledge and expertise of this team on the ground. They may never be famous or fly into outer space but without their collective know-how and experience the Chandrayaan-1 would not be a reality.
I was thinking how similar this is to what happens with our Professional Services team. They've taken our leading datacenter technologies like the Solaris 10 OS, LDOMs, and CoolThreads, with our over 25 years of expertise in datacenter strategy, design and build to create Sun's Datacenter Efficiency Practice.
This is because we've found our customers facing a space, power and cooling crunch - not enough floorspace for their expanding datacenters, not enough throughput/power to meet current and near-future performance demands, and utility costs and cooling costs sometimes exceeding the cost of server acquisition. And while many companies faced the same types of datacenter problems, we knew that the solutions need to be tuned to each company's unique business and IT requirements. So we start with Datacenter Strategy Consulting to review our customer's datacenter floorspace, cooling facilities, power requirements, hardware and software, network, and security needs. We then can recommend retrofitting and optimization of current datacenter, or a Sun Modular Datacenter (the always cool Project Blackbox) or building a new facility (like we did, check out this video about our own energy-efficient datacenter in Santa Clara).
And once you have an expert datacenter strategy, you need expert datacenter design. Sun uses a modular or "pod" design that groups racks having the same requirements. Pods create a standard within the datacenter that make the design repeatable and scalable for future growth. We design all our datacenters, whether retrofitted, modular or a new build-out, with energy-efficient equipment and technologies, and green building design concepts. Datacenter Build also means installation and configuration of equipment and readiness services. At its completion your datacenter maximizes space utilization, maximizes energy-efficiency, and minimizes costs.
Sun's Datacenter Efficiency Practice - think of us as the Mission Control to your successful datacenter launch. This is the rocket science of data centers.
It's October... it's the postseason... and Sun's new T5440 Server gets me thinking about the Red Sox. Bit of a stretch? Not at all. Think back to last Monday's ALDS game. The rookie - the newest guy on the team - Jed Lowrie - brought in the winning run against the Los Angeles Angels to win the game and the first-round playoff series. Same thing with the T5440 Server - Sun's newest server - paving an entirely new way in the industry, setting a new all-time bar, the "way of the future" for servers.
What does all this get you? Only the highest throughput (up to 4 times higher performance) in the smallest space (a 4 RU chassis) with the lowest power requirements (2 times higher performance per Watt) in the industry. What else? You get a system on a chip – integrated directly on the processor: networking, security and PCI-Express I/O. Built-in, no-cost LDOMs and Solaris Containers virtualization technologies to consolidate workloads. The industry's most open platform built on open source technologies and open standards. You get breakthrough performance, eco-efficiency and cost savings. If I weren't superstitious, I'd say it was like winning the series. But I'll wait a few weeks for that.
Now our favorite rookie Lowrie wasn't on the diamond alone Monday night. He had the Red Sox's experienced veterans Jason Varitek, Kevin Youkilis, Tim Wakefield and Big Papi right alongside him; he's part of an amazing team.
Just like the T5440 Server - part of a great team too. It has the extensive experience of Sun's award-winning Services on its side. Sun's installation, support, training, professional and managed services allow customers to get the most from their T5440 Server. Sun's Professional Services can help with migrating applications and optimizing energy usage, virtualization and performance. Sun's Managed Services give expert help on the day-to-day operational tasks of your IT infrastructure reducing down-time and improving business efficiency and service levels.
There's a live chat taking place with Jonathan Schwartz, John Fowler, EVP Systems, Masood Heydari, VP SPARC Volume Systems, and Jim McHugh, VP Solaris, on Monday October 13th at 10am PT - to register, go to sun.com/launch. You can see a recent video on the launch at This is Something and can hear the webcast replay, download whitepapers or get more info at sun.com/launch. Finally, to see how the T5440 will perform in your environment with your apps, you can try it out for FREE for 60 days WITH FULL TECH SUPPORT. And you can then buy it at 40% off. Visit Sun's Try and Buy for all the details.
Did you know that? That is, did you know it doesn't matter who's on top when it comes to xVM virtualization? That's a line heard from an engineer having a discussion with an industry analyst in our Solutions Center during our xVM launch last week, while they stood in front of an xVM server demo station. xVM server runs Microsoft, Red Hat, and Solaris Operating Systems. And xVM VirtualBox runs practically any x86-based OS. So no worries about where your application runs; we've got you covered. Check out this conversation on xVM.
We've also got you covered if you need help with your virtualization environment. We're ready to help with support, managed, and professional services for xVM - across the whole lifecycle - assessment (know what you need?), architecture, migration, implementation, management (want an experienced partner there every day?), and support...
Really, it doesn't matter who's on top when it comes to Sun xVM. xVM delivers the reliable, scalable, virtualization hypervisor architecture - the foundation upon which you can build everything else. And integrated management for your virtualized and physical environments. Which is why - when it comes to virtualization - although it really doesn't matter who's on top, it really does matter who's on the bottom. Make sure it's xVM.
That's the thing about being in service. You have to anticipate your customer's needs; you have to put yourself in their shoes (or state of hunger); and you can't always expect much in return (yup, I did forget to pay her back... oops).
Surrounding Sun's product innovation with Service innovation to solve our customer's key challenges. That's what we do in Sun services. Server, storage, and software installation, configuration and support. Helping our customers assess, architect, implement, and optimize their IT solutions, in a heterogeneous world. Managing our customer's infrastructure for them. Learning Services to help teach about how our products and services work within our customer's network infrastructures. We also offer Sun Financial Services to provide financing options for Sun products and services (maybe I can get a loan to finance my NY sandwich).
That's Sun Service... with a smile.
Once in a lifetime we meet someone that understands us so well, that manages us in a way we never comprehend, that makes life fun. Amazing Shelley is such a person. Thank you Shelley for all you do! Happy Admin Professionals Day.
AmyO
My Sun simulation team was great - we leveraged each other's skills and leaned on the team as we dug our heels in and stuck to our guns on open sourcing all our software. By simulation year 3, we emerged victorious in the market with the most customers and the largest community. And because we had used our [albeit fake] dollars to invest in our products, channel, community, and brand, we were positioned to keep winning in the market for years and years to come. I believe.
Life is tougher at Fenway Park - no simulation here. Manny was ejected during his first at-bat (note to self - if you're not in the game, you can have no positive impact), and Milton Bradley (why do I think of Monopoly every time he comes to bat?) hit a homer that drove in 3 to put the Rangers ahead by five. I stewed and steamed and sunned, and thankfully by the end of the eighth we were ahead 6-5. I believe.
It takes a team to win - that was clear this week. Sure Manny needed Big Papi, Dustin, and Jacoby. I needed Iain, Denis, Eric, Colin, and Octavian to keep our simulated company together. And Sun needs a bunch of other great people that I had the privilege to spend time with this week: Pammy (your Sox hat is in the mail), Jeff, Bob, Lynn, Graham, Cheri, Mark, Russ, Tony, Bev (17 years catching up!), Irene, Suchitra, Andy, Keith, Lorraine, Pavel, Ivonne, Terry, Eric, Connie (fun bus ride), Dan, Emma, Mike, Fritz, Meg, Dan (we're neighbors!), Karen, Georgios, Sivaram (thanks for the advice!), Teresa, Suzanne, Roger, Andy, and so many more. Thanks for the great learnings and all the fun!
I had a mini vacation in Vegas this weekend - took my sweet yellow Mini Cooper to meet its community. Met a yellow twin and lots of minis making personalized statements. Even a mini-meetup at the
The thing about SAM and Q is that their attributes have been required for the medical, military, and oil&gas industries for over a decade now, which is why they are so widely deployed in those market sectors. But the need to store and retrieve large volumes of data quickly and cost-effectively is no longer a requirement limited to those markets. Heck I've got a terabyte of data at home - think about what's going on in media & entertainment, manufacturing, financial services, education...
SAM-QFS was originally developed by LSC Inc, which was purchased by Sun in 2001. I had the opportunity to work closely with the SAM-Q team when they first joined Sun: back then there was Harriet (who has had a fascinating career in high tech), the Matthews brothers and the Intern, Bob, Ted, Tom, Harold, John, Margaret, Clay, Robert, Dave, ... who did I forget? I have lots of crazy Eagan Minnesota memories with the team - like the last slot in the soda machine, the oven at the side of the road, the bratwurst barbeques. The first time I went to Minnesota to meet with them - as I was pulling out of the airport -the Hertz guy said to me "Ya ready for the snow?" Two feet by the morning! Boy was it cold, and that was in the spring! And I have warmer memories of meeting with their customers - like Robert Cecil, PhD, Cleveland Clinic’s network director. Dr Bob gave us a great tour through radiology where SAM-Q was being used to show that a tumor was shrinking, through surgery where SAM-Q provides patient data right in the operating room, and through the data center with huge tape libraries, where SAM-Q was helping to increase the quality of patient care while decreasing costs. And I remember Dr Bob speaking on a panel at a storage conference - when asked about the importance of data availability, he quietly stated that access to data is the difference between life and death. No one can express the need for data availability and integrity better than Dr Bob.
Open sourcing SAM-Q is a key step for Sun and the developer community. It's now easy for people facing large data management challenges to try something that has worked for years in large scale, mission-critical deployments. And in case you're wondering how a business can make money while making such a key asset freely available, remember that SAM-Q runs on servers, needs to be supported in production environments, stores data on disk and tape, ....
MySpace, all over the blogosphere and they host their own fan community on their website!
If you're wondering why I just reposted this entry (St Patty's Day really was last week and I haven't been at the bar this whole time), I fumble-fingered my blog and managed to unpublish this entry. Sorry about that!
Automatic vehicle spawn and rotation
Hi all,
I am a phd student in computer vision and machine learning. Our group is collecting car images for research.
The data should be like this: images of a stationary car from different viewpoints (maybe 20 viewpoints) are a group of data, which means within one group the car is the same and only the viewpoint changes. We need tens of thousands of groups captured for different cars in different scenes. Obviously, it cannot be done manually.
It is nice that Script Hook can spawn cars and change scenes. Is it possible to customize the script so that it can automatically spawn a car, change the scene, rotate the view in 360 degrees and this cycle repeats?
We appreciate any suggestions. Thank you for your reply!
@hnwxc00 You can either move the car, or move the camera, or both. If it was me, I would be just moving the camera.
The problem comes from the fact that each vehicle is a different size, so one camera view might work for a small compact but that same view would be too close for a larger vehicle. That would mean you need to write code that gets the size of the vehicle and then modifies the camera position based on the scale of that model. Or you could possibly place the camera relative to vehicle bone positions (like wheels, doors etc...), which would probably work a bit easier for your needs.
The rest of it would be fairly straight forward, have a list of vehicle names, go through the list one-by-one and spawn each one in turn. You would have the same thing for a list of camera positions, that you then go through one-by-one. Once you have done all the camera positions, you move onto the next vehicle in the list and so on, so forth. The spawning code I have already got as I am working on a data-collection mod that does exactly that. The camera code I haven't got though...
There's probably a fair bit of prep-work involved getting the desired camera positions but once you have them, the rest isn't too bad.
@hnwxc00 The next few comments will contain the code and instructions for reading a sequence of vehicle names from a file and spawning them in turn, this might get you started.
Copy the following text into a file called spawn-names.txt and place it in your scripts folder.
adder airbus airtug akuma alpha ambulance annihilator asea asea2 asterope avarus bagger baller baller2 baller3 baller4 baller5 baller6 banshee banshee2 barracks barracks2 barracks3 bati bati2 benson besra bestiagts bf400 bfinjection
Place the following code into a file called SpawnModeTest.cs and also place it in your scripts folder. Ignore the wonky formatting, it will be fine when you paste it into a document.
using System;
using System.IO;
using System.Windows.Forms;
using GTA;
using GTA.Math;
using GTA.Native;

namespace SpawnModeTest
{
    public class cSpawnModeTest : Script
    {
        private int CurrentGameTime;
        private int ElapsedGameTime;
        private int LastGameTime;
        private Ped PlayerPed;
        private bool SpawnMode = false;
        private Vector3 SpawnLocation;
        private int SpawnTimer;
        private int SpawnCounter;
        private int SpawnMax;
        private string[] SpawnNames;

        public cSpawnModeTest()
        {
            this.Tick += onTick;
            this.KeyUp += OnKeyUp;
            Interval = 0;
            Initialise();
        }

        // Initialise the mod here
        private void Initialise()
        {
            UI.Notify("~g~Sequential Spawn Test");
            PlayerPed = Game.Player.Character;

            // Get the names of the vehicles from the text file
            SpawnNames = File.ReadAllLines("scripts\\spawn-names.txt");

            // Set the end counter to be equal to the number of vehicles in the text file
            SpawnMax = SpawnNames.Length;
            UI.Notify(SpawnNames.Length + " names read from file.");
        }

        // OnTick event fires every frame
        private void onTick(object sender, EventArgs e)
        {
            CurrentGameTime = Game.GameTime;
            ElapsedGameTime = CurrentGameTime - LastGameTime;
            LastGameTime = CurrentGameTime;

            // If we are in spawn mode
            if (SpawnMode)
            {
                DoSpawn(ElapsedGameTime);
            }
            else
            {
                // Get the spawn location relative to the player
                SpawnLocation = PlayerPed.GetOffsetInWorldCoords(new Vector3(0, 10, 0));

                // Draws a marker on the ground to show where the vehicles will spawn
                Function.Call(Hash.DRAW_MARKER, 0, SpawnLocation.X, SpawnLocation.Y, SpawnLocation.Z,
                              0f, 0f, 0f, 0f, 0f, 0f, 0.75f, 0.75f, 0.75f,
                              255, 255, 0, 255, false, false, 2, false, false, false);
            }
        }

        private void DoSpawn(int _elapsedGameTime)
        {
            // decrement the timer by the number of milliseconds that have elapsed
            SpawnTimer -= _elapsedGameTime;
            if (SpawnTimer < 0)
            {
                // Check the spawn location for vehicles
                Vehicle NearbyVehicle = World.GetClosestVehicle(SpawnLocation, 10);

                // If there is a vehicle nearby...
                if (NearbyVehicle != null)
                {
                    // Delete that vehicle to make space for the new one
                    NearbyVehicle.MarkAsNoLongerNeeded();
                    NearbyVehicle.Delete();
                }

                // Get the spawn name from the list
                string _spawnName = SpawnNames[SpawnCounter];

                // Create the new vehicle and put it on the ground
                Vehicle SpawnVehicle = World.CreateVehicle(_spawnName, SpawnLocation, 0);
                SpawnVehicle.PlaceOnGround();

                // Just a notification of what is being spawned and the counter number
                UI.Notify(string.Format("[{0:00}] Spawned " + SpawnVehicle.FriendlyName, SpawnCounter.ToString()));

                // Reset the spawn timer
                SpawnTimer = 3000;

                // move to the next spawn name
                SpawnCounter++;
                if (SpawnCounter == SpawnMax)
                {
                    UI.Notify("All Vehicles Spawned");
                    SpawnMode = false;
                }
            }
        }

        private void OnKeyUp(object sender, KeyEventArgs e)
        {
            if (e.KeyCode == Keys.F10)
            {
                // Toggle the mode flag
                SpawnMode = !SpawnMode;
                if (SpawnMode)
                {
                    // Set the SpawnTimer to 0 so that the first vehicle spawns straight away.
                    SpawnTimer = 0;

                    // Set the counter to the first vehicle
                    SpawnCounter = 0;
                }
            }
        }
    }
}
Press F10 to toggle the spawn mode on and off. When it is off, you will see a yellow marker on the ground in front of you, this is where the vehicles will spawn.
SpawnTimer is set to 3000 milliseconds (3 seconds) and it will go once through the list and then stop.
Well as it's been 4 days since this was posted and the OP has done a runner, I guess we can call this time wasted... thanks OP.
I wonder if the words "Chinese" and "University" have anything to do with this...
I'll post a request soon @LeeC2202
Promise I won't hit and run
.
.
.
.
.
Or kiss and tell for that matter....
@LeeC2202 said in Automatic vehicle spawn and rotation:
I wonder if the words "Chinese" and "University" have anything to do with this...
What did you mean? Anything to do with what?
@Akila_Reigns A while back rappo mentioned the "first post" moderation thing to combat the Chinese university spammers (I think a few people have at some point, including myself) and I looked at the username (pretty random collection of letters and a numeric suffix of 00) and wondered if this might be a copy & paste from somewhere else to bypass that.
I can't think of many reasons why someone would post such a technical and specific request and not even bother to return to the site for four days. It just all seemed a bit odd.
Or maybe I really am becoming too [sceptical/cynical/suspicious] (delete as appropriate) for my own good.
@LeeC2202 I understand now. Thanks for the explanation.
This forum could seriously use a class-based user system imho. Probably one of the reasons it's already on its decline
@ReNNie In which sense do you mean class-based, do you mean software wise? Or do you mean class-based as in a tiered structure where users are promoted etc...?
The latter, although I doubt @rappo will see it like that or ever want to go that way...
To me the unlimited and unrestricted downloads offer a safe haven for hit and run kids and lack of respect for the work done by others.
If you want to connected a group of commited users into a community imo you'll have to separate the wheat from the chaff.
We had a little forum like this in 2012 & 2013 run by two Irish blokes in IV days: gta-mad.com. With supporting Facebook and Twitter.
Unfortunately the site literally died on us when they both moved to NYC and stopped all their admin activities.
So, tiered something like this (mods and admin hand out the privileges) with differences in privileges on posting, downloads, access to restricted forum sections (eg discussions on site functionality, requests for code, cars, what have you):
- User: ability to post in certain sections after 1 week and after an introductory post, not able to make requests;
- Dweller: # posts above 100 and/or reputation above 50;
- Contributor: # posts above 200 and/or reputation above 100 OR written tutorials and showing a helping hand in problem-solving for users;
- Creator: uploaded multiple contributions to the download-section with a certain quality, has access to restricted forum areas and gets some more perks;
- ViP: donated to the server costs and/or patreon to some creators, instant access to restricted forum areas, gets the all-wanted yellow 'featured' star displayed at his nickname.
Not too many around (and still active) from that era @Carrythxd @Neophyte-Industries @PacketOverload @tall70 (?) @jackrobot @tiagoesanto, etc.
@ReNNie Hmmm, can't say as I like that kind of system either. I don't think what you do on a site should earn you anything above what other users can do. I think any user should be able to ask or contribute the same, I think hit and run posters are just a small minority we have to deal with.
Even though this person has disappeared, I still ended up with a mod I learned things from and one that I actually like.
It's annoying when people do this but hey, that's people for ya.
I do my own fair share of being annoying.
@ReNNie Completely off-topic (sorry :P) but I really miss GTA-Mad, it was one of the best websites I've ever been a part of. Every now and then I hope that it'll come back some day but it most likely won't
Hi, I was wondering if anyone could help with this problem I am having. My program is supposed to read in character frequencies from a file, including spaces and '\n'. My problem is that it is reading an extra character that I can't figure out.
I had input a text file containing aab, with no spaces and no new lines. It would output as:
The Character Frequencies are
1
a 2
b 1
which throws everything off.
This program will make a huffman code out of the frequencies, but it messes up everything cause of the unknown space and 1. Here is what I've been using for my read input. If anyone could take a look I would really appreciate it.
#include <cstdio>
#include <fstream>
#include <iostream>
#include <cstring>

using namespace std;

void phase1(int array[], ifstream& in)
{
    int c = 0;
    for(int i = 0; i < 255; i++)
    {
        array[i] = 0;
    }
    while(c != EOF)
    {
        c = in.get();
        c = (int)c;
        array[c] = array[c]+1;
    }
}

//phase2 prints the frequency of each character
//read from a text file
void phase2(int array[])
{
    int n = 0;
    cout << "The character frequencies are: " << '\n';
    while(n < 255)
    {
        if(array[n] != 0)
        {
            cout << (char)n;
            cout << " ";
            cout << array[n];
            cout << "\n";
        }
        n = n + 1;
    }
}

int main(int argc, char* argv[])
{
    ifstream in(argv[1]);
    int array[256];
    phase1(array, in);
    phase2(array);
    return 0;
}
Next up, we're going to start using our example numbers data. We want to create image arrays out of our numbers data, saving them, so that we can reference them later for pattern recognition.
For this, we're going to create a "createExamples" function:
from PIL import Image
import numpy as np

def createExamples():
    numberArrayExamples = open('numArEx.txt','a')
    numbersWeHave = range(1,10)
    for eachNum in numbersWeHave:
        #print eachNum
        for furtherNum in numbersWeHave:
            # you could also literally add it *.1 and have it create
            # an actual float, but, since in the end we are going
            # to use it as a string, this way will work.
            print(str(eachNum)+'.'+str(furtherNum))
            imgFilePath = 'images/numbers/'+str(eachNum)+'.'+str(furtherNum)+'.png'
            ei = Image.open(imgFilePath)
            eiar = np.array(ei)
            eiarl = str(eiar.tolist())
            print(eiarl)
            lineToWrite = str(eachNum)+'::'+eiarl+'\n'
            numberArrayExamples.write(lineToWrite)
I left a few comments in there, but you may also want to watch the video if you're finding yourself confused on this function. The purpose of this function is to literally just append the image's array to the file so we can reference it later.
In this, we're just using a flat file as our database. This is fine for smaller data-sets, but you may want to look into working with databases, either SQLite or MySQL in the future.
SQLite is a "light" SQL database engine. Its database also lives in a single file, but it is going to be a bit more efficient than using something like a .txt file.
MySQL is probably the most popular database engine and API for working with SQL databases.
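If you want to see what that upgrade path looks like, here is a minimal sketch of the same label-plus-array store using Python's built-in sqlite3 module. The table and column names here are made up for illustration, and the pixel array is hard-coded so the sketch is self-contained:

```python
import sqlite3

# In-memory database for illustration; pass a filename like
# "numArEx.db" instead of ":memory:" to persist to disk.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE examples (label TEXT, pixels TEXT)")

# Store each example as its label plus the stringified pixel array,
# mirroring the label::array lines of the flat-file approach.
conn.execute("INSERT INTO examples VALUES (?, ?)",
             ("8", "[[0, 255], [255, 0]]"))
conn.commit()

rows = conn.execute("SELECT label, pixels FROM examples").fetchall()
print(rows)  # [('8', '[[0, 255], [255, 0]]')]
```

The flat file is still fine for this tutorial; this just shows how little changes if the data outgrows it.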
Running the function createExamples() should now create the numArEx.txt file and populate it with number arrays. With these, we can then take new numbers, threshold them if necessary, then compare the current number array with our known number patterns, making an educated guess about what number we're looking at.
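The comparison side works by reading the file back and undoing the str() we applied when saving. Here is a rough sketch of that round trip, assuming the label::array line format createExamples() writes; the line is hard-coded here (rather than read from numArEx.txt) so the sketch is self-contained, and the real matching logic comes later in the series:

```python
import ast

# One stored line in the same label::array format createExamples() writes.
saved_line = "3::[[0, 0], [255, 255]]"

label, array_text = saved_line.split("::", 1)
known_array = ast.literal_eval(array_text)  # back from a string to a nested list

# A "new" image array we want to identify; here it happens to
# match the stored example exactly.
new_array = [[0, 0], [255, 255]]

if new_array == known_array:
    print("This looks like a", label)
```

ast.literal_eval is a safe way to turn the stringified list back into a real list, which is why saving str(eiar.tolist()) works as a poor man's serialization.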
I am a newbie to Python as well as the programming world. After a bit of research for the past 2 days am now able to successfully SSH into the Cisco router and execute set of commands. However my original goal is to print the resultant output to a text file. Checked lots of posts by forum members which helped me in constructing the code, but I couldn't get the result printed on the text file. Please help.
Here is my code:
import paramiko
import sys
import os
dssh = paramiko.SSHClient()
dssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
dssh.connect('10.0.0.1', username='cisco', password='cisco')
stdin, stdout, stderr = dssh.exec_command('sh ip ssh')
print stdout.read()
f = open('output.txt', 'a')
f.write(stdout.read())
f.close()
dssh.close()
stdout.read() will read the content and move the file pointer forward. As such, subsequent calls will not be able to read the content again. So if you want to print the content and write it to a file, you should store it in a variable first and then print and write that.
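You can see this read-once behaviour with any file-like object, no router needed; this sketch uses io.BytesIO in place of the live SSH channel:

```python
import io

# Stand-in for the paramiko stdout channel file.
stream = io.BytesIO(b"SSH Enabled - version 1.99\n")

first = stream.read()   # consumes everything
second = stream.read()  # pointer is at the end, so nothing is left

print(first)   # b'SSH Enabled - version 1.99\n'
print(second)  # b''

# So: read once into a variable, then print AND write that variable.
output = first
```

In your script that means replacing the two read() calls with one: output = stdout.read(), then print it and f.write() the same variable.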
Instead of mentioning the IP address directly on the code, is it possible for me to fetch it from list of IP addresses (mentioned line by line) in a text file?
You can read lines from a file like this:
with open('filename') as f:
    for line in f:
        # Each line will be iterated; so you could call a function here
        # that does the connection via SSH
        print(line)
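One thing to watch: each line keeps its trailing newline, so strip it before using it as an address. A sketch of collecting the IPs first (the list stands in for an ips.txt file so this is self-contained; the commented connect call is the same paramiko call from the question):

```python
# Stand-in for the contents of an ips.txt file, one address per line.
raw_lines = ["10.0.0.1\n", "10.0.0.2\n", "\n", "10.0.0.3"]

ips = []
for line in raw_lines:
    ip = line.strip()  # drop the trailing newline/whitespace
    if ip:             # skip blank lines
        ips.append(ip)

print(ips)  # ['10.0.0.1', '10.0.0.2', '10.0.0.3']

# Each address then replaces the hard-coded one:
# for ip in ips:
#     dssh.connect(ip, username='cisco', password='cisco')
#     ...
```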
minitest/{test,spec,mock,benchmark}
FEATURES/PROBLEMS:
Incredibly small and fast runner, but no bells and whistles.
Written by squishy human beings. Software can never be perfect. We will all eventually die."
For matchers support check out:
Benchmarks
Add benchmarks to your tests.
# optionally run benchmarks, good for CI-only work!
require "minitest/benchmark" if ENV["BENCH"]

class TestMeme < Minitest::Benchmark
  # Override self.bench_range or default range is [1, 10, 100, 1_000, 10_000]
  def bench_my_algorithm
    assert_performance_linear 0.9999 do |n| # n is a range value
      @obj.my_algorithm(n)
    end
  end
end
Or add them to your specs. If you make benchmarks optional, you'll need
to wrap your benchmarks in a conditional since the methods won't be
defined. In minitest 5, the describe name needs to match
/Bench(mark)?$/.
describe "Meme Benchmark" do
  if ENV["BENCH"] then
    bench_performance_linear "my_algorithm", 0.9999 do |n|
      100.times do
        @obj.my_algorithm(n)
      end
    end
  end
end
Mocks and stubs are defined using terminology by Fowler & Meszaros at:
“Mocks are pre-programmed with expectations which form a specification of the calls they are expected to receive. They can throw an exception if they receive a call they don't expect and are checked during verification to ensure they got all the calls they were expecting.”
class MemeAsker
  def initialize(meme)
    @meme = meme
  end

  def ask(question)
    method = question.tr(" ", "_") + "?"
    @meme.__send__(method)
  end
end

require "minitest/autorun"

describe MemeAsker, :ask do
  describe "when passed an unpunctuated question" do
    it "should invoke the appropriate predicate method on the meme" do
      @meme = Minitest::Mock.new
      @meme_asker = MemeAsker.new @meme
      @meme.expect :will_it_blend?, :return_value

      @meme_asker.ask "will it blend"

      @meme.verify
    end
  end
end
**Multi-threading and Mocks**
Minitest mocks do not support multi-threading. If it works, fine; if it doesn't, you can use regular ruby patterns and facilities like local variables. Here's an example of asserting that code inside a thread is run:
def test_called_inside_thread
  called = false
  pr = Proc.new { called = true }
  thread = Thread.new(&pr)
  thread.join
  assert called, "proc not called"
end
Stubs
Mocks and stubs are defined using terminology by Fowler & Meszaros at:
“Stubs provide canned answers to calls made during the test”.
Minitest's stub method overrides a single method for the duration of the block.
def test_stale_eh obj_under_test = Something.new refute obj_under_test.stale? Time.stub :now, Time.at(0) do # stub goes away once the block is done assert obj_under_test.stale? end end
A note on stubbing: In order to stub a method, the method must actually exist prior to stubbing. Use a singleton method to create a new non-existing method:
def obj_under_test.fake_method ... end
Running Your Tests
Ideally, you'll use a rake task to run your tests, either piecemeal or all at once. Both rake and rails ship with rake tasks for running your tests. BUT! You don't have to:
% ruby -Ilib:test test/minitest/test_minitest_test.rb Run options: --seed 37685 # Running: ...................................................................... (etc) Finished in 0.107130s, 1446.8403 runs/s, 2959.0217 assertions/s. 155 runs, 317 assertions, 0 failures, 0 errors, 0 skips
There are runtime options available, both from minitest itself, and also
provided via plugins. To see them, simply run with
%. Known extensions: pride, autotest -p, --pride Pride. Show your testing pride! -a, --autotest Connect to autotest server.
Writing Extensions
To define a plugin, add a file named minitest/XXX_plugin.rb to your
project/gem. That file must be discoverable via ruby's LOAD_PATH (via
rubygems or otherwise). Minitest will find and require that file using
Gem.find_files. It will then try to call
plugin_XXX_init
during startup. The option processor will also try to call
plugin_XXX_options passing the OptionParser instance and the
current options hash. This lets you register your own command-line options.
Here's a totally bogus example:
# minitest/bogus_plugin.rb: module Minitest def self.(opts, ) opts.on "--myci", "Report results to my CI" do [:myci] = true [:myci_addr] = get_myci_addr [:myci_port] = get_myci_port end end def self.plugin_bogus_init() self.reporter << MyCI.new() if [:myci] end end
Adding custom reporters
Minitest uses composite reporter to output test results using multiple
reporter instances. You can add new reporters to the composite during the
init_plugins phase. As we saw in
plugin_bogus_init above, you
simply add your reporter instance to the composite via
<<.
AbstractReporter defines the API for reporters. You may
subclass it and override any method you want to achieve your desired
behavior.
- start
Called when the run has started.
- record
Called for each result, passed or otherwise.
Called at the end of the run.
- passed?
Called to see if you detected any problems.
Using our example above, here is how we might implement MyCI:
# minitest/bogus_plugin.rb module Minitest class MyCI < AbstractReporter attr_accessor :results, :addr, :port def initialize self.results = [] self.addr = [:myci_addr] self.port = [:myci_port] end def record result self.results << result end def report CI.connect(addr, port).send_results self.results end end # code from above... end
How to test SimpleDelegates?
The following implementation and test:
class Worker < SimpleDelegator def work end end describe Worker do before do @worker = Worker.new(Object.new) end it "must respond to work" do @worker.must_respond_to :work end end
outputs a failure:
1) Failure: Worker#test_0001_must respond to work [bug11.rb:16]: Expected #<Object:0x007f9e7184f0a0> (Object) to respond to #work.
Worker is a SimpleDelegate which in 1.9+ is a subclass of BasicObject.
Expectations are put on Object (one level down) so the Worker
(SimpleDelegate) hits
method_missing and delegates down to the
Object.new instance. That object doesn't respond to work
so the test fails.
You can bypass
SimpleDelegate#method_missing by extending the
worker with
Minitest::Expectations. You can either do that in
your setup at the instance level, like:
before do @worker = Worker.new(Object.new) @worker.extend Minitest::Expectations end
or you can extend the Worker class (within the test file!), like:
class Worker include ::Minitest::Expectations end
How to share code across test classes?
Use a module. That's exactly what they're for:
module UsefulStuff def useful_method # ... end end describe Blah do include UsefulStuff def test_whatever # useful_method available here end end
Remember,
describe simply creates test classes. It's just
ruby at the end of the day and all your normal Good Ruby Rules (tm) apply.
If you want to extend your test using setup/teardown via a module, just
make sure you ALWAYS call super. before/after automatically call super for
you, so make sure you don't do it twice.
How to run code before a group of tests?
Use a constant with begin…end like this:
describe Blah do SETUP = begin # ... this runs once when describe Blah starts end # ... end
This can be useful for expensive initializations or sharing state. Remember, this is just ruby code, so you need to make sure this technique and sharing state doesn't interfere with your tests.
Why am I seeing
uninitialized constant MiniTest::Test (NameError)?
Are you running the test with Bundler (e.g. via
bundle exec )?
If so, in order to require minitest, you must first add the
gem
'minitest' to your Gemfile and run
bundle. Once
it's installed, you should be able to require minitest and run your
tests.
Prominent Projects using Minitest:
arel
journey
mime-types
nokogiri
rails (active_support et al)
rake
rdoc
…and of course, everything from seattle.rb…
Developing Minitest:
Minitest's own tests require UTF-8 external encoding.
This is a common problem in Windows, where the default external Encoding is often CP850, but can affect any platform. Minitest can run test suites using any Encoding, but to run Minitest's own tests you must have a default external Encoding of UTF-8.
If your encoding is wrong, you'll see errors like:
--- expected +++ actual @@ -1,2 +1,3 @@ # encoding: UTF-8 -"Expected /\\w+/ to not match \"blah blah blah\"." +"Expected /\\w+/ to not match # encoding: UTF-8 +\"blah blah blah\"."
To check your current encoding, run:
ruby -e 'puts Encoding.default_external'
If your output is something other than UTF-8, you can set the RUBYOPTS env variable to a value of '-Eutf-8'. Something like:
RUBYOPT='-Eutf-8' ruby -e 'puts Encoding.default_external'
Check your OS/shell documentation for the precise syntax (the above will not work on a basic Windows CMD prompt, look for the SET command). Once you've got it successfully outputing UTF-8, use the same setting when running rake in Minitest.
Minitest's own tests require GNU (or similar) diff.
This is also a problem primarily affecting Windows developers. PowerShell has a command called diff, but it is not suitable for use with Minitest.
If you see failures like either of these, you are probably missing diff tool:
4) Failure: TestMinitestUnitTestCase#test_assert_equal_different_long [D:/ruby/seattlerb/minitest/test/minitest/test_minitest_test.rb:936]: Expected: "--- expected\n+++ actual\[email protected]@ -1 +1 @@\n-\"hahahahahahahahahahahahahahahahahahahaha\"\n+\"blahblahblahblahblahblahblahblahblahblah\"\n" Actual: "Expected: \"hahahahahahahahahahahahahahahahahahahaha\"\n Actual: \"blahblahblahblahblahblahblahblahblahblah\"" 5) Failure: TestMinitestUnitTestCase#test_assert_equal_different_collection_hash_hex_invisible [D:/ruby/seattlerb/minitest/test/minitest/test_minitest_test.rb:845]: Expected: "No visible difference in the Hash#inspect output.\nYou should look at the implementation of #== on Hash or its members.\n {1=>#<Object:0xXXXXXX>}" Actual: "Expected: {1=>#<Object:0x00000003ba0470>}\n Actual: {1=>#<Object:0x00000003ba0448>}"
If you use Cygwin or MSYS2 or similar there are packages that include a GNU diff for Windows. If you don't, you can download GNU diffutils from gnuwin32.sourceforge.net/packages/diffutils.htm (make sure to add it to your PATH).
You can make sure it's installed and path is configured properly with:
diff.exe -v
There are multiple lines of output, the first should be something like:
diff (GNU diffutils) 2.8.1
If you are using PowerShell make sure you run diff.exe, not just diff, which will invoke the PowerShell built in function.
Known Extensions:
- capybara_minitest_spec
Bridge between Capybara RSpec matchers and Minitest::Spec expectations (e.g.
page.must_have_content("Title")).
- color_pound_spec_reporter
Test names print Ruby Object types::Spec extensions for Rails and beyond.
- minitest-spec-rails
Drop in Minitest::Spec super and VCR.
-.
- pry-rescue
A pry plugin w/ minitest support. See pry-rescue/minitest.rb.
- rspec2minitest
Easily translate any RSpec matchers to Minitest assertions and expectations.
Unknown Extensions:
Authors… Please send me a pull request with a description of your minitest extension.
assay-minitest
detroit-minitest
em-minitest-spec
flexmock-minitest
guard-minitest
guard-minitest-decisiv
minitest-activemodel
minitest-ar-assertions
minitest-capybara-unit
minitest-colorer
minitest-deluxe
minitest-extra-assertions
minitest-rails-shoulda
minitest-spec
minitest-spec-should
minitest-sugar
spork-minitest
Minitest related goods
minitest/pride fabric:
REQUIREMENTS:
Ruby 1.8.7+. No magic is involved. I hope.
NOTE: 1.8 and 1.9 will be dropped in minitest 6+.
INSTALL:
sudo gem install minitest
On 1.9, you already have it. To get newer candy you can still install the gem, and then requiring “minitest/autorun” should automatically pull it in. If not, you'll need to do it yourself:
gem "minitest" # ensures you"re using the gem, and not the built-in MT require "minitest/autorun" # ... usual testing stuffs ...
DO NOTE: There is a serious problem with the way that ruby 1.9/2.0 packages their own gems. They install a gem specification file, but don't install the gem contents in the gem path. This messes up Gem.find_files and many other things (gem which, gem contents, etc).
Just install minitest as a gem for real and you'll be happier.. | https://www.rubydoc.info/gems/minitest/5.11.3 | CC-MAIN-2019-39 | refinedweb | 1,875 | 59.6 |
Help! I'm currently drafting SRFI-105, curly-infix-expressions: and "trying to please everyone" (ha!). Can you tell me if the most recent draft is something guile could live with? In particular, since SRFI-105 is a reader modification, some comments indicated a strong desire for a simple marker like #!fold-case and #!no-fold-case. In particular, it was strongly advocated that #!srfi-105 be that marker. Guile support for curly-infix-expressions is very important to me. Yet obviously guile has different semantics for #!, namely, #!...!#. Clearly #!srfi-105 could be handled by a special case, but could people live with that? I even have a notion for how "#!" could be implemented in a way that would consistently handle SRFI-22 (#! followed by space), guile's #!...!#, and things like #!fold-case, but I don't know if that would be ardently rejected or possibly accepted by guilers. The rationale (below) discusses this. Anyway, I'd like to know if the #!srfi-105 marker would be acceptable to guile developers, and if not, what alternatives would be suggested. Thanks. --- David A. Wheeler ======= Text from the rationale =========================== Why the marker #!srfi-105? We would like implementations to always have curly-infix enabled. However, some implementations may have other extensions that use {...}. We want a simple, standard way to identify code that uses curly-infix so that readers will switch to curly-infix if they need to switch. This marker was recommended during discussion of SRFI-105. After all, R6RS and R7RS (draft 6) already use #!fold-case and #!no-fold-case as special markers to control the reader. Using #!srfi-105, and srfi- should be the namespace for SRFIs., as well as a #! /usr/bin/env, but this is non-normative; an implementation could easily implement #! followed by space as an ignored line, and treat #! followed by / or . differently. 
Thus, implementations could trivially support simultaneously markers was attempted, this might confuse some systems into trying to run the program srfi-105. --- David A. Wheeler | https://lists.gnu.org/archive/html/guile-devel/2012-09/msg00017.html | CC-MAIN-2021-43 | refinedweb | 334 | 60.92 |
Swift Enumeration in Swift language automatically receive the same access level for individual cases of an enumeration. Consider for example to access the students name and marks secured in three subjects enumeration name is declared as student and the members present in enum class are name which belongs to string datatype, marks are represented as mark1, mark2 and mark3 of datatype Integer. To access either the student name or marks they have scored. Now, the switch case will print student name if that case block is executed otherwise it will print the marks secured by the student. If both condition fails the default block will be executed.
Access Control for SubClasses Swift allows the user to subclass any class that can be accessed in the current access context. A subclass cannot have a higher access level than its superclass. The user is restricted from writing a public subclass of an internal superclass. public class cricket { private func print() { println("Welcome to Swift Super Class") } }
internal class tennis: cricket
{
override internal func print() { println("Welcome to Swift Sub Class") } }
let cricinstance = cricket() cricinstance.print()
let tennisinstance = tennis() tennisinstance.print() When we run the above program using playground, we get the following result: Welcome to Swift Super Class Welcome to Swift Sub Class
Access Control for Constants, variables, properties and subscripts Swift constant, variable, or property cannot be defined as public than its type. It is not valid to write a public property with a private type. Similarly, a subscript cannot be more public than its index or return type. 225
...
Published on Nov 30, 2016
... | https://issuu.com/acevedolpruhtpjen/docs/quick_is_a_effective_and_user-frien/237 | CC-MAIN-2018-34 | refinedweb | 264 | 52.6 |
Our next decision theory post is going to be on how to rephrase hypothesis testing in terms of Bayesian decision theory. We already saw in our last statistical oddities post that
-values can cause some problems if you are not careful. This oddity makes the situation even worse. We’ll show that if you use a classical null hypothesis significance test (NHST) even at
and your experimental design is to check significance after each iteration of a sample, then as the sample size increases, you will falsely reject the hypothesis more and more.
I’ll reiterate that this is more of an experimental design flaw than a statistical problem, so a careful statistician will not run into the problem. On the other hand, lots of scientists are not careful statisticians and do make these mistakes. These mistakes don’t exist in the Bayesian framework (advertisement for the next post). I also want to reiterate that the oddity is not that you sometimes falsely reject hypotheses (this is obviously going to happen, since we are dealing with a degree of randomness). The oddity is that as the sample size grows, your false rejection rate will tend to 100% ! Usually people think that a higher sample size will protect them, but in this case it exacerbates the problem.
To avoid offending people, let’s assume you are a freshmen in college and you go to your very first physics lab. Of course, it will be to let a ball drop. You measure how long it takes to drop at various heights. You want to determine whether or not the acceleration due to gravity is really 9.8. You took a statistics class in high school, so you recall that you can run a NHST at the
level and impress your professor with this knowledge. Unfortunately, you haven’t quite grasped experimental methodology, so you rerun your NHST after each trial of dropping the ball.
When you see
you get excited because you can safely reject the hypothesis! This happens and you turn in a lab write-up claiming that with greater than
certainty the true acceleration due to gravity is NOT
. Let's make the nicest assumptions possible and see that it was still likely for you to reach that conclusion. Assume
exactly. Also, assume that your measurements are pretty good and hence form a normal distribution with mean
. I wrote the following code to simulate exactly that:
import random import numpy as np import pylab from scipy import stats #Generate normal sample def norm(): return random.normalvariate(9.8,1) #Run the experiment, return 1 if falsely rejects and 0 else def experiment(num_samples, p_val): x = [] #One by one we append an observation to our list for i in xrange(num_samples): x.append(norm()) #Run a t-test at p_val significance to see if we reject the hypothesis t,p = stats.ttest_1samp(x, 9.8) if p < p_val: return 1 return 0 #Check the proportion of falsely rejecting at various sample sizes rej_proportion = [] for j in xrange(10): f_rej = 0 for i in xrange(5000): f_rej += experiment(10*j+1, 0.05) rej_proportion.append(float(f_rej)/5000) #Plot the results axis = [10*j+1 for j in xrange(10)] pylab.plot(axis, rej_proportion) pylab.title('Proportion of Falsely Rejecting the Hypothesis') pylab.xlabel('Sample Size') pylab.ylabel('Proportion') pylab.show()
What is this producing? On the first run of the experiment, what is the probability that you reject the null hypothesis? Basically
, because the test knows that this isn't enough data to make a firm conclusion. If you run the experiment 10 times, what is the probability that at some point you reject the null hypothesis? It has gone up a bit. On and on this goes up to 100 trials where you have nearly a 40% chance of rejecting the null hypothesis using this method. This should make you uncomfortable, because this is ideal data where the mean really is 9.8 exactly! This isn't coming from imprecise measurements or something.
The trend will actually continue, but already because of the so-called
problem in programming this was taking a while to run, so I cut it off. As you accumulate more and more experiments, you will be more and more likely to reject the hypothesis:
Actually, if you think about this carefully it isn’t so surprising. The fault is that you recheck whether or not to reject after each sample. Recall that the
-value tells you how likely it is to see these results by random chance supposing the hypothesis is false. But the value is not
which means with enough trials you’ll get the wrong thing. If you have a sample size of
and you recheck your NHST after each sample is added, then you give yourself 100 chances to see this randomness manifest rather than checking once with all
data points. As your sample size increases, you give yourself more and more chances to see the randomness and hence as your sample goes to infinity your probability of falsely rejecting the hypothesis tends to
.
We can modify the above code to just track the p-value over a single 1000 sample experiment (the word “trial” in the title was meant to indicate dropping a ball in the physics experiment). This shows that if you cut your experiment off almost anywhere and run your NHST, then you would not reject the hypothesis. It is only because you incorrectly tracked the p-value until it dipped below 0.05 that a mistake was made:
3 thoughts on “Statistical Oddities 5: Sequential Testing”
Nice! You made that very easy to grasp. One side question: what does the “n+1 problem” refer to? Quick searching the web didn’t yield anything promising.
Thanks. I’m not sure that is standard terminology. I just heard someone use it once and trusted them.
It refers to the slowing down of code by
when you make a computation at every iteration through a list within a list of length n. I think the term comes from the fact that you do something at stage n, and then have to repeat doing it at stage n+1 rather than thinking of an implementation that parallelizes it. | https://hilbertthm90.wordpress.com/2014/03/26/statistical-oddities-5-sequential-testing/ | CC-MAIN-2016-18 | refinedweb | 1,045 | 61.36 |
Components
Modal
Modals are used to overlay content above an interface. They are intended to capture the user’s attention in order to inform or shift focus to a pertinent task.
Always specify a
appElementSelector property to trap keyboard focus in the modal and hide the rest of the app content temporarily.
import { Modal } from '@sproutsocial/racine'
Properties
Subcomponents
Modal Header
The
Modal.Header subcomponent is optional (not all Modals have to have headers), but if a header is present it should always be rendered as the first child of the Modal.
The following example shows a Modal header with a title and subtitle. Note: when rendered within an actual Modal, a close button will also be shown.
Assign Chatbot
This title and subtitle configuration is the default, but if you would like to create your own header you can do so by passing children:
Note that the
bordered prop is needed on a custom header if the bottom border is desired. If children are provided, the title and subtitle props are ignored.
Modal Close Button
The
Modal.CloseButton subcomponent renders an icon button (using the icon) that will close the parent Modal when clicked.
Modal Content
The
Modal.Content subcomponent is a simple wrapper used to contain the content of a Modal (any items not within the header or footer), and ensures that the contents scroll when appropriate.
Modal Footer
The
Modal.Footer subcomponent renders it’s children with the appropriate padding and border for the footer of a Modal. This component should always be rendered after an instance of
Modal.Content.
Recipes
Modal with empty or no header
Modal headers can be omitted altogether, or they can render only a close button if neither a title, subtitle, or children prop is passed to the
Modal.Header subcomponent.
This should only be done with the first item in the modal body is an image or illustration. If the modal body begins with text, a bordered header with a title or subtitle should be used.
The following example shows a modal rendering an empty header, which displays only a close button in the upper right corner.
Expressive modal
Destructive confirmation
Free form modal
This example showcases the freedom and flexibility we have within
Modal.Content | https://seeds.sproutsocial.com/components/modal/ | CC-MAIN-2020-45 | refinedweb | 375 | 53.81 |
Keep your lists sorted on every insert
Python is a language that provides you the tools of efficiency. The bisect module has a series of functions that work with a bisecttion algorithm. An inserting function in this grouping is the
insort function which will insert an element into a sequence in the right spot according to the sort order.
from bisect import insort sorted_list = [0, 2, 4, 6, 8, 10] insort(sorted_list, 5) # [0, 2, 4, 5, 6, 8, 10]
The above example uses a sorted list, what if the list is not sorted?
list = [0, 4, 2, 6] insort(list, 5) # [0, 4, 2, 5, 6]
It will still insert in the first place that it can find that makes sense. But what if there are two places that make sense? The algo seems to give up and just tack it on at the end.
list = [0, 4, 2, 6, 2] >>> insort(list, 5) # [0, 4, 2, 6, 2, 5]
Tweet
insort works left to right but you can also work right to left with
insert_right. In some cases this will be more efficient. | https://til.hashrocket.com/posts/bnhmjyccir-keep-your-lists-sorted-on-every-insert | CC-MAIN-2019-18 | refinedweb | 185 | 76.35 |
Convert result of matches from regex into list of string
How can I convert the list of match result from regex into
List<string>? I have this function but it always generate an exception,
Unable to cast object of type 'System.Text.RegularExpressions.Match' to type 'System.Text.RegularExpressions.CaptureCollection'.
public static List<string> ExtractMatch(string content, string pattern) { List<string> _returnValue = new List<string>(); Match _matchList = Regex.Match(content, pattern); while (_matchList.Success) { foreach (Group _group in _matchList.Groups) { foreach (CaptureCollection _captures in _group.Captures) // error { foreach (Capture _cap in _captures) { _returnValue.Add(_cap.ToString()); } } } } return _returnValue; }
If I have this string,
I have a dog and a cat.
regex
dog|cat
I want that the function will return of result into
List<string>
dog cat
With the Regex you have, you need to use
Regex.Matches to get the final list of strings like you want:
MatchCollection matchList = Regex.Matches(Content, Pattern); var list = matchList.Cast<Match>().Select(match => match.Value).ToList();
How to convert matched Regex patterns to String? - Build, Use the Regex.Matches method to extract substrings based on patterns. It uses an input string that contains several words. Each one starts Foreach: We use foreach on the MatchCollection, and then on the result of the Captures property. Free online regular expression matches extractor. Just enter your string and regular expression and this utility will automatically extract all string fragments that match to the given regex. There are no ads, popups or nonsense, just an awesome regex matcher. Load a string, get regex matches. Created for developers by developers from team
Cross-posting answer from Looping through Regex Matches --
To get just a list of Regex matches, you may:
var lookfor = @"something (with) multiple (pattern) (groups)"; var found = Regex.Matches(source, lookfor, regexoptions); var captured = found // linq-ify into list .Cast<Match>() // flatten to single list .SelectMany(o => // linq-ify o.Groups.Cast<Capture>() // don't need the pattern .Skip(1) // select what you wanted .Select(c => c.Value));
This will "flatten" all the captured values down to a single list. To maintain capture groups, use
Select rather than
SelectMany to get a list of lists.
Regex.Match Method (System.Text.RegularExpressions), try this. Hide Copy Code. mc.OfType<Match>().ToList().ForEach(m => { clsList. AddRange(m.Groups[1].Value.Split(' ')); }); clsList = clsList. Dim kvpList As New List (Of KeyValuePair (Of String,System.Text.RegularExpressions.Regex)) patterns.ForEach (Sub§ kvpList.Add (New KeyValuePair (Of String,Regex) (p,New Regex§))) Dim ib As String = InputBox ((“Enter a string to compare to your pattern” & vbNewLine & “The first pattern matched will be prompted”))
A possible solution using Linq:
using System.Linq; using System.Text.RegularExpressions; static class Program { static void Main(string[] aargs) { string value = "I have a dog and a cat."; Regex regex = new Regex("dog|cat"); var matchesList = (from Match m in regex.Matches(value) select m.Value).ToList(); } }
C# Regex.Matches Method: foreach Match, Capture, The above regex expression will match the text string, since we are trying to match a string of In the output, you can see that the first word i.e. The is returned. On the other hand, the findall function returns a list that contains all the matched� World's simplest browser-based utility for extracting regex matches from text. Load your text in the input form on the left, enter the regex below and you'll instantly get text that matches the given regex in the output area. Powerful, free, and fast. Load text – get all regexp matches. Created by developers from team Browserling.
Here's another solution that will fit into your code well.
while (_matchList.Success) { _returnValue.Add(_matchList.Value); _matchList = _matchList.NextMatch(); }
[Solved] How to convert the match collection values to list using linq , Returns a string with the contents of the n-th match in a match_results object that is ready. The object returned by str is of the basic_string corresponding to the type of the characters in the target sequence, even if the match_results object is filled using other types of character sequences, like the C-strings used in cmatch.
Historically the Regex collections have not implemented the generic collection interfaces, and the LINQ extension methods you're using operate on the generic interfaces. MatchCollection was updated in .NET Core to implement IList, and thus can be used with the Selectextension method, but when you move to .NET Standard 2.0, that interface implementation isn't there, and thus you can't just callSelect. Instead, you'll need to use the
LINQ
Cast or
OfType extensions to convert to an
IEnumerable, and then you can use
Select on that. Hope that helps.
Example
Regex wordMatcher = new Regex(@"\p{L}+"); return wordMatcher.Matches(text).Cast<Match>().Select(c => c.Value); Regex wordMatcher = new Regex(@"\p{L}+"); return wordMatcher.Matches(text).OfType<Match>().Select(c => c.Value);
Using Regex for Text Manipulation in Python, The method str.match(regexp) finds matches for regexp in the string str. It has 3 modes: If the regexp doesn’t have flag g , then it returns the first match as an array with capturing groups and properties index (position of the match), input (input string, equals str ):
In Java, I am trying to return all regex matches to an array but it seems that you can only check whether the pattern matches something or not (boolean). How can I use a regex match to form an array of all string matching a regex expression in a given string?
I am running through lines in a text file using a python script. I want to search for an img tag within the text document and return the tag as text.. When I run the regex re.match(line) it returns a _sre.SRE_MATCH object.
Method : Using join regex + loop + re.match() This task can be performed using combination of above functions. In this, we create a new regex string by joining all the regex list and then match the string against it to check for match using match() with any of the element of regex list.
- This previous answer may help: stackoverflow.com/questions/5767605/…
- did try that but i always get
foreach statement cannot operate on variables of type 'System.Text.RegularExpressions.Match' because 'System.Text.RegularExpressions.Match' does not contain a public definition for 'GetEnumerator'
- to retain multiple groups, see my answer below stackoverflow.com/a/21123574/1037948
- The result of this isn't of "List" type. However, if you change the last row to
.Select(c => c.Value)).ToList<string>();then it will be.
- @Cragmonkey right; but you shouldn't need to specify the type, just
.ToList()(.net45)
- Threw that in there bc I got a type exception without it.
- Nice shot. I love the
.Skip(1)section. | http://thetopsites.net/article/60109650.shtml | CC-MAIN-2021-04 | refinedweb | 1,128 | 59.3 |
Numeric Haskell: A Repa Tutorial
From HaskellWiki
Revision as of 03:34, 17 May 2011
Repa is a Haskell library for high performance, regular, multi-dimensional parallel arrays. All numeric data is stored unboxed and functions written with the Repa combinators are automatically parallel (provided you supply "+RTS -N" on the command line when running the program).
This document provides a tutorial on array programming in Haskell using the repa package.
Note: a companion tutorial to this is provided as the vector tutorial.
1 Quick Tour
Repa (REgular PArallel arrays) is an advanced, multi-dimensional parallel arrays library for Haskell, with a number of distinct capabilities:
- The arrays are "regular" (i.e. dense and rectangular); and
- Functions may be written that are polymorphic in the shape of the array;
- Many operations on arrays are accomplished by changing only the shape of the array (without copying elements);
- The library will automatically parallelize operations over arrays.
This is a quick start guide for the package. For further information, consult:
- The Haddock Documentation
- Regular, Shape-polymorphic, Parallel Arrays in Haskell.
- Efficient Parallel Stencil Convolution in Haskell
1.1 Importing the library
Download the `repa` package:
$ cabal install repa
and import it qualified:
import qualified Data.Array.Repa as R
The library needs to be imported qualified as it shares the same function names as list operations in the Prelude.
Note: Operations that involve writing new index types for Repa arrays will require the '-XTypeOperators' language extension.
For non-core functionality, a number of related packages are available:
- repa-bytestring
- repa-io
- repa-algorithms
- repa-devil (image loading)
and example algorithms in:
1.2 Index types and shapes
Before we can get started manipulating arrays, we need a grasp of repa's notion of array shape. Much like the classic 'array' library in Haskell, repa-based arrays are parameterized via a type which determines the dimension of the array, and the type of its index. However, while classic arrays take tuples to represent multiple dimensions, Repa arrays use a richer type language for describing multi-dimensional array indices and shapes (technically, a heterogeneous snoc list).
Shape types are built somewhat like lists. The constructor Z corresponds to a rank zero shape, and is used to mark the end of the list. The :. constructor adds additional dimensions to the shape. So, for example, the shape:
(Z:.3 :.2 :.3)
is the shape of a small 3D array, with shape type
(Z:.Int:.Int:.Int)
The most common dimensions are given by the shorthand names:
type DIM0 = Z type DIM1 = DIM0 :. Int type DIM2 = DIM1 :. Int type DIM3 = DIM2 :. Int type DIM4 = DIM3 :. Int type DIM5 = DIM4 :. Int
thus,
Array DIM2 Double
is the type of a two-dimensional array of doubles, indexed via `Int` keys, while
Array Z Double
is a zero-dimension object (i.e. a point) holding a Double.
Many operations over arrays are polymorphic in the shape / dimension
component. Others require operating on the shape itself, rather than
the array. A typeclass,
Shape, lets us operate uniformally
over arrays with different shape.
1.3 Shapes
To build values of `shape` type, we can use the
Z and
:. constructors:
> Z -- the zero-dimension Z
For arrays of non-zero dimension, we must give a size. Note: a common error is to leave off the type of the size.
> :t Z :. 10 Z :. 10 :: Num head => Z :. head
leading to annoying type errors about unresolved instances, such as:
No instance for (Shape (Z :. head0))
To select the correct instance, you will need to annotate the size literals with their concrete type:
> :t Z :. (10 :: Int) Z :. (10 :: Int) :: Z :. Int
is the shape of 1D arrays of length 10, indexed via Ints.
Given an array, you can always find its shape by calling
extent.
Additional convenience types for selecting particular parts of a shape are also provided (
All, Any, Slice etc.) which are covered later in the tutorial.
1.4 Generating arrays
New repa arrays ("arrays" from here on) can be generated in many ways, and we always begin by importing the
Data.Array.Repa module:
$ ghci GHCi, version 7.0.3: :? for help Loading package ghc-prim ... linking ... done. Loading package integer-gmp ... linking ... done. Loading package base ... linking ... done. Loading package ffi-1.0 ... linking ... done. Prelude> :m + Data.Array.Repa
They may be constructed from lists, for example. Here is a one dimensional array of length 10, here, given the shape `(Z :. 10)`:
> let x = fromList (Z :. (10::Int)) [1..10] > x [1.0,2.0,3.0,4.0,5.0,6.0,7.0,8.0,9.0,10.0]
The type of `x` is inferred as:
> :t x x :: Array (Z :. Int) Double
which we can read as "an array of dimension 1, indexed via Int keys, holding elements of type Double"
We could also have written the type as:
x :: Array DIM1 Double
The same data may also be treated as a two dimensional array, by changing the shape parameter:
> let x = fromList (Z :. (5::Int) :. (2::Int)) [1..10] > x [1.0,2.0,3.0,4.0,5.0,6.0,7.0,8.0,9.0,10.0]
which has the type:
x :: Array ((Z :. Int) :. Int) Double
or, more simply:
x :: Array DIM2 Double
1.4.1 Building arrays from vectors
It is also possible to build arrays from unboxed vectors, from the 'vector' package:
fromVector :: Shape sh => sh -> Vector a -> Array sh a
New arrays are built by applying a shape to the vector. For example:
import Data.Vector.Unboxed > let x = fromVector (Z :. (10::Int)) (enumFromN 0 10) [0.0,1.0,2.0,3.0,4.0,5.0,6.0,7.0,8.0,9.0]
is a one-dimensional array of doubles. As usual, we can also impose other shapes:
> let x = fromVector (Z :. (3::Int) :. (3::Int)) (enumFromN 0 9) > x [0.0,1.0,2.0,3.0,4.0,5.0,6.0,7.0,8.0] > :t x x :: Array ((Z :. Int) :. Int) Double
to create a 3x3 array.
1.4.2 Reading arrays from files
Using the repa-io package, arrays may be written and read from files in a number of formats:
- as BMP files; and
- in a number of text formats.
with other formats rapidly appearing. For the special case of arrays of Word8 values, the repa-bytestring library supports generating bytestrings in memory.
An example: to write an 2D array to an ascii file:
writeMatrixToTextFile "/tmp/test.dat" x
This will result in a file containing:
MATRIX 2 5 1.0 2.0 3.0 4.0 5.0 6.0 7.0 8.0 9.0 10.0
In turn, this file may be read back in via
readMatrixFromTextFile.
To process .bmp files, use Data.Array.Repa.IO.BMP, as follows (currently reading only works for 24 bit .bmp):
Data.Array.Repa.IO.BMP> x <- readImageFromBMP "/tmp/test24.bmp"
Reads this .bmp image:
as a 3D array of Word8, which can be further processed.
Note: at the time of writing, there are no binary instances for repa arrays
For image IO in many, many formats, use the repa-devil library.
1.5 Copying arrays from pointers
You can also generate new repa arrays by copying data from a pointer, using the repa-bytestring package. Here is an example, using "copyFromPtrWord8":
import Data.Word import Foreign.Ptr import qualified Data.Vector.Storable as V import qualified Data.Array.Repa as R import Data.Array.Repa import qualified Data.Array.Repa.ByteString as R import Data.Array.Repa.IO.DevIL i, j, k :: Int (i, j, k) = (255, 255, 4 {-RGBA-}) -- 1d vector, filled with pretty colors v :: V.Vector Word8 v = V.fromList . take (i * j * k) . cycle $ concat [ [ r, g, b, 255 ] | r <- [0 .. 255] , g <- [0 .. 255] , b <- [0 .. 255] ] ptr2repa :: Ptr Word8 -> IO (R.Array R.DIM3 Word8) ptr2repa p = R.copyFromPtrWord8 (Z :. i :. j :. k) p main = do -- copy our 1d vector to a repa 3d array, via a pointer r <- V.unsafeWith v ptr2repa runIL $ writeImage "test.png" r return ()
This fills a vector, converts it to a pointer, then copies that pointer to a 3d array, before writing the result out as this image:
1.6 Indexing arrays
To access elements in repa arrays, you provide an array and a shape, to access the element:
(!) :: (Shape sh, Elt a) => Array sh a -> sh -> a
So:
> let x = fromList (Z :. (10::Int)) [1..10] > x ! (Z :. 2) 3.0
Note that we can't give just a bare literal as the shape, even for one-dimensional arrays, :
> x ! 2 No instance for (Num (Z :. Int)) arising from the literal `2'
as the Z type isn't in the Num class, and Haskell's numeric literals are overloaded.
What if the index is out of bounds, though?
> x ! (Z :. 11) *** Exception: ./Data/Vector/Generic.hs:222 ((!)): index out of bounds (11,10)
an exception is thrown. An altnerative is to indexing functions that return a Maybe:
(!?) :: (Shape sh, Elt a) => Array sh a -> sh -> Maybe a
An example:
> x !? (Z :. 9) Just 10.0 > x !? (Z :. 11) Nothing
1.7 Operations on arrays
Besides indexing, there are many regular, list-like operations on arrays.
1.7.1 Maps, zips, filters and folds
We can map over multi-dimensional arrays:
> let x = fromList (Z :. (3::Int) :. (3::Int)) [1..9] > x [1.0,2.0,3.0,4.0,5.0,6.0,7.0,8.0,9.0]
since `map` conflicts with the definition in the Prelude, we have to use it qualified:
> Data.Array.Repa.map (^2) x [1.0,4.0,9.0,16.0,25.0,36.0,49.0,64.0,81.0]
Maps leave the dimension unchanged.
Folding reduces the inner dimension of the array.
fold :: (Shape sh, Elt a) => (a -> a -> a) -> a -> Array (sh :. Int) a -> Array sh a
So if 'x' is a 3D array:
> let x = fromList (Z :. (3::Int) :. (3::Int)) [1..9] > x [1.0,2.0,3.0,4.0,5.0,6.0,7.0,8.0,9.0]
We can sum each row, to yield a 2D array:
> fold (+) 0 x [6.0,15.0,24.0]
Two arrays may be combined via
zipWith:
zipWith :: (Shape sh, Elt b, Elt c, Elt a) => (a -> b -> c) -> Array sh a -> Array sh b -> Array sh c
an example:
> zipWith (*) x x [1.0,4.0,9.0,16.0,25.0,36.0,49.0,64.0,81.0]
1.7.2 Numeric operations: negation, addition, subtraction, multiplication
Repa arrays are instances of the
Num. This means that
operations on numerical elements are lifted automagically onto arrays of
such elements. For example,
(+) on two double values corresponds to
element-wise addition,
(+), of the two arrays of doubles:
> let x = fromList (Z :. (10::Int)) [1..10] > x + x [2.0,4.0,6.0,8.0,10.0,12.0,14.0,16.0,18.0,20.0]
Other operations from the Num class work just as well:
> -x [-1.0,-2.0,-3.0,-4.0,-5.0,-6.0,-7.0,-8.0,-9.0,-10.0] > x ^ 3 [1.0,8.0,27.0,64.0,125.0,216.0,343.0,512.0,729.0,1000.0]
> x * x [1.0,4.0,9.0,16.0,25.0,36.0,49.0,64.0,81.0,100.0]
1.8 Changing the shape of an array
One of the main advantages of repa-style arrays over other arrays in Haskell is the ability to reshape data without copying. This is achieved via *index-space transformations*.
An example: transposing a 2D array (this example taken from the repa paper). First, the type of the transformation:
transpose2D :: Elt e => Array DIM2 e -> Array DIM2 e
Note that this transform will work on DIM2 arrays holding any elements. Now, to swap rows and columns, we have to modify the shape:
transpose2D a = backpermute (swap e) swap a where e = extent a swap (Z :. i :. j) = Z :. j :. i
The swap function reorders the index space of the array. To do this, we extract the current shape of the array, and write a function that maps the index space from the old array to the new array. That index space function is then passed to backpermute which actually constructs the new array from the old one.
backpermute generates a new array from an old, when given the new shape, and a function that translates between the index space of each array (i.e. a shape transformer).
backpermute :: (Shape sh, Shape sh', Elt a) => sh' -> (sh' -> sh) -> Array sh a -> Array sh' a
Note that the array created is not actually evaluated (we only modified the index space of the array).
Transposition is such a common operation that it is provided by the library:
transpose :: (Shape sh, Elt a) => Array ((sh :. Int) :. Int) a -> Array ((sh :. Int) :. Int)
the type indicate that it works on the lowest two dimensions of the array.
Other operations on index spaces include taking slices and joining arrays into larger ones.
1.9 Examples
Following are some examples of useful functions that exercise the API.
1.9.1 Rotating an image: backpermute
Flip an image upside down:
import System.Environment import Data.Word import Data.Array.Repa hiding ((++)) import Data.Array.Repa.IO.DevIL main = do [f] <- getArgs runIL $ do v <- readImage f writeImage ("flip-"++f) (rot180 v) rot180 :: Array DIM3 Word8 -> Array DIM3 Word8 rot180 g = backpermute e flop g where e@(Z :. x :. y :. _) = extent g flop (Z :. i :. j :. k) = (Z :. x - i - 1 :. y - j - 1 :. k)
Running this:
$ ghc -O2 --make A.hs $ ./A haskell.jpg
Results in:
1.9.2 Example: matrix-matrix multiplication
A more advanced example from the Repa paper: matrix-matrix multiplication: the result of matrix multiplication is a matrix whose elements are found by multiplying the elements of each row from the first matrix by the associated elements of the same column from the second matrix and summing the result.
if
and
then
So we take two, 2D arrays and generate a new array, using our transpose function from earlier:
mmMult :: (Num e, Elt e) => Array DIM2 e -> Array DIM2 e -> Array DIM2 e mmMult a b = sum (zipWith (*) aRepl bRepl) where t = transpose2D b aRepl = extend (Z :.All :.colsB :.All) a bRepl = extend (Z :.rowsA :.All :.All) t (Z :.colsA :.rowsA) = extent a (Z :.colsB :.rowsB) = extent b
The idea is to expand both 2D argument arrays into 3D arrays by
replicating them across a new axis. The front face of the cuboid that
results represents the array
a, which we replicate as often
as
b has columns
(colsB), producing
aRepl.
The top face represents
t (the transposed b), which we
replicate as often as a has rows
(rowsA), producing
bRepl,. The two replicated arrays have the same extent,
which corresponds to the index space of matrix multiplication
Optimized implementations of this function are available in the repa-algorithms package.
1.9.3 Example: parallel image desaturation
To convert an image from color to greyscale, we can use the luminosity method to averge RGB pixels into a common grey value, where the average is weighted for human perception of green
The formula for luminosity is 0.21 R + 0.71 G + 0.07 B.
We can write a parallel image desaturation tool using repa and the repa-devil image library:
import Data.Array.Repa.IO.DevIL import Data.Array.Repa hiding ((++)) import Data.Word import System.Environment -- -- Read an image, desaturate, write out with new name. -- main = do [f] <- getArgs runIL $ do a <- readImage f let b = traverse a id luminosity writeImage ("grey-" ++ f) b
And now the luminosity transform itself, which averages the 3 RGB colors based on preceived weight:
-- -- (Parallel) desaturation of an image via the luminosity method. -- luminosity :: (DIM3 -> Word8) -> DIM3 -> Word8 luminosity _ (Z :. _ :. _ :. 3) = 255 -- alpha channel luminosity f (Z :. i :. j :. _) = ceiling $ 0.21 * r + 0.71 * g + 0.07 * b where r = fromIntegral $ f (Z :. i :. j :. 0) g = fromIntegral $ f (Z :. i :. j :. 1) b = fromIntegral $ f (Z :. i :. j :. 2)
And that's it! The result is a parallel image desaturator, when compiled with
$ ghc -O -threaded -rtsopts --make A.hs -fforce-recomp
which we can run, to use two cores:
$ time ./A sunflower.png +RTS -N2 -H ./A sunflower.png +RTS -N2 -H 0.19s user 0.03s system 135% cpu 0.165 total
Given an image like this:
The desaturated result from Haskell:
| https://wiki.haskell.org/index.php?title=Numeric_Haskell:_A_Repa_Tutorial&diff=prev&oldid=40004 | CC-MAIN-2015-48 | refinedweb | 2,780 | 67.15 |
Convert HTML to PDF using Python
Want to share your content on python-bloggers? click here.
In this tutorial we will explore how to convert HTML files to PDF using Python.
Table of Contents
- Introduction
- Sample file
- Convert HTML file to PDF using Python
- Convert Webpage to PDF using Python
- Conclusion
Introduction
There are several online tools that allow you to convert HTML files and webpages to PDF, and most of them are free.
While it is a simple process, being able to automate it can be very useful for some HTML code testing as well as saving required webpages as PDF files.
To continue following this tutorial we will need:
- wkhtmltopdf
- pdfkit
wkhtmltopdf is an open source command line tool to render HTML files into PDF using the Qt WebKit rendering engine.
In order to use it in Python, we will also need the pdfkit library which is a wrapper for wkhtmltopdf utility.
First, search for the wkhtmltopdf installer for your operating system. For Windows, you can find the latest version of wkhtmltopdf installer here. Simply download the .exe file and install on your computer.
Remember the path to the directory where it will be installed.
In my case it is: C:\Program Files\wkhtmltopdf
If you don’t have the Python library installed, please open “Command Prompt” (on Windows) and install it using the following code:
pip install pdfkit
Sample file
In order to continue in this tutorial we will need some HTML file to work with.
Here is a sample HTML file we will use in this tutorial:
If you download it and open in your browser, you should see:
and opening it in the code editor should show:
Convert HTML file to PDF using Python
Let’s start with converting HTML file to PDF using Python.
The sample.html file is located in the same directory as the main.py file with the code:
First, we will need to find the path to the wkhtmltopdf executable file wkhtmltopdf.exe
Recall that we installed in C:\Program Files\wkhtmltopdf meaning that the .exe file is in that folder. Navigating to it, you should see that the path to executable file is: C:\Program Files\wkhtmltopdf\bin\wkhtmltopdf.exe
Now we have everything we need and can easily convert HTML file to PDF using Python:
import pdfkit #Define path to wkhtmltopdf.exe path_to_wkhtmltopdf = r'C:\Program Files\wkhtmltopdf\bin\wkhtmltopdf.exe' #Define path to HTML file path_to_file = 'sample.html' #Point pdfkit configuration to wkhtmltopdf.exe config = pdfkit.configuration(wkhtmltopdf=path_to_wkhtmltopdf) #Convert HTML file to PDF pdfkit.from_file(path_to_file, output_path='sample.pdf', configuration=config)
And you should see sample.pdf created in the same directory:
which should should look like this:
Convert Webpage to PDF using Python
Using pdfkit library you can also convert webpages into PDF using Python.
Let’s convert the wkhtmltopdf project page to PDF!
In this section we will reuse most of the code from the previous section, except now instead of using HTML file we will use the URL of a webpage and the .from_url() method of pdfkit class:
import pdfkit #Define path to wkhtmltopdf.exe path_to_wkhtmltopdf = r'C:\Program Files\wkhtmltopdf\bin\wkhtmltopdf.exe' #Define url url = '' #Point pdfkit configuration to wkhtmltopdf.exe config = pdfkit.configuration(wkhtmltopdf=path_to_wkhtmltopdf) #Convert Webpage to PDF pdfkit.from_url(url, output_path='webpage.pdf', configuration=config)
And you should see webpage.pdf created in the same directory:
which should should look like this:
Conclusion
In this article we explored how to convert HTML to PDF using Python and wkhtmltopdf.
Feel free to leave comments below if you have any questions or have suggestions for some edits and check out more of my Python Programming tutorials.
The post Convert HTML to PDF using Python appeared first on PyShark.
Want to share your content on python-bloggers? click here. | https://python-bloggers.com/2022/06/convert-html-to-pdf-using-python/ | CC-MAIN-2022-40 | refinedweb | 638 | 64.3 |
This article by David Saltares Márquez and Alberto Cejas Sánchez, the authors of Libgdx Cross-platform Game Development Cookbook, describes how we can generate a distance field font and render it in Libgdx. As a bitmap font is scaled up, it becomes blurry due to linear interpolation. It is possible to tell the underlying texture to use the nearest filter, but the result will be pixelated. Additionally, until now, if you wanted big and small pieces of text using the same font, you would have had to export it twice at different sizes. The output texture gets bigger rather quickly, and this is a memory problem.
(For more resources related to this topic, see here.)
Distance field fonts is a technique that enables us to scale monochromatic textures without losing out on quality, which is pretty amazing. It was first published by Valve (Half Life, Team Fortress…) in 2007. It involves an offline preprocessing step and a very simple fragment shader when rendering, but the results are great and there is very little performance penalty. You also get to use smaller textures!
In this article, we will cover the entire process of how to generate a distance field font and how to render it in Libgdx.
Getting ready
For this, we will load the data/fonts/oswald-distance.fnt and data/fonts/oswald.fnt files. To generate the fonts, Hiero is needed, so download the latest Libgdx package from and unzip it.
Make sure the samples projects are in your workspace. Please visit the link to download the sample projects which you will need.
How to do it…
First, we need to generate a distance field font with Hiero. Then, a special fragment shader is required to finally render scaling-friendly text in Libgdx.
Generating distance field fonts with Hiero
- Open up Hiero from the command line. Linux and Mac users only need to replace semicolons with colons and back slashes with forward slashes:
java -cp gdx.jar;gdx-natives.jar;gdx-backend-lwjgl.jar;gdx-backend-lwjgl-natives.jar;extensions gdx-toolsgdx-tools.jar com.badlogic.gdx.tools.hiero.Hiero
Select the font using either the System or File options.
- This time, you don’t need a really big size; the point is to generate a small texture and still be able to render text at high resolutions, maintaining quality. We have chosen 32 this time.
- Remove the Color effect, and add a white Distance field effect.
- Set the Spread effect; the thicker the font, the bigger should be this value. For Oswald, 4.0 seems to be a sweet spot.
- To cater to the spread, you need to set a matching padding. Since this will make the characters render further apart, you need to counterbalance this by the setting the X and Y values to twice the negative padding.
- Finally, set the Scale to be the same as the font size. Hiero will struggle to render the charset, which is why we wait until the end to set this property.
- Generate the font by going to File | Save BMFont files (text)….
The following is the Hiero UI showing a font texture with a Distance field effect applied to it:
Distance field fonts shader
We cannot use the distance field texture to render text for obvious reasons—it is blurry! A special shader is needed to get the information from the distance field and transform it into the final, smoothed result. The vertex shader found in data/fonts/font.vert is simple. The magic takes place in the fragment shader, found in data/fonts/font.frag and explained later.
First, we sample the alpha value from the texture for the current fragment and call it distance. Then, we use the smoothstep() function to obtain the actual fragment alpha. If distance is between 0.5-smoothing and 0.5+smoothing, Hermite interpolation will be used. If the distance is greater than 0.5+smoothing, the function returns 1.0, and if the distance is smaller than 0.5-smoothing, it will return 0.0. The code is as follows:
#ifdef GL_ES precision mediump float; precision mediump int; #endif uniform sampler2D u_texture; varying vec4 v_color; varying vec2 v_texCoord; const float smoothing = 1.0/128.0; void main() { float distance = texture2D(u_texture, v_texCoord).a; float alpha = smoothstep(0.5 - smoothing, 0.5 + smoothing, distance); gl_FragColor = vec4(v_color.rgb, alpha * v_color.a); }
The smoothing constant determines how hard or soft the edges of the font will be. Feel free to play around with the value and render fonts at different sizes to see the results. You could also make it uniform and configure it from the code.
Rendering distance field fonts in Libgdx
Let’s move on to DistanceFieldFontSample. java, where we have two BitmapFont instances: normalFont (pointing to data/fonts/oswald.fnt) and distanceShader (pointing to data/fonts/oswald-distance.fnt). This will help us illustrate the difference between the two approaches. Additionally, we have a ShaderProgram instance for our previously defined shader.
In the create() method, we instantiate both the fonts and shader normally:
normalFont = new BitmapFont(Gdx.files.internal("data/fonts/oswald.fnt")); normalFont.setColor(0.0f, 0.56f, 1.0f, 1.0f); normalFont.setScale(4.5f); distanceFont = new BitmapFont(Gdx.files.internal("data/fonts/oswald-distance.fnt")); distanceFont.setColor(0.0f, 0.56f, 1.0f, 1.0f); distanceFont.setScale(4.5f); fontShader = new ShaderProgram(Gdx.files.internal("data/fonts/font.vert"), Gdx.files.internal("data/fonts/font.frag")); if (!fontShader.isCompiled()) { Gdx.app.error(DistanceFieldFontSample.class.getSimpleName(), "Shader compilation failed:n" + fontShader.getLog()); }
We need to make sure that the texture our distanceFont just loaded is using linear filtering:
distanceFont.getRegion().getTexture().setFilter(TextureFilter.Linear, TextureFilter.Linear);
Remember to free up resources in the dispose() method, and let’s get on with render(). First, we render some text with the regular font using the default shader, and right after this, we do the same with the distance field font using our awesome shader:
batch.begin(); batch.setShader(null); normalFont.draw(batch, "Distance field fonts!", 20.0f, VIRTUAL_HEIGHT - 50.0f); batch.setShader(fontShader); distanceFont.draw(batch, "Distance field fonts!", 20.0f, VIRTUAL_HEIGHT - 250.0f); batch.end();
The results are pretty obvious; it is a huge win of memory and quality over a very small price of GPU time. Try increasing the font size even more and be amazed at the results! You might have to slightly tweak the smoothing constant in the shader code though:
How it works…
Let’s explain the fundamentals behind this technique. However, for a thorough explanation, we recommend that you read the original paper by Chris Green from Valve ().
A distance field is a derived representation of a monochromatic texture. For each pixel in the output, the generator determines whether the corresponding one in the original is colored or not. Then, it examines its neighborhood to determine the 2D distance in pixels, to a pixel with the opposite state. Once the distance is calculated, it is mapped to a [0, 1] range, with 0 being the maximum negative distance and 1 being the maximum positive distance. A value of 0.5 indicates the exact edge of the shape. The following figure illustrates this process:
Within Libgdx, the BitmapFont class uses SpriteBatch to render text normally, only this time, it is using a texture with a Distance field effect applied to it. The fragment shader is responsible for performing a smoothing pass. If the alpha value for this fragment is higher than 0.5, it can be considered as in; it will be out in any other case:
This produces a clean result.
There’s more…
We have applied distance fields to text, but we have also mentioned that it can work with monochromatic images. It is simple; you need to generate a low resolution distance field transform. Luckily enough, Libgdx comes with a tool that does just this.
Open a command-line window, access your Libgdx package folder and enter the following command:
java -cp gdx.jar;gdx-natives.jar;gdx-backend-lwjgl.jar;gdx-backend-lwjgl-natives.jar;extensionsgdx-tools gdx-tools.jar com.badlogic.gdx.tools.distancefield.DistanceFieldGenerator
The distance field font generator takes the following parameters:
- –color: This parameter is in hexadecimal RGB format; the default is ffffff
- –downscale: This is the factor by which the original texture will be downscaled
- –spread: This is the edge scan distance, expressed in terms of the input
Take a look at this example:
java […] DistanceFieldGenerator --color ff0000 --downscale 32 --spread 128 texture.png texture-distance.png
Alternatively, you can use the gdx-smart-font library to handle scaling. It is a simpler but a bit more limited solution ().
Summary
In this article, we have covered the entire process of how to generate a distance field font and how to render it in Libgdx.
Further resources on this subject:
- Cross-platform Development – Build Once, Deploy Anywhere [Article]
- Getting into the Store [Article]
- Adding Animations [Article] | https://hub.packtpub.com/scaling-friendly-font-rendering-distance-fields/ | CC-MAIN-2019-39 | refinedweb | 1,489 | 57.47 |
Usage¶
First demonstration¶
A code sample tells more than thousand words:
import dryscrape import sys if 'linux' in sys.platform: # start xvfb in case no X is running. Make sure xvfb # is installed, otherwise this won't work! dryscrape.start_xvfb() search_term = 'dryscrape' # set up a web scraping session sess = dryscrape.Session(base_url = '') # we don't need images sess.set_attribute('auto_load_images', False) # visit homepage and search for a term sess.visit('/') q = sess.at_xpath('//*[@name="q"]') q.set(search_term) q.form().submit() # extract all links for link in sess.xpath('//a[@href]'): print(link['href']) # save a screenshot of the web page sess.render('google.png') print("Screenshot written to 'google.png'")
In this sample, we use dryscrape to do a simple web search on Google.
Note that we set up a Webkit driver instance here and pass it to a dryscrape
Session in the constructor. The session instance
then passes every method call it cannot resolve – such as
visit(), in this case – to the
underlying driver. | https://dryscrape.readthedocs.io/en/latest/usage.html | CC-MAIN-2018-43 | refinedweb | 166 | 68.97 |
CORBA Programming/Concepts
Object Definition[edit]
A CORBA object is defined using the CORBA IDL programming language. CORBA IDL is pure definitions language, like, for example, UML. You only define the external interface and then choose an implementation language to actually implement your object.
There are many implementation languages available. The OMG alone defines language mappings for Ada, C, C++, COBOL, Java, Lisp, PL/1, Python and Smalltalk. And there might be more.
However, some implementation languages make it easier than others to implement CORBA objects. In order to implement a CORBA object, the implementation's language needs to have a set of features which include object orientation, modules (packages or namespaces), and generics (templates or dynamic typing). If a language lacks any of those vital features, they have to be emulated. The language mapping provides those emulation layers for you, but it does not mean they are easy to use.
The differences are actually so large that it might be better to learn a new programming language with an easy mapping than use a known language with a particularly difficult mapping.
Another nice feature of the CORBA concept is that you don't need to use the object with the same programming language as the one the object is implemented in. So client and server may use different programming languages. | https://en.wikibooks.org/wiki/CORBA_Programming/Concepts | CC-MAIN-2016-30 | refinedweb | 221 | 54.63 |
Getting Started with Servlets
In This Chapter:
A "Hello World" Servlet
The Anatomy of a Servlet
Sending a Response to the Browser
The HttpServlet Class
Choosing Between Java Server Pages and Servlets
A "Hello World" Servlet
Servlets are a little more involved than Java Server Pages, but in simple cases they aren't too bad. Listing 3.1 shows you the "Hello World" program in servlet form.
Listing 3.1 Source Code for HelloWorldServlet.java
package usingjsp;

import javax.servlet.*;
import java.io.*;

public class HelloWorldServlet extends GenericServlet
{
    public void service(ServletRequest request,
        ServletResponse response)
        throws IOException
    {
        // Tell the Web server that the response is HTML
        response.setContentType("text/html");

        // Get the PrintWriter for writing out the response
        PrintWriter out = response.getWriter();

        // Write the HTML back to the browser
        out.println("<HTML>");
        out.println("<BODY>");
        out.println("<H1>Hello World!</H1>");
        out.println("</BODY>");
        out.println("</HTML>");
    }
}
The HTML portion of the HelloWorldServlet is probably the most recognizable part. For the sake of completeness, the browser's view of the HelloWorldServlet is shown in Figure 3.1.
Figure 3.1 A servlet can generate HTML code.
Unlike Java Server Pages, servlets are pure Java classes. The good news is that you don't have to learn any additional syntax to create a servlet; the bad news is that you have to do a bit more work in your programs. As you can see from the "Hello World" servlet, the amount of work isn't really that great.
Compiling the Servlet
Before you compile the servlet, make sure the servlet classes are in your classpath. The location of the servlet classes varies depending on which servlet engine you are using, but typically they are in a file called servlet.jar. If you are using the Resin servlet engine, the servlet classes are in a file called jsdk22.jar (for version 2.2 of the Servlet API). The location of the jar file also varies depending on the servlet engine, but you usually find it in the lib directory. Appendixes C-F in this book contain configuration information about several popular Web servers. If all else fails, consult the documentation for your servlet engine. In fact, you should probably consult the documentation first.
If you are having trouble compiling your servlet, see the "Troubleshooting" section at the end of this chapter.
Runtime Classpath
Unlike Java Server Pages, servlets must be in the servlet engine's classpath. When a Java Server Page runs, the JSP engine ensures that the servlet it generates is visible to the JSP engine. Unfortunately, you don't have the luxury of the JSP engine to help you out when you write servlets. Most servlet engines enable you to modify the classpath for the purpose of loading servlets, so you won't have to add all your servlet directories to the system classpath.
Listing 3.1 showed that you can put your servlet into a package. The name of the servlet becomes the fully qualified classname of the servlet. The URL used to load the servlet specifies the pathname for the servlet as /servlet/usingjsp.HelloWorldServlet (refer to Figure 3.1). Most servlet engines contain a special URL mapping for the /servlet directory, which signals that you want to run a servlet. When the Web server sees this special URL, it passes it on to the servlet engine. This /servlet directory usually picks up servlets from the classpath, so you can easily run any servlet that is in your system classpath just by appending the classname to /servlet/.
NOTE
You almost always have to set up a specific URL pattern for running servlets (such as /servlet/) or you need to set up a full URL that points to a servlet. Unlike a JSP, a servlet doesn't have an extension on the end of its name to indicate what kind of a file it is.
If you are having trouble running your servlet, see the "Troubleshooting" section at the end of this chapter.
The HelloWorldServlet In-Depth
The first thing your servlet must do is implement the Servlet interface. There are two ways to do it: subclass from a class that implements the Servlet interface or implement the Servlet interface directly in the servlet. The HelloWorldServlet takes the easier approach by subclassing an existing class that implements Servlet. There are other classes that also implement the Servlet interface, and they will be discussed shortly.
When a request comes in for a particular servlet, the servlet engine loads the servlet (if it has not yet been loaded) and invokes the servlet's service method. The method takes two arguments: an object containing information about the request from the browser and an object containing information about the response going back to the browser.
TIP
Currently, most servlet engines don't automatically reload a servlet after it has been loaded. If you recompile a servlet, you usually need to restart the servlet engine to pick up the changes. This problem should slowly disappear over time as vendors create special class loaders to reload servlets when needed.
Next, the servlet must tell the Web browser what kind of content is being returned. Most of the time, you will be returning HTML content, so set the content type to text/html. As you will see in Chapter 24, "Creating an XML Application," you can also set the content type to text/xml and return XML data back to the browser. Earlier in this chapter, Listing 3.1 showed that you set the content type of the response by invoking setContentType in the response object.
After you have set the content type you are ready to start sending text back to the browser. Of course you need some sort of output object to send the text back. Again, the response object has the methods necessary to get an output stream for writing a response. Because the "Hello World" servlet is writing out text, it only needs a PrintWriter object, so it calls the getWriter method in the response object. If you need to send binary data back to the browser, you should use the getOutputStream method in the response object.
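The choice between getWriter and getOutputStream mirrors the general character-versus-byte split in java.io, and can be illustrated without any servlet container. The following stand-alone sketch only mentions the servlet response objects in comments; nothing in it depends on the servlet API:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.io.PrintWriter;
import java.nio.charset.StandardCharsets;

public class WriterVsStream {
    public static void main(String[] args) throws IOException {
        // Character output: a PrintWriter, analogous to the writer
        // returned by response.getWriter() in a servlet.
        ByteArrayOutputStream textTarget = new ByteArrayOutputStream();
        PrintWriter out = new PrintWriter(
                new OutputStreamWriter(textTarget, StandardCharsets.UTF_8));
        out.println("<H1>Hello World!</H1>");
        out.flush(); // a servlet engine normally flushes for you

        // Byte output: a raw stream, analogous to the stream returned
        // by response.getOutputStream() for images and other binary data.
        ByteArrayOutputStream binaryTarget = new ByteArrayOutputStream();
        binaryTarget.write(new byte[] { (byte) 0x89, 'P', 'N', 'G' });

        System.out.println(textTarget.size() > 0); // true
        System.out.println(binaryTarget.size());   // 4
    }
}
```

The writer handles character encoding for you; the raw stream writes bytes untouched, which is why binary content must not go through a PrintWriter.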
The final part of the "Hello World" servlet should be the most obvious, and if you paid attention in Chapter 2, "Getting Started with Java Server Pages," this part should be the most familiar. The "Hello World" servlet uses the println method in the PrintWriter to send HTML back to the browser. Remember, in Chapter 2 you saw a portion of the servlet code generated by the "Hello World" Java Server Page. Aside from some extra comments, that code is almost identical to the code at the end of the "Hello World" servlet. After all, Java Server Pages eventually become servlets and there are only so many ways to send output from a servlet, so why shouldn't they be similar? | http://www.informit.com/articles/article.aspx?p=160309&seqNum=2 | CC-MAIN-2018-26 | refinedweb | 1,176 | 69.62 |
Asked by:
Help with C# Syntax
Question
I haven't written C# for a while. I am getting the following error and all the "=" are turning red in the Visual Studio editor in all the code that's BOLD. This is sample code from HL7 message writer. I think this is older C# syntax that needs to be updated. It's using .NET 2.0 framework. Thanks
Error: Invalid token '=' in class, struct, or interface member declaration
All replies
- Those are the sorts of errors I would expect if the code is in a class but not in a method. I notice you end with a "return" statement, so it looks like you intend it to be in a method. You should check all the opening and closing braces to make sure that code has not ended up outside a method by accident.
- Proposed as answer by CoolDadTxModerator Monday, November 13, 2017 3:53 PM
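The pattern being described can be reduced to a stripped-down fragment (the class and method names here are made up for illustration):

```csharp
class Hl7MessageBuilder   // hypothetical name, for illustration only
{
    // At class scope, only declarations (fields, properties, methods) are
    // legal. A bare assignment statement here, such as:
    //     qry = new NHapi.Model.V231.Message.QRY_R02();
    // produces "Invalid token '=' in class, struct, or interface member
    // declaration" -- and the editor flags every '=' that follows.

    // The same statements compile fine inside a method body:
    void Build()
    {
        var qry = new NHapi.Model.V231.Message.QRY_R02();
        // ... populate the message here ...
    }
}
```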
Even before I looked at the code, based on the description, I suspected it was something such as what Ante Meridian said.
Something that can help check the braces is to put the cursor on a brace and click Ctrl-]. That will take you to the matching brace or whatever. It works on multiple types of braces; both curly braces and parentheses. I think it even works on the comment operators /* and */.
Sam Hobbs
SimpleSamples.Info
Isn't this the issue:
NHapi.Model.V231.Message.QRY_R02 qry = new
That doesn't look like the correct way to substantiate a class, right?
And what's the next line doing?
NHapi.Model.V231.Message.QRY_R02();
Thank you!
Normally the instantiation would be written on one line.
NHapi.Model.V231.Message.QRY_R02 qry = new NHapi.Model.V231.Message.QRY_R02();
But split over two lines, it should still work. The compiler will just treat the second line as a continuation of the first, until it hits a semi-colon.
Have you checked the braces, as we suggested?
Hi BlairRV,
>>That doesn't look like the correct way to substantiate a class, right?
Have you resolved the issue? If yes, please mark the helpful reply as an answer; it will benefit other community members who have a similar issue. If not, could you please share a simple demo which reproduces the issue? We'll try our best to find a solution to resolve it.

I tried the code above with the latest nHAPI package (2.5.0.6). It is compiling and working just fine.
Maybe the error was in another part of your code.
Here is my complete listing.

using NHapi.Base.Model;
using NHapi.Base.Parser;
using NHapi.Model.V231.Datatype;
using NHapi.Model.V231.Message;
using NHapi.Model.V231.Segment;
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Windows.Forms;
namespace GenerateurHL7
{
public partial class Form1 : Form
{
public Form1()
{
InitializeComponent();
}
private void button1_Click(object sender, EventArgs e)
{
string result;
textBox1.Text = test();
}
string test()
{
string facility = "";
string messageControlId = "";
string mrn = "";); // G);
}
}
} | https://social.msdn.microsoft.com/Forums/en-US/a0a72d5b-2976-4c4d-a535-5ec9ab1dbc09/help-with-c-syntax?forum=csharpgeneral | CC-MAIN-2020-50 | refinedweb | 506 | 68.77 |
Hello,
I am developing an App for Project 2013 using OAuth for the connection and CSOM to access data of the project Instance.
I have been able to get the Resources and Assignments of the project open by the user but haven't been able to get a timespan of each assignment (How many hours each Resource is assigned per day).
I haven't found much help in the CSOM SDK as they aren't many examples.
I know there is a TimeSpan class in the Microsoft.ProjectServer.Client but don't know how to use it.
With OData you can access these data calling /Resources/TimephasedInfoDataSet, how can I do it in CSOM ?
Thanks in advance,
Thomas.
Hi Thomas, I think the link below will be useful to you.
Client-side object model (CSOM) for Project Server 2013:
By the way, I couldn't find TimeSpan class in Microsoft.ProjectServer.Client namespace, do you mean "TimePhase"?
Oops my bad, I meant TimePhase.
I know the link you have sent me, but there aren't any examples treating Resources or TimePhases.
I have been able to get the resources and Assignments but can't get them by day.
I know it is pretty easy to do with REST but I don't know about CSOM and the documentation is thin.
Any example on how to achieve it ? | https://social.technet.microsoft.com/Forums/en-US/05a4657a-a72e-419c-ac99-235bd8ef391c/timespan-in-csom?forum=projectserver2010general | CC-MAIN-2020-45 | refinedweb | 227 | 82.44 |
1.00/1.001 Quiz 2, Fall 2004

1.00/1.001 Introduction to Computers and Engineering Problem Solving
Quiz 2 / November 5, 2004

Name:
Email Address:
TA:
Section:

Question        Points
Question 1      / 10
Question 2      / 30
Question 3      / 30
Question 4      / 30
Extra Credit    / 10
Total           / 100

You have 90 minutes to complete this exam. For coding questions, you do not need to include comments, and you should assume that all necessary files have already been imported. Good luck.
Question 1. True / False (10 points)

1. An event source in Swing can have more than one event listener registered with it. TRUE FALSE

2. An event listener in Swing can respond to events from more than one event source. TRUE FALSE

3. A Java class can inherit from more than one class. TRUE FALSE

4. A Java class can implement multiple interfaces. TRUE FALSE

5. Consider the following classes defined in separate Java source files:

public class Animal {
    private int age;
}

public class Lion extends Animal {
    public Lion(int a) { age = a; }
}

The above code would compile. TRUE FALSE
Question 2. Swing Components and Events (30 points)

Read the following code carefully and answer the questions. Please assume that all the necessary packages are imported.

public class MyApplication          // Part 1
{
    private JLabel message;
    private Font myFont = new Font("SansSerif", Font.BOLD, 18);

    public MyApplication() {
        JButton yesButton = new JButton("Yes");
        JButton noButton = new JButton("No");
        message = new JLabel("IS LEARNING JAVA A FUN EXPERIENCE??");
        message.setFont(myFont);

        JPanel panel = new JPanel();
        panel.setLayout(new BorderLayout());
        JPanel buttonPanel = new JPanel();
        buttonPanel.add(yesButton);
        buttonPanel.add(noButton);
        panel.add(message, BorderLayout.CENTER);
        panel.add(buttonPanel, BorderLayout.SOUTH);

        Container con = this.getContentPane();
        con.add(panel);

        yesButton.addActionListener(new ActionListener() {
            public void actionPerformed(ActionEvent e) {
                //
                // Implementation hidden
                //
            }
        });

        noButton.addActionListener(
            // Part 3
        );
    }

    public static void main(String[] args) {
        MyApplication app = new MyApplication();
        app.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        app.pack();
        app.setVisible(true);
    }
}
In the above piece of code, new Java learners are asked a question as shown in Figure 1. Depending on the answer, the application either displays the message of Figure 2 on a green background or that of Figure 3 on a yellow background. Look at the code carefully and answer the questions given below:

Figure 1
Figure 2
Figure 3

Part 1. The main() method invokes the object of the class MyApplication and displays it on screen. What should be the correct declaration of the class?

Part 2.
This note was uploaded on 11/29/2011 for the course CIVIL 1.00 taught by Professor Georgekocur during the Spring '05 term at MIT.
Integrating mod_python with TurboGears .8a1
Thanks to Jamie's wonderful mpcp script, this could not have been easier. As always, this is what worked for me. YMMV. And $TG_MP = [your turbogears project directory].
Go grab the mpcp.py from and put it into $TG_MP. Or, if you use setuptools/easy_install, you can do 'easy_install -Z mpcp'; make sure to include the '-Z'.
Put an .htaccess file in $TG_MP with the following:
SetHandler mod_python
PythonHandler mpcp
PythonDebug On
PythonOption cherrysetup $NAME_OF_YOUR_START_SCRIPT::mp_setup
Warning! The start script is usually MyAppName-start.py, and the dash will screw up the PythonOption. Ergo, lose the dash and there shouldn't be any problems.
Now, in your (newly renamed) MyAppName_start.py make the following changes:
1) move cherrypy.server.start() so it only starts if __name__ == "__main__".
if __name__ == "__main__": cherrypy.server.start()
2) create a new no-op method, called mp_setup(). As such:
def mp_setup(): pass
This method is supposed to have cherrypy production config stuff, but our prod.cfg file handles that, so make it a no-op. The better solution would be to remove the line in mpcp that looks for cherrysetup, but that's something I'll worry about when a push to production is near.
Note: (version 1.2), instead of hacking up your local copy.
Good luck!
Note: To get this running on a Linux machine a little more work is required to correctly map file paths. Here is my working start script:
import pkg_resources
pkg_resources.require("TurboGears")
import cherrypy
from os.path import *
import sys

def mp_setup():
    pass

if exists(join(dirname(__file__), "setup.py")):
    cherrypy.config.update(file=join(dirname(__file__), "dev.cfg"))
else:
    cherrypy.config.update(file=join(dirname(__file__), "prod.cfg"))

from pylucenetest.controllers import Root
cherrypy.root = Root()
TreeCache.getKeys() returns null sometimesElias Ross Apr 7, 2006 12:50 AM
See JBCACHE-535
Manik asked me:
Which corresponding unit test contradicts the TreeCache.getKeys() contract?[sp]
The test org/jboss/cache/optimistic/NodeInterceptorGetKeysTest.java contradicts the documentation in two places:
//assert we can see this with a key value get in the transaction
assertEquals(0, cache.getKeys("/").size());
also later
assertEquals(0, cache.getKeys("/one/two").size());
This is from 1.143, before I got my hands on it:
/**
 * @param fqn
 * @return A Set<String> of keys. This is a copy of the key set, modifications will not be written to the original.
 *         Returns null if the node is not found, or the node has no attributes
 */
public Set getKeys(Fqn fqn) throws CacheException {
   MethodCall m = new MethodCall(getKeysMethodLocal, new Object[]{fqn});
   return (Set) invokeMethod(m);
}

public Set _getKeys(Fqn fqn) throws CacheException {
   Set retval = null;
   DataNode n = findNode(fqn);
   if (n == null) return null;
   retval = n.getDataKeys();
   // return retval != null? new LinkedHashSet(retval) : null;
   return retval != null ? new HashSet(retval) : null;
}
It now looks like this:
/**
 * Returns a set of attribute keys for the Fqn.
 * Returns null if the node is not found, or the node has no attributes
 * @param fqn
 * @return a copy of the key set, modifications will not be written to the original.
 */
public Set getKeys(Fqn fqn) throws CacheException {
   MethodCall m = new MethodCall(getKeysMethodLocal, new Object[]{fqn});
   Set s = (Set) invokeMethod(m);
   if (s == null || s.isEmpty()) return null; // sort of dumb IMHO
   return s;
}

public Set _getKeys(Fqn fqn) throws CacheException {
   DataNode n = findNode(fqn);
   if (n == null) return null;
   Set keys = n.getDataKeys();
   if (keys == null) return null;
   return new HashSet(keys);
}
The tests currently (as if right now) don't pass. I'm thinking since the API wasn't very clean to begin with, I want to change it to be more useful and consistent:
1. If the node exists, return the keys. If there are no keys, return an empty set.
2. If the node does not exist, return null.
3. Possibly change DataNode's docs and behavior to match it
It used to be that:
1. If the node exists, return the keys. If there are no keys, return an empty set.
2. Also, if DataNode.getDataKeys() returned null because no keys were loaded (lazy created), return null
3. The documentation was not correct
My changes to optimize the loading behavior of CacheLoader caused certain tests to break that relied on some state that DataNode was in.
I think returning an empty set in all cases where the node exists is less likely to break people's code.
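A stand-alone sketch of that proposed contract, using a plain nested map in place of the real node tree (the class and method names here are illustrative, not JBoss Cache API):

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class GetKeysContract {
    // Toy stand-in for the cache: fqn -> attribute map.
    private final Map<String, Map<String, Object>> nodes =
            new HashMap<String, Map<String, Object>>();

    public void putNode(String fqn, Map<String, Object> data) {
        nodes.put(fqn, data);
    }

    // Proposed semantics: null only when the node does not exist;
    // an existing node with no attributes yields an empty set.
    public Set<String> getKeys(String fqn) {
        Map<String, Object> data = nodes.get(fqn);
        if (data == null) return null;                 // node not found
        return new HashSet<String>(data.keySet());     // copy, possibly empty
    }

    public static void main(String[] args) {
        GetKeysContract cache = new GetKeysContract();
        cache.putNode("/one/two", new HashMap<String, Object>()); // exists, no keys

        Map<String, Object> attrs = new HashMap<String, Object>();
        attrs.put("key", "value");
        cache.putNode("/one/three", attrs);                       // exists, one key

        System.out.println(cache.getKeys("/missing"));        // null
        System.out.println(cache.getKeys("/one/two").size()); // 0
        System.out.println(cache.getKeys("/one/three"));      // [key]
    }
}
```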
There are also a couple of other APIs which either return null for the empty set or an empty set or both. These should probably be documented or fixed or at least logged as issues to fix in the future.
This content has been marked as final. Show 1 reply
1. Re: TreeCache.getKeys() returns null sometimesManik Surtani Apr 7, 2006 6:55 AM (in response to Elias Ross)
I agree, there are inconsistencies around the treatment of empty sets. I'd expand JBCACHE-540 to encompass all such inconsistencies in return type (inc Javadocs)
Bits to be aware of include loading of nodes without initialising data (internal uninitialised entry in the data map), etc. | https://developer.jboss.org/thread/97084 | CC-MAIN-2019-09 | refinedweb | 551 | 64 |
jndi name of ejb in earKeith Naas Dec 4, 2006 3:08 PM
EJBs deployed from an EAR are put under the EAR's jndi namespace. For instance if the ear file is "my-ear.ear", and there is a Local Session Bean with the name of "MySessionBean", it will be bound to "my-ear/MySessionBean". In order for Seam to lookup this EJB, it requires a change to the components.xml so that instead of using "#{ejbName}/local" it uses "my-ear/#{ejbName}/local". While this works great, the ear name is now hardcoded in the ejb jar module inside of the ear. This isn't such a big problem until someone renames the ear file. At this point,the components.xml has to be updated since Seam can no longer find "my-ear/MySessionBean/local". This is a big problem, especially when trying to run multiple versions of the same application on a single server.
One possibility to fix the problem is to override the jndi name using either the @LocalBinding or @JndiName annotations. However, the jndi bindings of different EARs can collide since they both use the same name, which is now hardcoded in the Java class. This isn't such a big problem until someone renames the ear file. At this point,the components.xml has to be updated since Seam can no longer find "my-ear/MySessionBean/local". This is a big problem, especially when trying to run multiple versions of the same application on a single server.
Would it be possible for the Seam jndiPattern to automatically look in the EAR's jndi namespace?
1. Re: jndi name of ejb in earEric Ray Dec 4, 2006 4:54 PM (in response to Keith Naas)
I'm interested in a solution to this as well.
2. Re: jndi name of ejb in earPete Muir Dec 4, 2006 5:48 PM (in response to Keith Naas)
Easy. Write an ant task to filter components.xml.
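For example, such a filtering step might look like this (a hypothetical target; the token name, paths, and property names are all illustrative):

```xml
<!-- Copies components.xml, replacing an @EAR.NAME@ token with the
     actual ear name before the jar is packaged. -->
<target name="filter-components">
  <copy file="src/components.xml.template"
        tofile="build/components.xml" overwrite="true">
    <filterset>
      <filter token="EAR.NAME" value="${ear.name}"/>
    </filterset>
  </copy>
</target>
```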
3. Re: jndi name of ejb in earGavin King Dec 5, 2006 7:53 AM (in response to Keith Naas)
Would it be possible for the Seam jndiPattern to automatically look in the EAR's jndi namespace?
I wish. Unfortunately, no.
4. Re: jndi name of ejb in earKeith Naas Dec 5, 2006 2:18 PM (in response to Keith Naas)
Thanks. We settled on hardcoding the ear name in the @JndiName for each service along with placing it in the components.xml. This way, if the ear name changes, the services are still bound to the same namespace. | https://developer.jboss.org/thread/133459 | CC-MAIN-2019-09 | refinedweb | 371 | 70.84 |
Anonymous Types in Visual Basic
Alexandre Moura, Visual Basic QAMicrosoft Corporation
April 2008. (6 printed pages).
In Visual Basic 2008 the declaration of a new instance changed to allow not specifying a type as long as members are assigned to in a curly brackets delimited list, thus creating an anonymous typed instance. The statement: "New With {.x = 5}" creates an anonymous type instance with a member named x. Its syntax is similar to object initializers except that object initializers specify a type between the "New" and "With" keywords.
Anonymous type members infer their type from the values assigned to them – in the above case, x will have type Integer, the default type for numeric integer literals.
Module Module1 Sub Main() 'Definition of a local anonymous type variable. 'Type inference (on by default for new projects created with Visual 'Studio 2008) will cause var1 to be strongly typed, with the type of 'the expression on the right of the = sign. Dim var1 = New With {.x = 5, .y = #1/2/0003#} End Sub End Module
Sample 1. Anonymous type instance.
Let's take a look at what VB actually does behind the curtains. Copy the above code to a text file (named, for the purpose of this example, v.vb). Open the Visual Studio command prompt (Start\All programs\Microsoft Visual Studio 2008\Visual Studio Tools\Visual Studio 2008 Command Prompt) and type vbc.exe v.vb – this will compile the above text into v.exe. (This is also done behind the scenes in the Visual Studio IDE.)

Now type ildasm v.exe – Ildasm (intermediate language disassembler) allows us to view the IL (Intermediate Language) that constitutes a compiled .NET assembly, so we can see exactly what's going on "under the hood". Ildasm will open once you type this line, displaying the IL code generated for this particular assembly by the Visual Basic compiler.
In Ildasm, expanding v.exe, we see the following tree structure:
We can see several of the elements included in the assembly: its manifest, the namespaces My and System.Xml.Linq, Module Module1, and a distinctively named class: VB$AnonymousType_0`2<T0,T1> - as you may now suspect, this is the anonymous type that the compiler created to define our instance.
So why such a distinctive name? The reason is we wanted to guarantee the uniqueness of the name, plus discourage people from actually using the name in code (at this point, that's not supported). Let's break the name down. "VB" identifies the type as being generated by the Visual Basic compiler, as opposed to another language or user type. The "$" symbol was chosen because it's not allowed in Visual Basic, meaning that types defined in code can never collide in name with this type. "$" is a valid character for a type name in IL – but you cannot define a Visual Basic type that has that symbol in its identifier. So the Visual Basic compiler emits a type that cannot collide with an already existing Visual Basic type. Next, "AnonymousType" simply indicates that this is an anonymous type. We still have: _0`2<T0,T1> - I'll skip the "_0" for the moment and focus on "`2<T0,T1>". This identifies the type as a generic type, with two type parameters – it's the equivalent on a type definition of "(of T0, T1)" in Visual Basic – the "`2" specifies that it has two type parameters, and <T0,T1> specify what their names are. Going back to _0, the reason this is added is so that we can emit more than one anonymous type with the same number of Type Parameters. We'll get to why we would want to do that further ahead.
Let's expand the type in ILDASM. We should see the following tree:
v.exe
  Manifest
  My
  VB$AnonymousType_0`2<T0,T1>
    .class private auto ansi
    .custom instance void [mscorlib]System.Diagnostics.DebuggerDisplayAttribute::.ctor(string) = (...)
    .custom instance void [mscorlib]System.Diagnostics.DebuggerDisplayAttribute::.ctor() = (...)
    $x : private !0
    $y : private !1
    .ctor : void(!T0,!T1)
    ToString : string()
    get_x : !T0()
    get_y : !T1()
    set_x : void(!T0)
    set_y : void(!T1)
    x : instance !0()
    y : instance !1()
  Module1
We now have a bit more information. The class is private, meaning that it can only be accessed in this assembly (a second assembly that creates an anonymous type will have to ignore this class definition). It has two public properties, x and y, and two private data members, $x and $y – they have types T0 and T1 respectively. Properties are used to expose the members. While data members and properties can be accessed in the same way, this is not universal – different languages may have different syntaxes to access one or the other. (For example, VB allows access to a property with parentheses – were this property changed to a data member, code that accessed the element using parentheses would now result in a compiler error. While this particular example goes against the order in which changes might happen if we had gone with exposing data members on anonymous types, it's representative of the kind of issues that we're trying to avoid.) As for the types of the properties, why are they T0 and T1, rather than Integer and Date? Well, the idea is to reduce the amount of anonymous types that are actually created. I'll come back to this later.
Also note that Equals has been overridden. We won't go through the code in IL, but it basically compares each member of the class (default behavior for reference type instances is to compare the references of the instances themselves, so two different instances whose members compare as equal will still compare as unequal – for anonymous types, they will compare as equal). See the anonymous types conceptual topic for further discussion on comparing anonymous types.
Finally, notice that ToString was also overridden – calling it will return not the typical type name (since it's supposed to be an anonymous type), but a string containing the instance's member values – for example, calling ToString on var1 from the example above will return "{ x = 5, y = 1/2/0003 12:00:00 AM }" (The actual date format you may see depends on your locale).
So why does an anonymous type use generic type members internally? Take the following code, which exposes two anonymous type definitions, the first with properties x as Integer, y as Date, and the second with x as Char, y as Double:
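Based on that description, the two instances might be declared like this (a sketch consistent with the description; the original listing may have differed):

```vb
Module Module1
    Sub Main()
        'First anonymous type: x As Integer, y As Date
        Dim var1 = New With {.x = 5, .y = #1/2/0003#}
        'Second anonymous type: x As Char, y As Double
        Dim var2 = New With {.x = "c"c, .y = 3.3}
    End Sub
End Module
```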
Sample 2. Two anonymous type instances, showing type merging.
Save the above code in a text file, compile the code with vbc.exe and open the resulting .exe with ildasm (make sure to close the other window of ildasm if you use the same text file to build this sample). We see the following tree:
Note that we still only have one anonymous type definition, even though there are two distinct instances in the code. What happens is that as long as the name and order of the members is the same, the same generic type can be used to create the instances – and so only one extra class has to be added to the assembly.
There is an interesting side effect to this. If you create two instances with the same type, the instantiated generic type is the same. You can assign the variables that infer those types one to the other, if you so wish. but if they differ in type, name or order, the assignment will result in an error:
Sample 3. Showing that Var1 and Var2's variable type is the same, while Var3 is not.
This is only possible if the names and order stay the same – if either changes, a separate class will be created – this is where the "_0" on the anonymous type class name comes in handy:
Sample 4. Two anonymous instances that result in two anonymous type classes
The above code results in the following if you analyze it in ildasm:
Note that the second class is named almost exactly like the first – only the "_0" and "_1" terminations are different.
One of the ways anonymous types are useful is in LINQ queries. Let's see how.
Sample 5. Linq query sample showing anonymous type of query elements.
In the above sample, we query over an array, and select element members, which we name x and xx. Doing so creates an anonymous type with the specified members. That can be seen by checking the type of any element that is returned from the query, as we do in the above code which will print out:
Without anonymous types, a user would have to manually create a type to hold the values to be used in the query, and create and initialize its instances. While the compiler could automatically create such a type and not hide it, that would likely lead to cluttered code as new anonymous types get created for subsequent queries.
In this document we have covered Anonymous types, a new feature in Visual Basic 2008 – from their creation in code to their implementation by the Visual Basic compiler, by inspecting the IL generated by the mentioned compiler. We have seen that generic classes are created to reduce the number of classes that are required to support different anonymous types, and a sample of their usage in LINQ queries. | http://msdn.microsoft.com/en-us/library/cc468406(v=vs.90).aspx | CC-MAIN-2014-42 | refinedweb | 1,556 | 62.07 |
Creating a Web API in ASP.NET Core
RESTful services are quite common and popular these days. If you ever developed modern service-based applications using ASP.NET Web Forms and ASP.NET MVC, chances are you used a Web API to create REST services. No wonder ASP.NET Core also allows you to create a Web API. This article discusses how a Web API service can be created in ASP.NET Core. It then shows how the created service can be invoked from JavaScript code.
Creating the Employee service
Let's create a Web API service that performs CRUD (Create, Read, Update, and Delete) operations on the Employees table of the Northwind database.
To begin, create a new ASP.NET Core Web application using Visual Studio 2017 (see Figure 1).
Figure 1: Creating a new ASP.NET Core Web application
Pick Web API as the project template in the next dialog, shown in Figure 2.
Figure 2: Choosing the project template
This way, you will get the default configuration and Web API controller that you can modify to suit your needs.
When you create the project, open ValuesController.cs from the Controllers folder. You should see something like this:
namespace WebAPIInAspNetCore.Controllers { [Route("api/[controller]")] public class ValuesController : Controller { [HttpGet] public IEnumerable<string> Get() { return new string[] { "value1", "value2" }; } [HttpGet("{id}")] public string Get(int id) { return "value"; } .... .... }
Notice that the ValuesController class inherits from the Controller base class. This is different from ASP.NET, where the Web API controller inherits from the ApiController base class. In ASP.NET Core, the Web API has been merged with MVC; therefore, both kinds of controllers—MVC controllers as well as Web API controllers—inherit from the Controller base class.
Because a Web API is now just another controller, you can have as many actions as you want. Of course, to create a REST-style service, you still need to have those standard Get(), Post(), Put(), and Delete() actions as before. You can define the HTTP verb and action mapping on top of an action by using attributes such as [HttpGet], [HttpPost], [HttpPut], and [HttpDelete]. This is different than before, where HTTP verbs were automatically mapped to the actions by default. The ValuesController class also has a [Route] attribute that decides how the service will be accessed. By default, the service URL takes the form /api/values, but you can change the route as needed.
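As a sketch of how that mapping typically looks on a CRUD-style controller (illustrative only; the action bodies here are placeholders, not the article's actual implementation):

```csharp
[Route("api/[controller]")]
public class EmployeeController : Controller
{
    [HttpGet]                    // GET /api/employee
    public IActionResult Get() => Ok();

    [HttpGet("{id}")]            // GET /api/employee/5
    public IActionResult Get(int id) => Ok();

    [HttpPost]                   // POST /api/employee
    public IActionResult Post([FromBody] Employee emp) => Ok();

    [HttpPut("{id}")]            // PUT /api/employee/5
    public IActionResult Put(int id, [FromBody] Employee emp) => Ok();

    [HttpDelete("{id}")]         // DELETE /api/employee/5
    public IActionResult Delete(int id) => Ok();
}
```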
Okay. Rename the ValuesController (both the class and the .cs files) to EmployeeController and save the file. You will revisit EmployeeController after creating the Entity Framework Core model required by the application.
Next, you need to add Entity Framework Core to your project. To do so, right-click the Dependencies folder in the Solution Explorer, and select the Manage NuGet Packages option. Then, add the EntityFrameworkCore.SqlServer NuGet package. After doing this step, you should see the required packages added to your project (see Figure 3).
Figure 3: Adding the EntityFrameworkCore.SqlServer NuGet package
Then, open the Startup class and add EF Core to the services collection. The following code shows how this is done:
public void ConfigureServices(IServiceCollection services) { // Add framework services. services.AddMvc(); services.AddEntityFrameworkSqlServer(); }
ConfigureServices() calls AddEntityFrameworkSqlServer() to add EF Core to the list of services. Although we are not configuring the DbContext here for DI, you could have done so if needed. For the sake of simplicity, we will specify the connection string in the DbContext class itself.
Also, modify the Configure() method to set up the default MVC route as shown below:
public void Configure(IApplicationBuilder app,
    IHostingEnvironment env, ILoggerFactory loggerFactory)
{
    loggerFactory.AddConsole(Configuration.GetSection("Logging"));
    loggerFactory.AddDebug();
    app.UseMvcWithDefaultRoute();
}
Here, we call the UseMvcWithDefaultRoute() method to configure the default route (/controller/action/id) for MVC requests. We do this because later we will develop a JavaScript client that invokes the Web API, and we will need an MVC controller and view to serve that client.
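To make the default route convention concrete, here is a small, purely illustrative JavaScript sketch (not part of the application) of how a path such as /Home/Index is split into the controller, action, and optional id segments:

```javascript
// Illustrative only: mimics how ASP.NET Core's default route template
// "{controller=Home}/{action=Index}/{id?}" resolves an incoming path.
function matchDefaultRoute(path) {
  var segments = path.split("/").filter(function (s) { return s.length > 0; });
  return {
    controller: segments[0] || "Home", // default controller
    action: segments[1] || "Index",    // default action
    id: segments[2] || null            // optional id segment
  };
}

console.log(matchDefaultRoute("/Home/Index"));
// { controller: "Home", action: "Index", id: null }
console.log(matchDefaultRoute("/"));
// falls back to the Home/Index defaults
```

This is only a mental model; the real routing middleware also handles constraints, areas, and attribute routes.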
Now, you can create the EF Core model needed by the application. To do so, add a Models folder (you could have placed these classes in some other folder, also) and add two classes to it—Employee and NorthwindDbContext. The following code shows what Employee class looks like.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using System.ComponentModel.DataAnnotations;
using System.ComponentModel.DataAnnotations.Schema;

namespace WebAPIInAspNetCore.Models
{
    [Table("Employees")]
    public class Employee
    {
        [DatabaseGenerated(DatabaseGeneratedOption.Identity)]
        [Required]
        public int EmployeeID { get; set; }

        [Required]
        public string FirstName { get; set; }

        [Required]
        public string LastName { get; set; }

        [Required]
        public string City { get; set; }
    }
}
The Employee class is mapped to the Employees table using the [Table] data annotation attribute. It has four properties: EmployeeID, FirstName, LastName, and City. The EmployeeID property also is marked using the [DatabaseGenerated] attribute because it's an identity column.
The NorthwindDbContext class is shown next.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

namespace WebAPIInAspNetCore.Models
{
    public class NorthwindDbContext : DbContext
    {
        public DbSet<Employee> Employees { get; set; }

        protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
        {
            optionsBuilder.UseSqlServer("data source=.;initial catalog=northwind;integrated security=true");
        }
    }
}
The NorthwindDbContext class inherits from the DbContext class. It defines the Employees DbSet. The connection string to the Northwind database is specified in the OnConfiguring() method. Make sure to change the database connection string as per your setup.
Okay. So far, so good. Now, open the EmployeeController and write the following code in it.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using WebAPIInAspNetCore.Models;
using Microsoft.EntityFrameworkCore;

namespace WebAPIInAspNetCore.Controllers
{
    [Route("api/[controller]")]
    public class EmployeeController : Controller
    {
        [HttpGet]
        public List<Employee> Get()
        {
            using (NorthwindDbContext db = new NorthwindDbContext())
            {
                return db.Employees.ToList();
            }
        }

        [HttpGet("{id}")]
        public Employee Get(int id)
        {
            using (NorthwindDbContext db = new NorthwindDbContext())
            {
                return db.Employees.Find(id);
            }
        }

        [HttpPost]
        public IActionResult Post([FromBody]Employee obj)
        {
            using (NorthwindDbContext db = new NorthwindDbContext())
            {
                db.Employees.Add(obj);
                db.SaveChanges();
                return new ObjectResult("Employee added successfully!");
            }
        }

        [HttpPut("{id}")]
        public IActionResult Put(int id, [FromBody]Employee obj)
        {
            using (NorthwindDbContext db = new NorthwindDbContext())
            {
                db.Entry<Employee>(obj).State = EntityState.Modified;
                db.SaveChanges();
                return new ObjectResult("Employee modified successfully!");
            }
        }

        [HttpDelete("{id}")]
        public IActionResult Delete(int id)
        {
            using (NorthwindDbContext db = new NorthwindDbContext())
            {
                db.Employees.Remove(db.Employees.Find(id));
                db.SaveChanges();
                return new ObjectResult("Employee deleted successfully!");
            }
        }
    }
}
The EmployeeController consists of five actions: Get(), Get(id), Post(), Put(), and Delete(). These actions are discussed next.
The Get() action returns a List of Employee objects. Inside, the code instantiates the NorthwindDbContext and returns all the entities from the Employees DbSet. The Get(id) action accepts an EmployeeID and returns just that Employee as its return value.
The Post() action receives an Employee object as its parameter. This is the new employee to be added to the database. Inside, it adds the Employee to the Employees DbSet by using the Add() method. The SaveChanges() method is called to save the changes to the database. Notice that the obj parameter is decorated with the [FromBody] attribute. This is necessary for model binding with JSON data to work as expected. Post() then returns a success message wrapped in ObjectResult to the caller.
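To visualize what the client must send, here is a minimal sketch of the JSON body that model binding maps onto the Employee parameter. The names and values are hypothetical; the camel-cased property names match the jQuery client shown later in this article:

```javascript
// A sketch of the request body that [FromBody] binds to the Employee
// parameter. The camel-cased names map onto FirstName, LastName, and
// City on the server.
var employee = {
  firstName: "Nancy",
  lastName: "Davolio",
  city: "Seattle"
};

var payload = JSON.stringify(employee);
console.log(payload);
// {"firstName":"Nancy","lastName":"Davolio","city":"Seattle"}
```

A string like this is exactly what the jQuery client later places in the data property of its Ajax options.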
The Put() method accepts two parameters: EmployeeID and Employee object. Inside, the State property is set to EntityState.Modified. This way, the entity is marked as modified. The SaveChanges() method attempts to save the changes to the database. The Put() method also returns a success message as before.
The Delete() action accepts the EmployeeID of the employee to be deleted. It then looks for that Employee in the Employees DbSet by using the Find() method. The existing Employee returned by Find() is removed from the DbSet by using the Remove() method. The SaveChanges() method is called to save the changes to the database. A success message is then returned to the caller.
Now, run the application by pressing F5. If all goes well, you should see the JSON data returned from the Get() action. Figure 4 shows how the JSON data is displayed in Firefox.
Figure 4: JSON data has been returned from the Get() action
Creating a Client by Using jQuery Ajax
Now that your Employee service is ready, let's consume it by building a jQuery client. Right-click the Controllers folder and select Add > New Item from the shortcut menu. Using the Add New Item dialog, add an MVC controller—HomeController (see Figure 5).
Figure 5: Building a jQuery client
The controller contains the Index() action. We don't need to add any code to Index() because all our code will be in the Index view. Index() simply returns the view.
Then, add a subfolder under Views and name it Home. Next, add an Index view to the Views > Home folder. This can be done by using the Add New Item dialog as before (see Figure 6).
Figure 6: Adding an Index view to the Views > Home folder
Then, add a Scripts folder under the wwwroot folder and place the jQuery library there. If you want, you can also reference jQuery from a CDN.
Now, you need to design a form that looks like what you see in Figure 7:
Figure 7: Designing the form
The preceding Web page consists of a form that allows you to add, modify, and remove employees. When the page loads in the browser, the EmployeeID dropdownlist displays a list of existing EmployeeIDs. Upon selecting an EmployeeID, details of that Employee are fetched from the database and displayed in the first name, last name, and city textboxes. You then can modify the details and click the Update button to save them to the database. You can click the Delete button if you want to delete the selected employee. To add a new employee, simply enter new first name, last name, and city values and click the Insert button (EmployeeID is an identity column and therefore need not be entered from the page). The success message returned from the respective Web API actions is displayed in a <div> element below the table.
The following markup shows the HTML form behind this page.
<h1>Employee Manager</h1>
<form>
    <table border="1" cellpadding="10">
        <tr>
            <td>Employee ID :</td>
            <td><select id="employeeid"></select></td>
        </tr>
        <tr>
            <td>First Name :</td>
            <td><input id="firstname" type="text" /></td>
        </tr>
        <tr>
            <td>Last Name :</td>
            <td><input id="lastname" type="text" /></td>
        </tr>
        <tr>
            <td>City :</td>
            <td><input id="city" type="text" /></td>
        </tr>
        <tr>
            <td colspan="2">
                <input type="button" id="insert" value="Insert" />
                <input type="button" id="update" value="Update" />
                <input type="button" id="delete" value="Delete" />
            </td>
        </tr>
    </table>
    <br />
    <div id="msg"></div>
</form>
The <form> consists of a <select> element, three textboxes, and three buttons. Make sure to use the same IDs for the elements because later you will use them in the jQuery code. After adding the form markup, go to the <head> section and reference the jQuery library by using the <script> tag.
<script src="~/Scripts/jquery-3.1.1.min.js"></script>
Now, we need to write some jQuery code that invokes the Web API by making requests with appropriate HTTP verbs. Add an empty <script> block below the previous script reference and write the code discussed in the following sections.
Filling the dropdownlist with EmployeeIDs
$(document).ready(function () {
    var options = {};
    options.url = "/api/employee";
    options.type = "GET";
    options.dataType = "json";
    options.success = function (data) {
        data.forEach(function (element) {
            $("#employeeid").append("<option>" + element.employeeID + "</option>");
        });
    };
    options.error = function () {
        $("#msg").html("Error while calling the Web API!");
    };
    $.ajax(options);
});
The EmployeeID dropdownlist is populated in the ready() callback. The preceding code makes an Ajax request using the $.ajax() method of jQuery. Notice how the URL, type, and dataType properties of the options object are specified. Because we want to invoke the Get() action, the URL points to the Web API end point: /api/employee. The HTTP verb used is GET and the response data type is set to json. The success function simply fills the dropdownlist (the <select> element) with a series of <option> elements, each wrapping an EmployeeID. The error function displays an error message in case something goes wrong while calling the Web API. Once the options object is configured, $.ajax() of jQuery is called by passing the options object to it. Doing so will initiate an Ajax request to the Web API based on the specified configuration.
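To see exactly what the success function builds, the following self-contained sketch feeds the same forEach logic a hypothetical response array shaped like the data returned by the Get() action:

```javascript
// Hypothetical data in the shape the Get() action returns.
var data = [
  { employeeID: 1, firstName: "Nancy", lastName: "Davolio", city: "Seattle" },
  { employeeID: 2, firstName: "Andrew", lastName: "Fuller", city: "Tacoma" }
];

// Mirrors the success callback: one <option> per employeeID.
var optionsHtml = "";
data.forEach(function (element) {
  optionsHtml += "<option>" + element.employeeID + "</option>";
});

console.log(optionsHtml);
// <option>1</option><option>2</option>
```

The real callback appends this markup to the <select> element via jQuery instead of accumulating it in a string.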
Displaying the Details of a Selected Employee
$("#employeeid").change(function () {
    var options = {};
    options.url = "/api/employee/" + $("#employeeid").val();
    options.type = "GET";
    options.dataType = "json";
    options.success = function (data) {
        $("#firstname").val(data.firstName);
        $("#lastname").val(data.lastName);
        $("#city").val(data.city);
    };
    options.error = function () {
        $("#msg").html("Error while calling the Web API!");
    };
    $.ajax(options);
});
When a user selects an EmployeeID from the dropdownlist, details of that Employee are to be displayed in the other textboxes. Therefore, we use the change() method to wire an event handler to the change event of the dropdownlist. The code shown above is quite similar to the previous one in that it uses the GET verb. However, it appends the EmployeeID whose details are to be fetched to the URL. The success function fills the three textboxes with firstName, lastName, and city. Notice that the property names are automatically converted to use camel casing. This way, client-side code can use JavaScript ways of naming members, whereas server-side code can continue to use C# naming conventions.
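The naming conversion itself is straightforward; the helper below is only an illustration of what the server-side JSON serializer does to each property name:

```javascript
// Illustrative: how a C# property name such as "FirstName" surfaces
// as "firstName" in the JSON the client receives. The actual work is
// done by the serializer on the server, not by client code.
function toCamelCase(name) {
  return name.charAt(0).toLowerCase() + name.slice(1);
}

console.log(toCamelCase("FirstName")); // firstName
console.log(toCamelCase("City"));      // city
```

This is why the client reads data.firstName even though the C# class declares FirstName.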
Adding a New Employee
$("#insert").click(function () {
    var options = {};
    options.url = "/api/employee";
    options.type = "POST";
    var obj = {};
    obj.firstName = $("#firstname").val();
    obj.lastName = $("#lastname").val();
    obj.city = $("#city").val();
    options.data = JSON.stringify(obj);
    options.dataType = "html";
    options.contentType = "application/json";
    options.success = function (msg) {
        $("#msg").html(msg);
    };
    options.error = function () {
        $("#msg").html("Error while calling the Web API!");
    };
    $.ajax(options);
});

The code to add a new Employee goes inside the click event handler of the insert button. The above code uses the POST verb to invoke the Employee Web API. Moreover, it sets the data, dataType, and contentType properties. The data property is set to the stringified version of the new Employee object. Notice that this new object also uses camel casing when setting the properties. The dataType property is set to html because our Post() action returns a plain string. The contentType property indicates the data type of the request body (JSON, in this case). The success function simply displays the message returned by the Post() action in the msg <div> element.
Modifying an Existing Employee
$("#update").click(function () {
    var options = {};
    options.url = "/api/employee/" + $("#employeeid").val();
    options.type = "PUT";
    var obj = {};
    obj.employeeID = $("#employeeid").val();
    obj.firstName = $("#firstname").val();
    obj.lastName = $("#lastname").val();
    obj.city = $("#city").val();
    options.data = JSON.stringify(obj);
    options.dataType = "html";
    options.contentType = "application/json";
    options.success = function (msg) {
        $("#msg").html(msg);
    };
    options.error = function () {
        $("#msg").html("Error while calling the Web API!");
    };
    $.ajax(options);
});

The code that updates employee details goes in the click event handler of the update button. Most of this code is similar to the code you wrote in the insert click event handler, except that the EmployeeID being modified is appended to the URL and the HTTP verb used is PUT.
Deleting an Employee
$("#delete").click(function () {
    var options = {};
    options.url = "/api/employee/" + $("#employeeid").val();
    options.type = "DELETE";
    options.dataType = "html";
    options.success = function (msg) {
        $("#msg").html(msg);
    };
    options.error = function () {
        $("#msg").html("Error while calling the Web API!");
    };
    $.ajax(options);
});
The code that deletes an Employee goes in the click event handler of the delete button. The above code should look familiar to you because it follows the same outline as the other event handlers. This time, the HTTP verb used is DELETE and the EmployeeID to be deleted is appended to the URL.
This completes the client application. Run the application, navigate to /Home/Index, and test the CRUD operations.
Using DI to Inject DbContext into the Web API Controller
In the example you just developed, you instantiated NorthwindDbContext yourself in the EmployeeController. You also can put the DI features of ASP.NET Core to use for that purpose. Let's see how.
Open the appsettings.json file and store the database connection in it, as shown below:
{
  "Logging": {
    "IncludeScopes": false,
    "LogLevel": {
      "Default": "Warning"
    }
  },
  "ConnectionStrings": {
    "DefaultConnection": "Server=.; initial catalog=Northwind; integrated security=true; MultipleActiveResultSets=true"
  }
}
As you can see, the ConnectionStrings key stores the Northwind connection string under the name DefaultConnection. Make sure to change the connection string to suit your setup.
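Conceptually, Configuration.GetConnectionString("DefaultConnection") is just a lookup into the ConnectionStrings section of this file. A rough JavaScript sketch of that lookup (illustrative only):

```javascript
// appsettings.json content as a plain object (Logging section omitted).
var config = {
  ConnectionStrings: {
    DefaultConnection: "Server=.; initial catalog=Northwind; integrated security=true; MultipleActiveResultSets=true"
  }
};

// Rough equivalent of Configuration.GetConnectionString("DefaultConnection").
function getConnectionString(cfg, name) {
  return cfg.ConnectionStrings ? cfg.ConnectionStrings[name] : undefined;
}

console.log(getConnectionString(config, "DefaultConnection"));
```

The real configuration system also merges environment variables and other providers, but the end result for this key is the same string.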
Then, open the Startup class and import these two namespaces:
using WebAPIInAspNetCore.Models; using Microsoft.EntityFrameworkCore;
Here, we imported the Models namespace because our DbContext and entity classes reside in that namespace. We also imported Microsoft.EntityFrameworkCore because we want to use certain extension methods from this namespace.
Then, add the following code to the ConfigureServices() method.
public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();
    services.AddEntityFrameworkSqlServer();
    services.AddDbContext<NorthwindDbContext>(options =>
        options.UseSqlServer(
            Configuration.GetConnectionString("DefaultConnection")));
}
The preceding code registers the NorthwindDbContext with the DI framework. We also specify the connection string by using the UseSqlServer() extension method. Because you are specifying the connection string here, you can now remove OnConfiguring() from the NorthwindDbContext class. You also need to make one more addition in the NorthwindDbContext class. Add a public constructor, as shown below:
public NorthwindDbContext(DbContextOptions options) : base(options) { }
This constructor accepts a DbContextOptions object and passes it to the DbContext base class.
Now, open the EmployeeController class and modify it as shown below:
[Route("api/[controller]")]
public class EmployeeController : Controller
{
    private NorthwindDbContext db;

    public EmployeeController(NorthwindDbContext db)
    {
        this.db = db;
    }
    ....
}
The code declares a private NorthwindDbContext variable to hold the reference to the injected DbContext. The public constructor accepts the NorthwindDbContext. Once you grab the DbContext in the constructor, you can use it in all the actions. As an example, the following code uses the injected DbContext in the Get() action.
[HttpGet]
public List<Employee> Get()
{
    return db.Employees.ToList();
}
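Constructor injection, the pattern used here, is not specific to C#. The following minimal JavaScript sketch (with hypothetical data) shows the same idea: the dependency is created outside the controller and handed in through the constructor:

```javascript
// Minimal constructor-injection sketch: the "container" constructs the
// dependency once and hands it to the controller, which just stores it.
function NorthwindDbContext() {
  this.employees = [{ employeeID: 1, firstName: "Nancy" }];
}

function EmployeeController(db) {
  this.db = db; // injected dependency, not created by the controller
}

EmployeeController.prototype.get = function () {
  return this.db.employees;
};

// What the DI framework does behind the scenes, in spirit:
var controller = new EmployeeController(new NorthwindDbContext());
console.log(controller.get().length); // 1
```

In ASP.NET Core the container performs the `new EmployeeController(...)` step for you whenever a request reaches the controller.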
The complete source code of this example is available with this article's code download.
On Friday 27 May 2005 04:11, Chris Wedgwood wrote:

> mine was :-)

Yes, it took me some time to get working on this and to find the time to explain it clearly.

> is UML I wonder if we could save a couple of bytes & cycles for
> everyone else by doing something like #ifdef CONFIG_IRQ_HAS_RELEASE,
> #endif around that and then letting the Kconfig magic set
> CONFIG_IRQ_HAS_RELEASE as required? If other arches need it they can
> do the same and if eventually almost everyone does we can kill the
> #ifdef crud?

Well, that's a point, especially because a conditional jump needs to flush the pipeline when mispredicted (which won't happen on other ARCHs after the initial period, if this jump stays in the Branch Target Buffers).

> Longer term I wonder if some of the irq mechanics in UML couldn't end
> up being a bit more like the s390 stuff too?

Christoph Hellwig suggested this too; however, anything like that *must* be longer term (while last time this was pointed to as a reason to drop this patch).

Beyond that, I don't have a clear understanding of S390, so for now I cannot help (including any merit discussion) on this point... Bodo Stroesser is porting UML to S390, so he can probably help more here.

-- Paolo Giarrusso, aka Blaisorblade
Skype user "PaoloGiarrusso"
Linux registered user n. 292729
Greetings!
I've been messing around with a bit of code for a while now and I've got it compiling etc etc. The only problem is with perfecting the quicksort function (probably trivial, but then I'm not very skilled). From the output file, it seems to 'sort of' sort the string arrays that are fed in. I suspect the problem lies with the string compare function - which I'll admit I don't fully understand - but further fiddling has just given me a bunch of compilation errors.
Here is the quicksort, integrated to give some context (though if there's still too much unnecessary code, please tell me!):
#include <iostream> #include <iomanip> #include <cctype> #include <cmath> #include <cstdlib> #include <fstream> using namespace std; struct employee { char fname[20]; char sname[20]; int mnum; }; void falphaquickSort (employee** a, int left, int right);// function declaration int main(void){ employee* details = new employee[count]; employee** pdetails = new employee*[count]; for(int i=0; i < count; i++){// creates array (of structs to hold all persons in the file in >> details[i].fname >> details[i].sname >> details[i].mnum; }// etc etc. this part of the code works just fine. delete [] details; void falphaquickSort(employee** a, int left, int right) { int i = left, j = right; employee* tmp; int pivot = (left + right)/2; while(strcmp ((a[i])->fname, (a[pivot])->fname) < 0 ) {// start of partition. I'm told 'strcmp' gives a zero, -ve or +ve value depending on the relative values of the pointers but I'm stuck on how to utilise this to perfect the sort. ++i; } while(strcmp ((a[j])->fname, (a[pivot])->fname) > 0 ) { --j; } if (i <= j) { tmp = a[i]; a[i] = a[j]; a[j] = tmp; i++; j--; };// end of partition if (left < j) // start of recursion falphaquickSort(a, left, j); if (i < right) falphaquickSort(a, i, right); } | https://www.daniweb.com/programming/software-development/threads/246666/assistance-needed-for-perfection-of-quicksort | CC-MAIN-2018-17 | refinedweb | 309 | 63.53 |