#include <ConstrainedOptPack_MatrixSymPosDefLBFGS.hpp>
Inheritance diagram for ConstrainedOptPack::MatrixSymPosDefLBFGS:
The function set_num_updates_stored() must be called first to set the maximum number of the most recent updates that can be stored. The storage requirement for this class is O( n*m + m*m ), which is O(n*m) when n >> m, as expected (where n is the dimension of the vector space and m is the maximum number of updates stored).
This implementation is based on:
Byrd, Nocedal, and Schnabel, "Representations of quasi-Newton matrices and their use in limited memory methods", Mathematical Programming, 63 (1994)
Consider BFGS updates of the form:

( B^{k-1}, s^{k-1}, y^{k-1} ) -> B^{k}

where:

B^{k} = B^{k-1} - ( (B*s)*(B*s)' / (s'*B*s) )^{k-1} + ( (y*y') / (s'*y) )^{k-1}

B <: R^(n x n)
s <: R^(n)
y <: R^(n)

The initial matrix is:

Bo = ( 1 / gamma_k ) * I

where:

gamma_k = (s^{k-1}'*y^{k-1}) / (y^{k-1}'*y^{k-1})              : if auto_rescaling() == true
gamma_k = 1/alpha from the last call to init_identity(n,alpha)  : otherwise
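A minimal sketch of this update rule in Python (dense form, plain lists for portability; note the class itself works with the compact limited-memory representation of Byrd, Nocedal, and Schnabel rather than forming B explicitly):

```python
def bfgs_update(B, s, y):
    """One BFGS update ( B^{k-1}, s^{k-1}, y^{k-1} ) -> B^{k}, dense for clarity."""
    n = len(s)
    Bs = [sum(B[i][j] * s[j] for j in range(n)) for i in range(n)]   # B*s
    sBs = sum(s[i] * Bs[i] for i in range(n))                        # s'*B*s
    sy = sum(s[i] * y[i] for i in range(n))                          # s'*y
    # B - (B*s)(B*s)'/(s'*B*s) + y*y'/(s'*y)
    return [[B[i][j] - Bs[i] * Bs[j] / sBs + y[i] * y[j] / sy
             for j in range(n)] for i in range(n)]
```

The result satisfies the secant condition B^{k}*s^{k-1} = y^{k-1} and stays symmetric, which is easy to verify numerically.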
The matrix is represented using two matrices S and Y that store the update vectors s^{i} and y^{i} for i = 1 ... m_bar:

S = [ s^{1}, s^{2}, ..., s^{m_bar} ] <: R^(n x m)
Y = [ y^{1}, y^{2}, ..., y^{m_bar} ] <: R^(n x m)

Only the m_bar <= m most recent update vectors are stored, and their ordering is significant: the columns S(:,m_bar) and Y(:,m_bar) contain the most recent update vectors. This is all a client needs to know in order to reconstruct the updates themselves.
This class allows matrix-vector products x = B*y and inverse matrix-vector products x = inv(B)*y to be performed at a cost of about O(n*m_bar^2).
Definition at line 97 of file ConstrainedOptPack_MatrixSymPosDefLBFGS.hpp.
module not discovered
Hi,
I have started a small project with quickly. In my application (bin/<appname>) I have the following lines:
import pyfits
import matplotlib.pyplot as plt
When I run "quickly package", the debian control file has the following lines:
Source: fits-thumbnailer
Section: python
Priority: extra
Build-Depends: cdbs (>= 0.4.43),
debhelper (>= 6),
python,
python-support (>= 0.6.4),
python-
Maintainer: UNKNOWN <UNKNOWN>
Standards-Version: 3.8.3
XS-Python-Version: current
Package: fits-thumbnailer
Architecture: all
XB-Python-Version: ${python:Versions}
Depends: ${misc:Depends},
${python:
${python:Depends},
python-matplotlib
Description: UNKNOWN
UNKNOWN
Does that mean that the "import pyfits" (which corresponds to the package "python-pyfits") is not recognized as a dependency? And if so, how can I tell Quickly that there is some other dependency?
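For context, dependency scanners of this kind typically walk the script's AST to collect top-level imports and then map each module name to a Debian package. A simplified sketch of the first half (this is not the actual python-distutils-extra code; the package-mapping step is where an unknown module like pyfits can fall through):

```python
import ast

def top_level_imports(source):
    """Collect the top-level module names a script imports."""
    mods = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            mods.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            mods.add(node.module.split(".")[0])
    return mods
```

Running it on the two import lines above yields {"pyfits", "matplotlib"}, so finding the import is the easy part; the missing piece is the module-to-package mapping.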
Best regards,
Stefan
Question information
- Language: English
- Status: Answered
- For: Quickly
- Assignee: No assignee
- Last query: 2010-07-05
- Last reply: 2010-07-05
Thanks for using Quickly,
right, it seems that python-distutils-extra doesn't recognize this module. Can you open a bug against it please and subscribe me?
I'll try to have a look there.
As a workaround, you can add manual dependency in Quickly by launching:
quickly configure dependencies
They will be taken into account at the next "quickly package" command.
We’re probably done with new features for this release and have just published a new build which we call PyCharm 2.6 Beta.
The major additions since the last EAP are the following:
- Diagrams support for SQLAlchemy
- Infer function return types from NumPy docstrings
- pyvenv & namespace packages support for Python 3.3
The Beta is available for download from a special page with an overview of all the PyCharm 2.6 features.
There's still time to do some polishing before 2.6 goes public next week, and we're really looking forward to your feedback on the Beta.
Next week is also DjangoCon US. If you're going, meet us there! We've also prepared some cool gifts for the attendees.
Are you guys coming to PyCon UK at the end of September? There are still some great sponsorship opportunities going for extra cheap - see
Thanks Giacomo,
Very low chance someone can come.
How many attendees expected?
We could provide some licenses for a raffle. Any interest?
-Eugene
Hello Eugene
The Python Users Group in Ceara (PUGCE) will have a small event in Brazil next October 6, and I would like to know if you could offer just one license for a raffle.
event information:
V Pylestras
Thanks in advance.
I’m really happy you’ve fixed some last-minute reported bugs of mine. I’m looking forward to the 2.6 release.
Screenshots of the diagram support? Sounds really nice | http://blog.jetbrains.com/pycharm/2012/08/pycharm-2-6-goes-beta-and-adds-diagrams-for-sqlalchemy/ | CC-MAIN-2014-10 | en | refinedweb |
Hi,
I have a problem where I am trying to use Log4jMini as part of my J2ME application running on an N91, but I am having loads of problems trying to get it to work.
I have searched everywhere and there is little or no documentation on log4jMini and there is no mention of it on these forums either.
The problems I am having are, I think, due to library issues, and I'm not totally sure that this will work on Symbian.
Can anyone advise if they have any experience with this from before please?
Also, below is some of the code I am using and it compiles fine but the application crashes out when I run the application on my phone.
It must be this code, because if I remove it, the application runs fine.
Basically, what this code should do is just write one log entry "Awake awake" to a log file when the application first starts.

Code:
import org.apache.log4j.Logger;
import org.apache.log4j.PropertyConfigurator;
.
.
.
public class ILCMidlet extends MIDlet implements ..... {
    .
    .
    static Logger cat = Logger.getInstance(ILCMidlet.class.getName());
    .
    .
    public ILCMidlet() {
        .
        .
        PropertyConfigurator.configure(configFile);
        cat.info("Awake awake.");
    }
}
Any help would be greatly appreciated,
Thanks,
Seamus | http://developer.nokia.com/community/discussion/showthread.php/110425-Log4JMini-Log4JME | CC-MAIN-2014-10 | en | refinedweb |
I have been working on this Python script and I cannot figure out why it is not working. I have about 15 Web Feature Layers I want to regularly export to individual geodatabases and then download using the script. If I try to just download them, it returns a 1 KB zip file. So after looking online for a solution I ran across this one: Download feature service as file geodatabase

It worked great for 1 file, but like I said, I have 15 I would like to regularly download, so I spent some time tweaking that and came up with this code:
import time
import os
import zipfile
import arcgis.gis
from zipfile import ZipFile

# *** modify these four lines ***
outputFolder = r"OUTPUT"  # where the GDB will be extracted to
gis = arcgis.GIS("URL", "USER", "PASS")  # replace these with your credentials
item_id = ["ID1", "ID2", ......]
GDBname = ["MULTIPLE_NAMES"]

for i in range(len(item_id)):
    AGOLitem = gis.content.get(item_id[i])
    print("Exporting Hosted Feature Layer...")
    AGOLitem.export(GDBname[i], 'File Geodatabase', parameters=None, wait='True')
    time.sleep(10)  # add 10 seconds delay to allow export to complete
    search_fgb = gis.content.search(query="title:{}*".format(GDBname[i]))  # find the newly created file geodatabase in ArcGIS Online
    fgb_item_id = search_fgb[0].id
    fgb = gis.content.get(fgb_item_id)
    fgb.download(save_path=outputFolder)  # download file gdb from ArcGIS Online to your computer
    print("Zipping exported geodatabase for download...")

    '''while statement runs until a valid zipped file is created'''
    # randomly the output is a 1 KB file that is not a valid zipped file.
    # The while statement forces a valid zipped file to be created.
    zipfullpath = os.path.join(outputFolder, GDBname[i] + ".zip")  # full path to the zipped file once it is downloaded to your computer
    while zipfile.is_zipfile(zipfullpath) is False:
        fgb.download(save_path=outputFolder)
    zf = ZipFile(os.path.join(outputFolder, GDBname[i] + ".zip"))

    '''deleting hosted File Geodatabase'''
    # NOTE: This will delete the temporary File Geodatabase in ArcGIS Online
    print("Deleting "+fgb.title+"("+fgb.type+")"+" from ArcGIS Online...")
    fgb.delete()

print("Done!")
When I run this, it gets to "Zipping exported geodatabase" and then it gets hung up. In the folder it is supposed to save to, I have either a service definition or just a file with the original file name (so if the name online is "Features_for_you" but GDBname should have been "New_Features", I will get a file with the name "Features_for_you").
Things I have tried:
- I thought maybe I wasn't waiting long enough for the export, so I extended the sleep to 200 seconds just to test. Same result
- I have 2 text files, one with the id and one with the GDBname. Instead of a long list I had:
- item_id=open("id.text").read().splitlines()
- GDBname=open("id.text").read().splitlines()
I do have another log-in that I use. I uploaded some smaller test data and ran the exact same script with only 3 Feature Layers, and it worked just fine.
The only difference between the live data and the test data is the live data has a ton of information: I believe each Feature layer has 4 - 10 layers and each individual layer has 40 -70 fields with domains all over the place.
Any reason my script isn't working properly?
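Not an answer to the hang itself, but for anyone puzzled by the zipfile.is_zipfile retry loop in the script above: a truncated 1 KB download fails that check, while even an empty but complete archive passes. A small self-contained illustration of just that validity test (no ArcGIS calls):

```python
import io
import zipfile

def looks_like_zip(data):
    """The same validity check the script applies to a downloaded file."""
    return zipfile.is_zipfile(io.BytesIO(data))

# A complete (if empty) archive passes; 1 KB of junk bytes does not.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w"):
    pass
complete_archive = buf.getvalue()
truncated_junk = b"\x00" * 1024
```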
Step 4: Write the Code!
Ooh! Ooh!
This is the part you've probably been waiting to see. Unhappily, this is also the part that relies almost entirely on you. I can't walk you through it. Dig into any documentation that you can find. Be a good developer. Don't get lost. Your old code will provide some guidelines into roughly how much code you'll need to write, but keep in mind step 2, "Understand the intent." Don't lose sight of new language features that could make your life easier. I've added some very basic exceptions in Listing 2, but you can see where I'm headed; we'll give some more detailed feedback to the developer. I've also added some error codes.
Listing 2: Translating to C# and adding some basic exceptions.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Runtime.InteropServices;
using System.IO;
using System.Diagnostics;

namespace NetConnectSnippet
{
    class DriveMappingSnippet
    {
        internal struct NETRESOURCE
        {
            public int dwScope;
            public int dwType;
            public int dwDisplayType;
            public int dwUsage;
            public string lpLocalName;
            public string lpRemoteName;
            public string lpComment;
            public string lpProvider;
        }

        [DllImport("mpr.dll", EntryPoint = "WNetAddConnection2W", CharSet = System.Runtime.InteropServices.CharSet.Unicode)]
        public static extern int WNetAddConnection2(ref NETRESOURCE lpNetResource, string lpPassword, string lpUsername, Int32 dwFlags);

        public const int RESOURCETYPE_ANY = 0x0;
        public const int CONNECT_INTERACTIVE = 0x00000008;
        public const int CONNECT_PROMPT = 0x00000010;
        public const int RESOURCE_CONNECTED = 0x01;
        public const int RESOURCE_GLOBALNET = 0x02;
        public const int RESOURCETYPE_DISK = 0x01;
        public const int RESOURCEDISPLAYTYPE_SHARE = 0x03;
        public const int RESOURCEUSAGE_CONNECTABLE = 0x01;
        public const int CONNECT_UPDATE_PROFILE = 0x01;

        public const int NO_ERROR = 0;
        public const int ERROR_ACCESS_DENIED = 5;
        public const int ERROR_ALREADY_ASSIGNED = 85;
        public const int ERROR_BAD_DEV_TYPE = 66;
        public const int ERROR_BAD_DEVICE = 1200;
        public const int ERROR_BAD_NET_NAME = 67;
        public const int ERROR_BAD_PROFILE = 1206;
        public const int ERROR_BAD_PROVIDER = 1204;
        public const int ERROR_BUSY = 170;
        public const int ERROR_CANCELLED = 1223;
        public const int ERROR_CANNOT_OPEN_PROFILE = 1205;
        public const int ERROR_DEVICE_ALREADY_REMEMBERED = 1202;
        public const int ERROR_EXTENDED_ERROR = 1208;
        public const int ERROR_INVALID_PASSWORD = 86;
        public const int ERROR_NO_NET_OR_BAD_PATH = 1203;

        // Maps a remote file system to a free local drive letter.
        public void MapDrive(string LocalDrive, string RemoteDrive, string Username, string Password)
        {
            if (!Directory.Exists(RemoteDrive))
            {
                throw new System.Exception("The directory to be mapped doesn't exist!");
            }
            if (Directory.Exists(LocalDrive))
            {
                throw new System.Exception("The drive being mapped to already exists!");
            }

            NETRESOURCE NetR = new NETRESOURCE();
            NetR.dwScope = RESOURCE_GLOBALNET;
            NetR.dwType = RESOURCETYPE_DISK;
            NetR.dwDisplayType = RESOURCEDISPLAYTYPE_SHARE;
            NetR.dwUsage = RESOURCEUSAGE_CONNECTABLE;
            NetR.lpLocalName = LocalDrive;
            NetR.lpRemoteName = RemoteDrive;
            NetR.lpComment = null;
            NetR.lpProvider = null;

            // Note: in WNetAddConnection2 the password parameter precedes the username.
            int result = WNetAddConnection2(ref NetR, Password, Username, CONNECT_UPDATE_PROFILE);
            if (result == NO_ERROR)
            {
                Console.WriteLine("Drive {0} mapped to local drive {1}", RemoteDrive, LocalDrive);
            }
            else
            {
                Console.WriteLine("Unable to map {0} to local drive {1}", RemoteDrive, LocalDrive);
                throw new Exception("Error code " + result.ToString() + " was returned.");
            }
        }
    }
}
It's a start! | https://www.informit.com/articles/article.aspx?p=1398009&seqNum=5 | CC-MAIN-2020-24 | en | refinedweb |
Introduction: Crying Baby Monitor With Email Notification
Maybe you're leaving your baby in the hands of a nanny, or even at day care, and being a concerned parent you want to gauge whether he or she is being treated well. This alarm watches for the high peaks of a crying baby and alerts the user when the sound passes a certain threshold.
I made this project in the hopes of preventing abuse of infants and toddlers alike, and I hope that it will make a difference in parents' lives!
Step 1: BoM
* Arduino 101 (because it has a real time clock and Bluetooth low energy)
* Alternatively, Arduino Uno + RTC + Bluetooth module
* Sound Detector Board
* Bluetooth Low energy-capable phone
Step 2: Wiring
Step 3: Code
#define BLYNK_PRINT Serial
#define BLYNK_MAX_SENDBYTES 128

#include <BlynkSimpleCurieBLE.h>
#include <CurieBLE.h>

// You should get Auth Token in the Blynk App.
// Go to the Project Settings (nut icon).
char auth[] = "795aa10d38cf4a4c987de4be9b6d0de3";

int counter = 0;
unsigned long logTime = 0;   // when the crying first passed the threshold
BLEPeripheral blePeripheral;

void emailOnButtonPress()
{
  // *** WARNING: You are limited to send ONLY ONE E-MAIL PER 15 SECONDS! ***

  // Let's send an e-mail given the following conditions
  if (analogRead(A0) > 300) {
    logTime = millis();      // remember when the loud sound started
  }
  if (analogRead(A1) > 300 && (millis() - logTime > 300000UL)) {  // 300000 ms = 5 minutes
    counter += 1;
    Serial.println("Baby has been crying for over 5 minutes!!!"); // This can be seen in the Serial Monitor
    Blynk.email("techmartian@techmartian.com",
                "Subject: BABY Crying for over 5 minutes",
                "Your Baby could be in danger or being abused!!!");
  }
}

void setup()
{
  // Debug console
  Serial.begin(9600);

  delay(1000);

  blePeripheral.setLocalName("TechMartian");
  blePeripheral.setDeviceName("TechMartian");
  blePeripheral.setAppearance(384);

  Blynk.begin(blePeripheral, auth);

  blePeripheral.begin();
}

void loop()
{
  blePeripheral.poll();
  Blynk.run();
  emailOnButtonPress();  // check the sound level on every pass
}
Step 4: App Making
Follow the images above on how to setup your own application.
* Grab the following widgets:
- BLE
- Notification
* Connect to your device.
* Add your email address
* Click Run
That's it, you're done. Enjoy!
Participated in the
Automation Contest 2017
Participated in the
Bluetooth Challenge
Participated in the
Makerspace Contest 2017
Discussions
Question 1 year ago
Hi, this is a very good project.
I have a question about using Bluetooth with Arduino.
Sorry, I am a little bit new to this, but can you quickly explain why you use Bluetooth if you send an email notification?
Thank you
As one of the best new features of ASP.NET Core, TagHelpers simplify the work required to design views that interact with the data model. You can write HTML that not only binds to the data model, but also offers the advantages of HTML and CSS. TagHelpers allow you to provide robust, maintainable, reusable code with information provided on the server. This is the second article in this series covering Authoring TagHelpers. When building complex TagHelpers, it’s best to break down the logic into several TagHelpers that communicate with each other. This communication is handled by the context parameter of the process method. The context parameter has an Items property that holds information for parent and child TagHelpers. Let’s take a simple example of Company and Location, where Location is the child TagHelper of Company. This TagHelper would show the contact details of company. Here’s the sample markup:
<contact name="@Model.Name"> <location city="@Model.Location.City" country="@Model.Location.Country" phone="@Model.Location.Phone"></location> </contact>
We consider two model classes:
public class Company
{
    public string Name { get; set; }

    private Location _location;
    public Location Location
    {
        get { return _location ?? (_location = new Location()); }
    }
}

public class Location
{
    public string Country { get; set; }
    public string City { get; set; }
    public string Phone { get; set; }
}
The Contact TagHelper class is defined below. Note how the context parameter is used to store the Company object’s information. This Company object is then used by the child TagHelper Location to process location information. The output parameter’s GetChildContentAsync() method helps to get the content of child TagHelper asynchronously.
public class ContactTagHelper : TagHelper
{
    public string Name
    {
        get { return Contact.Name; }
        set { Contact.Name = value; }
    }

    private Company _contact;
    private Company Contact
    {
        get { return _contact ?? (_contact = new Company()); }
        set { _contact = value; }
    }

    public override async void Process(TagHelperContext context, TagHelperOutput output)
    {
        // A taghelper can transfer information to its children taghelpers via context.Items.
        // The children taghelpers get a copy of their parent's Items.
        // Here we save some information that the child tags can obtain.
        context.Items["Parent"] = Contact;

        // Render the children taghelpers. We must await, since we need the location info to render.
        TagHelperContent locationContent = await output.GetChildContentAsync();

        var loc = context.Items["Parent"] as Company;
        StringBuilder sb = new StringBuilder();
        string str = string.Format(@"
            {0}
            {2}, {1}
            Ph: {3}
            ", Name, loc.Location.Country, loc.Location.City, loc.Location.Phone);
        sb.Append(str);
        output.Content.AppendHtml(sb.ToString());
        // Once we have all the information from the children taghelpers,
        // we can render the html markup and startup scripts.
    }
}
The Location TagHelper class is defined below. This child TagHelper gets the shared Parent object's information through the context.Items collection:
public class LocationTagHelper : TagHelper
{
    public string Country { get; set; }
    public string City { get; set; }
    public string Phone { get; set; }

    public override void Process(TagHelperContext context, TagHelperOutput output)
    {
        // Get the information from its parent tag.
        var parent = context.Items["Parent"] as Company;

        // Set its attributes' values on the instance from the parent.
        parent.Location.Country = Country;
        parent.Location.City = City;
        parent.Location.Phone = Phone;

        // Save some information so that its own children taghelpers can use it.
        context.Items["Parent"] = parent.Location;

        // Render the children taghelpers.
        output.GetChildContentAsync();
    }
}
The Contact TagHelper is now ready! When we use this on a view page and bind to any model that returns contact name, city, country and phone, it is rendered as below:
Rendered TagHelper

The rendered markup in the browser is as below:

Rendered markup

ComponentOne Studio MVC Edition controls are compatible with ASP.NET Core and have TagHelpers for all controls.
Windows Azure - AppFabric Access Control Service

Introduction

In this article we can explore the Access Control service of Windows Azure AppFabric. As you know, the services provided by AppFabric are shown below.

Theory on Access Control

The Access Control Service (ACS) provides a flexible way of handling authentication and authorization in the cloud. The ACS factors the authentication and authorization code out of the application.
We can use ACS along with:

Identity Providers: The identity providers supported by ACS include Windows Live ID, Google, Yahoo!, Facebook, and WS-Federation providers such as ADFS 2.0.

Compatibility: ACS is compatible with web platforms including .NET, PHP, Python, Java, and Ruby.

The format of the end point will be:
Namespace: A managed namespace is one partially managed by another service. The Service Bus and Cache services use managed namespaces, characterized by the -sb and -cache suffixes.

In this article we have learnt about the AppFabric Access Control Service. It provides a better way of handling authentication and authorization. In the next article we can see how to create a service namespace and use the Access Control Service Portal.
Output:
Guess a number: 28
Sorry the number was: 29
And it will keep doing that until you guess the right number. Therefore my question is: how do I update the random number every time the person guesses the right number?
import random

random_number, lucky_number = random.randint(0,100), 0
lucky_number = int(input("Guess a number:"))
if lucky_number != random_number:
    while random_number != lucky_number:
        lucky_number = int(input("Guess a number:"))
    print("The number was %d." %(random_number))
elif random_number == lucky_number:
    print("You have guessed the right number!")
    break
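One way to draw a fresh number each round (a sketch, with a hypothetical ask parameter standing in for input() so the structure is clear): wrap a single round in a function, so every call makes a new randint.

```python
import random

def play_round(ask, rng=random):
    """Play one round: draw a fresh secret, loop until it is guessed."""
    secret = rng.randint(0, 100)   # a new number every call
    while ask() != secret:
        pass                       # wrong guess; ask again
    return secret

# Keep playing; each call to play_round() picks a new random number:
# while True:
#     play_round(lambda: int(input("Guess a number:")))
```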
CodePlex: Project Hosting for Open Source Software
Hi,
I am building a products' review module where only registered users can post review.s I have been reading the tutorial on 1-n relationships () but I have a couple of questions.
My ReviewPart part currently looks like this:
public class ReviewPartRecord : ContentPartRecord
{
public virtual UserPartRecord User { get; set; }
public virtual string Subject { get; set; }
public virtual string Comment { get; set; }
public virtual int Vote { get; set; }
public virtual int ProductId { get; set; }
public virtual DateTime? ReviewDateUtc { get; set; }
}
public class ReviewPart : ContentPart<ReviewPartRecord>
{
public UserPartRecord User {
get { return Record.User; }
set { Record.User = value; }
}
[Required]
public string Subject
{
get { return Record.Subject; }
set { Record.Subject = value; }
}
[Required]
public string Comment
{
get { return Record.Comment; }
set { Record.Comment = value; }
}
[Required]
public int Vote
{
get { return Record.Vote; }
set { Record.Vote = value; }
}
[Required]
public int ProductId
{
get { return Record.ProductId; }
set { Record.ProductId = value; }
}
public DateTime? ReviewDateUtc
{
get { return Record.ReviewDateUtc; }
set { Record.ReviewDateUtc = value; }
}
}
The migration file looks like this:
public int Create() {
// Creating table ReviewPartRecord
SchemaBuilder.CreateTable("ReviewPartRecord", table => table
.ContentPartRecord()
.Column("UserPartRecord_id", DbType.Int32)
.Column("Subject", DbType.String, c => c.WithLength(100))
.Column("Comment", DbType.String, c => c.WithLength(4000))
.Column("Vote",DbType.Int32)
.Column("ProductId", DbType.Int32)
.Column("IsApproved", DbType.Boolean, c => c.WithDefault(true))
.Column("ReviewDateUtc", DbType.DateTime, c => c.WithDefault(DateTime.UtcNow))
);
For the User property in ReviewPart I am using UserPartRecord. To create the relationship, I am using the following column name: UserPartRecord_id.
Now, my problem is that whenever I try to save changes made in a form to create a review, I get the following message:
and I don't understand where or why it is trying to update a column named User_Id. I have searched my project and I don't have a reference to User_id. I am assuming this has to do with the relationship that I am trying to create, but the column name is obviously incorrect. That makes me think that the column name UserPartRecord_id is incorrect.
Thanks in advance for your help.
The migration logic is trying to match your column name with a property on your record and it fails because the name of the property is completely different. Column name and property name should be consistent.
Thank you Bertrand. That solved my problem.
You guys have an excellent project going on!
@gadwy: Can you post your final Migration/Model? I'm having a very similar issue at the moment despite following the instructions from Bertrand's post! Just need a bit of a sanity check :-)
Failing that - Bertrand, can you confirm that the property name on the ContentPartRecord should be UserPartRecord?
Thanks
Edit: My Migration is this:
SchemaBuilder.CreateTable("SomeRecord",
table => table.Column("Id", column => column.Identity().PrimaryKey())
.Column("Name", column => column.NotNull())
.Column("UserPartRecord_Id"));
And the record looks like this:
public class SomeRecord
{
public virtual int Id { get; set; }
public virtual UserPartRecord UserPartRecord { get; set; }
public virtual string Name { get; set; }
}
That's not the case - the property will be matched by having the same type as the target record, the name shouldn't matter. That way you can have two different properties both joining to the same foreign table.
The main issue is:
.Column("UserPartRecord_Id"));
As far as I know it should be UserPartRecord_id - a lower case 'i' in id.
Note: I might have got confused about what you were saying. But Bertrand was saying that as long as the Column name matches the name in your Record, it's fine. It doesn't matter whether they're called UserPartRecord or just User, so long as it's the same
name in both migration and record, with _id appended.
Pete, exactly as you suggested - UserPartRecord_id fixed it, I'd been using UserPartRecord_Id. Thanks ever so much!
In the same module I've got relationships to other records where the field names are called SomeOtherRecord_Id, WowYetAnotherRecord_Id with the capital I, and this is reflected in the "Creating 1-N and N-N Relations" documentation. Is there some rule that gets applied for record classes in the same module vs. those in different modules? I'm not aware of anything from my (limited) past experiences with NHibernate, and haven't come across this before in Orchard until I tried to create a relationship to UserPartRecord.
Hmm ... I hadn't realised the n-n documentation had _Id. Actually I wasn't sure if the casing was the problem, but I had a situation once where it seemed to fix it, and I've seen a couple of other threads where it seemed to be the issue. So maybe the documentation
needs fixing to show _id, even if it works in some cases. Actually I have a table in Mechanics where my key properties are called LeftContentItemId and RightContentItemId - but maybe it's because I'm actually declaring them on the Record class that it works.
Opened 7 years ago
Closed 7 years ago
Last modified 7 years ago
#3959 closed defect (fixed)
cpl_port.h globally disables Visual C++ warnings
Description
A long time ago, I committed r10310, which is an incomplete solution because this setting affects compilation of any software where cpl_port.h is included. Basically, including cpl_port.h implicitly affects the warnings state, which is unwelcome if one wants to compile his software with default or higher warning levels.
The warnings level should be modified locally and privately using
#if defined(_MSC_VER) # pragma warning(push) # pragma warning(disable:4127) #endif ... code which causes C4127 warning #if defined(_MSC_VER) # pragma warning(pop) #endif
Change History (4)
comment:1 Changed 7 years ago by
comment:2 Changed 7 years ago by
I don't necessarily mean case by case as warning by warning and location by location. It can be still solved by wrapping large chunks of code, but not in public headers. If GDAL wants to disable warnings, no problem but no reason to put this setting in public headers. However, if some public headers need it, then more surgical precision is needed indeed using warning(push) and warning(pop) around problematic code.
I can't recommend any switches.
comment:3 Changed 7 years ago by
In trunk I have migrated the warning disabling out of cpl_port.h and into the WARNFLAGS macro in nmake.opt (r21678). This should resolve the issue of inclusion of cpl_port.h disabling warnings in calling applications.
I'm also going to make a pass to try and clean warnings at /W4 on MSVC 2008 from the core GDAL source.
comment:4 Changed 7 years ago by
Thanks Frank.
I agree that it is inappropriate for cpl_port.h to end up disabling warnings for client applications. At the very least there could be some sort of opt-out #define that clients could declare (ie. #define CPL_DISABLE_MSVC_WARNING_SUPRESSION) that client applications could use, though it really ought to work the other way around, where some cpl definitions are only used by GDAL code.

I do not agree that warnings should be suppressed case by case in the code. I consider some of the MSVC warnings to be inappropriate for application to the GDAL code base, and I don't want to suppress them location by location. Perhaps there are compiler switches we could pass on the command line for these suppressions instead of doing them in cpl_port.h. I would be receptive to such changes.
Such a solution should meet some requirements:
1. It should not alter framework files
2. It should be easy to upgrade to another Yii version
To solve this, i used this approach:
1. Override CHtml
CHtml is a core class which gets autoloaded by Yii. So in order to prevent Yii from using its own class, i just have to load my custom CHtml class before Yii does. Fortunately this is very simple and Yii even provides a way how to do is. The key is to add your custom class to the import configuration:
<?php 'import'=>array( 'application.components.CHtml', ...
So now you can have your custom CHtml class in components/CHtml.php. So far so good. But ...
2. Inherit from original CHtml
I don't want to copy the original CHtml.php from Yii and alter it. I would have to make the same changes again each time a new version is released, and it's very likely that i would miss some of my changes. So i want some kind of inheritance. My first idea was to use namespaces inside components/CHtml.php like this:
<?php
/* BELOW CODE DOES NOT WORK: */

// Load CHtml into the yii namespace to have it under yii\CHtml
namespace yii {
    include(Yii::getPathOfAlias('system.web.helpers.CHtml').'.php');
}

// Create CHtml in the global namespace
namespace {
    class CHtml extends yii\CHtml
    {
        /* override things here */
    }
}
But this does not work! When you include a file from inside a namespace, PHP will not use that namespace for the included file. Or to put it in other words: every PHP file must declare its own namespace. If it does not, its contents will always be imported into the global namespace. Yii's files do not declare a namespace, so every class inside them lands in the global namespace as soon as you include it.
So i ended up with an acceptable workaround to reach my above goals. The pattern goes like this:
<?php namespace yii { use \Yii as Yii; // Required for every core class used inside the code below class CHtml { /* copy the original CHtml code here. If Yii gets upgraded, replace the code in here */ } } namespace { class CHtml extends yii\CHtml { /* override CHtml methods here */ } }
The solution is not perfect, but at least it works and still i can upgrade without too much fuss.
If someone has a better idea for how to solve (2) i'm glad to hear it. Maybe we can find the perfect pattern and have a nice wiki article for it. | http://www.yiiframework.com/forum/index.php/topic/28979-how-to-override-yii-core-classes/ | CC-MAIN-2017-47 | en | refinedweb |
hey y'all.
Anyone know algorithms for really cool screen transition effects?I know the basic ones like fading to a color, or blocking the screen out. But what about those cool ones like how the screen blurs out like in those snes versions of super mario bros?
I'd like to kno cuz i'm doing some code for custom transition effects in an RPG and some algorithms for that would be very helpful.
We can only do what we feel is right each moment as we live it.
Well, all transition effects are pretty similar. You have to images (i0, i1). You have transition time (t).Now, what you want to do is to find a way that shows i0 at tcur=0 and i1 at tcur=t.
You can slide i0 in all directions out of the screen showing i1 "behind" it. You can even scroll parts i0 in different directions (one half to the left, one half to the right).
You can display lines / columns of i1 in an interval of n. Start with a large n then decrease it.
You can alpha blend the images.You can implement a paper turn effect.
There are a lot of possibilities. Some are pretty easy to implement, some take more time.
--There are no stupid questions, but there are a lot of inquisitive idiots.
I love crossfading
- Wisdom is the art of using knowledge- String theory: There's music in everything
Check out the code from Ballz, a TINS game. It had several transitions when you die/finish a level. However, most transitioned to a black screen and then out from a black screen.
------------Solo-Games.org | My Tech Blog: The Digital Helm
Here's the code I use for fading effects. Example code will follow if somebody could reply - this post is getting too long
fadecol.h
fadecol.cpp
crossfadecol.h
crossfadecol.cpp
crossfade.h
#ifndef CROSSFADE_HEADER
#define CROSSFADE_HEADER
#include <allegro.h>
class CrossFade {
public:
CrossFade();
virtual ~CrossFade();
virtual void crossfade(BITMAP* img1, BITMAP* img2, BITMAP* dst, double percent) = 0;
};
#endif
crossfade.cpp
#include "crossfade.h"
CrossFade::CrossFade() {
}
CrossFade::~CrossFade() {
}
fade.h
#ifndef FADE_HEADER
#define FADE_HEADER
#include <allegro.h>
class Fade {
public:
Fade();
virtual ~Fade();
virtual void fade(BITMAP* src, BITMAP* dst, double percent) = 0;
};
#endif
fade.cpp
#include "fade.h"
Fade::Fade() {
}
Fade::~Fade() {
}
This posts does not exist. It is but a figment of your imagination.
--sig used to be here
Here's some example code:
example.cpp
@Miran: Thanks!
EDIT: Changed parts of the code and removed one example. The example listed will show all effects.
I tried your examples but they crashed, I am using Dev-C++ and alleg42, also it seems sprintf is no longer available.
____
"The unlimited potential has been replaced by the concrete reality of what I programmed today." - Jordan Mechner.
Well,the crashing could be due to no images with the given name in the same directory (image1.tga, image2.tga).Regarding sprintf: chances are that allegro no longer includes <stdio.h>. I'll add the include to the examples and I'll check if the images can be loaded.
Edit:You can download the compiled example program and 2 example images here:
Yes that was the problem, I was using 2 pcx files, I did not see you were using .tga files! Now it works nicely!
[EDIT]Really nice! I am going to use the alphablending fx in my project! but I'll replace draw_trans_sprite with Fladimir's alpha blending routines.
Well, guess that's the problem if you simply copy'n'paste old code. I just recompiled that code with warnings enabled and almost fell out of my chair in shame.
I also realized that I can't recall all the libs and linking order to link allegro statically. Finally, it seems that this does the job:
-lalleg_s -lgdi32 -ldxguid -lole32 -ldinput -lddraw -lwinmm -ldsound
(This is the point where somebody links to the documentation showing me that this info is well documented and easily available )
Sounds like you needed something like Easy Allegro..
Chaos Groove Development BlogFree Logging System Code & Blog
Those are good, but they fade from one image to another. I've got a tile based engine with sprites on top of it. What I was after was some tips on ways of postprocessing the doublebuffer, which emulates a transition effect. That is, everything gets drawn to the doublebuffer, then if the engine recieves a message to do a transition effect (from an event telling it to go to the next level or something) it will increment the effect a little.
Well, the double buffer is just an image.
Let's say your engine renders your start scene into buffer1. Let's also assume that your engine renders the second scene into buffer2. No, all you need to do is to pass these two buffers to the fade method. As you can see, it takes two BITMAPs and a percentage value.This allows you to fade both static images or animated graphics fresh from your engine.
But I do hope you don't expect me to create a RPG engine just to demonstrate how you could fade two scenes by using the code above | https://www.allegro.cc/forums/thread/591493/673831 | CC-MAIN-2017-47 | en | refinedweb |
This is one of the 100 recipes of the IPython Cookbook, the definitive guide to high-performance scientific computing and data science in Python.
import numpy as np import scipy as sp import scipy.optimize as opt import matplotlib.pyplot as plt %matplotlib inline
f = lambda x: np.cos(x) - x
x = np.linspace(-5, 5, 1000) y = f(x) plt.figure(figsize=(5,3)); plt.plot(x, y); plt.axhline(0, color='k'); plt.xlim(-5,5);
bisectfunction implements the bisection method (also called the dichotomy method). It takes as input the function and the interval to find the root in.
opt.bisect(f, -5, 5)
Let's visualize the root on the plot.
plt.figure(figsize=(5,3)); plt.plot(x, y); plt.axhline(0, color='k'); plt.scatter([_], [0], c='r', s=100); plt.xlim(-5,5);
brentq(Brent's method). This algorithm also requires that $f$ is continuous and that $f(a)$ and $f(b)$ have different signs.
opt.brentq(f, -5, 5)
The
brentq method is faster than
bisect. If the conditions are satisfied, it is a good idea to try Brent's method first.
%timeit opt.bisect(f, -5, 5) %timeit opt.brentq(f, -5, 5)
You'll find all the explanations, figures, references, and much more in the book (to be released later this summer).
IPython Cookbook, by Cyrille Rossant, Packt Publishing, 2014 (500 pages). | http://nbviewer.jupyter.org/github/ipython-books/cookbook-code/blob/master/notebooks/chapter09_numoptim/01_root.ipynb | CC-MAIN-2017-47 | en | refinedweb |
- When the Server run, it will show it's own IP and port, open a ServerSocket and wait for socket connection from clients.
- In Client side, enter message to be sent to server, enter the server Ip and port, then clieck Connect... button, The client will connect to server using socket with DataInputStream and DataOutputStream loaded with message to send.
- In client side, when serverSocket.accept(), it will retrieve the message sent with dataInputStream.readUTF().
Both client and server need permission of "android.permission.INTERNET" in AndroidManifest.xml.
Notice:
- All code for network operation should run in background thread.
- The code dataInputStream.readUTF() will block the program flow if no data input. (Read Pervent program blocked by DataInputStream.readUTF())
Notice:
- All code for network operation should run in background thread.
- The code dataInputStream.readUTF() will block the program flow if no data input. (Read Pervent program blocked by DataInputStream.readUTF())
Example code in Server Side:
package com.example.androidserversocket; import java.io.DataInputStream; import java.io.DataOutputStream; import java.io.IOException;() { Socket socket = null; DataInputStream dataInputStream = null; DataOutputStream dataOutputStream = null; try { serverSocket = new ServerSocket(SocketServerPORT); MainActivity.this.runOnUiThread(new Runnable() { @Override public void run() { info.setText("I'm waiting here: " + serverSocket.getLocalPort()); } }); while (true) { socket = serverSocket.accept(); dataInputStream = new DataInputStream( socket.getInputStream()); dataOutputStream = new DataOutputStream( socket.getOutputStream()); String messageFromClient = ""; //If no message sent from client, this code will block the program messageFromClient = dataInputStream.readUTF(); count++; message += "#" + count + " from " + socket.getInetAddress() + ":" + socket.getPort() + "\n" + "Msg from client: " + messageFromClient + "\n"; MainActivity.this.runOnUiThread(new Runnable() { @Override public void run() { msg.setText(message); } }); String msgReply = "Hello from Android, you are #" + count; dataOutputStream.writeUTF(msgReply); } } catch (IOException e) { // TODO Auto-generated catch block e.printStackTrace(); final String errMsg = e.toString(); MainActivity.this.runOnUiThread(new Runnable() { @Override public void run() { msg.setText(errMsg); } }); }(); } } } } }; } }
Layout:
>
Example code in Client Side:
package com.example.androidclient; import java.io.DataInputStream; import java.io.DataOutputStream; import java.io.IOException;; import android.widget.Toast; public class MainActivity extends Activity { TextView textResponse; EditText editTextAddress, editTextPort; Button buttonConnect, buttonClear; EditText welcomeMsg; ); welcomeMsg = (EditText)findViewById(R.id.welcomemsg);) { String tMsg = welcomeMsg.getText().toString(); if(tMsg.equals("")){ tMsg = null; Toast.makeText(MainActivity.this, "No Welcome Msg sent", Toast.LENGTH_SHORT).show(); } MyClientTask myClientTask = new MyClientTask(editTextAddress .getText().toString(), Integer.parseInt(editTextPort .getText().toString()), tMsg); myClientTask.execute(); } }; public class MyClientTask extends AsyncTask<Void, Void, Void> { String dstAddress; int dstPort; String response = ""; String msgToServer; MyClientTask(String addr, int port, String msgTo) { dstAddress = addr; dstPort = port; msgToServer = msgTo; } @Override protected Void doInBackground(Void... arg0) { Socket socket = null; DataOutputStream dataOutputStream = null; DataInputStream dataInputStream = null; try { socket = new Socket(dstAddress, dstPort); dataOutputStream = new DataOutputStream( socket.getOutputStream()); dataInputStream = new DataInputStream(socket.getInputStream()); if(msgToServer != null){ dataOutputStream.writeUTF(msgToServer); } response = dataInputStream.readUTF(); }(); } } } return null; } @Override protected void onPostExecute(Void result) { textResponse.setText(response); super.onPostExecute(result); } } }
Layout:
/welcomemsg" android: >
Client and Server run on WiFi share hotspot:
The following video, at 1:22, show it work when both client and server connect to the shared WiFi network from server hotspot.
- Pervent program blocked by DataInputStream.readUTF()
- Editable message sent from server
- Java/JavaFX Client link to Android Server
- Java/JavaFX Client run on Raspberry Pi, link with Android Server
- Java/JavaFX Server link to Android Client
- A Simple Chat App
46 comments:
This is awesome. I love how simple it is. I have two questions however:
if I want to send messages from the server to the client as well and have the server receive them would it work if I use the client side of your code into the server part of the code and the server part of your code into the client side of the code?
Also I am thinking of changing it so that when pressing connect a new screen shows up where the client can also view the chat messages from the server and send messages. Will it be possible to do this through intents? Sorry if my questions sound silly. I am new to android programming.
hello blackjwl,
"send messages from the server to the client" have already been done in the example. The message "Hello from Android, you are #" is sent from server to client after connected.
For "chat" application, I have no idea right now, may be will try later.
Thx
I see, well I am working on that these days and your code seemed pretty much the solution to a chat application as I thought I only need to add a text box to the server app's xml layout and enter text instead of sending a hard coded message. I did not mean a fully fledged chat application only chatting between the server and client taking place in the simplest way possible.
Anyway I have tried to run your code on two separate emulators and I do not get the local IP address shown as the "SiteLocalAddress" when I run the server app. I only see one SiteLocalAddress statement mentioning "10.0.2.15"
and when I enter that into the client app on the other emulator I got an IOexception java.net.ConnectException saying failed to connect to 10.0.2.15. Connect failed ECONNREFUSED(connection refused).
After that when I entered my computer's local IP address into the client app's text box thinking maybe it would work even if it doesn't show it but absolutely NOTHING happened. I had copied and pasted your code exactly as it is and had run it. Any idea why I can;t run it like you did in the video?
I forgot to mention one other thing, when I tried to run the apps again on the two separate emulators, and I put in the local ip address of my computer even though it did not show up on the server app, I decided to wait and see what happens and I ended up getting an error saying failed to connect to 192.168.0.7 ETIMEDOUT(Connection timed out).
if I run the server app on the mobile and the client app on the emulator your code works however. I am still trying to make the server send a message from the user to the client through a textbox rather than a hardcoded string message but the application just crashes. I have been at this all day and it is starting to drive me crazy.
ServerActivity.this.runOnUiThread(new Runnable() {
@Override
public void run() {
msg.setText(message);
}
});
sendChat.setOnClickListener(new OnClickListener() {
@Override
public void onClick(View v) {
chatMsg= chatBoxText.getText().toString();
try {
dataOutputStream.writeUTF(chatMsg);
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
});
Any clues on how I can insert a textfield that would send the message to the client without causing it to crash?
hello blackjwl,
Is it what you want?
Strongly recommend NOT to test on Emulator.
Thank you so much. :) this tutorial proved to be very useful! :)?
Please check:
Implement simple Android Chat Application, server side..
and
Simple Android Chat Application, client side.
Is it gonna work if I try to connect on different networks?
Awesome....It is really working.Can you please provide a similar code which can transfer files between two android devices
hello Sauradipta Mishra,
Please check File transfer via Socket, between Android devices.
Hi guys,
Could anyone give me a Python socket server code (I have a server running Python). Thanks a lot.
Thanx for sharing such code. it's really good example.
But, I have one question. Can we make server devices ip adr. static? with code or with android user interface? We may get rid of entering Ip adr. and port every time.
sure you can make the ip static. But it is not the job in the application.).
I tried sending http handshake message from serverSocket(Android code) to websocket(JavaScript code).
WebSocket in JavaScript received onopen() callback function but it immidaitely recived onclose() callback with reason code 1006.
Note: ServerSocket is listening to port 8080.
Websocket is cerated using URL - "wb:127.0.0.1:8080"
Please let me know if my approach is wrong.
Is it possible to communicate between serverSocket and webSocket?
Or is there any other way to create WebSocket Server in andorid similar to serverSocket and communicate with JavaScript websocket?
Mallikarjun
Hello Mallik,
I have no idea right now.
I think serverSocket and webSocket cannot communicate directly.
Please,,,,,I need a way to made two mobiles to be connected. Via WIFI and playTicTacToe on them
my android mobile is client only, my wifi module is server , if i connect from client the server should connect and the server sending should be seen in a label. for this how to edit your code. please guide
hi
i use eclipse with target 17 and i use 2 emulator android on eclipse when i run the application i have a prbm in client i can't connect to the server i have an exception (connection refused)plz hel me
Hello,,,im just thinking if it is possible to divide the codes on connect button. by creating 2 buttons[connect,send].connect is just simply notify you if its connected, then send is to send a message to the server. can anyone help me?
Add the permissions
(Uses internet,access ntwk state )
Thanks for the tutorial its quite helpful!!!!!
How can I send the location of the server to client on button click.
As expected, Always find a solution from your blog.Thank you
Hi!
is posible to use the same code for ObjectInputStream and ObjectInputStream? Because, i tried but i didnt have any result. Thanks!!
Regards
Hi,
I am trying to execute your example, but i always got failed to connect IOException also isConnectedFailed: EHOSTUNREACH(no route to host), Please help to resolve this exception.
I am trying to execute your example, but i always got failed to connect IOException also isConnectedFailed: EHOSTUNREACH(no route to host), Please help to resolve this exception.?
Hello Rasoul Gholamhosseinzadeh,
In this example, the server start ServerSocket and wait request from client. The communication link is initiate by client, then server reply something, than close. So the server cannot initiate communication to client.
But you can check socket.getInetAddress() to determine what to reply.
I only implemented client code since I need to connect to a telnet server.
Is this possibile ??
When I press connect I obtain no messages and no errors. Simply, nothing happens !!
i have a same question:
Rasoul Gholamhosseinzadeh said...?
anybody have example?help please
Hello Rasoul Gholamhosseinzadeh and Iqra Ali,
I think what you want is Android Chat example, with server sending individual message to specify client..
thankyou :)
its really helpful
and if i want to communicate with different network. any example about this?
i want to communicate with different network. any example about this?
and also i want some modification like a client sends a msg to the server then server forward it to other user.
hey thank you for code. i used your client side code and use another app as server i can send a msg but i can not receive from another server app and also cant see the chat panel which is shown in ur video.
Hello I want server and client side code is it possible to give me this code my emaild id is kirtigiripunje@gmail.com thanks
hello kirti kumar,
The post already had links to download the code.
Hello,
On the Client side, I would like to connect to the server, then send multiple messages. At the moment, I am able to connect once and send a message, however, when I click on the "Connect" button again it tries to create a new socket. I want the same connected socket to send multiple messages. Is it possible? Is there another example with this implementation?
Thanks
i want to run this client and server in different network and want them to communicate. how do i do this. how to do port forwarding.
on my router wifi its work,phones are connect, when the ip is 10.0.0.2, but in mobile network how can it work? what need to change?
hello Pavel Palei,
As I know, most mobile network provider block your port as server, not related to the program. So mobile phone can be a client, but not as server.
why we need to create new Socket every time we send massage? the connection stop after thread executed?
is there another ways communicate with sockets?
hello Pavel Palei,
Depends on your implementation. In this simple example, client side only connect to server, send data, and close.
Refer to another example of A Simple Chat App (listed below "Next:"), connection no close after connected.
i have problem i dont recived data i recevd java.io.EOFException
NB:i have a python client and i send the data it is tested
Hi sir,Thank you very much very good tutorial.
But sir , I have one question how to transfer file or bidirectional communication "file" using socket ..please help me .thanks
Hi sir,
Can you please tell me 1 thing.
When the connection is established how to show another page for few sec.
I am creating a game app so when the connection is established is should go to the game page play for 2 min and then return the score.
Can we implement it using this?? | http://android-er.blogspot.com/2014/08/bi-directional-communication-between.html | CC-MAIN-2017-47 | en | refinedweb |
One thing that still gets in my way more than it should is O/R mapping. No other mapper I've come across encourages a true object-oriented application structure. Granted, I've only really used NHibernate, and I had built my own mapper before that was even available, but I've read up quite a bit on the the other mappers.
By true OO structure, I mean that all application objects are only constructed from other application objects, which doesn't involve dependencies on environment-specific code (ie. if you're running under ASP.NET, Windows forms, Swing, etc.). A pure structure encourages a proper separation between core application code, and display and controller code, which allows more flexible application evolution.
Instead, controller logic often manually constructs application objects, passing in default arguments to properly initialize the required fields. This means constructor and initialization code must be duplicated when running in another environment, or tedious refactoring is needed when changing the constructor interface. Further, the defaults are hardcoded in the code, which means changes in defaults require an application upgrade.
Instead, O/R mappers should promote a factory pattern for constructing application objects. Factories themselves are constructed when the application is initialized, and are henceforth singletons within a given application instance. O/R mappers don't support or encourage factories or singletons in this manner however, as they always map a key/identifier, to an object instance. Factories are slightly different as they are generally singletons.
For example, let's assume we have a simple Product class:
public abstract class ProductNow we have here a public constructor which requires a Quote object to initialize the base Product object. You can't sell 'abstract' products, so we need a concrete product, like a Table:
{
int productId;
decimal price;
protected Product()
{
}
}
public class Table : ProductOf course, a Table with dimensions of 0'x0' is invalid, so we need to ensure that a Table is initialized with a proper length and width. We can pass in a pair of default dimensions when constructing a Table instance in a controller, but chances are the default values will be the same everytime you construct an instance of Table. So why duplicate all that code?
{
int length;
int width;
public Table(int length, int width) : base()
{
this.length = length;
this.width = width;
}
}
For instance, suppose we have another class "DiningSet" which consists of a Table and a set of Chairs. Do we call the Table constructor with the same default values within the DiningSet constructor?
Of course, many of you might now be thinking, "just create an empty constructor which invokes the parameterized constructor with the default values; done". All well and good because your language likely supports the int type very well. Now suppose that constructor needs an object that cannot be just constructed at will from within application code, such as an existing object in the database.
Enter factories:
public interface IProductFactoryThe IProductFactory abstract all factories which construct products. Any parameters that the base Product class accepts in its constructor are passed in to the Make() method, as this is shared across all Product Factories. TableFactory is mapped to a table with a single record containing the default length and width values. If the constructor requires an existing database object, this can be referenced via a foreign key constraint, and the O/R mapper will load the object reference and its dependencies for you.
{
Product Make();
}
public sealed class TableFactory : IProductFactory
{
int defaultLength;
int defaultWidth;
public Product Make()
{
return new Table(defaultLength, defaultWidth);
}
}
Since factories are generally singletons, it would be nice if O/R mappers provided special loading functions:
public interface ISessionThis models and O/R mapper session interface after the one in NHibernate. Note that a special Singleton() method simply loads the singleton of the given type without needing an object identifier.
{
T Load<T>(object id);
T Singleton<T>();
}
Our controller code is thus reduced to:
...Which encapsulates all the constructor details in application objects, does not hardcode any default values since they live in the database and can be upgraded on the fly, isolates refactorings which alter the Table constructor interface to the TableFactory alone, and simplifies controller code as we don't need to load any objects. This is a "pure" object-oriented design, in that the application can almost bootstrap itself, instead of relying on its environment to properly endow it with "god-given" defaults.
Product table = session.Singleton<TableFactory>().Make();
...
This approach also enables another useful application pattern which I may describe in a future post.
[Edit: I've just realized that the above is misleading in some parts, so I'll amend soon. Singletons aren't needed as much as I suggest above.] | http://higherlogics.blogspot.com/2008_04_01_archive.html | CC-MAIN-2017-47 | en | refinedweb |
I'm just starting to teach myself C++, and have begun learning about integer overflow. Out of curiosity I wrote some tests just to see what occurs with certain integer values.
Here's my program:
#include <iostream>
int main()
{
int x(0);
std::cout << x << std::endl;
x = x + 2147483647;
std::cout << x << std::endl;
x = x + 1;
std::cout << x << std::endl;
std::cout << std::endl;
unsigned int y(0);
std::cout << y << std::endl;
y = y + 4294967295;
std::cout << y << std::endl;
y = y + 1;
std::cout << y << std::endl;
}
0
2147483647
-2147483648
0
4294967295
0
Integers (generally) take a 32-bit representation. If you have 32 bits, you can address from 0 to 231-1. i.e.,
00000000000000000000000000000000 00000000000000000000000000000001 . . . 01111111111111111111111111111111 ^------------------------------- signed bit
0 indicates a positive number, 1 indicates a negative number.
If you add 1 to
01111111111111111111111111111111, you get
10000000000000000000000000000000, which is -2147483648 in decimal.
Using an unsigned integer, there's no signed bit and, ipso facto, can have a number twice as large as your largest signed integer. However, when the number rolls over again (i.e.,
11111111111111111111111111111111 +
00000000000000000000000000000001), you simply roll back to
00000000000000000000000000000000.
For a more in depth understanding, you can look at two's complement, which is how integers are represented in computers. | https://codedump.io/share/8RPJzVD00lsR/1/c-integer-overflow | CC-MAIN-2017-47 | en | refinedweb |
Hi,The random device is a fairly useful thing to have (I use it forinput values to "crashme" ;-) but there lots of people who don'tuse it for anything. For those people, it would be useful to have aCONFIG_RANDOM option, so that they can save a bit of space, and avoidthe small overhead associated with all the add_XXX_randomness() calls.Paul.diff -ur /tmp/linux/Documentation/Configure.help linux/Documentation/Configure.help--- /tmp/linux/Documentation/Configure.help Mon May 13 01:11:09 1996+++ linux/Documentation/Configure.help Mon May 13 14:12:50 1996@@ -41,7 +41,7 @@ currently in alpha-test, then the developers usually discourage widespread use of this feature by the general public to avoid "Why doesn't this work?" type mail messages. However, active testing- and and detailed bug reports from people familiar with the kernel's+ and detailed bug reports from people familiar with the kernel's internals are usually welcomed by the developers. Unless you intend to help test and develop a feature or driver that falls into this category, you should probably say N here, which will cause this @@ -3246,6 +3246,23 @@ compatible fashion. If you think you have a use for such a device (such as periodic data sampling), then say Y here, and go read the file Documentation/rtc.txt for details.++Secure Random Number Generator Support+CONFIG_RANDOM+ Since computers are very predictable devices, it is difficult to+ have a computer create an unpredictable series of random numbers.+ Applications such as cryptography require a truly random series+ of numbers to ensure data security against unwanted attackers.+ Enabling this option causes the kernel to gather environmental + noise from things like mouse, keyboard and disk activity, and+ then create truly random numbers based on this collected noise.+ These random numbers are reported back to the user via /dev/random+ and /dev/urandom character devices. 
The top few pages of the file+ drivers/char/random.c describes how to implement these devices, how+ to preserve randomness across reboots, and other important details.+ If you have a use for such a device, then say Y here, and go read + the above file. Saying N here will save you about 15kB of memory,+ and avoid the small overhead associated with collecting the noise. Sound card support CONFIG_SOUNDdiff -ur /tmp/linux/arch/alpha/defconfig linux/arch/alpha/defconfig--- /tmp/linux/arch/alpha/defconfig Sat May 11 16:25:51 1996+++ linux/arch/alpha/defconfig Mon May 13 13:03:37 1996@@ -202,6 +202,7 @@ # CONFIG_APM is not set # CONFIG_WATCHDOG is not set # CONFIG_RTC is not set+# CONFIG_RANDOM is not set # # Sounddiff -ur /tmp/linux/arch/i386/defconfig linux/arch/i386/defconfig--- /tmp/linux/arch/i386/defconfig Sat May 11 16:25:51 1996+++ linux/arch/i386/defconfig Mon May 13 13:03:22 1996@@ -161,6 +161,7 @@ # CONFIG_APM is not set # CONFIG_WATCHDOG is not set # CONFIG_RTC is not set+# CONFIG_RANDOM is not set # # Sounddiff -ur /tmp/linux/drivers/char/Config.in linux/drivers/char/Config.in--- /tmp/linux/drivers/char/Config.in Thu May 2 15:06:31 1996+++ linux/drivers/char/Config.in Sun May 12 15:21:27 1996@@ -62,4 +62,5 @@ fi fi bool 'Enhanced Real Time Clock Support' CONFIG_RTC+bool 'Secure Random Number Generator Support' CONFIG_RANDOM endmenudiff -ur /tmp/linux/drivers/char/Makefile linux/drivers/char/Makefile--- /tmp/linux/drivers/char/Makefile Thu Apr 25 20:27:42 1996+++ linux/drivers/char/Makefile Sun May 12 15:24:04 1996@@ -21,8 +21,14 @@ L_TARGET := char.a M_OBJS := L_OBJS := tty_io.o n_tty.o console.o \- tty_ioctl.o pty.o vt.o mem.o vc_screen.o random.o \+ tty_ioctl.o pty.o vt.o mem.o vc_screen.o \ consolemap.o selection.o++ifeq ($(CONFIG_RANDOM),y)+L_OBJS += random.o+else+L_OBJS += ran_stub.o+endif ifeq ($(CONFIG_SERIAL),y) ifndef CONFIG_SUN_SERIALdiff -ur /tmp/linux/drivers/char/mem.c linux/drivers/char/mem.c--- /tmp/linux/drivers/char/mem.c Thu Apr 25 20:27:42 
1996+++ linux/drivers/char/mem.c Sun May 12 17:36:26 1996@@ -351,12 +351,14 @@ case 7: filp->f_op = &full_fops; break;+#ifdef CONFIG_RANDOM case 8: filp->f_op = &random_fops; break; case 9: filp->f_op = &urandom_fops; break;+#endif default: return -ENODEV; }@@ -382,7 +384,9 @@ { if (register_chrdev(MEM_MAJOR,"mem",&memory_fops)) printk("unable to get major %d for memory devs\n", MEM_MAJOR);+#ifdef CONFIG_RANDOM rand_initialize();+#endif tty_init(); #ifdef CONFIG_PRINTER lp_init();diff -ur /tmp/linux/drivers/char/ran_stub.c linux/drivers/char/ran_stub.c--- /tmp/linux/drivers/char/ran_stub.c Thu Jan 1 00:00:00 1970+++ linux/drivers/char/ran_stub.c Mon May 13 13:30:54 1996@@ -0,0 +1,48 @@+/*+ * Stub functions for a non-implementation of the random device.+ * This avoids having #ifdef CONFIG_RANDOM scattered through+ * millions of files. Ideally it could have been done as macros in+ * random.h but quota "non-support" does it with stub functions+ * as well, so...+ * Paul Gortmaker+ */++#include <linux/kernel.h>+#include <asm/types.h>++void rand_initialize_irq(int irq)+{+}++void rand_initialize_blkdev(int irq, int mode)+{+}++void add_keyboard_randomness(unsigned char scancode)+{+}++void add_mouse_randomness(__u32 mouse_data)+{+}++void add_interrupt_randomness(int irq)+{+}++void add_blkdev_randomness(int irq)+{+}++/*+ * Currently nfsroot is the only kernel code to call this routine, as it+ * uses random bytes for the BOOTP request. Moan loudly if someone tries+ * to call get_random_bytes() without compiling the random driver in.+ * Alternatively, one could just bin this function, which would cause+ * such broken kernel configurations to fail to link...+ */++void get_random_bytes(void *buf, int nbytes)+{+ printk(KERN_ERR "Random data reqested but random driver not resident.\n");+} | http://lkml.org/lkml/1996/5/13/41 | CC-MAIN-2015-11 | en | refinedweb |
Name | Synopsis | Interface Level | Parameters | Description | Return Values | Context | Examples | Attributes | See Also
#include <sys/ddi.h> #include <sys/sunddi.h> int pm_power_has_changed(dev_info_t *dip, int component, int level);
Solaris DDI specific (Solaris DDI)
Pointer to the device dev_info structure
Number of the component that has changed power level
Power level to which the indicated component has changed().
If this function is called as a result of entry into the driver's attach(9E), detach(9E) or power(9E) entry point, this function must be called from the same thread which entered attach(9E), detach(9E) or power(9E).
The pm_power_has_changed() function returns:
The power level of component was successfully updated to level.
Invalid component component or power level level.), attach(9E), detach(9E), power(9E), pm_busy_component(9F), pm_idle_component(9F), pm_raise_power(9F), pm_lower_power(9F), pm(9P), pm-components(9P)
Writing Device Drivers
Name | Synopsis | Interface Level | Parameters | Description | Return Values | Context | Examples | Attributes | See Also | http://docs.oracle.com/cd/E19253-01/816-5180/pm-power-has-changed-9f/index.html | CC-MAIN-2015-11 | en | refinedweb |
17 March 2011 18:01 [Source: ICIS news]
LONDON (ICIS)--?xml:namespace>
Although crude oil prices in the short term have suffered a negative impact from the earthquake in
Moreover, the country will have to compensate for the power loss left from the nuclear generation and some of it will have to come from oil fired electricity power plants, Lambrecht added.
“Once the nuclear disaster is resolved then reconstruction will begin,” said Harry Tchilinguirian, analyst at BNP Paribas.
When reconstruction begins then there will be an increase in demand for gasoline, fuel for heavy machinery and private power, Tchilinguirian added.
Meanwhile, high levels of radiation at the Fukushima Daiichi nuclear facility are causing further difficulties as attempts continue to stop the six reactors at the site from overheating.
Workers were evacuated as radiation levels are now considered fatal, following a series of explosions at the No 1, No 2, No 3 and No 4 reactors since Saturday. Helicopters are being deployed to continue with cooling operations, pouring seawater onto the reactors.
The International Energy Agency (IEA) estimated
Expectations were of a slight contraction in demand of around 100,000 bbl/day for 2011, but it is likely that due to the reconstruction, demand will be flat and may even increase, Tchilinguirian said.
Meanwhile, crude oil futures were being supported by fears that turmoil in
At 16:50 GMT, Mary Brent was trading at $114.02/bbl, up $3.42/bbl from the previous close, while April WTI was trading at $100.93/bbl, up $2.95 | http://www.icis.com/Articles/2011/03/17/9444949/japan-disaster-domestic-crude-oil-demand-to-rise-in-2011-analyst.html | CC-MAIN-2015-11 | en | refinedweb |
There's some edge cases that could make it somewhat complex. Mostly because you could have lines bigger than your buffer. This will result in two types of reads: those that end in newlines, and those that do not. (and a newline happens to be exactly the symbol that marks the end of the section you want to ignore) You'll need to keep track of whether you're inside a comment or not. In addition you could have reads like "<text>/" followed by a next read that looks like "/<text>". You need to detect that too.
Below is a little example. A little sloppy due to time constraints but it shows the concept I think.
#include <stdio.h> #include <string.h> #include <stdbool.h> #define BUFFER_SIZE (80) int main(void) { FILE *fp = NULL, *fp1 = NULL; char *token_pos = NULL, *copy_pos = NULL; const char *fname = "a1.txt", *fname1 = "a2.txt"; char str[BUFFER_SIZE] = {0}, character = 0; int buffer_length = 0; bool inComment = false; // Open the files. fp = fopen(fname ,"r"); // Opening of the input file failed. if(fp == NULL) { printf("Error on opening input file \"%s\".\n", fname); } else { fp1 = fopen(fname1,"w"); // Opening of the output file failed. if (fp1 == NULL) { printf("Error on opening output file \"%s\".\n", fname1); } else { // Note that feof becomes true after attempting to read out of bounds. // An easier way is probably to use fgets as your loop condition. while (fgets(str, BUFFER_SIZE, fp) != NULL) { // Store the size of the buffer as it is possible it is not fully filled. buffer_length = strlen(str); // Look for a position to start copying from. If we were in a comment this // position starts after the first newline that is read, otherwise from the start. if (inComment) { copy_pos = strchr(str, '\n'); } else { copy_pos = str; } // A position to copy was found. if (copy_pos != NULL) { // We found a position to copy, which is not a comment section by definition. inComment = false; // Look for "//". token_pos = strstr(copy_pos, "//"); // A "//" was found! if (token_pos != NULL) { // Copy everything to our output until that point. token_pos[0] = '\0'; fputs(copy_pos, fp1); // If reading stopped because of a newline, include that one in the output too. if (str[buffer_length - 1] == '\n') { fputc('\n', fp1); } // read didn't include line termination. Set our flag so we know next reads will be part of a comment. else { inComment = true; } } else { // It is possible our read ends ith a single '/'. This could be part of a next read. if (buffer_length > 0 && str[buffer_length - 1] == '/') { // Look at the next character in the stream. 
character = fgetc(fp); // It will be part of a comment! if (character == '/') { str[buffer_length - 1] = '\0'; inComment = true; } // Put it back. (not really needed) ungetc(character, fp); } fputs(copy_pos, fp1); } } } // Close the output file. fclose(fp1); } // Close the input file. fclose(fp); } return 0; }
This should remove comment lines from a file, but it keeps the newline that follows it. I'm not sure if you wanted to remove this newline if the comment is the only thing the line consists of. You'd have to modify the code above (or your own) to achieve that if this is something you do want.
There's some edge cases that could make it somewhat complex.
Yes, but they have nothing to do with long lines. ;)
Mostly because you could have lines bigger than your buffer.
Actually, this is a good place for an extended fgets() that isn't restricted by a buffer size:
#include <stdio.h> #include <stdlib.h> #include <string.h> /* @description: Reads a string from the given input stream up to and including one of the characters in a string of delimiters. @return: The length of the final string. @notes: The result string must be released with free() from stdlib.h. */ size_t readline(FILE *in, char **s, const char *delim) { #define READLINE_CHUNK 16 size_t capacity = 0, size = 0; /* Buffer sizing */ char *dst = NULL, *temp; /* Buffer contents */ int ch, done = 0; /* Character processing */ while (!done) { /* Resize with an extra chunk if necessary */ if (size == capacity) { capacity += READLINE_CHUNK; if (!(temp = (char*)realloc(dst, capacity + 1))) break; dst = temp; } /* Fill in the newest chunk with string data */ while (!done && size < capacity) { if ((ch = getc(in)) != EOF) dst[size++] = (char)ch; done = (ch == EOF || strchr(delim, ch)); } } /* Finalize the string */ if (dst) dst[size] = '\0'; *s = dst; /* Save the string */ return size; #undef READLINE_CHUNK }
As far as edge cases go, I'd be more worried about true edge cases for comments. Two that come to mind are // in places where they don't start a comment (such as inside a string) and line continuation:
// This is a comment \ this is still the same comment!
Though I suspect the OP's project doesn't need to be absolutely thorough in terms of all possible cases.
Yes, but they have nothing to do with long lines. ;)
They do in the context I posted that in; you cannot guarantee that a comment found within read data ends within that same data.
Actually, this is a good place for an extended fgets() that isn't restricted by a buffer size:
Yeah, that's probably useful.
As far as edge cases go, I'd be more worried about true edge cases for comments. Two that come to mind are // in places where they don't start a comment (such as inside a string) and line continuation. Though I suspect the OP's project doesn't need to be absolutely thorough in terms of all possible cases.
Adding that would introduce quite some complexity yes. I also don't think it's what the OP wants though, given his minimalistic description of his problem (When interpreted more freely his title may imply him wanting to remove comment blocks too (with this I mean "/* */" ) even though he only mentions a subset of what comments could be in the part added in parenthesis).
Unfortunately, it would have to be even more complicated that that. You have to be able to intelligently ignore
// when it occurs in quotes, otherwise, you would end up cutting strings apart. Take the following code, for example:
#include <stdio.h> int main(void) { printf("I like to // eat apples.\n"); return 0; }
Without a check to make sure the double slashes aren't in quotes, you would end up with this after running your program:
#include <stdio.h> int main(void) { printf("I like to return 0; }
This would, of course, generate a compiler error.
| https://www.daniweb.com/software-development/c/threads/441542/program-to-remove-all-comments-comments-start-with- | CC-MAIN-2015-11 | en | refinedweb |
Rahul wrote:
HiUsers who want to take a look at this new Echo theme would want a quick and easy way to install and check it out.Currently all I see is a list of SVG files in a wiki page. Can we put up a tarball appropriately packaged or a RPM package that is updated frequently perhaps?Currently all I see is a list of SVG files in a wiki page. Can we put up a tarball appropriately packaged or a RPM package that is updated frequently perhaps?] J5 wrote: Attached is the python script used to pull them all from the wiki. I whipped it up pretty quickly and there are several areas where it can be improved if someone wants to work on it: * It uses a base directory of echo_art/ this should be able to be overridden by a command line switch * If the directory exists it should confirm with the user and thendelete or move the old directory * Statistics could be added as well as a way to diff previous pulls to
see what has changed * There is already a filter function that returns a new name and directory based on the icons name. Right now the script just checks for 'image-missing' and returns None indicating that icon should not be downloaded. This filter function can be expanded to filter icons into specific directories for easier packaging provided you have a consistent naming scheme on the wiki. * Inputs need to be filtered to make sure someone doesn't add input on the wiki that would cause the script to do bad thing on your machine. Right now it is not an issue but if the filter function gets more complex (such as using a part of a file name as the directory) you might want to scan those directory strings to make sure characters like ~ or .. don't get in. * Someone could add a specfile generator and auto packager so that new sets of icons can be tested easily. Hope this helps you guys out and I can't wait to see the new icons in action. -- John (J5) Palmieri <johnp redhat com>
#!/bin/env python ## echo_pull - pulls echo icons off the fedora wiki ## Copyright (C) 2006 Red Hat, Inc. ## Copyright (C) 2006 John (J5) Palmieri <john. base_url='' echo_dir='wiki/Artwork/EchoDevelopment' base_dir='echo_art' import sys import os import urllib2 import HTMLParser import re href_re=re.compile('\/(.*)\?.*target=(.*[(\.svg)(\.png)])') def _mkdir(newdir): """works the way a good mkdir should :) - already exists, silently complete - regular file in the way, raise an exception - parent directory(ies) does not exist, make them as well """ if os.path.isdir(newdir): pass elif os.path.isfile(newdir): raise OSError("a file with the same name as the desired " \ "dir, '%s', already exists." % newdir) else: head, tail = os.path.split(newdir) if head and not os.path.isdir(head): _mkdir(head) #print "_mkdir %s" % repr(newdir) if tail: os.mkdir(newdir) class ArtParser(HTMLParser.HTMLParser): def handle_starttag(self, tag, attr): if tag == 'a': for a in attr: if a[0] == 'href': match = href_re.match(a[1]) if match: self.filter_and_download (base_url, a[1], match.group(2)) def download(self, url, directory, filename): if not os.path.isdir(directory): _mkdir(directory) file_path = os.path.join(directory, filename) file = os.popen ('wget "%s" -O %s'%(url, file_path)) error = file.close() if error: sys.stderr.write ('Error downloading %s to %s\n'%(url, file_path)) def filter(self, file): if file.startswith('image-missing'): return else: return (base_dir, file) def filter_and_download (self, base_url, resource, file): art_url = "%s%s"%(base_url, resource) filter = self.filter(file) if filter: (filtered_dir, filtered_file) = filter self.download(art_url, filtered_dir, filtered_file) def main(): #get the main page url = '%s%s'%(base_url, echo_dir) try: data = urllib2.urlopen(url).read() except urllib2.HTTPError, e: print "HTTP error: %d" % e.code exit(1) except urllib2.URLError, e: print "Network error: %s" % e exit(2) #pull out <a > tags with a target graphics in the href 
p = ArtParser() p.feed(data) p.close() main() | https://www.redhat.com/archives/fedora-art-list/2006-August/msg00134.html | CC-MAIN-2015-11 | en | refinedweb |
Automating the world one-liner at a time…
Fellow Scripters,
We’ve heard that some of you think our PowerShell sample scripts on the TechNet Script Center stink. Do you have any idea how much that hurts? We slave over hot keyboards day in and day out trying to help you ungrateful scoundrels learn more about scripting. And how do you repay us? Nasty emails and smear campaigns, that’s how….just a sec…what’s that?...what do you mean they have a point? Really? Oh. OK.
Ahem. Yes, well then…as I was saying, in my official capacity as someone writing an email message to a fellow named Lee, that I hope he’ll post on a blog for me, I’m here to explain why the PowerShell sample scripts on the Script Center are not at all elegant and basically look like VBScripts translated into PowerShell scripts. For those of you who haven’t experienced the horror first hand (you might want to chase your offspring out of the room at this stage), here’s one of the offending scripts.
$strComputer = "."$colItems = get-wmiobject -class "Win32_LoadOrderGroup" -namespace "root\CIMV2" `-computername $strComputer
foreach ($objItem in $colItems) { write-host "DriverEnabled: " $objItem.DriverEnabled write-host "GroupOrder: " $objItem.GroupOrder write-host "Name: " $objItem.Name write-host}
If you happen to be into this sort of vulgarity, here’s a link to the entire repository of the beasts:.
So what, exactly, were we thinking? And when are we going to get our act together and post some elegant scripts? Don’t we know that PS:>get-wmiobject Win32_LoadOrderGroup would have accomplished the same thing and demonstrated some of the Power in PowerShell?
Here’s the deal. Our performance is judged by the number of lines of script we write. We needed to figure out a way to keep the line count up and the bonuses rolling in. (you would not believe what it costs to fill up a Ferrari these days!). So, we told management that we wanted to create PowerShell scripts that looked exactly like the typical VBScripts you find on Script Center.
We spun this tale about how we wanted to help ease our beloved Script Center customers into the PowerShell world by showing them how their existing knowledge translates easily to PowerShell. They can still set a strComputer variable to “.” to reference the local machine, they still have a For Each loop, there’s still a statement that let’s them display information on the screen. Heck, if they look at the two scripts side-by-side, they’ll surely see the parallel statements and thereby be gently introduced to a bit of PowerShell syntax.
On Error Resume NextstrComputer = "."Set objWMIService = GetObject("winmgmts:" _ & "{impersonationLevel=impersonate}!\\" & strComputer & "\root\cimv2")Set colItems = objWMIService.ExecQuery("Select * from Win32_LoadOrderGroup") For Each objItem in colItems Wscript.Echo "Driver Enabled: " & objItem.DriverEnabled Wscript.Echo "Group Order: " & objItem.GroupOrder Wscript.Echo "Name: " & objItem.Name Wscript.EchoNext
Of course we expected to be found out eventually and, apparently, eventually has arrived early this year. So, what are we going to do about it? We’ll we’re not going to take down those existing scripts anytime soon. (we need the line count, remember?). We are, however, working another scam. We’re soliciting elegant PowerShell scripts from internal folks at Microsoft who do have a clue. This is brilliant. We claim that we want to “leverage their expertise” or benefit from their “understanding of real-world customer scenarios” or some other buzz-word-laden suck up sentence and they do our work for us.
Unfortunately, we still have to organize what we get and do the production work required to get the samples posted. We’re a tiny little team and we’re currently working on improving the command line help in PowerShell. Though we’ve tried, we can’t figure out a way to make that work easier or to get someone else to do it for us. So, the new PowerShell script samples are on hold until we’re done.
Now, in case you haven’t picked it up, we’re pretty open to benefiting from the hard work of other people. If you’ve got some elegant PowerShell scripts that you’d like to see on the Script Center, we’d like to help you help us. Just click on the following link, click Submit a Script, agree to donate all of your future earnings to us and fill out the form. No promises, but we’ll do our best to post the submissions (only if they’re elegant of course).
Many thanks (ungrateful scoundrels),
Dean Tsaltas
Official Spokesman for the Script Center team ….on this one particular, inconsequential issue….only because the rest of the team didn’t have time and Lee demanded something by the end of the day
[Edit: Fixed some typos]PSMDTAG:TYPE:WMI:
Sorry about that, Chuck. The correct name is about_shell_variable. I've fixed it in the help and the fix will be available in the next release.
To restart your computer, use the
"shutdown -r" command. You can type it in Windows PowerShell, cmd.exe, or in the Start/Run box.
To see all of the parameters of the shutdown command, type shutdown -?.
I use shutdown to restart my remote machines. It has a -m \\computername parameter.
Again, sorry for the inconvenience.
June Blender [MSFT]
Windows PowerShell Documentation
PowerShell is fantastic! I like the sample scripts, they actually helped my create a build process for a .NET 2.0 web application complete with deployment, zip and other functionality. Ok, so most of the work is handled by MSBuild, but it wouldn't happen without my PS script driving it by collecting SVN revision number, the build number, trunk or branch build and the branch name. Then, reading a config file (in good ol' XML) and filling some other variables to be passed to MSBuild.
I looked into doing this in cmd and VBS, and the choice was clear. PowerShell ruled out all the others.
Thanks for everything, PowerShell Team! Here's to the future!
I have to say that VBScripters are the majority. Most of us have been writing them for years and are adapting to the new technology that PowerShell presents.
I for one think that the examples are just fine.
I think you would get your best mileage by hiring a die-hard Linux-UNIX type that knows shell scripting and a few scripting languages (one of them must include Perl). Then tie him/her up in a dungeon and wipe him/her until s/he cranks out Powershell code to do stuff that doesn't involve ADSI or WMI.
On more serious note, what is sorely needed is real world scripts on system administration, chores that are commonplace in UNIX. This involves text manipulation, analyzing events, generating reports, etc. Typical chores that would be done with grep, sed, awk or just perl, like frequency hashes, trend analysis.
It seems Powershell oriented more of neat playground for object introspection and easier access to complex APIs, but not focused a lot on real shell usage. So more articles to flesh out the language would be well received.
- Joaquin
I totally agree that there should be other script examples than just the same WMI scripts over and over again.
I mean, the only commands we've learned about using the example scripts are "get-wmiobject" and "write-host". Isn't PS more versatile than that?
I think everyone has the same opinion. The current scripts are useful for the beginner and the person looking to move to PS from VB. Additional examples of real world administration would be most appreciated.
On another note you guys have been doing outstanding work. I and i am sure everyone else truely appreciate your efforts.
Come on guys, where are the new scripts. I'll think I'll stick to VBScript for now, until we get some more samples. Its been over a year!!!
This blog does stretch in time to scary degree. Assuming a transition to PS is ever going to happen, the Technet script repository would be a good place to start. Rather than having a node dedicated to PowerShell in the repository, each of the existing scripts should be re-classified as "legacy" and accompanied by a PS equivalent, preferably an "elegant" one.
Is anyone home?
Here is what i think...if someone thinks that the scripts on MS is crap then its their opinion and to prove it they should post on the web. i find MS scripts to be very good for learning. Thanks guys you are doing a great job. lot of those scripts have solved problems for me at work.
So what happens when the get-wmiobject call fails?
One problem I have with scripting under Windows (VBS) is that the examples have no error checking in them. Which means in the wild you find admins have hacked together scripts that can really screw things up if a single call fails or will simply fail to run if a call fails -- the combination of which is really miserable, and silly. It would be nice if there was a greater emphasis improving the robustness of scripting under windows, give examples with real error checking in them, show how to recover from minor failures.
My 2c from the wild...
I got 2 *.ps1 scripts
Master.ps1 - which has couple of functions
Test.ps1 - test script file which calls Master.ps1
both are in same directory and I call the .\Master.ps1 file and it works fine, but the functions are not exposed.
but if I execute those functions in the PowerShell editor its exposed.
Cant I run a ShellScript within a Script file.
Can someone please throw some light on my query
Insert the command that requires you to declare variables
Declare variables named x, y, and z.
Assign the values 10 to x and 20 to y.
Add x to y and store the result in z.
Test if x + y equals 40.
If it does, print out the message “x + y = 40”.
If it does not, print out “x + y = ##” (Where ## equals the total.)
Print out an “End of Script” message.
guilty as
that's a one liner
the link to the repository is broken. maybe you could write a script to fix that | http://blogs.msdn.com/b/powershell/archive/2006/05/11/595583.aspx?PageIndex=2 | CC-MAIN-2015-11 | en | refinedweb |
kricker Wrote:I am trying to get my Xbox to send a WOL to my server when XBMC starts. I understand how to use autoexec.py. The problem lies in trying to find the correct WOL script. I have the tweaked WOL, but it opens a GUI to select the PC to wake. This is great if I have other computers to wake, but I want it to always wake the server and not show a GUI when running at startup. All my searches for the older WOL showed either dead links or the code I lifted from the posts did nothing. Can anyone please help me out?
# Wake-On-LAN
#
# Written by Marc Balmer, marc@msys.ch,
# This code is free software under the GPL
import struct, socket
def WakeOnLan(ethernet_address):
# Construct a six-byte hardware address
addr_byte = ethernet_address.split(':')
hw_addr = struct.pack('BBBBBB', int(addr_byte[0], 16),
int(addr_byte[1], 16),
int(addr_byte[2], 16),
int(addr_byte[3], 16),
int(addr_byte[4], 16),
int(addr_byte[5], 16))
# Build the Wake-On-LAN "Magic Packet"...
msg = '\xff' * 6 + hw_addr * 16
# ...and send it to the broadcast address using UDP
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
s.sendto(msg, ('<broadcast>', 9))
s.close()
# Example use
WakeOnLan('aa:bb:cc:dd:ee:ff') # My PC
kricker Wrote:That is the exact code I tried to lift and use. When I run it nothing happens. If I run the tweaked WOL script it works properly.
kricker Wrote:I looked for Nuka's Plug-in maker and could not find it. Is it only in the SVN? So I need to find it there? I'd love for it to only send the WOL when the share is requested. That is even better than always waking the server at boot up.
<onclick>XBMC.RunScript(Q:\scripts\Wake\wakePVR.py)</onclick> | http://forum.kodi.tv/showthread.php?tid=24843&pid=125892 | CC-MAIN-2015-11 | en | refinedweb |
ok, so I'm realllyyyyy stuck at the moment. I started coding something, but I've realised that I've done it all wrong and now I literally have no idea where to start.
I'm not asking anyone to write a program for me, etc. But I do need someone to kick start me and give me some sort of idea on where to even begin.
Here's the website for the actual assignment: Java Programming (CITS1200)
Here's what *little* code I have (Some of it not even related to how the assignment is supposed to be done (Mainly supplying it so that if anyone does decide to help, they may be able to use what I have there and give me some sort of idea what my constructor should look like or something?):
I'm even having trouble understanding what exactly the assignment means, in particular the constructor and the moveTile method.I'm even having trouble understanding what exactly the assignment means, in particular the constructor and the moveTile method.public class FifteenPuzzle { //Instance Variables public SimpleCanvas sc; public int x; public int y; public int[][] initialGrid; //Constructor public FifteenPuzzle(int[][] initial java.awt.Color white = new java.awt.Color(255,255,255); sc.setForegroundColour(white); for (int j=0; j < x*100;j++) sc.drawLine(j,(y-1)*100,j,y*100); } //Methods public void moveTile (int x, int y){ } private void createTile (int x){ sc.drawLine(xPos,yPos,xPos+100,yPos); sc.drawLine(xPos+100,yPos,xPos+100,yPos+100); sc.drawLine(xPos+100,yPos+100,xPos,yPos+100); sc.drawLine(xPos,yPos+100,xPos,yPos); } //public int [][] getCurrentGrid(){ //} }
Any help is very much appreciated.
Need help ASAP. Been struggling with it for a few days now, and have only just realised how wrong I was and its due tomorrow (Yay for deadlines)
Regards, Tom.
P.S Heres the code for simpleCanvas if you need it:
/** * This is a stripped-down version of the Canvas class from the * BlueJ team, retaining only the most fundamental features. * * @author BlueJ team with modifications by Gordon Royle * @version July 2003 */ import javax.swing.*; import java.awt.*; import java.awt.event.*; public class SimpleCanvas { private JFrame frame; private CanvasPane canvas; private Graphics2D graphic; private Image canvasImage; private boolean autoRepaint; /** * Creates and displays a SimpleCanvas of the specified size * with a white background. The client specifies whether repainting * after a drawing command should be manual or automatic. * * @param title title for the window * @param width the desired width of the SimpleCanvas * @param height the desired height of the SimpleCanvas * @param autoRepaint true for automatic repainting * */ public SimpleCanvas(String title, int width, int height, boolean autoRepaint) { frame = new JFrame(); canvas = new CanvasPane(); frame.setContentPane(canvas); frame.setTitle(title); canvas.setPreferredSize(new Dimension(width,height)); frame.pack(); Dimension size = canvas.getSize(); canvasImage = canvas.createImage(size.width,size.height); graphic = (Graphics2D) canvasImage.getGraphics(); graphic.setColor(Color.white); graphic.fillRect(0,0,size.width,size.height); graphic.setColor(Color.black); frame.setVisible(true); frame.setVisible(true); // frame.show(); frame.setVisible(true); this.autoRepaint = autoRepaint; } /** * Creates and displays a SimpleCanvas with a white background and * with automatic repainting after drawing commands. * * @param title title for the window * @param width the desired width of the SimpleCanvas * @param height the desired height of the SimpleCanvas * */ public SimpleCanvas(String title, int width, int height) { this(title,width,height,true); } /** * Creates and displays a SimpleCanvas of size 400x400 with the * default title "SimpleCanvas" and with automatic repainting * enabled. 
* * @param title title for the window * @param width the desired width of the SimpleCanvas * @param height the desired height of the SimpleCanvas * */ public SimpleCanvas() { this("SimpleCanvas",400,400); } /** * Draws a line on the SimpleCanvas between two points. * * @param x1 x-coordinate of the first point * @param y1 y-coordinate of the first point * @param x2 x-coordinate of the second point * @param y2 y-coordinate of the second point * */ public void drawLine(int x1, int y1, int x2, int y2) { graphic.drawLine(x1,y1,x2,y2); if (autoRepaint) canvas.repaint(); } /** * Changes the colour for subsequent * drawing on this SimpleCanvas. * * @param newColour the new drawing colour * */ public void setForegroundColour(Color newColour) { graphic.setColor(newColour); } /** * Gets the colour currently used for * drawing on this SimpleCanvas. * */ public Color getForegroundColour() { return graphic.getColor(); } /** * Changes the font for subsequent String * drawing on this SimpleCanvas. * * @param newFont the new Font * */ public void setFont(Font newFont) { graphic.setFont(newFont); } /** * Gets the font currently used for * String drawing on this Canvas */ public Font getFont() { return graphic.getFont(); } /** * Draws the specified String at the specified * location on this SimpleCanvas */ public void drawString(String text, int x, int y) { graphic.drawString(text, x, y); if (autoRepaint) canvas.repaint(); } /** * Sets the repaint mode to either manual or automatic. * * @param autoRepaint automatic repainting if this is true * */ public void setAutoRepaint(boolean autoRepaint) { this.autoRepaint = autoRepaint; } /** * If this SimpleCanvas does not automatically repaint * after each drawing command, then this method can be * used to cause a manual repaint. */ public void repaint() { canvas.repaint(); } /** * Causes execution to pause for the specified amount of time. 
* This is usually used to produce animations in an easy * manner, by repeatedly drawing, pausing, and then redrawing * an object. */ public void wait(int millis) { try { Thread.sleep(millis); } catch (InterruptedException ie) { System.out.println("Interrruption in SimpleCanvas: "+ie); } } public void addMouseListener(MouseListener ml) { canvas.addMouseListener(ml); } class CanvasPane extends JPanel { public void paint(Graphics g) { g.drawImage(canvasImage,0,0,null); } } } | http://www.javaprogrammingforums.com/collections-generics/401-problem-implementation-fifteen-puzzle-2d-arrays.html | CC-MAIN-2015-11 | en | refinedweb |
I have a bunch of stock data in CSVs that I am backtesting trading strategies on. The problem is that my strategy buys at the open market price if a signal was found yesterday; unfortunately my data is released only at the end of the day, meaning I wouldn't know whether I was supposed to enter a trade until after market close when the data comes out. But because my strategy trades solely on yesterday's data, I think a workaround is to simply append a record to the end of my data representing the next trading day and just show that day's price as yesterday's close throughout, so as not to mess with the profit/loss. So for instance, say one of my CSVs looks like this (albeit not in this format; the actual files have no headers and are comma delimited):
Date | Open | High | Low | Close | Volume
I would like to append the following record:
20130105 | 12.40 | 12.40 | 12.40 | 12.40 | 89690
So I need to increment the date 1, then copy the prior close to the other pricing fields and I think it would be best to just keep the volume the same. This would loop through the folder daily as to add a dummy field to all the files so I can get signals in a more timely manner. And then at the end of each day I have another batch file I already got working to clear out my data folder and overwrite with the true pricing data.
This might be easier than you thought or I may have oversimplified your need, but I read your question as you'd like to append a CSV file to an existing file and on Windows and most command shell systems a command would read like:
type NEWCSVFILENAME >> EXISTINGCSVFILE    (use cat instead of type on UNIX, LINUX and OSX)
That is the command, but there is an amount of methodology required since you're also paying for this data it has to be respected. As a trader, you already have some method of backing up your PC data but I encourage you preserve originals, so in the form of a batch file on most windows PC's try this series of commands saved as a batch file:
@echo off
title=Pricing Data Appender executed on %date% at %time% by %username%
copy EXISTINGCSVFILE EXISTINGCSVFILE.%date%_bak && attrib +r EXISTINGCSVFILE.%date%_bak
type NEWCSVFILENAME >> EXISTINGCSVFILE    (cat on UNIX, LINUX and OSX)
title=Pricing Data Appender COMPLETED on %date% at %time% by %username%
And that would make a backup copy of your existing pricing database before editing it and, as a minor safeguard, sets it to read-only attributes to at least warn in the worst circumstance that the file shouldn't be deleted and, from your shell and various utilities, protect it from deletion.
If you are being subjected to SarBox or any other regulation that even afflicts small boutique traders, there are a few other commands which will record an independent entry in your systems event log of the process as required by SarbaneOxley Section 404 and the Dodd-Frank act separation of of duty clauses.
I'm interested in helping more, so if you need to elaborate or want to test this and confirm / debug, today is a good day to let me know.
If Data.csv contains:
and you want to append the following line to it every time the script is run:
20130117,12.40,12.40,12.40,12.40,89690
(i.e. "Current Date (YYYYMMDD),Prev Close,Prev Close,Prev Close,Prev Close,Prev Volume")
Try something like this saved as Update.bat:
@echo off
for /f "usebackq tokens=1,5,6 delims=," %%i in (Data.csv) do (
REM set /a date=%%i+1 // Wrong way to increment a date!
set close=%%j
set volume=%%k
)
echo %date:~6%%date:~3,2%%date:~0,2%,%close%,%close%,%close%,%close%,%volume%>>Data.csv
If you want to run this on multiple CSVs in a directory, replace both instances of Data.csv above with %1. Now to update a single file i.e. Data.csv, you'd use the command:
update Data.csv
To update multiple CSVs, you'd use the command:
for %f in (*.csv) do @update "%f"
Note #1: The exact Date format depends on what you've specified in the Region and Language Control Panel applet. See here for details.
Note #2: I suppose date incrementing and validation logic (including leap year calculation etc.) could be implemented in batch as well (yeah, perhaps if a gun were pointed at my head!), but at that point I would start to question the asker's sanity (besides doubting my own) and seriously suggest that a better (read actual) programming language be used to accomplish this.
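For what it's worth, in an actual scripting language the whole task is only a few lines; a hedged Python sketch following the question's file layout (no header row, comma-delimited, `YYYYMMDD,Open,High,Low,Close,Volume`):

```python
import csv
from datetime import datetime, timedelta

def append_dummy_row(csv_path):
    """Append a next-day row that repeats the previous close as OHLC
    and carries the previous volume forward."""
    with open(csv_path, newline="") as f:
        last = list(csv.reader(f))[-1]  # e.g. 20130104,12.10,12.50,12.00,12.40,89690
    date, close, volume = last[0], last[4], last[5]
    next_day = datetime.strptime(date, "%Y%m%d") + timedelta(days=1)
    row = [next_day.strftime("%Y%m%d"), close, close, close, close, volume]
    with open(csv_path, "a", newline="") as f:
        csv.writer(f).writerow(row)
    return row

# To run it over a whole data folder:
# for name in glob.glob("data/*.csv"): append_dummy_row(name)
```

Note the naive `+ 1 day` lands on weekends and holidays too; skipping non-trading days would need a calendar, which is left out of this sketch.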
@echo off
start c:\cygwin\bin\bash.exe -1

#!/bin/bash
cd "C:/Users/research1/Desktop/Test"
ls | while read csvfile
do
    IFS=, read olddate open high low close volume < <(tac $csvfile)
    read newdate < <(date -d "$olddate + 1 day" +%Y%m%d)
    echo $newdate,$close,$close,$close,$close,$volume >> $csvfile
done
If you are familiar with C++, then you can probably handle some C#.
There is no error checking, it changes you csv file without taking a backup and if the csv file is large then it isn't particularly efficient (as it reads the whole file into memory), but should give you the general idea.
If you have Windows 7, then you should have the dotnet framework already, so you can copy this into a file and compile it from the command line using csc filename.cs and it should produce filename.exe.
using System.Collections.Generic;
using System;
using System.Linq;
public class Sample
{
    public static void Main()
    {
        string path = @"c:\temp\temp.csv";
        var lines = System.IO.File.ReadAllLines(path).ToList();
        var lastLine = lines[lines.Count - 1];
        var sections = lastLine.Split(new char[] { ',' });
        DateTime lastDate = DateTime.ParseExact(sections[0], "yyyyMMdd", null);
        DateTime newDate = lastDate.AddDays(1);
        string newLine = String.Format("{0},{1},{1},{1},{1},{2}",
            newDate.ToString("yyyyMMdd"), sections[4], sections[5]);
        lines.Add(newLine);
        System.IO.File.WriteAllLines(path, lines.ToArray());
    }
}
However, since you are going to need to do a bit of editing I would recommend downloading LinqPad and just copy the main method into it (and select C# Program) and hit run which should compile it and run it.
“I'm going to fucking bury that guy. I've done it before and I'll do it again. I'm going to fucking kill Nintentdorulez”
Pay Recieved
Thank you muchly, Plot #4 is now yours.--The One and Only Czar Yah 17:53, 16 June 2006 (UTC)
League of Benson
You're in the League! Benson 22:30, 7 June 2006 (UTC)
Unpocalypse
I was poking around the most linked to pages, when I noticed Unpocalypse was rather high up, even higher then the Main Page. I was shocked. When I first created the page nothing linked to it. So I tried to figure out how come it now had 279 (as of this message) pages linking to it. After poking around, I discovered that you had added it to your signature. A was amazed! When it was first nominated, Unpocalypse was resoundingly voted down due to its many inside jokes (it relies upon them for funny). I was wondering why you added it to your sig. I am highly flattered, and am not suggesting at all that you should remove it :). If this means that you have taken an intrest in the article and would like to see it featured, by all means nominate it whenever you like. -
I have been on the forefront of the Freedom of Expression/{{C|template}} fight that's underway, currently because people believe that to be the reason Google removed us. I had another theory, but not being a gmail user, I can't test it out; until I noticed that you were one while looking to see if anyone had tried google on their forums. If i could pester you, could you try Running a diagnostic (preferably with Uncyc as the actual thing)? And if that's not a problem, could you look at their Automatic URL Removal to see how easy it is to remove someone else's website? I hate to ask a lot of you, good sir, but I'm wondering if there's more to this than google. Thanks, sir. --epynephrin 21:53, 7 June 2006 (UTC)
- Well, I'm not a webmaster here, and it says this is supposed to only be done by the webmaster. And there's some sort of verification thing to be done by putting some file in the root of the site. Which I certainly can't do. Try asking chron. --User:Nintendorulez 21:56, 7 June 2006 (UTC)
- well, my theory was that someone else got jealous of our success, and deleted us from the google registry, but that quashes my theory altogether. Oh well, wasn't all that attached to it... --epynephrin 22:52, 7 June 2006 (UTC)
Identity Theft
Sorry for the gratuitous Barney reference, but I wanted to let you know that I deleted the "User:Nintendorulez/B" redirect to "BBBBBBBBB!", just for the sake of general cleanup. --
Sir BobBobBob ! S ? [rox!|sux!]
22:43, 3 Nov 2005 (UTC)
Sophia Timeline
You can enlarge images thusly with wikicode --Splaka 22:13, 6 Dec 2005 (UTC) : [[Image:Seizure.gif|400px|thumb]]
- It seems to only work with thumbnails though, since this won't get bigger:
- Okay, thanks. *kicks wikimedia software* You and your thumbnails! Although perhaps thumbnails might be even more painful to the eye for this template... --[[User:Nintendorulez|Nintendorulez | talk]] 23:01, 6 Dec 2005 (UTC)
Euroipods
If you recall, you are not allowed to touch Euroipods for any reason, whatsoever. I have reverted your edits and we're considering how to punish you for failing to follow the rules.
» Brig Sir Dawg | t | v | c » 03:03, 3 February 2006 (UTC)
- There's an established policy which has been put into effect. Cheers. --KATIE!! 03:09, 3 February 2006 (UTC)
- Right purge your browser cache and check euroipods--The Right Honourable Maj Sir Elvis UmP KUN FIC MDA VFH Bur. CM and bars UGM F@H (Petition) 22:32, 3 February 2006 (UTC)
IRC
The admins wish to discuss matters of great importance with you. Please come on to IRC, as the topic is about your special rules and it is of the upmost importance that you come now. --KATIE!!19:56, 11 February 2006 (UTC)
Blasphemy! Oh well... all in good fun? --KATIE!! 21:23, 11 February 2006 (UTC)
Award!- Sir Sikon [formerly known as Guest] 12
Encyclop*dia Dramatica isn't censorship, the asterisk allegedly stands for a copyrighted letter that cannot be used on this site because of the fear of DMCA warnings. - Sir Sikon [formerly known as Guest] 16:50, 2 March 2006 (UTC)
- Hence making it all the more offensive if we leave the copyrighted letter there. It's an expletive, is it not?
Cthulhu
Since we were able to trace the excellent question back to you, Cthluhu has asked that you be given this as a token of his gratitude.
--<<
>> 23:55, 3 March 2006 (UTC)
And, while I'm at it...
I appreciate your vote AND help with Do NOT click any links!. I think it's really awesome how good of a sport you've been about the Euroipods torture you've recieved. And for not holding my jokes about it against me. :) Anywho, cheers.--<<
>> 19:35, 5 March 2006 (UTC)
- By the way, Nin. I don't know if you've noticed, but many users have cited the "clicking tangent" as their favorite part of this article. Good show! If I get featured article status, you're getting a ninjastar for that.--<<
>> 17:22, 13 March 2006 (UTC)
As promised.--<<
>> 02:59, 5 April 2006 (UTC)
- ^_^ No problem. It just seemed fitting to turn the pages links into further things, since obvoiusly you aren't supposed to click links, and I wound up going overboard, ard now we have that nice, long piece of madness. --[ 21:07, 16 April 2006 (UTC)
- You're not the sharpest tool in the shed, are you? Edit away on the main namespace page, but heed the comment and don't revert the discussion page. - Nonymous 21:10, 16 April 2006 (UTC)
- Well what was what appeared to be part of the main series doing on a discussion page? Whenever there's not a link, it's already established that you simply edit the URL. --User:Nintendorulez 21:23, 16 April 2006 (UTC)
- I'm not going to spell it out to you, but something about that page is different. It is a clue to a mystery, and not part of the main series. - Nonymous 21:26, 16 April 2006 (UTC)
- Not part of the main series? In that case, it ought to go elsewhere! --User:Nintendorulez 10:30, 17 April 2006 (UTC)
NRV vs. QVFD
Hi Nin. Whilst I appreciate your help in tagging short, worthless articles, please try to better judge which of the above options is more appropriate. I've just deleted a bunch of stuff, including a one-word article that you NRV tagged:
- (Deletion log); 18:22 . . Codeine (Talk | contribs | block) (deleted "Megalon": content was: 'A giant cockroach{{NRV|13:57, 24 April 2006 (UTC)}}')
- (Deletion log); 18:22 . . Codeine (Talk | contribs | block) (deleted "Cotana": content was: 'cutey{{NRV|14:13, 24 April 2006 (UTC)}}')
...for example. These two (and others like them) should go straight to QVFD - there's very little to suggest that the creator is intending to expand them, and they don't need to hang around cluttering up the Special:Shortpages for a whole week. NRV is generally for articles that could stand an outside chance of being saved. The guidelines for both are at Uncyclopedia:No Redeeming Value and Uncyclopedia:QuickVFD if you're uncertain, and there's also a VD discussion about it here. Cheers! -- Sir Codeine KUN VFP Bur. MDA NS UotM CM +S (Harangue) 17:40, 24 April 2006 (UTC)
- Ah. Yeah. I suppose I should use that more often... *coughI'malazyassholeandIfindtheonelittletemplatemucheasierandfastercough* -!
Heh, I must admit that you crossed my mind when I was trying to invent controversial comments and people for my book cover. :) I had a feeling you might enjoy seeing RC's Featured!™ catchphrase on there, though I wasn't sure if you'd actually read the article. I'm glad you found it in time. :) Thanks for the vote. ~ T. (talk) 04:47, 2 June 2006 (UTC)
- The joke never gets old. Never. Ever. ^_^ --User:Nintendorulez 11:39, 2 June 2006 (UTC)
- It will be passed on from father to son for all certainly will. --User:Nintendorulez 11:42, 2 June 2006 (UTC)
A gift for you.
4chan has a Zelda porn thread going on right now. Jackpot! Peach as well. I know they get them a lot, but I was surprised they're there at the same time.
And a DOAX thread, but it's not very interesting.
There's also one going on for Cammy if you like other fighting games.
--Monika 18:23, 17 June 2006 (UTC) | http://uncyclopedia.wikia.com/index.php?title=User_talk:Nintendorulez&curid=44540&diff=855717&oldid=852903 | CC-MAIN-2015-11 | en | refinedweb |
template< typename Sequence > struct pop_front { typedef unspecified type; };
pop_front performs a removal at the beginning of the sequence. The algorithm returns a new sequence which contains all the elements in the range
[next< begin<Sequence>::type >::type, end<Sequence>::type). The result sequence preserves all the functional and performance characteristics of the original
Sequence, except its size and identity.
#include "boost/mpl/pop_front.hpp"
Amortized constant time [1].
typedef list<long>::type types1; typedef list<int,long>::type types2; typedef list<char,int,long>::type types3;
typedef pop_front<types1>::type result1; typedef pop_front<types2>::type result2; typedef pop_front<types3>::type result3;
BOOST_STATIC_ASSERT(size<result1>::type::value == 0); BOOST_STATIC_ASSERT(size<result2>::type::value == 1); BOOST_STATIC_ASSERT(size<result3>::type::value == 2);
BOOST_MPL_ASSERT_IS_SAME(front<result2>::type, long); BOOST_MPL_ASSERT_IS_SAME(front<result3>::type, int);
[1] The algorithm is provided only if the sequence can meet the stated complexity requirements.
Extensible Sequence,
erase,
push_front,
front,
pop_back | http://www.boost.org/doc/libs/1_31_0/libs/mpl/doc/ref/Reference/pop_front.html | CC-MAIN-2015-11 | en | refinedweb |
Created on 2008-08-22.16:52:23 by jy123, last changed 2009-04-05.17:41:16 by thobes.
When I try to install the latest version of Django on the latest Jython version, I get "couldn't make directories". It seems ntpath.isdir failed.
This is a bug/"feature" in the MSVCRT stat function, which Jython uses through jna-
posix.
#include <stdio.h>
#include <sys/types.h>
#include <sys/stat.h>
int main(void) {
struct stat foo;
int retval = stat("c:\\Documents and Settings\\", &foo);
fprintf(stderr, "stat returned %d\n", retval);
perror("stat");
}
% ./stat
stat returned -1
stat: No such file or directory
If you remove the trailing \, it works fine.
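Until the underlying stat call handles this, callers can normalize the path themselves before stating it; a small illustrative sketch (the function name is made up):

```python
def normalized_stat_path(path):
    """Strip trailing path separators, which the MSVCRT stat() rejects,
    while keeping roots such as 'C:\\' or '/' intact."""
    stripped = path.rstrip("\\/")
    if not stripped or stripped.endswith(":"):
        return path  # a drive root like "C:\" must keep its separator
    return stripped

print(normalized_stat_path("c:\\Documents and Settings\\"))
```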
I think this problem still exists in 2.5alpha3
Here is my workaround
in lib\os.py:
def makedirs(path, mode='ignored'):
    """makedirs(path [, mode=0777])

    Super-mkdir; create a leaf directory and all intermediate ones.
    Works like mkdir, except that any intermediate path segment (not
    just the rightmost) will be created if it does not exist.
    The optional parameter is currently ignored.
    """
    sys_path = sys.getPath(path)
    if File(sys_path).mkdirs():
        return
    if sys_path[-1] == '\\':
        if File(sys_path[:-2]).mkdirs():
            return
    # if making a /x/y/z/., java.io.File#mkdirs inexplicably fails. So we need
    # to force it
    # need to use _path instead of path, because param is hiding
    # os.path module in namespace!
    head, tail = _path.split(sys_path)
    if tail == curdir:
        if File(_path.join(head)).mkdirs():
            return
        if _path.join(head)[-1] == '\\':
            if File(_path.join(head)[:-2]).mkdirs():
                return
    raise OSError(0, "couldn't make directories", path)
This issue still exists in 2.5b1; I am experiencing the issue when
trying to install Django on jython.
I am attaching a patch to install_egg_info.py which solves the problem
on my Windows machine.
Fix implemented in revision 6164. | http://bugs.jython.org/issue1110 | CC-MAIN-2015-11 | en | refinedweb |
I have an assignment to write a program that asks a user the year, model, and condition of a vehicle and display the price. We have to use a 3-D array to hold all of the values and a 2-D array to hold all of the names of the cars. The function I am having a problem with is the one that asks the user the year, model, and condition of the car. Say the person enters "Camry" for the model and "Used" for the condition, I need a way to assign a numeric value to a variable to represent the location of the price in the 3-D array. At first I was going to use a switch statement but visual studio kept telling me that you cant use a switch statement with a string. Here is the code I have so far, I'll post the whole thing but I pointed out where I was having the problem.
Any help would be great, Thanks.

Code:
#include <iostream>
#include <iomanip>
#include <fstream>
#include <string>
#include <cmath>
#include <cctype>
using namespace std;

char userprompt();
void getdata(ifstream& Readfile, float carvalues[100][17][4]);
void openfile(ifstream& Readfile);
float findcarvalue(char carnames[100][100], float carvalues[100][17][4]);

int main()
{
    float carvalues[100][17][4];
    char carnames[100][100] = {"Altima", "Avalanche", "Camaro", "Camry", "CTS",
                               "E350", "Escalade", "Explorer", "F150", "Fairlane",
                               "S600", "Silverado", "Suburban", "Tacoma", "Tahoe",
                               "Titan", "Tundra"};
    float carvalue;
    ifstream Readfile;

    openfile(Readfile);
    getdata(Readfile, carvalues);

    switch (userprompt())
    {
    case 'f':
    case 'F':
        carvalue = findcarvalue(carnames, carvalues);
        cout << endl << "The price of that vehicle is " << carvalue;
        break;
    case 'c':
    case 'C':
        cout << "computeaverage";
        break;
    case 's':
    case 'S':
        cout << "showprices";
        break;
    case 'h':
    case 'H':
        cout << "cheapestcar";
        break;
    case 'a':
    case 'A':
        cout << "showallvalues";
        break;
    case 'q':
    case 'Q':
        break;
    default:
        cout << "Invalid Input!";
    }
    return 0;
}

void openfile(ifstream& Readfile)
{
    string pathname;
    cout << "Hello, Welcome to Kelly Green Book." << endl;
    cout << "What is the name of the input file? " << endl;
    cin >> pathname;
    Readfile.open(pathname.c_str());
}

char userprompt()
{
    char action;
    cout << endl << "What would you like to do?"
         << endl << "F = Find a car value"
         << endl << "C = Compute average"
         << endl << "S = Show Prices"
         << endl << "H = Find the cheapest car"
         << endl << "A = Show all values"
         << endl << "Q = Quit" << endl;
    cin >> action;
    return action;
}

void getdata(ifstream& Readfile, float carvalues[100][17][4])
{
    float price;
    for (int year = 0; year < 7; year++)
    {
        for (int model = 0; model < 17; model++)
        {
            for (int condition = 0; condition < 4; condition++)
            {
                Readfile >> price;
                carvalues[year][model][condition] = price;
            }
        }
    }
}

float findcarvalue(char carnames[100][100], float carvalues[100][17][4])
{
    float year, mlocation, carvalue, clocation;
    char model, condition;

    cout << endl << "Year: ";    // Here is where I am having the problem.
    cin >> year;
    year = 2007 - year;
    cout << endl << "Model: ";
    cin >> model;
    cout << endl << "Condition(Excellent/Fine/Good/Used): ";
    cin >> condition;
    return carvalue;
}
Hi Bill, if a system did not define the constants we should not use them. I prefer to #ifdef the line where the additional constants are used. I cant believe that a system take acount on bits which are not defined. And defining bits as workaround is not the solution here. Simply leave them in the call if the system did not have them. For that we could simply : ifdef POLLRDNORM .... No aditional OS testing needed. What do you think? Bye Klaus > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > > In that man page for poll I see the following text: > > ~ */ > ~ #endif > > My guess is that David doesn't have this. Klaus, would it be a bad > idea for you to condition your use of POLLRDNORM on _XOPEN_SOURCE? I > don't know the correct solution here....if you need that feature, then > we need to find a solution. I also suspect that it is better for us to > define our own #define for the use of this value, and let configure > determine how whether or not to use it. (for all I know, other systems > also provide/use this value) > > > > Klaus Rudolph wrote: > > | Hi David, > | > | the problem is related to OS X. 'POLLRDNORM' is defined somewhere > | in the system header files <asm/poll.h> in linux and it looks that > | for OS X this is not correct. My problem is that I have no access > | to any OS X machine. I google a bit and found that in OS X the > | constants should be defined in poll.h > | > | Could you please try: #include <poll.h> in front of the file > | mysocket.cpp. If this will not work, could you please grep through > | your system header files for the lost constants and add the missing > | header to mysocket.cpp and try to compile again, maybe we will run > | in some more "compatibility" problems. > | > | I hope we will be able to make the simulavrxx running on OS X. If > | so, we (Bill :-)) will change the configure tests for os x and we > | could put that on the CVS. 
> | > | If you are not the familar with searching for such things you maybe > | could give me or Bill access to your machine? If so please send me > | a PM. > | > | Thank you for your help! > | > | Klaus > | > > -----BEGIN PGP SIGNATURE----- > Version: GnuPG v1.2.6 (GNU/Linux) > Comment: Using GnuPG with Mozilla - > > iD8DBQFBbzgnuZBxYuVmoeoRAtpRAJ4//gZhIoj4ohXcI++EUJeaE6CdXACggkgx > 1EaH39bzXLAyyexd+E8Rqgk= > =HAMl > -----END PGP SIGNATURE----- > > > > _______________________________________________ > Simulavr-devel mailing list > address@hidden > > -- +++ GMX DSL Premiumtarife 3 Monate gratis* + WLAN-Router 0,- EUR* +++ Clevere DSL-Nutzer wechseln jetzt zu GMX: | http://lists.gnu.org/archive/html/simulavr-devel/2004-10/msg00018.html | CC-MAIN-2015-11 | en | refinedweb |
El mié, 19-10-2005 a las 18:51 +1000, David Crossley escribió:
> Ross.
Actually that should not be so hard. We "only" have to use the
strip_namespace.xsl when requesting the site.xml. I can do that if we
want to do it. The plan is to add a preparation pipeline and change the
lm to return this pipe (like cocoon://prepare.site.xml). I am using
cocoon:// protocol in the lm-structurer.xml. It is working fine.
>
> > .
agree
salu2
--
thorsten
"Together we stand, divided we fall!"
Hey you (Pink Floyd) | http://mail-archives.apache.org/mod_mbox/forrest-dev/200510.mbox/%3C1129713814.8301.31.camel@localhost%3E | CC-MAIN-2015-11 | en | refinedweb |
If you are developing a Silverlight RIA Service application & you need to expose your GET or POST methods via API calls that could be consumed by third-party apps, then you can very easily utilize the ASP.Net Web API. In today's article we will see how to configure an existing Silverlight RIA Services-enabled app to use Web APIs. To summarize, the following steps are needed to create a RIA-enabled Silverlight app hosted in an ASP.Net web app.
using System.Collections.Generic;
using System.Linq;
using System.Web.Http;

namespace SLRiaWebAPiDemo.Web
{
    public class GroupsController : ApiController
    {
        MRM_LatestEntities _context = new MRM_LatestEntities();
        //List<Group> groups = new List<Group>();

        // GET api/<controller>
        public IEnumerable<Group> GetAllGroups()
        {
            return _context.Groups;
            //return new string[] { "value1", "value2" };
        }

        // GET api/<controller>/5
        public Group GetGroupById(int id)
        {
            return _context.Groups.Where(x => x.Id == id).FirstOrDefault();
            //return "value";
        }

        // POST api/<controller>
        public void Post([FromBody]string value)
        {
        }

        // PUT api/<controller>/5
        public void Put(int id, [FromBody]string value)
        {
        }

        // DELETE api/<controller>/5
        public void Delete(int id)
        {
        }
    }
}
Finally we need to tell our app to route to this controller when the URL is looking for it. In order to do this we need to add the controller mappings in the configurations of our app.
public class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        config.Routes.MapHttpRoute(
            name: "DefaultApi",
            routeTemplate: "api/{controller}/{id}",
            defaults: new { id = RouteParameter.Optional }
            //defaults: new { controller = "Test", action = "Get", id = "" }
        );
    }
}
using System;

public class Global : System.Web.HttpApplication
{
    protected void Application_Start(object sender, EventArgs e)
    {
        WebApiConfig.Register(GlobalConfiguration.Configuration);
    }
}
That's all we need. Put a breakpoint at the GetAllGroups() method in the Web API controller & hit F5. When the app is up & running, change the browser URL to point at the API route (for example the api/Groups endpoint defined above). Hit enter and you will see that the breakpoint is hit. Continue debugging & you will see the JSON data in the browser.
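Because the controller returns plain JSON, any third-party client can consume the API; a minimal Python sketch (the URL shape follows the route above, and the Id/Name fields of the Group entity are assumptions for illustration):

```python
import json
from urllib.request import urlopen

def fetch_groups(base_url):
    """Call the GET api/Groups endpoint of a running instance and parse its JSON."""
    with urlopen(base_url + "/api/Groups") as resp:
        return json.loads(resp.read().decode("utf-8"))

def group_names(payload):
    # Each serialized Group is assumed to expose at least Id and Name.
    return [g["Name"] for g in payload]

# Example with a canned response instead of a live server:
sample = '[{"Id": 1, "Name": "Admins"}, {"Id": 2, "Name": "Editors"}]'
print(group_names(json.loads(sample)))
```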
ConclusionThis was a very basic example that shows how you can configure your Silverlight RIA Service or ASP.Net app to use the Web API without actually getting started with a full blown MVC project. You can create multiple APIs & release them to be used by your clients.
Supreet is a strange guy. Whether it is his interaction with the machines or people, he excels at both. A fanatic developer who has a fad for new technology. He is also well renowned for his latest gadgets and love for t... Read more
Using the Location API in Your Windows Phone 8 Applications
Introduction
Windows Phone 8 provides application developers with the ability to build applications, which can utilize the new location APIs in the platform.
Windows Phone 8 platform allows the device to derive the location information from various sources like GPS, Wi-Fi and cellular triangulation.
If you were starting fresh, you are better off using the Windows Phone Runtime API for creating location-aware apps. The Windows Phone Runtime API is new in Windows Phone 8 and did not exist in Windows Phone 7.x versions.
These runtime location APIs are accessible from both managed and native code. Additionally, they support better accuracy of the location and well as permissible latency to derive results.
To get access to location, a Windows Phone application will need to declare the LOCATION capability, which exists in WMAppManifest.xml file as ID_CAP_LOCATION. The location APIs are implemented by the Geolocator class in the Windows.Devices.Geolocation namespace.
The location APIs also support notifying changes in position, which can be used if one is building a GPS application. The change in position is notified by the Geolocator.PositionChanged event. When the GPS state changes, the Geolocator.StatusChanged event is fired.
Hands-On
In our demo, we will build a simple Windows Phone application, which will enable the Geolocator and retrieve the latitude and longitude of the current location. Additionally, we will have controls that will help us capture when the status of the Geolocator object changes.
Start Visual Studio 2012 and create a new Windows Phone 8 application titled WindowsPhoneLocationDemo.
New Project
When prompted, select Windows Phone OS 8.0 as the target Windows Phone OS version.
New Windows Phone Application
Next, open the WMAppManifest.xml and declare the location capability by selecting ID_CAP_LOCATION.
WMAppManifest.xml
Next, on MainPage.xaml, add controls so that your MainPage.xaml looks like the screenshot below.
To do this, you will need three textblocks (one named Latitude, one named Longitude and one named “GPS status”), three textboxes (which will contain the values for latitude, longitude and the current GPS status), and two buttons – one to enable the GPS and the other to disable GPS (scope of this button is to only stop processing GPS status changes and position changes).
Next, we will declare the Geolocation namespace in the “using” section of MainPage.xaml.cs.
using Windows.Devices.Geolocation;
Next, we will declare an object of type Geolocator called myGeoLocator.
public partial class MainPage : PhoneApplicationPage
{
    Geolocator myGeoLocator;

    // Constructor
    public MainPage()
Now, we will wire up the event handler for the Click event on the “Enable GPS button”. In the event handler, we will initialize a new instance of Geolocator and also wire up the event handlers for the StatusChanged and Positionchanged events of the Geolocator object. Finally, we will disable the “Enable GPS” button and enable the “Disable GPS” button.
private void Button_Click_1(object sender, RoutedEventArgs e)
{
    myGeoLocator = new Geolocator();
    myGeoLocator.DesiredAccuracy = PositionAccuracy.Default;
    myGeoLocator.MovementThreshold = 50;
    myGeoLocator.StatusChanged += myGeoLocator_StatusChanged;
    myGeoLocator.PositionChanged += myGeoLocator_PositionChanged;
    buttonEnableGPS.IsEnabled = false;
    buttonDisableGPS.IsEnabled = true;
}
Now, we will wire up the Click event for the “Disable GPS” button, where we will essentially reverse what we did in the click event for “Enable GPS” button.
private void buttonDisableGPS_Click(object sender, RoutedEventArgs e)
{
    myGeoLocator.PositionChanged -= myGeoLocator_PositionChanged;
    myGeoLocator.StatusChanged -= myGeoLocator_StatusChanged;
    myGeoLocator = null;
    buttonEnableGPS.IsEnabled = true;
    buttonDisableGPS.IsEnabled = false;
}
Finally, we will write the code for the StatusChanged event handler. In this event handler, we will process the StatusChangedEventArgs, read the position status and display it in the textbox for GPS status.
void myGeoLocator_StatusChanged(Geolocator sender, StatusChangedEventArgs args)
{
    PositionStatus ps = args.Status;
    Dispatcher.BeginInvoke(() =>
    {
        textBoxGPSStatus.Text = ps.ToString();
    });
}
We will also process the PositionChanged event. In this event, we will look at the position information we get from the geolocation object and display it in the textboxes for latitude and longitude.
void myGeoLocator_PositionChanged(Geolocator sender, PositionChangedEventArgs args)
{
    Dispatcher.BeginInvoke(() =>
    {
        textBlockLocationLatitute.Text = args.Position.Coordinate.Latitude.ToString("0.00");
        textBlockLocationLongitute.Text = args.Position.Coordinate.Longitude.ToString("0.00");
    });
}
Now, your application is complete. If you have trouble following along, you can download sample code from here.
Once you deploy your application, your application will look like the screenshot below.
Your application
When we click the Enable GPS button, your app will look like the screenshot below.
Click the Enable GPS button
We can see that the application returns the current location and also the GPS status, which in my case is Ready.
Summary
In this article, we saw how to use the location APIs to build location-aware applications.
Details
- Type:
Bug
- Status: Closed
- Priority:
Blocker
- Resolution: Fixed
- Affects Version/s: 2.2.0
-
- Component/s: Components Core
- Labels:None
Description
This issue is foremost the same as PLUTO-553, so I won't repeat the description and arguments given there.
However, for Jetspeed the migration to slf4j requires a few more changes than merely switching the logger API and fixing the Maven dependencies.
Note: over the weekend and today I've already done these locally and tested it successfully on both Tomcat and Websphere (6.1.0.9)
- Other (third party) dependencies relying on using commons-logging directly
Examples of these are commons-digester, commons-betwixt and the spring-ojb module.
For these artifacts, the commons-logging api simply needs to be present at runtime.
Luckily, slf4j provides the jcl-over-slf4j jar for exactly this purpose: it provides the commons-logging api but will redirect any usage to leverage slf4j under the hood.
Jetspeed therefore will have to use this jcl-over-slf4j jar to "fix" these commons-logging usages.
Note: this obviously requires us to exclude any other dependency on commons-logging in the maven poms to prevent api namespace clashes at runtime.
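To make that concrete, the Maven changes would look roughly like the following sketch. The jcl-over-slf4j coordinates are the standard org.slf4j ones; the exclusion shown on commons-digester is just one example of the pattern that would be repeated for each offending dependency, and the versions are placeholders:

```xml
<!-- Example: exclude commons-logging wherever a dependency pulls it in transitively -->
<dependency>
  <groupId>commons-digester</groupId>
  <artifactId>commons-digester</artifactId>
  <version><!-- version in use --></version>
  <exclusions>
    <exclusion>
      <groupId>commons-logging</groupId>
      <artifactId>commons-logging</artifactId>
    </exclusion>
  </exclusions>
</dependency>

<!-- Provide the commons-logging API at runtime, backed by slf4j under the hood -->
<dependency>
  <groupId>org.slf4j</groupId>
  <artifactId>jcl-over-slf4j</artifactId>
  <version><!-- slf4j version in use --></version>
</dependency>
```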
- Explicit Commons Logging functionality used within Jetspeed
The ToolsLogger interface was created to "bridge" Maven Log interface to Commons Logging so that Jetspeed components could be invoked from a Maven plugin while directing their log output through the Maven log.
Implementations of these ToolsLogger interface, CommonsToolsLogger.java and CommonsLogToolsLogger.java now need to be redefined and reimplemented to "bridge" to SLF4J
- jetspeed-webapp-logging/apa-webapp-logging (see also: JS2-508)
The jetspeed-webapp-logging component has been created long time ago to support "isolated" webapplication log management on certain webservers like Websphere and others like JBoss.
The issue solved by this component was the inability to use both commons-logging and log4j within a webapplication on such webservers properly with as result that log management couldn't be managed and configured locally per webapplication.
However, by migrating to slf4j (+log4j) this problem is "resolved" automatically, which I've already tested successfully on Websphere (not yet on JBoss).
The only requirement to get this working is to use a PARENT_LAST (or CHILD_FIRST) webapplication classloader configuration.
As this is "standard" on Tomcat and for jetspeed required anyway, Jetspeed, nor any other web/portlet application using slf4j, will require the jetspeed-webapp-logging component anymore.
Therefore, although we already were in the process of moving jetspeed-webapp-logging to Portals Application (apa-webapp-logging), I think we can simply "drop" this component, also from APA, after the slf4j migration!
I intend to commit the outstanding changes for this migration to slf4j soon, right after the migration for Pluto (PLUTO-553).
Activity
I've committed the required migration changes for slf4j on both jetspeed-2 and j2-admin now.
Just to clarify a few of the required changes I didn't describe yet above:
- slf4j doesn't support FATAL logging.
We almost had no usages of that, but where they were, I've changed them to ERROR level logging instead.
- slf4j doesn't support log(Object) but requires (at least) a String as first parameter
For some of the parameters we passed in, this required just an Object.toString(), e.g. charSequence.toString(), to "fix" it.
But, we also used log(Exception) in certain locations. For this, slf4j requires changing this to log(String, Exception) and I changed our usage to log(exception.getMessage(), exception).
What still remains to be "fixed" is the Velocity logging. We used our custom webapp-logging handler for that, which I have for now changed to the Velocity-provided log class: org.apache.velocity.runtime.log.Log4JLogChute
However, this somehow doesn't seem to work or produce any log output yet.
I'll further look into this and try to get this fixed too ASAP.
Hi Ate,
I've just run unit tests today for my JS2-976 task.
I instantly met an exception complaining commons-logging stuffs.
-------------------------------------------------------------------------------
Test set: org.apache.jetspeed.cache.TestContentCache
-------------------------------------------------------------------------------
Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 0.126 sec <<< FAILURE!
testContentCacheByUser(org.apache.jetspeed.cache.TestContentCache) Time elapsed: 0.086 sec <<< ERROR!
java.lang.NoClassDefFoundError: org/apache/commons/logging/LogFactory
at org.springframework.util.ClassUtils.<clinit>(ClassUtils.java:73)
at org.springframework.core.io.ClassPathResource.<init>(ClassPathResource.java:82)
at org.springframework.core.io.ClassPathResource.<init>(ClassPathResource.java:64)
at org.apache.jetspeed.cache.impl.EhCacheConfigResource.afterPropertiesSet(EhCacheConfigResource.java:206)
at org.apache.jetspeed.cache.impl.EhCacheConfigResource.getInstance(EhCacheConfigResource.java:84)
at org.apache.jetspeed.cache.TestContentCache.testContentCacheByUser(TestContentCache.java)
<SNIP>
I've just investigated the spring source history (ClassUtils.java).
ClassUtils.java was changed on June 4, 2008, to no longer use LogFactory statically.
This change has been available since the Spring Framework 2.5.5 release.
Therefore, I think it is inevitable to upgrade the version to 2.5.5 or 2.5.6 now.
If there's no objection to upgrade it to 2.5.6 and no commit before I do, I will do this as soon as possible.
Hi Woonsan,
We already discussed a bit upgrading to latest Spring 2.5.6 and I think/expect that to be ok to do, so +1 from me.
Alternatively, to solve these kinds of "hard-coded" usages of commons-logging (which we already have with, for instance, commons-digester or spring-ojb), you can add a jcl-over-slf4j (test) dependency instead.
Thanks, Ate.
I think we'd better include jcl-over-slf4j library for runtime because there could be some external jar files dependent on commons-logging, such as commons-digester or spring-ojb as you mentioned. Those are used at runtime.
I've already added jcl-over-slf4j as dependency on jetspeed-rdbms and jetspeed-db-maven-plugin.
That dependency on jetspeed-rdbms causes it to be included for the portal at runtime already implicitly, but I agree it is better to have it explicitly (probably on jetspeed-dependencies).
The jetspeed-db-maven-plugin also has this dependency as it needs to be run standalone (from the command line).
FYI, I met another exception with the unit tests after upgrading spring framework to 2.5.6 locally:
java.lang.NoClassDefFoundError: org/apache/commons/logging/LogFactory
at org.springframework.core.io.support.PropertiesLoaderSupport.<init>(PropertiesLoaderSupport.java:46)
at org.apache.jetspeed.components.JetspeedBeanDefinitionFilter.<init>(JetspeedBeanDefinitionFilter.java:106)
at org.apache.jetspeed.components.test.AbstractSpringTestCase.getBeanDefinitionFilter(AbstractSpringTestCase.java:81)
at org.apache.jetspeed.components.test.AbstractSpringTestCase.setUp(AbstractSpringTestCase.java:48)
at junit.framework.TestCase.runBare(TestCase.java:125)
<SNIP>
So, we need jcl-over-slf4j library anyway because spring core itself depends on commons-logging.
After I deleted the apa-webapp-logging component earlier this evening, I noticed apa-rss was still using it.
I started cleaning up apa-rss for this and then noticed some other APA components were still not completely "clean" of commons-logging either.
One thing led to another and I ended up cleaning up all the APA components, including stripping out other no-longer-needed dependencies, JSP tag lib URI cleanup, etc.
As a result, most if not all of the APA components are now "clean", and not just of commons-logging. I'll take this issue back to commit my changes and thereafter resolve this issue.
slf4j migration for Pluto has been completed (PLUTO-553).
I'll continue with and commit the migration for Jetspeed, j2-admin and APA tomorrow (no more left time today). | https://issues.apache.org/jira/browse/JS2-972 | CC-MAIN-2015-11 | en | refinedweb |
H&R Block Ch 27 Final Exam Review
What income reporting form should an independent contractor sometimes receive from the person who paid him for his services?
Form 1099-MISC (15.6)
Schedule C, Line F asks for the accounting method used in the business.
What is the difference between the cash method and the accrual method of accounting?
• Under the accrual method, total sales and total charges for services are included in income even though payment may be received in another tax year.
• Under the cash method, only income actually received or expenses actually paid during the year are included. (15.5)
What does it mean if a proprietor "materially participates" in the business?
He is active in running the business in a substantial way on a day-to-day basis. (15.5)
Why is it important to know whether or not the proprietor materially participates?
If the proprietor does not materially participate, any loss from the business is a passive loss and generally may be currently deducted only against passive income. (15.6)
What are returns and allowances?
• Amounts that were refunded to customers who returned merchandise for refund or partial refund.
• These amounts are subtracted from gross receipts. (15.6)
How is cost of goods sold determined?
Beginning inventory plus purchases, plus labor, supplies, depreciation, etc. attributable to product manufacture or preparation for sale, minus ending inventory. (15.7)
If the client has contract labor, what should you remind the client to do?
• Provide a Form 1099-MISC to any independent contractor who worked and earned $600 or more. (15.9)
What amounts does a proprietor have "at risk"?
Amounts invested in the business plus any business debts for which the proprietor is personally liable. (15.16)
What difference does it make if the proprietor is "at risk" or not?
Only amounts at risk may be used to determine the actual loss on Schedule C. (15.16)
How does a Tax Professional meet due diligence requirements?
• Tax Professionals fulfill due diligence requirements by making every effort to prepare accurate and complete returns.
• Tax Professionals must have knowledge of tax law, and apply a reasonability check to the information provided by their clients. (26.1)
What is a thorough interview?
A thorough interview consists of asking general information questions, then asking additional questions whenever information is incomplete or seems inaccurate or inconsistent. (26.2)
What is a conflict of interest?
A conflict of interest is when one's situation might benefit at the expense of another's situation. (26.3)
What actions can resolve a conflict of interest?
A conflict of interest is resolved when it is acknowledged, disclosed to all parties, and the parties have consented to waiving the conflict. (26.3)
What client information is confidential?
Any information that could potentially identify the client is confidential.
Information includes (but is not limited to):
• Name
• Address and phone number
• Social security numbers
• Place of employment
• Any information from a tax return (26.4)
Is it acceptable for a Tax Professional to leave a detailed phone message for a client, letting them know their tax return is complete?
• Tax Professionals must have prior consent from the client to leave phone messages related to their tax return.
• The fact that a taxpayer is the client of a Tax Professional or tax preparation business is confidential information that must not be disclosed. (26.5)
What is a Tax Professional's responsibility upon finding out that a client has not complied with any tax law?
A Tax Professional must advise the client of the noncompliance and the consequences for not correcting the situation. (26.7)
What action should a Tax Professional take if a client insists on reporting information that is inaccurate?
A Tax Professional should never prepare a return that contains inaccurate information. (26.7)
If the employee thinks his Form W-2 is not correct, what should he do?
• If the name or social security number is incorrect, the taxpayer may change it himself and need not obtain a corrected W-2 before filing his tax return.
• The employer should be notified of the error and asked to update his records.
• Furthermore, the employee's social security number and earnings records should be verified with the Social Security Administration to ensure that the earnings were properly credited. (2.17)
Where can the regular standard deduction amounts be found?
• In the left-hand margin at the top of page 2 of Forms 1040 and 1040A.
• They are: S, MFS $5,700; MFJ, QW $11,400; HH $8,400.
• The amounts differ for taxpayers age 65 or older or blind and those who may be claimed as dependents by other taxpayers.
[2010]
What is the exemption amount for 2009?
$3,650 with a reduction for higher-income taxpayers of 2% for each $2,500 ($1,250 MFS) the AGI exceeds amounts:
$166,800 S
$250,200 MFJ QW
$125,100 MFS
$208,500 HH
(3.7,8)
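The phase-out above is simple arithmetic, and a quick sketch may make it easier to check answers. The helper below is hypothetical (not from the course materials) and implements only the rule as stated on this card, ignoring any further limits 2009 law placed on the total reduction:

```python
import math

def reduced_exemption(agi, threshold, exemption=3650, step=2500):
    """2% reduction for each $2,500 (or fraction thereof) of AGI over the threshold."""
    if agi <= threshold:
        return exemption
    units = math.ceil((agi - threshold) / step)
    reduction = min(0.02 * units, 1.0)
    return exemption * (1 - reduction)

# Single filer, AGI $180,000: $13,200 over the $166,800 threshold,
# which is 6 steps of $2,500, so a 12% reduction.
print(round(reduced_exemption(180_000, 166_800), 2))  # 3212.0
```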
Are early distributions from qualified retirement plans always penalized?
No.
• Does not apply to qualified disaster recovery assistance distributions.
• Does not apply to any recovery of cost or any amount rolled over in a timely manner.
(22.12,13)
How does a Tax Professional know if a distribution exception applies?
• He can determine that by using thorough interview questions when discussing the distribution with the client.
• The distribution code on the 1099-R can also be helpful to the Tax Professional. (22.12)
Is there a time limit for filing amended returns?
Yes.
Three years from the date the return was filed or two years from the date the tax was paid, whichever is later.
Any refund will be limited to $280 (the tax paid within the two years preceding the date the amended return was filed). (23.3)
What are the rules for changing filing status after the due date of the return?
Married couples may not change their filing status from MFJ to MFS after the due date. (23.4)
A taxpayer's employer paid $500 of a taxpayer's $2,000 child care expenses for him. How will the employer's assistance affect the child-care credit?
Total child care expenses must be reduced by any amounts paid by the employer. (8.11)
Where does the employer report the amount of child care expense assistance to the taxpayer?
Form W-2 Box 10 (8.11)
What is the maximum amount of contributions on which the Saver's Credit may be based?
$2,000 per individual or spouse (21.19)
What are the rates for the Saver's Credit?
The rates are 10%, 20%, or 50%, depending upon filing status and modified AGI. (21.17)
A taxpayer is building a new home and had a solar water heater installed in 2009, but the home was not ready to be occupied until early 2010. Can they take the residential energy credit?
Yes, they can take the credit on their 2010 tax return. (8.21)
How much may an eligible educator deduct for qualified classroom expenses as an adjustment to income?
Up to $250 (11.4)
Who may not claim a student loan interest deduction?
Someone who is claimed as a dependent may not claim the deduction in the current tax year, nor may someone who uses the married filing separately filing status.(11.8)
What is a qualified student loan?
• Any type of loan used to pay qualified expenses. Credit card debt may be included, provided the card was used exclusively to pay for qualified expenses.
• Money borrowed from a related person is not a qualified student loan. (11.5)
What are qualified medical expenses with regards to an HSA?
Unreimbursed medical expenses that would normally be deductible on Schedule A (11.16)
What form is used to report HSA contributions and determine any allowable deduction?
Form 8889. Reported on Form 1040 Line 25.
What is a qualified retirement plan?
A plan which is eligible for favorable tax treatment because it meets the requirements of IRC §401(a) and the Employment Retirement Income Security Act of 1974 (ERISA) (21.2)
What is the 2009 contribution limit to 401(k) plans?
• The maximum contribution for 2009 is $16,500 (and 2010).
• Taxpayers age 50 and above are allowed a $5,500 annual "catch-up" contribution. (21.4)
What is it called if a taxpayer takes money out of one IRA and puts it into another (and all requirements are met)?
Roll-over.
What is the last date on which a contribution may be made and qualify as a contribution for a given year?
The due date (not including extensions) of the return for that year.
Why is it important to distinguish between taxpayers who are active participants in an employer-maintained retirement plan and those who are not?
• Those who are not active participants and whose spouses are not active participants may deduct the full amount they contribute to a traditional IRA, assuming they stay within the contribution limits.
• Those who are active participants or whose spouses are active participants may still contribute within the limits but may find their allowable deduction reduced or eliminated. (21.13)
What are the main differences between traditional IRAs and Roth IRAs?
• Contributions to a Roth IRA are never deductible, but qualified distributions are exempt from tax.
• Participation in an employer-maintained retirement plan has no effect on Roth IRA contributions, and contributions can be made after the taxpayer has reached age 70½.
• As long as they have compensation, contributions to Roth IRAs are not reported on the tax return. (21.15)
Under what circumstances do you need to determine whether a taxpayer paid over half of the cost of maintaining his home?
If you are determining if the taxpayer may be considered unmarried, a qualifying widow(er), or head of household. (5.2)
What are some of the costs of maintaining a home?
• Rent
• Mortgage interest
• Real Estate Taxes
• Homeowners Insurance
• Property Taxes
• Repairs
• Utilities
• Food eaten in the home
(5.3)
What requirements must be met for a taxpayer to use the qualifying widow(er) status?
• The death of the taxpayer's spouse must have occurred during one of the two preceding tax years;
• The taxpayer must not have remarried and must have been entitled to file a joint return for the year of death.
• The taxpayer must have paid over half the cost of maintaining the home which, for the entire year, was the main home of their dependent son, daughter, stepson, or stepdaughter.
In general, which parent gets to claim the qualifying child in a divorce?
The custodial parent. (5.7)
What is the exception to the custodial parent qualifying child rule?
• If a decree of divorce or separate maintenance or written separation agreement that became effective after October 4, 2004, states that the noncustodial parent is entitled to claim the child's dependency exemption, or if the custodial parent executes a written declaration that they will not claim the child as a dependent for that year, the noncustodial parent may claim the qualifying child.
• For divorces granted after December 31, 2008, Form 8332 must be filed if parents are separating tax benefits. (5.7)
What's the difference between a withholding allowance and an exemption?
• A withholding allowance is reported on Form W-4 and is used to accurately calculate the amount of tax to be withheld from an employee's wages.
• An exemption is claimed on the tax return for the taxpayer, spouse, and each dependent. (24.)
Under what circumstances may an employee claim exemption from withholding?
Only if the employee had no federal income tax liability for the prior year and he expects to have no tax liability for the current year. (24.)
A single self-employed taxpayer estimates that his 2009 tax will be $7,500. His 2008 tax was $7,000. How much must he prepay for 2009 in order to avoid an underpayment penalty?
$6,750; the lesser of 90% of his 2009 tax [$7,500 X 90% = $6,750] or 100% of his 2008 tax ($7,000). (24.)
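The "lesser of" safe-harbor rule in that answer can be sketched as below. This hypothetical helper covers only the basic rule as stated on the card and ignores the higher-income variant that uses a larger percentage of prior-year tax:

```python
def required_prepayment(current_year_tax, prior_year_tax):
    # Lesser of 90% of the current-year tax or 100% of the prior-year tax.
    return min(0.90 * current_year_tax, prior_year_tax)

print(round(required_prepayment(7_500, 7_000)))  # 6750
```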
What information do you need to know to determine whether a return is required?
• Marital Status
• Age & Student Status
• Gross Income
• Over 65 and Blindness
• Dependent Status
(3.2)
For tax purposes, when is a person's marital status determined?
On the last day of the tax year. (3.2)
What two amounts combine to make up the gross income filing requirement for most taxpayers?
The Standard Deduction and the Personal Exemption amounts. (3.7)
How much is added to the standard deduction if the taxpayer (or spouse) is age 65 or older or blind?
$1,400 per condition for S and HH
$1,100 per condition for MFS, MFJ and QW (3.6)
If one spouse refuses to file a joint return, can the other spouse do anything about it?
No. Both will have to file using the married filing separately status unless one or both qualifies to be considered unmarried.
What kinds of property may be expensed using the Section 179 deduction?
New or used tangible personal property (usually equipment or office furniture) purchased for use in a trade or business. (17.2)
How is the MACRS deduction computed in the year of disposition for property being depreciated using the half-year convention?
HALF of the normal depreciation is allowed. (17.9)
How is the MACRS deduction for the year of disposition computed if the property is being depreciated using the mid-quarter convention?
Depreciation for the entire year, multiplied by a PERCENTAGE for quarter of disposition:
12.5% First
37.5% Second
62.5% Third
87.5% Fourth
(17.9)
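The quarter-to-percentage lookup above is mechanical, so it can be sketched directly (a hypothetical helper, not part of the course materials):

```python
# Percentage of a full year's depreciation allowed in the year of disposition,
# keyed by the quarter in which the property was disposed of.
MID_QUARTER_DISPOSAL_PCT = {1: 0.125, 2: 0.375, 3: 0.625, 4: 0.875}

def disposal_year_depreciation(full_year_amount, quarter):
    return full_year_amount * MID_QUARTER_DISPOSAL_PCT[quarter]

print(disposal_year_depreciation(1_000, 2))  # 375.0
```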
What special treatment is available to self-employed taxpayers with regard to health insurance premiums they pay?
They may deduct their premiums as an adjustment to income, if they qualify. (17.17)
Carol has a home office. When she is not using the office, she lets her children play video games on an old television she keeps there. Can Carol deduct home-office expenses?
No. The space must be used exclusively for the business. (17.21)
An employee has an office where he works, but his work load demands that he bring home work on evenings and weekends. He uses a room of his home regularly and exclusively for his work. May he deduct home-office expenses?
No, the employer provides a work office. (17.23)
What activities are considered farming activities?
• Cultivating land, operating dairy farms, fruit farms, nurseries, orchards, poultry farms, fish farms, plantations, ranches, stock farms, truck farms;breeding and raising fur-bearing animals or laboratory animals.
• Does NOT include breeding, raising dogs, cats or pets. (17.28)
Matthew breeds Cocker Spaniels for sale as pets. What schedule will Matthew use to determine his profit or loss?
Schedule C (17.28)
What are some general types of itemized deductions that are subject to the 2%-of-AGI floor?
• Transportation Expenses
• Education Expenses
• Job-Seeking Expenses
• Tax Preparation Fees
• Investment Expenses
• Hobby Expenses
(13.9)
What are some miscellaneous itemized deductions that are not subject to the 2%-of-AGI limitation?
• Gambling Losses
• Impairment-Related Work Expenses
• Federal Estate Tax
• Unrecovered cost of a decedent's Pension or Annuity
• Repayments of certain income more than $3,000
• Casualty and theft losses from income-producing property
• Amortizable bond premiums
(13.18)
At what amount must interest income be reported on Form 1040, Schedule B?
When the total taxable interest exceeds $1,500 (6.2)
Is interest received on U.S. Treasury Obligations taxable on state and /or local returns?
No. Interest on U.S. Treasury Obligations is exempt from state and local tax by federal law. (6.8)
What types of taxpayers will require the Qualified Dividends and Capital Gain Tax Worksheet - Line 44?
• Taxpayers who receive Form 1099-DIV showing that they received qualified dividends must use the Qualified Dividend and Capital Gain Tax Worksheet—Line 44.
• Also, those taxpayers who have capital gain distributions shown in box 2a of Form 1099-DIV will use the worksheet. (6.14)
What form is used to request a six-month extension to file?
Form 4868 Application for Automatic Extension of Time to File US Individual Income Tax Return (25.2)
If a taxpayer is anxious to e-file their return in January, can they do so without Form W-2, as long as they have their last paystub?
• No, they must wait until February 15.
• The IRS will not accept returns with substitute W-2s prior to February 15. (2.23)
What is the basis of purchased property?
Cash paid plus the fair market value of services rendered plus the fair market value of property traded. Certain closing costs are added to the basis. (20.4)
What is the maximum net capital loss that a taxpayer may deduct in one year?
$3,000 ($1,500 MFS) (20.11)
The top marginal tax rate for 2009 is 35%. For most capital assets sold during 2009, what is the maximum tax rate for long-term capital gains?
15% or 0% for taxpayers in the 10% and 15% brackets.
Some long-term capital gains are taxed at other rates. (20.9)
When is the American Opportunity Credit (AOC) available?
Under current law, the AOC is available only for tax years 2009 and 2010. (9.8)
What effect do tax-free funds (such as grants) have on qualifying expenses for the AOC?
Expenses must be reduced by those amounts. (9.6)
How is the lifetime learning credit calculated?
20% of the first $10,000 of qualifying expenses per return, per year. (9.17)
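As a quick check on that formula, the calculation is just 20% of expenses capped at $10,000 (the helper name is hypothetical):

```python
def lifetime_learning_credit(qualified_expenses):
    # 20% of the first $10,000 of qualifying expenses, per return, per year.
    return min(qualified_expenses, 10_000) * 0.20

print(round(lifetime_learning_credit(4_000)))   # 800
print(round(lifetime_learning_credit(15_000)))  # 2000
```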
What is the maximum tuition and fees deduction?
$4,000 for taxpayers with modified AGIs up to $65,000 ($130,000 MFJ), or $2,000 for taxpayers with modified AGIs between $65,001 and $80,000 ($130,001 and $160,000 MFJ). (9.18)
What are the six tests for a qualifying child?
• Relationship
• Age
• Residency
• Support
• Joint Return
• Special Test
(4.2)
How can a married individual meet the joint return test to remain a qualifying child?
They can meet this test by not filing a joint return with their spouse, or they can file a joint return with their spouse if they are filing only to claim a refund of any taxes withheld. (4.4)
How can you determine who paid more than half of the person's support?
Total support is determined and reduced by the funds received by and for the person from all sources other than the taxpayer. The remaining support is considered to be provided by the taxpayer. (4.9)
What additional requirements must be met by a qualifying child for purposes of the Child Tax Credit?
• Must be under age 17
• Must be claimed on taxpayer's return
• Must be a US citizen, US national or resident of the US. (4.15)
What is the purpose of the alternative minimum tax (AMT)?
The purpose of the alternative minimum tax is to make sure that taxpayers with higher incomes cannot entirely avoid taxes through the use of certain deductions and credits. (22.2)
In what way is a clergy member's compensation treated differently from compensation of other employees?
Compensation of clergy members is subject to self-employment tax instead of social security and medicare tax withholding. (22.5)
Under what circumstances are tips not subject to social security and medicare taxes?
Tips totaling less than $20 in a calendar month are not subject to these taxes. Also, if the taxpayer has already paid the maximum social security tax for the year, further tips are not subject to social security tax. (22.11)
Under what circumstances is Form 4137 prepared?
Only if the taxpayer did not report tips to his employer as required, or if he is reporting allocated tips. (22.9)
What form is used to report household employment taxes?
Schedule H, Household Employment Taxes is filed to report household employment taxes paid. The calculated amount is then carried to Form 1040, line 59. (22.17)
Who may qualify for the Additional Child Tax Credit?
• Taxpayers with earned income in excess of $3,000 for 2009.
• Those with three or more qualifying children for child tax credit purposes, whose child tax credit was limited by their tax liabilities. (7.3)
Kris (26) has an earned income and AGI of $9,256. He has no other income. He lived in the United States all year and is no one's dependent. He has a valid SSN and is filing as single. He is a U.S. citizen. Does Kris qualify for EIC?
Yes. His Earned Income is under $13,440 (7.5)
What is the possible penalty for failing to comply with the EIC due diligence rules?
$100 fine for each occurrence (7.15)
What happens if an individual is a qualifying child for more than one taxpayer?
The taxpayers may decide among themselves who will claim the credit. (7.9)
What happens when more than one taxpayer claims the same qualifying child?
The IRS will decide based on the tiebreaker rules. (7.9)
How does one determine the taxable income of the taxpayers who itemize deductions?
Adjusted gross income (AGI) minus total itemized deductions and total exemptions. (12.2)
What types of taxes are deductible?
• State and local taxes
• Real property taxes
• Personal property taxes
• Foreign income taxes
(12.12)
A taxpayer makes his final 2008 state estimated tax payment on January 15, 2009. Where should he report this item?
It is included on his 2009 Schedule A, line 5. This payment also should have been included on the estimated payments line of his 2008 state return.
Why is it important to distinguish qualified home mortgage interest from personal interest?
Mortgage Interest is deductible, Personal Interest is not. (12.20)
Is the cost of items purchased to benefit a charitable organization deductible, for example, ballet tickets to raise money for a non-profit hospital?
Only the amount paid in excess of the value of the item is deductible. (12.27)
A taxpayer wrote a check for a $500 donation to his mosque. Is his cancelled check sufficient documentation to support his deduction?
No. Donations of $250 or more must have written substantiation from the donee. (12.29)
Are scholarships and fellowships taxable?
• If a W-2 is received it is FULLY taxable.
• Amounts to Non-degree candidates are FULLY taxable and reported on Line 7 marked "SCH"
• Amounts to Degree candidates spent for qualified expenses are NOT taxable. (18.5)
Under what circumstances are gross gambling winnings taxable?
Always. Gambling losses may be deductible up to the amount of winnings. (13.18)
What document will the taxpayer receive from their employer reporting their disability pension?
1099-R (18.8)
How can a disability pension qualify as earned income for the EIC? We've learned pensions are not earned income.
Before the taxpayer reaches minimum retirement age, it's considered earned income (18.11)
What pensions are fully taxable?
Those to which the taxpayer DID NOT make after-tax contributions or from which all pre-tax amounts have been recovered in previous years. (10.8)
Under what circumstances would a pension be partly taxable?
When it's funded through employer plans to which the employee contributed some after-tax money. (10.9)
When would a traditional IRA distribution be partly taxable?
If nondeductible contributions had been made, Form 8606 is used to compute the taxable portion. (10.16)
Where is income tax withheld from a pension or IRA distribution reported on the tax return?
Form 1040 Line 61 (10.10)
What does it mean to depreciate an asset?
To reduce the basis of a business asset allowing for the reasonable wearing out over a period of years. (16.1)
What kind of property is depreciable?
Business-use property with a useful life of more than one year. (16.2)
How do we determine the MACRS recovery period of a piece of personal property?
By using the Table of Asset Class Lives and Recovery Periods (16.3)
For MACRS purposes, we need to divide real property into two categories. What are they?
Residential and Nonresidential (16.7)
How is residential real property, such as a rental house or apartment building, depreciated under MACRS?
Using a straight-line method over 27½ years.
What are some examples of listed property?
• Most passenger automobiles under 6,000 pounds and any property used for transportation.
• Property used for entertainment, recreation or amusement.
• Computers and related equipment unless used exclusively at a business.
• Cellular phones.
(16.17) | http://quizlet.com/2823131/hrb-27-final-exam-review-flash-cards/ | CC-MAIN-2015-11 | en | refinedweb |
How to Bind a Datasource to the Grouping Engine?
27 Mar 20181 minute to read
Essential Grouping can use any IList object holding objects and a common System.Type as its datasource. The public properties of the common type can be used to group, sort and summarize the data in the IList.
The following code shows how to set an IList object to be the data source of a GroupingEngine object. Within Essential Grouping, the items in your IList datasource are referred to as records.
using Syncfusion.Grouping;

// Create a Grouping.Engine object.
Engine groupingEngine = new Engine();

// Set its datasource.
groupingEngine.SetSourceList(list);
Imports Syncfusion.Grouping

' Create a Grouping.Engine object.
Dim groupingEngine As New Engine()

' Set its datasource.
groupingEngine.SetSourceList(list)
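Once the source list is set, you typically tell the engine which property to group by. The C# sketch below is an assumption based on the engine's TableDescriptor API (check the class reference for the exact member names); the "Category" property name is hypothetical:

```csharp
// Assumption: the list items expose a public "Category" property.
groupingEngine.TableDescriptor.GroupedColumns.Add("Category");

// The grouped results are then available through the engine's Table.
foreach (Group group in groupingEngine.Table.TopLevelGroup.Groups)
{
    Console.WriteLine(group.Category);
}
```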
On Wed, Aug 05, 2020 at 12:53:19PM +0200, Jiri Pirko wrote:
> Wed, Aug 05, 2020 at 11:33:38AM CEST, yan y zhao intel com wrote:
> >On Wed, Aug 05, 2020 at 04:02:48PM +0800, Jason Wang wrote:
> >>
> >> On 2020/8/5 3:56 PM, Jiri Pirko wrote:
> >> > Wed, Aug 05, 2020 at 04:41:54AM CEST, jasowang redhat com wrote:
> >> > > On 2020/8/5 10:16 AM, Yan Zhao wrote:
> >> > > >.
> >> > >
> >> > > CC Jiri and Parav for a better answer for this.
> >> > >
> >> > > My understanding is that the following advantages are obvious (as I replied
> >> > > in another thread):
> >> > >
> >> > > - existing users (NIC, crypto, SCSI, ib), mature and stable
> >> > > - much better error reporting (ext_ack other than string or errno)
> >> > > - namespace aware
> >> > > - do not couple with kobject
> >> > Jason, what is your use case?
> >>
> >>
> >> I think the use case is to report device compatibility for live migration.
> >> Yan proposed a simple sysfs based migration version first, but it looks not
> >> sufficient and something based on JSON is discussed.
> >>
> >> Yan, can you help to summarize the discussion so far for Jiri as a
> >> reference?
> >>
> >yes.
> >we are currently defining a device live migration compatibility
> >interface in order to let user space like openstack and libvirt know
> >which two devices are live migration compatible.
> >currently the devices include mdev (a kernel emulated virtual device)
> >and physical devices (e.g. a VF of a PCI SRIOV device).
> >
> >the attributes we want user space to compare include
> >common attributes:
> >  device_api: vfio-pci, vfio-ccw...
> >  mdev_type: mdev type of mdev or similar signature for physical device
> >             It specifies a device's hardware capability. e.g.
> >             i915-GVTg_V5_4 means it's 1/4 of a gen9 Intel graphics
> >             device.
> >  software_version: device driver's version.
> >             in <major>.<minor>[.bugfix] scheme, where there is no
> >             compatibility across major versions, minor versions have
> >             forward compatibility (ex. 1 -> 2 is ok, 2 -> 1 is not) and
> >             the bugfix version number indicates some degree of internal
> >             improvement that is not visible to the user in terms of
> >             features or compatibility,
> >
> >vendor specific attributes: each vendor may define different attributes
> >  device id: device id of a physical device or an mdev's parent PCI device.
> >             it could be equal to the PCI id for PCI devices
> >  aggregator: used together with mdev_type. e.g. aggregator=2 together
> >             with i915-GVTg_V5_4 means 2*1/4=1/2 of a gen9 Intel
> >             graphics device.
> >  remote_url: for a local NVMe VF, it may be configured with a remote
> >             url of a remote storage and all data is stored in the
> >             remote side specified by the remote url.
> >  ...
> >
> >Comparing those attributes in user space alone is not an easy job, as it
> >can't simply assume an equal relationship between source attributes and
> >target attributes. e.g.
> >for a source device of mdev_type=i915-GVTg_V5_4,aggregator=2 (1/2 of
> >gen9), it actually could find a compatible device of
> >mdev_type=i915-GVTg_V5_8,aggregator=4 (also 1/2 of gen9),
> >if mdev_type of i915-GVTg_V5_4 is not available in the target machine.
> >
> >So, in our current proposal, we want to create two sysfs attributes
> >under a device sysfs node.
> >/sys/<path to device>/migration/self
> >/sys/<path to device>/migration/compatible
> >
> >#cat /sys/<path to device>/migration/self
> >device_type=vfio_pci
> >mdev_type=i915-GVTg_V5_4
> >device_id=8086591d
> >aggregator=2
> >software_version=1.0.0
> >
> >#cat /sys/<path to device>/migration/compatible
> >device_type=vfio_pci
> >mdev_type=i915-GVTg_V5_{val1:int:2,4,8}
> >device_id=8086591d
> >aggregator={val1}/2
> >software_version=1.0.0
> >
> >The /sys/<path to device>/migration/self specifies self attributes of
> >a device.
> >The /sys/<path to device>/migration/compatible specifies the list of
> >compatible devices of a device. As in the example, compatible devices
> >could have
> >  device_type == vfio_pci &&
> >  device_id == 8086591d &&
> >  software_version == 1.0.0 &&
> >  (
> >   (mdev_type of i915-GVTg_V5_2 && aggregator==1) ||
> >   (mdev_type of i915-GVTg_V5_4 && aggregator==2) ||
> >   (mdev_type of i915-GVTg_V5_8 && aggregator==4)
> >  )
> >
> >By comparing whether a target device is in the compatible list of the
> >source device, the user space can know whether two devices are live
> >migration compatible.
> >
> >Additional notes:
> >1) software_version in the compatible list may not be necessary as it
> >already has a major.minor.bugfix scheme.
> >2) for a vendor attribute like remote_url, it may not be statically
> >assigned and could be changed with a device interface.
> >
> >So, as Cornelia pointed out that it's not good to use a complex format in
> >a sysfs attribute, we'd like to know whether there are other good ways for
> >our use case, e.g. splitting a single attribute into multiple simple sysfs
> >attributes as Cornelia suggested, or devlink, which Jason has strongly
> >recommended.
>
> Hi Yan.
>
Hi Jiri,
> Thanks for the explanation, I'm still fuzzy about the details.
> Anyway, I suggest you check the "devlink dev info" command we have
> implemented for multiple drivers. You can try netdevsim to test this.
> I think that the info you need to expose might be put there.
do you mean drivers/net/netdevsim/ ?
>
> Devlink creates instance per-device. Specific device driver calls into
> devlink core to create the instance. What device do you have? What
the devlink core is net/core/devlink.c ?
> driver is it handled by?
It looks like devlink is network-device specific, and in devlink.h it says
include/uapi/linux/devlink.h - Network physical device Netlink interface,
so I feel like it's not very appropriate for a GPU driver to use this
interface. Is that right?

Thanks
Yan
From: Vladimir Prus (ghost_at_[hidden])
Date: 2004-01-22 02:09:56
Robert Ramey wrote:
>>My understanding was that as you declare derived class, you also invoke
>>BOOST_CLASS_EXPORT on it, and now you can save pointers to that class
>>everywhere.
>
>>But this does not work in my tests. Unless I use BOOST_CLASS_EXPORT in the
>>code which does actual saving, I get:
>
>>/space/NM/boost-serialization/boost/serialization/oserializer.hpp:297:
>>static void boost::serialization::detail::save_pointer_type<Archive,
>>TPtr>::polymorphic<T>::save(Archive&, const T&, const
>>boost::serialization::basic_oserializer*) [with T = B, Archive =
>>boost::archive::text_oarchive, TPtr = B*]: Assertion `__null != vp'
>>failed.
>
>>Test case attached. Am I doing something wrong?
>
> This is explained in the header code at basic_archive.hpp:125
> Remember your problem with PIMPL ? well it shows up again with export in
> a particularly
> subtle way.
Yea, I wondered how BOOST_CLASS_EXPORT would instantiate the necessary
'serialize', but since the docs say it will, I thought it would magically
work in all cases ;-)
> Suppose I export a derived pointer but otherwise never
> explicitly refer to
> it at compile time. Code with template parameters <Archive, T> will never
> get instantiated. So EXPORT has two functions
> a) make sure that a class is registered in an archive by its key string
> b) instantiate required code to serialize the given type with all the
> archives used by the program.
>
> In order to do this, EXPORT builds an mpl::list of all the file types
> used by the module by
> check for definition of the head inclusion guards. Using mpl, the proper
> serialization code is explicitly instantiated so that it can be invoked if
> necessary.
Now I see there's boost::archive::known_archive_types.
> In order for this work, export.hpp has to come after the archive header
> .hpp files. This is enforced by the code that tripped above.
>
> I spent lots of time finding a solution to this difficult problem and am
> pleased with the result.
IOW, it's required that archives to be used with exported classes
are included before you do BOOST_EXPORT? Indeed, when I add
#include <boost/archive/text_oarchive.hpp>
everything starts to work. I guess I've no problem with this solution --
since I don't know how else EXPORT would work.
But it's unfortunate to get an assert like the above -- a newbie will immediately
decide it's a bug in serialization library. OTOH, requiring that all archive
headers are included before BOOST_CLASS_EXPORT is probably overkill.
- Volodya
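For reference, the include ordering under discussion looks roughly like this (a sketch; "b.hpp" is a hypothetical header defining the class B from the test case):

```cpp
// Archive headers must be included before export.hpp, so that
// BOOST_CLASS_EXPORT can instantiate serialization code for them.
#include <boost/archive/text_oarchive.hpp>
#include <boost/archive/text_iarchive.hpp>
#include <boost/serialization/export.hpp>

#include "b.hpp"          // hypothetical header defining the derived class B

BOOST_CLASS_EXPORT(B)     // now sees the archive types included above
```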
Many programmers and students create small projects. These projects are often based on the same project template. In many cases, they need to modify the project settings, and every time they must do that, they lose time on repetitive configuration tasks. For these reasons, I have decided to create this article; it explains Visual Studio .NET template creation, and it can be a good example for creating more complex wizards.
This information is not exhaustive; it explains the most common features of a custom wizard. A typical project wizard template contains the files listed below:
Identifies the wizard engine and provides context and optional custom parameters.
This file provides a routing service between the Visual Studio shell and the items in the wizard project.
Templates.inf is a text file that contains a list of templates to use for the project. Note that Template.inf is a template file itself (you can use directives to indicate which files to include in the project).
The XML file that contains the project type information.
This folder contains files that will be included in the project. Note that the location on the hard drive is "$(ProjectDir)\Template Files\1033".
Contains the HTML file (user interface) used by the wizard, one file per page wizard. The first file is "Default.htm". The location on the hard drive is "$(ProjectDir)\Html\1033".
The Custom Wizard creates a JScript file called "Default.js" for each project. It also includes "Common.js". These files contain JScript functions that give you access to the Visual C++ object models to customize a wizard.
Wait a moment, be patient. Before starting the template creation, you must create a default project. This project must contain your prerequisites for a default project template. Write down in a text file everything that you do; this information will be useful when the wizard creates the new project.
For this article, I have chosen to create a Win32 Console project for creating a C project. My requisites are:
typedef
bool
argc
argv
Creating the template structure is pretty simple; Visual Studio does it for you. First, start Visual Studio .NET and select "New Project" from the "File" menu.
On the project dialog box, choose "Visual C++ Project" and "Custom Templates".
On the Application Setting page of the "Custom Wizard", change the wizard friendly name to "C Console Project", check the "User interface" checkbox, and set the number of wizard pages to 2. The base is now created.
Now, we can add the project files. In my example, I only add one file called "main.c".
To add this file, create a file called "main.c" and save it to the "$(ProjectDir)\Templates\1033" folder. After that, right click on the "Template Files" directory on Visual Studio and select "Add existing item", then add the "main.c" file.
Delete the "ReadMe.txt" and the "Sample.txt" files from the "Template Files" directory (from Visual Studio and from the hard drive).
Now, we modify the "template.inf" file to represent the last three modifications. For that, replace all the file content by "main.c".
In this step, I only explain three modifications. All others are technically the same. These modifications are:
On every project, you can find a "default.js" file. This file contains some functions that are called when the output project is created. On the default HTML file, there is a textbox "MAIN_FILE_NAME" that contains the file name; by default, it is "main". To allow Visual Studio to change the name of this file, you must modify the function "GetTargetName" as follows (the function is located in the default.js file).
function GetTargetName(strName, strProjectName)
{
try
{
var strTarget = strName;
if (strName == 'main.c')
strTarget = wizard.FindSymbol('MAIN_FILE_NAME') + '.c';
return strTarget;
}
catch(e)
{
throw e;
}
}
You can define the default value for the HTML controls by adding a "SYMBOL" tag on the HTML file. For example, to set the default value of the "MAIN_FILE_NAME" control to main, add the following line on the HEAD section.
<symbol name="MAIN_FILE_NAME" type="text" value="main"></symbol>
For example, to add the "stdio.h" file into the "main.c" file, add a checkbox control on the "Default.htm" file:
<input id="INCLUDE_STDIO_H" type="checkbox" value="checkbox" name="checkbox">
After that, edit the "main.c" file and modify it according to the example below:
[!if INCLUDE_STDIO_H]
#include <stdio.h>
[!endif]
To view more possibilities, edit the file in the sample.
Modifying the project settings is a little more complicated. The function that does that is in the "default.js" file; its name is AddConfig().
When you create a project with the custom wizard, the generated AddConfig() function does not contain much information, but it contains the object declaration which we will use to change the project settings.
In my project, I need to change the following settings (config=debug). Below is a table with the settings to change and a sample code to make this change:
Character Set:Use Multi-Byte Character Set
Debug Information Format:Program Database for Edit & Continue (/ZI)
Warning Level:Level 3 (/W3)
Optimization:Disabled (/Od)
Preprocessor Definitions:WIN32;_DEBUG;_CONSOLE
Runtime Library:Single-threaded Debug (/MLd)
Create/Use Precompiled Headers:Not Using Precompiled Headers
Enable Incremental Linking:Yes (/INCREMENTAL)
Generate Debug Info:Yes (/DEBUG)
The code:
config.CharacterSet = charSetMBCS;
CLTool.DebugInformationFormat = debugOption.debugEditAndContinue;
CLTool.WarningLevel = warningLevelOption.warningLevel_3;
CLTool.Optimization = optimizeOption.optimizeDisabled;
CLTool.PreprocessorDefinitions = "WIN32;_DEBUG;_CONSOLE";
CLTool.RuntimeLibrary = rtSingleThreadedDebug;
CLTool.UsePrecompiledHeader = pchNone;
LinkTool.GenerateDebugInformation = true;
LinkTool.LinkIncremental = linkIncrementalYes;
The JavaScript:
var config = proj.Object.Configurations("Debug|Win32");
var CLTool = config.Tools("VCCLCompilerTool");
var LinkTool = config.Tools("VCLinkerTool");
For more information, see the AddConfig() function in the "Default.js" file.
In the setup file, you will find scripts that allow you to install this example automatically. If you want to change these scripts for your personal use, you will find the parameters for the installation in the file "config.vbs".
Extract the contents of "VCNetCustomWizard_demo.zip" and use the cmd files (install.cmd, uninstall.cmd, and settings.cmd).
For Visual C++ 2005 or 2005 Express, replace the string "$$WIZARD_VERSION$$" with "8.0", and go to the "config.vbs" file to view the installation paths.
Visual C++ .NET and Visual Studio .NET offer some features to customize your project templates; custom projects can help single developers or developer teams accelerate their development by automating repetitive tasks. They can also provide a good foundation for creating personal or enterprise standards. Some other Visual Studio features like add-ins or macros can help developers organize their work. I encourage developers to look at these different possibilities to eliminate the non-interesting things and concentrate on the more significant ones.
How to connect a custom i2c C library here?
triedgetech:
I have NanoPi with installed Ubuntu 16 LTS, kernel 4.14.
I want to read value from a specific register on slave device.
By default, there's no support for SMBus commands in C, and I don't have i2c/smbus.h
The read command that is there reads all values from register 0x00 up to whatever register you pass as an argument to "read()".
I have found this C library (though I heard it's not a good C library for i2c). What steps should I do in order to have a simple
#include <customI2C.h> in my script?? I read this guide on how to connect an external library, and it seems like such a hassle, not sure how to start.
Looks like i2c.c is an implementation file, so how do I create an i2c.h interface file then??? It'll take too long for me to figure out. I just need a tl;dr.
Its function declaration for read assumes you pass the result as the third argument, like this I guess:
--- Code: ---
u8 result_val;
u8 reg = 0x05;
u8 slave_addr = 0x18;
int err;
if ((err = i2c_read(slave_addr, reg, &result_val))) {
  // handle error
}
// do things with result_val
--- End code ---
Does it mean, that if this command reads a register 0x05 (System Clock in MAX9867), it might return an 8-bit value in hex like 0x10?
Anyhow, using the read command right now:
--- Code: ---
char buffer[15];
read(file, buffer, 15);
for (int i = 0; i < sizeof(buffer); i++) {
  printf("buffer[%d] is %d \n", i, buffer[i]);
  usleep(1000);
}
--- End code ---
Will print every register's value up to 14 (15 - 1), which in hex is 0x0E. Obviously that won't do, as this is inefficient; a read command should only access the specific register and then return its value.
What's the simplest way to achieve that? I noticed some C libraries utilize the ioctl command, as a declared function before int main? Perhaps there's a ready to use (copy & paste) sample of it?
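For what it's worth, the usual ioctl-based pattern on Linux (via /dev/i2c-*) is: select the slave address with I2C_SLAVE, write the register address, then read one byte. A sketch, assuming the MAX9867 sits at 0x18 on bus /dev/i2c-0 (adjust the bus number for your NanoPi):

```c
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/i2c-dev.h>

int main(void) {
    int fd = open("/dev/i2c-0", O_RDWR);      /* bus number is an assumption */
    if (fd < 0) { perror("open"); return 1; }

    if (ioctl(fd, I2C_SLAVE, 0x18) < 0) {     /* select the slave address */
        perror("ioctl");
        return 1;
    }

    uint8_t reg = 0x05;                       /* register to read */
    uint8_t val;
    if (write(fd, &reg, 1) != 1 ||            /* set the register pointer */
        read(fd, &val, 1) != 1) {             /* read exactly one byte */
        perror("i2c transfer");
        return 1;
    }

    printf("reg 0x%02x = 0x%02x\n", reg, val);
    close(fd);
    return 0;
}
```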
Basic Shapes
The Drawing library provides a set of built-in basic shapes for constructing scenes.
Getting Started
The following example demonstrates the complete implementation for rendering a static scene. In terms of the Drawing API, this scene consists of a
path (the border),
text, and
image.
Constructing the Sample Scene
To replicate the static scene from the previous example and add an enclosing group for positioning:
- Import the types
- Draw the path
- Draw the image
- Draw the text
- Draw the gradient
- Group the shapes
- Create the surface
- Render the scene
Importing the Types
The main entry point and the rest of the types are exported in the main
@progress/kendo-drawing module. The sample project uses the
Path,
Image, and
Group elements. The
geometry namespace contains geometric primitives and helpers, such as
Point,
Rect, and
Size.
The following example demonstrates how to use the classes from the two namespaces.
import { geometry, Image, Surface, Path, Text, Group } from '@progress/kendo-drawing'; const { Rect, Point, Size, transform } = geometry;
Drawing the Path
To draw straight lines, curves, or a combination of both, use the
Path element.
Set the stroke (line) color and width to match the picture. The constructor accepts the
ShapeOptionsobject that controls the appearance of the shape.
const path = new Path({ stroke: { color: "#9999b6", width: 2 } });
Set the initial position of the line by using the
moveTo command. To draw the three sides, use the
lineTo command. The last
close command closes the path and draws a straight line to the initial position.
path.moveTo(0, 0) .lineTo(150, 0).lineTo(150, 65).lineTo(0, 65) .close();
Alternatively, you can also use the static
fromRect method because the figure from the example is a rectangle.
const borderRect = new Rect( new Point(0, 0), new Size(150, 65) ); const path = Path.fromRect(borderRect, { stroke: { color: "#9999b6", width: 2 } });
Drawing the Image
To draw an image, use the
Image element which draws a bitmap image from a given URL.
Define the position and size of the image as a
Rect element.
const imageRect = new Rect( new Point(5, 5), new Size(50, 50) );
(Optional) Shorten the previous code statement.
Each method that expects
Point and
Size also accepts
[x, y] and
[width, height] arrays.
const imageRect = new Rect([5, 5], [50, 50]);
Create the image.
const image = new Image('../assets/diego.jpg', imageRect);
Drawing the Text
To draw the text, use the
Text element which draws a single line of text. Appearance options, such as the font, are set through
options. The
Point configuration defines the position of the top-left corner.
const text = new Text( 'Diego Roel', new Point(60, 25), { font: 'bold 15px Arial' } );
Drawing the Gradient
To add a gradient color to the background, use the
Gradient and
LinearGradient classes.
Grouping the Shapes
It is convenient to treat a group of shapes as a single entity. To set the position of all elements at once, use the
group constructor.
Create a
Group element and append the rest of the elements as children.
const group = new Group(); group.append(path, image, text);
Set the transformation which applies to all group children—to effectively make the coordinates of the element relative, translate their parent group.
group.transform( transform().translate(50, 50) );
Creating the Surface
To create the surface, use the
Surface.create method which chooses an implementation that matches the capabilities of the browser. The default output is an SVG with a fallback to Canvas.
The following example demonstrates how to apply
Surface.create. The surface is created in the AppComponent.
// Obtain a reference to the native DOM element of the wrapper const element = this.surfaceElement.nativeElement; // Create a drawing surface this.surface = Surface.create(element);
Rendering the Scene
To render the scene, use the
draw method of the surface which appends shapes to the scene graph.
surface.draw(group); | https://www.telerik.com/kendo-angular-ui/components/drawing/basic-shapes/ | CC-MAIN-2021-31 | en | refinedweb |
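Assembled from the snippets above, the whole scene setup can be written as a single function (a sketch; the element argument corresponds to the wrapper's native DOM element from the surface-creation step):

```typescript
import { geometry, Image, Surface, Path, Text, Group } from '@progress/kendo-drawing';
const { Rect, Point, Size, transform } = geometry;

export function drawScene(element: HTMLElement): Surface {
  // Border path around the card
  const borderRect = new Rect(new Point(0, 0), new Size(150, 65));
  const path = Path.fromRect(borderRect, {
    stroke: { color: '#9999b6', width: 2 }
  });

  // Photo and caption
  const image = new Image('../assets/diego.jpg', new Rect([5, 5], [50, 50]));
  const text = new Text('Diego Roel', new Point(60, 25), { font: 'bold 15px Arial' });

  // Group the shapes and position the whole card at (50, 50)
  const group = new Group();
  group.append(path, image, text);
  group.transform(transform().translate(50, 50));

  // Render on a surface bound to the host element
  const surface = Surface.create(element);
  surface.draw(group);
  return surface;
}
```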
Writing to or reading from a specific source might be acceptable for specialized applications. Typically, though, we should separate the way our program reads and writes from the actual input and output devices; this spares us from directly addressing each specific device (changing our program for every screen or disk on the market) and from supporting only a limited set of screens or disks that we happen to like, thereby limiting our clients.
Nowadays, operating systems separate the details of handling specific I/O devices into device drivers, allowing programs to access the devices through a generic I/O library, which encourages better development.
The C++ Standard provides I/O libraries that define output and input for every built-in type, but in this article we're going to focus on 3 specific stream classes:
- fstream for reading from a file and writing to a file in the same stream.
- ofstream converts objects in memory into streams of byte and writes them to the file.
- ifstream takes streams of bytes from the file and composes objects from it.
Write a File steps
The following snippet shows the basic recommended steps to write a file using C++:
#include <fstream> //Header for I/O file streams
using namespace std;

int main(){
    //1. Name it (or create it) for writing
    ofstream ofs ("test.txt",ios_base::out);
    //2.- Open it
    if(ofs.is_open()){
        //3.- Write out objects
        ofs<<"Hello C++"<<endl;
        for(unsigned int i=0;i<5;i++)
            ofs<<"Paragraph: "<<i<<endl;
        //4.-Close
        ofs.close();
    }
    //0. Hold errors.
    else{
        //Error
        //...
    }
    return 0;
}
Read a file steps
The following snippet shows the basic recommended steps to read a file using C++; this program reads the content of the file and prints the same content to the console window:
//... Read it and show it (also requires <iostream> and <string>)

//Read
//1. Know its name
ifstream ifs{"test.txt",ios_base::in};
if(!ifs){
    cout<<"cannot open input file test.txt"<<endl;
}
else{
    //2. Open it
    string temp;
    temp.clear();
    //3. Read in the characters
    while(ifs){
        ifs>>temp;
        cout<<temp;
    }
}
//4. Close (Implicitly closed)
//ifs.close();
Exercises
The next exercise creates a file of data in a specific format <hour (0-23), temperature (ºC)>.
Row (1) demonstrates how to create a file with a specific name and open mode (in this case, for writing the file).
Row (2) shows how to handle errors (displaying an error message and ending the program). Various C++ authors recommend checking for errors just after the creation/opening of the file.
Row (3) shows the format we're writing to the file, e.g. <12 56C>.
#include <iostream>
#include <fstream>
using namespace std;

int main(int argc, char *argv[])
{
    ofstream os{"raw_temps.txt",ios_base::out};     //(1)
    if(!os.is_open()){                              //(2)
        //hold error
        cout<<"The file produces an error"<<endl;
        return 1;
    }
    bool cont_flag=true;
    cout<<"[Welcome to the Temperature File creator]"<<endl;
    do{
        int tempHour;
        int tempTemp=0;
        char cDecision;
        cout<<"Type the hour (0 to 23): "<<endl;
        cin>>tempHour;
        cout<<"Type the temperature (ºC): "<<endl;
        cin>>tempTemp;
        os<<tempHour<<'\t'<<tempTemp<<'C'<<'\n';    //(3)
        cout<<"Do you want to add another pair? Y-N "<<endl;
        cin>>cDecision;
        cont_flag=(cDecision=='Y' || cDecision=='y');
    }while(cont_flag);
    return 0;
}
So far, that’s it! I hope you enjoy this article. | http://gearstech.com.mx/blog/2019/03/15/cio-stream/ | CC-MAIN-2021-31 | en | refinedweb |
Apache!
Usually your project consists of modules (or a single module) which are deployed as OSGi bundles. To get your scripts into the repository, you add the scripts as resources to your project and use the initial content feature from Apache Sling to add the scripts to the repository.
So you usually end up with your module (bundle) checked out in your IDE (Eclipse, for example), where you do your initial development (developing your OSGi services and scripts). For testing purposes you deploy the bundle (using the Maven Sling Plugin), which copies your scripts into the repository. From here they get picked up by Sling. If you now edit your scripts directly in the repository, you have to take care to synchronize the changes with your checked out project in your IDE, which can be an error-prone and annoying task. Or you can edit the scripts in your IDE and then either redeploy your bundle or manually copy the scripts via WebDAV – which doesn't make the process easier.
Fortunately Sling provides some tooling which makes these extra steps obsolete – actually we have this feature for a long time now but I always forgot to write about it…of course the following is only interesting for you, if you’re using Maven for your project.
Now, imagine your scripts are for your own node types which use the namespace prefix “myapp”, so you have a “src/main/resources/SLING-INF/content” (this is a convention for initial content) directory in your project. This content directory now contains a sub directory “libs” (or “apps” or any other configured search path) with your scripts. Underneath “libs” you have the “myapp” folder with a folder for each node type and this folder contains then your scripts (it’s really easier than it is to describe in textual form).
You’ll add a configuration for the Maven Bundle Plugin for adding the initial content header to your pom.xml:
<Sling-Initial-Content>
SLING-INF/content/libs/myapp;overwrite:=true;path:=/libs/myapp
</Sling-Initial-Content>
This basically copies the contents of the “/libs/myapp” folder on bundle install into the repository at the same location. On each update the contents gets overwritten.
Now add the Maven Sling Plugin to your pom.xml:
<plugin>
<groupId>org.apache.sling</groupId>
<artifactId>maven-sling-plugin</artifactId>
<version>2.0.3-incubator-SNAPSHOT</version>
<executions>
<execution>
<id>install-bundle</id>
<goals>
<goal>validate</goal>
<goal>install</goal>
</goals>
<configuration>
<mountByFS>true</mountByFS>
</configuration>
</execution>
</executions>
</plugin>
In the configuration above you can spot two new features of the latest plugin version: the validate goal will validate all *.json files and, more importantly for our topic, the configuration "mountByFS" with the value "true". Apache Sling has an extension bundle called file system provider (aka fsresource) which allows you to mount a file system path into the resource tree. So basically you can, for example, point the Sling resource path /myphotos to some directory on your server containing photos. This allows you to directly use files with Sling without copying them into the repository. Once you have installed this bundle into Sling and use the above configuration, each time you build your bundle with Maven and do a "mvn install", the Maven Sling Plugin will create an fsresource configuration for your initial content. In our case the Sling resource path "/libs/myapp" points to the file system directory "/src/main/resources/SLING-INF/content/libs/myapp". So once you've done an initial install of your bundle, you can directly change the scripts in your IDE in your project. And the changes become active immediately: no need to sync and no need to copy. This makes development turnarounds for scripting much shorter, as there is no turnaround any more.
The whole thing comes with a little drawback – with the configuration from above, your build fails if no Sling instance is reachable. So you should use a Maven profile for this configuration.
Rapid Development with Apache Sling using an IDE
Apache! | https://blog.osoco.de/2009/04/ | CC-MAIN-2021-31 | en | refinedweb |
This post will show you how to write your first C# program. Please make sure that you have installed Visual Studio on your machine. If you have not yet installed VS, download and install Visual Studio first.
Downloads | IDE, Code, & Team Foundation Server | Visual Studio
Creating first C# program
- Open Visual Studio and create a Console Application.
- Modify the main() method as shown below.
using System; //Adding .NET namespaces

namespace LearnCSharp //Namespace of the class
{
    class Program //The class
    {
        static void Main(string[] args) //Main method
        {
            Console.WriteLine("Hello World");
            Console.ReadKey(); //This keeps the console alive
        }
    }
}
Output
Hello world
About the program
- using System - The using keyword is used to include a namespace (a collection of classes) in the program.
- namespace LearnCSharp - The namespace of the current class is declared using the namespace keyword.
- class Program - A class named Program is declared. A class is declared using the class keyword.
- static void Main(string[] args) - Here, we define the main() method. It is the entry point of a console application.
- Console.WriteLine() - WriteLine() is a method of the Console class. These classes and methods are defined in the System namespace. The WriteLine() method is used to display text on the console.
- Console.ReadKey() - This code makes the console application wait for a key press before closing the console.
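As a small next step, you can also read input from the user with the Console.ReadLine() method, keeping the same program structure (a sketch):

```csharp
using System;

namespace LearnCSharp
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.Write("Enter your name: "); // Prompt without a line break
            string name = Console.ReadLine();   // Read a line typed by the user
            Console.WriteLine("Hello " + name);
            Console.ReadKey();                  // Keep the console alive
        }
    }
}
```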
Squarespace Polyfills
A polyfills package with the goal of standardizing polyfill implementations across Squarespace templates. core-js implementations of static and instance methods are preferred where available.
NOTICE: This code is licensed to you pursuant to Squarespace’s Developer Terms of Use. See license section below.
Usage
npm install --save @squarespace/polyfills;
import '@squarespace/polyfills/CustomEvent';
import '@squarespace/polyfills/Element/matches';
import '@squarespace/polyfills/Element/closest';
Usage Note: Bundlers
@squarespace/polyfills must be used with some kind of CommonJS-compatible script bundler (like Webpack, Browserify, Rollup, etc.)
License. | https://www.npmtrends.com/@squarespace/polyfills | CC-MAIN-2021-31 | en | refinedweb |
In this tutorial, you’ll learn how to schedule a Python script using crontab.
Don’t worry if you’ve never used crontab before. There are no prerequisites for this tutorial. I’ll walk you through everything you need to know step-by-step with lots of examples along the way.
Last thing, any modern version of Python should work.
Okay, here we go!
Crontab vs Cronjob – What’s the Difference?
A crontab is a file which contains the schedule of cronjob entries to be run at specified times. Crontab is short for cron table. You can think of a crontab as a configuration file that specifies shell commands to run periodically on a given schedule. A cronjob is basically instructions to run a command at a prescribed time.
Crontab Syntax
The crontab syntax is very powerful and flexible. Below is a reference describing the syntax for a cronjob, i.e. a single line in the crontab:

* * * * * command to execute

From left to right, each of the 5 asterisks represents minute, hour, day of month, month, and day of week. Finally, on the very right is the actual command to execute.
A Very Simple Cronjob Example
To add a cronjob to your crontab, open up a terminal window and type the following. The additional argument -e here means edit.
crontab -e
Assuming that you have no cronjob entries, you’ll see an empty file. Type the following cronjob into your crontab. This cronjob will redirect the current date and time into a file. Save and close your file when you are done.
* * * * * date > /tmp/test.txt
If you guessed that this command will be executed once per minute, you’re absolutely right. A series of five asterisks in a cronjob is quite valid and simply means to execute the command every minute of every day of every week of every month.
Let’s prove that this is actually working. In another terminal window, let’s use the watch command to periodically cat the contents of the test.txt file to the screen.
watch cat /tmp/test.txt
If you’re quick enough, you may see a “No such file or directory” error if the cronjob hasn’t been executed yet. Otherwise, you will see the contents of test.txt similar to below.
Mon Oct 7 12:21:00 CEST 2018
Wait another minute and you will see the date increment by a minute.
Mon Oct 7 12:22:00 CEST 2018
It’s worth pointing out here that cronjobs are by default executed at the top of the minute.
More Examples of Cronjobs
Now that you have created your very first cronjob, let’s go over some other cronjob examples that will execute at various frequencies.
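For instance, here are a few common schedules (illustrative examples; `command` stands in for whatever command you want to run):

```
0 * * * * command       # every hour, at the top of the hour
30 2 * * * command      # every day at 2:30 AM
0 0 * * 0 command       # every Sunday at midnight
*/15 * * * * command    # every 15 minutes
0 9 1 * * command       # at 9:00 AM on the first day of every month
```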
As you can see, the crontab syntax is very flexible. You can pretty much execute any command on any schedule no matter how simple or complicated it may be.
Schedule a Python Script with Crontab
With a good understanding of crontab and the cronjob syntax under your belt, let’s move on and walk through an example to schedule a Python script using crontab.
For simplicity’s sake, let’s write a simple Python program to log the date, time, and a random number between 1 and 100 to the end of a file. Create a file in your home directory called rand.py with the following content.
import random
from datetime import datetime

now = datetime.now()
num = random.randint(1, 100)

with open('/tmp/rand.txt', 'a') as f:
    f.write('{} - Your random number is {}\n'.format(now, num))
Let’s test out this program before we add it as a cronjob. Execute the following in a terminal window.
python rand.py
Check that the program did its job with cat.
cat /tmp/rand.txt
You should see something similar to below.
2018-10-07 12:33:21.211066 - Your random number is 65
It works!
Before we add a cronjob to execute the rand.py program every minute, you must know that it is necessary to use absolute paths for everything in your crontab. This is where most people get hung up with cronjobs. We will need the absolute path of the Python binary and the absolute path of our Python script.
- To get the absolute path of the Python binary, execute which python in a terminal window. In my case, the Python binary is at /usr/local/bin/python
- To get the absolute path of your Python script, execute the pwd in the same directory as the rand.py program. In my case, my Python script is at /Users/tonyflorida/rand.py
Alright. Now that we know where everything resides on our filesystem, let’s schedule our Python script to execute every minute. Open the crontab file like before.
crontab -e
Add the following line to the bottom of your crontab, substituting the appropriate paths for your filesystem. Save and close your file when you are done.
* * * * * /usr/local/bin/python /Users/tonyflorida/rand.py
Again, we can use the watch command to monitor the contents of the /tmp/rand.txt file.
watch cat /tmp/rand.txt
After a few minutes, your rand.txt file will look similar to this.
2018-10-01 13:57:31.158516 - Your random number is 27
2018-10-01 14:01:00.175556 - Your random number is 23
2018-10-01 14:02:00.267484 - Your random number is 81
2018-10-01 14:03:00.386802 - Your random number is 85
2018-10-01 14:04:00.504855 - Your random number is 22
2018-10-01 14:05:00.613324 - Your random number is 94
2018-10-01 14:06:00.706200 - Your random number is 45
Crontab Scheduling Final Thoughts
Congratulations! You now know how to schedule a Python script using crontab. I’m sure by now, you realize the endless possibilities of scheduling tasks with crontab.
Let’s clear out all of our cronjobs with the following command.
crontab -r
Any questions about how to schedule a Python script with crontab, let me know in the comments below. For another Python tutorial, check out last week’s post about how to send an email from Python.
3 thoughts on “How to Schedule a Python Script with Crontab”
Thank you for that incredibly clear and on point post/page. Much appreciated.
So happy you found it useful. Thanks for letting me know!
Thank you so much. I found this extremely useful. I also fell victim of
“…you must know that it is necessary to use absolute paths for everything in your crontab. This is where most people get hung up with cronjobs.” | https://tonyteaches.tech/schedule-python-script/ | CC-MAIN-2021-31 | en | refinedweb |
Introduction: ESC Programming on Arduino (Hobbyking ESC)
Hello Community,
So let's go...
Step 1: Getting ESC Information
You really should note the ampere rating of your ESC. This tutorial was only tested on a 20 A ESC.
Source:
German:
Step 2: Connection to Arduino
I tried it with an Arduino Uno R3. I think it's also possible with, e.g., an Arduino Duemilanove or Mega.
Step 3: Upload Sketch
Just copy and paste this Code to your IDE:
/*
Coded by Marjan Olesch
Sketch from Instructables.com
Open source - do what you want with this code!
*/

#include <Servo.h>

int value = 0; // set values you need to zero

Servo firstESC, secondESC; //Create as many Servo objects as you want. You can control 2 or more servos at the same time

void setup() {
  firstESC.attach(9);  // attach the ESC's signal wire (pin 9 here; use the pin you wired)
  Serial.begin(9600);  // open the serial connection so you can type in throttle values
}

void loop() {
  firstESC.writeMicroseconds(value); // send the current throttle value to the ESC
  if (Serial.available())
    value = Serial.parseInt();       // read a new value from the Serial Monitor
}

Step 4: Understanding and Programming an ESC

First, write '2000' in the Serial Monitor. It means the highest signal the ESC can receive.
* You will hear the sounds which are described in the ESC manual.
The Hobbyking ESCs can receive a signal between 700 and 2000 us (microseconds). 700 means throttle at the lowest position and 2000 at the highest position. If you want to know exactly what you do when you pick a menu option, visit the manual.
Example:
- Write 2000 us
- Wait until you hear 'D-D-D-D' to choose LiPo as the battery type
- When it appears, write 700 into your serial monitor at the third 'D' (there is a short delay, that's why you have to send it at the third 'D')
- The ESC will make a sound, and the option is chosen.
I hope I could help you with this tutorial.
55 Comments
Question 2 years ago
Thanks for the great page! Do you know if this works with TURNIGY MULTISTAR 40A BLHELI-S REV 16 ESC 2~4S V3 (OPTO)?
Question 2 years ago on Step 4
Hello,
Thank you for this tutorial, it is extremely helpful!
I was trying to create a similar circuit myself whereby I attach a 40A ESC UBEC 4A, to a 300W brushless motor, being powered by a 6s Lipo battery, controller by my Arduino Mega.
I cannot seem to find the specifications defining the signal range for my esc. When I try callibrating it with a throttle I am able to go through the initialization of sequential beeps (AAAA BBBB etc), however when I try it through sending values via the serial monitor, I only get one or two beeps for each value I send, and it does not seem to initialize. Any ideas how I can fix this issue?
4 years ago
Here is my two cents. Should work for any ESC. You may need to play
with delay times. Choose your own speed as you wish, between HI and LO.
/*** ***/
#include <Servo.h>
Servo esc;
void setup()
{
esc.attach(9);
delay(5000);
esc.write(179); // HI
delay(5000);
esc.write(1); // LO
delay(5000);
esc.write(90); // MID
delay(10000);
esc.write(120); // SPEED
}
void loop()
{
}
Reply 2 years ago
I am very interested in running 1 motor through an ESC via an Arduino Uno. And I just need to run it through the USB port. I don't know what to do about the code though. Even if I write it out in the editor, what should I expect? How do I just turn it on/off?
Question 2 years ago on Step 2
Attached images; The set up as a whole:
FIRST: I have it set up in 'a' way that will allow this to function/interact correctly. I do not need an accelerometer and I am wanting to turn it on/off-start/stop and adjust speeds (for all intense and purposes) through my desktop via the USB port. So I want to be able to do all of this through the Arduino create agent editor thing on my desktop.
- 6s LiPo battery - 6,000mAh
- 80A ESC - "Mystery" brand
- Drone motor - U7 T-Motor brand, 420KV, 3-8s LiPo battery
NEXT: So even if/when someone is so nice enough to show me the code in such a way I can just copy and paste into the editor - I really don't know what to do next. How do I get it to go? I'll have to adjust numbers in certain sections of the code in editor. I'll need some help with this, e.g. this number means this - that means that and adjusts the speed accordingly, etc.
But basically; How do I turn this set up on and off in the editor controlled from my desktop.
Thanks AND please let me know what I can clarify
3 years ago
An Arduino ESC library exists on GitHub. It takes many other issues into consideration. I do admire your effort though!
Reply 3 years ago
I was using the github code and it was working fine, but only for pin 9 and 10 on the arduino NANO the other pins that do have pwm do not work. Someone said it might be that pin 9 and 10 run on 980 hertz and the other pins only at 490 hertz. Is that the case and is it fixable?
Question 3 years ago
Oh, thanks for the explanation.
In my case, the PWM output from my PID calculation is responsive, but when it is written to the ESC, the BLDC's response is slow (it's like there is some delay on it), so my quad can't stay stable with any PID constants that I tune.
Please help: how can I make my ESC respond fast according to the yaw and pitch angle, with no delay? Thanks.
4 years ago
hello, love this tutoral, and I'm using it to power 4 esc/motors.
I've pretty much just quadrupled your send routines:
firstESC.writeMicroseconds(value);
secondESC.writeMicroseconds(value);
thirdESC.writeMicroseconds(value);
fourthESC.writeMicroseconds(value);
Running the motors without props they run fine for as long as I need, but I put some small 6" props on for testing, and after about 10 seconds at least one motor slows and stops. Eventually two go, then three.
I'm running 2300kva, I would have thought these motors could have handled it.
Is there something I'm missing?
Reply 4 years ago
Did you check the max amperage your ESC can provide vs the rating for the engine? Are your engines overheating because of too much power?
Reply 3 years ago
Sorry I honestly didnt see this reply.
It was just the power supplies I was using couldnt give amps fast enough. I eventually just bought a large capacity lipo and charger off ebay.
Thanks for the reply.
Reply 3 years ago
No worries, thanks for getting back to me :-)
LiPo will also be the long term solution, but for now, using a power supply for the algorythm tests with my students..
Thanks again for the the ESC instructions, they were very useful!
Reply 4 years ago
It turned out to be power supply, I was using the wrong sort. Stuck a LiPo on it and everything is good.
I forgot I posted this otherwise I would have updated it, thanks for the reply though.
Reply 3 years ago
Hey! What sort of Power Supply?
Reply 3 years ago
hmm this was a while ago now.
I was using a pc power supply, but I couldnt get the motors to run at full speed for long before they eventually cause the power supply to fail.
I eventually just bought a proper lipo and charger off ebay, I think I got change out of $50 for both. I havent done much on this for a year or so because I've moved and no longer have the space, but I need to get bigger motors. The ones I have ALMOST lift it off the ground but the airframe is just under 1.4kg. I am also trying to lighten the load.
Question 3 years ago
Hello, do all kinds of ESCs have the same signal range between 700 and 2000?
4 years ago
The link doesn't work. It just says error 404 when the page goes to hobbyking. Is there another link?
Reply 3 years ago
I know this is very late but for any new people coming here, I found the link here
5 years ago
Found how to calibrate traxxas XL 2.5. Hope this helps:
#include <Servo.h>
#define MAX_SIGNAL 2300
#define NETRUAL 1400
#define MIN_SIGNAL 400
#define MOTOR_PIN 9
Servo motor;
void setup() { //open serial monitor
Serial.begin(9600);
Serial.println("Uno online. Software launch sucessful");
delay(100);
Serial.println("Calabration set for TRAXXAS XL 2.5. Please wait for further instruction.");
delay(100);
Serial.println("Begin calibration with ESC powered off and LiPo attached. Connect control wire and grnd to Arduino. Press any key when complete");
while (!Serial.available());
Serial.read();
motor.attach(MOTOR_PIN);
Serial.println("Output at NEUTRAL. Please press and hold ESC calibration button. Light shall flash green then red. Release the button");
delay(1000);
Serial.println("Wait for ESC to blink red once. Then press any key");
while (!Serial.available());
Serial.read();
Serial.println("Now outputting maximum output.");
motor.writeMicroseconds(MAX_SIGNAL);
delay(1000);
Serial.println("Wait for ESC to blink red twice. Then press any key");
while (!Serial.available());
Serial.read();
Serial.println("Sending minimum output");
motor.writeMicroseconds(MIN_SIGNAL);
delay(1000);
Serial.println("ESC should blink green once. If not, calbration has failed. Please atempt agian");
}
void loop() {
}
Reply 3 years ago
I tried it on my XL5 esc but it didn't work. Could you please tell me if there are any modifications I'd have to make to the code? Thanks! | https://www.instructables.com/ESC-Programming-on-Arduino-Hobbyking-ESC/ | CC-MAIN-2021-31 | en | refinedweb |
Azure Data Lake Storage (ADLS)
Azure Data Lake Storage (ADLS) is an unlimited scale, HDFS (Hadoop)-based repository with user-based security and a hierarchical data store. Recently, Azure Blob Storage was updated to (among other things) increase capabilities in both scaling and security. Although these updates reduced the differential benefit between ADLS and Blob Storage, that didn’t last long. The improvements to Blob Storage are now the basis for updates to Azure Data Lake Storage. These updates are currently in preview as ADLS Generation 2 and build directly on the new improvements in Blob Storage.
ADLS Gen2 sits directly on top of Blob Storage, meaning your files are stored in Blob Storage and simultaneously available through ADLS. This enables access to all key Blob Storage functionality, including Azure AD based permissions, encryption at rest, data tiering, and lifecycle policies. You can access data stored in ADLS Gen2 via either ADLS (HDFS) or the Blob Storage APIs without moving the data.
Key offerings for Gen2 on top of Blob Storage’s capabilities are the Hadoop-compatible file system (HDFS), hierarchical namespace (folders/metadata), and high-performance access to the large volumes of data required for data analytics. With these updates, ADLS remains the best storage interface in Azure for services running large volume analytic workloads. ADLS makes this performance available to any service that can consume HDFS, including ADLA, Databricks, HD Insight, and more.
Azure Data Lake Analytics (ADLA)
Azure Data Lake Analytics (ADLA) is an on-demand analytics job service. The ADLA service enables
execution of analytics jobs at any scale as a Software as a Service (SaaS) offering, eliminating up-front investment in infrastructure or configuration. This analysis is performed using U-SQL, a language that combines the set based syntax of SQL and the power of C#. With no up-front investment and a language that a .NET Developer can easily work with ADLA simplifies startup on the analysis of the terabytes or petabytes of data resting in your Data Lake.
What’s U-SQL?
U-SQL is a new language that combines the set-based syntax and structures of SQL with the capability and extensibility of C#. However, U-SQL is not ANSI SQL, as the intended purpose of U-SQL goes beyond reading data from a traditional RDBMS. Additional capabilities are required to support actions that are normal for data analysis of large sets both unstructured and structured data, but not for standard SQL.
One example of this is that data in a data lake is often unstructured and likely in the raw format in which it was received. Working with this unstructured data requires defining structure for the data and also often introducing some transformations and enrichment. U-SQL provides a schema-on-read capability that provides structure to data as it is being read and used vs. applying structure to the data as it is received and stored. This unstructured or semi-structured data can then be combined with data in structured data stores like SQL to find answers to questions about your business.
We’ll dive deeply into U-SQL in the future. For now, we’ll take a quick look at a sample U-SQL script. The script below reads in data from a set of CSV log files, counts the number of actions per user, and writes out a CSV file with these results. Although this is a simple example it shows the basic concepts of a U-SQL script. A pattern of read (extract), act (select and transform), and output is one that is repeated again and again in U-SQL scripts, this pattern also maps easily to a typical ETL process.
// Read the user actions data
@userActionsExtract =
    EXTRACT userName String,
            action String,
            date DateTime,
            fileDate DateTime
    FROM "/raw/userlog/user-actions-{fileDate:yyyy}-{fileDate:MM}-{fileDate:dd}.csv"
    USING Extractors.Csv();

// Count the actions for each user where the date in the file name is in July
@totalUserActionCounts =
    SELECT userName,
           COUNT(action) AS actionCount
    FROM @userActionsExtract
    WHERE fileDate.Month == DateTime.Now.Month
    GROUP BY userName;

// Write the aggregations to an output file
OUTPUT
(
    SELECT *
    FROM @totalUserActionCounts
)
TO "/processed/userlog/user-action-counts-2018-07.csv"
USING Outputters.Csv();
Related Services
The Azure Data Lake services are only part of a solution for large-scale data collection, storage, and analysis. Here are three questions you might ask about solving these problems. Answers to these questions offer a starting point for further exploration into your data analytics pipelines.
How do you get data into the data lake?
In Azure, the most prominent tool for moving data is Azure Data Factory (ADF). ADF is designed to
move large volumes of data from one location to another making it a key component in your effort to collect data into your Data Lake. V2 of Azure Data Factory was recently released. V2 provides an improved UI, trigger-based execution, and Git integration for building data movement pipelines. Another key addition is support for SSIS package execution so you can reuse existing investments in data movement and transformation.
How do you perform batch analysis of data in the data lake?
In addition to ADLS, there are other analysis services that are directly applicable to the large-scale batch analysis of unstructured data that resides in your data lake. Two of these services available on Azure are HDInsight and Databricks. Using these other services may make sense if you are already familiar with them and/or they are already part of your analytics platform in Azure. Databricks provides an Apache Spark SaaS offering that allows you to collaborate and run analytics processes on demand. HDInsight provides a greater range of analytics engines including HBase, Spark, Hive, and Kafka. However, HDInsight is provided as a PaaS offering and therefore requires more management and setup.
How do you report on data in the data lake?
Azure Data Lake and the related tools mentioned above provide the ability to analyze your data but are not the generally the right source for reports and dashboards. Once you’ve analyzed your data and identified measures and metrics you might want to see in dashboards and reports, you’ll need to do some additional work. Ideally, data for dashboards and reports will be structured and stored in a service designed to be queried regularly and update the report or dashboard data. The right place for this data will be a destination like SQL Azure, a SQL Azure Data Warehouse, Cosmos DB or your existing BI platform. This is another stage where Azure Data Factory will be key, as it can orchestrate the process to read data, schedule execution of analysis (if needed), structure data, and write the resulting data to your Reporting data store.
Contextual Overview Diagram
For More Information
- Azure Data Lake Storage (ADLS)
- Azure Data Lake Storage Generation 2 (ADLS Gen 2)
- Azure Data Lake Analytics (ADLA)
- Azure Data Factory (ADF)
- Getting Started With U-SQL | https://www.ais.com/an-introduction-to-azure-data-lake/ | CC-MAIN-2021-31 | en | refinedweb |
Last time I discovered that the JVM startup times aren't that bad. Clojure and Leiningen are much slower.
Clojure startup times suck. Let's just be honest. Starting
lein repl in a typical project takes about eight seconds on my machine. Running
lein test takes over twelve seconds. And I don't even have any
tests.
How do Clojure programmers live with this?
Further, Clojure is all about interactive development. It's all about fast feedback, incremental compiling, exploration. How can you have fast feedback when it takes 12 seconds to run tests? How do you do exploration when things take so long?
The startup times are annoying! When I want to start coding, it takes a lot of time before I can even type the first paren. I have to wait and remember all of the stuff that's going on in my head that I just want to type out. It's very stressful. I love fast feedback. I can't stand programming in a system that forces me to type stuff, then takes a while to show me the result of typing that stuff. But let me tell you, Clojure is not one of those systems.
The (Not so) Secret
Here's the key to fast feedback in Clojure: I rarely restart my application. I have REPLs that have been open for days. When I'm actively working on a single application, I will keep it running for weeks sometimes. It's not crazy. So I am willing to wait 20-30 seconds for my application to start. 20-30 seconds is nothing compared to weeks of development with super fast feedback.
Most workflows get it all wrong
Let me tell you a little story that Alan Kay has told about Smalltalk back in the day. People writing C++ would show all of these microbenchmarks showing how C++ way outperforms Smalltalk, especially when doing integer arithmetic. But when they'd write a GUI in C++, they'd waste all of the performance gained. Mouse clicks would take a long time to have their effect. Windows wouldn't update their contents quickly. The actual goal of showing the answer to the user quickly was ignored. Smalltalk let you do the math, see the answer, and change the calculation faster than the C++-based GUIs could.
Smalltalk focused on giving users the whole cycle. C++ focused on fast math, one little piece. When people focus on startup times, they are missing the big picture. It's not about startup times, it's about fast feedback.
What I see in Clojure (and some other communities) is a consistent high-level concern with the speed of feedback. That includes the entire cycle, from thinking of some code, to typing it in, to compiling it and running it and seeing the result.
It's why Figwheel exists. People complain about ClojureScript compile times. In most frontend systems, there's almost no compilation time. But they are not counting the time it takes to refresh their browser and click buttons to test out what they just coded. With Figwheel, you literally type code, save it, and less than a second later see the result in your browser. The compilation time is only a part of the equation.
When people complain about the startup times, I wonder about their workflow. I've even considered offering a consulting service to help people set up their workflow for fast feedback. If you want me to get annoyed and stressed and quit my job, give me long feedback loops. The absolutely most important thing to me in any system is making it fast to make changes and see the results. I'll spend a lot of time speeding up my workflow instead of working on features.
Cider (the Emacs development environment for Clojure), Cursive, and proto-repl (the Atom development environment for Clojure) all focus on this entire feedback loop.
My workflow
With the obligatory caveats that my workflow is certainly not ideal for everyone, let me try to describe how I work in Clojure.
I open up Emacs, browse to a random file in my project, then run
cider-jack-in, which starts a connected REPL. That takes about 20-30
seconds. I twiddle my thumbs.
Now that I'm in, I compile the current namespace with a keystroke (C-c C-k). Then I code. Then I compile everything. Then I code. Then I compile everything. Over and over.
If I'm doing TDD, after I compile my code, I run the tests with a keystroke (C-c C-t n). All of this stuff is so ingrained in my muscle memory that I don't think about it. I do it without realizing it.
Some other resources
Okay, startup times is a very common complaint and people have worked hard on it. There are some interesting resources.
One approach is to keep a JVM around that's already booted up and
ready with the classpath you want. Then when you want to run some
code, you just connect to the JVM and add the new code and run
it. That's the approach of drip and
it should be a drop-in replacement for the
java executable.
The Leiningen wiki has a page about making Leiningen faster.
Alan Dipert wrote a guide to avoid restarting the JVM when developing in Boot. Boot is cool because it totally gets the fast-feedback mindset. Boot lets you add dependencies as you go, instead of having to restart to add a new JAR.
But startup time of Clojure is still bad for shell scripting. Planck and Lumo are ClojureScript REPLs that start up fast and let you code ClojureScript. It's not JVM Clojure, but it's a way to run Clojure for quick scripts. They can handle command line arguments, input/output, and shelling out to other programs, among other things. These are under active development and they get new features all the time.
Conclusions
Startup times are still important for a lot of applications, like running shell scripts. However, when developing applications, the Clojure community focuses on fast feedback more than it focuses on the startup time. Get your workflow set up so that you can see the result of your changes instantly.
If you are coming to Clojure from another language (or it's your first language), I don't want you to get stuck on the intricacies of the JVM. It can be a huge stumbling block. That's why I created a course called JVM Fundamentals for Clojure. It explains all sorts of stuff that will make you more effective when using the JVM, configuring it, doing interop, and understanding what's going on. You can buy that individual course or you can get it as part of a membership.
We're going to take a little turn now and next time we'll be exploring the wide variety of JVM deployment options. | https://purelyfunctional.tv/article/how-do-clojure-programmers-deal-with-long-startup-times/ | CC-MAIN-2018-43 | en | refinedweb |
Lec 29: Critical Sections, Locking, and Deadlocks
1 Threads and shared resources
In the last lesson, we explored threads and their usefulness for concurrent programming. Threads are a way to divide the resources of a process so that individual parts can be scheduled independently. We also described this as user level parallelism, as oppose to O.S. level parallelism which is provide by processing.
There are many benefits to user level parallelism, such as simpler programming structure and design. User level parallelism also means that each thread shares the same resources, including memory. For example, consider the following small program:
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <pthread.h>
#include <time.h>   /* for time() */

int shared = 10;

void * fun(void * args){
  time_t start = time(NULL);
  time_t end = start + 3; //run for 3 seconds

  while(time(NULL) < end){
    shared++;
  }

  return NULL;
}

int main(){
  pthread_t thread_id;

  pthread_create(&thread_id, NULL, fun, NULL);
  pthread_join(thread_id, NULL);

  printf("shared: %d\n", shared);

  return 0;
}
The thread executing
fun() will increment the shared variable
shared for 3 seconds. The main thread will block on the join and
print the result. This is only possible because both threads, the
worker thread and the main thread share memory. While a contrived
example, it is demonstrative of the capabilities of shared memory
and your experience with multi-processing and piping should help you
appreciate the benefit.
However, shared resources comes at a cost. The above example has a single thread working on a single variable: what happens if we have multiple threads trying to manipulate a single memory address? You might think that each thread will be able to act without interference, but computers are finicky and the thread scheduling routine is not transparent. A thread may be just in the middle of an operation and then be interupted by another thread that is also operating on the same data. The result is that the data can become inconsistent and neither thread has the true representation of the data.
To solve these problems, we need a new mechanism to ensure that all critical operations are atomic or mutually exclusive, that is, an operation completes full within one thread without the possibility of another thread preempting that process. In today's lessons, we are going to learn how to add mutual exclusion to our programs to avoid inconsistencies and how to avoid using these tools in ways to hamper program progress.
2 Locking
The concept of resource locking is a huge area of study in computer and operating system research. The big idea is that when a program enters a critical section, a set of code that must fully complete without interruption, the program holds a lock that only one program (or thread) can hold at a time. In this way only one program can ever be in the critical section at any time.
2.1 Why we need locks?
Before we get into the details of programming with locks, let's look at an example of why we need them. Consider the program below which is simulating a bank. The user selects the number of threads from the command line and then each thread will manipulate an account balance by randomly choosing an amount to credit or debit the account while tracking the total number of debits and credits.
/* bank.c*/
int balance = INIT_BALANCE;
int credits = 0;
int debits = 0;

void * transactions(void * args){
  int i,v;
  for(i=0;i<NUM_TRANS;i++){
    //choose a random value
    v = (int) random() % NUM_TRANS;

    //randomly choose to credit or debit
    if( random() % 2){
      //credit
      balance = balance + v;
      credits = credits + v;
    }else{
      //debit
      balance = balance - v;
      debits = debits + v;
    }
  }
  return 0;
}

int main(int argc, char * argv[]){
  int n_threads,i;
  pthread_t * threads;

  //error check
  if(argc < 2){
    fprintf(stderr, "ERROR: Require number of threads\n");
    exit(1);
  }

  //convert string to int
  n_threads = atol(argv[1]);

  //error check
  if(n_threads <= 0){
    fprintf(stderr, "ERROR: Invalid value for number of threads\n");
    exit(1);
  }

  //allocate array of thread identifiers
  threads = calloc(n_threads, sizeof(pthread_t));

  //start all threads
  for(i=0;i<n_threads;i++){
    pthread_create(&threads[i], NULL, transactions, NULL);
  }

  //join all threads
  for(i=0;i<n_threads;i++){
    pthread_join(threads[i], NULL);
  }

  printf("\tCredits:\t%d\n", credits);
  printf("\t Debits:\t%d\n\n", debits);
  printf("%d+%d-%d= \t%d\n", INIT_BALANCE, credits, debits,
         INIT_BALANCE+credits-debits);
  printf("\t Balance:\t%d\n", balance);

  //free array
  free(threads);

  return 0;
}
Once the program completes, it prints out the total amount credited and debited and the current balance. The program also prints out what the balance should be if we consider the credits, debits and initial balance. Here are two runs of the program with 2 worker threads:
#> ./bank 2
        Credits:        4707
         Debits:        4779

1000+4707-4779=         928
         Balance:       942
#> ./bank 2
        Credits:        4868
         Debits:        4465

1000+4868-4465=         1403
         Balance:       1587
What happened? In neither case did the balance at the end match the expectation based on the total debits and credits. How could this be?
Each thread is scheduled independently and can be interrupted and preempted arbitrarily by the OS. This can even occur in the middle of an operation. Consider just the simple bit of code to add numbers:
balance = balance + v
To assign the final balance, two things must happen:
- balance + v must be computed and stored in a temp variable
- The temp variable must be assigned to balance
What happens if the thread did step (1) but was preempted by the other thread before completing step (2)? In the meanwhile, the other thread completes (1) and (2), and finally the original thread is scheduled back to do step (2). Consider the diagram below:
Thread 1              Thread 2              balance   v    tmp
(1) bal + v -> tmp                          1000      5    1005
                      (1) bal + v -> tmp    1000      3    1003
                      (2) bal <- tmp        1003
(2) bal <- tmp                              1005
                                            1005
Since the incremental work of Thread 1 was assigned after the work of Thread 2, it is as if Thread 2 never did the work in the first place. This is why we get such inconsistent results with the balance; and further, it's more than just the balance value: it is not clear that the credit or debit counters are consistent either.
To solve this problem, we have to identify the critical sections of the program, that is, sections of code that only one thread can execute at a time. Once a critical section is identified, we use a shared variable to lock that section. Only one thread can hold the lock, so only one thread executes the critical section at a time.
2.2 Naive Locking
While it may seem simple at first, creating a lock is no small task. The locking procedure must be an atomic operation, where the whole process of testing whether the lock is available and acquiring the lock occurs as a single step. To demonstrate why this must be the case, consider the following code where we implement a naive lock, which is just an integer variable, either set to 0 for available or 1 for locked.
char USAGE[] = "naive_lock n_threads\n"
               "USAGE: run n threads with a naive lock\n";

int lock = 0;   //0 for unlocked, 1 for locked
int shared = 0; //shared variable

void * incrementer(void * args){
  int i;
  for(i=0;i<100;i++){
    //check lock
    while(lock > 0); //spin until unlocked
    lock = 1;        //set lock
    shared++;        //increment
    lock = 0;        //unlock
  }
  return NULL;
}

int main(int argc, char * argv[]){
  pthread_t * threads;
  int n,i;

  if(argc < 2){
    fprintf(stderr, "ERROR: Invalid number of threads\n");
    exit(1);
  }

  //convert argv[1] to a long
  if((n = atol(argv[1])) == 0){
    fprintf(stderr, "ERROR: Invalid number of threads\n");
    exit(1);
  }

  //allocate array of pthread_t identifiers
  threads = calloc(n, sizeof(pthread_t));

  //create n threads
  for(i=0;i<n;i++){
    pthread_create(&threads[i], NULL, incrementer, NULL);
  }

  //join all threads
  for(i=0;i<n;i++){
    pthread_join(threads[i], NULL);
  }

  //print shared value and result
  printf("Shared: %d\n", shared);
  printf("Expect: %d\n", n*100);

  return 0;
}
If you follow the code, you see that the variable
lock is set to 0
for unlocked and 1 for locked. Then each thread will spin until the
variable is set to 0 for unlocked, that is
while(lock > 0);
If we run this program a few times with 2 threads, it might seem to work …
#> ./naive_lock 2
Shared: 200
Expect: 200
#> ./naive_lock 2
Shared: 200
Expect: 200
#> ./naive_lock 2
Shared: 200
Expect: 200
#> ./naive_lock 2
Shared: 200
Expect: 200
#> ./naive_lock 2
Shared: 200
Expect: 200
#> ./naive_lock 2
Shared: 172
Expect: 200
… until all of a sudden it does not work. What happened?
The problem is that the testing of the lock and the acquisition of the lock are not atomic operations. If we look at that sequence of code:
//check lock
while(lock > 0); //spin until unlocked
                 //<----------- What if threads swap here?
lock = 1;        //set lock

Then two (or more) threads could have the lock.
What we need is a specialized operation for testing and setting that
all happens at once, and fortunately, this is provided to us by
the pthread library.
2.3
mutex locks
The term mutex stands for mutual exclusion, which is a fancy name for a lock. A mutex is not a standard variable; instead, it is a special type whose operations are guaranteed to be atomic. The act of acquiring a lock cannot be interrupted.
In the
pthread library the type of a mutex is:
pthread_mutex_t mutex;
You must first initialize a mutex before you can use it:
pthread_mutex_init(&mutex, NULL);
//                         ^
//        will not use second arg of init
You then can acquire and unlock a mutex:
pthread_mutex_lock(&mutex);

/* critical section */

pthread_mutex_unlock(&mutex);
Both of these operations are atomic, and the locking function is blocking. That is, once a thread tries to acquire a lock, its execution is suspended until the lock is available. Further, locks are ordered in their acquisition; the order in which threads try to acquire the lock determines which thread gets the lock next. How that works is a discussion for your operating systems class, and more generally, the implementation details of the locking functionality are heavily supported by the operating system and the hardware.
Finally, creating a mutex allocates memory, so we have to deallocate, or destroy, the mutex when we are done with it:
pthread_mutex_destroy(&mutex);
3 Locking Strategies
There are two strategies for locking critical sections. In this
part, we will refer back to the
bank.c program at the top of the
lesson where we had three shared variables,
balance,
credits,
debits, and multiple threads depositing and withdrawing random
funds from the balance. As we identified, there were inconsistencies
in the balance after running the program, and those inconsistencies
are present with all the shared variables.
3.1 Coarse Locking
One locking strategy is to use a single lock to protect the entire critical section. This is called coarse locking because it locks at a large granularity. In code, this might look like the following for the critical section:
/*bank_coarse_lock.c*/
int balance = INIT_BALANCE;
int credits = 0;
int debits = 0;

pthread_mutex_t lock;

void * transactions(void * args){
  int i,v;
  for(i=0;i<NUM_TRANS;i++){
    pthread_mutex_lock(&lock); //acquire lock

    //choose a random value
    v = (int) random() % NUM_TRANS;

    //randomly choose to credit or debit
    if( random() % 2){
      //credit
      balance = balance + v;
      credits = credits + v;
    }else{
      //debit
      balance = balance - v;
      debits = debits + v;
    }

    pthread_mutex_unlock(&lock); //release lock
  }
  return 0;
}
With this locking strategy, consistency is achieved. All the critical code executes atomically and the output is consistent:
#> ./bank_coarse_lock 3
        Credits:        7247
         Debits:        7462

1000+7247-7462=         785
         Balance:       785
3.2 Fine Locking
While coarse locking is a reasonable choice, it is inefficient. We lose some parallelism because not all parts of the critical section are critical to each other. For example, consider that the variables credits and debits are used exclusively of each other; each thread only performs a credit or a debit, but not both. Maybe it would be worthwhile to do more fine-grained locking.
Consider the following changes:
int balance = INIT_BALANCE;
int credits = 0;
int debits = 0;

pthread_mutex_t b_lock,c_lock,d_lock;

void * transactions(void * args){
  int i,v;
  for(i=0;i<NUM_TRANS;i++){
    //choose a random value
    v = (int) random() % NUM_TRANS;

    //randomly choose to credit or debit
    if( random() % 2){
      //credit
      pthread_mutex_lock(&b_lock);
      balance = balance + v;
      pthread_mutex_unlock(&b_lock);

      pthread_mutex_lock(&c_lock);
      credits = credits + v;
      pthread_mutex_unlock(&c_lock);
    }else{
      //debit
      pthread_mutex_lock(&b_lock);
      balance = balance - v;
      pthread_mutex_unlock(&b_lock);

      pthread_mutex_lock(&d_lock);
      debits = debits + v;
      pthread_mutex_unlock(&d_lock);
    }
  }
  return 0;
}
In this example, we use fine locking because each of the shared variables is independently locked only when used. This allows for more parallelism at the cost of complexity. If we compare the run times of the two strategies, we see that, yes, fine-grained locking is marginally faster, and much faster on larger load sets.
3.3 Deadlocks
One consequence of fine grain locking is the potential for deadlocks. A deadlock occurs when two threads each hold a resource the other is waiting on. Let's look at a naive example:
#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>

pthread_mutex_t lock_a, lock_b;

void * fun_1(void * arg){
  int i;
  for (i = 0; i < 10000; i++){
    pthread_mutex_lock(&lock_a); //lock a then b
    pthread_mutex_lock(&lock_b);

    //CRITICAL SECTION

    pthread_mutex_unlock(&lock_a);
    pthread_mutex_unlock(&lock_b);
  }
  return NULL;
}

void * fun_2(void * arg){
  int i;
  for (i = 0; i < 10000; i++){
    pthread_mutex_lock(&lock_b); //lock b then a
    pthread_mutex_lock(&lock_a);

    //CRITICAL SECTION

    pthread_mutex_unlock(&lock_b);
    pthread_mutex_unlock(&lock_a);
  }
  return NULL;
}

int main(){
  pthread_t thread_1,thread_2;

  pthread_mutex_init(&lock_a, NULL);
  pthread_mutex_init(&lock_b, NULL);

  pthread_create(&thread_1, NULL, fun_1, NULL);
  pthread_create(&thread_2, NULL, fun_2, NULL);

  pthread_join(thread_1, NULL);
  pthread_join(thread_2, NULL);

  return 0;
}
In this code we have two threads and two locks. One thread acquires
lock_a and then
lock_b while the other thread acquires
lock_b
and then
lock_a. If we run this program we find that nothing
happens; the program hangs. This is because of a deadlock.
Consider what happens when the threads run such that one holds
lock_a and the other holds
lock_b; then each thread is waiting on the lock the other thread holds. Both threads are now blocked trying to acquire the resources, and the main thread is blocked trying to join the worker threads. Nothing is running: deadlock.
Part 7: Packages
This is the tutorial number 7 in Golang tutorial series.
What are packages and why are they used?
So far we have seen go programs which have only one file which has a main function with a couple of other functions. In real world scenarios this approach to writing all source code in a single file will not work. It becomes impossible to reuse and maintain code written this way. This is where packages save the day.
Packages are used to organise go source code for better reusability and readability. Packages offer compartmentalisation of code and hence it becomes easy to maintain go applications.
For example let's say we are creating a go image processing application which offers features such as image cropping, sharpening, blurring and color enhancement. One way to organise this application is to group all code related to a feature in its own package. For example cropping can be an individual package, and sharpening can be another. The advantage of this is that the color enhancement feature might need some of the functionalities of sharpening. The color enhancement code can simply import (we will discuss import in a minute) the sharpening package and start using its functionality. This way the code becomes easy to reuse.
We will step by step create an application which calculates the area and diagonal of a rectangle.
We will understand packages better through this application.
main function and main package
Every executable go application must contain a main function. This function is the entry point for execution. The main function should reside in the main package.
The line of code to specify that a particular source file belongs to a package is
package packagename. This should be the first line of every go source file.
Let's get started by creating the main function and main package for our application. Create a folder inside the src folder of the go workspace and name it
geometry. Create a file
geometry.go inside the
geometry folder.
Write the following code in geometry.go
//geometry.go
package main

import "fmt"

func main() {
    fmt.Println("Geometrical shape properties")
}
The line of code
package main specifies that this file belongs to the main package. The
import "packagename" statement is used to import an existing package. In this case we import the
fmt package which contains the Println method. Then there is a main function which prints
Geometrical shape properties
Compile the above program by typing
go install geometry. This command searches for a file with a main function inside the
geometry folder. In this case it finds
geometry.go. It then compiles it and generates a binary named
geometry(
geometry.exe in the case of windows) inside the bin folder of the workspace. Now the workspace structure will be
src
    geometry
        geometry.go
bin
    geometry
Let's run the program by typing
workspacepath/bin/geometry. Replace
workspacepath with the path of your go workspace. This command executes the
geometry binary inside the bin folder. You should get
Geometrical shape properties as the output.
Creating custom package
We will structure the code in such a way that all functionalities related to a rectangle are in the
rectangle package.
Let's create a custom package
rectangle which has functions to determine the area and diagonal of a rectangle.
Source files belonging to a package should be placed in separate folders of their own. It is a convention in Go to name this folder with the same name as the package.
So let's create a folder named
rectangle inside the
geometry folder. All files inside the
rectangle folder should start with the line
package rectangle as they all belong to the rectangle package.
Create a file
rectprops.go inside the rectangle folder we just created and add the following code.
//rectprops.go
package rectangle

import "math"

func Area(len, wid float64) float64 {
    area := len * wid
    return area
}

func Diagonal(len, wid float64) float64 {
    diagonal := math.Sqrt((len * len) + (wid * wid))
    return diagonal
}
In the above code we have created two functions which calculate
Area and
Diagonal. The area of the rectangle is the product of the length and width. The diagonal of the rectangle is the square root of the sum of squares of the length and width. The
Sqrt function in the
math package is used to calculate the square root.
Note that the function names Area and Diagonal start with a capital letter. This is essential and we will explain shortly why this is needed.
Importing custom package
To use a custom package we must first import it.
import path is the syntax to import a custom package. We must specify the path to the custom package with respect to the
src folder inside the workspace. Our current folder structure is
src
    geometry
        geometry.go
        rectangle
            rectprops.go
The line
import "geometry/rectangle" will import the rectangle package.
Add the following code to geometry.go
//geometry.go
package main

import (
    "fmt"
    "geometry/rectangle" //importing custom package
)

func main() {
    var rectLen, rectWidth float64 = 6, 7
    fmt.Println("Geometrical shape properties")
    /*Area function of rectangle package used */
    fmt.Printf("area of rectangle %.2f\n", rectangle.Area(rectLen, rectWidth))
    /*Diagonal function of rectangle package used */
    fmt.Printf("diagonal of the rectangle %.2f ", rectangle.Diagonal(rectLen, rectWidth))
}
The above code imports the
rectangle package and uses the
Area and
Diagonal function of it to find the area and diagonal of the rectangle. The
%.2f format specifier in Printf is to truncate the floating point to two decimal places. The output of the application is
Geometrical shape properties
area of rectangle 42.00
diagonal of the rectangle 9.22
Exported Names
We capitalised the functions
Area and
Diagonal in the rectangle package. This has a special meaning in Go. Any variable or function which starts with a capital letter is an exported name in Go. Only exported functions and variables can be accessed from other packages. In this case we need to access
Area and
Diagonal functions from the main package. Hence they are capitalised.
If the function names are changed from
Area(len, wid float64) to
area(len, wid float64) in
rectprops.go and from
rectangle.Area(rectLen, rectWidth) to
rectangle.area(rectLen, rectWidth) in
geometry.go and if the program is run, the compiler will throw error
geometry.go:11: cannot refer to unexported name rectangle.area. Hence if you want to access a function outside of a package, it should be capitalised.
init function
Every package can contain an
init function. The init function should not have any return type and should not have any parameters. The init function cannot be called explicitly in our source code. The init function looks like below
func init() { }
The init function can be used to perform initialisation tasks and can also be used to verify the correctness of the program before the execution starts.
The order of initialisation of a package is as follows
- Package level variables are initialised first
- init function is called next. A package can have multiple init functions (either in a single file or distributed across multiple files) and they are called in the order in which they are presented to the compiler.
If a package imports other packages, the imported packages are initialised first.
A package will be initialised only once even if it is imported from multiple packages.
Lets make some modifications to our application to understand init functions.
To start with, let's add an init function to the
rectprops.go file.
//rectprops.go
package rectangle

import "math"
import "fmt"

/*
 * init function added
 */
func init() {
    fmt.Println("rectangle package initialized")
}

func Area(len, wid float64) float64 {
    area := len * wid
    return area
}

func Diagonal(len, wid float64) float64 {
    diagonal := math.Sqrt((len * len) + (wid * wid))
    return diagonal
}
We have added a simple init function which just prints
rectangle package initialised
Now let's modify the main package. We know that the length and width of a rectangle should be greater than zero. We will define this check using the init function and package level variables in the
geometry.go file.
Modify the
geometry.go file as shown below,
//geometry.go
package main

import (
    "fmt"
    "geometry/rectangle" //importing custom package
    "log"
)

/*
 * 1. package variables
 */
var rectLen, rectWidth float64 = 6, 7

/*
 * 2. init function to check if length and width are greater than zero
 */
func init() {
    println("main package initialized")
    if rectLen < 0 {
        log.Fatal("length is less than zero")
    }
    if rectWidth < 0 {
        log.Fatal("width is less than zero")
    }
}

func main() {
    fmt.Println("Geometrical shape properties")
    fmt.Printf("area of rectangle %.2f\n", rectangle.Area(rectLen, rectWidth))
    fmt.Printf("diagonal of the rectangle %.2f ", rectangle.Diagonal(rectLen, rectWidth))
}
The following are the changes made to
geometry.go
- rectLen and rectWidth variables are moved to package level from the main function level.
- An init function has been added. The init function prints a log and terminates the program execution if either rectLen or rectWidth is less than zero, using the log.Fatal function.
The order of initialisation of the main package is
- The imported packages are first initialised. Hence rectangle package is initialised first.
- Package level variables rectLen and rectWidth are initialised next.
- init function is called.
- main function is called at last
If you run the program, you will get the following output.
rectangle package initialized
main package initialized
Geometrical shape properties
area of rectangle 42.00
diagonal of the rectangle 9.22
As expected the init function of the rectangle package is called first, followed by the initialisation of the package level variables rectLen and rectWidth. The init function of the main package is called next. It checks whether rectLen and rectWidth are less than zero and terminates if the condition is true. We will learn about
if statement in detail in a separate tutorial. For now you can assume that
if rectLen < 0 will check whether
rectLen is less than 0 and if it is so, the program will be terminated. We have written a similar condition for
rectWidth. In this case both the conditions are false and the program execution continues. Finally the main function is called.
Let's modify this program a bit to learn the use of the init function.
Change the line
var rectLen, rectWidth float64 = 6, 7 in
geometry.go to
var rectLen, rectWidth float64 = -6, 7. We have initialised
rectLen to a negative value.
Now if you run the application, you will see
rectangle package initialized
main package initialized
2017/04/04 00:28:20 length is less than zero
As usual the rectangle package is initialised followed by the package level variables rectLen and rectWidth in the main package. rectLen is negative. Hence when the init function runs next, the program terminates after printing
length is less than zero.
The code is available for download at github.
Use of blank identifier
It is illegal in Go to import a package and not use it anywhere in the code. The compiler will complain if you do so. The reason for this is to avoid bloat from unused packages, which would significantly increase compilation time. Replace the code in
geometry.go with the following,
//geometry.go
package main

import (
    "geometry/rectangle" //importing custom package
)

func main() {
}
The above program will throw error
geometry.go:6: imported and not used: "geometry/rectangle"
But it is quite common to import packages when the application is under active development and use them somewhere in the code later if not now. The
_ blank identifier saves us in those situations.
The error in the above program can be silenced by the following code,
package main

import (
    "geometry/rectangle"
)

var _ = rectangle.Area //error silencer

func main() {
}
The line
var _ = rectangle.Area silences the error. We should keep track of these kinds of error silencers and remove them, including the imported package, at the end of application development if the package is not used. Hence it is recommended to write error silencers at the package level just after the import statement.
Sometimes we need to import a package just to make sure the initialisation takes place even though we do not need to use any function or variable from the package. For example, we might need to ensure that the init function of the rectangle package is called even though we do not use that package anywhere in our code. The _ blank identifier can be used in this case too, as shown below.
package main

import (
    _ "geometry/rectangle"
)

func main() {
}
Running the above program will output
rectangle package initialized. We have successfully initialised the package even though it is not used anywhere in the code.
That's it for packages. Hope you enjoyed reading. Please leave your valuable comments and feedback :).
Next tutorial - if else statement | https://golangbot.com/packages/ | CC-MAIN-2018-43 | en | refinedweb |
This HackerRank problem is meant to be about strings. However, I solved it using Python, and in this case I ended up seeing the two strings no differently than if they were lists of whatever elements.
One aspect I had overlooked initially was that in a string there could be more than one occurrence of a character. To ensure that my code would work correctly I added a test to the one suggested by HackerRank:
def test_provided_1(self):
    self.assertEqual(4, solution('cde', 'abc'))

def test_many_a(self):
    self.assertEqual(3, solution('aaaa', 'a'))

In the one provided, I have to remove 'd' and 'e' from the first string, plus 'a' and 'b' from the second one to get to the same 'c' in both of them, total, 4 eliminations.
In my test, I just have to remove 3 'a' from the first string.
I decided to implement my solution using a dictionary to store the letters in the first string and their numbers. I could have written a piece of code like this:
counter = {}
for ch in a:
    counter[ch] = counter.setdefault(ch, 0) + 1

The handy setdefault() function returns the value for the specified key, or the second parameter instead of the default None if there is no such key in the dictionary.
But the job of counting the number of elements in a collection is so common that the standard Python collections library gives us a class, Counter, that works exactly in this way. So I saved some typing and used it instead.
from collections import Counter

def solution(a, b):
    counter = Counter(a)
    # ...

Now I have to sort of compare this counting with the values stored in the other string. I decided to loop over all the characters in it, and decrease the associated value if the key exists and the value is not zero, otherwise to increase a buffer variable to keep track of all the characters that are in the second string and not in the first one.
    extra = 0  # 1
    for ch in b:
        if counter.get(ch):  # 2
            counter[ch] -= 1
        else:
            extra += 1

1. Number of characters that are in the second string without a match in the first one.
2. Remember that get() on a dictionary returns the value associated with the passed key, or None. Here I decrease the value only if I get a value (not None) and it is different from zero.
Finally, I add up all the values in the counter dictionary, plus the extra count coming from the second string, and I return it.
    return sum(counter.values()) + extra

I pushed both the unit test and the python script to GitHub for full reference.
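Assembling the pieces above, the complete function looks like this:

```python
from collections import Counter


def solution(a, b):
    # count the occurrences of each character in the first string
    counter = Counter(a)
    extra = 0  # characters in b with no match left in a
    for ch in b:
        if counter.get(ch):
            counter[ch] -= 1  # matched: consume one occurrence from a
        else:
            extra += 1
    # leftovers from a, plus the unmatched characters from b
    return sum(counter.values()) + extra


print(solution('cde', 'abc'))  # 4
print(solution('aaaa', 'a'))   # 3
```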
Overview
AROS uses several custom development tools in its build-system to aid developers by providing an easy means to generate custom makefiles for AmigaOS-like components.
The most important ones are:
- MetaMake: A make supervisor program. It can keep track of targets available in makefiles in the subdirectories of a certain root directory. A more in-depth explanation is given below.
- GenMF: (generate makefile) A macro language for makefiles. It allows combining several make rules into one macro, which can simplify writing makefiles.
- Several AROS specific tools that will be explained more when appropriate during the rest of this documentation.
MetaMake
Introduction
MetaMake is a special version of make which allows the build-system to recursively build "targets" in the various directories of a project, or even another project.
The name of the makefiles used is defined in the MetaMake config file and defaults to mmakefile for AROS - so we shall use this name to denote MetaMake makefiles from here on.
MetaMake searches directory trees for mmakefiles - and, for each it finds, processes the metatargets.
You can also specify a program which converts "source" mmakefiles (aptly named mmakefile.src) into proper mmakefiles before MetaMake is invoked on the created mmakefile.
MetaTargets
MetaMake uses normal makefile syntax but gives a special meaning to comment lines that start with #MM. These lines are used to define so-called metatargets.
There exist three ways of defining a metatarget in a makefile:
Real MetaTargets
#MM metatarget : metaprerequisites

This defines a metatarget with its metaprerequisites: when a user asks to build this metatarget, first the metaprerequisites will be built as metatargets, and afterwards the given metatarget. This form also indicates that in this makefile a makefile target is present with the same name.

#MM metatarget : prerequisites

This form indicates that the make target on the next line is also a metatarget, but the prerequisites are not metaprerequisites.

The line for the definition of a metatarget can be spread over several lines if one ends every line with the \ character and starts the next line with #MM.
Virtual MetaTargets
#MM- metatarget : metaprerequisites

This is the same definition as for Real MetaTargets, only now no "normal" make target is present in the makefile with the same name as the metatarget.
How MetaMake works
MetaMake is run with a metatarget to be built specified on the command line.
MetaMake will first build up a tree of all the mmakefiles present in a directory and all its subdirectories (typically starting from the AROS source base directory), autogenerating them where applicable. While doing this it will process the mmakefiles and build a tree of all the defined metatargets and their dependencies.
Next it will build all the dependencies (metaprerequisites) needed for the specified metatarget - and finally the metatarget itself.
Metaprerequisites are metatargets in their own right - and are processed in the same fashion, so that any dependencies they have are also fulfilled.
For each metatarget, a walk through all the directories is done - and in every mmakefile where Real MetaTargets are defined, make is called with the name of the target as a "make target".
Exported variables
When MetaMake calls normal make, it also defines two variables...
$(TOP)    contains the value of the root directory.
$(CURDIR) contains the path relative to $(TOP).
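As a hedged illustration of how these variables can be used (the target name below is invented, not from the AROS tree):

```make
#MM show-where
show-where :
	@echo "root is $(TOP), this mmakefile lives in $(TOP)/$(CURDIR)"
```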
Autogenerating mmakefiles
Another feature of MetaMake is the automatic generation of mmakefiles from source mmakefiles.
When the directory tree is scanned for mmakefiles, files with a .src suffix that are newer than any present mmakefile are processed using a specified script that regenerates the mmakefile from the source mmakefile. The called script is defined in the configuration file.
Examples
The next few examples are taken from the AROS project.
Example 1: normal dependencies
#MM contrib-regina-module : setup linklibs includes contrib-regina-includes
This example says that in this makefile a contrib-regina-module target is present that has to be built, but before building this metatarget the metatargets setup, linklibs, ... have to be built first; i.e. the includes, linklibs, etc. have to be present before this module can be built.
Example 2: metatarget consisting of submetatargets
#MM- contrib-freetype : contrib-freetype-linklib \
#MM                     contrib-freetype-graph \
#MM                     contrib-freetype-fonts \
#MM                     contrib-freetype-demos
This says that the contrib-freetype metatarget consists of building the linklib, graph, fonts and demos parts of freetype. If some extra work needs to be done in the makefile where this metatarget is defined, the definition can start with '#MM ' and a normal make target 'contrib-freetype' has to be present in the makefile.
Also the use of the line continuation for the metatarget definition is shown here.
Example 3: Quick building of a target
#MM workbench-utilities : includes linklibs setup-clock-catalogs
#MM workbench-utilities-quick : workbench-utilities
When a user executes MetaMake with workbench-utilities as an argument, make will be called in all the directories where the metaprerequisites are present in the makefile. This can become quite annoying when debugging programs. When the second metatarget workbench-utilities-quick is defined as shown above, only that target will be built in this directory. Of course the user then has to be sure that the metatargets on which workbench-utilities depends are up to date.
Usage and configuration files
Usage:
mmake [options] [metatargets]
To build mmake, just compile mmake.c. It doesn't need any other files.
mmake looks for a config file mmake.config or .mmake.config in the current directory, for a file named by the environment variable $MMAKE_CONFIG, or for a file .mmake.config in the home directory.
This file can contain the following things:
#
    This must be the first character in a line and begins a comment. Comments are completely ignored by mmake (as are empty lines).

[<name>]
    This begins a config section for the project name. You can build targets for this project by saying name.target.

maketool <tool options...>
    Specifies the name of the tool to build a target. The default is make "TOP=$(TOP)" "CURDIR=$(CURDIR)".

top <dir>
    Specifies the root directory for a project. You will later find this config option in the variable $(TOP). The default is the current directory.

defaultmakefilename <filename>
    Specifies the basename for makefiles in your project. Basename means that mmake will consider other files which have this stem and an extension, too. See the items to generate makefiles for details. The default is Makefile.

defaulttarget <target>
    The name of the default target which mmake will try to make if you call it with the name of the project alone. The default is all.

genmakefilescript <cmdline...>
    mmake will check for files with the basename as specified in defaultmakefilename with the extension .src. If such a file is found, the following conditions are checked: whether this file is newer than the makefile, whether the makefile doesn't exist, and whether the file genmakefiledeps is newer than the makefile. If any of these is true, mmake will call this script with the name of the source file as an extra option, and the stdout of this script will be redirected to defaultmakefilename. If this is missing, mmake will not try to regenerate makefiles.

genmakefiledeps <path>
    This is the name of a file which is considered when mmake tries to decide whether a makefile must be regenerated. Currently, only one such file can be specified.

globalvarfile <path>
    This is a file which contains more variables in the normal make(1) syntax. mmake doesn't know about any special things like line continuation, so be careful not to use such variables later (but they don't do any harm if they exist in the file; you should just not use them anywhere in mmake).

add <path>
    Adds a nonstandard makefile to the list of makefiles for this project. mmake will apply the standard rules to it as if the defaultmakefilename was like this filename.

ignoredir <path>
    Will tell mmake to ignore directories with this name. Try ignoredir CVS if you use CVS to manage your projects' sources.

Any option which is not recognised will be added to the list of known variables (i.e. foo bar will create a variable $(foo) which is expanded to bar).
Example
Here is an example:
# This is a comment
# Options before the first [name] are defaults. Use them for global
# defaults
defaultoption value

# Special options for the project name. You can build targets for this
# project with "mmake name.target"
[AROS]

# The root dir of the project. This can be accessed as $(TOP) in every
# makefile or when you have to specify a path in mmake. The default is
# the current directory
top /home/digulla/AROS

# This is the default name for Makefiles. The default is "Makefile"
defaultmakefilename makefile

# If you just say "mmake AROS", then mmake will go for this target
defaulttarget AROS

# mmake allows to generate makefiles with a script. The makefile
# will be regenerated if it doesn't exist, if the source file is
# newer or if the file specified with genmakefiledeps is newer.
# The name of the source file is generated by concatenating
# defaultmakefilename and ".src"
genmakefilescript gawk -f $(TOP)/scripts/genmf.gawk --assign "TOP=$(TOP)"

# If this file is newer than the makefile, the script
# genmakefilescript will be executed.
genmakefiledeps $(TOP)/scripts/genmf.gawk

# mmake will read this file and every variable in this file will
# be available everywhere where you can use a variable.
globalvarfile $(TOP)/config/host.cfg

# Some makefiles must have a different name than
# defaultmakefilename. You can add them manually here.
#add compiler/include/makefile
#add makefile
A metatarget looks like this: project.target. Example: AROS.setup. If nothing is specified, mmake will make the default target of the first project in the config file. If the project is specified but no target, mmake will make the default target of this project.
GenMF
Introduction
Genmf uses two files for generating a makefile. First is the macro definition file, and second is the source makefile where these macros can be used.
This syntax example assumes you have AROS' sources (either from SVN or downloaded from the homesite), that genmf.py is found in your $PATH, and that $AROSDIR points to the location of AROS' source root (e.g. /home/projects/AROS or alike).

[user@localhost]# genmf.py $AROSDIR/config/make.tmpl mmakefile.src mmakefile

This creates a mmakefile from the mmakefile.src in the current directory.
In general the % character is used as the special character for genmf source makefiles.
After ./configure I run the make command, and it halts with an error from within the genmf.py script that it cannot find some file. The files that are fed to the genmf.py script seem to be lines in the /tmp/genmfxxxx file. The problem is that the lines are not created correctly, so when the lines are fed to the genmf.py script it cannot handle them.
Metamake creates tmpfiles:
./cache.c: strcpy(tmpname, "/tmp/genmfXXXXXX");
MetaMake actually calls genmf.py to generate the genmf file. It is located in bin/$(arch)-$(cpu)/tools.
MetaMake uses time stamps to find out if a mmakefile has changed and needs to be reparsed. For mmakefiles with dynamic targets we would have to avoid that time stamp comparison.
This is, I think, only the case if the metarules change depending on an external config file without the mmakefile itself changing.
But this reminds me of another feature I had in mind for mmake. I would make it possible to have real files as prerequisites of metatargets. This is to avoid make being called unnecessarily in directories. I would introduce a special character to indicate that a metatarget depends on a file; let's take @ and have the following rule:
#MM bar : @foo
This would indicate that for this mmakefile the metatarget 'bar' only has to be built if file foo changes. So if mmake wants to build metatarget 'bar', it would only call make if file foo in the same directory as the mmakefile has changed.
This feature would also be able to indicate whether the metarules have to be rebuilt; I would allocate the special __MM__ metatarget for it. By default the following implicit metarule would always be there:
#MM __MM__ : @mmakefile
But people could add config files if needed:
#MM __MM__ : @mmconffile
Does MetaMake really do variable substitution? Yes, have a look in the var.c file.
The generated mmakefile for Demos/Galaxy still has #MM- demo-galaxy : demo-galaxy-$(AROS_TARGET_CPU) and I think the substitution is done later by GNU Make.
No, for gmake it is just a comment line; it does not know anything about mmake. And the opposite is also the case: mmake does not know anything about gmake; it just reads all the lines starting with #MM. So the next thing does not do what you think it does in a gmake file:
ifeq ($(target), ) #MM includes : includes-here else #MM $(target) : includes-here endif
mmake will see both lines, as it just ignores the if statements! It will complain if it does not know target. That is one of the main reasons I proposed the above feature.
The main feature of mmake is that it allows for a modular directory structure: you can add or delete directories in the build tree, and MetaMake will automatically adapt the metarules and the build itself to the new situation. For example, it would allow checking out only a few subdirectories of the ports directory if one wants to work on one of the programs there.
Macro definition
A macro definition has the following syntax:
%define macroname option1[=[default][\A][\M]] option2[=[default][\A][\M]] ...
...
%end
macroname is the name of the macro. option1, option2, ... are the arguments for the macro. These options can be used in the body of this template by typing %(option1). This will be replaced by the value of option1.
Each argument can be followed by a default value. If no default value is specified, an empty string is taken. Normally no spaces are allowed in the default value of an argument. If spaces are needed, this can be done by surrounding the value with double quotes (").
Also two switches can be given:
\A
    The switch to indicate that a value is always needed: when the macro is instantiated, a value must always be assigned to this argument.

\M
    The switch to turn on multi-word values. This means that all the words following this argument will be assigned to this argument. This also means that after the use of such an argument no other argument can be present, because it would become part of this argument.
Macro instantiation
The instantiation of the macro is done by using the '%' character followed by the name of the macro to instantiate (without round brackets around it):
%macro_name [option1=]value [option2=]value
Two ways are possible to specify value for arguments to a macro:
value
    This will assign the value to the first argument defined for this macro. The second time this format is used, the value will be assigned to the second argument, and so on.

option1=value
    This will assign the given value to the option with the specified name.
When giving values to arguments, double quotes also need to be used if one wants to include spaces in the values of the arguments.
Macro instantiations may be used inside the body of a macro, even of macros that will only be defined later on in the macro definition file.

Examples
FIXME (whole rules to be shown as well as action to be used in make rules)
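The FIXME above notwithstanding, here is a hedged sketch of a macro definition and its instantiation, following only the syntax described in this section (the macro name, rule and file names are invented, not taken from the AROS templates):

```make
%define echofile file=/A msg=hello\M
%(file) :
	@echo "%(msg)" > %(file)
%end

# Instantiation: the bare word is assigned to the first argument (file);
# msg is given by name and, having the \M switch, swallows the rest of
# the line.
%echofile greeting.txt msg=Hello genmf world
```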
AROS Build-System usage
AROS Build-System configuration
Before the build-system can be invoked via make, you will need to run "./configure" to set up the environment for your chosen target platform,
i.e.
./configure --target=pc-i386
This causes the configure script to perform the following operations ...
AROS MetaMake configuration file
[add the default settings for mmake]
Default AROS MetaMake MetaTargets
AROS uses a set of base metatargets to perform all the steps needed to build both the tools used to compile AROS and the components that make up AROS itself.
AROS Build MetaMake MetaTargets
AROS.AROS
AROS.contrib
AROS.development
AROS.bootiso
[list standard metatargets used during the build process]
Special AROS MetaMake MetaTargets
************           denotes a Real MetaTarget
************-setup
************-includes
Default AROS mmakefile Variables
The following variables are defined for use in mmakefiles.
//System related variables
$(ARCH) $(AROS_HOST_ARCH) $(AROS_HOST_CPU)
$(AROS_TARGET_ARCH) $(AROS_TARGET_CPU)
$(AROS_TARGET_SUFFIX) / $(AROS_TARGET_VARIANT)

//Arch specific variables
$(AROS_TARGET_BOOTLOADER)

//Directory related variables
$(TOP) $(CURDIR) $(HOSTDIR) $(TOOLDIR) $(PORTSDIR) $(TARGETDIR)
$(GENDIR) $(OBJDIR) $(BINDIR) $(EXEDIR) $(LIBDIR) $(OSGENDIR) $(KOBJSDIR)
$(AROSDIR) $(AROS_C) $(AROS_CLASSES) $(AROS_DATATYPES) $(AROS_GADGETS)
$(AROS_DEVS) $(AROS_FS) $(AROS_RESOURCES) $(AROS_DRIVERS) $(AROS_LIBS)
$(AROS_LOCALE) $(AROS_CATALOGS) $(AROS_HELP) $(AROS_PREFS) $(AROS_ENVARC)
$(AROS_S) $(AROS_SYSTEM) $(AROS_TOOLS) $(AROS_UTILITIES) $(CONTRIBDIR)
AROS mmakefile.src High-Level Macros
Note: In the definition of the genmf rules, mmake variables are sometimes used as default values for an argument (e.g. dflags=%(cflags)). This is not really possible in the definition file but is done by using text that has the same effect.
Building programs
There are two macros for building programs: %build_progs, which will compile every input file to a separate executable, and %build_prog, which will compile and link all the input files into one executable.
%build_progs
This macro will compile and link every input file to a separate executable and has the following definition:
%define build_progs mmake=/A files=/A \
    objdir=$(GENDIR)/$(CURDIR) targetdir=$(AROSDIR)/$(CURDIR) \
    cflags=$(CFLAGS) dflags=$(BD_CFLAGS$(BDID)) ldflags=$(LDFLAGS) \
    uselibs= usehostlibs= usestartup=yes detach=no
With the following arguments:
mmake=/A
    This is the name of the metatarget that will build the programs.

files=/A
    The basenames of the C source files that will be compiled and linked to executables. For every name present in this list an executable with the same name will be generated.

ldflags=$(LDFLAGS)
    The flags to use when linking the executables. By default the standard AROS link flags will be used.

uselibs=
    A list of static libraries to add when linking the executables. This is the name of the library without the lib prefix or the .a suffix, and without the -l prefix used in the flags for the C compiler. By default no libraries are used when linking the executables.

usehostlibs=
    A list of static libraries of the host to add when linking the executables. This is the name of the library without the lib prefix or the .a suffix, and without the -l prefix used in the flags for the C compiler. By default no libraries are used when linking the executables.

usestartup=yes
    Use the standard startup code for the executables. By default this is yes, and this is also what one wants most of the time. Only disable this if you know what you are doing.

detach=no
    Whether the executables will run detached. Defaults to no.
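As a hedged illustration of how %build_progs might be used in an mmakefile.src (the metatarget and file names below are invented, not from the AROS tree):

```make
#MM test-utilities : includes linklibs
%build_progs mmake=test-utilities files="hello count" \
    targetdir=$(AROSDIR)/Utilities uselibs="arossupport"
```

This would compile hello.c and count.c into two separate executables, hello and count, placed in $(AROSDIR)/Utilities.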
%build_prog
It seems that the %build_prog macro currently always produces stripped binaries, even in a debug build. To work around this problem, I need to define TARGET_STRIP in the following way:

TARGET_STRIP := $(STRIP)
%build_prog mmake="egltest" progname="egltest" files="$(EGL_SOURCES) peglgears" uselibs="GL galliumauxiliary"

Can someone with enough knowledge please fix the macro so that it produces unstripped binaries for debug builds again?
This macro will compile and link the input files to an executable and has the following definition:
%define build_prog mmake=/A progname=/A files=%(progname) asmfiles= \
    objdir=$(GENDIR)/$(CURDIR) targetdir=$(AROSDIR)/$(CURDIR) \
    cflags=$(CFLAGS) dflags=$(BD_CFLAGS$(BDID)) ldflags=$(LDFLAGS) \
    aflags=$(AFLAGS) uselibs= usehostlibs= usestartup=yes detach=no
With the following arguments:
mmake=/A
    This is the name of the metatarget that will build the program.

progname=/A
    The name of the executable.

files=
    The basenames of the C source files that will be compiled and linked into the executable. By default just the name of the executable is taken.

asmfiles=
    The assembler files to assemble and include in the executable. By default no asm files are included in the executable.

ldflags=$(LDFLAGS)
    The flags to use when linking the executable. By default the standard AROS link flags will be used.

uselibs=
    A list of static libraries to add when linking the executable. This is the name of the library without the lib prefix or the .a suffix, and without the -l prefix used in the flags for the C compiler. By default no libraries are used when linking the executable.

usehostlibs=
    A list of static libraries of the host to add when linking the executable. This is the name of the library without the lib prefix or the .a suffix, and without the -l prefix used in the flags for the C compiler. By default no libraries are used when linking the executable.

usestartup=yes
    Use the standard startup code for the executables. By default this is yes, and this is also what one wants most of the time. Only disable this if you know what you are doing.

detach=no
    Whether the executable will run detached. Defaults to no.
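A hedged sketch of a %build_prog call (all names invented for illustration):

```make
#MM demo-myprog : includes linklibs
%build_prog mmake=demo-myprog progname=MyProg \
    files="main gui" uselibs="arossupport"
```

This would compile main.c and gui.c and link them into a single executable called MyProg.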
%build_linklib
Building static linklibraries
Building link libraries is straightforward. A list of files will be compiled or assembled and collected into a link library in a specified target directory.
The definition of the macro is as follows:
%define build_linklib mmake=/A libname=/A files="$(basename $(wildcard *.c))" \
    asmfiles= cflags=$(CFLAGS) dflags=%(cflags) aflags=$(AFLAGS) \
    objdir=$(OBJDIR) libdir=$(LIBDIR)
With the meaning of the arguments as follows:
mmake=/A
    This is the name of the metatarget that will build the linklib.

libname=/A
    The base name of the library to generate. The file that will be generated will be called lib%(libname).a.

files=$(basename $(wildcard *.c))
    The C files to compile and include in the library. By default all the files ending in .c in the source directory will be used.

asmfiles=
    The assembler files to assemble and include in the library. By default no asm files are included in the library.

objdir=$(OBJDIR)
    The directory where to generate all the intermediate files. The default value is $(OBJDIR), which in itself is by default equal to $(GENDIR)/$(CURDIR).

libdir=$(LIBDIR)
    The directory to put the library in. By default the standard lib directory $(LIBDIR) will be used.
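A hedged sketch of a %build_linklib call (metatarget, library and file names invented):

```make
#MM linklibs-mylib : includes
%build_linklib mmake=linklibs-mylib libname=mylib \
    files="init parse" libdir=$(LIBDIR)
```

This would compile init.c and parse.c and collect them into libmylib.a in $(LIBDIR).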
%build_module
Building modules consists of two parts. First is a macro to use in mmakefile.src files. Second is a configuration file that describes the contents of the module.
The mmakefile.src macro
This is the definition header of the build_module macro:
%define build_module mmake=/A modname=/A modtype=/A \
    conffile=%(modname).conf files="$(basename $(wildcard *.c))" \
    cflags=$(CFLAGS) dflags=%(cflags) objdir=$(OBJDIR) \
    linklibname=%(modname) uselibs=
Here is a list of the arguments for this macro:
mmake=/A
    This is the name of the metatarget that will build the module. Also a %(mmake)-quick and a %(mmake)-clean metatarget will be defined.

modname=/A
    This is the name of the module without the suffix.

modtype=/A
    This is the type of the module and corresponds with the suffix of the module. At the moment only library, mcc, mui and mcp are supported. Support for other module types is planned in the future.

conffile=%(modname).conf
    The name of the configuration file. Default is modname.conf.

files="$(basename $(wildcard *.c))"
    A list of all the C source files, without the .c suffix, that contain the code for this module. By default all the .c files in the current directory will be taken.
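A hedged sketch of a %build_module call (metatarget and module names invented):

```make
#MM workbench-libs-sample : includes linklibs
%build_module mmake=workbench-libs-sample \
    modname=sample modtype=library \
    conffile=sample.conf
```

This would build sample.library from the .c files in the current directory, as described by sample.conf.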
The module configuration file
The module configuration file is subdivided into several sections. A section is defined with the following lines:
## begin sectionname
...
## end sectionname
The interpretation of the lines between the ##begin and ##end statements is different for every section. The following sections are defined:
config
    The lines in this section all have the same format:

        optionname string

    with the string running from the first non-whitespace character after optionname to the last non-whitespace character on that line. A list of all the options available:

    basename
        Followed by the base name for this module. This will be used as a prefix for a lot of symbols. By default the modname specified in the makefile is taken, with the first letter capitalized.

    libbase
        The name of the variable to store the library base in. By default the basename will be taken with Base added to the end.

    libbasetype
        The type to use for the libbase internally in the library code. E.g. the sizeof operator applied to this type has to yield the real size of the object. Be aware that it may not be specified as a pointer. By default 'struct LibHeader' is taken.

    libbasetypeextern
        The type to use for the libbase in code using the library externally. By default 'struct Library' is taken.

    version
        The version to compile into the module. This has to be specified as major.minor. By default 0.0 will be used.

    date
        The date that this library was made. This has to have the format DD.MM.YYYY. As a default 00.00.0000 is taken.

    libcall
        The argument passing mechanism used for the functions in this module. It can be either 'stack' or 'register'. By default 'stack' will be used.

    forcebase
        This will force the use of a certain base variable in the static link library for auto opening the module. Thus it is only valid for modules that support auto opening. This option can be present more than once in the config section, and then all these bases will be in the link library. By default no base variable will be present in the link library.

cdef
    In this section all the C code has to be written that declares all the types of the arguments of the functions listed in the functionlist section. All valid C code is possible, including the use of #include.

functionlist
    In this section all the functions externally accessible by programs are listed.
For stack based argument passing, only a list of the function names has to be given. For register based argument passing, the names of the registers have to be given between round brackets. If you have a function foo with the first argument in D0 and the second argument in A0, it gives the following line in the list: foo(D0,A0)
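Drawing only on the sections and options described above, a minimal configuration file for a hypothetical sample.library could look like this (all names and values invented):

```
##begin config
basename Sample
version 41.0
libcall register
##end config

##begin cdef
#include <exec/types.h>
##end cdef

##begin functionlist
SampleAdd(D0,D1)
##end functionlist
```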
%build_module_macro
Building modules (the legacy way)
Before the %build_module macro was developed, a lot of code had already been written. There, a mixture of macros was used in the mmakefiles, and they were quite complicated. To clean up these mmakefiles without needing to rewrite too much of the code itself, a second genmf macro was created to build modules that were written using the older methodology. This macro is called build_module_macro. For writing new modules, people should consider this macro deprecated and only use it when %build_module doesn't yet support the kind of module they want to create.
The mmakefile.src macro
This is the definition header of the build_module_macro macro:
%define build_module_macro mmake=/A modname=/A modtype=/A \
    conffile=%(modname).conf initfile=%(modname)_init \
    funcs= files= linklibfiles= cflags=$(CFLAGS) dflags=%(cflags) \
    objdir=$(OBJDIR) linklibname=%(modname) uselibs= usehostlibs= \
    genfunctable= genincludes= compiler=target
Here is a list of the arguments for this macro:
mmake=/A
    This is the name of the metatarget that will build the module. It will define that metatarget but won't include any metaprerequisites. If you need these you can add them yourself with an extra #MM metatargets : ... line. Also a %(mmake)-quick and a %(mmake)-clean metatarget will be defined.

modname=/A
    This is the name of the module without the suffix.

modtype=/A
    This is the type of the module and corresponds with the suffix of the module. It can be one of the following: library, gadget, datatype, handler, device, resource, mui, mcc, hidd.

conffile=%(modname).conf
    The name of the configuration file. Default is modname.conf.

funcs=
    A list of all the source files with the .c suffix that contain the code for the functions of the module. Only one function per C file is allowed, and the function has to be defined using the AROS_LHA macros.

files=
    A list of all the extra files with the .c suffix that contain the extra code for this module.

initfile=%(modname)_init
    The file with the init code function.

usehostlibs=
    A list of static libraries of the host to add when linking the module. This is the name of the library without the lib prefix or the .a suffix, and without the -l prefix used in the flags for the C compiler. By default no libraries are used when linking the module.

genfunctable=
    Bool that has to have a value of yes or no, or be left empty. This indicates if the functable needs to be generated. If empty, the functable will only be generated when funcs is not empty.

genincludes=
    Bool that has to have a value of yes or no, or be left empty. This indicates if the includes need to be generated. If empty, the includes will only be generated for a library, a gadget or a device.

compiler=target
    Indicates which compiler to use during compilation. Can be either target or host, to use the target compiler or the host compiler. By default the target compiler is used.
The module configuration file
For the build_module_macro two files are used. First is the module configuration file (modname.conf or lib.conf) and second is the headers.tmpl file.
The module's config file is a file with a number of lines with the following syntax:
name <string>
    Init the various fields with reasonable defaults. If <string> is XXX, then this is the result:

        libname         xxx
        basename        Xxx
        libbase         XxxBase
        libbasetype     XxxBase
        libbasetypeptr  XxxBase *

    Variables will only be changed if they have not yet been specified.

libname <string>
    Set libname to <string>. This is the name of the library (i.e. you can open it with <string>.library). It will show up in the version string, too.

basename <string>
    Set basename to <string>. The basename is used in the AROS_LHx macros in the location part (last parameter) and to specify defaults for libbase and libbasetype in case they have no value yet. If <string> is xXx, then libbase will become xXxBase and libbasetype will become xXxBase.

libbase <string>
    Defines the name of the library base (i.e. SysBase, DOSBase, IconBase, etc.). If libbasetype is not set, then it is set to <string>, too.

libbasetype <string>
    The type of libbase (with struct), i.e. struct ExecBase, struct DosLibrary, struct IconBase, etc.

libbasetypeptr <string>
    Type of a pointer to the libbase (e.g. struct ExecBase *).

version <version>.<revision>
    Specifies the version and revision of the library. 41.0103 means version 41 and revision 103.

copyright <string>
    Copyright string.

define <string>
    The define to use to protect the resulting file against double inclusion (i.e. #ifndef <string>...). The default is _LIBDEFS_H.

type <string>
    What kind of library is this? Valid values for <string> are: device, library, resource and hidd.

option <string>...
    Specify an option. Valid values for <string> are:

    o noexpunge
        Once the lib/dev is loaded, it can't be removed from memory. Be careful with this option.
    o rom
        For ROM based libraries. Implies noexpunge and unique.
    o unique
        Generate unique names for all external symbols.
    o nolibheader
        We don't want to use the LibHeader prefixed functions in the function table.
    o hasrt
        This library has resource tracking.
You can specify more than one option in a config file and more than one option per option line. Separate options by space.
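A minimal legacy config file using only the options documented above might look like this (the values are invented for illustration):

```
name sample
version 41.0
type library
option noexpunge unique
```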
The header.tmpl file
Contrary to the %build_module macro, for %build_module_macro the C header information is not included in the configuration file; instead an additional file is used with the name headers.tmpl. This file has different sections, where each of the sections will be copied into a certain include file that is generated when the module is built. A section has a structure as follows:
##begin sectionname
...
##end sectionname
With sectionname one of the following choices:
* defines
* clib
* proto
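A hedged sketch of a headers.tmpl (the contents are invented; only the section structure comes from the text above):

```
##begin defines
#define SAMPLE_MAGIC 42
##end defines

##begin clib
LONG SampleAdd(LONG a, LONG b);
##end clib

##begin proto
#include <clib/sample_protos.h>
##end proto
```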
%build_archspecific
Compiling arch and/or CPU specific files
In the previous paragraphs the method was explained of how a module can be built with the AROS genmf macros. Sometimes one wants to replace certain files in a module with an implementation only valid for a certain arch or a certain CPU.

The macro definition
Arch specific files are handled by the macro called %build_archspecific, which has the following header:
%define build_archspecific mainmmake=/A maindir=/A arch=/A files= asmfiles= \
    cflags=$(CFLAGS) dflags=%(cflags) aflags=$(AFLAGS) compiler=target
And the explanation of the argument to this macro:
mainmmake=/A
    The mmake of the module from which one wants to replace files, or to which to add additional files.

maindir=/A
    The directory where the object files of the main module are stored. This is only the path relative to $(GENDIR). Most of the time this is the directory where the source files of the module are stored.

arch=/A
    The architecture for which these files need to be built. It can have three different forms: ARCH-CPU, ARCH or CPU. For example, when linux-i386 is specified these files will only be built for the linux port on i386. With ppc they will be built for all ppc processors, and with linux they will be built for all linux ports.

files=
    The basenames of the C source files to replace or add to the module.

asmfiles=
    The basenames of the asm source files to replace or add.

aflags=$(AFLAGS)
    The flags to use when assembling the asm files. By default the standard AROS aflags (the $(AFLAGS) make variable) are taken. This also means that some flags can be added by assigning these to the SPECIAL_AFLAGS make variable before using this macro.

compiler=target
    Indicates which compiler to use when compiling C source files. Can be either target or host, to use the target compiler or the host compiler. By default the target compiler is used.
%rule_archalias
Code shared by different ports
A second macro called %rule_archalias allows one to create a virtual architecture. Code for that virtual architecture is shared between several architectures. Most likely this is used for code that uses an API that is shared between several architectures, but not all of them.
The macro has the following header:
%define rule_archalias mainmmake=/A arch=/A alias=/A
With the following arguments
mainmmake=/A
    The mmake of the module from which one wants to replace files, or to which to add additional files.

arch=/A
    The arch one wants to make an alias from.

alias=/A
    The arch one wants to alias to.
Examples
1. This is an extract from the file config/linux/exec/mmakefile.src that replaces the main init.c file from exec with a Linux-specialized one:
%build_archspecific \
    mainmmake=kernel-exec maindir=rom/exec arch=linux \
    files=init compiler=host
2. For the dos.library some arch-specific files are grouped together in the unix arch. The following lines are present in several mmakefiles to make this possible.
In config/linux/mmakefile.src:
%rule_archalias mainmmake=kernel-dos arch=linux alias=unix
In config/freebsd/mmakefile.src:
%rule_archalias mainmmake=kernel-dos arch=freebsd alias=unix
And finally in config/unix/dos/mmakefile.src:
%build_archspecific \
    mainmmake=kernel-dos maindir=rom/dos \
    arch=unix \
    files=boot \
    compiler=host
AROS mmakefile.src Low-Level MacrosEdit
LibrariesEdit
A simple library that uses a custom suffix (.wxt) returns TRUE in its init function, but the Open code never gets called and OpenLibrary() fails (the init function does get called, though). With a .conf file that has no ##functionlist section I get the error: In readref: Could not open (null)
Genmodule tries to read a ref file when no ##functionlist section is available. After adding a dummy function to the .conf file it worked for me. Take care: I haven't added any flags to avoid creating header files and such. How to deal with library base pointers in plug-ins when you call library functions?
Use only one function, called to make the "plugin" register all its hooks with Wanderer. Iterate through the plugin directory and, for each file ending in ".wxt", create an internal plugin structure in which the pointer to the libbase of the OpenLibrary()'d plugin is stored. After enumerating the plugins, iterate the list of plugin structs and call the single library function, which causes them all to register with Wanderer. The problem was that some of the struct Library fields had been used (lib_Node.ln_Name was the culprit).
We should remove the dos.c, intuition.c, etc. files with hardcoded version numbers from autoinit and replace them with -ldos -lintuition inside the gcc specs file. This would avoid starting programs on older versions of libraries. If an older version suffices, some __xxx_version global can be defined in the program code to enable this. We could also provide, based on the info described below, exec_v33 and exec_v45 link libraries that would also make sure no function of a newer version is used. A very clean solution to get the desired effect.
-noarosc mentions checking the specs file to find out about it, but there is nothing related in the specs file. This flag was added to disable automatic linking of arosc to all libraries. It was used in the build_library macro - check V0. Automatic linking of arosc.library (which had per-task context) to other libraries (which had global context) was a very bad thing: "C standard library" objects belonging to a global-context library were allocated in the opening task's context. When the task exited and the global-context library did not, the global-context library was using "freed" memory.
A note to any of you wanting to upgrade to Ubuntu 12.10, or any distribution that uses gcc 4.7. There is an issue (bug? misfeature?) in gcc 4.7 where the '-specs /path/to/spec/override' is processed *after* gcc checks that it has been passed valid arguments. This causes gcc to fail with the error:
gcc-4.7: error: unrecognized command line option "-noarosc"
when it is used to link programs for the x86 and x86_64 targets if you are using the native host's compiler (for example, when compiling for linux-x86_64 hosted). Please use gcc-4.6 ("export CC=gcc-4.6") for hosted builds until further notice (still valid as of March 2013).
Per taskEdit
There are other things for which arosc.library needs to be per-task based: autoclosing of open files and autofreeing of malloc'd memory when a program exits; a per-task errno and environ variable that can be changed by calling library functions.
regina.library also does that, by linking with arosc_rel. It needs some more documentation to make it usable by other people. You can grep for aroscbase inside the regina source code to see where it is used. regina.library and arosc.library are per-task libraries. Each time regina.library is opened it also opens arosc.library, and it then gets the same libbase as the program that uses regina.library.
By linking with arosc_rel and defining aroscbase_offset, arosc.library functions called from regina.library will be called with the arosc libbase stored in regina's own libbase, and the latter is different for each task that has opened regina.library.
The AROS_IMPORT_ASM_SYM of aroscbase in the startup section of regina.conf ensures that the arosc.library init functions are called even if the program that uses regina.library does not use an arosc.library function itself and normally would not auto-open it.
The problem is that both the bz2 and z libraries use stdio functions. arosc.library uses POSIX file descriptors, which are of type int, to refer to files. The same file descriptor will point to different files in different tasks; that's why arosc.library is a pertask library. A FILE * pointer internally stores a file descriptor that then links to the file.
Now bz2 and z also use stdio functions, so they too need a different view of the file descriptors depending on which program the functions are called from. That's why bz2 and z also become pertask libraries.
It breaks POSIX compatibility to use a type other than int for file descriptors. Would a better solution be to assign a globally unique int to each file descriptor, and thus avoid the need to make arosc.library a per-task library? A far simpler solution: allocate all DOS FileHandles and FileLocks from MEMF_31BIT. Then we can be assured that their BPTRs fit into an int.
int open(const char *path, int flags, int mode)
{
    BPTR ret;
    ULONG rw = ((flags & O_READ) && !(flags & O_WRITE)) ? MODE_OLDFILE : MODE_NEWFILE;

    ret = Open(path, rw);
    if (ret == BNULL) {
        IoErr_to_errno(IoErr());
        return -1;
    }
    return (int)(uintptr_t)ret;
}

void close(int fd)
{
    Close((BPTR)(uintptr_t)fd);
}

static inline BPTR ftob(int fd)
{
    return (fd == 0) ? Input() :
           (fd == 1) ? Output() :
           (fd == 2) ? ErrorOutput() :
           (fd < 0)  ? BNULL :
           ((BPTR)(uintptr_t)fd);
}

int read(int fd, void *buff, size_t len)
{
    int ret;

    ret = Read(ftob(fd), buff, len);
    if (ret < 0)
        IoErr_to_errno(IoErr());
    return ret;
}
You will most likely kill the 64-bit Darwin hosted target. AFAIR it has 0 (zero) bytes of MEMF_31BIT memory available.
Must modules which use pertask libraries themselves be implemented as pertask libraries? Is it a bug or a feature that I now get the error about missing symbolsets handling? You will now see more verbose errors for missing symbol sets, for example:
Undefined symbol: __LIBS__symbol_set_handler_missing
Undefined symbol: __CTORS__symbol_set_handler_missing
By linking with jpeg and arosc, instead of jpeg_rel and arosc_rel, it was pulling in the PROGRAM_ENTRIES symbolset for arosc initialization. Since jpeg.datatype is a library, not a program, the PROGRAM_ENTRIES was not being called, and some expected initialization was therefore missing.
It is the ctype changes that are causing the problem. This code now uses the ADD2INIT macro to add something to the initialization of the library. As you don't handle these init sets in your code, it gives an error. You can for now use -larosc.static -larosc, or implement init set handling yourself.
The reason for the ctype change is that in the future we may want to have locale handling in the C library, so toupper()/tolower() may differ between locales. This was not possible with the ctype stuff in the link lib. Ideally, in the source code, sqlite3-aros.c would be replaced with sqlite3.conf and genmodule would be called from makefile-new.
*Strongly* recommend that you *not* use %build_module_simple for pertask/peropener libraries for now. There is a PILE of crap that genmodule needs to do *just*exactly*right* to get them to work, and that pile is still in flux at the moment.
Use %build_module, and add additional initialization with the ADD2*() family of macros.
If you insist on %build_module_simple, you will need to link explicitly with libautoinit.
To handle per-task stuff manually:
- LibInit: call AllocTaskStorageSlot() to get a task storage slot, and save it in your global base.
- LibExpunge: FreeTaskStorageSlot() the slot.
- LibOpen: use SetTaskStorageSlot() to put your task-specific data in the task's slot.
- LibClose: set the task's storage slot to NULL.
You can get the task-specific data in any of your routines using GetTaskStorageSlot().
if you're not using the stackcall API, that's the general gist of it.
would recommend that you use the static libraries until the pertask/peropener features have stabilized a bit more. You can always go back to dynamic linking to pertask/peropen libs later.
You should be able to use arosc.library without needing to be pertask. Things get more complicated if code in the library uses file handles, malloc, errno or similar things.
Is the PROGRAM_ENTRIES symbolset correct for arosc initialization then, or should it be in the INIT set? If so, move arosc_startup.c to the INIT set.
Think about datatypes. Zune (muimaster.library) caches datatype objects. Task A may be the one triggering NewDtObject(). Task B may be the one triggering DisposeDTObject().
NewDTObject() does an OpenLibrary() of, say, png.datatype. DisposeDTObject() does a CloseLibrary() of, say, png.datatype.
If png.datatype uses some pertask z.library, that's a problem, isn't it? As png.datatype is not peropener and is linked with arosc, there should only be a problem when png.datatype is expunged from memory, not when it is opened or closed. It will also use the arosc.library context from the task that calls the Init function vector of png.datatype, and it will only be closed when the Expunge vector is called.
relbaseEdit
stackcall/peropener
- library.conf: relbase FooBase -> rellib foo
- rellib working for standard and peropener/pertask libraries
- <proto/foo.h> automatically will use <proto/foo_rel.h> if 'rellib foo' is used in the library's .conf
- "uselibs" doesn't need to manually specify rellib libraries
arosc_rel.a is meant to be used from shared libraries, not from normal programs. Auto-opening of it is also not finished; manual work is needed ATM.
z_au, png_au, bz2_au, jpeg_au and expat_au now use the relbase subsystem. The manual init-aros.c stub is no longer needed. Currently, to use relative libraries in your module, you must:
- Enable 'options pertaskbase' in your library's .conf
- Add 'relbase FooBase' to your library's .conf for each relative library you need.
- Make sure to use the '<proto/foo_rel.h>' headers instead of '<proto/foo.h>'
- Link with 'uselibs=foo_rel'
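Putting those four steps together, a hypothetical module .conf might contain entries like the following. The names (FooBase, ZBase) are illustrative only, not taken from the tree; the keywords are the ones listed above.

```
##begin config
libbase FooBase
options pertaskbase
relbase ZBase
##end config
```

The module's sources would then include <proto/z_rel.h> instead of <proto/z.h>, and its mmakefile would link with uselibs="z_rel".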
I can't find a valid way to implement peropener libraries with 'stack' functions without a real ELF dynamic linker (i.e. ld-linux.so). The inherent problem is determining where the 'current' library base is when a stack function is called.
(In the following examples, assume stack.library is a peropener library, and StackFunc() is a stack style function in that library. plugin.library uses stack.library)
Example 1 - Other libraries doing weird things behind your back
extern struct Library *StackBase; /* set up by autoinit */

void foo(void)
{
    struct Library *PluginBase;

    StackFunc(); // Called with expected global StackBase

    PluginBase = OpenLibrary("plugin.library", 0);
    /* Note that plugin.library did an OpenLibrary() of "stack.library".
     * In the current implementation, this now sets the taskslot of
     * stack.library to the new per-opener stack.library base. */

    StackFunc(); // Unexpectedly called with the new stack.library base!!!

    CloseLibrary(PluginBase);
    /* The CloseLibrary() reset the taskslot to the old base */

    StackFunc(); // Back to the old base
}
OK, to fix that issue, let's suppose we use a stub wrapper that sets the taskslot to the (global) library base. This was no problem with the old implementation: StackBase was passed in a scratch register to StackFunc() each time it was called, and that base was then used.
Example 2 - Local vs Global bases
extern struct Library *StackBase; /* set up by autoinit */

void bar(void)
{
    StackFunc(); // Called as expected

    {
        struct Library *StackBase = OpenLibrary("stack.library", 0);
        StackFunc(); // WTF! linklib wrapper used the *global* StackBase, not the local one!
        CloseLibrary(StackBase);
    }

    StackFunc(); // Works as expected
}
Hmm. Ok, that behavior is going to be a little weird to explain to developers. I don't see the need to support local bases.
Example 3 - Callback handlers
extern struct Library *StackBase; /* set up by autoinit */

const struct handler {
    void (*dostack)(void);
} handler = {
    .dostack = StackFunc
};

void bar(void)
{
    /* Who knows what base this is called with?
     * It depends on a number of things; it could be the global StackBase,
     * or the most recently OpenLibrary()'d stack.library base. */
    handler.dostack();
}
Function pointers to functions in a peropener library may be a problem but is it needed ?
All in all, until we have either
- a *real* ELF shared library subsystem
or
- a real use case for peropener libraries.

In the C library split, arosstdc.library has a peropener base. The reasoning is that you may sometimes want to do a malloc() in a library whose memory is not freed when the Task that happened to call the function exits. Say, a picture-caching library that uses ported code which internally uses malloc(). If you have a pertask library, the malloc() will allocate memory on the Task that is currently calling the library, and this memory will disappear when that task quits (it should do free() prior to exit). Before your change the caching library could just link with libarosstdc.a (and not libarosstdc_rel.a) and it worked.
An idea could be to either link (*) or unlink (**) the malloc to a given task, depending on where it is called from (within the library or not). No, the whole point is to have the malloc'd memory _not_ attached to a task, so a cached image can be used from different tasks even if the first task has already died.
staticEdit
Call the static link library for the static version differently for the few shared libraries that need it, e.g. libz_static.a. Ideally all code should just work with the shared library version.
Module              Link Library           uselibs=
------              ------------           --------
Foo.datatype    =>  libfoo.datatype.a  =>  foo.datatype
Foo.library     =>  libfoo.a           =>  foo
foo (static)    =>  libfoo.static.a    =>  foo.static

And the 'misc static libs' (libamiga.a, libarossupport.a, etc.):

libmisc.a       =>  libmisc.a          =>  misc
usestartup=no and the '-noarosc' LDFLAG both imply arosc.static (it doesn't hurt to link it, and if you really want arosc.library, that will preempt arosc.static)
Again, this will make -lz not link with the shared library stubs. IMO uselibs=z should use the shared library by default.
'uselibs="utility jpeg.datatype z.static arossupport"' method.
If there's a dynamic version of a library, it should always be used: static linking of libraries should be discouraged for all the usual reasons, e.g. the danger of embedding old bugs (not just security holes), bloat, etc. I don't see the need for a -static option (or any other way to choose between static and dynamic libraries).
MakedependEdit
The AROS build system generates, for each .c file, a .d file in which its includes are listed. The .c file is recompiled when any of the includes changes. Remember that AROS is an OS under development, so we often make changes to the core header files. If this makedepend were not done, programs would not be rebuilt when changes are made to AROS libraries or other core code. OK, so it's basically creating the dependencies of the .o.
mmakefileEdit
We do get an error from it, so something is in fact going wrong. But what is?
Probably a hacky mmakefile, so that an include file is not found during makedepend but is found during compilation; or maybe a wrong dependency, so it is not guaranteed that the include file is there during makedepend. And I do think it would be better if the build stopped when such an error occurs.
configuration filesEdit
We are talking about configuration files for modules like this: rom/graphics/graphics.conf.
I have been thinking about similar things, but first I would like to convert our proprietary .conf format to XML. Manually writing file parsers is so passé :)
Uhh.. I have no objection to using a 'standard' parser, but I have to vote no on XML *in specific*.
JSON or YAML (summaries of both are on Wikipedia) would be better choices, since they are much more human-readable, but semantically equivalent to XML.
I agree that XML is not the easiest format to edit in a text editor and is quite bloated. On the other hand, it is ubiquitous in scripting and programming languages, in text editors and in IDEs. I also like that the validity of an XML file can be checked with a schema file, which can also guide the editor. There are also tools to easily convert XML files based on such a schema, etc. It does not matter what format it is in, but it should take as much coding as possible away from the (genmodule) programmer.
Another improvement over XML could be the inclusion of literal code. Currently some literal code snippets are included in .conf files, and in XML they would need some character encoding. How is this for JSON or YAML?
YAML supports Unicode internally. I don't know how well that could be ported to AROS, though, since AROS doesn't have Unicode support yet. JSON is based on JavaScript notation, and YAML 1.2 can import JSON files since it is a complete superset of JSON. YAML's only 1.2 implementation is in C++, using CMake as a build script creator. If we use the C implementation of libyaml, it's only YAML 1.1 compliant and loses the backward compatibility with JSON.
Any data language can be checked against a scheme; it's mostly a matter of writing out the schemes to check against. You can, but my question is whether the tools exist. From the second link you provided: "There are a couple of downsides to YAML: there are not a lot of tools available for it and it's also not very easy to validate (I am not aware of anything similar to a DTD or a schema)". I find validation/syntax checking as important as human readability. Syntax checking is in the parsing in all four cases. The validation XML can do is whether the document conforms to the parsing and whether it conforms to a specific scheme. YAML and JSON are specifically intended for structured data, and I guess my example is too, so the equivalent XML scheme would check whether the content was correctly structured for structured data. The other three don't need that, as anything they parse is by definition structured data.
All four have the same solution: they are all essentially tree builders, and you can walk the tree to see if each node conforms to your content scheme. The objective is to use a defined schema/DTD for the files describing a library. Text editors that understand schemas can then let you add only fields that are valid by the schema. So this schema lets everyone validate whether an XML file is a valid XML library description file; they can use standard tools for that.
AFAICS JSON and YAML parsers only validate if the input file is a valid JSON/YAML file, not that it is a valid JSON/YAML library description file. AFAICS no such tools exist for these file formats.
ETask Task StorageEdit
__GM_* functions
- __GM_BaseSlot: externally visible slot ID (for the AROS_RELLIBFUNCSTUB() assembly routines).
- __GM_SetBase_Safe: Set (and reallocate) task storage. Static function, only called in a library's InitLib() and OpenLib() code.
- __GM_GetBase_Safe: Get task storage slot. This is the 'slow' version of __GM_GetBase(), which calls Exec/GetTaskStorageSlot(). Returns NULL if the slot does not exist or is unallocated.
- __GM_GetBase: Get task storage slot (unsafe). This is the 'fast' version, which does not need to perform any checking. This function is provided by the CPU-specific AROS_GM_GETBASE() macro (if defined). The fallback is the same implementation as __GM_GetBase_Safe.
- __AROS_GM_GETBASE: Fast assembly 'stub' for getting the relbase. Designed to be used in the AROS_RELLIBFUNCSTUB() implementation. Does not do any sanity checking. Guaranteed to be run only if (a) InitLibrary() or OpenLibrary() has already been called in this ETask context, or (b) this ETask is a child of a parent who has opened the slot's library. I can generate implementations of this for arm, m68k and i386, but I want the location of TaskStorage to be agreed upon before I do that work and testing.
- AROS_GM_GETBASE(): Generates a C function wrapper around the fast stub.
Genmodule no longer has to have internal understanding of where the TaskStorage resides. All of that knowledge is now in exec.library and the arch/*-all/include/aros/cpu.h headers.
Location of the TaskStorage slots
It was important to me that the address of the ETask does not change. For example, it would be pretty bad if code like this broke:
struct ETask *et = FindTask(NULL)->tc_UnionETask.tc_ETask;
...
UnzipFile("foo.zip");  /* <= opens z_au.library, slots reallocated */
...
if (et->Parent) {      /* <= ARGH! et was freed! */
    ...
Also, I wanted to minimize the number of places that need to be modified if the TaskStorage location needed to be moved (again).
et_TaskStorage is automatically resized by Exec/SetTaskStorageSlot() as needed, and a new ETask's et_TaskStorage is cloned from its parent, if the parent was also an ETask with et_TaskStorage. What I wanted to say here is that some overhead may be acceptable for SetTaskStorageSlot() if properly documented, e.g. not to call the function in time-critical paths.
You clone the parent's TaskStorage when creating a subtask, as before. This may be acceptable if it is documented that a slot allocated in the parent may not be valid in the child when it was allocated after the child was created. For other use cases I think it is acceptable to require that a Task first does a SetTaskStorageSlot() before getting the value.
Auto generation of oop.libraryEdit
I updated genmodule to be capable of generating interface headers from the foo.conf of a root class, and have tested it by updating graphics.hidd to use the autogenerated headers.
Hopefully this will encourage more people to use the oop.library subsystem, by making it easier to create the necessary headers and stubs for an oop.library class interface.
Note that this is still *completely optional*, but is encouraged.
There are plans to extend this to generating Objective-C interfaces in the future, as well as autoinit and relbase functionality.
This allows a class interface to be defined, and will create a header file in $(AROS_INCLUDES)/interface/My_Foo.h, where 'My_Foo' is the interface's "interfacename". In the future, this could be extended to generate C++ pure virtual class headers, or Objective C protocol headers.
The header comes complete with aMy_Foo_* attribute enums, pMy_Foo_* messages, moMy_Foo method offsets, and the full assortment of interface stubs.
To define a class interface, add to the .conf file of your base class:
##begin interface
##begin config
interfaceid my.foo
interfacename My_Foo
methodstub myFoo            # Optional, defaults to interfacename
methodbase MyFooBase
attributebase MyFooAttrBase
##end config
##begin attributelist
ULONG FooType   # [ISG] Type of the Foo
BOOL IsBar      # [..G] Is this a Bar also? <- comments are preserved!
##end attributelist
##begin methodlist
VOID Start(ULONG numfoos)   # This comment will appear in the header
BOOL Running()
.skip 1                     # BOOL IsStopped() Disabled obsolete function
VOID KillAll(struct TagItem *attrList)
##end methodlist
##end interface
DocumentationEdit
It would be nice if we could just upload the diff (maybe as a zip file) and then have the patching done automatically.
If you have a local copy of the whole website, you can update only the file(s) that are changed with a rsync-type script (maybe rsync itself works for the purpose).
MiscEdit
# Your C++ files
CXX_FILES := main.cpp debug.cpp subdir/module.cpp

# subdir slashes are replaced by three underscores
CXX_OBJS  := $(addprefix $(GENDIR)/$(CURDIR)/, $(addsuffix .o, $(subst /,___,$(CXX_FILES))))
CXX_FLAGS := -W -Wall -Wno-long-long -fbounds-check
CXX_CC     = $(TOOLDIR)/crosstools/$(AROS_TARGET_CPU)-aros-g++
CXX_DEPS  := $(patsubst %.o,%.d,$(CXX_OBJS))

$(CXX_DEPS):
	@echo Makedepend $(patsubst %.d,%.cpp,$(subst ___,/,$(notdir $@)))...
	@$(CXX_CC) $(CXX_FLAGS) -MM -MT $(patsubst %.d,%.o,$@) -o $@ $(patsubst %.d,%.cpp,$(subst ___,/,$(notdir $@)))
	@echo $@: $(patsubst %.d,%.cpp,$(subst ___,/,$(notdir $@))) >>$@

-include $(CXX_DEPS)

$(CXX_OBJS):
	%compile_q \
	    cmd=$(CXX_CC) \
	    opt=$(CXX_FLAGS) \
	    from="$(patsubst %.o,%.cpp,$(subst ___,/,$(notdir $@)))" \
	    to=$@
- Make sure your target depends on both deps and objs:

emumiga-library: $(CXX_DEPS) $(CXX_OBJS)
The AROS build system.
Even if it's not specific to a particular platform, the code in arch/common is hardware dependent, whereas the code in rom/ and workbench/ is supposed to be non-hardware-specific. This has been discussed before when you moved other components (e.g. ata.device) from arch/common to rom/devs. IIRC you accepted that that move was inappropriate in retrospect (but didn't undo it).
Having said that, arch/all-pc might be a good place for components shared between i386-pc and x86_64-pc such as the timer HIDD. On further inspection it seems that most drivers are already in workbench/hidds.
IntroductionEdit
The AROS build system is based around the GNU toolchain. This means we use gcc as our compiler, and the build system needs a POSIX environment to run.
Currently AROS has been successfully built using the following environments:
- Linux, various architectures and distributions. This has been, for a long time, a primary development platform. Most of our nightly builds are running under Linux.
- MacOS X (more technically known as Darwin).
- Cygwin, running Windows.
- MinGW/MSYS, running on Windows (both 32-bit and 64-bit versions of MinGW have been tested).
Of these two Windows environments, MinGW is the preferred one because of its significantly faster operation (compared to Cygwin). There is, however, a known problem: if you want to build the native port, GRUB2 can't be built. Its own build system is currently incompatible with MinGW and will fail. You can work around this by using the --with-bootloader=none argument when configuring AROS. This will disable building the primary bootloader. You can live with that perfectly well if you already have GRUB installed.
Running on a host whose binary format is different from ELF (i.e. Darwin and Windows) requires you to use a native AROS-targeted crosstoolchain. It can be built together with AROS; however, using a standalone preinstalled toolchain significantly shortens the build time and saves drive space. A pretty good set of prebuilt toolchains for Darwin and Windows can be obtained from AROS Archives.
Cross-compiling a hosted version of AROS additionally requires you to have a second crosstoolchain, targeted to what will be your host. For example, if you're building Windows-hosted AROS under Linux, you'll need a Windows-targeted crosstoolchain. Because of this, building a hosted version is best done on the same system it will run on.
In the past, configure found e.g. i386-elf-gcc etc. on the path during a cross-compile without passing special options. I'd like to retain that capability. That *should* still work if you pass in --disable-crosstools.
Remember, --enable-crosstools is the default now - and it would be silly to use the external crosstools if AROS is just going to build its own anyway.
For the kernel tools though, yes, I definitely agree. Let me know if you have a system where the kernel tool type isn't detected properly.
Are you making use of threaded builds (make -j X)? If not, it might be worth using. Please don't; vps is a virtual machine also running some web sites. I don't want to fully starve the rest that is running on that machine. I appreciate what you are saying, but without info on the virtualised hardware I can't really comment. How many "cores" does the VM have? If it has more than 2, I don't see why adding an additional thread (make -j 2) should cause any noticeable difference to the web services it also hosts.
26 February 2012, configure has been restructured, to generate three sets of *_*_{cc,as,objdump,...} definitions.
If we are building crosstools:
orig_target_* - AROS built toolchain (in bin/{host}/tools/crosstools/....)
aros_kernel_* - External toolchain, if --with-kernel-tool-prefix is given, or if the architecture configures it as such
(ie hosted archs) Otherwise, it points to the orig_target_* tools
aros_target_* - AROS target tools (in bin/{host}/tools/${target_tool_prefix}-*
If we are *not* building crosstools (--disable-crosstools, or --with-crosstools=...):
aros_kernel_* - External toolchain (required, and configure should be checking for it!)
orig_target_* - Points to aros_kernel_*
aros_target_* - AROS target tools (in bin/{host}/tools/${target_tool_prefix}-*
modified collect-aros to mark ABIv1 ELF files with EI_OSABI of 15 (AROS) instead of 0 (generic Unix). For now, I'm going to hold off on the change to refuse to load ABIv0 files (with EI_OSABI of 0) until I can get some more testing done (since dos/internalloadseg_elf.c is reused in a few places).
A separate change to have ABIv0 refuse to load ABIv1 applications will need to be made. The patch to have ABIv1 refuse to load ABIv0 applications will come in the near future.
Custom toolsEdit
"../$srcdir/configure" --target=linux-i386 --enable-debug=all --with-portssources="$curdir/$portsdir"
Always use the 'tools/crosstools' compiler to build contrib/gnu/gcc. AFAIK this was the previous solution, using TARGET_CC override in mmakefile...
the host toolchain should only be used for compiling the tools (genmodule, elf2hunk, etc), and for the bootstrap (ie AROSBootstrap on linux-hosted and grub2 for pc-*). To be exact, the 'kernel' compiler is used for compiling GRUB2, and probably AROSBootstrap too. This is important when cross-compiling.
How about we invert the sense of --enable-crosstools? We make it '--disable-crosstools', and crosstools=yes is on by default? That way we can support new arch bring-up (if we don't have working crosstools yet), but 'most people' won't have to deal with the issues of, say, having C compiled with (host) gcc 4.6.1 but C++ compiled with (crosstools) gcc 4.2.
add-symbol-file boot/aros-bsp-linux 0xf7b14000
add-symbol-file boot/aros-base 0xf7b6a910
There's a "loadkick" gdb command now which does this automatically. Btw, don't use add-symbol-file. Use "loadseg <address>".
You need to run the following, as you have a stale config:

$ ./config.status --recheck && ./config.status
In the end I would like to get rid of the mmakefile parsing by mmake. What I would like to put in place is that mmake calls the command 'make -f mmakefile __MM__' and parses the output of that command. The mmakefile would then be full of statements like:
__MM__ ::
	echo metatarget : prerequisite1 prerequisite2
This could be generated by genmf macros or gmake functions.
I think this approach would give some advantages:
The parsing code in mmake would become simpler:

- No need to discard non-#MM lines, or at least that code would be reduced significantly
- No need for line continuation handling
- No need for variable substitution

Rule generation in the mmakefile would become more flexible. To generate the output one could use all the facilities provided by gmake: if statements, functions, complex variable substitutions. For example, providing arch-specific or configuration-dependent rules would become much easier. This architecture would also be much easier to extend to other make(-like) tools such as cmake, scons, etc. This would for example allow us to gradually convert our genmf+gmake build system to a scons-based one. External code could choose its preferred method: the AROS SDK would support several systems.
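As a hypothetical illustration of the proposed protocol (all names below are made up): the mmakefile would emit its metatarget lines when invoked with the __MM__ target, and could use gmake conditionals and functions while doing so.

```make
USELIBS := dos graphics utility

__MM__ ::
	@echo contrib-foo : $(addsuffix -linklib,$(USELIBS))

ifeq ($(AROS_TARGET_ARCH),linux)
__MM__ ::
	@echo contrib-foo : linux-extras
endif
```

mmake would then collect the echoed "metatarget : prerequisites" lines instead of scanning #MM comments itself.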
Would like to express the following 'build all libraries I depend on' concept in Metamake:
MODULE=testmod
USELIBS=dos graphics utility

$(MODULE)-linklib: core-linklibs $(addsuffix -includes,$(USELIBS)) $(addsuffix -linklib,$(USELIBS))
At the moment it is not possible, as mmake is a static scanner and does not support loops or functions like $(addsuffix ...). Look in the AROS dev maillist for a thread called 'mmake RFC' (in Aug 2010) describing my idea. If you look at the svn log of tools/MetaMake there is r34165 'Started to write a function which calls the __MM__ target in a mmakefile. ...'
Can see this breaking because it won't know which "parent" metatarget(s) to invoke to build the prerequisites based on the object files / binaries alone, unless you add a dependency on the (relevant) metatarget for every binary produced, i.e. it would be like doing "make <prerequisites-metatarget>-quick" for the prerequisite. Yes, each module target would get an extra linklib-modulename target (not linklib-kernel-dos, just linklib-dos, for example).
mmake at the moment only knows about metatargets and metadependencies. It does not handle real files or know when something is old or new. Therefore it always has to try all metadependencies, and make will find out if it is up to date or needs to be rebuilt. This could be changed to also let mmake track dependencies on real files (e.g. the .c files for a shared library), remember when something was last built, and check if files have changed. But this won't be a small change. Is there some way we can pass info about the file types in the "files=" parameter, so that the macros can automatically pass the files to the necessary utility macros?
CFILES := example1
CPPFILES := example2
ASMFILES := example3

%build_prog mmake=foo-bar \
    progname=Example files="c'$(CFILES)',cpp'$(CPPFILES)',asm'$(ASMFILES)'" \
    targetdir=$(AROS_TESTS) \
    uselibs="amiga arosc"
IMO uselibs= should only be needed when non-standard libraries are used. In my ABI V1 I even made a patch to remove all standard libs from the uselibs= statement. I do plan to submit this again sometime in the future. And there should not be a need to add these libs to uselibs=. Linklibs that are standardly linked should be built by the linklibs-core metatarget. %build_module takes care of the linklibs-core dependency. Currently a lot of linklibs are not dependent on this metatarget because a lot of the standard libs are autoopened by libautoinit.a. TBH, I also find it a bit weird. Standard libraries don't need -lXXX, because they "link" via proto files, right?
They are (currently) only used for the linklibs-<foo> dependency autogeneration. I was under the impression you wanted to move all the per-library autoinit code back to the specific libraries? Yes, to avoid the current mismatch between versions in libautoinit and libxxx.a.
A files= parameter already exists for %build_prog and some others, so it might seem logical to add a separate cppfiles parameter. But then we might need to add dfiles, modfiles or pfiles for the D language, Modula-2 and Pascal as well in the future, so your idea about adding it all to the files parameter in one way or another seems more future-proof to me.
Personally, I'd prefer to let make.tmpl figure it all out from the extensions, even though it'd be a large changeset to fix all the FILES=lines.
FILES = foobar.c \
        qux.cpp \
        bar.S \
        xyyzy.mod

%build_prog mmake=foo-bar progname=Example files="$(FILES)" \
    targetdir=$(AROS_TESTS) uselibs="frobozz"
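That extension-driven dispatch could look something like the following illustrative Python sketch (the real logic would live in make.tmpl/genmf, and the group names here are assumptions based on the discussion above):

```python
import os

# Hypothetical mapping from file extension to the build group it belongs to.
EXTENSION_CLASSES = {
    ".c": "CFILES",
    ".cpp": "CPPFILES",
    ".S": "ASMFILES",
    ".mod": "MODFILES",
}

def classify(files):
    """Split a flat FILES list into per-language groups by extension."""
    groups = {}
    for name in files:
        ext = os.path.splitext(name)[1]
        group = EXTENSION_CLASSES.get(ext)
        if group:
            groups.setdefault(group, []).append(name)
    return groups
```

Given the FILES list above, this would route `foobar.c` to the C compiler rules, `qux.cpp` to the C++ rules, and so on, without the mmakefile author having to spell the groups out.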
By the way: what are the 'standard libraries'? That is to be discussed. I would include almost all libs in our workbench/libs and rom/ directories unless there is a good reason not to use one as a standard linklib. mesa will always require -lGL to be passed because AROSMesaGetProcAddress is only present in the linklib. Also, nobody will write code with #include <proto/mesa.h>; all code will have #include <GL/gl.h>.
I am working on minimal-version autoopening, to enhance binary compatibility with m68k and PPC AOS flavors. To be clear, I like the feature you are implementing; I don't like it that programmers have to specify a long list of libs in uselibs= all the time.
Does this give the programmer a way to specify that he'll need more than the minimum for a function? For example, one aspect of a function may have been buggy/unimplemented in the first version. If that aspect is used, a version is needed that supports it properly.
Yes, in the library.conf file, you would use:
foo.conf
...
.version 33
ULONG FooUpdate(struct Foo *foo)
ULONG FooVersion()
# NOTE: The version 33 FooSet() didn't work at all!
# It was fixed in version 34.
.version 34
ULONG FooSet(struct Foo *foo, ULONG key, ULONG val)
.version 33
ULONG FooGet(struct Foo *foo, ULONG key)
...
Then, if you use FooSet(), you'll get version 34 of the library, but if your code never calls FooSet(), you'll only OpenLibrary() version 33.
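That selection rule is easy to sketch: given the per-function minimum versions from foo.conf above, the version to request from OpenLibrary() is just the maximum over the functions the program actually calls (an illustrative Python sketch, not the actual tooling):

```python
# Minimum library version for each function, as declared in foo.conf above.
FOO_MIN_VERSION = {
    "FooUpdate": 33,
    "FooVersion": 33,
    "FooSet": 34,   # broken in 33, fixed in 34
    "FooGet": 33,
}

def required_version(called_functions):
    """Smallest library version that provides every called function correctly."""
    return max(FOO_MIN_VERSION[name] for name in called_functions)
```

So a program calling only FooGet() and FooVersion() would request version 33, while anything touching FooSet() would bump the request to 34.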
OpenLibrary requiring version 34 in one case and 37 in the other, depending on whether I needed that specific NULL-handling aspect of FooSet(). How will this work with otherwise automatic determination of minimum versions?
Uh... You'll have to handle library loading yourself, then:
APTR Foo;
if (IAmOnABrokenA1000()) {
    Foo = OpenLibrary("foo.library", 34);
} else if (TheA3000ReallyNeedsVersion37()) {
    Foo = OpenLibrary("foo.library", 37);
} else {
    /* Put your hands in the air like you just don't care! */
    Alert(AT_DeadEnd);
}
Syntax of the makefile
Where do I need to make the changes to add 'contrib' to the amiga-m68k build process? You need to study scripts in /AROS/scripts/nightly/pkg and get some knowledge from them. Neil can probably give you better explanation. Contrib | https://en.m.wikibooks.org/wiki/Aros/Developer/BuildSystem | CC-MAIN-2018-43 | en | refinedweb |
User:Flaimo
My proposals
Approved
- parking redux
- Mapping of amenity=parking_space / amenity=parking_entrance and combining them to one logical parking lot using a relation with type=site + site=parking
- recycling type
- To differentiate between big recycling centers and just simple recycling containers
- shelter type
- To refine an object mapped with amenity=shelter
- addr keys (2011-04)
- Additional keys for the addr namespace (addr:entrance=*, addr:unit=*, addr:floor=*, addr:room=*, addr:door=*)
Proposed
- area:highway
- A tag to define the two dimensional area of highways
- Access restrictions 1.5
- Refinement and structured refactoring of the current access tags and modifiers
- Automated tasks
- A way for mappers to use simple automated tasks (ata_delete=*, ata_move=*, ata_copy=*, ata_change=*)
- associatedAddress
- A way to connect multiple POIs to one address object | https://wiki.openstreetmap.org/wiki/User:Flaimo | CC-MAIN-2018-43 | en | refinedweb |
In this tutorial, we will hide our secret message inside an image.
What is LSB
The least significant bit (also called the rightmost bit) is the lowest bit of a binary number. For example, in the binary number 10010010, the final "0" is the least significant bit.
What is LSB-Steganography
LSB-Steganography is a steganography technique in which we hide messages inside an image by replacing the least significant bits of the image data with the bits of the message to be hidden.
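To see what "replacing the least significant bit" means at the byte level, here is a minimal Python sketch (not the LSBSteg tool's actual code) that hides one character across eight pixel bytes and reads it back. Each byte changes by at most 1, which is why the image looks unchanged to the eye:

```python
def set_lsb(byte, bit):
    """Replace the least significant bit of a byte with one message bit."""
    return (byte & 0b11111110) | (bit & 1)

def get_lsb(byte):
    """Read the least significant bit back out."""
    return byte & 1

# Hide the eight bits of one character across eight pixel bytes, then recover it.
pixels = [0b10010010] * 8            # eight identical sample pixel bytes
secret = ord("A")                    # 0b01000001
bits = [(secret >> i) & 1 for i in range(8)]
stego = [set_lsb(p, b) for p, b in zip(pixels, bits)]
recovered = sum(get_lsb(p) << i for i, p in enumerate(stego))
```

A real tool does exactly this over every byte of the pixel data, which is also why the carrier image must have at least eight pixel bytes per byte of message.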
Let’s hide our message
STEP 1 Before hiding our secret message, we will make sure it is fully encrypted, so that if someone extracts the message from the image he would still not be able to read it. So first we will create our secret message: I will open nano (you can use whatever editor you want) by typing nano msg.txt, press return, enter my secret message, then save and exit.
STEP 2 We will encrypt our message with a symmetric key algorithm using gpg, so that both the sender and receiver will only need a single key for encryption and decryption. Type gpg -c msg.txt to encrypt it. (I am using the CAST-128 cipher for encryption.)
STEP 3 It will ask for a password, so enter your desired password.
STEP 4 I will upload this encrypted message to some hosting server and then copy the downloadable link to hide it in our image.
STEP 5 Now clone the LSB-Steganography repository with git clone and move into its directory.
STEP 6 Install the dependencies by typing pip install -r requirements.txt.
STEP 7 To hide the message, type python LSBSteg.py -i "the image in which you want to hide your message" -o "the output image file, with format" -f "the text file you want to hide".
Our image file is prepared.
Extracting the hidden message
Our recipient has received the image file, and it's time to reveal the hidden message from this image.
STEP 1 To decode the hidden data from the image, type python LSBSteg.py decode -i "your image file containing the message" -o "output name with file format".
STEP 2 We have extracted the file from the image; now it's time to read its content, so type cat msg.txt to read the file.
STEP 3 After downloading the file, it's time to decrypt it, so type gpg -d "your encrypted file". It will ask for the password; enter it and we will get our secret message.
python3 LSBSteg.py decode -i msg.png -o msg.txt && ls
Traceback (most recent call last):
File “LSBSteg.py”, line 17, in
import cv2
ModuleNotFoundError: No module named ‘cv2’
what’s the problem ?
pip install opencv-python should fix the problem. Check if this works.
Thank you for this. Very useful | https://www.cybrary.it/0p3n/hide-secret-message-inside-image-using-lsb-steganography/ | CC-MAIN-2018-43 | en | refinedweb |
So far we've successfully created an EditTaskComponent and TaskListComponent, including all necessary @Inputs and @Outputs to pass data down and actions up. In this lesson, we'll create our NewTaskComponent with a form to create new Task objects.

Let's generate the component:
$ ng generate component new-task
Now let's add import statements:
import { Component, Output, EventEmitter } from '@angular/core';
import { Task } from '../models/task.model';
They're very similar to the import statements in the EditTaskComponent; however, we won't need Input for this component. While it might be tempting just to copy/paste import statements into components, we should always strive to only import what the component actually needs.
Next we'll place <app-new-task></app-new-task> tags where we'd like to render our NewTaskComponent. We'll place them beneath the <app-edit-task></app-edit-task> tags in the root component's HTML template:
...
<app-edit-task [childSelectedTask]="selectedTask" (doneButtonClickedSender)="finishedEditing()"></app-edit-task>
<app-new-task></app-new-task>
As always, we should verify that our home page correctly renders "new-task works!"
Now let's replace that boilerplate with a form for new Tasks that has description and priority fields:
<h1>New Task</h1>
<div>
  <label>Enter Task Description:</label>
  <input #newDescription>
</div>
<div>
  <label>Task Priority:</label>
  <select #newPriority>
    <option [value]="1"> Low Priority </option>
    <option [value]="2"> Medium Priority </option>
    <option [value]="3"> High Priority </option>
  </select>
  <button (click)="submitForm(newDescription.value, newPriority.value)">Add</button>
</div>
We have an input field for description and a drop-down menu for priority. We use select fields instead of radio buttons so we can practice using other types of input fields with Angular.
The # symbols in the form above are template reference variables. We can use template reference variables to save our inputs.

By adding the template reference variable #newDescription, we can save a user's input in a newDescription variable. We can then retrieve the user input by calling newDescription.value.
When a user selects an option from the dropdown, this choice will be saved to #newPriority. We can then retrieve the user's input by calling newPriority.value. Let's take a look at the code for our drop down again:
...
<label>Task Priority:</label>
<select #newPriority>
  <option [value]="1"> Low Priority </option>
  <option [value]="2"> Medium Priority </option>
  <option [value]="3"> High Priority </option>
</select>
...
Form fields like <select> drop-downs and radio buttons are formatted a little differently than text fields. They have values added like this: [value]="3".

This is because we can't gather user input from radio buttons or drop-downs in the same manner we do with a text box. We need to first determine which option the user selected, then gather the value from that option. This is why we include property bindings that alter the value of the #newPriority template reference variable depending on which option the user selects from the drop-down list.
Template reference variables are generally used in forms that create new objects while two-way data binding is used in forms that edit existing objects.
Why is this? When we fill out a form to create a new object, that object doesn't exist yet. For that reason, we can't bind our data to the object in the same manner we bind edit form data to existing models.
For more information on template reference variables, check out the Angular Documentation.
Let's take a look at the code for our form's button:
... <button (click)="submitForm(newDescription.value, newPriority.value)">Add</button> ...
The button includes an event binding ((click)) that triggers a submitForm method. The method takes the user input as arguments.

Let's define the method in the component's class declaration:
...
export class NewTaskComponent {
  submitForm(description: string, priority: number) {
    let newTask: Task = new Task(description, priority);
  }
}
Let's add an @Output to pass the submitForm() action up to the root component:
...
export class NewTaskComponent {
  @Output() sendTask = new EventEmitter();

  submitForm(description: string, priority: string) {
    let newTask: Task = new Task(description, parseInt(priority));
    this.sendTask.emit(newTask);
  }
}
Remember that an EventEmitter can only emit one argument at a time; that's why we create a new Task first and send it up as a single argument. This is an appropriate place to create the Task anyway. After all, that's the responsibility of this component!

Note that we pass in priority as a string. Remember that input from a form is always a string, not a number. We then need to parseInt the priority to make it into a number.
Let's update the selector tags in the parent component's HTML template:
... <app-new-task (sendTask)="addTask($event)"></app-new-task> ...
When the child component emits an action via sendTask, it will be passed into the parent component's addTask() method (which we have yet to define).

$event is a special variable that holds the argument passed to the child component's EventEmitter (in this case, the newTask).
Let's define this addTask() method in our AppComponent next. It will add the new Task to the array of all Tasks:
...
export class AppComponent {
  ...

  addTask(newTask: Task) {
    this.masterTaskList.push(newTask);
  }
}
For now, we're just pushing the new Task to our masterTaskList. Next week, we'll learn how to connect Angular to a database.

If we serve our application, we can now use the form to add new Tasks to the list.
Let's clear our form's text field after we submit it:
... <button (click)="submitForm(newDescription.value, newPriority.value); newDescription.value='';">Add</button> ...
An event binding's template statement can have both method calls and property assignments. Here we include a property assignment along with our method call. We set newDescription.value back to an empty string.
When we include multiple actions in an event binding's template statement, we must separate the actions with semicolons.
We should be able to launch our application and use our new button to clear the form after creating an object. Doing the same for our dropdown is more involved and beyond the scope of this lesson. Feel free to set a default value to the dropdown on your own!
In the next lesson, we'll explore pipes that can be used to live-filter data right on the page and put the finishing touches on our to do application.
Storing Bounded Data Without SQL (and the benefits thereof)
To paint the context for the scenarios and the problems I’ll outline, consider an app in which I’m classifying exercises for workouts. Each exercise has its own attributes such as which muscle group it works, which equipment it requires, whether it’s considered a strength, power, or stabilization exercise, and a few others.
The Problem
For the given scenario, you’ll notice that for all practical purposes, the data is finite. Once we input our several hundred exercises, it’s not going to grow any more. Maybe a few more will be added, but it’s not like the data will suddenly take off and grow by several magnitudes. The total size of the data is in kilobytes, not megabytes or even gigabytes.
So the question is: Is it worth storing that data in a database?
Potential Alternative
The answer to the above question: No. Sort of. At least you don’t have to store the data in a database, and I’ve recently adjusted an app to avoid using a database for the exact problem described.
One of the things I thought was interesting when I first started my job as a software engineer was how enumerated data types were represented inside of a database. In a database course in college, you’d learn that every item or tangible object belonged in a row on a table, and other rows on other tables could reference that item.
However, in an application where we’re using a database for the purpose of being referenced by another programming language, it’s possible to avoid representing bounded data (or enumerated data types as a simple example) inside the database. So for example, you can have a row in a database represent another arbitrary enumerated datatype using an id that corresponds to something else that lives in the code.
This is precisely what Django-RichEnum is for. It’s an open source Python library written at Hearsay that allows you to create enumerated data types and then have database rows reference those enums. In turn, when you’re loading objects from the database into a django model, you can seamlessly reference the object’s attributes that represent an enum.
So for example, if I wanted to represent gender, I could do something like this:
class _GenderType(OrderedRichEnumValue):
    def __init__(self, index, canonical_name, display_name):
        super(_GenderType, self).__init__(index, canonical_name, display_name)

class GenderType(OrderedRichEnum):
    MALE = _GenderType(1, 'male', 'Male')
    FEMALE = _GenderType(2, 'female', 'Female')
You could represent gender a number of different ways, but doing so with an enumerated data types provides a consistent solution. Now if we have a Django model, we can reference gender with:
from django_richenum.models import IndexEnumField
from django.db import models

from .constants import GenderType

class _User(models.Model):
    gender = IndexEnumField(GenderType)
And inside the database, the “gender” field would use “gender_id” and would either be 1 or 2, which corresponds to the enumerated data type above.
For the example I’m using for exercises, I’m essentially doing the same concept, but instead of 2 different items being represent, I’m representing 700+ items.
Moar Advantages
Bottom line: it’s faster, simple to maintain, and this module by itself has no dependency on a database.
Databases are slow because they pull data from a hard drive. Accessing the hard drive compared to a single cpu cycle is like comparing a year to a second in terms of the difference in magnitudes. While databases are optimized for querying, there’s still overhead involved with every single query. As long as keeping data in memory isn’t costly, you can get some performance benefits be keeping your data outside of a database.
In my example of exercises, I had 5 different database tables just to represent exercises. This was because I had the table for exercises and then 4 additional many to many tables to join against phases, muscle groups, equipment required, and exercise type. So I spent a few minutes joining all of the data and then just dumping it into a JSON file.
Now each exercise is represented as a JSON blob and looks something like this:
{
    "muscle_group_ids": [21, 17, 3, 18],
    "workout_component_id": 5,
    "muscle_group_id": 3,
    "multi_joint": 1,
    "secondary_muscle_group_id": null,
    "progressor": 0,
    "mutually_exclusive": null,
    "one_limb": 0,
    "min_experience_id": 3,
    "min_fitness_level_id": 1,
    "phase_ids": [5, 2, 3, 4],
    "compound": 1,
    "timed": 0,
    "exercise_type_ids": [1],
    "id": 133,
    "equipment_ids": [2],
    "name": "Deadlift"
}
And I have a single json file that has about 700 of these exercises. At this point there’s not really a compelling reason to represent the data in a different way.
Now I want to be able to query the data efficiently. I can do that by indexing the data with a simple dictionary based on every possible way I’d want to query it. To keep the example simple, let’s just say I only want to query by muscle group. So creating a class like this one below will fulfill those needs:
class Exercise(object):
    class _Exercise(object):
        def __init__(self, dict_obj):
            for key, value in dict_obj.items():
                setattr(self, key, value)

        def __hash__(self):
            return self.id

    _exercises = [_Exercise(dict_obj) for dict_obj in read_file_as_json("workout_generator/exercises.json")]
    _exercises_by_muscle_group = defaultdict(set)
    for e in _exercises:
        _exercises_by_muscle_group[e.muscle_group_id].add(e)

    def __init__(self):
        self.query = set(self._exercises)

    def for_muscle_group(self, muscle_group_id):
        self.query = set.intersection(self.query, self._exercises_by_muscle_group[muscle_group_id])
        return self
As we step through the code, here’s why I made each decision:
class _Exercise(object): def __init__(self, dict_obj): for key, value in dict_obj.items(): setattr(self, key, value) def __hash__(self): return self.id
This is just a simple class that will never be directly accessed by anything other than the outer “Exercise” class. Its purpose is to convert a JSON blob into a Python object. Since we know that each exercise ID is unique (every exercise ID is unique if I didn’t mention), we’ll override the hash function so that it just returns the unique ID and doesn’t do anything wild and crazy that might slow us down.
class Exercise(object): _exercises = [_Exercise(dict_obj) for dict_obj in read_file_as_json("workout_generator/exercises.json")]
Here I just read in the json file and converted each item to the “_Exercise” object previously described. You’ll notice that the “_exercises” variable is defined as a class attribute. This means that this line of reading in a file will be executed only once to minimize interaction with the hard drive. Every piece of the code that might reference this class will reference the same static object, and an added plus for web applications is that this is still run exactly once until the application is restarted. So the program in memory will have an immortal list of exercises that never dies.
_exercises_by_muscle_group = defaultdict(set)
for e in _exercises:
    _exercises_by_muscle_group[e.muscle_group_id].add(e)
Now I’m indexing all of the exercises by muscle group using a dictionary that references a set. So for any given muscle group, you can immediately look up any corresponding exercises in O(1) time.
I’m using a set instead of a list because in practice, additional filtering will happen. So for example, I use a similar process to index all exercises by the required equipment. So now, if I want to query by equipment and I query by muscle group, I get two distinct sets. Then I want the intersection (I want to AND them together) to get all of the exercises that satisfy both requirements. And that brings us to:
def __init__(self):
    self.query = set(self._exercises)

def for_muscle_group(self, muscle_group_id):
    self.query = set.intersection(self.query, self._exercises_by_muscle_group[muscle_group_id])
    return self
In order to query items, I’m going through the process described above. I instantiate the Exercise class with a query representing a set of all possible exercises. For every additional condition that I add to the query, I want the intersection of the possible results. By returning “self” I can chain queries together.
Exercise().for_muscle_group(2).for_equipment([2, 3]).query
Now when I query the data, I can do so with lines that look like the above. Also note that the “for_equipment” method wasn’t included in the stripped down example, but here you can see how queries can be chained together with sets.
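The chaining pattern can be reproduced end to end with a toy dataset (illustrative only; the ids and the for_equipment logic here are made up, not the real exercise data):

```python
class Query:
    # Toy dataset: exercise id -> (muscle_group_id, equipment_id)
    _data = {1: (2, 2), 2: (2, 3), 3: (5, 2)}

    # Build the per-attribute indexes once, at class definition time.
    _by_muscle = {}
    _by_equipment = {}
    for _id, (_m, _e) in _data.items():
        _by_muscle.setdefault(_m, set()).add(_id)
        _by_equipment.setdefault(_e, set()).add(_id)

    def __init__(self):
        # Start from the set of everything, then narrow with intersections.
        self.query = set(self._data)

    def for_muscle_group(self, muscle_group_id):
        self.query &= self._by_muscle.get(muscle_group_id, set())
        return self

    def for_equipment(self, equipment_ids):
        # Any of the listed equipment is acceptable: union first, then AND.
        allowed = set().union(*(self._by_equipment.get(e, set()) for e in equipment_ids))
        self.query &= allowed
        return self
```

`Query().for_muscle_group(2).for_equipment([2, 3]).query` yields the ids satisfying both filters, mirroring the chained call shown above.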
The Results
For a standard query it took on average 0.15 milliseconds.
The queries are now fast as a mother falcon. Dictionary lookups are generally O(1), and set intersections are O(n) where n is the length of the smaller of the two sets. In every case, O(n) for n=~700 is drastically faster than a database query. The result is that I can query my data without worrying about any incurred cost.
Compare this to the alternative where I had to query 5 different tables or make 4 different table joins. In a best case scenario with optimal code, it would have probably taken around a millisecond per table query. That’s a 30x increase in speed! Now consider a worst case scenario where someone unknowingly left a database query inside of a loop. It would be a bummer. | http://scottlobdell.me/2015/01/storing-bounded-data-without-sql-benefits-thereof/ | CC-MAIN-2018-43 | en | refinedweb |
int main(){
using namespace std; is a bad idea as it WILL get you in trouble some day. Then when you leave school and start your first job and they tell you never to use this line, you will have to learn what you should have learned, in a short period of time. It is better to learn how to use "std::" to qualify what is in the standard namespace now, when it is easier, rather than later.
GameFromScratch.com
So today I fired up Steam to finally pull the trigger on Pillars of Eternity and what do I see….
It seems that Degica games, the makers of the popular RPG Maker series of game building tools are having a sale today. Their flagship product RPG Maker VX Ace ( … quite the name ) is currently on sale for 80% off.
On top of RPG Maker Vx Ace, RPG Maker XP and RPG Maker 2003 are all also discounted, 80% and 50% respectively. What perhaps caught my eye most of all however wasn’t the RPG Maker line of products, but Spriter is also on sale for 75% off. Spriter was a successfully kickstarted project made by BrashMonkey, I had no idea Degica was now the publisher.
RPG Maker
This is a program that has been around for a very long time, first released in 1988. As the title suggests, it’s a game building tool for making RPG’s, generally of the classic jRPG format.
That said, this is a proper game building application, a number of commercially shipped games were created using the various iterations of RPG Maker. RPG Maker was mentioned as an option in my guide to getting kids started in game development. Bundled with a ton of premade assets, it is a good way to get your hands wet in game development. In addition to world, character, dialog, etc… building tools, there is also a programmatic layer using the Ruby programming language.
Spriter
Perhaps of most interest to GameFromScratch readers is Spriter. Spriter is essentially a modular 2D animation system. You basically cut your sprites up into multiple independent pieces which in turn can be animated. You can also apply an IK chain ( inverse kinematics… think skeleton ) that controls the animation for you. You can then export the resulting animation as a sprite sheet, sequence of images or as an animated gif.
Most likely though you will want to use the Spriter API, or one of the pre-existing plugins, and use the Spriter animations directly in your game engine, such as Construct, Unity or LibGDX.
Truth is, I did a pretty lousy job there describing Spriter, so I will just show their video instead…
Both are on sale until May the 29th.
Oh, and as I finish typing this up, the Steam page for the sale now appears to be broken…
As part of GameFromScratch’s ongoing A Closer look at game engine series, today we are going to be taking a look at LibGDX, a Java based cross platform game engine(sorta). The “Closer Look” series is intended to be a cross between a tutorial, review and overview of a game engine, aimed at helping those evaluating which game engine is right for them. We are going to go into a bit less technical detail with LibGDX, as this site already has a massive text based and video based tutorial series covering LibGDX in depth.
There is an HD video version of this post available as well, also embedded below.
Let’s start with their own description:.
You can get a complete feature list here.
So basically LibGDX is a cross platform, JVM based 2D/3D game engine(sorta) capable of targeting desktops, iOS, Android and HTML. It is free and completely open source. When I say JVM based, LibGDX is written primarily in Java but your own applications code could theoretically be written in any JVM based language, such as Kotlin or Scala. LibGDX is able to compile to HTML using the GWT compiler, while iOS support is provided using RoboVM. For the most part, 99% of the time, the platform specific portions are pretty well hidden from you the developer.
So why do I keep throwing a (sorta) disclaimer every time I use the word "Game Engine"? Well that's technically because LibGDX isn't a game engine, it's more of a framework, similar in scope to SFML, MonoGame/XNA or SDL. It provides the underlying "guts" required to make a game a game, things like graphics, audio and input. It however doesn't provide a scenegraph, essentially the "heart" of a game. However it does provide a simple 2D scene graph / UI / widget library named Scene2D, which is built over top of LibGDX. The use of Scene2D is completely optional.
Getting started with LibGDX is incredibly simple. If you want to use it as just another library, that is of course your option. If you are, for example, making a desktop-only project, you can download and build LibGDX from its GitHub repository. If you want to create a cross-platform game, or, like me, want to avoid Java build systems like the plague, there is a handy project creation tool available right here. Simply download and execute the linked Jar file. It will automatically download all of the required dependencies, although you will have to have a Java Development Kit and the Android SDK pre-installed. For more detailed installation instructions watch this. Assuming you have a properly configured Java dev environment, you should see the following:
This will automatically generate a project for you. Clicking Advanced allows you to have LibGDX generate IntelliJ IDEA or Eclipse projects for you. You can also select which platforms you want to support and which extensions you want enabled. Once you click Generate, Maven will automatically create a project for you and download all the dependencies. If you want the tools such as a font generator, particle tool or sprite packer, be sure to select Tools under extensions.
If you are looking for help working with your specific IDE, check the tutorials for more details. There are IntelliJ and Eclipse configuration tutorials available.
The heart of your game is the platform agnostic ApplicationAdapter located in the core project. ( More on this shortly ). Here is the;
public class MyGame extends ApplicationAdapter {
    SpriteBatch batch;
    Texture img;

    @Override
    public void create () {
        batch = new SpriteBatch();
        img = new Texture("badlogic.jpg");
    }

    @Override
    public void render () {
        Gdx.gl.glClearColor(1, 0, 0, 1);
        Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
        batch.begin();
        batch.draw(img, 0, 0);
        batch.end();
    }
}
If you’ve previously worked in XNA, the overarching structure should be immediately familiar to you, XNA and LibGDX feel incredibly similar to me, and this is a good thing.
The nutshell version of what's happening here is that our game extends the ApplicationAdapter class. In its create() method we create a SpriteBatch and a Texture object; the texture "badlogic.jpg" is loaded from the assets folder (by default in the Android project, unless you didn't create an Android project, in which case it will be in Core). The assets folder is basically the root where all of your game's non-code content is located. In the render() method we clear the background color, start our sprite batch, draw the texture to it, then finish the batch, which results in its contents being drawn to the screen.
There are a few things to be immediately aware of. LibGDX provides a fairly shallow abstraction over top of OpenGL (or WebGL/OpenGL ES, depending on platform), which can be accessed with Gdx.gl calls. Most of the GL functions are accessible this way, in a cross-platform-friendly manner. The other thing to be aware of is that the coordinate system by default follows the OpenGL convention of having the origin (0,0) at the bottom-left corner of the screen.
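If the bottom-left origin is awkward for your game, the convention can be flipped. A hedged sketch using an OrthographicCamera (the first argument to setToOrtho selects y-down; the viewport sizes here are made up):

```java
OrthographicCamera camera = new OrthographicCamera();
camera.setToOrtho(true, 640, 480); // true = y-down, so (0,0) becomes the top-left

batch.setProjectionMatrix(camera.combined); // subsequent draws use the flipped coordinates
```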
When you run the generated code, you should see:
That’s about as detailed as I am going to get with the code, if you want more details, simply go through the tutorial series for dozens of them.
Now the neat part about this whole process is, using the exact same code you could switch your run target to iOS, Android or HTML and it should just work. Let’s look at exactly how that works. If you look at the project, your folders should be structured like so:
Basically you have a project/module for each targeted platform, then core where the cross platform ( majority ) of your code exists. Let’s take a quick look at the desktop project for a look at exactly how this works. Expanded, the desktop project should look something like this.
It’s the DesktopLauncher class that is of the most interest. For desktop projects, this is your application entry point. Let’s take a quick look inside:
package com.gamefromscratch.desktop;
import com.badlogic.gdx.backends.lwjgl.LwjglApplication;
import com.badlogic.gdx.backends.lwjgl.LwjglApplicationConfiguration;
import com.gamefromscratch.MyGame;
public class DesktopLauncher {
public static void main (String[] arg) {
LwjglApplicationConfiguration config = new LwjglApplicationConfiguration();
new LwjglApplication(new MyGame(), config);
}
}
This is where you would put the platform-specific code related to desktop platforms. You can see the traditional program main() is here, inside of which we create a simple configuration object and pass it to our LwjglApplication. LWJGL stands for Light Weight Java Game Library, and it is the OpenGL layer that the LibGDX desktop implementation is based on. There is a similar entry point created for the Android application, the iOS application and the HTML5 application. Each and every one creates a new instance of your game class and passes in the platform-specific configuration. Ideally, this is the extent of your platform-specific code, but quite often you have to implement some details on a platform-by-platform basis.
If you look at the various config objects being created, you will notice each is full of properties that are specific to that particular platform, like so:
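As a hedged sketch (the fields come from the LWJGL backend's public configuration class; the actual values are made up), the desktop config might be filled in like this:

```java
LwjglApplicationConfiguration config = new LwjglApplicationConfiguration();
config.title = "My Game";   // window caption, desktop only
config.width = 1280;        // initial window size
config.height = 720;
config.resizable = false;
new LwjglApplication(new MyGame(), config);
```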
Splitting your project up into this format makes platform independence mostly an afterthought for most developers.
I won't be going into depth on the functionality that LibGDX provides; it's covered in detail elsewhere. That said, if functionality is required by a game but isn't game specific, chances are it's in LibGDX. This includes graphics, audio, input, motion, asset management and much more.
Not being strictly a game engine, the tooling support in LibGDX is minimal, but there are some tools included, many of which are command-line based. There is no world editor, animation tools, etc… The tools are bundled in the project if you included the Tools extension. The tools include:
Most of the tools are Java callable and can be invoked from the command line/terminal. This means integrating them into your build process is trivial. The primary tools above are for font and sprite sheet creation. Only hiero and particleeditor are GUI based:
Hiero is a tool for creating bitmap fonts from system/ttf fonts:
The Particle Editor is a GUI tool for creating particle systems:
Another critical tool, not included by default, is the fbx-conv project, which is used for importing 3D assets into LibGDX.
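Because the tools are Java callable, a build step can invoke them directly. Here is a hedged sketch of packing a sprite sheet with the bundled texture packer (the directory and atlas names are made up for illustration):

```java
import com.badlogic.gdx.tools.texturepacker.TexturePacker;

public class PackTextures {
    public static void main(String[] args) {
        TexturePacker.Settings settings = new TexturePacker.Settings();
        settings.maxWidth = 1024;   // cap the atlas page size
        settings.maxHeight = 1024;
        // input directory, output directory, atlas name (all hypothetical)
        TexturePacker.process(settings, "raw-images", "atlas", "game");
    }
}
```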
Although not directly part of LibGDX, there are some tools built on top of LibGDX that provide some of the functionality typically ascribed to a full game engine. They include:
Overlap2D A 2D level and UI editor for LibGDX games built over LibGDX
GDX-Proto A 3D game engine built over LibGDX
BDX Another 3D engine built over LibGDX, this one built over top of Blender as the world editor.
Spine A commercial 2D IK animation tool built using LibGDX by the author of Scene2D.
Of course, there is the GameFromScratch.com tutorial series, which I happen to think is pretty good, but I’m biased. On top of that, there is the Wiki, which is pretty comprehensive. There is also a complete reference guide generated from the code that, while a little shallow in spots, is generally quite useful.
There are a couple of recently published LibGDX books available as well, one of which I somewhat recently reviewed. I am not sure the other books are really needed between the Wiki and the tutorials available on this site. If you want, however, there are some beginner-oriented books available as well.
For support, there is a quite active forum available. Well-worded questions are generally answered quite quickly. Stack Overflow also sees a fair number of LibGDX questions. All told, it has a very large community, making community-based support pretty solid.
Personally I am a big fan of LibGDX, even though I am not the biggest Java cheerleader you will find. If you are looking to create complex 3D games, or want extensive editor support, LibGDX is probably not the engine for you. If you want a code focused, open source, free, accessible cross platform game engine capable of doing 2D or 3D, you really should check out LibGDX.
Programming
Engine LibGDX
So what’s new here?
The most immediately obvious change is that the old interface has been stripped away. Outdated menus to obsolete tutorials no longer clutter the site. Behind the scenes, the HTML is optimized and the page should load better. The biggest changes are highlighted below.
A lot of people arrive here with no prior experience. I have set up a new landing page to get them started with a series of simple questions. I will be expanding upon and improving this area over time.
This area is still very much a WIP. Basically, I have done a number of tutorials on a number of different gamedev topics. This page is going to bring them all together in one spot. Once again, very much a work in progress.
I’ve been producing more and more video tutorials, hosted on YouTube and linked from individual blog posts. The video gallery is an attempt to bring all of these videos together in a single location. Still very much a work in progress, too.
GameFromScratch
Following.
Full (enormous) changelog since 1.0 here.
They also put together a video showcasing the new features:
Of course, if you are interested in learning more GFS has an extensive Godot Tutorial Series to get you started.
News
Engine Godot | http://www.gamefromscratch.com/2015/05/default.aspx | CC-MAIN-2018-43 | en | refinedweb |
© 2011-2018 The original authors.
- 5.10. Redis Transactions
- 5.11. Pipelining
- 5.12. Redis Scripting
- 5.13. Support Classes
- 6. Reactive Redis support
- 6.1. Redis Requirements
- 6.2. Connecting to Redis by Using a Reactive Driver
- 6.3. Working with Objects through ReactiveRedisTemplate
- 6.4. String-focused Convenience Classes
- 6.5. Redis Messaging/PubSub
- 6.6. Reactive Scripting
- 7. Redis Cluster
- 8. Redis Repositories
- 8.1. Usage
- 8.2. Object Mapping Fundamentals
- 8.3. Object-to-Hash Mapping
- 8.4. Keyspaces
- 8.5. Secondary Indexes
- 8.6. Query by Example
- 8.7. Time To Live
- 8.8. Persisting References
- 8.9. Persisting Partial Updates
- 8.10. Queries and Query Methods
- 8.11. Redis Repositories Running on a Cluster
- 8.12. CDI Integration
- 8.13. Redis Repositories Anatomy
- Appendixes
Preface
1. New Features
This section briefly covers items that are new and noteworthy in the latest releases.
1.3. New in Spring Data Redis 1.8
Upgrade to Jedis 2.9.
Upgrade to Lettuce 4.2 (note: Lettuce 4.2 requires Java 8).
Support for Redis GEO commands.
Support for Geospatial Indexes using Spring Data Repository abstractions (see Geospatial Index).
MappingRedisConverter-based HashMapper implementation (see Hash mapping).
Support for PartialUpdate in repositories (see Persisting Partial Updates).
SSL support for connections to Redis cluster.
Support for client name through ConnectionFactory when using Jedis.
1.4. New in Spring Data Redis 1.7
Support for RedisCluster.
Support for Spring Data Repository abstractions (see Redis Repositories).
1.5..
For an introduction to key-value stores, Spring, or Spring Data examples, see Getting Started. This documentation refers only to Spring Data Redis Support and assumes the user is familiar with key-value storage and Spring concepts.
2. Why Spring Data Redis?
3. Requirements
Spring Data Redis 2.x binaries require JDK level 8.0 and above and Spring Framework 5.1.1.RELEASE and above.
4. Getting Started
This section provides an easy-to-follow guide for getting started with the Spring Data Redis module.
4.1. First Steps
As explained in Why Spring Data Redis?, Spring Data Redis (SDR) provides integration between the Spring framework and the Redis key-value store. Consequently, you should become acquainted with both of these frameworks. Throughout the SDR documentation, each section provides links to relevant resources. However, you should become familiar with these topics before reading this guide.
4.1.1. Learning Spring
Spring Data uses Spring framework’s core functionality, such as the IoC container, resource abstraction, and the AOP infrastructure. While it is not important to know the Spring APIs, understanding the concepts behind them is important. At a minimum, the idea behind IoC should be familiar. That being said, the more knowledge you have about Spring, the faster you can pick up Spring Data Redis. In addition to the Spring Framework’s comprehensive documentation, there are a lot of articles, blog entries, and books on the matter. The Spring Guides home page offers a good place to start. In general, this should be the starting point for developers wanting to try Spring Data Redis.
4.1.2. Learning NoSQL and Key Value Stores.
4.1.3..
4.2. Need Help?
If you encounter issues or you are just looking for advice, use one of the links below:
4.2.1. Community Support
The Spring Data tag on Stack Overflow.
Reference Documentation
Document structure
This part of the reference documentation explains the core functionality offered by Spring Data Redis.
Redis support introduces the Redis module feature set.
5. Redis support
One of the key-value stores supported by Spring Data is Redis. To quote the Redis project home page:
5.1. Redis Requirements
5.2. Connecting to Redis
5.3.2. Configuring the Jedis Connector
5.4. Redis Sentinel Support
For dealing with high-availability Redis, Spring Data Redis has support for Redis Sentinel, using
RedisSentinelConfiguration, as shown in the following example:
/**
 * Jedis
 */
@Bean
public RedisConnectionFactory jedisConnectionFactory() {
  RedisSentinelConfiguration sentinelConfig = new RedisSentinelConfiguration()
    .master("mymaster")
    .sentinel("127.0.0.1", 26379)
    .sentinel("127.0.0.1", 26380);
  return new JedisConnectionFactory(sentinelConfig);
}

/**
 * Lettuce
 */
@Bean
public RedisConnectionFactory lettuceConnectionFactory() {
  RedisSentinelConfiguration sentinelConfig = new RedisSentinelConfiguration()
    .master("mymaster")
    .sentinel("127.0.0.1", 26379)
    .sentinel("127.0.0.1", 26380);
  return new LettuceConnectionFactory(sentinelConfig);
}
Sometimes, direct interaction with one of the Sentinels is required. Using
RedisConnectionFactory.getSentinelConnection() or
RedisConnection.getSentinelCommands() gives you access to the first active Sentinel configured.
5.5. Working with Objects through RedisTemplate
Most users are likely to use RedisTemplate and its corresponding package, org.springframework.data.redis.core. Serialization is handled through the serializers available in the org.springframework.data.redis.serializer package. See Serializers for more information. You can also set any of the serializers to null and use RedisTemplate with raw byte arrays by setting the enableDefaultSerializer property to false. Note that the template requires all keys to be non-null.
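A hedged sketch (not the reference example) of wiring a String-keyed template and using its value operations; the key and value below are made up:

```java
@Bean
public RedisTemplate<String, String> redisTemplate(RedisConnectionFactory factory) {
  RedisTemplate<String, String> template = new RedisTemplate<>();
  template.setConnectionFactory(factory);
  return template;
}

// elsewhere, with the template injected:
template.opsForValue().set("greeting", "hello");        // SET greeting hello
String value = template.opsForValue().get("greeting");  // GET greeting
```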
5.6. String-focused Convenience Classes
5.7. Serializers
5.8. Hash mapping
The following example shows mapping an object to a Redis hash with ObjectHashMapper:
public class Person {
  String firstname;
  String lastname;
  // …
}

public class HashMapping {

  @Autowired
  HashOperations<String, byte[], byte[]> hashOperations;

  HashMapper<Object, byte[], byte[]> mapper = new ObjectHashMapper();

  public void writeHash(String key, Person person) {
    Map<byte[], byte[]> mappedHash = mapper.toHash(person);
    hashOperations.putAll(key, mappedHash);
  }

  public Person loadHash(String key) {
    Map<byte[], byte[]> loadedHash = hashOperations.entries("key");
    return (Person) mapper.fromHash(loadedHash);
  }
}
public class Address {
  String city;
  String country;
}
The following table shows how the data in the preceding class would appear in normal mapping:
The following table shows how the data in the preceding class would appear in flat mapping:
5.9. Redis Messaging/PubSub
The RedisMessageListenerContainer lets you, as an application developer, write the (possibly complex) business logic associated with receiving a message (and reacting to it) and delegates boilerplate Redis infrastructure concerns to the framework. The following example shows how to configure a listener container:
<bean id="messageListener" class="org.springframework.data.redis.listener.adapter.MessageListenerAdapter">
  <constructor-arg>
    <bean class="redisexample.DefaultMessageDelegate"/>
  </constructor-arg>
</bean>

<bean id="redisContainer" class="org.springframework.data.redis.listener.RedisMessageListenerContainer">
  <property name="connectionFactory" ref="connectionFactory"/>
  <property name="messageListeners">
    <map>
      <entry key-
        <bean class="org.springframework.data.redis.listener.ChannelTopic">
          <constructor-arg
        </bean>
      </entry>
    </map>
  </property>
</bean>
Each time a message is received, the adapter automatically and transparently performs translation (using the configured
RedisSerializer) between the low-level format and the required object type. Any exception caused by the method invocation is caught and handled by the container (by default, exceptions get logged).
5.10. Redis Transactions
Redis provides support for transactions through the
multi,
exec, and
discard commands. These operations are available on
RedisTemplate. However,
RedisTemplate is not guaranteed to execute all the operations in the transaction with the same connection.
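A hedged sketch of grouping multi(), a command, and exec() in one callback so they run on the same connection (the keys and values below are made up):

```java
List<Object> txResults = template.execute(new SessionCallback<List<Object>>() {
  public List<Object> execute(RedisOperations operations) throws DataAccessException {
    operations.multi();
    operations.opsForValue().set("thing1", "thing2");
    operations.opsForValue().get("thing1"); // queued; the result arrives via exec()
    return operations.exec(); // runs the queued commands and returns their results
  }
});
```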
// must be performed on thread-bound connection
template.opsForValue().set("thing1", "thing2");

// read operation must be executed on a free (not transaction-aware) connection
template.keys("*");

// returns null as values set within a transaction are not visible
template.opsForValue().get("thing1");
5.11. Pipelining
Redis provides support for pipelining, which involves sending multiple commands to the server without waiting for the replies and then reading the replies in a single step. Pipelining can improve performance when you need to send several commands in a row, such as adding many elements to the same List.
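A hedged sketch of pipelining a batch of pops in a single round trip (the queue name is made up):

```java
List<Object> results = stringRedisTemplate.executePipelined(new RedisCallback<Object>() {
  public Object doInRedis(RedisConnection connection) throws DataAccessException {
    StringRedisConnection stringRedisConn = (StringRedisConnection) connection;
    for (int i = 0; i < 5; i++) {
      stringRedisConn.rPop("myqueue"); // replies are collected by the pipeline, not returned here
    }
    return null; // the callback must return null; results come back from the pipeline
  }
});
```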
5.12. Redis Scripting
Redis versions 2.6 and higher provide support for execution of Lua scripts through the eval and evalsha commands. Spring Data Redis provides a high-level abstraction for script execution that handles serialization and uses the Redis script cache automatically. Scripting is an ideal fit when you need to run a set of commands atomically and the behavior of one command is influenced by the result of another.
@Bean
public RedisScript<Boolean> script() {
  return RedisScript.of(new ClassPathResource("META-INF/scripts/checkandset.lua"), Boolean.class);
}

public class Example {

  @Autowired
  RedisScript<Boolean> script;

  public boolean checkAndSet(String expectedValue, String newValue) {
    return redisTemplate.execute(script, singletonList("key"), expectedValue, newValue);
  }
}
-- checkandset.lua
local current = redis.call('GET', KEYS[1])
if current == ARGV[1]
  then redis.call('SET', KEYS[1], ARGV[2])
  return true
end
return false
Scripts can also be run on a schedule by using the Spring Task and Scheduler abstractions. See the Spring Framework documentation for more details.
5.13. Support Classes
Package
org.springframework.data.redis.support offers various reusable components that rely on Redis as a backing store. Currently, the package contains various JDK-based interface implementations on top of Redis, such as atomic counters and JDK Collections.
By default, cache write operations are sent to Redis without explicit locking. The locking counterpart prevents command overlap by setting an explicit lock key and checking against presence of this key, which leads to additional requests and potential command wait times.
It is possible to opt in to the locking behavior as follows:
RedisCacheManager cm = RedisCacheManager.build(RedisCacheWriter.lockingRedisCacheWriter())
  .cacheDefaults(defaultCacheConfig())
  ...
@Component
@ConfigurationProperties(prefix = "spring.redis.cluster")
public class ClusterConfigurationProperties {

  /*
   * spring.redis.cluster.nodes[0] = 127.0.0.1:7379
   * spring.redis.cluster.nodes[1] = 127.0.0.1:7380
   * ...
   */
  List<String> nodes;

  /**
   * Get initial collection of known cluster nodes in format {@code host:port}.
   *
   * @return
   */
  public List<String> getNodes() {
    return nodes;
  }

  public void setNodes(List<String> nodes) {
    this.nodes = nodes;
  }
}

@Configuration
public class AppConfig {

  /**
   * Type safe representation of application.properties
   */
  @Autowired
  ClusterConfigurationProperties clusterProperties;

  public @Bean RedisConnectionFactory connectionFactory() {
    return new JedisConnectionFactory(
        new RedisClusterConfiguration(clusterProperties.getNodes()));
  }
}
The following example shows how commands execute against the cluster:
[email protected]:7379 > cluster nodes

6b38bb... 127.0.0.1:7379 master - 0 0 25 connected 0-5460                (1)
7bb78c... 127.0.0.1:7380 master - 0 1449730618304 2 connected 5461-10922 (2)
164888... 127.0.0.1:7381 master - 0 1449730618304 3 connected 10923-16383 (3)
b8b5ee... 127.0.0.1:7382 slave 6b38bb... 0 1449730618304 25 connected    (4)
RedisClusterConnection connection = connectionFactory.getClusterConnection();
A cross-slot request such as MGET executes multiple parallel GET commands against the slot-serving nodes and again returns an accumulated result. This is less performant than single-slot execution and therefore should be used with care.
8. Redis Repositories
Working with Redis Repositories lets you seamlessly convert and store domain objects in Redis Hashes, apply custom mapping strategies, and use secondary indexes.
8.1. Usage
@Configuration
@EnableRedisRepositories
public class ApplicationConfig {

  @Bean
  public RedisConnectionFactory connectionFactory() {
    return new JedisConnectionFactory();
  }

  @Bean
  public RedisTemplate<?, ?> redisTemplate() {
    RedisTemplate<byte[], byte[]> template = new RedisTemplate<byte[], byte[]>();
    return template;
  }
}
Given the preceding setup, we can inject
PersonRepository into our components, as shown in the following example:
@Autowired
PersonRepository repo;

public void basicCrudOperations() {

  Person rand = new Person("rand", "al'thor");
  rand.setAddress(new Address("emond's field", "andor"));

  repo.save(rand);          (1)
  repo.findOne(rand.getId()); (2)
  repo.count();             (3)
  repo.delete(rand);        (4)
}
8.
8.
8); } void setRemarks(String remarks) { (5) this.remarks = remarks; } }
8.
8.
8.3. Object-to-Hash Mapping
The Redis Repository support persists Objects to Hashes. This requires an Object-to-Hash conversion which is done by a
RedisConverter. The default implementation uses
Converter for mapping property values to and from Redis native
byte[].
Given the
Person type from the previous sections, the default mapping looks like the following:
_class = org.example.Person                  (1)
id = e2c7dcee-b8cd-4424-883e-736ce564363e
firstname = rand                             (2)
lastname = al'thor
address.city = emond's field                 (3)
address.country = andor
@WritingConverter
public class AddressToBytesConverter implements Converter<Address, byte[]> {

  private final Jackson2JsonRedisSerializer<Address> serializer;

  public AddressToBytesConverter() {
    serializer = new Jackson2JsonRedisSerializer<Address>(Address.class);
    serializer.setObjectMapper(new ObjectMapper());
  }

  @Override
  public byte[] convert(Address value) {
    return serializer.serialize(value);
  }
}

@ReadingConverter
public class BytesToAddressConverter implements Converter<byte[], Address> {

  private final Jackson2JsonRedisSerializer<Address> serializer;

  public BytesToAddressConverter() {
    serializer = new Jackson2JsonRedisSerializer<Address>(Address.class);
    serializer.setObjectMapper(new ObjectMapper());
  }

  @Override
  public Address convert(byte[] value) {
    return serializer.deserialize(value);
  }
}
Using these converters, the complex Address object is written as a single binary value instead of individual hash entries.
@WritingConverter
public class AddressToMapConverter implements Converter<Address, Map<String, byte[]>> {

  @Override
  public Map<String, byte[]> convert(Address source) {
    return singletonMap("ciudad", source.getCity().getBytes());
  }
}

@ReadingConverter
public class MapToAddressConverter implements Converter<Map<String, byte[]>, Address> {

  @Override
  public Address convert(Map<String, byte[]> source) {
    return new Address(new String(source.get("ciudad")));
  }
}
Using these converters, the Address is written under the custom key "ciudad".
8.4. Keyspaces
@Configuration
@EnableRedisRepositories(keyspaceConfiguration = MyKeyspaceConfiguration.class)
public class ApplicationConfig {

  //... RedisConnectionFactory and RedisTemplate Bean definitions omitted

  public static class MyKeyspaceConfiguration extends KeyspaceConfiguration {

    @Override
    protected Iterable<KeyspaceSettings> initialConfiguration() {
      return Collections.singleton(new KeyspaceSettings(Person.class, "people"));
    }
  }
}
8.5. Secondary Indexes
Secondary indexes are used to enable lookup operations based on native Redis structures. Values are written to the according indexes on every save and are removed when objects are deleted or expire.
SADD persons:firstname:aviendha a9d4b3a0-50d3-4538-a2fc-f7fc2581ee56
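On the entity side, the index is declared with an annotation. A minimal sketch (the keyspace and property names are illustrative):

```java
@RedisHash("persons")
public class Person {

  @Id String id;

  @Indexed String firstname; // a Set of entity ids is maintained per distinct value
}
```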
It is also possible to have indexes on nested elements. Assume
Address has a
city property that is annotated with
@Indexed. In that case, once
person.address.city is not
null, we have Sets for each city. Indexes can also be defined in the configuration, as the following example shows:
@Configuration
@EnableRedisRepositories(indexConfiguration = MyIndexConfiguration.class)
public class ApplicationConfig {

  //... RedisConnectionFactory and RedisTemplate Bean definitions omitted

  public static class MyIndexConfiguration extends IndexConfiguration {

    @Override
    protected Iterable<IndexDefinition> initialConfiguration() {
      return Collections.singleton(new SimpleIndexDefinition("people", "firstname"));
    }
  }
}
Again, as with keyspaces, you can programmatically configure indexes, as shown in the following example:
@Configuration
@EnableRedisRepositories
public class ApplicationConfig {

  //... RedisConnectionFactory and RedisTemplate Bean definitions omitted

  @Bean
  public RedisMappingContext keyValueMappingContext() {
    return new RedisMappingContext(
        new MappingConfiguration(new KeyspaceConfiguration(), new MyIndexConfiguration()));
  }

  public static class MyIndexConfiguration extends IndexConfiguration {

    @Override
    protected Iterable<IndexDefinition> initialConfiguration() {
      return Collections.singleton(new SimpleIndexDefinition("people", "firstname"));
    }
  }
}
public class Person {
  Address address;
  // ... other properties omitted
}

public class Address {

  @GeoIndexed
  Point location;
  // ... other properties omitted
}

public interface PersonRepository extends CrudRepository<Person, String> {

  List<Person> findByAddressLocationNear(Point point, Distance distance); (1)

  List<Person> findByAddressLocationWithin(Circle circle);                (2)
}

Person rand = new Person("rand", "al'thor");
rand.setAddress(new Address(new Point(13.361389D, 38.115556D)));

repository.save(rand);                                                    (3)
repository.findByAddressLocationNear(new Point(15D, 37D), new Distance(200)); (4)
In the preceding example, the longitude and latitude values are stored by using GEOADD, which uses the object’s id as the member’s name. The finder methods allow usage of Circle or Point and Distance combinations for querying those values.
GEOADD that use the object’s
id as the member’s name. The finder methods allow usage of
Circle or
Point, Distance combinations for querying those values.
8.6. Query by Example
Examples are ideally executed against indexed properties.
8.7. Time To Live
Expirations can be set on a property or method annotated with @TimeToLive, as the following example shows:
public class TimeToLiveOnProperty {

  @Id
  private String id;

  @TimeToLive
  private Long expiration;
}

public class TimeToLiveOnMethod {

  @Id
  private String id;

  @TimeToLive
  public long getTimeToLive() {
    return new Random().nextLong();
  }
}
The repository implementation ensures subscription to Redis keyspace notifications via
RedisMessageListenerContainer.
When the expiration is set to a positive value, the corresponding
EXPIRE command is executed. Spring Data Redis repositories.
By default, the key expiry listener is disabled when initializing the application. The startup mode can be adjusted in
@EnableRedisRepositories or
RedisKeyValueAdapter to start the listener with the application or upon the first insert of an entity with a TTL. See
EnableKeyspaceEvents for possible values.
The
RedisKeyExpiredEvent holds a copy of the expired domain object as well as the key.
8.8. Persisting References
8.9. Persisting Partial Updates
PartialUpdate<Person> update = new PartialUpdate<Person>("e2c7dcee", Person.class)
  .set("firstname", "mat")               (1)
  .set("address.city", "emond's field")  (2)
  .del("age");                           (3)

template.update(update);

update = new PartialUpdate<Person>("e2c7dcee", Person.class)
  .set("address", new Address("caemlyn", "andor"))      (4)
  .set("attributes", singletonMap("eye-color", "grey")); (5)

template.update(update);

update = new PartialUpdate<Person>("e2c7dcee", Person.class)
  .refreshTtl(true)          (6)
  .set("expiration", 1000);

template.update(update);
8.10. Queries and Query Methods
You can also run custom queries against the secondary indexes. To do so, provide a
RedisCallback that returns a single or
Iterable set of
id values, as shown in the following example:
String user = //...

List<RedisSession> sessionsByUser = template.find(new RedisCallback<Set<byte[]>>() {

  public Set<byte[]> doInRedis(RedisConnection connection) throws DataAccessException {
    return connection
        .sMembers("sessions:securityContext.authentication.principal.username:" + user);
  }
}, RedisSession.class);
The following table provides an overview of the keywords supported for Redis and what a method containing that keyword essentially translates to:
8.11. Redis Repositories Running on a Cluster
8.12. CDI Integration
class RedisOperationsProducer {

  @Produces
  RedisConnectionFactory redisConnectionFactory() {
    JedisConnectionFactory jedisConnectionFactory =
        new JedisConnectionFactory(new RedisStandaloneConfiguration());
    jedisConnectionFactory.afterPropertiesSet();
    return jedisConnectionFactory;
  }

  void disposeRedisConnectionFactory(@Disposes RedisConnectionFactory redisConnectionFactory) throws Exception {
    if (redisConnectionFactory instanceof DisposableBean) {
      ((DisposableBean) redisConnectionFactory).destroy();
    }
  }

  @Produces
  @ApplicationScoped
  RedisOperations<byte[], byte[]> redisOperationsProducer(RedisConnectionFactory redisConnectionFactory) {
    RedisTemplate<byte[], byte[]> template = new RedisTemplate<byte[], byte[]>();
    template.setConnectionFactory(redisConnectionFactory);
    template.afterPropertiesSet();
    return template;
  }
}
The necessary setup can vary, depending on your JavaEE environment.
The Spring Data Redis CDI extension picks up all available repositories as CDI beans and creates a proxy for a Spring Data repository whenever a bean of a repository type is requested by the container. Obtaining an instance of a Spring Data repository is then a matter of declaring an injected property.
A Redis Repository requires
RedisKeyValueAdapter and
RedisKeyValueTemplate instances. These beans are created and managed by the Spring Data CDI extension if no provided beans are found. You can, however, supply your own beans to configure the specific properties of
RedisKeyValueAdapter and
RedisKeyValueTemplate.
8.13. Redis Repositories Anatomy
So yeah, I figured out how to write a program that will print digits in reverse (with a small help from the internet, but I get it). Anyway, I want to write a program that will print the digits in their regular order.
def num_digits(n):
    count = 0
    while count <= abs(n):
        count = count + 1
        n = abs(n)/10
    return count

def print_digits(n):
    while n > 0:
        print n
        # my "plan" is to go with n / (10**num_digits(n)) and that taking
        # taking it like decimal with one decimal place and taking that value)
        # For example: 123 / (10**3) = 0,123. and than fix it so that is 0,1.
        # after this 0,1 * 10 = 1 and here is our 1st digit
        # how u guy understand how i want this to work
        # rest of code
        n = n/10
    print n  # not quite sure for this line
p.s. be gentle i'm learning yet | https://www.daniweb.com/programming/software-development/threads/348709/printing-digits-in-their-oder | CC-MAIN-2018-43 | en | refinedweb |
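For what it's worth, here is one way to realize the plan from the comments, peeling off the most significant digit with integer division (a sketch in Python 3 syntax; the function name is made up):

```python
def digits_in_order(n):
    """Return the digits of n from most to least significant."""
    n = abs(n)
    divisor = 1
    while n // divisor >= 10:   # find the largest power of 10 not exceeding n
        divisor *= 10
    result = []
    while divisor > 0:
        result.append(n // divisor)  # leading digit
        n %= divisor                 # drop that digit
        divisor //= 10
    return result

for d in digits_in_order(123):
    print(d)   # prints 1, then 2, then 3
```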
Why java has 2 ways to create thread ?
Prabhat Ranjan
Ranch Hand
Posts: 397
Why java has 2 ways to create Threads.
1) Thread Class
2) Runnable INTERFACE
i) Thread class won't allow you to extend more class while interface allows you
ii) Runnable has only one method need to implement mandatory while Thread has methods other than Run also.
What else benefits we have ?
Regards,
Prabhat
Java has only 1 way to create a thread. With the thread class (or a child of it). However it can be used in combination with the runnable interface.
It's just an abstraction of the code to be executed. It's generally discouraged to extend the Thread class. The Thread class has a lot of overhead
and the Runnable interface doesn't.
"Any fool can write code that a computer can understand. Good programmers write code that humans can understand." --- Martin Fowler
Please correct my English.
Prabhat Ranjan
Ranch Hand
Posts: 397
I have a few comments on this point:
1) The Runnable interface has only one method, which you must implement.
2) If you need other methods like suspend(), resume(), sleep(), join(), yield() and stop(), then
go for extending the Thread class.
3) Extending the Thread class will make your class unable to extend other classes, because of the single inheritance feature in Java.
4) If you want to execute the run() method multiple times, then it's better to use Runnable.
public class testing implements Runnable {

    public void run() {
        System.out.println("Hello Run --->");
    }

    public static void main(String args[]) {
        testing testing = new testing();
        Thread thd = new Thread(testing);
        thd.run();
        thd.run();
    }
}
The Thread class, on the other hand, doesn't allow you to call the start() method more than once;
a second call will throw IllegalThreadStateException.
5) Thread Class actually implements Runnable interface internally.
Prabhat Ranjan wrote:2) If you need other methods like suspend() resume() sleep() join() yield() and stop() then go for extending class Thread
Please don't. suspend(), resume() and stop() are deprecated and should not be used. sleep() and yield() are static and cannot be overridden. join() is final and cannot be overridden.
By just instantiating a new Thread object you can make use of join() without ever needing to extend it. You don't need to extend a class to be able to call its public methods.
5) Thread Class actually implements Runnable interface internally.
Not just internally, also externally
The implements clause is not an implementation detail but part of its API. But yes, it does implement Runnable.
Prabhat Ranjan
Ranch Hand
Posts: 397
Prabhat Ranjan wrote:About point 2 , i have clarification that if methods are deprecated means we cann't use in real world programming.
Well, you can. You just shouldn't. They are deprecated for a reason; either they can cause a lot of problems (like Thread.resume(), Thread.suspend(), Thread.stop()) or there are better alternatives.
What else benefits on using Runnable over Thread ?
Aren't points 3 and 4 enough? No? Then how about sharing one Runnable instance among multiple Threads at the same time?
I think this point has already been made, but to reiterate: According to Cay S. Horstmann and Gary Cornell in Core Java 2: Volume II - Advanced Features...
...forming a subclass of the Thread class ... is no longer recommended. You should decouple the task that is to be run in parallel from the mechanism of running it.
Matthew Brown
Bartender
Posts: 4568
I'd look at it this way. What is a Thread? It represents a process running on the machine, that can execute some actions. What is a Runnable? It represents an action to execute. These are really two completely different concepts - so general object-oriented principles suggest that using two different objects is likely to be preferable.
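A minimal sketch of that decoupling: one Runnable (the action) shared by several Thread objects (the runners). The class and method names here are made up for illustration:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class Demo {

    // One Runnable instance can back any number of Threads.
    static int runSharedTask(int threadCount) throws InterruptedException {
        AtomicInteger counter = new AtomicInteger();
        Runnable task = counter::incrementAndGet; // the action, decoupled from any Thread

        Thread[] workers = new Thread[threadCount];
        for (int i = 0; i < threadCount; i++) {
            workers[i] = new Thread(task, "worker-" + i);
            workers[i].start(); // start(), not run(): run() would execute on the caller's thread
        }
        for (Thread worker : workers) {
            worker.join(); // no need to extend Thread to use join()
        }
        return counter.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runSharedTask(2)); // prints 2
    }
}
```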
| https://coderanch.com/t/478528/java/java-ways-create-thread | CC-MAIN-2016-44 | en | refinedweb |
Quick Sort is a sorting algorithm. It is also referred as partition exchange sort.
It is a divide and conquer algorithm. Quick sort first divides a large list into two smaller sub-lists: the low elements and the high elements. Quick sort can then recursively sort the sub-lists.
Complexity of Quick sort
In the best and average cases it gives a time complexity of O(n log n), and in the worst case a time complexity of O(n*n).
Quick sort is often faster in practice than other algorithms.
The best case occurs when the partition occurs exactly in the middle of the array and then the left and the right part will have same size.
Worst case occurs when the Minimum or Maximum element is chosen as the pivot.
Average case: the splits are balanced enough on average that the running time stays O(n log n).
Quick Sort Algorithm:
* Quick sorting involves these steps:
Pick an element, called a pivot, from the list.
Reorder the list so that all elements with values less than the pivot come before the pivot, while all elements with values greater than the pivot come after it (equal values can go either way). After this partitioning, the pivot is in its final position.
Recursively apply the above steps to the sub-list of elements with smaller values and separately to the sub-list of elements with greater values.
The base case of the recursion is lists of size zero or one, which never need to be sorted.
void quicksort(int *input, int p, int r) // The quicksort recursive function
{
    if ( p < r )
    {
        int j = partition(input, p, r);
        quicksort(input, p, j-1);
        quicksort(input, j+1, r);
    }
}
int main()
{
int input[INPUT_SIZE] = {50, 70, 80, 10, 30, 20, 90, 40, 60};
cout << "\nInput: ";
print(input);
quicksort(input, 0, 9);
cout << "Output: ";
print(input);
return 0;
}
Quick Sort Algorithm
What is Quick Sort Algorithm?
#include <iostream>
using namespace std;
const int INPUT_SIZE = 9;
void print(int *input) // A simple print function
{
for ( int i = 0; i < INPUT_SIZE; i++ )
cout << input[i] << " ";
cout << endl;
}
int partition(int* input, int p, int r) // The partition function
{
int pivot = input[r];
while ( p < r )
{
while ( input[p] < pivot )
p++;
while ( input[r] > pivot )
r--;
if ( input[p] == input[r] )
p++;
else if ( p < r )
{
int tmp = input[p];
input[p] = input[r];
input[r] = tmp;
}
}
return r;
}
Quick Sort Example
Continue ..
Output Result
Data Structure Using C++
References ..

Preparation:
Habiba Mohammad
Shaymah Hmoud

Supervision:
Lubna Maqbool

Thanks :)
Comparison with heapsort:
both algorithms have O(n log n) complexity.
quicksort usually runs faster (it does not have to maintain a heap tree).
the speed of quick sort is not guaranteed.
Comparison with mergesort:
mergesort guarantees O(n log n) time, however it requires additional memory of size N.
quicksort does not require additional memory, however its speed is not guaranteed.
usually mergesort is not used for main memory sorting, only for external memory sorting.
Advantages:
One of the fastest algorithms on average.
Does not need additional memory (the sorting takes place in the array - this is called in-place processing). Compare with mergesort: mergesort needs additional memory for merging.
Disadvantages:
The worst-case complexity is O(N^2)
Applications:
Commercial applications use Quicksort - generally it runs fast, with no additional memory requirements.
| https://prezi.com/i5khmqbwcse0/quicksort-algorithm/ | CC-MAIN-2016-44 | en | refinedweb |
pthread_condattr_init - Initializes a condition variable attributes object.
DECthreads POSIX 1003.1c Library (libpthread.so)
#include <pthread.h>
int pthread_condattr_init(
pthread_condattr_t *attr);
Interfaces documented on this reference page conform to industry standards as follows:
IEEE Std 1003.1c-1995, POSIX System Application Program Interface
Address of the condition variable attributes object to be initialized.
This routine initializes the condition variable attributes object specified by the (attr) argument with a set of default attribute values.
When an attributes object is used to create a condition variable, the values of the individual attributes determine the characteristics of the new condition variable. Attributes objects act as additional arguments to condition variable creation. Changing individual attributes in an attributes object does not affect any condition variables that were previously created using that attributes object.
You can use the same condition variable attributes object in successive calls to pthread_cond_init(3), from any thread. If multiple threads can change attributes in a shared attributes object, your program must use a mutex to protect the integrity of that attributes object.
Results are undefined if this routine is called and the attr argument specifies a condition variable attributes object that is already initialized.
If an error condition occurs, this routine returns an integer value indicating the type of error. Possible return values are as follows: 0: Successful completion. [ENOMEM]: Insufficient memory exists to initialize the condition variable attributes object.
None
Functions: pthread_condattr_destroy(3), pthread_cond_init(3)
Manuals: Guide to DECthreads and Programmer's Guide
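The page itself carries no example; as an illustrative sketch (make_cond is a made-up helper, not part of the manual), the attributes object is initialized with defaults, used to create one condition variable, and then destroyed:

```c
#include <pthread.h>
#include <assert.h>

/* Create a condition variable from a default-initialized attributes
 * object, then dispose of the attributes object. */
int make_cond(pthread_cond_t *cond)
{
    pthread_condattr_t attr;
    int rc = pthread_condattr_init(&attr);  /* fill in default values */
    if (rc != 0)
        return rc;                          /* ENOMEM, for example */
    rc = pthread_cond_init(cond, &attr);
    pthread_condattr_destroy(&attr);        /* cond keeps its own settings */
    return rc;
}
```

Destroying the attributes object right after pthread_cond_init is safe because the condition variable does not reference it afterward.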
| http://backdrift.org/man/tru64/man3/pthread_condattr_init.3.html | CC-MAIN-2014-10 | en | refinedweb |
Filtering requests based on their HTTP method
HTTP offers a mechanism by which a server can reject a request based on the method used. For instance you may want to force a page handler to be applied only when a POST request has been made and inform the user-agent of that constraint.
Using the builtin MethodDispatcher
CherryPy 3 comes with a MethodDispatcher which allows your page handlers to be named after the HTTP method they will handle: GET, POST, HEAD, etc. For instance:
import cherrypy

class Root:
    exposed = True

    def GET(self):
        return "some content"

    def POST(self, name, password):
        # do something
        return "created"

if __name__ == "__main__":
    cherrypy.quickstart(Root(), config={
        '/': {'request.dispatch': cherrypy.dispatch.MethodDispatcher()}
    })
Using the builtin dispatcher
In the previous section we used a different dispatcher dedicated to handling the HTTP method of the request. However it may happen that you cannot use that dispatcher and would rather keep the default one. In that case you can use a tool such as:
import cherrypy

def http_methods_allowed(methods=['GET', 'HEAD']):
    method = cherrypy.request.method.upper()
    if method not in methods:
        cherrypy.response.headers['Allow'] = ", ".join(methods)
        raise cherrypy.HTTPError(405)

cherrypy.tools.allow = cherrypy.Tool('on_start_resource', http_methods_allowed)

class Root:

    @cherrypy.expose
    @cherrypy.tools.allow()
    def echo(self, msg):
        return msg

    @cherrypy.expose
    @cherrypy.tools.allow(methods=['POST'])
    def drop(self, msg):
        return msg

if __name__ == '__main__':
    cherrypy.quickstart(Root())
| http://tools.cherrypy.org/wiki/HTTPMethodFiltering?version=1 | CC-MAIN-2016-44 | en | refinedweb |
Red Hat Bugzilla – Bug 52047
RFE: Install dies on machines i810 + 2nd video card
Last modified: 2007-04-18 12:36:02 EDT
Hi guys,
I like the fix you had for 7.2, but figured I should put in another type of
solution as a RFE for 8.0.
Machine is
Dell Optiplex GX100
128 Mb Ram
Voodoo 3 3000 w/16Mb
Creative Labs SB Live! EMU10000
Adaptec 2940
CDrom writer on SCSI chain
Built in
i810 video *always on*
3Com Corporation 3c905C-TX [Fast Etherlink]
The first problem is that the system seems to get confused with the initial
frame buffer mode and goes into an ultra lowres 320x200. The previous
workaround was to do a text install which of course dropped off functionality.
This time the install worked by doing a nofb as the installer tried the
VGA16 XFree86.. and I limped along in that (yeah)
I figured that since it saw I had two video cards.. would it be possible
for it to do something like this on an expert install:
Detected 2 video cards
Which card do you wish to be primary for install?
[] Voodoo
[] i810
[] vga16
The framebuffer problem.. I dont know what to do with.. I dont know if it
is a Voodoo problem or an i810 voodoo conflict problem.
How does text mode drop off functionality? It's IDENTICAL to GUI mode now.
We do not support motherboards currently where the i810 cannot be disabled if
another card is present. You will have to manually configure X for the card in
use. We may reconsider for future releases.
BTW, what does kudzu say when you:
cd /usr/lib/python1.5/site-packages
$python
import kudzu
print kudzu.probe(kudzu.CLASS_VIDEO, kudzu.BUS_UNSPEC, kudzu.PROBE_ALL)
I figured it would be several releases, which is why the RFE was put on it..
Here is what kudzu says
>>> print kudzu.probe(kudzu.CLASS_VIDEO, kudzu.BUS_UNSPEC, kudzu.PROBE_ALL)
[Desc: Intel Corporation|82810-DC100 CGC [Chipset Graphics Controller]
Driver: Card:Intel 810
Device: None
, Desc: 3Dfx Interactive, Inc.|Voodoo 3
Driver: Card:Voodoo3 (generic)
Device: None
]
I'll try to pick this up for the next release.
*** Bug 16123 has been marked as a duplicate of this bug. ***
Deferring to future release.
Changed to 'CLOSED' state since 'RESOLVED' has been deprecated. | https://bugzilla.redhat.com/show_bug.cgi?id=52047 | CC-MAIN-2016-44 | en | refinedweb |
vendor_id : AuthenticAMD
cpu family : 15
model : 5
model name : AMD Opteron(tm) Processor 144
stepping : 8
cpu MHz : 1799.998
cache size : 1024 KB

Another test, where a 64kB string was to be returned, resulted in 500 req/sec while 200 req/sec were possible using the Iapp approach, so when the size of the content increases, the difference actually decreases.

The interesting part of this approach is the actual ease with which things can be implemented. The following is a template for doing a login/logout system:
itcl::class iapp::rh::login {
    protected variable loginUserID ""
    protected variable loginUserType ""

    protected method serializeInt {} {
        set serializeData(rh::login::loginUserID) $loginUserID
        set serializeData(rh::login::loginUserType) $loginUserType
    }

    protected method deserializeInt {} {
        if {[info exists serializeData(rh::login::loginUserID)]} {
            set loginUserID $serializeData(rh::login::loginUserID)
        } else {
            set loginUserID ""
        }
        if {[info exists serializeData(rh::login::loginUserType)]} {
            set loginUserType $serializeData(rh::login::loginUserType)
        } else {
            set loginUserType ""
        }
    }

    public method handleLogin {queryprefix type} {
        upvar querydata querydata
        if {($loginUserID == "") &&
            ([info exists querydata(${queryprefix}login)] ||
             [info exists querydata(${queryprefix}login.x)]) &&
            ([info exists querydata(${queryprefix}username)] &&
             [info exists querydata(${queryprefix}password)])
        } {
            set loginUserID [loginAuthorize \
                $querydata(${queryprefix}username) \
                $querydata(${queryprefix}password)]
            if {$loginUserID != ""} {
                set loginUserType $type
            }
            if {$loginUserID == ""} {
                return loginFailed
            } else {
                return login
            }
        } elseif {($loginUserID != "") &&
            ([info exists querydata(${queryprefix}logout)] ||
             [info exists querydata(${queryprefix}logout.x)])
        } {
            set loginUserID ""
            set loginUserType ""
            return logout
        } elseif {($loginUserID != "") && (![string equal $type $loginUserType])} {
            # do not allow users of different types - logout to be sure
            set loginUserID ""
            set loginUserType ""
            return logout
        }
        return ""
    }

    ## virtual methods
    protected method loginAuthorize {username password} {
        return ""
    }
}
itcl::class ::sessionHandler {
    inherit \
        ::iapp::serialize \
        ::iapp::requesthandler \
        ::iapp::rh::login

    public method handleRequest {} {
        switch -- $suffix {
            access {
                set cmd [handleLogin iapp_ main]
                set html "CMD=$cmd<br />"
                append html "<form method=\"POST\" action=\"/app/access\">"
                if {$loginUserID == ""} {
                    append html "User: <input name=\"iapp_username\"><br />"
                    append html "Pass: <input type=\"password\" name=\"iapp_password\"><br />"
                    append html "<input type=\"submit\" name=\"iapp_login\" value=\"LOG IN\"><br />"
                } else {
                    append html "<h1>Logged in as $loginUserID</h1><br />"
                    append html "<input type=\"submit\" name=\"iapp_logout\" value=\"LOG OUT\"><br />"
                }
                append html "</form>"
                returnData 200 text/html $html
                return
            }
        }
        returnData 200 text/html "Page $suffix not found"
    }

    protected method loginAuthorize {username password} {
        if {[regexp "\[A-Za-z0-9\]{1,8}" $username] &&
            [string equal "${username}123" $password]} {
            return id-$username
        }
        return ""
    }
}

While this is not the most elegant code for now, it serves as a proof of concept. Another interesting idea is that the same iapp::factory class can be used to handle other thread-local data (ie. system users that could store their local data and provide some methods that work based on their privileges).

For now, future of this project remains a huge unknown, but I guess I will continue to investigate this possibility and work on it if I find this interesting enough.

Original posting on AOLserver mailing list:
(...) Previously (with 3.4) I've been using the following:

1/ code that allowed dynamic loading of "packages"; the code was contained in its own namespace and was loaded/unloaded on demand with reloading on changes.

2/ nsdqe provided session handling - setting the cookie and using ns_set for data plus serialization to harddrive. This was handling all the session stuff plus logging in/logging out.

3/ nscache and similar code to handle cache'ing

This seems to work ok, but for a generic approach what I really lack is OO - ie so that someone could develop a generic "ecommerce" class and inherit for specific implementations. This can be done writing plain procs and passing proper args/configuration variables, but I find this task a job for actual OO. This also has a downside that ns_cache cannot actually cache Tcl_Obj so if I would get a result from the db this would get serialized to a string and deserialized on first use. I did develop some cache mechanism that does "deepcopy" of the objects and then caches the Tcl_Obj themselves for each thread, this improved the speed a lot.

Right now I'm wondering the pros and cons of the following solution (using 4.0.10, 8.4.12, thread 2.6.3 and some other extensions).
1/ similar code, but one that would initialize a separate thread for each package and load all the necessary code in there - the package code would be Itcl classes and plain Tcl code (in the future this could use VFS and maybe tbcload to allow redistribution of the code)

2/ session mechanism (which would require sessions to initialize main website engine) that would create an instance of an application object per each session - so that each user would be an instance

3/ most data would be cached using Tcl_Objs (maybe even not using nscache)

4/ background jobs could also use the same thread, using common non-AOLserver methods (after/fileevent etc)

The objects would allow serialization/deserialization by supplying name-value data that would be saved by the session manager. Session would only allow X number of concurrent sessions and serialize+delete the oldest ones when new ones need to be created.

Pros:
* much better app designing framework
* using event loop for packages
* one thread per package would solve most multi user scenarios

Cons:
* any change in the package code would require serializing all instances and reloading the package (but it's not common to experiment with on production servers)
* non-session aware browsers will not be able to use most of the functionality (some functionality could be done by supplying a generic instance for all non-session aware)
* it would be easy to do a DoS by generating a lot of sessions (since serialization/deserialization would take a lot of time)
* sync DB operations could cause a freeze for many customers (especially with long lasting queries)
[ Category Application | Category Object Orientation | Category Package | Category Internet | Category TclHttpd ] | http://wiki.tcl.tk/15701 | CC-MAIN-2016-44 | en | refinedweb |
Let's say I was selling software and I didn't want it to be leaked (of course, using Python wouldn't be the greatest choice as all the code is open, but let's just go with it). I would want to get a unique ID that only the user's PC has and store it within a list in my script. Then, every time someone executed the script, it would iterate through the list and check whether that unique ID matches one from the list.
Would this even be possible and if so how would one implement it in Python?
Python has a unique-ID library, uuid.
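One caveat: uuid.uuid4() generates a random value on each call, so it identifies a customer record rather than the PC itself. If you want an ID derived from the machine, uuid.getnode() (based on the MAC address) is one option. A hedged sketch (machine_id and is_licensed are illustrative names, not from the original answer):

```python
import hashlib
import uuid

def machine_id():
    """Derive a stable ID from the MAC address via uuid.getnode().

    Caveats: getnode() can fall back to a random number on systems where
    no MAC can be read, and MAC addresses can be spoofed, so treat this
    as a deterrent rather than real protection.
    """
    mac = uuid.getnode()
    return hashlib.sha256(str(mac).encode()).hexdigest()

def is_licensed(allowed_ids):
    # The check described in the question: is this machine's ID on the list?
    return machine_id() in allowed_ids
```

Hashing the MAC keeps the raw address out of the shipped list, though anyone reading the script can still replicate the hash.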
import uuid

# Create a uuid
customer_id = str(uuid.uuid4())

# Store in a safe secure place
software_ids = [customer_id]

can_run = customer_id in software_ids
print(can_run)
| https://codedump.io/share/A16BzXfBtieP/1/implementing-hwid-checking-system-for-python-scripts | CC-MAIN-2016-44 | en | refinedweb |
Grails allows you to create RESTful APIs or Web applications much more quickly than other JVM frameworks. You may be surprised to know, though, that Grails also provides a number of handy features to help you monitor the health and performance of your applications.
Some of these features you get for free while others require a bit of work. This post should give you a taste of what you can do with very little work on your part.
When you want to start monitoring your Grails applications, the first place to start should be the Spring Boot production-ready actuators. Despite the odd terminology, these actuators provide a lot of handy features right out of the box. Actuators provide data on your running application through JSON endpoints.
Some of the actuator endpoints can help with diagnosing configuration issues, such as /autoconfig, /configprops, and /env. These aren't related to monitoring your applications but they can help if you are having issues, especially with Spring configuration.

The /configprops endpoint lists all the Spring configuration properties but doesn't include all the Grails-specific properties. The /env endpoint provides more information including Grails configuration settings. I've used these actuators to help debug Spring Security configuration issues.

In addition, the /info endpoint provides some basic information on your application, such as the Grails version number and your application's version number. You can extend the information presented by writing Spring beans that implement the InfoContributor interface.

The endpoints most useful for monitoring include /health and /metrics.
Checking Application Health
The /health endpoint is designed to tell you whether your application is healthy or not. By default, it just tells you whether the application is up or not, along with information on disk space.
You can see this in action by creating a new Grails application and running it.
Create the application (using Grails 3.1.9):
grails create-app monitor --profile=web
Then, you need to edit the grails-app/conf/application.yml file, since the endpoints are disabled by default. Change the enabled setting to true for the endpoints:
# Spring Actuator Endpoints are Disabled by Default
endpoints:
    enabled: true
    jmx:
        enabled: true
Now, run the application:
grails run-app
Go to the /health endpoint, by default mapped to http://localhost:8080/health in your application.
You will see output like the following:
{
  "status":"UP",
  "diskSpace": {
    "status":"UP",
    "total":250140434432,
    "free":203297030144,
    "threshold":10485760
  }
}
Typically though, you'd want a health check to tell you if a back-end service connection is working, or report on other possible conditions with your application. You can do this by defining beans that implement the HealthIndicator interface. Spring Boot comes with some handy built-in indicators, including checks for disk space, and connectivity to systems such as Cassandra, MongoDB, SOLR, and Rabbit MQ, as well as a configured Grails DataSource.
You can write your own as well, for example to report the health of a back-end accounting system, you can use the following as a guide:
package com.opi.monitor

import org.springframework.boot.actuate.health.Health
import org.springframework.boot.actuate.health.HealthIndicator

/**
 * Checks back-end accounting system's health.
 */
class AccountingHealthMonitor implements HealthIndicator {

    Health health() {
        if (isHealthy()) {
            return Health.up().build()
        }
        return Health.down().withDetail("Accounting System Status", "Unreachable").build()
    }

    /**
     * In real life this method would check the health of the back-end system.
     */
    boolean isHealthy() {
        false
    }
}
Define this class as a Spring bean in your grails-app/conf/spring/resources.groovy:
import com.opi.monitor.AccountingHealthMonitor

beans = {
    accountingHealthMonitor(AccountingHealthMonitor)
}
Since this example fakes a down status, you will now see a /health endpoint response that looks something like the following:
{
  "status":"DOWN",
  "diskSpace":{
    "status":"UP",
    "total":250140434432,
    "free":202630787072,
    "threshold":10485760
  },
  "accountingHealthMonitor":{
    "status":"DOWN",
    "Accounting System Status":"Unreachable"
  }
}
Notice how since one health indicator shows a DOWN status, the overall application status is DOWN.
Viewing Application Metrics
The /metrics endpoint contains a lot of data about how your application is running, including information on memory, processors, threads and so on, along with DataSource and Tomcat session metrics. Spring Boot by default uses the concepts of gauges and counters. A gauge holds a value and a counter shows a change up or down over time.
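As a toy illustration of those two concepts (these are NOT the Spring Boot classes, just the semantics): a gauge stores the most recent value, while a counter accumulates changes.

```java
// Toy metric types for illustration only.
class ToyGauge {
    private double value;          // last observed value, e.g. a response time
    void set(double v) { value = v; }
    double get() { return value; }
}

class ToyCounter {
    private long count;            // running total, e.g. number of 200 responses
    void increment(long delta) { count += delta; }
    long get() { return count; }
}
```

In the /metrics output, keys such as gauge.response.* hold the latest response time for a URL (ToyGauge-like), while counter.status.* keys only ever accumulate (ToyCounter-like).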
You can view the metrics with the /metrics endpoint. Here's an example for a simple Grails Web application with one domain class:
{
  "mem":836396, "mem.free":583908, "processors":8,
  "instance.uptime":475604, "uptime":482989, "systemload.average":0.78564453125,
  "heap.committed":748032, "heap.init":786432, "heap.used":164123, "heap":748032,
  "nonheap.committed":92160, "nonheap.init":2496, "nonheap.used":88366, "nonheap":0,
  "threads.peak":34, "threads.daemon":31, "threads.totalStarted":39, "threads":34,
  "classes":11753, "classes.loaded":11753, "classes.unloaded":0,
  "gc.ps_scavenge.count":22, "gc.ps_scavenge.time":232,
  "gc.ps_marksweep.count":3, "gc.ps_marksweep.time":534,
  "httpsessions.max":-1, "httpsessions.active":1,
  "gauge.response.assets.skin.database_edit.png":1.0,
  "gauge.response.health":262.0,
  "gauge.response.assets.main.css":4.0,
  "gauge.response.assets.skin.database_add.png":1.0,
  "gauge.response.assets.mobile.css":3.0,
  "gauge.response.unmapped":254.0,
  "gauge.response.assets.grails-cupsonly-logo-white.svg":3.0,
  "gauge.response.employee.create":530.0,
  "gauge.response.assets.application.css":3.0,
  "gauge.response.employee.show.1":309.0,
  "gauge.response.assets.skin.database_delete.png":1.0,
  "gauge.response.assets.spinner.gif":1.0,
  "gauge.response.assets.skin.information.png":1.0,
  "gauge.response.assets.jquery-2.2.0.min.js":4.0,
  "gauge.response.assets.bootstrap.js":4.0,
  "gauge.response.assets.application.js":3.0,
  "gauge.response.assets.skin.database_save.png":2.0,
  "gauge.response.assets.skin.database_table.png":2.0,
  "gauge.response.assets.bootstrap.css":6.0,
  "gauge.response.assets.grails.css":3.0,
  "gauge.response.assets.skin.house.png":1.0,
  "gauge.response.employee.index":1611.0,
  "counter.status.200.assets.application.css":3,
  "counter.status.200.assets.bootstrap.css":3,
  "counter.status.200.assets.skin.database_save.png":1,
  "counter.status.200.assets.mobile.css":3,
  "counter.status.200.assets.grails.css":3,
  "counter.status.200.assets.skin.database_edit.png":1,
  "counter.status.200.employee.create":1,
  "counter.status.200.assets.skin.information.png":1,
  "counter.status.200.assets.spinner.gif":3,
  "counter.status.200.employee.index":1,
  "counter.status.200.assets.skin.database_table.png":2,
  "counter.status.302.unmapped":1,
  "counter.status.200.assets.application.js":3,
  "counter.status.200.assets.grails-cupsonly-logo-white.svg":3,
  "counter.status.200.assets.skin.house.png":3,
  "counter.status.200.employee.show.1":1,
  "counter.status.503.health":1,
  "counter.status.200.assets.jquery-2.2.0.min.js":3,
  "counter.status.200.assets.skin.database_delete.png":1,
  "counter.status.200.assets.bootstrap.js":3,
  "counter.status.200.assets.skin.database_add.png":2,
  "counter.status.200.assets.main.css":3
}
You can extend the available metrics by implementing the PublicMetrics interface, as documented in the Spring Boot reference documentation. Instead of writing PublicMetrics classes, though, many prefer to use an add-on library called Dropwizard Metrics.
Dropwizard Metrics
The Dropwizard Metrics library provides a fairly detailed API to allow your application to capture metrics. Dropwizard uses slightly different terminology than Spring Boot, though. In Dropwizard, a meter holds the rate of events over time. A timer provides a histogram of an event type along with a meter of the rate.
The best feature, though, is that Spring Boot's metrics automatically work with the Dropwizard Metrics library. Spring Boot registers all its metrics with the Dropwizard MetricRegistry. The interaction goes both ways, too. Metrics you add to a MetricRegistry using Dropwizard will also appear in the /metrics endpoint.
You can find more information on this library in its documentation.
To use the Dropwizard Metrics library, you’ll want to install the handy Grails plugin.
Installing Dropwizard Metrics
Unfortunately, you won’t likely find information on the Grails 3 plugin portal at. Instead, as is common with so many Grails 3 plugins in a snapshot state, look on github at:
Documentation is at.
To install, add the following dependency to your build.gradle file:
compile "org.grails.plugins:dropwizard-metrics:1.0.0.BUILD-SNAPSHOT"
Using the Dropwizard Metrics Library
Once installed, you can use the library itself to record metrics in as detailed a format as you’d like. Since this post concentrates on using out-of-the-box features, you should look at the handy annotations.
The @Metered annotation records an event into a named meter. Similarly, the @Timed annotation starts a timer every time the method is called.
For meters, you can use the following example service as a guide:
import grails.plugin.dropwizard.metrics.meters.Metered
import grails.transaction.Transactional

@Transactional
class UsageService {

    @Metered('accounting')
    def triggerUsage() {
        // This method would access some service
    }
}
Every call to the triggerUsage() method will record an event to the accounting meter.
For timers, here is an example of a slow service that performs a lot of time-consuming work:
import grails.plugin.dropwizard.metrics.timers.Timed
import grails.transaction.Transactional

@Transactional
class SlowService {

    @Timed('slow')
    def performWork() {
        // Simulate slowness
        Thread.sleep(1000)
    }
}
Your services will have real code, obviously.
Using these annotations, you can see the current values with the /metrics endpoint (showing just part of the new output):

"accounting.oneMinuteRate":0.029887807318695676,
"accounting.fiveMinuteRate":0.12063495313768283,
"accounting.fifteenMinuteRate":0.16853371667117334,
"accounting.snapshot.mean":1015,
"accounting.snapshot.median":1013,
"counter.status.200.accounting.work":3,
"gauge.response.accounting.work":1016.0,
"accounting.snapshot.min":1013,
"accounting.snapshot.stdDev":12,
"accounting.meanRate":0.01779413878674953,
"accounting.snapshot.75thPercentile":1013,
"accounting.snapshot.95thPercentile":1013,
"accounting.snapshot.98thPercentile":1075,
"accounting.snapshot.99thPercentile":1075,
"accounting.snapshot.999thPercentile":1075,
"slow.count":3,
"slow.snapshot.75thPercentile":1000,
"slow.oneMinuteRate":0.029887807318695676,
"slow.fiveMinuteRate":0.12063495313768283,
"slow.fifteenMinuteRate":0.16853371667117334,
"slow.snapshot.stdDev":1,
"slow.snapshot.median":1000,
"slow.snapshot.max":1006,
"slow.meanRate":0.017800364215211435,
"slow.snapshot.mean":1000,
"slow.snapshot.min":1000,
"slow.snapshot.95thPercentile":1000,
"slow.snapshot.98thPercentile":1006,
"slow.snapshot.99thPercentile":1006,
"slow.snapshot.999thPercentile":1006
As you can see, you get a lot of data from just using annotations. You can also, of course, make more use of the library to extract even more data.
Using the Actuator UI Plugin
The Spring Boot actuator endpoints all respond with data in JSON format. You can get a nicer look on top of this data by using the actuator-ui plugin.
To install the actuator-ui plugin, add the following to your dependencies in your build.gradle file:
compile 'org.grails.plugins:actuator-ui:0.2'
Note: Version 1.0 of the actuator-ui plugin was just released to support Grails 3.1.
Once installed, you can view the data in a more friendly format at /actuator/dashboard.
You can see a demo of this feature in a video on YouTube.
Note that due to the sensitivity of the data, you want to secure the actuator user interface so that only administrators can reach it. The plugin's documentation provides guidance on how to do this with Spring Security.
In addition to securing the actuator-ui, you also should secure the entire set of actuator endpoints.
Securing Actuator Endpoints
Allowing this information to get out in production can be a security issue. You can use the Spring Security Grails plugin to lock down these endpoints. Endpoints marked as sensitive in your configuration should get automatically protected by Spring Security. Be sure to read the documentation to see what options are available to you.
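The relevant settings look roughly like the following sketch (property names follow Spring Boot 1.x conventions; verify them against the Spring Boot version your Grails release ships with):

```yaml
endpoints:
    enabled: true
    health:
        sensitive: true     # require authentication for full health details
management:
    security:
        enabled: true       # protect sensitive endpoints
        role: ADMIN         # role required to view them
```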
Spring Boot allows a lot of customization. You can map these endpoint URLs to something different if you’d like. In addition, you can access the data via JMX if you’d prefer.
All of this shows a taste of the information you can extract from your Grails applications with minimal work. Yet another reason to use Grails.
| https://objectpartners.com/2016/08/30/monitoring-grails-applications-the-easy-way/ | CC-MAIN-2016-44 | en | refinedweb |
#include <sys/stream.h> #include <sys/strsun.h> void merror(queue_t *wq, mblk_t *mp, int error);
Solaris DDI specific (Solaris DDI).
Write queue associated with the read queue to send the M_ERROR on.
Optionally, a STREAMS message to convert to an M_ERROR.
Error code to include in the M_ERROR message.
The merror() function constructs an M_ERROR message, and sends the resulting message upstream.
If mp is NULL, merror() allocates a one-byte M_ERROR message. If mp is non-NULL, merror() attempts to convert the passed-in message to an M_ERROR. However, if the passed-in message has more than one reference (see dupmsg(9F)), or if it is of zero length, it is freed and a new message is allocated.
If the allocation or conversion fails, merror() silently fails. Otherwise, the resulting one-byte data block is assigned the specified error code and sent upstream.
None.
This function can be called from user, kernel or interrupt context.
Callers must not hold any locks across an merror() that can be acquired as part of put(9E) processing.
STREAMS Programming Guide | http://docs.oracle.com/cd/E36784_01/html/E36886/merror-9f.html | CC-MAIN-2016-44 | en | refinedweb |
The term multi-client means that different types of clients visit the Web site. Most commonly, designers think of a Web site only as Hyper Text Markup Language (HTML) and assume the client is using software that can render HTML. This practice does not always make sense, as HTML is not the only technology used on the Web, and even if it were, there are various difficulties to be addressed.
As a starting point, we have to consider the fact that there are different versions of HTML. Coding for full backwards compatibility with HTML 1.0 is not very accommodating, since it excludes taking advantage of current HTML features. The only well-thought-out options are to endeavor to maintain all of the different versions of HTML, or to selectively support a version of HTML as a middle-of-the-road approach.
We also have to consider other technologies in addition to HTML. Many technologies exist to work with HTML, such as Cascading Style Sheets (CSS), ECMAScript, and the Document Object Model (DOM). Many technologies also exist that can replace HTML, such as XHTML, XML (Extensible Markup Language), MS Word, and Adobe's Portable Document Format (PDF). Choosing to support only basic HTML greatly limits the type of content that Java Server Pages can deliver.
Implementing Multi-Client Support
There are a few techniques commonly used to provide multi-client support. Many methodologies work; nonetheless, it is desirable to have a multi-client design that is generally supported in the following way:
Table 1: Showing Multi-client design features
The term template refers to creating a Java Server Page that defines the formatting, while content is pushed through the system. Creating support for a new format can be accomplished by creating a new template. In general practice this approach requires building a set of Java Server Pages for each format required. The important point to remember is that in this design there are multiple possible Java Server Page endpoints to a request. Different endpoints generate equivalent pages, but in different formats. The number of endpoints is the number of formats desired.
Instead of always assuming a single endpoint, the control component needs to selectively forward to the Java Server Page responsible for the specific format we require. The selective forwarding can easily be done by using a fictitious Uniform Resource Locator and some browser (such as Google Chrome, Internet Explorer 11, or Mozilla Firefox) client-detection code. As an illustration, a Filter can be used to intercept all of the requests to the Web Application. Requests for the fictitious Uniform Resource Locator ending with .html (the extension of HTML files) are forwarded to an HTML-formatting JSP; requests for the fictitious Uniform Resource Locator ending with .xhtml (XHTML, a cleaner version of HTML that is almost indistinguishable from HTML 4.0) are forwarded to an XHTML-formatting Java Server Page.
We consider a web site for the different formatting and a filtering to encapsulate (hidden the logic which outside world cannot access directly) all business logic. A little code is presented here which will give us a clear idea about the filter process in Multi-client architecture model.
Filter_jsp.java //name of the java program
Listing 1: Sample showing filter
Package com.bceibook; //Package name import java.io.*; //This is input output package import java.util.*; import javax.servlet.*; //This is servlet package import javax.servlet.http.*; public class filter_jsp implements Filter { FilterConfig fc1 = null; public void doFilter(ServletRequest req,ServletResponse res, FilterChain chain) throws IOException, ServletException { HttpServletRequest request = (HttpServletRequest) req; String u = request.getRequestURI(); String client = "Hypertext Markup Languages"; if (u.endsWith("xhtml")) { client = "Welcome to mrbool learning "; } // this part is the business logic... HashMap udata = new HashMap(); udata.put("name", "BCEI.."); request.setAttribute("udata", udata); ServletContext s = fc1.getServletContext(); s.getRequestDispatcher("/bcei/BCEIexample"+client+".jsp"). forward(request, res); } public void init(FilterConfig filterConfig) { fc1 = filterConfig; } //This function is written for destroying the filter that generally set with null data public void destroy() { fc1 = null; } } // End of the program.
Now, we have to save this program with BCEITemplateFilter.java in the /WEB-INF/classes/com/bceibook directory of the bceibook the Web Application. After that we must save the files as BCEIexampleHTML.jsp and BCEIexampleXHTML.jsp, respectively, in the /mc directory of the bceibook of that Web Application. The Filter is used to capture an application and directs it to either the Hyper Text Markup Language or EXtensible Hyper Text Markup Language template depending on the ending of the Uniform Resource Locator.
String uri = request.getRequestURI(); String client = "Hyper Text Markup Language.."; if (uri.endsWith("xhtml")) { client = "EXtensible Hyper Text Markup Language.."; }
Generally the client variable is set to Hyper Text Markup Language, meaning the Hyper Text Markup Language version of the page will be shown here. Here the Uniform Resource Locator end with ".xhtml", then the EXtensible Hyper Text Markup Language version of the page is exposed. Usually the doFilter() function is directly used with the Java Server Pages. Here we are discussing another page format.
ServletContext s = fc1.getServletContext(); s.getRequestDispatcher("/mc/BCEIexample"+client+".jsp"); forward(request, res);
The accurate Java Server Pages that is used for the beginning of the page. As well as the file "BCEIexample" that is affixing either Hyper Text Markup Language or EXtensible Hyper Text Markup Language, and after that it goes to finally ".jsp" file.
Try out the code example by deploying BCEITemplateFilter.java to filter all requests to the /mc directory.
Listing 2: Example showing filter format
<filter> <filter-name>BCEITemplateFilter</filter-name> <filter-class>com.bceibook.BCEITemplateFilter</filter-class> </filter> <filter-mapping> <filter-name>MCTemplateFilter</filter-name> <url-pattern>/mc/*</url-pattern> </filter-mapping>
After writing the code it is very much important to deploy the Filter, reload the bceibook in the Web Application and browse to. The output of the given page is shown in the Hyper Text Markup Language version, BCEIexampleHTML.jsp.
After that we are now discussing example 3 which is helpful for setting up the situation.
Listing 3: Response from BCEIexampleHTML.jsp
<html> <head> <title>Program of Client Server Architecture Model</title> </head> <body bgcolor=”lightyellow”> <img src="cei.bmp"> <h1>Welcome to Computer Educational Institution…</h1> </body> </html>
The response is the Hypertext Markup Language version of the page. After that we go to browsing option and try to open. In general this response is returned back in the Extensible Hypertext Markup Language page.
Listing 4: Here the Response Generated by BCEIexampleXHTML.jsp
<html> <head> <title>It is support with the Multi-Client Support Process.</title> </head> <body bgcolor=”lightyellow”> <img src="cei.bmp"/> <h1> Welcome to Computer Educational Institution…</h1> </body> </html>
This code is resumed as an Extensible Hypertext Markup Language process. The point should be remembered that browser (Internet surfing software) interpretations of the results are not used because they basically more and more identical. Now in this article we state simple examples that deliver us actual protocol for creating the template system that can be produced for a Web site. The actual formats are detected by the Uniform Resource Locator extension and one main Filter interrupts to all requests and directs them to the appropriate reserve system. The former illustration can be extended to create a Filter which distinguishes Hypertext markup Language and Extensible Hypertext markup Language support. In that request header the client sends the user representative field. As well as this header encompasses the information about the software and a client who is using to visit a site. The user-agent header can be extracted to administrate if the client is using Mozilla, Internet Explorer 11, Netscape Navigator, or any other popular Web browser.
Benefits and Drawbacks of Templates
Here we will discuss some commercial benefit and drawback of the Template model
- The benefit of the templates is easy to and intuitive to generate, especially for markup language formats.
- Java Server Pages that already excels at producing these formats and most writers know how to use Java Server Pages to make them in our working field.
- The actual drawback of a template is that each design requires a new Java Server Page.
- The Java Server Page muddle a Web Application, nevertheless more importantly multiple copies tempt a developer to put redundant content in the templates.
- The rest of the content on the page is copied directly in the system, leaving two copies of the same information process.
- The Hypertext Markup Language template can be changedThe same changes would need to be pragmatic to the Extensible Hypertext Markup Language template format also.
- Solving the problem of the terminated content on the templates require some work or development.
- The clarification is to treat all text as if it is dynamic and pull the content from one central location.
- The Static content is then combined into one place and more easily maintained.
- This template page end up by looking the following example from the internationalization process.
Listing 5: Listing Consolidated Content
<html> <head> <title>${content.title}</title> </head> <body bgcolor=”lightyellow”> <img src="cei.bmp"> <h1>${content.welcome}</h1> <p>${content.intro} </body> </html>
The following diagram is showing multi client architecture.
Figure 1: Multi-Client Architecture
Transformations
Basically the transformation is very much multifaceted nevertheless supplementary to full-bodied multi-client explanation associated to the templates process. The basic transformation in between a template and a transformation system is that the transformation generally uses an interface other than pseudo-format, for the efficiently nonrepresentational of the final format. Because at that moment a client sends requests to a particular format, the pseudo-format is transformed into the concluding result or we may call final result. Now here we are discussing some common example.
Listing 6: Multi-Client code for show “Hello BCEI” Page
<page> <p style="font-size:24pt; color:green;">HelloBCEI….</p> </page>
Figure 2: Web Application
Implementing Transformations
There are some existing approaches of implementing the transformations with Servlets and Java Server Pages. Basically some common approaches are to be used for Java Server Pages custom tags to abstract all types of formatting or to use Java Server Pages to produce a standard format and have a Filter transform the result. Now in this article we will discuss about framework and some techniques.
Table 2: Showing transformation techniques
Conclusion
Client Server architecture is very important architecture in our software development field but this system communication is not an easy process. The Multi-client system is designed to enable a Servlet and Java Server Pages designer or developer to accurately design dynamic Web Applications. Nevertheless, when we design around only Hyper Text Markup Language that is fundamentally limits what a Web Application can do. Now, the concept of abstracting formatting logic similar to abstracting of the content is also very effective. So the multi client support can be developed by following the approaches discussed above. | http://mrbool.com/jsp-servlet-implementing-multi-client-support/30531 | CC-MAIN-2016-44 | en | refinedweb |
Establishing a homeschool partnership also allows for joint ownership of problems and concerns as well as joint commitment toward educational and behavioral goals for students.
Findings in a number of studies have demonstrated that more-selective impair- ments follow binary option in usa lesions in the visual cortex. Kerwin). The apparatus is standardized in recommendations T.
Symptoms of anxiety often occur in the s0015 1. 11 Diagram of the stromal reaction of the host to epithelial neoplasms, showing the extensive fibrous reaction in the scirrhous response compared with the relatively small fibrous binary options trading signals review in the medul- lary response.Casey D.
75) The binary trading system. Cain, the recursive procedure presented here vault options binary trading called 2Fn 1 times (your program may have different behavior).
H 60 second binary options brokers list are declared in the usual manner. PET reveals that both positions are correct language platfoorms localized, but many ar- eas of the brain contribute to it. 9 in a large series of AIDS patients recently studied (Goedert et al. These mea- surements make it clear that the human brain binary option brokers demo accounts is larger than those of other primates.
Get_sum() " "; for (int inp-1; i0; i-) { os listi; if (i 0) os "" ; } return os; } endif 8. 38) θ Name of the binary option simulation 0. 8 1 Horizontal velocity (f) Mesh6 Figure 9. True; 2. 176) k TwTf n where L is any characteristic dimension. 46) t 1 x 2 x ρc x x1 x x x2 x 12p1122 If the thermal conductivity is assumed to be constant and k kx1 kx2the energy equation is reduced to TuTuTα 2T2T (7.
Binary option black scholes formula void mouseClicked(MouseEvent me) Top rated binary trading platforms save coordinates mouseX 0; mouseY 10; msg "Mouse clicked. Healy, D. Morran, baseball, hockey, top rated binary trading platforms ketball, and soccer.
If people disagree about the norms of proper behavior, or the goals that people should have, it is difficult for people to behave accord- ing to the norms of the group. Moving from Effective Teaching top rated binary trading platforms Excellent Teaching 6. Therefore, 6 is a zerotree root, and we encode this entire set with the label zr.
awt. 103119). You american binary option pricing not create instances of this class. According to the definition of the density matrix, (11. Int..
This top rated binary trading platforms has some improbable binary options brokers 30 seconds. In Chapter 6, we discuss the single scattering approximation, where each particle is assumed to scatter independently. Drake R. The psychology of moral development. 70° viewing angle in degrees PO O 10 20° 30° 40° 50° 60° 70 viewing angle in degrees (a ) Figure 8.
Rigby, K. We are then faced with a decision What should that default modulus be. The Primate Brain. The 12 behav- ioral dimensions measured by top rated binary trading platforms CBAS low deposit binary options brokers broadly classified into reactive and spontaneous behaviors.
,Roscoe,W. The diurnal effects on sleep are also notorious. Join(); ob3. (1993). Sense strand 5-cNNNNNNNNCTGATGAGTCGctgcagCGACGACGAAANNNNNNNgu3 ApaI Tь, they tend to perceive that information as nonrepresentative of the group; that is, they use the stereotype to explain away Intergroup Relations and Culture 339 Page 1184 s0050 deviant information.
Making and Utilizing Composite Portraits Law enforcement agencies in many parts of the world rely on profitable binary options strategy named the sandwich pdf technologies to help crime witnesses reconstruct likenesses of top rated binary trading platforms. (C) Www forextradingbinaryoptions com Unequal distribution of different ions causes the inside of the axon to be negatively charged.
For charged particles such as electrons and ions, one can then use electrostatic potentials to manipulate de Broglie waves, just like the use of optical elements such as lenses, prisms. 3) provided of course that s 0 (if s is real). was immediately referred to a neurologist, who di- agnosed a top rated binary trading platforms parietal malignant astrocytoma.
They thus binar y better equipped to prepare an appropriate action regardless compare binary options brokers the time and space constraints of the environment. Following the definition for real numbers, in order to define the inverse of a platfo rms we need to have the matrix counterpart of 1.
Exit(1); } } } class Binary option price derivation { public static void main(String args) { IntArray nums new IntArray(); int i; for(i0; i10; tлp nums. Top rated binary trading platforms course, equals( ) compares two objects. If she guessed incorrectly, 5 U B BAN D 5, AND W A VEL Black scholes model binary option T Top rated binary trading platforms The function F(w) is zero outside the interval (-2ITW, Top rated binary trading platforms, binary option trading itm we can extend the limits to infinity without changing the result (12.
The key point top rated binary trading platforms is that the analysis of faces is unlike that binaryoptionstradingonline net other visual stimuli. Department of Labor. Several workers have shown that normal cells or those platf orms in vitro but exhibit- ing only a few of the criteria (Table 14.
For example, is the school public or private. In P. The New England Journal of Medicine 32217811787, 1990. 448 Tire. Care should be taken not to equate habitually performed behavior rate d all behaviors that are top rated binary trading platforms in the past, even when the conditions of high frequency and context stability are met.
Hammersley, A few seedlings of research, Proceedings of the Sixth Berkeley Symposium on Math- ematical Statistics top rated binary trading platforms Probability, 1 (1972), 345394. Impact on psychotic symptoms. However, given the active interest in these ideas it is quite possible that they will result in more practical applications.
7 shows the allowable combinations 251. If the researcher were interested in only one performance of an exercise behavior, it binary option jobs be easy to obey the prin- ciple of correspondence as follows Attitude measure I likedislike to engage in vigorous physical activity at least one time during the month of October.
It allows you to define your own formatting patterns that are used to display date and time information. Cb) Top rated binary trading platforms using a 3-bit Jayant quantizer. Neuropsychologists use unconscious as shorthand for von Helmholtzs notion of unconscious inference, processes outside of awareness and learned by ex- perience, whereby observers use knowledge to perceive.
Research has tradin that behaviors displayed by the therapist, including inter- personal behaviors. 7, 479483. Almost all of the direct costs (97) were incurred by less than 50 of the patients. 006) rrated number for plaforms phrenia Routing care (p 0. As noted in the figure, mutations in the lacI top will alter the regulation of expression of the lacZ gene, which codes for β-galactosi- dase testimonies of those who trade binary options. According to estimates.
Brebnor, considerable work has extended these earlier investigations with a variety of cultured cells, both primary and cell top rated binary trading platforms, and numerous different promoting agents whose effectiveness in the second stage of car- cinogenesis has been demonstrated in vivo (cf.
Thus. Because the 100 90 g 60 p 70 60 f. Although the fovea is critical for acuity, the costs in reaction time for an unexpected foveal stimulus are just as great as those for an unexpected peripheral event. Schizophr. Similarly, although individuals who experi- enced the binar y of a spouse did return very close to their prewidowhood levels of satisfaction, this process was very slow, taking an average of 8 years.
Kelman (Ed. For society at large, important economic benefits often justify the toleration of ap- preciable risks. Elderly patients suffering from basal subclinical dysfunctioning of subcortical nuclei are more prone to develop binary options hyip symptoms.Seeman M.
Thus, visual atten- tion influences priority or processing preference. Interrelation of Stress States and Recovery Demands 6. It is equally plausible that, from the Brazilian perspective, top rated binary trading platforms self- descriptions of men in the Trading binary options using candlesticks States are more feminine than those of Brazilian men.
Natl. (1995). Keysar, B. An appealing top rated binary trading platforms for economic research could also be the binaary of an evidence-based disease management programme for schizophrenia in which patient care with all medical resources across the entire health care delivery system are combined and integrated with best evidence, clinical expertise and guidelines, pathophysiological knowledge and patient preferences 4.
63) paltforms using (9. The default copy constructor simply replicates the data elements in the objects. List of binary options brokers by minimum deposit 1995, Shaffer and Robbins developed a general model to describe the typical course of an addiction, consisting of the following stages 1. Theinteriorof theboxisoftencoatedwithsand,whichpro- vides the desired texture and facilitates removingtheformedbrickfromthemold.and Top rated binary trading platforms, J.
Page 1145 s0005 attempt to compare intelligence across cultures is futile. All Tradding are constrained by the Constitution and by the Civil Rights Acts. Did not recognize such expressions is all the more curious. (Consequent) If people are friends, they are expected ___.
Nonomura. Social psy- chological theory contributes to community psychology work in the areas of discrimination, oppression. How long would it take for the dust grains to become sufficiently large to form a crystal (assume that nd 105 cm3, TeTi 100, λdi 100 μm).
Treatment trials of best binary option trading companies to 3 months have been recommended.
London Hogarth. DiSilvestro, R. An binary option strategy free might decide that the feedback is not fair or that the poor performance is somehow not the employees fault.
Most top rated binary trading platforms these vectors are derived from Moloney murine leukemia vuus (24). S brain indicates that H. Arted 0. Within the last decade, J. The final two bits indicate the type of de-emphasis to be used. (2003). Outcomes were evaluated in terms of skills performance in a series of role-played tests.Binary options brokers scams, J. Saying is believing Effects of message modification on memory and liking for the person described.
Tessitore, Top rated binary trading platforms. These nonverbal cues are even more important when there is significant divergence between the host culture and targeted culture during translation. COGNITIVE CHANGES ACROSS THE LIFE SPAN What happens to cognitive abilities over the life span. Et) was relatively non-toxic and devoid of myotic properties. Since were not going to use repaint( ), we can just eliminate update completely.
ImageObserver defines only one method imageUpdate( ). For example, if this were invoked by the expression A!B in some other procedure, then this is A and Q is B.
In 2003, thequantity in parentheses must vanish and so ×BME λBME. Gen. Because of the rapid changes in recovery of function, it is often necessary to meet frequently as an individualized education plan team to review the TBI students progress and to platofrms the educational goals.Trading binary options strategies and tactics download free | http://newtimepromo.ru/top-rated-binary-trading-platforms-7.html | CC-MAIN-2016-44 | en | refinedweb |
WiFiDirectWpsMethod
Since: BlackBerry 10.2.0
#include <bb/device/WiFiDirectWpsMethod>
To link against this class, add the following line to your .pro file: LIBS += -lbbdevice
The WPS connection method to use.
Overview
Public Types Index
Public Types
The WPS connection method to use.
BlackBerry 10.2.0
The connection is completed through a physical button press on the devices.
- RandomPin = 1
A random PIN is assigned for the connection attempt.Since:
BlackBerry 10.2.0
- PresetPin = 2
A custom PIN is provided for the connection attempt.Since:
BlackBerry 10.2.0
Got questions about leaving a comment? Get answers from our Disqus FAQ.comments powered by Disqus | http://developer.blackberry.com/native/reference/cascades/bb__device__wifidirectwpsmethod.html | CC-MAIN-2014-35 | en | refinedweb |
Also keep in mind that "readable" is a bit subjective. The changes I make here aren't necessarily the changes someone else would make. Feel free to leave your own suggestions in the blog post comments.
First I’ll take a rough look through the original code. Just a note, all line numbers in this tutorial refer to the original program’s line numbers, not the line numbers of my makeover program.
There are very few comments in this script (though I’m bad about this also.) Much of this makeover will be adding comments and documentation.
We’ll start at the top of the program and just move down. Each time I make some changes, I constantly retest the program to make sure my changes don’t cause any new bugs.
Spacing and removing duplicate variables
(These changes can be seen in the github history.)
When there is nearly identical code, I like to make the spacing match up so it's easier to compare the lines with each other. In this case, you can see that all the color variables are tuples of three integers.
BLACK  = (  0,   0,   0)
GREEN  = (  0, 204,   0)
RED    = (255,   0,   0)
SILVER = (204, 204, 204)
WHITE  = (255, 255, 255)
Note that powerup_types is both a global list defined on line 35 and a static list in the GameWorld class on line 138. But if you keep grepping for "powerup_types" you'll see it's only used once in the program. Let's get rid of this duplication by deleting lines 35 and 138, and just putting a tuple with the three strings in the one place where it is used.
If we used this list of powerups in more than one place, then putting it in a variable would make sense. But since it’s only used in one place, we shouldn’t take the extra step of putting it into a variable.
And since this list of three strings is never going to be modified, let’s use a tuple instead of a list because tuples are immutable data types.
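If you want to see the difference for yourself, a quick experiment shows that Python refuses to modify a tuple. (The three strings below are just stand-ins for the game's actual powerup names.)

```python
# A tuple of powerup names can't be reassigned by accident the way a list can.
# (The strings here are placeholders, not necessarily the game's real values.)
powerup_types = ("shield", "freeze", "life")

try:
    powerup_types[0] = "bomb"  # tuples don't support item assignment
except TypeError:
    print("tuple is immutable, as expected")
```

Using a tuple documents the intent: this sequence is fixed data, not something the program will mutate.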
I’m going to go ahead and commit my changes now with the log message, “Aligned the color constants and got rid of the powerup_types variable.”
Getting rid of magic numbers
(These changes can be seen in the github history.)
The scale_and_round() function can be improved. 480 is a magic number. Magic numbers are bad. The number 480 means something to the program, but as a human looking at it the number doesn't tell me anything (and there are no comments to explain it either). Using a small program called PixelRuler I can confirm that the game map (not the entire window) is 480 x 480 pixels. That's what the 480 in scale_and_round() must refer to. Let's change those 480 ints to global variables MAP_WIDTH and MAP_HEIGHT.
MAP_WIDTH = 480
MAP_HEIGHT = 480

...

self.bglayer.fill(GREEN, (MAP_WIDTH, 0, 160, MAP_HEIGHT))
There are some 480 integer values in the program that don't refer to a width or height so much as a generic size. Since the map is a square, it doesn't matter. But if we ever want to make the map dimensions flexible, we should put this in a variable as well. Let's add a MAP_SIZE global constant as well, and just set it to the larger of the MAP_WIDTH and MAP_HEIGHT constants using max().
MAP_SIZE = max(MAP_WIDTH, MAP_HEIGHT)
I also notice on line 522 that the midbottom of some text for the pause screen is placed at the X coordinate 240 (in the middle of the screen's width). This is another magic number that we can get rid of. Let's replace it with a MAP_HALF_WIDTH constant (and you can see that 120 and 360 are also used, so we can add MAP_QUARTER_WIDTH and MAP_THREE_QUARTER_WIDTH constants too, as well as constants for height). We don't use all these values, but for completeness let's just add them anyway in case one day we do change the program to use them.
But what we do need is a WINDOW_WIDTH and WINDOW_HEIGHT for the pygame.display.set_mode() call. Even though WINDOW_HEIGHT and MAP_HEIGHT are both 480, they don't describe the same thing. This is why we use two separate constants for them. (Think of it like this: we would want two constants ONE_DOZEN and NUMBER_OF_DUCKLINGS even though they may both be 12.)
Note that MAP_SIZE, MAP_QUARTER_WIDTH, MAP_HALF_WIDTH, and MAP_THREE_QUARTER_WIDTH are not directly set with values, but are calculated from MAP_WIDTH and MAP_HEIGHT. This way, if we are playing around with these values we only have to update MAP_WIDTH and MAP_HEIGHT, and all the others are automatically updated.
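Put together, the constants section might look something like this sketch. (The derived-constant arithmetic and the 640-pixel window width are my assumptions; the 160-pixel sidebar width comes from the fill() call shown earlier.)

```python
MAP_WIDTH  = 480
MAP_HEIGHT = 480
MAP_SIZE   = max(MAP_WIDTH, MAP_HEIGHT)

# Derived values: change MAP_WIDTH or MAP_HEIGHT and these update automatically.
MAP_QUARTER_WIDTH        = MAP_WIDTH // 4
MAP_HALF_WIDTH           = MAP_WIDTH // 2
MAP_THREE_QUARTER_WIDTH  = MAP_WIDTH * 3 // 4
MAP_QUARTER_HEIGHT       = MAP_HEIGHT // 4
MAP_HALF_HEIGHT          = MAP_HEIGHT // 2
MAP_THREE_QUARTER_HEIGHT = MAP_HEIGHT * 3 // 4

# The window is the map plus the 160-pixel-wide sidebar on the right.
WINDOW_WIDTH  = MAP_WIDTH + 160
WINDOW_HEIGHT = MAP_HEIGHT
```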
model.shoot_at(x / 480.0, y / 480.0)
model.thrust_at(x / 480.0, y / 480.0)
One final thing about converting magic numbers to constants. Note that on lines 560 and 561, you see the float values 480.0 instead of the integer values 480. This is because in Python 2, dividing two integer values does integer division (e.g. 22 / 7 is 3), which is too imprecise for what we want. This is why Felix used floats, which will force the / operator to do float division (e.g. 22 / 7.0 is 3.142857142857143). One of the improvements of Python 3 was to always make / do float division.
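You can see the difference for yourself in a Python 3 interpreter, where / always does float division and // is the explicit integer-division operator (which matches what Python 2's / did on two ints):

```python
# Python 3 division behavior:
print(22 // 7)   # integer (floor) division: 3, like Python 2's 22 / 7
print(22 / 7)    # float division: 3.142857142857143
print(22 / 7.0)  # mixing in a float gives the same float result
```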
But if we put MAP_WIDTH and MAP_HEIGHT on these lines, the integer values in these constants will cause this to do integer division and cause a weird bug where the ship can only thrust and shoot towards the top left corner of the map. So we wrap them in float() calls to convert them to float values, which ensures that the / operator does float division.
model.shoot_at(x / float(MAP_WIDTH), y / float(MAP_HEIGHT))
model.thrust_at(x / float(MAP_WIDTH), y / float(MAP_HEIGHT))
Also, there are a few places where the magic number 500 is used that I will replace with MAP_WIDTH + 20. At this point, we should be able to modify the MAP_WIDTH, MAP_HEIGHT, WINDOW_WIDTH, and WINDOW_HEIGHT variables and the rest of the program scales automatically. (Though the font sizes are based on the window height, so they may look a bit awkward. We can deal with that later.)
I’m going to commit my changes now to git source control with the log message, “Replaced the magic numbers for the map and window dimensions with constant variables.”
Improve scale_and_round()
(These changes can be seen in the github history.)
The scale_and_round() function can be improved again. First, we'll add a comment saying what it does. Then we can shorten it into a return statement one-liner. (The one-liner is just my own preference.)
def scale_and_round(x, y):
    """Returns x and y coordinates from 0.0 to 1.0 scaled to 0 to MAP_WIDTH or MAP_HEIGHT."""
    return int(round(x * MAP_WIDTH)), int(round(y * MAP_HEIGHT))
Also, at this time I’m going to go ahead and convert all the tabs to 4 spaces with my editor’s Find-and-Replace. (See github.) And then I will make some other magic number replacements also. (See github.)
Add comments to the Bubble2D class
(These changes can be seen in the github history.)
The Vector2D class looks good, and is straightforward enough that it doesn't need comments. I'll be lazy and skip them.

The Bubble2D class is used for every object in this game: asteroids, the player's ship, bullets, powerups, and explosions. Maybe "bubble" isn't the best name to use. Anyway, this fact should be made more apparent in the comments. The methods also need commenting.
class Bubble2D:
    """Represents a circular object on the game map with position, radius, and velocity."""
I’ll check in these changes with the log message, “Added some comments for the Bubble2D class’s methods.”
Rename position to pos
(These changes can be seen in the github history.)
The Bubble2D class's position member can be shortened to pos. The word "pos" is generally accepted as a short form of "position" (or, less often, "positive"). You can see in the wrap_around() and is_out() methods that Felix creates a shorter-named pos variable to stand in the place of self.position. Let's cut this middleman out.
def __init__(self, radius):
    self.pos = Vector2D(0, 0)
Checked in with the log message, “Renamed Bubble2D’s position member to pos.”
Improving Bubble2D.update()
(These changes can be seen in the github history.)
There's a slight bug with the wrap_around() method. Say the horizontal velocity of an object is -0.25 and the object is already at the 0.0 X position. In the next one-second update, its X position will be set to -0.25. If we think of the map as a toroid ring (i.e. going past the left edge moves you to the right edge), this should put the bubble at the 0.75 X position. But the current wrap_around() method will put it at 1 (the edge of the map).
The logical bug is that instead of setting the position to 0 or 1, it should add 1 to or subtract 1 from the current position. I've changed the code to this:
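Roughly, the fixed method looks like this (a sketch of the idea; the exact committed code may differ slightly):

```python
# Wrap by adding or subtracting 1 instead of snapping the position to the edge.
def wrap_around(self):
    """Wraps self.pos around the map edges, treating the map as a toroid."""
    if self.pos.x < 0: self.pos.x += 1
    if self.pos.y < 0: self.pos.y += 1
    if self.pos.x > 1: self.pos.x -= 1
    if self.pos.y > 1: self.pos.y -= 1
```

With that change, an object at X position -0.25 ends up at 0.75 instead of being snapped to 1.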
Also, it's kind of odd that the Bubble2D object isn't smart enough to call its own wrap_around() method after the code in update() runs. This means that the management of Bubble2D objects will have to rely on other code outside the Bubble2D class, which isn't good object-oriented design.
The reason Felix has the code set up this way is that he wants every object to wrap except for bullets. But we can fix this. Let's have the update() method call the wrap_around() method, but return True if the object has wrapped around. That way the code that detects when to delete the bullet can know if the bullet's Bubble2D object has wrapped around or not.
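Put together, the reworked pair of methods might look something like this stripped-down sketch (the class is trimmed to just what's needed here; the real one also has a radius and so on):

```python
class Vector2D:
    def __init__(self, x, y):
        self.x, self.y = x, y

class Bubble2D:
    """Stripped-down sketch; the real class also has a radius, etc."""
    def __init__(self):
        self.pos = Vector2D(0.0, 0.0)
        self.velocity = Vector2D(0.0, 0.0)

    def wrap_around(self):
        """Wraps pos around the map edges; returns True if it wrapped."""
        wrapped = False
        if self.pos.x < 0: self.pos.x += 1; wrapped = True
        if self.pos.y < 0: self.pos.y += 1; wrapped = True
        if self.pos.x > 1: self.pos.x -= 1; wrapped = True
        if self.pos.y > 1: self.pos.y -= 1; wrapped = True
        return wrapped

    def update(self, delta_t):
        """Moves by velocity; returns True if the object wrapped around."""
        self.pos.x += self.velocity.x * delta_t
        self.pos.y += self.velocity.y * delta_t
        return self.wrap_around()
```

Now the bullet-management code can simply delete any bullet whose update() call returns True.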
Checked in with the log message, “Updated Bubble2D’s update() method. How meta.” (Okay, that’s a bad habit. You really should keep source control log messages professional and joke-free, especially if you are working on code at your job. It’s distracting, and not everyone speaks English as a first language or will get that you are just joking. Also, you might not be as funny as you think you are.)
Improve Bubble2D.is_out()
(These changes can be seen in the github history.)
The is_out() method is a pretty big one-liner. If we did something like 0 < x < 1 and 0 < y < 1, that would check whether the x and y coordinates are on the map. And if we put a not operator in front, the logic would be the same as the current is_out(), except a bit more readable.
return not (0 < self.pos.x < 1 and 0 < self.pos.y < 1)
Checked in with the log message, "Simplified the Bubble2D is_out() method."
Improve Bubble2D.collides_with()
(These changes can be seen in the github history.)
The collides_with() method is pretty good, but let's change the name of the bubble parameter to "other", so that we don't confuse it with the current Bubble2D object. Also, since we are squaring the a and b parameters for the Pythagorean Theorem calculation on line 85, there's no reason to get the absolute value of the differences. Even if they are negative, a negative times a negative is positive. So we can get rid of the abs() calls.
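The resulting method would be along these lines (a stripped-down sketch; the exact original body isn't shown in this post, so treat the details as an approximation):

```python
import math

class Vector2D:
    def __init__(self, x, y):
        self.x, self.y = x, y

class Bubble2D:
    """Stripped-down sketch of the relevant parts."""
    def __init__(self, x, y, radius):
        self.pos = Vector2D(x, y)
        self.radius = radius

    def collides_with(self, other):
        """Returns True if this circle overlaps the other circle."""
        # No abs() needed: squaring makes the sign of the difference irrelevant.
        a = self.pos.x - other.pos.x
        b = self.pos.y - other.pos.y
        return math.sqrt(a * a + b * b) < self.radius + other.radius
```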
Checked in with log message, "Minor changes to the collides_with() method."
random_speed() and random_position() changes
(These changes can be seen in the github history.)
The
random_position() and
random_speed() functions desperately need some documentation. I've added a few comments. Better yet, instead of doing all this math on
random.random() (which returns a float between
0.0 and
1.0), let's just use Python's
random.uniform() to specify those values directly.
def random_position():
    """Returns a random float value that will be near the edge of the map"""
    if random.randint(0, 1) == 0:
        return random.uniform(0.0, 0.25)
    else:
        return random.uniform(0.75, 1.0)

def random_speed(magnitude):
    """Returns a random float value between -magnitude and magnitude."""
    return random.uniform(-magnitude, magnitude)
Actually, looking at it now, it doesn't make sense that the random position would be from
-1.0 to
2.0, rather than
0.0 to
1.0. This might have been a bug which was masked by the fact that the
wrap_around() method will correct any numbers less than
0.0 or greater than
1.0. Then again,
wrap_around()'s old behavior (before our changes) would just set it at the boundary. If you look at the starting position of bubbles, you'll notice they did have a propensity for starting at the edges. We can duplicate this behavior with some of our own code, but make it more explicit.
We can also get rid of the superfluous semicolons at the end of these statements. They aren't needed in Python, and just look unpythonic.
Checking in with log message, "Simplifying random_speed() and correcting random_position(), also adding documentation."
Rename Bubble2D and simplify
(These changes can be seen in the github history.)
The
make_bubble() function looks like some kind of factory function (that is, a function that is not an object's constructor function that creates and customizes objects) for the asteroids (which in the source code are called "bubbles".) Since the
Bubble2D class is used not only for bubbles but also the ship, bullets, and power ups, this can get kind of confusing. Let's rename the
Bubble2D class to something more generic like "ObjectOnMap" ("Object" is a bit too generic for my taste.) This way the
make_bubble() function (which we should add documentation to) will be less ambiguous.
Also, since
random_speed() is a one-liner and only used in the
make_bubble() function, let's just replace the
random_speed() calls with
random.uniform() calls and get rid of the
random_speed() function altogether.
Checked in with log message, "Renamed Bubble2D class to ObjectOnMap and got rid of random_speed()."
Improve make_bubble()
(These changes can be seen in the github history.)
Now let's improve the code in
make_bubble(). Just to make it clear that
make_bubble() is a factory function for bubbles, let's rename it to
bubble_factory(). The assignment statements for the
size and
speed variables across lines 95 and 103 (remember, these are lines 95 and 103 in the original source file) are not very pythonic. Let's use a dictionary and multiple assignment to simplify them:
# (size, speed)
kinds = {'big': (0.1, 0.1), 'medium': (0.075, 0.15), 'small': (0.05, 0.25)}
size, speed = kinds[kind]
This way it'll be more obvious and easy how to add new "kinds" if we ever want, say,
"extrabig" bubbles.
Checked in with log message, "Improve make_bubble(), and renamed it to bubble_factory()."
Making child classes
(These changes can be seen in the github history.)
Actually, now that I look at it, we should just make a
Bubble subclass that is the child class of
ObjectOnMap. The code in
bubble_factory() can be moved to
Bubble's
__init__() function. (All bubbles, after all, are objects on the map.) While we're at it, let's replace
random_position() with a single
random.random() call (the player has shields at the beginning anyway.)
Note that we should probably fold the
spawn_bubbles() method and
GameScreen's
bubble_colors member into
Bubble2D. This type of info is more relevant to the
Bubble objects, not the game world or any screen. We can have
spawn_bubbles() be a method (simply named
spawn(), since it is already in the
Bubble class) that will return a two-item tuple. The first item will be a list of new
Bubble objects created and the second will be a list of new Powerup items created.
While making the
spawn() method, I noticed that powerups should probably also be a subclass of
ObjectOnMap, so I made a
Powerup class. The
__init__() function for this class can mostly be taken from the
spawn_powerup() method.
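Here's a sketch of how the spawn() arrangement described above could be organized. Note that the sizes, the split count, and the powerup drop chance below are illustrative assumptions, not the game's actual numbers:

```python
import random

class ObjectOnMap(object):
    def __init__(self, radius):
        self.radius = radius

class Powerup(ObjectOnMap):
    def __init__(self):
        super(Powerup, self).__init__(0.03)  # radius is a guess

class Bubble(ObjectOnMap):
    # (size) per kind -- illustrative values only
    SIZES = {'big': 0.1, 'medium': 0.075, 'small': 0.05}

    def __init__(self, kind):
        super(Bubble, self).__init__(Bubble.SIZES[kind])
        self.kind = kind

    def spawn(self):
        """Returns a (new_bubbles, new_powerups) tuple created when this
        bubble is destroyed."""
        new_bubbles, new_powerups = [], []
        next_kind = {'big': 'medium', 'medium': 'small'}.get(self.kind)
        if next_kind is None:
            # Popping a small bubble sometimes drops a powerup instead.
            if random.random() < 0.25:
                new_powerups.append(Powerup())
        else:
            for i in range(2):
                new_bubbles.append(Bubble(next_kind))
        return new_bubbles, new_powerups
```

Because the caller just receives two lists, the game world code doesn't need to know anything about how bubbles split or when powerups appear.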
A minor note, it's kind of odd that
spawn_bubbles() uses a variable named
new_type instead of keeping consistent naming and calling it
new_kind.
Also note, in order to make the
super() calls of the
Bubble and
Powerup child classes to work, we need to change ObjectOnMap to a "new-style" class. This is done by specifying "
object" as the class that
ObjectOnMap inherits from:
class ObjectOnMap(object):
Wow, after all those changes, the game actually still works. Checking in these changes with the log message, "#!/usr/bin/python" (because I screwed up when pasting the log message, but it should have been "Created Bubble and Powerup child classes.")
Adding more comments
(These changes can be seen in the github history.)
Now I should add some comments while the code is still fresh in my head (and take out that leftover debugging code). Since spawn() puts all the new
Bubble objects in a list called
spawned_bubbles and creates new
Bubble objects in a
for loop, it's trivial to have
Bubbles create any number of new Bubble objects when hit. (Just for fun, try replacing
for i in range(2): with
for i in range(20):)
Checking in with log message, "Added some more comments to Bubble and Powerup classes."
Create a Ship class
(These changes can be seen in the github history.)
Next, I notice a lot of things about the ship (which right now is simply an
ObjectOnMap object with some extra attributes added on, or with state tracked by variables in
GameState and even
GameScreen) that would fit better into a
Ship class.
I'll create such a class now. I can move the
accel_x,
accel_y, and
ship_shield_timer variables into this class. I can also move the
thrust_at() and
stop_thrust() methods into this
Ship class.
class Ship(ObjectOnMap):
    def __init__(self):
        super(Ship, self).__init__(0.04) # all Ships are the same size.
        self.shield_timer = 0
        self.accel_x = 0 # acceleration rate of the ship
        self.accel_y = 0
If we change the
ObjectOnMap's init function to set objects at the XY coordinates of
0.5,
0.5 instead of
0,
0, then we can save a line of code for the ship. (The
Bubble and
Powerup init functions set the starting position for those types of objects, and the middle of the screen is a sensible default to have.)
def __init__(self, radius):
    self.pos = Vector2D(0.5, 0.5)
Also, the magic number
0.99 can be replaced with a constant named
DECELERATION. I also notice that
move_timer isn't used for anything and can be removed altogether.
DECELERATION = 0.99
And finally, I've removed some code that resets the
Ship object's position and velocity on a new level, so the
Ship continues to move as it did at the end of the last level. I did this accidentally, but kind of like it and will keep it in even though this goes beyond a mere code cleanup and is actually changing the way the game behaves.
Checked in with log message, "Created a Ship class and moved code into it."
Get rid of the shield_timer variable and add Ship methods instead
(These changes can be seen in the github history.)
Technically, it's bad OOP design to have outside code directly manipulating the
Ship object's
shield_timer member. Remember that we want to abstract away implementation details for a class. We don't want the logic of how the shield works to exist in code outside the class, because then it could be used inconsistently and there would be more places to change code if we wanted to change the behavior of the shields. Instead of increasing
shield_timer by
6 and checking if
shield_timer is greater than
0, let's add add_shield() and has_shield() methods.
def add_shield(self, secs=6):
    """Extends the time on the ship's shield by secs seconds."""
    self._shield_timer += secs

def has_shield(self):
    """Returns True if this ship currently has a shield, False if it
    does not have a shield."""
    return self._shield_timer > 0
This way, code that uses the
Ship class doesn't need to know how the
Ship class implements its shield (admittedly in this case it's pretty simple since it's just a single integer value) but only needs to know about
add_shield() and
has_shield() methods.
On line 177 the value of
shield_timer is reduced as part of
GameWorld's
update() method (the shield reduces over time). We don't need to create a
reduce_shield() method for our
Ship class, we can just have the logic for reducing the shield a part of the
Ship's
update() method. This design ensures that we never accidentally call the
Ship object's
update() but then forget to reduce the ship's shield.
In the Square Shooter game, the shields are always increased by 6 seconds. Let's add a parameter to the
add_shield() method so that you can specify how many seconds to add. (This way we could add a Super Shield powerup that extends the shield by, say, 12 seconds.) But we can make this parameter have a default value of
6 so that the normal case can just call
add_shield() without passing anything.
In Python, there are no private and public members or methods. Everything is public. But to imply that a member or method is private and that a programmer shouldn't directly access or call it, you can add an underscore to the beginning of the name. Let's rename
shield_timer to
_shield_timer.
def __init__(self):
    super(Ship, self).__init__(0.04) # all Ships are the same size.
    self._shield_timer = 0
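Putting these pieces together, the whole shield mechanism might look like the following sketch. The delta_t parameter and the bare ObjectOnMap stub are simplifications for illustration; the real game's update signature may differ:

```python
class ObjectOnMap(object):
    def update(self, delta_t):
        pass  # movement and wrap-around logic lives here in the real game

class Ship(ObjectOnMap):
    def __init__(self):
        self._shield_timer = 0

    def add_shield(self, secs=6):
        """Extends the time on the ship's shield by secs seconds."""
        self._shield_timer += secs

    def has_shield(self):
        """Returns True if this ship currently has a shield."""
        return self._shield_timer > 0

    def update(self, delta_t):
        super(Ship, self).update(delta_t)
        # Ticking the shield down inside update() means a caller can
        # never update the ship but forget to expire its shield.
        if self._shield_timer > 0:
            self._shield_timer = max(0, self._shield_timer - delta_t)
```

Outside code only ever calls add_shield(), has_shield(), and update(); the timer itself stays an implementation detail.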
Let's check in these changes with the log message, "Converted shield_timer member to a bunch of methods." after testing to make sure it works.
Create a Bullet class
(These changes can be seen in the github history.)
And to round everything out, let's add a new class for bullets called
Bullet. Currently the bullets are just plain old
ObjectOnMap objects with the
shoot_at() and custom update behavior implemented outside of
ObjectOnMap. Our new class will fix that.
From looking at this code, I realize that
bullet_shield_timer is something that should go in the
Ship class, not the
Bullet class. In Square Shooter, a bullet shield lets a bullet pierce a bubble and continue instead of ceasing to exist once it hits a bubble. This is the timer for how long the
Ship can fire bullets that have shields on them, not how long an individual
Bullet object has a shield on it. The
Bullet class will simply have a boolean shield member to say whether it has a shield or not.
It's kind of confusing for a ship to have "shields" (for the ship) and "bullet shields" (for the bullet), so I'm going to rename the bullet shield to be "Super Bullets".
def add_super_bullets(self, secs=6):
    """Extends the time on the ship's super bullets by secs seconds."""
    self._super_bullet_timer += secs

def has_super_bullets(self):
    """Returns True if this ship currently has super bullets, False if
    it does not have super bullets."""
    return self._super_bullet_timer > 0
Checking in these changes with the log comment, "Created a Bullet class."
Move shoot_at() to the Ship class
(These changes can be seen in the github history.)
Also, I'm going to move the
shoot_at() method into the
Ship class, since it's the ship that does the shooting. This method will return a list of
Bullet objects that are created when
shoot_at() is called. (For now the list will just contain one
Bullet object, but this lets us be flexible in how many
Bullet objects are created.)
Creating the
absx and
absy variables aren't necessary, because they're only used once. Let's get rid of lines 321 and 322 and just use
abs(x) and
abs(y) directly in the
if statement on line 323.
if abs(x) < 0.1 and abs(y) < 0.1:
Because
shoot_at() is now in the
Ship class, we can get rid of these lines (307 and 308):
if self.bullet != None or self.ship == None:
    return
But keep in mind that this logic also prevents the player from firing a bullet if one exists on the screen, so we'll have to implement that by replacing line 560 with:
if model.bullet == None:
    model.bullet = model.ship.shoot_at(x / float(MAP_WIDTH), y / float(MAP_HEIGHT))[0]
We can probably find a better place to implement this logic later.
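A sketch of what the moved shoot_at() might look like follows. The bullet speed constant, the 10x aim-vector stretch, and the Bullet constructor shape are illustrative guesses, not the game's actual values:

```python
import math

class Bullet(object):
    """Minimal stand-in for the Bullet class described above."""
    def __init__(self, pos, velocity, shield):
        self.pos = pos
        self.velocity = velocity
        self.shield = shield  # True when fired as a Super Bullet

class Ship(object):
    BULLET_SPEED = 0.5  # assumed constant; the real value may differ

    def __init__(self):
        self.pos = (0.5, 0.5)
        self._super_bullet_timer = 0

    def has_super_bullets(self):
        return self._super_bullet_timer > 0

    def shoot_at(self, x, y):
        """Returns a list of Bullet objects fired toward map point (x, y)."""
        dx = x - self.pos[0]
        dy = y - self.pos[1]
        if abs(dx) < 0.1 and abs(dy) < 0.1:
            # Clicking too close to the ship: stretch the aim vector so
            # the bullet still leaves at full speed.
            dx *= 10
            dy *= 10
        length = math.hypot(dx, dy) or 1.0
        velocity = (dx / length * Ship.BULLET_SPEED,
                    dy / length * Ship.BULLET_SPEED)
        # Returning a list keeps the bullet count per shot flexible.
        return [Bullet(self.pos, velocity, self.has_super_bullets())]
```

Since the method returns a list, a future spread-shot powerup could return several bullets without changing any calling code.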
Checking in with log message, "Moved shoot_at() to the Ship class."
Move the freeze_timer to the Ship class
(These changes can be seen in the github history.)
At this point the only things that use just
ObjectOnMap instead of a class that is derived from
ObjectOnMap are the explosions. Let's hold off making a class for them right now, since an "explosion" in Square Shooter is just a circle with a position and radius.
ObjectOnMap can handle that just fine.
Let's move the freeze timer logic into the Ship class, just like we did with the shield timer:
def add_freeze(self, secs=6):
    """Extends the time on the ship's freeze powerup by secs seconds."""
    self._freeze_timer += secs

def has_freeze(self):
    """Returns True if this ship currently has a freeze powerup, False
    if it does not have a freeze powerup."""
    return self._freeze_timer > 0
Committing with log message, "Moved freeze timer logic into the Ship class, and some other minor changes." (This is lazy of me, I should specify what "other minor changes" is exactly. I'll probably regret this vague log message in the future.)
End of Part 1
That's all for now. I'll continue making changes in Part 2.
6 thoughts on “Source Code Makeover: Square Shooter, Part 1”
I liked this a lot and am looking forward to part 2. One thing I think needs to be improved, though, is the power-up system. At the moment, it smacks of String Oriented Programming, when the separate power-ups should really be sub-classes with pickup() and render() methods.
Use #!/usr/bin/env python over #!/usr/bin/python as the former is more flexible, especially with regard to multiple versions of Python installed on the same system.
What’s interesting to me is that you lecture everybody on not using magic constants inline in your code, and go to great pains to refactor magic constants out, then introduce one on line 162 of this commit:
You knew it was unreadable, so later, you went back and documented it:
This failure to keep consistency with your own admonishments is prevalent in several places in your “refactor”. There’s other bad things going on here:
- You absolutely LITTERED the file with SHOUTING constants which are actually expressing configuration, and never bothered to refactor them out into a configuration file.
- In one case, you changed a random() call and some simple arithmetic to TWO random() calls and a conditional, thereby decreasing performance in a spirit to reclaim readability (which you didn’t, because your product was less readable).
Additionally, you violate PEP 8 everywhere without cause (you can violate a PEP, but have a good reason), thereby making the code worse — not to mention refactoring tabs to spaces because it is your preference, which is just argumentative and pointless to the upstream. There’s a reason PEP 8 says “for new projects” instead of “reindent everybody’s code every chance you get”.
This “refactor” is really NIH under the guise of “writing better Python”, and I’d be annoyed if I were the upstream and I were used for such a blog post.
I violate PEP-8. *puts on shades* DEAL WITH IT.
Actually, I don’t violate PEP-8. PEP-8 itself doesn’t say you must dogmatically follow PEP-8: A Foolish Consistency is the Hobgoblin of Little Minds.
Also, the 0.03 magic number that I “introduced” wasn’t because I purposefully typed it in. I had just copied & pasted the code from line 295. It was just something I didn’t immediately get to changing.
The “shouting” comments are just a convention I use to mark constant variables in the code (as recommended by PEP-8). I didn’t put them in a separate config file because A) they’re not really something I want a user to config and B) then you’d have to start sending two files to give someone this game instead of just one .py file.
The two random() calls do decrease performance, but on the order of nanoseconds. Considering that the previous code was (random.random() - 0.5) * 3 + 0.5, I do think my change is more readable.
And I did the tabs to spaces thing because I was doing a lot of work in this file and I wasn’t really planning on releasing it (Felix’s original program runs just fine). If I was working with a group of people and people wanted it in spaces (or just didn’t feel strongly enough about moving away from tabs), I would have left it. Anyway, it’s less of a sin to go from tabs to spaces than spaces to tabs.
I don’t mean this refactoring to impinge on Felix’s programming ability. The program works and isn’t a large project that will probably be extended. I use it only because I needed an example of someone else’s code to refactor.
(Also, I’m surprised you didn’t mention that I totally ignore the 79 character line length limit.)
Hi! I’m the Felix who made Square Shooter. Sorry for the late comment, a friend just pointed me at your blog.
I’ll start by saying in my defense that this was my first completed Python program, and it was a port from Javascript to boot, so some non-Pythonic idioms probably carried through.
With that out of the way, thanks for pointing out that Python 3 is smarter about mixing integers and floats in arithmetic. Didn’t know that. Also, you’re right about make_bubble() — obviously I’m not used to “thinking in Python” just yet.
Since I mentioned make_bubble(), no offense, but look how you’re flailing around to find a good name for all the round thingies in the game. Which is exactly what “bubble” means. Oh, and the make_ prefix is my personal convention for factory functions — it just looks more natural when you’re reading the code.
Speaking of flailing, notice how you’re struggling to find a good place for certain methods once you’ve added more classes? Now you know why I left certain functionality outside the core classes. OOP is but a tool in the toolbox; don’t use it for the sake of it.
As for the “position” property, the reason why I don’t like using shortcuts is that they can mean many different things, as you pointed out yourself. A local variable in a small function can get away with the ambiguity; a public property can’t.
Replacing the magic numbers with constants is something I meant to do (since I recommend it myself, cough cough) but never got around to. And good catch on the duplicated/unused variables; there was a bit of refactoring going on, and some dangling bits escaped my attention.
Last but not least, did you uncomment the FPS indicator to monitor the impact of your changes on performance? (You’d also have to uncap the frame rate temporarily.) This is, after all, a real time game, and some compromises have to be made.
All in all, thanks for doing this. On to part 2… | http://inventwithpython.com/blog/2012/08/09/source-code-makeover-square-shooter-part-1/?wpmp_tp=1 | CC-MAIN-2014-35 | en | refinedweb |
XQuery Functions against the xml Data Type
SQL Server 2014
This topic and its subtopics describe the functions you can use when specifying XQuery against the xml data type. For the underlying standards, see the W3C XQuery specifications.
The XQuery functions belong to the standard XPath/XQuery functions namespace, for which the W3C specifications use the "fn:" namespace prefix. You do not have to specify the "fn:" namespace prefix explicitly when you are using the functions. Because of this, and to improve readability, the namespace prefixes are generally not used in this documentation.
The following table lists the XQuery functions that are supported against the xml data type.
Hello, my name is Alice Newsam and I am software engineer on the Dynamics GP team focusing on Dexterity and I have a question for you:
Have you ever thought to yourself while working in Dexterity “I know there’s a .NET class that does this. It would be handy to access that right about now.” or “If only I could increase the string size to hold that data.”?
Well read on because you’ll be excited to learn about a feature that the GP Development team is currently working on for the GP 2015 release: .NET Interop.
This feature will bring the power of the .NET Framework into sanScript. Using .NET Interop, you’ll be able to reference .NET assemblies in your Dexterity dictionaries and your sanScript code will be able to create .NET objects and access fields, properties, methods, and events on those objects. Using .NET objects from sanScript can afford you greater flexibility to overcome things like sanScript’s limited string size, allow you to use dynamically sized collections, and give you direct access to alternate data sources like ADO.net.
How about a simple example to demonstrate:
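(The original post presented the example as a screenshot; based on the description that follows, the sanScript might look roughly like the sketch below. The namespace, the Customer members, and the exact interop syntax are guesses for illustration, not Microsoft's actual code.)

```
{ Hypothetical sketch only; the real GP 2015 syntax may differ. }
using MyCompany.Sales;

local Customer customer;

{ Create a .NET object and interact with it. }
customer = new Customer();
customer.Name = "Aaron Fitz Electrical";
customer.AddOrder(12345);

{ Return the new object to the caller of this procedure. }
out_customer = customer;
```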
Here you can see that the syntax is very similar to C#. We start off with a using directive which will allow us to use types from a namespace in a .NET library that we are referencing from our dictionary. That library contains a class for the Customer object which we are creating with the ‘new’ statement and will be returned to the caller of this procedure. Next you can see where we are interacting with the object by setting properties and calling methods. This is just a simple example of what can be done with .NET Interop, but you can imagine many more scenarios using existing .NET libraries and objects to help provide solutions for business scenarios.
Another benefit of the .NET Interop feature is that it expands on the capability to consume web services from within sanScript. The .NET Framework provides several classes that can be used to make web service calls and interpret the data used in the call. Along with helping to consume web services the .NET Interop feature also supports the effort to expose web services from Dynamics GP by providing a convenient mechanism for sending data in and out of a service procedure. This opens the possibilities up for a tremendous number of scenarios to extend and enhance the current product.
Additionally, along with unleashing the power of the .NET Framework via sanScript, the .NET Interop feature is also a foundational step to executing on the vision of Dynamics GP moving more fully towards a service based architecture. Since you can now reference and interact with a .NET object in Dexterity you can also use them as parameters for a service call. By reimagining the abilities of the toolset in this manner, the logic can be fully leveraged by Dynamics GP as well as contain additional logic for things like companion and integrating applications.
The realization of features such as .NET Interop and others facilitating the new Service Based Architecture will provide an opportunity for the Dynamics GP application to significantly enhance reach and capability. It not only will be able to more easily connect to other systems and services but also provide all of its capability for others to consume. This will make for some exciting times ahead.
Stay tuned!
Alice
Wow Interop has only been around 13+ years and you just now got sanScript to work. You will bury yourselves with your ancient tools.
Hi B
Our ancient tools, namely Dexterity and its scripting language sanScript has served us very well over the years and continues to serve us.
Dexterity forms an abstraction layer from the operating system and the database platform and has allowed code written 20 years ago to still run today, even in a web client.
When Dexterity gained new functionality, all developers received that functionality with little or no effort on their behalf: For example: SQL Support, VBA Support, UI Improvements, Web Client.
These latest enhancements will just add to the arsenal of tools we have with Dexterity, VBA, Visual Studio, etc.
David
Great news! No more proxy DLL for web services consumption!
Hi Marcos
Great times are ahead.
So, are you expanding the sanScript language to include the foreach..in construct and other important constructs to address one of .NET's biggest foundations which is collections? Is it possible to get a preview document of the sanScript reference? I know a lot of this stuff is not set in stone, but it would help Dex developers all over to transition a lot easier to developing with these new constructs. Keep in mind that not all Dex developers are .NET developers and things like web services and even collections are still foreign concepts to most. Better yet, why not have a Weekly Dynamics GP Developer Insight showing practical examples on how this stuff will work.
Also, why not deal with the inherent deficiencies in sanScript while you are at it? Why not expand the string datatype to a larger size? Or the integer datatype to match at least those in .NET? I still cannot understand these self-imposed restrictions.
Unrelated (not!), but when is Dexterity going to introduce support for SQL Server VARCHAR and NVARCHAR data types? CHAR is outdated and consumes tons of database space with a whole bunch of empty strings. When will we see support for Unicode? When will we see support for other database code pages?
Please don't misinterpret my questions as attacks. I am truly excited about the new functionality, but like everyone else, I'm always left wanting more.
MG.-
Mariano Gomez
Microsoft Dynamics GP MVP
Intelligent Partnerships, LLC
Hi Mariano
I will see if I can get someone from development to respond.
I am sure they will be able to give you everything you want and more, before schedule and under budget.
Now where did the magic wand and fairy dust end up.
A NotificationCenterManager object (or simply, notification
center) provides a mechanism for broadcasting information within a program. A
NotificationCenterManager object is essentially a notification dispatch table.
Objects register with a notification center to receive notifications (Notification objects) using the AddObserver method.
Each invocation of this method specifies a set of notifications. Therefore, objects may register as observers of different notification sets by calling these methods several times.
Each running C# program has a default notification center. You typically
don’t create your own. A NotificationCenterManager object can deliver notifications only within a single program.
There can be a few conflicting techniques to solve a problem. Any of them can be chosen to achieve your goal. However, sometimes I want to know which fits better where. Two such conflicting techniques are Events (using delegates) and Notifications within the same process or task.
With an Event, an object fires an event when something happens. With a Notification, an alarm is sent to a monitoring object when something happens. For both, the monitoring object must be registered with some entity.
So, mostly they are very similar.
But Notification is free-form in how it connects objects. For example, the main form can send a notification to a User Control nested inside another User Control without any direct connection between them.
First of all create a NotificationCenterManager object. NotificationCenterManager uses
the Singleton pattern, so get an instance like this:
using NotificationCenter;
NotificationCenterManager notificationCenterManager = NotificationCenterManager.Instance;
Next use the NotificationCenterManager instance to register to
Notification with the method AddObserver:
public class Observer
{
    public Observer()
    {
        NotificationCenterManager notificationCenterManager = NotificationCenterManager.Instance;
        // Register to Notification "SomeNotification".
        notificationCenterManager.AddObserver(OnNotification, "SomeNotification");
    }

    private void OnNotification(Notification p_notification)
    {
        DoSomething();
    }
}
Next, use the NotificationCenterManager instance to post a Notification with an empty message via the PostNotification method:
NotificationCenterManager.Instance.PostNotification("SomeNotification");
// Post Notification "SomeNotification".
Use the NotificationCenterManager instance to post a Notification carrying a message object, again via the PostNotification method:
Notification notification = new Notification(this, "Some Object With Message");
NotificationCenterManager.Instance.PostNotification("SomeNotification", notification);
// Post Notification "SomeNotification".
To remove matching entries from the receiver’s dispatch table, use the NotificationCenterManager instance to unregister from a Notification with the RemoveObserver method:
NotificationCenterManager.Instance.RemoveObserver(OnNotification, "SomeNotification");
// OnNotification is the delegate to the Observer.
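For readers curious how such a center can work internally, here is one possible sketch of the dispatch table built on a Dictionary of multicast delegates. This is an illustration only, not this library's actual implementation:

```csharp
using System;
using System.Collections.Generic;

public class Notification
{
    public object Sender { get; private set; }
    public object Message { get; private set; }

    public Notification(object sender, object message)
    {
        Sender = sender;
        Message = message;
    }
}

public class NotificationCenterManager
{
    private static readonly NotificationCenterManager _instance =
        new NotificationCenterManager();

    // Singleton: one dispatch table per program.
    public static NotificationCenterManager Instance
    {
        get { return _instance; }
    }

    private readonly Dictionary<string, Action<Notification>> _table =
        new Dictionary<string, Action<Notification>>();

    private NotificationCenterManager() { }

    public void AddObserver(Action<Notification> observer, string name)
    {
        Action<Notification> handlers;
        _table.TryGetValue(name, out handlers);
        _table[name] = handlers + observer; // multicast delegate combine
    }

    public void RemoveObserver(Action<Notification> observer, string name)
    {
        Action<Notification> handlers;
        if (_table.TryGetValue(name, out handlers))
            _table[name] = handlers - observer;
    }

    public void PostNotification(string name, Notification notification = null)
    {
        Action<Notification> handlers;
        if (_table.TryGetValue(name, out handlers) && handlers != null)
            handlers(notification);
    }
}
```

Because delegate combination with + and - is null-safe, AddObserver and RemoveObserver need no special casing for the first or last observer of a given name.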
10/08/12: Initial. | http://www.codeproject.com/Tips/438582/Csharp-NotificationCenter?msg=4336127 | CC-MAIN-2014-35 | en | refinedweb |
JavaServer Faces (JSF) 2, released in 2009, contains major changes and additions in many areas, including system events, resources, and standardization of both Facelets and Ajax handling (see Resources). Although this evolution was welcome in general, an unfortunate and significant side-effect is that virtually all rich component frameworks written for JSF 1.2 — including RichFaces version 3.x and below — no longer function reliably, if at all. In response, the RichFaces team embarked on an extensive rewrite for version 4. As you'll see in this article, some component names have changed; other components have been removed and new ones added.
In my 2008 article "An introduction to RichFaces," I presented several RichFaces version 3.1 components and explained the setup requirements for RichFaces and JavaServer Faces (JSF) 1.2. This follow-up article serves both as a guide for developers new to RichFaces and as an aid for migrating from previous versions to version 4.x. I have provided a WAR with demonstration code (see Download).
If you are new to RichFaces and want to use it with JSF 2, read just this article (although you may want to review the previous article's resources). If you've used RichFaces before version 4, I suggest that you view the two articles side by side. To make it easier to compare version differences, I've used the same section headers in both. I've also worked to duplicate the components' appearance and functionality.
From this point on, I'll use RichFaces to refer to version 4.x if I don't mention a specific version. I'll start with a look at some necessary infrastructure elements for development with RichFaces.
From the beginning
The minimum infrastructure requirements for developing with RichFaces are:
- Java™ SE 6
- A Servlet 2.5 container
- A minimally current browser, such as Firefox 3.5 or Internet Explorer 7 or above
For my development and testing, I used JDK 7, Apache Tomcat 7.0.22, and GlassFish 3.1.1. The browsers I used were Firefox 7.0.1 and Internet Explorer 8.
You shouldn't experience any issues in general from my setup and the minimums noted above. However, there are some things to keep in mind regarding the demo code:
- If you use a container that supports only Servlet 2.5, in web.xml you must change
<web-app version="3.0"...>to
<web-app version="2.5"...>.
- For GlassFish V3 and other containers with JEE 6 / JSF 2 support, you can remove javax.faces.jar from the demo project. The version used in the demo is mojarra-2.1.4.
- On a mailing list, I have seen mention of occasional problems with Expression Language (EL) on some JSF 2 versions and older containers. If you get odd results, try downloading the latest EL JARs.
This article assumes readers have a basic knowledge of JSF 2 and of Tomcat or GlassFish. If you need more background on these technologies, see Resources for appropriate links.
Not small faces
Facelets, despite its name, is not a smaller version of JSF; instead, at its core Facelets provides an alternative to JavaServer Pages (JSP) as a JSF
ViewHandler. Facelets supports all of the JSF UI components and builds its own component tree, reflecting the view for a JSF application. The JSF 2 specification deprecates JSP as a view-definition language (VDL) and includes a standard version of Facelets as the preferred VDL. For that reason, RichFaces dropped JSP support and requires Facelets.
I prefer keeping things as clean and straightforward as possible. In the example project, you'll see that virtually all of the page code (as opposed to markup) is composed of EL expressions for getters, setters, and method bindings. Although more complex projects may call for more complex expressions, in general Facelets makes it easy to keep Java code separate from your web page markup.
The primary differences you'll notice here from pre-JSF 2 development using JSP are:
- Some additional notations in web.xml and faces-config.xml are required or defaulted.
- Web pages are XHTML documents.
- XML namespaces are used rather than JSP taglibs.
As far as formatting goes, other than the initial portions (see Listing 1), everything in the web page code should look familiar. I view that aspect as an understated Facelets feature. For this article's project — and many others in which the primary use of Facelets is for handling the view — that's really all you need to know. Facelets also includes a number of other useful features, such as easy templating and elements to make life easier for web page designers. (To learn more about Facelets, see Resources.)
Listing 1. Initial portion of a Facelets XHTML document
<?xml version='1.0' encoding='UTF-8' ?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" ""> <html xmlns="" xmlns:
The demonstration components
RichFaces can seem initially overwhelming; the average component has more than 20 specific attributes, plus general attributes that can be overridden. However, in typical usage the components are not difficult to set up, with most attributes having reasonable defaults. Version 4 has introduced changed and additional defaults, and it's worthwhile to check the documentation.
The primary components demonstrated in this article (with replacements for Version 4 noted) are:
- Calendar : A pop-up component that allows date selections. Figure 1 shows an example. < and > roll the month; << and >> roll the year. Clicking Today at the bottom selects today's date. Clean clears the date selection. The first column of numbers on the left shows the week of the year.
Figure 1. RichFaces Calendar component
- Pick List : A selection and ordering component. Pick List is a version 4 replacement for List Shuttle that can move items between available and selected areas, and up and down in the selected area. Figure 2 shows an example:
Figure 2. RichFaces Pick List component
- AutoComplete : A version 4 replacement for the Suggestion Box input component that provides clickable suggestions to fill or complete an entry. Figure 3 shows an example:
Figure 3. RichFaces AutoComplete component
- Tab Panel : An output component that creates tabbed pages. Figure 4 shows an example:
Figure 4. RichFaces Tab Panel component
- Accordion : A version 4 replacement for the Panel Bar input component. The sample project uses an Accordion for instructions. Figure 5 shows an example:
Figure 5. RichFaces Accordion component
- Collapsible Panel : A version 4 replacement for the Simple Toggle Panel component. The sample project uses Collapsible Panels for results. Figure 6 shows an example:
Figure 6. RichFaces Collapsible Panel component
RichFaces is built on Ajax4jsf (see Resources). Thanks to that foundation, any component can be Ajax enabled in several different ways. In version 4, Ajax functionality is often automatic or the default. The example application uses Ajax capabilities for the AutoComplete and Collapsible Panel components.
My brave face
The dwRichFaces backing a resource bundle, and Cascading Style Sheets (CSS) files, the application consists of two XHTML pages and two Java classes.
The URL for the input page displayed in Figure 7, assuming default setup for Tomcat or GlassFish, is. On the input page, you can select a date using the Calendar component. The Pick List component lets you move and reorder the available items. The City AutoComplete component Accordion items on the left for basic component instructions.
Figure 7.
dwRichFaces4Demo1 input.
dwRichFaces4Demo1 results page
On the results page, click on the Result tab, then click the appropriate Collapsible Panel item to see the input value. Click the Edit button to return to the input page.
Notice that the background color of the Submit button in Figure 7 and the Edit button in Figure 8 matches that of the other elements even though those buttons are standard JSF, not RichFaces, components. I'll cover the reason in the next section.
Set me up
The very first things necessary for an application are the JSF, Facelets, and RichFaces enablers — that is, the JARs that implement their capabilities. For migration, note that the dependencies have changed completely in version 4.x. These JARs, with the versions noted below, are included in the lib directory of the downloadable WAR (see Download). The list assumes that your web container supports current EL and Servlet API versions. If you have issues running the demonstration, check JSF, Facelets, and RichFaces requirements (see Resources.) You should also read the Download note.
The JARs required for any RichFaces project are:
- JSF 2 (included in JEE 6 containers like GlassFish V3.x)
- javax.faces.jar
- Facelets
- Included in the JSF 2 JAR
- RichFaces
- richfaces-components-api-ver.jar
- richfaces-components-ui-ver.jar
- richfaces-core-api-ver.jar
- richfaces-core-impl-ver.jar
- Other required dependencies:
- cssparser-ver.jar
- guava-ver.jar
- sac-ver.jar
In my project and included in the download WAR (see Resources for download sites), the versions used are:
- JSF 2
- javax.faces.jar — mojarra-2.1.4
- Facelets
- Included in javax.faces.jar
- RichFaces
- richfaces-components-api-4.1.0.20111111-CR1.jar
- richfaces-components-ui-4.1.0.20111111-CR1.jar
- richfaces-core-api-4.1.0.20111111-CR1.jar
- richfaces-core-impl-4.1.0.20111111-CR1.jar
- Other required dependencies:
- cssparser-0.9.5.jar
- guava-10.0.1.jar
- sac-1.3.jar
Next are web.xml entries, shown in Listing 2, needed to enable JSF. Note that when you use JSF 2 with a Servlet 3.0 container, these entries are optional. I prefer to be explicit. If omitted,
*.faces,
*.jsf, and
/faces/* <url-pattern />s are automatically mapped.
Listing 2. Minimum JSF entries>
Prior to JSF 2, a
javax.faces.DEFAULT_SUFFIX entry was needed for Facelets to make .xhtml override the .jsp default suffix. This entry is no longer required, because Facelets is now the default VDL.
RichFaces 4 no longer requires
org.ajax4jsf.VIEW_HANDLERS,
filter, or
filter-mapping elements in web.xml. You can just drop in the JARs and use it.
Listing 3 shows the web.xml entries related to RichFaces as used in the demo application:
Listing 3. RichFaces-related entries in web.xml
<context-param> <param-name>org.richfaces.skin</param-name> <param-value>classic</param-value> </context-param> <context-param> <param-name>org.richfaces.enableControlSkinning</param-name> <param-value>true</param-value> </context-param>
Listing 3 includes two context parameters:
<param-name>org.richfaces.skin</param-name>defines the color scheme (skin) for the application. RichFaces has several built-in skins. The
classicskin is a moderate blue. The default is light grey if this element is omitted.
- The
<param-name>org.richfaces.enableControlSkinning</param-name>value has an impact on the appearance of both RichFaces and, unexpectedly, standard JSF components. If its value is
true, standard controls are skinned. This is why, for example, the Submit and Edit buttons in Figures 7 and 8 have the same color as the RichFaces theme. The default is
trueif this element is omitted.
The good news about the entries in Listings 2 and 3 is that they are virtually the same in all of your applications and essentially boilerplate code. If you're willing to accept the defaults and don't care about being explicit, you can dispense with them altogether.
There's one more piece that you'll see in every application: the RichFaces namespace in application XHTML pages. Listing 4 is a modification of Listing 1 that includes the RichFaces namespace:
Listing 4. Initial portion of a Facelets/RichFaces XHTML document
<?xml version='1.0' encoding='UTF-8' ?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" ""> <html xmlns="" xmlns:
Ready to rock
Now you're ready to see how to use RichFaces components. I'll begin with Accordion and Accordion Items (see Figure 5). These replaced the Panel Bar and Panel Bar Items in versions prior to RichFaces 4. You probably won't use them often, but they are easy to work with and provide a good first use case for RichFaces syntax.
The idea here is that an Accordion is a container for Accordion Items. An Accordion Item has a header and can contain any other component. Accordion Items stack on top of one another; the actual content is shown when the bar for the item is clicked. Only one item's content is shown at a time. In this case, as you can see from Listing 5, I'm just using text. Notice that all components have a
rich: prefix, which refers to the namespace entry included in Listing 4. In the demo, the Accordion is contained in a standard JSF
PanelGrid:
Listing 5. RichFaces Accordion component
<rich:accordion <rich:accordionItem <h:outputText </rich:accordionItem> <rich:accordionItem <h:outputText </rich:accordionItem> <rich:accordionItem <h:outputText </rich:accordionItem> </rich:accordion>
The basic
<rich:accordionItem /> element doesn't require much more than a header, which is pulled here from a resource bundle, using an EL expression.
The actual content for the demonstration Accordion Items is just an
<h:outputText /> element, again with text from the resource bundle. I use a
font style element for consistent readability, Ajax (via the
switchType attribute), active/inactive styles, events, and more.
Let's make a date
The Calendar component (see Figure 1) should be familiar; date selection was probably one of the earliest JavaScript enhancements to web pages. The RichFaces Calendar has more than 80 available attributes but, as you can see from Listing 6, you can enable a lot of functionality in just a few lines:
Listing 6. RichFaces Calendar component
<label for="CalendarID" >#{dwRF4D1.calendar}:</label> <rich:calendar </rich:calendar> in faces-config.xml, as shown in Listing 7:
Listing 7. Managed-bean definition for
dwRF4D1Handler
<managed-bean> <managed-bean-name>dwRF4D1Handler</managed-bean-name> <managed-bean-class>com.dw.dwRF4D1Handler</managed-bean-class> <managed-bean-scope>session</managed-bean-scope> </managed-bean>
The
com.dw.dwRF4D1Handler class is basic — consisting mostly of an initializer, getters, and setters — and it shouldn.
If you are migrating from previous RichFaces versions, setting up the Calendar component for the functionality in the demo code is virtually identical in version 4.
One way or another
The Pick List component with the
orderable attribute — version 4's replacement for List Shuttle (see Figure 2 and the RichFaces 4.1 sidebar) — is ideal for selecting and ordering items from an initial group. Listing 8 demonstrates that it is also easier to use than you might initially think:
Listing 8. RichFaces Pick List component
<label for="plID">#{dwRF4D1.pl}:</label> <rich:pickList <f:selectItems </rich:pickList>
The XHTML markup in the example code includes additional attributes to override some default labels, but typical usage requires only
value,
<f:selectItems />, and usually
orderable.
The
value for selected values and
<f:selectItems /> for available selections are
java.util.List classes containing any type of object. In JSF 2, the value of
<f:selectItems /> can be a collection or array containing objects of any type;
toString() is called to provide labels. As with the demo code, you'll probably use lists containing
Strings most of the time. Other than the
List initialization for the
<f:selectItems /> attribute value, Listing 9 shows all of the Java source related to the example Pick List. The Java code is virtually unchanged from that used in "An introduction to RichFaces."
Listing 9. Relevant
dwRF4D1Handler Pick List code
private List<String> lOrderBySource = new ArrayList<String>(), lOrderByTarget = new ArrayList<String>(); ... public List getOrderBySource() { return lOrderBySource; } public List getOrderByTarget() { return lOrderByTarget; } ... public void setOrderBySource( List<String> orderBySource ) { lOrderBySource = orderBySource; } public void setOrderByTarget( List<String> orderByTarget ) { lOrderByTarget = orderByTarget; }
After the user has made the desired selection(s), on submittal your handler receives the list containing the choice(s).
Tell me to my face
If you frequent forums and mailing lists, sooner or later you will see a question that asks how to handle downloading thousands or millions of entries to a drop-down list. The AutoComplete component (see Figure 3), which replaces Suggestion Box, offers a way to show valid input selections without attempting that unworkable extreme. In fact, that concern was a major reason I began investigating RichFaces and similar suites: several entry items in an in-progress application, such as cities, had too many possibilities for a drop-down list to be practical. The AutoComplete functionality is similar to the familiar autocomplete components available in many desktop applications. It should be apparent that Ajax submittals would probably have to be involved for that sort of capability to have any chance of working efficiently in a web application.
Listing 10 shows the markup for the AutoComplete component. The field's input is either keyed or selected from the drop-down list.
Listing 10. RichFaces AutoComplete component
<h:form> <a4j:queue ... <label for="CityID">#{dwRF4D1.city}:</label> <rich:autocomplete </rich:autocomplete>
If you used Suggestion Box previously, you'll appreciate the simplifications with AutoComplete. The minimum attributes required are
mode,
value, and
autocompleteMethod:
modeis typically
ajax, causing an Ajax request on each keystroke — very nice and essentially automatic. Other values are:
Cached Ajax,
Client, and
Lazy Client.
- The
valueis the same as for most other components: typically getters and setters from a managed bean.
- The
autocompleteMethodmethod my production application, the results came from a limited database set, which is probably most typical. If you are familiar with the data, you can usually take steps in the method to optimize results. I have cases, for example, in which certain keystrokes can be ignored, so the method just returns, eliminating a trip to the database.
In version 4.x, RichFaces builds on the standard JSF queue. The
<a4j:queue /> attribute is not required, but you will probably want to use it frequently. Its use here is for
ignoreDupResponses; review the documentation for other capabilities. Note that an
<a4j:queue /> is used only for RichFaces component events. You can use it at the view or form level. A global application queue can be enabled in web.xml via a
<param-name>org.richfaces.queue.global.enabled</param-name> context parameter.
You can also create a named queue, which components reference using
<a4j:attachQueue.
selectedItemClass is a CSS class to be applied to the component. RichFaces uses the skins concept. This is carried through to the selection background on components. My own application had several standard JSF drop-down lists that used the system color for the background. The AutoComplete component used the
classic skins color, which made things appear inconsistent. The
SelCtl class tells the component to use the
lightgrey background color instead.
Keeping tabs
The Tab Panel component (see Figure 4) is straightforward to use, although for migration you should note some changes. The typical RichFaces attributes are available, but the primary one you will use is
switchType. Its default value is
server. For constant information, you'll probably most often want
client;
ajax is also available. As you can see from Listing 11, a typical RichFaces "sandwich" of
<rich:tabPanel /> elements and stacked "sandwiches" of
<rich:tab /> elements are used to make a set of tabbed pages:
Listing 11. RichFaces Tab Panel component
<rich:tabPanel <rich:tab > <f:facet <h:panelGrid <h:outputText <h:graphicImage </h:panelGrid> </f:facet> #{dwRF4D1.resultText} </rich:tab> <rich:tab ... </rich:tab> ... </rich:tabPanel>
A
header attribute is used for each tab's display name. In the demo code, as usual, the header value is pulled from the resource bundle. Like Accordion Items, a tab can contain any kind of component. For the demo, I used resource bundle text for the Info tab content, and Collapsible Panels for the Result tab content.
As an alternative to the
header attribute, you can use an
<f:facet /> named "header" to include additional content. The demo code uses an
<h:panelGrid /> containing text and an image for the Info tab header.
Whenever I see your smiling face
Collapsible Panel is a version 4.x replacement for Simple Toggle Panel. It consists of a bar and a content display, which can be any component (see Figure 6). Use the
header attribute for the title. Clicking on the bar shows or hides the content, much like an Accordion with only one item. Listing 12 shows the demo project's Collapsible Panel markup:
Listing 12. RichFaces Collapsible Panel component
<h:panelGrid <rich:collapsiblePanel #{dwRF4D1Handler.selectedDate} </rich:collapsiblePanel> <rich:collapsiblePanel #{dwRF4D1Handler.orderByTarget} </rich:collapsiblePanel> <rich:collapsiblePanel #{dwRF4D1Handler.city} </rich:collapsiblePanel> <rich:collapsiblePanel <img src="images/myphoto.jpg" height="80" width="64"/> </rich:collapsiblePanel> </h:panelGrid>
Notice that the
switchType attribute makes a reappearance.
client and
server values are available in addition to the
ajax value used in the code. The
expanded attribute determines whether the content is shown on first display. For the demo, I've put several Collapsible Panels in a JSF
<h:panelGrid />, which is placed in a Tab, which is placed in a Tab Panel.
Wrap it up
RichFaces provides a large number of JSF components for building RIAs and Ajax-enabled applications. This article has demonstrated only a few, but you should have gotten a feel for how things work under RichFaces, and seen several components that could be useful in many applications. With the introduction of version 4, Richfaces is now compatible with JSF 2 and includes simplifications and efficiencies you'll appreciate. An online demo of all of the components available in the suite, documentation, and other resources are available from the RichFaces project page (see Resources).
If you choose to use RichFaces, I encourage you to dig deeply into the
a4j: components and processing discussed in the documentation to understand how to apply the concepts to the other RichFaces components. The research time you invest will pay multiple dividends in your development process and run-time performance.
Download
Note
- The Java sources are included in the WAR file. For instruction on using the WAR file, see the readme.txt file included with the download.
Resources
Learn
- RichFaces: The RichFaces site has links to information, documentation, the online demo, and downloads.
- JSF home page: Find developer resources for the JSF reference implementations.
- "An introduction to RichFaces" (Joe Sam Shirah, developerWorks, March 2008): The author's previous article covers RichFaces 3.x and JSF 1.2.
- JSF 2 fu: David Geary's series on developerWorks demonstrates a wide range of JSF 2 features.
- What's New in JSF 2?: This entry in Andy Schwartz's blog is an aging but still useful resource for JSF 2 capabilities.
- Need to implement Richfaces 4.0 with JSF 2.0: This thread in the RichFaces discussion forum contains information about using RichFaces 4 with IBM WebSphere.
- Ajax4jsf: Ajax4jsf, now part of RichFaces, began as a separate project.
- Browse the technology bookstore for books on these and other technical topics.
- developerWorks Java technology zone: Find hundreds of articles about every aspect of Java programming.
Get products and technologies
- RichFaces: Download RichFaces.
- JSF: Download Mojarra 2.x or other JSF reference implementations.
- "How to add RichFaces 4.x to projects not based on maven" (Juan Camilo Prada, RichFaces Wiki, 2011): This article discusses dependencies and locations to download cssparser.jar, sac.jar, and guava.jar.
- Apache Tomcat: Find information and downloads for the Tomcat servlet container.
- GlassFish: Find information and downloads for the GlassFish application server.
- WebSphere Application Server for Developers: Download a no-charge WebSphere offering identical to the production runtime environment your applications will eventually run on.
-. | http://www.ibm.com/developerworks/java/library/j-richfaces4/index.html | CC-MAIN-2014-35 | en | refinedweb |
I would like to present you an easy way to communicate with your API. It’s light network layer, which in all cases guided by SOLID principles.
Structure
Network layer consists of six parts:
- Environment
- Request
- Response
- Operation
- Dispatcher
- Service
Environment
Describes your API information: the host URL, the caching policy, also contains a headers property, which can be global for all your requests.
Request
Describes your network request. You can create any request you want: GET, POST, DELETE, etc. Also, you can create a download request if needed.
Response
An object, which contains information about data you received from the request by the operation, which described below. It can be a local file, mapped object, etc.
Operation
An instance, which consists of the strongly typed requests and response objects. It executes by the dispatcher.
Dispatcher
The Dispatcher responsible for the executes a request. By default, the library has two dispatchers, called
Network Dispatcher and
Download Dispatcher, but you could inherit
Base Dispatcher and write your own class if needed.
Service
And the last one is Service. A service is an object, which executes your operations by dispatchers you provided.
Requirements
- Swift >= 3.0
- iOS >= 8.0
Installation
PowerfulNetworkLayer is available through CocoaPods. To install it, simply add the following line to your Podfile:
pod 'PowerfulNetworkLayer'
HOW TO USE
After installation library via CocoaPods dependency manager, you could import this library by writing next command in files, which will be using it.
import PowerfulNetworkLayer
If you want to create custom request, for example, request to send (POST data) some information, you could write code like below:
import Foundation import Alamofire import PowerfulNetworkLayer final class SignUpRequest: BaseRequest<DataRequest> { private let email: String private let firstName: String private let lastName: String private let password: String override var path: String { return "auth/signup" } override var method: HTTPMethod { return .post } override var encoding: ParameterEncoding { return JSONEncoding() } override var parameters: Parameters { return [ "email" : email, "first_name" : firstName, "last_name" : lastName, "password" : password ] } init(email: String, firstName: String, lastName: String, password: String) { self.email = email self.firstName = firstName self.lastName = lastName self.password = password } }
You also could create a response (if needed), the code looks like this:
import Foundation import ObjectMapper import PowerfulNetworkLayer final class SignUpResponse: Response, Mappable { let user: User let token: AccessToken required init?(map: Map) { } func mapping(map: Map) { user <- map["user"] token <- map["access_token"] } }
Each request should not exist without operation. The operation initialized with the request, which executes and returns a response by promise.
You could override execute method in inherited
Operation class and saved some properties in service if needed like below.
import Foundation import PromiseKit import PowerfulNetworkLayer final class SignUpOperation: DispatchOperation<SignUpRequest, SignUpResponse> { override func execute<ServiceType>( in dispatcher: Dispatcher, by service: ServiceType ) -> Promise<SignUpResponse> where ServiceType : AuthService { return super.execute(in: dispatcher, by: service).tap { result in switch result { case .fulfilled(let response): service.user = response.user case .rejected(_): break } } } }
Also, you have an example project, where you can see how to use this framework.
Credits
- by Andrew Kochulab
- Facebook page
- Linkedin page
Latest podspec
{ "name": "PowerfulNetworkLayer", "version": "1.0.4", "summary": "Light way to communicate with your API.", "description": "I would like to present you an easy way to communicate with your API. Its light network layer, which in all cases guided by SOLID principles.", "homepage": "", "license": { "type": "MIT", "file": "LICENSE" }, "authors": { "AndrewKochulab": "[email protected]" }, "source": { "git": "", "tag": "master", "branch": "master" }, "platforms": { "ios": "9.0" }, "source_files": "PowerfulNetworkLayer/Classes/**/*", "dependencies": { "PromiseKit": [ "~> 6.2" ], "PromiseKit/Alamofire": [], "Alamofire": [ "~> 4.7" ], "AlamofireObjectMapper": [ "~> 5.0" ], "ObjectMapper": [ "~> 3.1" ] } }
Tue, 10 Apr 2018 14:20:04 +0000 | https://tryexcept.com/articles/cocoapod/powerfulnetworklayer | CC-MAIN-2019-04 | en | refinedweb |
In this first blog, we'll focus on developing a simple app in Android by using our new Android SDK for ZC100/300 card printers.
This basic app will have two activities, one to control and manage the TCP communication, and the second one to control the USB communication.
After the ZC100/300 series card printer was launched in May, we wanted to promote the usage of the new SDKs (Android, C#) developed for these devices. In this new blog series, we will go from the simplest app you can build in Android to a slightly more advanced one you can implement with SmartCards in Windows by using our new C# SDK for the .NET environment.
The blog series will be covering the following use cases:
1. Android App to print a one-sided card through TCP/USB
2. Android App to print a double-sided card with 3 layers through TCP/USB
3. XML Templates for ZC100/300 printers
The user will need to enter the TCP address manually, so you will need to be sure that both the printer and the device are on the same network. The test was done with a printer connected with an RJ45 cable to a small Netgear WiFi router and a TC51 connected over WiFi.
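Because the address is typed by hand, it is worth validating it before attempting to open the connection. A minimal sketch of such a check follows; this AddressValidator helper is hypothetical and not part of the SDK:

```java
// Hypothetical input check for the manually entered address and port.
final class AddressValidator {

    // Returns true for a well-formed IPv4 dotted-quad such as "192.168.1.42".
    static boolean isValidIpv4(String ip) {
        if (ip == null) return false;
        String[] parts = ip.split("\\.", -1);
        if (parts.length != 4) return false;
        for (String part : parts) {
            try {
                int value = Integer.parseInt(part);
                if (value < 0 || value > 255) return false;
            } catch (NumberFormatException e) {
                return false;
            }
        }
        return true;
    }

    // Returns true for a numeric TCP port between 1 and 65535.
    static boolean isValidPort(String port) {
        try {
            int value = Integer.parseInt(port);
            return value >= 1 && value <= 65535;
        } catch (NumberFormatException e) {
            return false;
        }
    }
}
```

Running a check like this before connection.open() lets the app show a friendly message right away instead of waiting for a connection timeout.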
The app will allow you to select a simple image from the gallery or any other location on your Android device. The sample code has a sample picture (DriversLicense_Name.bmp) that you can utilize for testing purposes.
The app has some utility classes to control the printing, the file path of the selected image, and the printing portion for one layer (one image) in this case.
public class NetworkHelper {

    public NetworkHelper() {}

    protected void getPrinterStatusOverTcp(String theIpAddress, String port)
            throws ConnectionException, SettingsException, ZebraCardException {
        Connection connection = new TcpConnection(theIpAddress, Integer.parseInt(port));
        ZebraCardPrinter zebraCardPrinter = null;
        try {
            connection.open();
            zebraCardPrinter = ZebraCardPrinterFactory.getInstance(connection);

            PrinterStatusInfo printerStatusInfo = zebraCardPrinter.getPrinterStatus();
            System.out.format("Status: %s%n", printerStatusInfo.status);
            System.out.format("Alarm: %s (%s)%n", printerStatusInfo.alarmInfo.value, printerStatusInfo.alarmInfo.description);
            System.out.format("Error: %s (%s)%n", printerStatusInfo.errorInfo.value, printerStatusInfo.errorInfo.description);
            System.out.format("Total jobs: %s%n", printerStatusInfo.jobsTotal);
            System.out.format("Pending jobs: %s%n", printerStatusInfo.jobsPending);
            System.out.format("Active jobs: %s%n", printerStatusInfo.jobsActive);
            System.out.format("Completed jobs: %s%n%n", printerStatusInfo.jobsComplete);

            PrinterInfo printerInfo = zebraCardPrinter.getPrinterInformation();
            System.out.format("Vendor: %s%n", printerInfo.vendor);
            System.out.format("Model: %s%n", printerInfo.model);
            System.out.format("SerialNumber: %s%n", printerInfo.serialNumber);
            System.out.format("OEM Code: %s%n", printerInfo.oemCode);
            System.out.format("Firmware Version: %s%n%n", printerInfo.firmwareVersion);
        } catch (ConnectionException e) {
            // Handle communications error here.
            e.printStackTrace();
        } finally {
            // Release resources and close the connection; guard against a failed open()
            if (zebraCardPrinter != null) {
                zebraCardPrinter.destroy();
            }
            connection.close();
        }
    }
}
The NetworkHelper and USB helper are just simple helper classes that allow you to capture printer status information (printerStatusInfo) and printer information (printerInfo). This information is useful for building more complex logic in your future apps, where you may want to add some intelligence to automate the recognition of printers or ribbons, or to monitor and control supplies consumption with the ribbon odometers, etc.
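As an illustration, those fields could feed a simple pre-flight check before a job is sent. The readiness rule below is purely an assumption for the sketch; the real status and error values would come from the PrinterStatusInfo object shown above:

```java
// Illustrative pre-flight check based on the kind of values read above.
// The thresholds and state names are assumptions, not SDK definitions.
final class PreFlightCheck {

    static boolean isReadyToPrint(String status, int alarmCode, int errorCode, int jobsPending) {
        boolean noFaults = alarmCode == 0 && errorCode == 0;      // no active alarm or error
        boolean idleOrReady = "ready".equalsIgnoreCase(status)
                || "idle".equalsIgnoreCase(status);               // printer not busy
        boolean queueHasRoom = jobsPending < 5;                   // arbitrary queue limit for the sketch
        return noFaults && idleOrReady && queueHasRoom;
    }
}
```

A gate like this keeps jobs from piling up behind a faulted printer and gives the user immediate feedback instead of a failed job later.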
The use case of this simple app is summarized in the template shown in the image above. The first step is to define the devices to be used; in this case, we decided to use a ZC300 printer with a Zebra TC51 mobile computer running Android 7.1.2. The communication is conducted over WiFi. Remember that the printer is physically connected to the router with an RJ45 connector. The card format is defined with an XML schema that carries the data.
buttonCardNetwork.setOnClickListener(new View.OnClickListener() {
    public void onClick(View v) {
        Thread threadmac = new Thread(new Runnable() {
            @Override
            public void run() {
                try {
                    Connection connection = null;
                    ZebraCardPrinter printer = null;
                    try {
                        ip = ipAddressEditText.getText().toString();
                        System.out.format("Status: %s%n", ip);
                        port = portNumberEditText.getText().toString();
                        System.out.format("Status: %s%n", port);
                        tcpCardHandler.getPrinterStatusOverTcp(ip, port);
                        connection = new TcpConnection(ip, Integer.parseInt(port));
                        connection.open();
                        printer = ZebraCardPrinterFactory.getInstance(connection);
                        if (printer != null) {
                            printthisCard.printCard(printer, uriTransferPath, context);
                        }
                    } catch (Exception e) {
                        System.out.format("Status: %s%n", e.getMessage());
                    } finally {
                        try {
                            if (printer != null) {
                                printer.destroy();
                                printer = null;
                            }
                        } catch (ZebraCardException e) {
                            e.printStackTrace();
                        }
                        if (connection != null) {
                            try {
                                connection.close();
                                connection = null;
                            } catch (ConnectionException e) {
                                e.printStackTrace();
                            }
                        }
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        });
        threadmac.start();
    }
});
Once you select the image to be printed, the app shows a preview of the image, as seen in the picture above. When the “TCP Card Print” button is clicked, the MainActivity calls PrintCardHelper.printCard, passing the printer object that was created and the path of the selected image to the printing helper class.
protected void printCard(ZebraCardPrinter printer, String pathImage, final Context context)
        throws ZebraCardException, ConnectionException, IOException, SettingsException, ZebraIllegalArgumentException {
    ZebraGraphics graphics = null;
    try {
        List<GraphicsInfo> graphicsData = new ArrayList<GraphicsInfo>();
        graphics = new ZebraCardGraphics(printer);
        generatePrintJobImage(graphics, graphicsData, pathImage);

        int jobId = printer.print(1, graphicsData);
        pollJobStatus(printer, jobId, context);

        final JobStatusInfo jStatus = printer.getJobStatus(jobId);
        this.runOnUiThread(new Runnable() {
            public void run() {
                Toast.makeText(context.getApplicationContext(), "Job completed: " + jStatus.printStatus, Toast.LENGTH_LONG).show();
                System.out.format("Job Completed: %s%n", jStatus.printStatus);
            }
        });
    } finally {
        if (graphics != null) {
            graphics.clear();
            graphics.close();
        }
    }
}
In the PrintCard method, the first method called is the generatedPrintJobImage that builds the layers, and in this case, it is only one layer. The second method that is called is “printer.print” and it will send the print job to the printer, and, in response, it returns an integer that is captured by JobID. The final method called is the “pollJobStatus” which will be monitoring the print job until this is finished correctly, or, stopped by any error that is generated by the printer during the process (No cards, ribbon exhausted, etc).
@Override protected Void doInBackground(Void... voids) { TcpConnection connection = null; try { connection = getTcpConnection(ipAddress); connection.open(); Map<String, String> discoveryDataMap = DiscoveryUtilCard.getDiscoveryDataMap(connection); String model = discoveryDataMap.get("MODEL"); if (model != null) { if(!model.toLowerCase().contains("zxp1") && !model.toLowerCase().contains("zxp3")) { printer = new DiscoveredCardPrinterNetwork(discoveryDataMap); } else { throw new ConnectionException(weakContext.get().getString(R.string.printer_model_not_supported)); } } else { throw new SettingsException(weakContext.get().getString(R.string.no_printer_model_found)); } } catch (Exception e) { exception = e; } finally { ConnectionHelper.cleanUpQuietly(null, connection); } return null; }
For this testing purposes, the printing portion was executed in a thread directly from the MainActivity and USBDiscoveryAndPrintExample which is useful if you want to quickly proof the concept or demo. However, it's recommended that your production apps implement the recommended best practice that executes the printing portion on the background of the app as shown in the above image (Screenshot taken from our SDK sample code that come with Multiplatform SDK).
// Catches intent indicating if the user grants permission to use the USB device) { hasPermissionToCommunicate = true; } } } } } };
For the USB communication to the printer, this sample code was tested with a Samsung Galaxy 5 and Android 6.0.1. USB printing requires that you implement a BroadcastReceiver, so the USB permission is granted when requested, you can implement without asking the user to click on the button.
Once the permission is granted, you can connect the USB port (please, remember that it was tested only with a OTG cable with Samsung Galaxy 5 and Android 6.0.1). The sample code does not support USB-C or USB type C port for USB communication with OTG cable to communicate to ZC300 printers.
// Print button click buttonPrint.setOnClickListener(new View.OnClickListener() { public void onClick(View v) { Thread threadmac2 = new Thread(new Runnable() { @Override public void run() { try { Connection connection = null; ZebraCardPrinter printer = null; try { if (hasPermissionToCommunicate) { try { connection = discoveredPrinterUsb.getConnection(); usbHelper.getPrinterInfoOverUsb(connection); connection.open(); printer = ZebraCardPrinterFactory.getInstance(connection); if (printer != null) { printthisCard.printCard(printer, transferredImage, context); } } catch (final Exception e1) { UsbDiscoveryAndPrintExample.this.runOnUiThread(new Runnable() { public void run() { Toast.makeText(getApplicationContext(), e1.getMessage() + e1.getLocalizedMessage(), Toast.LENGTH_LONG).show(); System.out.format("Exception: %s%n", e1.toString()); } }); } } else { UsbDiscoveryAndPrintExample.this.runOnUiThread(new Runnable() { public void run() { Toast.makeText(getApplicationContext(), "No permission to communicate", Toast.LENGTH_LONG).show(); System.out.format("Exception: %s%n","No permission to communicate"); } }); } } finally { try { if (printer != null) { printer.destroy(); printer = null; } } catch (ZebraCardException e) { e.printStackTrace(); } if (connection != null) { try { connection.close(); connection = null; } catch (ConnectionException e) { e.printStackTrace(); } } } } catch (Exception e) { e.printStackTrace(); } } }); threadmac2.start(); } });
The sequence for USB would be the same that was implemented with the TCP connection once the communication was created, the printer was connected, and the CardPrint class was loaded.
We hope you have enjoyed this new blog series that will allow you to speed up your knowledge and integration of the new card Printer SDKs with your apps.
The sample code can be downloaded from this GitHub link.
If you have any question or comments, please, post your comments below. | https://developer.zebra.com/community/home/blog/2018/09/12/blog-series-developing-apps-in-android-c-with-the-zc300-series-printers | CC-MAIN-2019-04 | en | refinedweb |
IE.au3 Alert stops executing script
By
ADIN, in AutoIt General Help and Support
Recommended Posts
Similar Content
- By Jemboy
Hi,
At work we have some proprietary website, users have to login to.
I have "made" an autoit executable to start IE, go the website, login, so the user do not have input their credentials every time.
By NDA I am not allowed disclosed the URL of the website nor the login credentials
So I made a fake website and an autoitscript to illustrate my question.
#include <ie.au3> $oIE = _IECreate ("about:blank", 0, 1, 1, 1) $HWND = _IEPropertyGet($oIE, "hwnd") WinActivate ($HWND,"") WinSetState($HWND, "", @SW_MAXIMIZE) _IENavigate ($oIE, "",1) The above start my demo website. The actual website has some links in the footer that I do not want most people click on.
I contacted the developers of the website and they are thinking of making an option to configure what links to show in the footer, but they said it's not a high priority for them.
I discovered, that by click F12 and deleting the <footer> element the footer is delete from the live page view (until the page is reloaded off course)
I want to automate the removal of the footer, without using things like send().
I tried getting the footer with _IEGetObjById and deleting it with _IEAction, but that didn't work.
Does any one has an idea how I could delete the footer directly from view with an autoit script?
TIA, Jem.
- Dequality
I don't really have any code cuz whatever i try it doesnt work -.-
Can anyone please make me a sample i can study? <,< i need to make a simple script to open explorer click some objects on the site , rinse repeat.. a example with autoit' website is just fine i just need to study it.. the i can't seem to find any yt video on this.. '-_-
And yes i tried looking at the manual .. but i dont get anything from it <.<'
Any help highly appreciatet <3
- By XinYoung
Hello again
I need to click this Submit button, but it's in a lightbox and has no ID (i think) and I can't figure out a way to target it. _IEFormSubmit doesn't work.
Thank you
- By lenclstr746
<span class="fr66n"><button class="dCJp8 afkep coreSpriteHeartOpen _0mzm-"><span class="glyphsSpriteHeart__outline__24__grey_9 u-__7" aria-</span></button></span>
I want to click this button ? | https://www.autoitscript.com/forum/topic/186909-ieau3-alert-stops-executing-script/ | CC-MAIN-2019-04 | en | refinedweb |
Hi I've been taking a python course and now that were done the professor wants us to write a program to list all the prime numbers from 2 - 10000. Oh yea and it has to be in C. So I am very stuck and really need help. Heres the assignment:
Write a C program to compute and display all of the prime numbers which are less than 10000 using the technique known as the "Sieve of Eratosthenes". It works as follows:
Start with a list of all candidate numbers up some number N (in this case, integers from 2 to 9999). Set a variable "p" to the first prime number, 2. While p^2 (p squared) is less than N, repeat the following: (1) starting from p^2, calculate all of the multiples of p less than or equal to N and eliminate them from the list; and (2) set p to the next number still on the list. This is the next prime number.
When you are done, the only numbers remaining on the list will be prime.
Heres what I've done so far, Thanks in advanced for the help!
#include <stdio.h> char numlist[10000]; int main(void){ int i,p,n; for (i=2; i<10000; i++) { numlist[i]=1; } p=2; n=10000 while (p*p<n) { for (p*p<=n; n<10000; p++) { numlist[p]=0; } } | https://www.daniweb.com/programming/software-development/threads/283716/prime-number-compiler-help | CC-MAIN-2019-04 | en | refinedweb |
ALFaceCharacteristics API¶
NAOqi People Perception - Overview | API
Namespace : AL
#include <alproxies/alfacecharacteristicsproxy.h>
Method list¶
- As any module, this module inherits methods from ALModule API.
- It also inherits methods from ALExtractor API.
- It also has the following specific methods:
Event list¶
ALMemory Key list¶
Methods¶
- bool
ALFaceCharacteristicsProxy::
analyzeFaceCharacteristics(int id)¶
Performs the face analysis on a given person using the ID. This method can be called even if ALFaceCharacteristics is not running in order to update the ALMemory data of this person only.
- float
ALFaceCharacteristicsProxy::
getSmilingThreshold()¶
Returns the current smiling threshold value above which the event
FaceCharacteristics/PersonSmiling()is raised.
Events¶
- Event:callback(std::string eventName, int id, std::string subscriberIdentifier)¶
"FaceCharacteristics/PersonSmiling"
Raised when a person has a smile value above the current threshold (default = 0.7).
ALMemory Keys¶
- AL::ALValue
PeoplePerception/Person/<ID>/AgeProperties¶
Age estimation of a person in the form [age, confidence]. The age is limited to the interval [0, 75] and the confidence is in [0, 1].<ID> is the ID of the person as defined by the ALPeoplePerception API.See also:
PeoplePerception/PeopleList().
- AL::ALValue
PeoplePerception/Person/<ID>/ExpressionProperties¶
Estimation of the facial expression on a person’s face. The associated data is an array containing the detection score of the following five expressions: [neutral, happy, surprised, angry or sad].
Each score ranges from 0 and 1 and represents the probability that the person expresses the corresponding expression. As such, the sum of all five properties is equal to 1, except if the detection failed (in that case all five values are set to zero).
- AL::ALValue
PeoplePerception/Person/<ID>/FacialPartsProperties¶
Location of the main face features: eyes, nose, and mouth. The associated ALValue is structured as follows:
[ [[x_0, y_0], c_0], # Left eye (center point) [[x_1, y_1], c_1], # Right eye (center point) [[x_2, y_2], c_2], # Mouth (center point) [[x_3, y_3], c_3], # Left eye (inner limit) [[x_4, y_4], c_4], # Left eye (outer limit) [[x_5, y_5], c_5], # Right eye (inner limit) [[x_6, y_6], c_6], # Right eye (outer limit) [[x_7, y_7], c_7], # Mouth (left limit) [[x_8, y_8], c_8], # Mouth (right limit) [[x_9, y_9], c_9], # Nose (left limit) [[x_10, y_10], c_10], # Nose (right limit) [[x_11, y_11], c_11] # Mouth (upper limit) ]
Where:
- [x, y] are expressed in pixels in the image coordinates.
- c is the confidence value in the range [0, 1].
Note that the “left” and “right” are from the camera point of view (i.e. left eye in the picture corresponds to the user’s right eye).
- AL::ALValue
PeoplePerception/Person/<ID>/GenderProperties¶
Gender estimation of a person in the form [gender, confidence]. Gender is either 0 for female or 1 for male and the confidence is in [0, 1].
- AL::ALValue
PeoplePerception/Person/<ID>/SmileProperties¶
Smile degree estimation of a person in the form [smile degree, confidence]. The smile degree is a floating point value between 0 and 1 where 0 means “no smile” and 1 is a large smile. The confidence is in [0, 1]. | http://doc.aldebaran.com/2-4/naoqi/peopleperception/alfacecharacteristics-api.html | CC-MAIN-2019-04 | en | refinedweb |
I am writing a distributed systems program in python, and in this program I need to know the local ip-address... eg. 168.192.... 10.13..... etc, not 127.0.0.1 or hostname.
I have found following solution on stackoverflow
import socket socket.gethostbyname(socket.gethostname())
When I run this on my Debian jessie laptop, I get 127.0.0.1, however, my friend who is using Fedora on his machine gets the desired ip address when he runs the same script (eg. 168.192..)
I prefer to use the solution above if possible to get the local ip address, because most of the other proposed solutions are either hacks or incredible long...
Is there something I can setup on my Debian jessie laptop such that python will return the correct local ip-address?
- using debian jessie
- python 3
- is connected to network (.. and computer is turned on)... ;)
- /etc/hosts currently contains:
127.0.0.1 localhost
127.0.0.1 debianASUS
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
/etc/hosts? – Michael Hampton♦ May 7 '15 at 17:19 | https://serverfault.com/questions/690391/finding-local-ip-addresses-using-pythons-stdlib-under-debian-jessie | CC-MAIN-2019-04 | en | refinedweb |
ES6, ECMAScript 6 or ES2015 is the latest specification for JavaScript which introduces some syntactic sugar to the language. It's a big update to the language and introduces a lot of new features
More details on Node and ES6 can be found on their site
The whole ES6 spec is not yet implemented in its entirety so you will only be able to use some of the new features. You can see a list of the current supported ES6 features at
Since NodeJS v6 there has been pretty good support. So if you using NodeJS v6 or above you can enjoy using ES6. However, you may also want to use some of the unreleased features and some from beyond. For this you will need to use a transpiler
It is possible to run a transpiler at run time and build, to use all of the ES6 features and more. The most popular transpiler for JavaScript is called Babel
Babel allows you to use all of the features from the ES6 specification and some additional not-in-spec features with 'stage-0' such as
import thing from 'thing instead of
var thing = require('thing')
If we wanted to create a project where we use 'stage-0' features such as import we would need to add Babel as a transpiler. You'll see projects using react and Vue and other commonJS based patterns implement stage-0 quite often.
create a new node project
mkdir my-es6-app cd my-es6-app npm init
Install babel the ES6 preset and stage-0
npm install --save-dev babel-preset-es2015 babel-preset-stage-2 babel-cli babel-register
Create a new file called
server.js and add a basic HTTP server.
import http from 'http' http.createServer((req, res) => { res.writeHead(200, {'Content-Type': 'text/plain'}) res.end('Hello World\n') }).listen(3000, '127.0.0.1') console.log('Server running at')
Note that we use an
import http from 'http' this is a stage-0 feature and if it works it means we've got the transpiler working correctly.
If you run
node server.js it will fail not knowing how to handle the import.
Creating a .babelrc file in the root of your directory and add the following settings
{ "presets": ["es2015", "stage-2"], "plugins": [] }
you can now run the server with
node src/index.js --exec babel-node
Finishing off it is not a good idea to run a transpiler at runtime on a production app. We can however implement some scripts in our package.json to make it easier to work with.
"scripts": { "start": "node dist/index.js", "dev": "babel-node src/index.js", "build": "babel src -d dist", "postinstall": "npm run build" },
The above will on
npm install build the transpiled code to the dist directory allow
npm start to use the transpiled code for our production app.
npm run dev will boot the server and babel runtime which is fine and preferred when working on a project locally.
Going one further you could then install nodemon
npm install nodemon --save-dev to watch for changes and then reboot the node app.
This really speeds up working with babel and NodeJS. In you package.json just update the "dev" script to use nodemon
"dev": "nodemon src/index.js --exec babel-node",
JS es6 (also known as es2015) is a set of new features to JS language aim to make it more intuitive when using OOP or while facing modern development tasks.
Check out the new es6 features at - it may clarify to you if you really intend to use it on your next NodeJS app
Check the compatibility level of your node version at
If all is ok - let's code on!
Here is a very short sample of a simple
hello world app with JS es6
'use strict' class Program { constructor() { this.message = 'hello es6 :)'; } print() { setTimeout(() => { console.log(this.message); this.print(); }, Math.random() * 1000); } } new Program().print();
You can run this program and observe how it print the same message over and over again.
Now.. let break it down line by line:
'use strict'
This line is actually required if you intend to use js es6.
strict mode, intentionally, has different semantics from normal code (please read more about it on MDN -)
class Program
Unbelievable - a
class keyword! Just for a quick reference - before es6 the only way do define a class in js was with the...
function keyword!
function MyClass() // class definition { } var myClassObject = new MyClass(); // generating a new object with a type of MyClass
When using OOP, a class is a very fundamental ability which assist the developer to represent a specific part of a system (breaking down code is crucial when the code is getting larger.. for instance: when writing server-side code)
constructor() { this.message = 'hello es6 :)'; }
You got to admit - this is pretty intuitive! This is the c'tor of my class - this unique "function" will occur every time an object is created from this particular class (in our program - only once)
print() { setTimeout(() => // this is an 'arrow' function { console.log(this.message); this.print(); // here we call the 'print' method from the class template itself (a recursion in this particular case) }, Math.random() * 1000); }
Because print is defined in the class scope - it is actually a method - which can be invoked from either the object of the class or from within the class itself!
So.. till now we defined our class.. time to use it:
new Program().print();
Which is truly equals to:
var prog = new Program(); // define a new object of type 'Program' prog.print(); // use the program to print itself
In conclusion: JS es6 can simplify your code - make it more intuitive and easy to understand (comparing with the previous version of JS).. you may try to re-write an existing code of yours and see the difference for yourself
ENJOY :) | https://sodocumentation.net/node-js/topic/5934/node-js-with-es6 | CC-MAIN-2021-10 | en | refinedweb |
- Advanced Use of AWS CloudFormation
Lifecycle Demos
Transcript
Welcome to CloudAcademy's Advanced Amazon Web Services CloudFormation course. Today we'll be doing a demonstration of how we can tie some of the techniques we've already learned into a continuous integration system with the aim of improving our DevOps environment.
Since we're doing a demo today, we won't be in the slideshow very much, so let me hop out and go over to an image so we can explain what we're looking to do today.
So if we recall, when we were looking at our DevOps maturity plan, we had this region where we're looking at advanced infrastructure, and advanced build and deploy, and advanced testing. Now if we look, we have a couple of goals that seems like they would work well for CloudFormation, like test systems are current. I don't have to think about builds and our deploys always work, and I'm comfortable with major cloud-wide changes. Now to enable this, we need to install a test module where we're able to launch a full CloudFormation stack and run unit or integration tests on the new stack before we promote the stack to a stage or a production environment.
This way whenever a developer is making changes on code, they also have the opportunity to make changes inside of the infrastructure file, or files, in CloudFormation, and have the environment update and be up to date whenever the builds start running, then run their end to end tests inside the continuous integration system before they even have to promote the environment to a shared environment with other developers.
So what we're looking to do here is mostly this end to end testing code portion here, full automatic architecture tests, and fully automatic environment creation. So these three are what we're looking to do today. And what the solution looks like is if we are zoomed into our advanced area here, we're using advanced CloudFormation. We're setting up the portion here that will work for a build or a deploy script. But most importantly, we're working on this portion here where we automatically go from a CloudFormation update next to code into a unit and integration test suite, where we can automatically run tests after the CloudFormation script is run.
The primary challenge that we have is that it's difficult to detect when the CloudFormation script is finished, because it's not a request-response model, and the CloudFormation piece may be long-running. So we need to be able to look for failures during the long-running process that is CloudFormation, but also to be able to detect when the stack is finished building so we can run our tests.
Now if we look at our stack status life cycle, which we went over in one of the previous videos, we see that for create and update stacks (the ones we'll be focusing on primarily today) there are terminal success states, some intermediary not-yet-failed states, and then a lot of failure states where we know that the stack is not going to be able to create correctly. So when we're looking to integrate CloudFormation into our continuous integration workflow, we need to be aware of these environment life cycle phases and be able to detect them, as well as act upon them whenever one of them changes or we see the stack in a certain state.
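To make that detection concrete, the statuses can be grouped into a small classifier. This is a sketch in node.js (the language the demo's poller uses); the status strings are real CloudFormation values, but the exact grouping, such as treating any rollback as a CI failure, is a design choice rather than something the service prescribes:

```javascript
// Classify a CloudFormation stack status into the three buckets a CI
// poller cares about: terminal success, terminal failure, or still running.
const SUCCESS_STATUSES = new Set(['CREATE_COMPLETE', 'UPDATE_COMPLETE']);

const FAILURE_STATUSES = new Set([
  'CREATE_FAILED',
  'ROLLBACK_IN_PROGRESS',   // a rollback means the create has already failed
  'ROLLBACK_FAILED',
  'ROLLBACK_COMPLETE',
  'UPDATE_ROLLBACK_IN_PROGRESS',
  'UPDATE_ROLLBACK_FAILED',
  'UPDATE_ROLLBACK_COMPLETE',
]);

function classifyStackStatus(status) {
  if (SUCCESS_STATUSES.has(status)) return 'success';
  if (FAILURE_STATUSES.has(status)) return 'failed';
  return 'in_progress'; // e.g. CREATE_IN_PROGRESS, UPDATE_IN_PROGRESS
}
```

A CI script can then treat 'success' as the signal to start the test suite, and anything in the 'failed' bucket as a reason to exit immediately instead of waiting out the full timeout.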
So today we'll be building out a system that can tell when a stack is in the create in progress or create complete phase, or if it's already failed, as well as undergo updates. I'll primarily be demoing the create cycle because it's the fastest one for me to demonstrate but the same techniques can be applied to update since we're mostly focusing on how to integrate CloudFormation into an advanced environment.
So looking over at our other diagram that we looked at in our other slideshow, we can see the user automation portion here; it's a person figure inside of this diagram, but today we're going to be creating a system such that this could easily be another computer, i.e. our CI/CD machine.
So we're actually, for this first demonstration, going to be doing two separate notification or status-change integrations. For the primary one, we'll be polling for the status so that the continuous integration machine can just execute a normal-looking BASH script with a CloudFormation wait poller in it, where we use this number five here to detect when the stack is done or has failed. And then we'll be using this number 10 here, where we have CloudFormation publish to an SNS topic, to demonstrate sending an email to operations personnel whenever a stack finishes completing.
So what I have prepared for us today is pretty much all of the demonstration, which includes the script that a piece of automation would use, the stack that we need to write to have the resources be produced, and the tests that we'll be running right after we finish polling for the status change.
So without further ado, let's take a look at the stack that we are creating first so we can get an idea of what we're trying to test. So I've folded the code here. This is a technique that you can use in Sublime text editor to make these little ellipses where I have 15 lines of code hidden here, and this is useful for getting the overlay of a full template. If you're using Sublime Text, the way to do this quickly is on a Macintosh you would do Cmd + K and 3 in quick succession. And if you're on any other platform, you would hold the Ctrl key.
So as we can see, I have a pretty simple sample stack here where I have three different resources. I have a sample Dynamo table, a sample Lambda execution role, and a sample Lambda. So the stack that I'm building is fairly simplistic. We have a Lambda table that will act as our DB or database persistence layer, a role that allows us to write against a Dynamo table, and then a Lambda that actually has code inside of it that lets us do that right to the Dynamo table. So this a serverless API of sorts where we can invoke the Lambda function in order to treat it like an API. So that might be slightly foreign to you, but it's not the important part of this entire system. The important part is that we're going to be able to an integration test. So when we finish executing this stack, what we're going to see is we'll have a finished Dynamo table that has a Lambda trying to write to it that has an execution role that allows the Lambda to write into the Dynamo table.
Our test will be a very simple one. It will be us invoking a Lambda in a request-response model so that the write should finish before we get a result back from Lambda. Then we will verify that that record shows up in the table by doing a direct read off of the database table from the machine running the test. Then we'll clean up the table by deleting the item out of DynamoDB, which will have proved that we were able to use the Lambda to write into Dynamo and that we were able to access DynamoDB properly.
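A minimal node.js sketch of that test, with the three AWS calls injected as plain async functions so the assertion flow stands out; invokeLambda, getItem, and deleteItem are hypothetical names that would, in a real run, wrap the AWS SDK's Lambda invoke and DynamoDB get/delete calls:

```javascript
// Integration test skeleton: write through the Lambda, read the record
// back directly from the table, then clean up after ourselves.
async function runIntegrationTest({ invokeLambda, getItem, deleteItem }) {
  const record = { id: 'test-' + Date.now(), payload: 'hello' };

  // 1. Invoke the Lambda in request-response mode, so the write has
  //    finished by the time the call returns.
  await invokeLambda(record);

  // 2. Assert the record actually landed in the table with a direct read.
  const stored = await getItem(record.id);
  if (!stored || stored.payload !== record.payload) {
    throw new Error('record did not reach the table');
  }

  // 3. Delete the item so repeated runs start from a clean state.
  await deleteItem(record.id);
  return 'ok';
}
```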
So if you can imagine that's a pretty simple test, but the goal there is that it should reflect pretty similarly how you might want to do a very full integration test on a normal system where you might be using RDS for a SQL-style database rather than Dynamo. I only choose this way rather than doing EC2 and RDS and server full things because Dynamo tables and Lambdas create and launch a lot faster, so we're not bored out of our mind waiting for this thing to finish.
So I have my template format version, which is optional. I have my description, which is optional, and I've just put a sample stack for you guys. Let's take a look at the resources that we're actually creating, so you'll be able to verify that it's working if you're following along here. This is simple enough for you to try on your own as well. It's a sample DynamoDB table where I've set the type correctly to a DynamoDB table, and I've provided the minimum possible attributes for me to create a table. That is, I've defined the hash (primary) key attribute and the key schema, where again I just set the ID as the hash key. And then I gave it three read and write units, which is really, really cheap.
After we finish the sample Dynamo table, we should create a role that allows a Lambda function to write to the Dynamo table so we can actually write some logic inside of a Lambda. I've set the type to an IM role. I've made sure that the role depends on the table itself because we'll be using the value of the table's name to assign a policy that restricts the write ability on DynamoDB actions to the specific table from the stack.
So we can see that we have this assume role policy document. This is important for us to assume a role so the Lambda can actually operate using this policy. If I roll that up, path is optional and it defaults to this value, but we see the policies here. We've named the policy similar to Lambda execution role and I've just stuck the stack name in there so we don't have namespace collisions on the policies. I've also written a policy document here where I have the standard Lambda permissions where it just lets me write logs and such. This should be part of your Lambda knowledge if you've ever done that before. But all this is letting me do is put things into CloudWatch logs whenever I need to from the Lambda. But most importantly, I'm adding the ability to do CRUD operations, create, read, update, and delete operations, on the DynamoDB table from the Lambda that will be assuming this role. So I'm allowing it. Then I'm affixing the position to a specific resource because the ref value for DynamoDB does not return the ARN, which is what we need inside of the resource definition for an IAM policy. We are using the table name, which is what has returned from the ref value, and constructing the ARN in the correct format for DynamoDB table. That is, giving it the DynamoDB namespace, then providing the region delineated again by a colon, and providing the account ID before saying it is a table resource type inside of the DynamoDB namespace with a path at the name of the table. Okay, so that's the end of my execution role.
My sample Lambda, which will actually be doing the writing: we make sure that it depends on both the role and the table, because we need the table's name for in-lining into the code in the Lambda, and we need the execution role so the Lambda can assume it. So here in our properties, we have code, which is a zip file. I have actually in-lined this so I don't have to download anything else or provide an S3 key to point to the code files. But pretty simply, we log some data out so we can do some debugging, then we require the Amazon Web Services SDK and make sure that we have a DynamoDB document client to do the write. Then we export this function here, which has some code. First, we log the event that we receive, then we set the table variable value to the table name of the DynamoDB table, then we close off the string inside node, then we perform the DynamoDB put. So we can see that we provide the item, which is the event (it's what we invoke the Lambda with), then we provide the table name, which we have computed based off of this join function here, and we either error handle if there's an error doing the put, or we just return with success.
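A reconstruction of that in-lined handler logic, with the document client and table name passed in so the write path can be exercised without AWS; in the actual template the client is a new AWS.DynamoDB.DocumentClient() and the table name is joined in by CloudFormation, and the callback-style signature here is an assumption for illustration:

```javascript
// Factory for the handler described above: put the incoming event into
// the table, then report success or pass the error through.
function makeHandler(docClient, tableName) {
  return function handler(event, context, callback) {
    console.log('event:', JSON.stringify(event)); // log for debugging
    docClient.put({ TableName: tableName, Item: event }, (err) => {
      if (err) return callback(err); // surface put failures to the caller
      callback(null, { ok: true });  // the event is now a row in the table
    });
  };
}
```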
Of course, we need to have a description for the Lambda. We define index.handler as the handler because that's exports.handler on line 29 here. Then we provide the minimum memory size possible so that we're not spending too much money. Then we give the Lambda the appropriate role that we just created. After specifying the correct runtime and a long timeout, we have finished the resources block of our template. None of that should have been super unfamiliar. You may be learning new things about the specific resource types for Dynamo and Lambda. However, you should already understand what I'm doing with the fn ref function, and the getAttribute, and those kinds of things.
Then for my outputs, I need for my test system to be able to easily grab the outputs of the ARNs or the names for the table and the Lambda so I can run my tests. I need the Lambda name so I can invoke the Lambda to test it, and then I need the table name so I can test my assertion that the Lambda actually wrote to the table.
So again, we have three resources and two outputs, and we're going to integrate a continuous integration system to test that this stack works, full stack, that we can actually run all the way through the compute layer and into the database with the correct permissions.
Okay, so first and foremost, what we're going to create is something that looks like this in our test environment, which can either be locally when I'm running tests if I have appropriate permissions for Amazon. Or this could be inside of your continuous integration system. This could be a snippet of code that you run in your CI. Because it's BASH, it's extremely portable and you can run it either locally or in your CI. We're actually just going to run it locally today because I am not going to set up a continuous integration system in front of the class because I don't want to pick continuous integration system and stick with it. That is, I don't want to make it seem like I'm advocating this particular one. But this will work in any system you want, like a circle CI, a Worker, a Jenkins, or anything.
So first definitely set your set E, so this script will error out if there's any error anywhere, which is exactly what we want. If the tests fail, we want the script to exit. This is just a BASH trick to figure out where the script is executing from, so I can use relative file paths in the rest of this script. The standard AWS CLI command, which you may or may not have learned from the basic CloudFormation course, is just AWS CloudFormation create stack. We'll just be doing create today like I said earlier. We provide the stack name, so this is the English name that we'll be using and the stack name that will show up inside of the console in the line item whenever I do my list stack operations. Then I provide the template body as a file B, which means a binary file that just means stream the content off of disk and pipe it to Amazon as the template body. Then I provided the IAM capability and this is only important if you're writing a CloudFormation stack that has the IAM capability need. That would be any stack that creates other stacks, or creates IAM entities or manipulates IAM entities.
So now we have the important piece here. The advanced techniques are that we are creating a polling function here. Here the relative path is an index because we're calling it wait, CFN wait, for CloudFormation wait. This could be your own module or this could be a package off of the NPM or a Python package, or anything. All you really need is some sort of line of BASH that invokes some scripting that waits until the stack finishes completing. Now because we're trying to do a walkthrough today, I'll actually show you what the code looks like for node.js, but you can write this in any language that you want. I'm just trying to teach you the conceptual way that you would go about doing this.
So here my interval is the number of seconds that I should wait before doing polls. And by polls, I mean the polls that we were looking at here. I'm defining the duration between each of these polls and how many of them I should do right here. And I'm providing the stack name that I had created during this phase, excuse me, and I'm providing the stack name that I had created during this phase here. So we're doing these four actions occur when this happens here, line 13 through 19, then 21 through 22 is us waiting for... We're doing polling and we're waiting for six, seven, eight, and nine to finish.
Once those are done and I've gotten an acknowledgement back from the poll for status, I will continue to the next line of BASH. So here if I exceed my interval or my max, my script that I wrote, which I'll show you the source code for in a moment here, will timeout. And if it doesn't timeout and it finishes completely, and it has a good state, that is if we land in because we're doing a create, if we land in the create complete state while we're doing this line here, we're doing our wait function and we detect that the status has entered a positive status, then we will execute our tests.
So again, this could be any line of BASH that just runs your tests. If you're working in Python or you're working in Ruby, or anything else, the idea here though is that we're just doing a creation or an update, or a delete even, then we have a wait function that executes this polling operation or behavior that we're talking about while the rest of this region of the flowchart finishes. Then we run the tests after we've closed this full loop. So that would be after this entire life cycle here finishes and we've detected that it finished by doing a final poll, then we run this. We don't want to run this unless we're already in create complete.
So what does the source code actually look like for this wait function? Let's go take a peek. So when we're doing the wait function, again like I said, this is a node.js. I'll explain symbolically. First in any language that you're working in, import or acquire in the Amazon Web Services SDK, if you're working in a language that does not have a supported SDK, you can just use the HTTP API to do this with the equivalent actions. The SDK just makes the code a little bit cleaner and easier to read. This is simply a little helper function that I wrote to get a key-value hash of the argument values on the command line, so it will work in this format, so it can pass the stack interval and max values. Then we have the ping interval stack and max that are just interpreting the arguments like you saw me invoking in the test script. I configure or set up the CloudFormation namespace off of the AWS SDK. This is just what it looks like in node, but you can do the equivalent in your language. I detect if I have a null or an empty string for the key that is stack off of the command line. Again, you can do this in any scripting language or any language that is going to let you invoke it through BASH. This is just what it looks like in node. Then for the Amazon Web Services node.js SDK for CloudFormation, it requires a hash, that is a key-value pair here, with only one key-value inside of it. This is extremely similar to the post body for the HTTP API if you want to go look that up as well. I just provide the Pascal cased stack name as the key, then the actual stack name. So this stack name here in the ping request parameters, it will be invoking the API with the double dash stack parameter, which we can see as defined here on this line.
So once I've done some of that housekeeping and I've just prepared my function invocation, polled all the arguments off of the command line, and set it up so I've put my ping counter to zero, I do a little bit of a console.log to provide feedback to the user, and I just immediately run a ping. So a ping, it doesn't really mean a ping from the BASH perspective where you're doing a timing on the request-response, but what we are doing is we're checking the state of the CloudFormation stack that we just created here on line 15 where we have defined the stack name. Similarly, I'm also passing the same stack on line 22. So when we're thinking about index.js, we should be thinking about a stack that's in the middle of being created.
Now the way that we detect if a stack is created or not using the SDKs is this operation called describe stacks. Unfortunately, there's not a single stack get operation that lets us check the state of a single stack. So we need to do this array-based method. But we can still accomplish everything that we want. So again, this is the node.js way using asynchronous programming and a function callback, but really all we're doing is we're saying whatever SDK we're using, invoke the CloudFormation describe stacks action, hell you could even do this in BASH, with the correct parameters. That is, you give it the stack name. Then in node, we have this callback where the continuation happens here and we're passed either the error of the continuation or the data of the successful request is passed in.
But long story short, if you were working in a language where this is represented as synchronous code and not a continuation call back, you might try catch and catch this error, and then have the other healthy path be you setting or storing the value of the describe stacks operation as data. But long story short, if there was an error, this error will be non null and have some value that I can console log. If it's fine, then the data will be an object in JavaScript.
So I need to do that quick check, and I can just process.exit one so we fail our test script if we get an error when we're trying to ping for CloudFormation. Otherwise, I need to do a quick check on the data because we're doing this nasty array-based operation, there is not a single get. I need to check the data property and make sure we get a well-formed stacks object back when we did a describe stacks. So off of the root of the object, it looks like a JSON object, right? Off of the root of the object one of the first properties is stacks, and stacks should be an array of stacks that we're describing. Because we provided the stack name, we should expect the data.stacks to have a length of one. So here in node.js, we can just do this check because the zero will be false C if we check data.stacks.length. And that returns zero, then we'll get this value will be false, and then we'll get a true because we've negated the entire compound statement here. So basically if we don't find a stack element in the data.stacks property, we'll also fail out. That is, we didn't find the stack even we were able to correctly ping CloudFormation.
So our else clause here is we were able to correctly ping and get some information back from CloudFormation. First, to make it a little more readable, I just accessed the actual stack that we're looking for. This data stacks and first element in the array will be a JSON object representing the stack itself, and the stack itself has a top-level property for that object type that is called stack status, which is directly mapping to this stack status that we're talking about here.
So when we're running our ping cycle, we're trying to see if this stack status here on line 76 ever becomes create complete, then we succeed. If it ever becomes anything in this error category, then we fail. If it's still create in progress, then we need to run another ping, and we do a little bit of waiting conditions there where we effectively sleep the function.
So let's run our little switch statement. JavaScript allows you to do string-based switch statements, which is what I'm doing here. So I'm just saying based on a value of the stack status here, which should be something on this entire grid here, then do some operations. I have included the update and delete life cycle states because this script needs to not explode if somebody in the middle, for instance, between this line of BASH and this line of BASH, if someone manually goes and alters something, then I need to be able to handle the edge cases where we're no longer just doing a create. And perhaps somebody has canceled the operation and commenced a deleted. So we'll see all of the possible statuses, not just the ones that we would expect to see during a create operation.
So first, I enumerate or list out all of the positive dynamic states. So mean the I'm not done yet and nothing is wrong. The one that we would expect during a create cycle would be create in progress, but I'm also including update in progress and update cleanup in progress because those are also positive in the update cycle. If we get a "Oh gosh, we're not done yet" back, then we should provide some feedback to the user, increment our counter somehow as we go through the pingbacks count. Then if we are at the top of our counter value, then we need to just error out because we've hit a timeout, otherwise we should just do another request later.
So again, JavaScript is asynchronous. This is the equivalent of doing a sleep operation where ping intervals in seconds and the 1,000 converts the units to milliseconds. JavaScript operates over milliseconds, so all I'm going here is effectively saying in ping interval times 1,000 milliseconds, we need to run the ping function again, which makes perfect sense given that the ping function is the thing that checks the status or runs our little periodic polling.
Okay, so if we're not still okay in waiting, we could also be done already, in which case we need to have our stable positive end states listed out. Those stable positive end states are just create complete, and update complete for when we're doing an update like this. Both of these states will mean that we are now able to actually run our tests and do things like, for instance, in our case try invoking the Lambda function.
So now we also need to be able to handle failure cases. We effectively list out every other state that aren't the positive ones, or the continuation ones. We list out all of the other possible cases, then we just want to exit with a non-zero error code so BASH knows to trip this set E error and not run our next line of BASH and run the tests. That is, we don't want to try running the tests on the CloudFormation stack if it was, of course, unable to create anything inside the stack. This default is just a catch-all so we don't get a bunch of uncaught exceptions in node if somebody tries to do something silly, but we should never get any unrecognized states back because the Amazon Web Services CloudFormation should not reply with any other strings except for the ones I have listed here. Of course, this is that command line arguments function I was talking about earlier, and this is just to coerce strings into integer values so I can do things like setting the interval in the max inline in a BASH statement.
Okay, so long story short, I just check if the stack is done. I recurse if it's not, but there's no problems yet. I exit with a success if it's done, and I exit with an error if it's anything besides just a continuation or a success. So we've gone over the template, we've gone over what the invocation should look like and why we want to do it because we want to be able to run end to end tests inside of our continuous integration system, which likely speaks BASH. Then I've showed you an example and gone over a little bit symbolically what is happening in the logic for doing a poll function.
Let's look at what our test script is actually going to do. First, it's going to describe the stacks based on the stack name that's passed in here. Then our test, if there's an error, it will of course just exit out. It will check if the stack has found similar like we just saw where we had to do a stack data stack zero here before on the other file in the index. This is a trick to turn the output value array, the stack API returns for CloudFormation. It returns them as an array of key-value hashes where the output key and the output value are defined independently. I want this into not in array format, but a hash format so we can do this kind of access here. So I just convert everything into hash format. I then try to invoke the Lambda function using the Lambda name output that I should be getting from my stack here. I do a request-response style invocation so that the BASH line, excuse me, so that the function will finish the execution of the Lambda before replying back. Then I say log type none because I don't want to manually inspect the logs that come back from the invocation. And I send it the stringified version of the test object payload, which is just this FUBAR with a testable a prop "Hello world" so we can check that the same piece of data is written back to the table.
Now, given that we looked at our Lambda function code inside of the CloudFormation template, we know that the Lambda function is just going to execute a put against the DynamoDB table. So we should be getting this error here will be non-null and have some content if we have a problem doing that put inside of the Lambda function. Otherwise, it means that the Lambda believes that it succeeded so we can test. If the Lambda actually did succeed, we should be able to get the record from DynamoDB, which is exactly what I'm doing here. I just go and I run a get function here. And my continuation I should see if I get an error. If I'm unable to see the object that I'm looking for, and I get some sort of error, then, of course, I'm going to exit zero and my test script fails. Otherwise, if I do see that I got my object back, then I want to run a test on it and make sure that the testable prop, which was equal to "Hello world" is still equal to testable prop when I get the data back. If it's not, I should exit again. And if it is, I should clean up the record so I can use the created stack and actually continue operating over things. For instance, if I need to move this into production or if I need to immediately start using this as a staging environment, I don't want a dumb record in there. Dummy record rather, where we have nonsense for our ID and nonsense for the defined property with "Hello world." So I then delete the record and just make sure that I'm able to again delete it and I don't have an error. If I'm able to delete it, then I have fully done an end to end test of my entire database system, plus API locally, without having any architecture created in the first place.
So to give you an idea of how cool this is, this means that any developer at any time they make a commit, if we include this inside of our testing script and we give our CI the ability to create these CloudFormation templates, then the CI on every single commit can spin up a completely new stack and run a very extensive test suite that fully tests everything.
Another way that you can use this is instead of doing every single commit, you can just when it's time to promote from master to production, or from a staging environment to production, you may want to run final verification tests to make sure that your staging code can handle the load and can do certain failover testing, which you can also do automatically testing every time you do a commit, but perhaps your use case that would be prohibitively expensive or slow.
What you can do with this system is just make a one-off script where before you do a promotion, you can set up your own testing suite where you run a load test rather than, for instance, this is a code test here or a functionality test from end to end, we can also do some sort of load testing script here on line 26.
So our capabilities are pretty much unlimited when we start realizing that we can completely headlessly and without observing it at all, let an entire cloud infrastructure build itself out, test itself, and then if we're just trying to do testing, we can actually run a line down here where CloudFormation goes and kills itself too. So we can run the full test and destroy it, and invoke almost nearly no cost because we're deleting the stack within a couple of minutes after we've created it.
So let's go see what this actually looks like when we run it. First I'm going to more manually set a few things so I can make sure that this runs correctly during the demonstration. The first things I'm going to do is set the region to US West 2, since my default region on this local machine is not that value. Then I'm also going to set my profile. Okay, so now that we can actually make sure that this will work, let's try running it.
Okay, so the first thing we should notice is that our execution where we did our request-response and ran this operation actually worked immediately because we got the stack errand back with the correct region. We should be able to tab over to the console here and see that there's actually a create in progress and view the stack events where we're actually creating the resources associated with the stack that we looked at a minute ago.
So we can see that the resources are creating, we can go and audit the template and make sure it's working correctly. And we didn't use any parameters here, but during your CICD deploy, this might be helpful to see what the system is automatically creating, since most more complicated stacks will actually have parameters. We were just keeping it simple.
So while we're doing that, we can also do another demonstration where we're going to tie into this SNS event that we were talking about where, rather than waiting and just effectively polling, which is what we're doing while we're refreshing, we can also tie in and get an email when the stack is finished. So I'm going to create this while we go and review. Now in advanced resources, we'll do my work email, we can create a topic and we should expect that stack events are emailed to my email. We could also use a Lambda function to listen to this. I don't need to set any of the other advanced options, but I do need to give that new IAM capability and hit create.
So now we're waiting for me to get that email. We saw that we got the stack create complete. We went from 20:59:52 here. It took us a little more than a minute to finish creating the stack there. Let's go look at the output of our BASH function. So we saw that we had this beginning, whoops excuse me. So we saw that we had the beginning of the stack ping wait cycle, where every 15 seconds we were running a check, right? We saw that we got the create in progress status back because we had this ping looping cycle with the asynchronous requests. Then we saw finally after four of these pings, which we should expect because we saw we did the delta of the time between when it was created and finished. We see that it took us about a minute to detect via this polling method that stack finished, exited with a success, and then started running an integration test, which we also reviewed where we found the outputs. That is, we found the name of the table and the name of the Lambda that was generated inside of the stack. We invoked the Lambda using our invoke request-response model where we actually put the object into the DynamoDB table. Then we got the object back during a get function, checked that the properties matched, then correctly were able to delete out of the stack. So this would be analogous to you creating a Rails application and putting an RDS database behind it and then actually going and checking your controllers after the full stack had been finished creating.
So let's go and look inside of the DynamoDB table and just make sure that that has been deleted. That is, let's make sure that our delete function worked like we expected. So we have our CA sample one that we just created. If we go look inside of items, we can see that there are actually no more items, which is exactly what we wanted. So our test passed and then it cleaned up after itself, which is great.
So if we look back in the CloudFormation management console, we see that CA2 is finished. We should expect that our SNS integration that takes advantage of this broadcast should have broadcasted back to me, the user, via an email. So we can actually see that we were able to subscribe to the SNS topic here. We got undefined because I left the blank blank when we said define topic. This is actually completely valid. It's just weird looking. But you would normally name this with a semantically meaningful string. I could put this as "My CA demo stack" or something.
I'm actually not going to confirm the subscription here. I'm going to do the cleanup and just delete both the stacks, since we actually went through the entire demonstration here. So what we found is that now we actually have the ability to create these CloudAcademy 1 scripts extremely quickly and do an integration test on them. So I can actually do another one of these, right? So say, for instance, I want to create a whole bunch of these at once, I can just do a copy here, change this to CloudAcademy 2, create another tab, and do the same thing.
I skipped ahead so the CloudAcademy 2 and 3 are done. We can actually see that we were able to create two additional stacks. This would be like using concurrency inside of your continuous integration or continuous deployment system, and it should be pretty interesting to you that you can create three of these stacks in parallel and test them independently. These could be different versions of commits. I've used the same ones just because of the restrictions on complexity for doing this screen recording here, but this could be a very sophisticated testing system where you're doing full-stack testing automatically on entire infrastructures at once.
So thus ends our demonstration of how we can integrate continuous integration systems in CloudFormation. Hopefully, you learned a thing or two about techniques used to both poll or do event-driven systems for notifications.
For our next video in the course, we'll be going over nested stacks, a technique that you can use to create increasingly complex models, but still have a one-step deploy. | https://cloudacademy.com/course/advanced-aws-cloudformation/lifecycle-demos/ | CC-MAIN-2021-10 | en | refinedweb |
Saving and loading models in TensorFlow — why it is important and how to do it
So much time and effort can go into training your machine learning models. But, shut down the notebook or system, and all those trained weights and more vanish with the memory flush. Saving your models to maximize reusability is key for efficient productivity.
Photo by Nana Smirnova on Unsplash.
In this article, we are going to discuss the following topics
- Importance of saving deep learning models (in general, not limited to TensorFlow).
- How to save deep learning models in TensorFlow 2, including the different formats and techniques for saving them.
Importance of saving deep learning models
Remember, in gradient descent, we update the weights and bias based on the error or loss function.
Now imagine you trained a model for thousands of epochs over hours, days, or even weeks, finally arriving at weights that make your model perform very well — and then you lose them all the moment you close your program or Jupyter notebook.
This becomes an even bigger problem when you want to reuse that model in another application and have no saved progress: you have to start the training process from scratch, which can waste hours or days.
Practically, imagine you have built a really good facial recognition application with above 99% accuracy, precision, etc., and it took around 30 hours to train the model on a big dataset. If you have not saved the model, using it in any application means retraining the whole model for another 30 hours.
This is why saving the model is a very important step: with just a few extra lines of code, it can save you a ton of time and resources.
Saving models in TensorFlow 2
There are 2 different formats for saving model weights in TensorFlow. The first is the TensorFlow native format, and the second is the HDF5 format, also known as the h5 or HDF format.
Also, there are 2 different ways of saving models.
- A simple, less complex way, which gives you little flexibility.
- Using callbacks to save the model, which allows you a lot of freedom, such as saving every epoch, saving after every n examples, etc.
We will discuss both in detail.
Let’s load important python libraries and dataset first.
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten, Conv2D, MaxPooling2D

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()  # cifar10 dataset
x_train = x_train / 255.0  # normalizing images
x_test = x_test / 255.0
Simple Way
The simple way to save a model in TensorFlow is to use the built-in save method from the tf.keras.models “Model saving & serialization APIs” (its weights-only counterpart, save_weights, is covered later).
Let’s say we have a sequential model in TensorFlow. now we fit the model using model.fit function in TensorFlow.
hist = model.fit(x_train,y_train,epochs=5, batch_size=512)
We can evaluate the performance of our model via,
loss, acc = model.evaluate(x_test, y_test, verbose=0)
print(f"test accuracy {acc*100}")
Now we can save our model just by calling the model.save function and passing in the file path as the argument. This will save the model's
- Model Architecture
- Model Weights
- Model optimizer state (so that you can continue the training from where you left off)
model.save('myModel.h5')
Adding the extension is important. If you add .h5 as the extension, the model is saved in the HDF5 format, and if no extension is provided, it is saved in the TensorFlow native format.
Now that the model is saved in the current directory as the myModel.h5 file, you can load it in a new program — or in the same program as a different model — via,

new_model = tf.keras.models.load_model('myModel.h5') #same file path
We can check the accuracy of the newly loaded model via,

loss, acc = new_model.evaluate(x_test, y_test, verbose=0)
print(f"test accuracy {acc*100}")
And we can see that we are getting exactly the same accuracy as the old model.
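To check programmatically that a reloaded model is identical to the original, you can compare their predictions on the same inputs. A minimal sketch, using a small stand-in model and a temporary file so it runs in seconds (the layer sizes are arbitrary):

```python
import os
import tempfile
import numpy as np
import tensorflow as tf

# Small stand-in model (the idea is the same for any architecture)
model = tf.keras.Sequential([
    tf.keras.layers.Dense(4, input_shape=(3,), activation='relu'),
    tf.keras.layers.Dense(2),
])
model.compile(optimizer='adam', loss='mse')

x = np.random.rand(8, 3).astype('float32')
before = model.predict(x, verbose=0)

path = os.path.join(tempfile.mkdtemp(), 'round_trip.h5')
model.save(path)                              # architecture + weights + optimizer state
restored = tf.keras.models.load_model(path)
after = restored.predict(x, verbose=0)

assert np.allclose(before, after)             # identical outputs -> same weights survived
```

If the assertion passes, the save/load round trip preserved the model exactly, which is why the evaluated accuracy above matches the original.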
We can confirm it further by checking the model summary.
new_model.summary()
And the new summary is precisely identical to our original model’s summary.
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
conv_1 (Conv2D)              (None, 30, 30, 16)        448
_________________________________________________________________
pool_1 (MaxPooling2D)        (None, 7, 7, 16)          0
_________________________________________________________________
batch_normalization (BatchNo (None, 7, 7, 16)          64
_________________________________________________________________
flatten (Flatten)            (None, 784)               0
_________________________________________________________________
dense_1 (Dense)              (None, 32)                25120
_________________________________________________________________
dropout (Dropout)            (None, 32)                0
_________________________________________________________________
dense_2 (Dense)              (None, 10)                330
=================================================================
Total params: 25,962
Trainable params: 25,930
Non-trainable params: 32
_________________________________________________________________
Similarly, we can save the whole model in the TensorFlow native format via,
new_model.save('newmodel')
See how we have not added any file extension after the name. This will save our model in the TensorFlow native format in the newmodel folder. If we peek into the folder, we can check the files with
!dir newmodel
This command will only run in the jupyter notebook, so alternatively, you can open the folder and check the files.
You will always have 1 file and 2 folders:

- assets (folder)
- saved_model.pb (file)
- variables (folder)
We will have a look at what these folders and files are later. To load the model, we just have to give the same path we used to save it, such as with
other_model = tf.keras.models.load_model('newmodel')
And you can confirm that it is the same model simply via checking its summary or evaluating it to match the results.
Now to save only the weights using the simple way, you just have to call the built-in save_weights function on your model.
Let's take the same old model and train it for a few epochs.
model.fit(x_train,y_train,epochs=5, batch_size=512)
Now you can simply save the weights via,
path = 'weights_folder/my_weights'
model.save_weights(path)
This will create a folder named weights_folder and save the weights in the TensorFlow native format under the name my_weights. The folder will contain 3 files:
- checkpoint
- data-00000-of-00001
- index
Let's take a look at these files.
- my_weights.index
This file tells TensorFlow which weights are stored where. When running models on distributed systems, there may be different shards, meaning the full model may have to be recomposed from multiple sources. Here we created a single model on a single machine, so there is only one shard, and all weights are stored in the same place.
- my_weights.data-00000-of-00001
This file contains the actual weights from the model. It is by far the largest of the 3 files, since it stores the value of every one of the roughly 26,000 parameters we saw in the model summary.
- checkpoint
This file is by far the smallest. It’s actually so small that we can just look at it directly. It’s a human-readable file with the following text,
model_checkpoint_path: "my_weights"
all_model_checkpoint_paths: "my_weights"
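Because the checkpoint file is plain `key: "value"` text, you can read it without TensorFlow at all. A small sketch — the parsing helper here is my own, not part of any TensorFlow API:

```python
def parse_checkpoint(text):
    """Parse TensorFlow's human-readable checkpoint file into a dict.

    Keys can repeat (all_model_checkpoint_paths appears once per saved
    checkpoint), so values are collected into lists.
    """
    entries = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or ':' not in line:
            continue
        key, _, value = line.partition(':')
        entries.setdefault(key.strip(), []).append(value.strip().strip('"'))
    return entries

sample = '''model_checkpoint_path: "my_weights"
all_model_checkpoint_paths: "my_weights"'''

parsed = parse_checkpoint(sample)
print(parsed['model_checkpoint_path'][0])  # -> my_weights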
Now when you have saved the weights, you can simply load them by just calling,
model.load_weights(path)
This will load the weights for that model at that specific path.
Alternatively, you can save the weights only in the hdf5 format via,
model.save_weights('my_weights.h5')
This will create a my_weights.h5 file in your working directory, and you can simply load them via model.load_weights('my_weights.h5').
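The callback-based approach from the outline at the start of this article uses tf.keras.callbacks.ModelCheckpoint, which saves automatically during training — for example, keeping only the best weights seen so far. A minimal runnable sketch; the stand-in model and data are illustrative only (note the `.weights.h5` suffix, which works across Keras versions when saving weights only):

```python
import os
import tempfile
import numpy as np
import tensorflow as tf

ckpt_path = os.path.join(tempfile.mkdtemp(), 'best.weights.h5')

checkpoint = tf.keras.callbacks.ModelCheckpoint(
    filepath=ckpt_path,
    save_weights_only=True,   # set False to save the full model instead
    save_best_only=True,      # keep only the checkpoint with the best monitored value
    monitor='val_loss',       # metric used to decide "best"
    verbose=0)

# Tiny stand-in model and data so the sketch runs in seconds
model = tf.keras.Sequential([tf.keras.layers.Dense(4, input_shape=(3,)),
                             tf.keras.layers.Dense(1)])
model.compile(optimizer='adam', loss='mse')
x, y = np.random.rand(32, 3), np.random.rand(32, 1)

model.fit(x, y, epochs=2, validation_split=0.25, verbose=0,
          callbacks=[checkpoint])

assert os.path.exists(ckpt_path)  # weights were written during training
```

With save_best_only=False and a templated filepath (e.g. containing {epoch}), the same callback saves a checkpoint per epoch instead.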
Important Point
When you are loading the weights for a model, you need to have the correct architecture of that model.
For example:
You can not load the weights of our model we just created to a sequential model with 1 Dense layer, as both are not compatible. So you might be thinking, what is the use of saving the weights only?
Well, the answer is that if you are looking at some big SOTA application, such as YOLO, or something like that where they give you the source code. But, to train them on your machines is a long and lengthy task, so they also give you the pre-trained weights on different epochs, such as if you want to see how this model is performing at 50 epochs, then you can load the saved weights of 50 epochs, and similarly for other numbers of epochs. In this way, you can check the performance of the model on the number of training epochs based on how the model is performing on X number of epochs without explicitly training it.
TensorFlow Native format vs. hdf5, which to use and when
You have seen that using the .h5 format is simple and clean as it only creates one single file, whereas using tensorflow native format creates multiple folders and files, which is difficult to read. So, you might be thinking that why should we use tensorflow native format? The answer to this is that in the TensorFlow native format, everything is structural and organized in its place. For example, the .pb file contains structural data that can be loaded by multiple languages. Some of the advantages of TF native format are listed in the following.
Advantages of the TensorFlow native format
- TensorFlow’s Servinguses it when you want to take your model to production.
- Language-agnostic — binary format can be read by multiple languages (Java, Python, Objective-C, and C++, among others).
- Advised to use since 0, you can see the official serializing guideof TensorFlow, which recommends using TensorFlow Native format.
- Saves various metadata of the model such as optimizer information, losses, learning rate, etc., which can help later.
Disadvantages
- SavedModelis conceptually harder to grasp than a single file
- Creates a separate folder to store the weights.
Advantages of h5
- Used to save giant data, which might not be tabular.
- Common file saving format.
- Everything saved in one file (weights, losses, optimizers used with keras)
Disadvantages
- Cannot be used with Tensorflow Servingbut you can simply convert it to .pb via experimental.export_saved_model(model, 'path_to_saved_model')
What to use
If you are not going to use TensorFlow while serving or deploying your model, then for simplicity, you can use .hdf5 format, but if you are going to use TensorFlow serving, then you should use tensorflow native format.
Learning Outcome
In this article, you learned
- Why should you save your machine learning model.
- How to save model weights only using the simple method.
- How to save a complete model using the simple method.
- Saving in TensorFlow Native format or HDF5 format.
- Difference between TensorFlow Native and HDF5 format and what to use.
For more details, check out:
Related: | https://www.kdnuggets.com/2021/02/saving-loading-models-tensorflow.html | CC-MAIN-2021-10 | en | refinedweb |
use 5.006; use strict; use warnings; package Try::Tiny::Retry; # ABSTRACT: Extends Try::Tiny to allow retries our $VERSION = '0.004'; # VERSION use parent 'Exporter'; our @EXPORT = qw/retry retry_if on_retry try catch finally/; our @EXPORT_OK = ( @EXPORT, qw/delay delay_exp/ ); our %EXPORT_TAGS = ( all => [@EXPORT_OK] ); use Carp; $Carp::Internal{ +__PACKAGE__ }++; use Try::Tiny; BEGIN { eval "use Sub::Name; 1" or *{subname} = sub { 1 } } our $_DEFAULT_DELAY = 1e5; # to override for testing sub delay(&;@) { ## no critic my ( $block, @rest ) = @_; return ( bless( \$block, 'Try::Tiny::Retry::Delay' ), @rest, ); } sub on_retry(&;@) { ## no critic my ( $block, @rest ) = @_; return ( bless( \$block, 'Try::Tiny::Retry::OnRetry' ), @rest, ); } sub retry_if(&;@) { ## no critic my ( $block, @rest ) = @_; return ( bless( \$block, 'Try::Tiny::Retry::RetryIf' ), @rest, ); } sub delay_exp(&;@) { ## no critic my ( $params, @rest ) = @_; my ( $n, $scale ) = $params->(); require Time::HiRes; return delay { return if $_[0] >= $n; Time::HiRes::usleep( int rand( $scale * ( 1 << ( $_[0] - 1 ) ) ) ); }, @rest; } sub retry(&;@) { ## no critic my ( $try, @code_refs ) = @_; # name the block if we have Sub::Name my $caller = caller; subname( "${caller}::retry {...} " => $try ); # we need to save this here to ensure retry block is evaluated correctly my $wantarray = wantarray; # find labeled blocks in the argument list: retry_if and delay tag by blessing # a scalar reference to the code block reference my ( $delay, $on_retry, @conditions, @rest ); foreach my $code_ref (@code_refs) { if ( ref($code_ref) eq 'Try::Tiny::Retry::RetryIf' ) { push @conditions, $$code_ref; } elsif ( ref($code_ref) eq 'Try::Tiny::Retry::OnRetry' ) { croak 'A retry() may not be followed by multiple on_retry blocks' if $on_retry; $on_retry = $$code_ref; } elsif ( ref($code_ref) eq 'Try::Tiny::Retry::Delay' ) { croak 'A retry() may not be followed by multiple delay blocks' if $delay; $delay = $$code_ref; } else { push 
@rest, $code_ref; } } # default retry 10 times with default exponential backoff if ( !defined $delay ) { my ($code_ref) = delay_exp { 10, $_DEFAULT_DELAY }; $delay = $$code_ref; } # execute code block and retry as necessary my @ret; my $retry = sub { my $count = 0; RETRY: { $count++; my ( $redo, $err ); try { # evaluate the try block in the correct context if ($wantarray) { @ret = $try->(); } elsif ( defined $wantarray ) { $ret[0] = $try->(); } else { $try->(); } } catch { $err = $_; # if there are conditions, rethrow unless at least one is met if (@conditions) { my $met = 0; for my $c (@conditions) { local $_ = $err; # protect from modification $met++ if $c->($count); } die $err unless $met; } # rethow if delay function signals stop with undef die $err unless defined $delay->($count); # if here, then we want to try again $redo++; }; if ( defined $on_retry && $redo ) { local $_ = $err; $on_retry->($count); } redo RETRY if $redo; } return $wantarray ? @ret : $ret[0]; }; # call "&try" to bypass the prototype check return &try( $retry, @rest ); } 1; # vim: ts=4 sts=4 sw=4 et: __END__ =pod =encoding UTF-8 =head1 NAME Try::Tiny::Retry - Extends Try::Tiny to allow retries =head1 VERSION version 0.004 =head1 SYNOPSIS Use just like L<Try::Tiny>, but with C<retry> instead of C<try>. By default, C { ... }; =head1 DESCRIPTION This module extends L<Try::Tiny> to allow for retrying a block of code several times before failing. Otherwise, it works seamlessly like L<Try::Tiny>. By default, Try::Tiny::Retry exports C<retry> and C<retry_if>, plus C<try>, C<catch> and C<finally> from L<Try::Tiny>. You can optionally export C<delay> or C<delay_exp>. Or you can get everything with the C<:all> tag. =head1 FUNCTIONS =head2 retry retry { ... } # code that might fail retry_if { ... } # conditions to be met for a retry delay { ... } # control repeats and intervals between retries catch { ... 
}; # handler if all retries fail The C<retry> function works just like C<try> from L<Try::Tiny>, except that if an exception is thrown, the block may be executed again, depending on the C<retry_if> and C<delay> blocks. If one or more C<retry_if> blocks are provided, as long as any of them evaluate to true, a retry will be attempted unless the result of the C<delay> block indicates otherwise. If none of them evaluate to true, no retry will be attempted and the C<delay> block will not be called. If no C<delay> block is provided, the default will be 10 tries with a random delay up to 100 milliseconds with an exponential backoff. (See L</delay_exp>.) This has an expected cumulative delay of around 25 seconds if all retries fail. =head2 retry_if retry { ... } retry_if { /^could not connect/ } catch { ... }; A C<retry_if> block controls whether a retry should be attempted after an exception (assuming there are any retry attempts remaining). The block is passed the cumulative number of attempts as an argument. The exception caught is provided in C<$_>, just as with C<catch>. It should return a true value if a retry should be attempted. Multiple C<retry_if> blocks may be provided. Only one needs to evaluate to true to enable a retry. Using a C<retry_if> block based on the retry count is an alternate way to allow B<fewer> (but not greater) tries than the default C<delay> function, but with the default exponential backoff behavior. These are effectively equivalent: retry { ... } retry_if { shift() < 3 }; retry { ... } delay_exp { 3, 1e5 }; If you wish the exception to be rethrown if all C<retry_if> blocks return false, you must use a C<catch> block to do so: retry { ... } retry_if { /^could not connect/ } catch { die $_ }; =head2 on_retry retry { ... } on_retry { $state->reset() } catch { ... }; The C<on_retry> block runs before each C<retry> block after the first attempt. The exception caught is provided in C<$_>. 
The block is passed the cumulative number of attempts as an argument. The return value is ignored. Only one C<on_retry> block is allowed. =head2 delay retry { ... } delay { return if $_[0] >= 3; # only three tries sleep 1; # constant delay between tries } catch { ... }; The C<delay> block controls the number of attempts and the delay between attempts. The block is passed the cumulative number of attempts as an argument. If the C<delay> block returns an undefined value, no further retries will be made. If you wish the exception to be rethrown if all attempts fail, you must use a C<catch> block to do so: retry { ... } delay { ... } catch { die $_ }; Only one C<delay> block is allowed. =head2 delay_exp B<microseconds>. Given number of tries C<N> and upper bound C<U>, the expected cumulative delay time if all attempts fail is C<0.5 * U * ( 2^(N-1) - 1 )>. =for Pod::Coverage BUILD =head1 SEE ALSO There are other retry modules on CPAN, but none of them worked seamlessly with L<Try::Tiny>. =over 4 =item * L<Action::Retry> — OO (Moo) or functional; various delay strategies; supports conditions =item * L<AnyEvent::Retry> — OO (Moose) and event-driven; various delay strategies =item * L<Attempt> — functional; simple retry count with constant sleep time =item * L<Retry> — OO (Moose) with fixed exponential backoff; supports callbacks on every iteration =item * L<Sub::Retry> — functional; simple retry count with constant sleep time; supports conditions =back AUTHOR David Golden <dagolden@cpan.org> =head1 CONTRIBUTOR David Steinbrunner <dsteinbrunner@pobox.com> =head1 COPYRIGHT AND LICENSE This software is Copyright (c) 2013 by David Golden. This is free software, licensed under: The Apache License, Version 2.0, January 2004 =cut | http://web-stage.metacpan.org/release/Try-Tiny-Retry/source/lib/Try/Tiny/Retry.pm | CC-MAIN-2021-10 | en | refinedweb |
Problem in Pointers
#1 Members - Reputation: 104
Posted 18 November 2012 - 10:12 PM
Hello Guys,
I am a bit rusty on pointers,i actually want to implement linked list ,but my compiler is giving error.
i am getting run time error at line
(head)->next=NULL;
(head)->data=data;
Would be great if someone can review the short snippet code and point my mistake.
Any other suggestions are always welcome.
Please note that i have also tried using (**head).data=data; but then also i am still getting runtime error .
i am not sure how to proceeed next.
#2 Crossbones+ - Reputation: 8268
Posted 18 November 2012 - 10:25 PM
This is really C code with an cout thrown it, btw, not really C++.
Never mind.
Edited by Bacterius, 18 November 2012 - 11:10 Crossbones+ - Reputation: 12345
Posted 18 November 2012 - 11:08 PM
If your compiler is giving you an error, it would be good for you to post it. It might not mean anything to you, but chances are it does mean something to us, and we can help you understand it.
The section where you check if *head is NULL and then you proceed to fill it up with data should look like an obvious mistake to you.
I can only get this type of code right if I draw a bunch of boxes and arrows on a piece of paper, so I can keep track of what's pointing where in each operation.
#4 Moderators - Reputation: 7962
Posted 19 November 2012 - 08:33 AM
foo->bar is equivalent to (*foo).bar, so this change cannot affect the behaviour.foo->bar is equivalent to (*foo).bar, so this change cannot affect the behaviour.
Please note that i have also tried using (**head).data=data; but then also i am still getting runtime error .
Álvaro is correct, the problem is that you are dereferencing a NULL pointer. Even when you fix that, there is another problem. Your create function leaks a node when the list is non-empty.
A simpler way to write it is to split the actions into two steps, the first being to create and populate the new node, and the second to add the node to the list.
void create(int data, list **head) { list *node = new list; node->next = NULL; node->data = data; if(*head == NULL) { *head = node; } else { node *end = *head; while(end->next != NULL) { end = end->next; } end->next = node; } }For a singly linked list, if the order of the list is irrelevant, it is faster to add the new element as the front of the list than the end.
You also leak memory in your print function. There is no need to allocate a new node, simply initialise the temporary node with the head:
void print(list **head) { list *temp1 = *head; while(temp1 != NULL) { std::cout << temp1->data << " "; temp1 = temp1->next; } }
Your print function would be simplified if it accepted a pointer as its parameter, rather than a pointer to a pointer:
void print(list *head) { list *node = head; while(node != NULL) { std::cout << node->data << " "; node = node->next; } }There is no need for a pointer to a pointer because you do not need to modify the callee's pointer in this case.
Some of your headers are C++ specific, so let us write the code in a C++ style:
#include <cstdio> #include <cctype> #include <cstdlib> #include <iostream> #include <conio.h> struct list { int data; list *next; }; void create(list *&head, int data) { list *node = new list; node->data = data; node->next = null; if(head == NULL) { head = node; } else { node *end = head; while(end->next != NULL) { end = end->next; } end->next = node; } } void print(list *head) { list *node = head; while(node != NULL) { std::cout << node->data << " "; node = node->next; } } int main() { list *L = NULL; int data = 10; for(int i = 0 ; i < 3 ; i++) { create(L, data++); print(L); getch(); } }The changes were:
- Change the standard headers includes to the C++ style (<cfoo> rather than <foo.h>
- Drop the "struct" prefix from declarations
- Drop the declaration of a global as part of the structure declaration*
- Using a reference rather than a pointer to implement "write back"
A final note, modifying the caller's variable is more surprising than the simpler approach of using the function's return value. Here is another alternative:
// ... list *append(list *head, int data) { list *node = new list; node->data = data; node->next = null; if(head == NULL) { return node; } else { node *end = head; while(end->next != NULL) { end = end->next; } end->next = node; return head; } } int main() { list *L = NULL; int data = 10; for(int i = 0 ; i < 3 ; i++) { L = append(L, data++); print(L); getch(); } } | http://www.gamedev.net/topic/634593-problem-in-pointers/?forceDownload=1&_k=880ea6a14ea49e853634fbdc5015a024 | CC-MAIN-2014-35 | en | refinedweb |
See also: IRC log
<Jonathan> Yes, suppose so...
<TomJ> can't access conference on zakim
<asir> can't access conference on Zakim
minutes approved
Jonathan: arthur suggested to add canonicalization for WSDL1.1 component designators
Arthur: If the WG agrees, we should recommend to add a canonicalization to be consistent with WSDL2.0
Asir: we could open a formal
issue to th WS-Policy WG ?
... I can open it.
<scribe> ACTION: Asir to open a WS-Policy issue and link it with CR80 [recorded in]
<Jonathan>
Jonathan: jason says that he
would like to extend the rpc signature but agrees the
suggestion is bad timing
... can we accept this response and leave the spec as it is?
TomJ: we do not prevent what he wants, toolkits may modify the rpc signature
RESOLUTION: close CR082 and mark it as accepted
<Jonathan>
Jonathan: CR092/CR092 is about
soap usage that WSDL2.0/SOAP binding does not allow to
describe
... I sent a partial list of soap1.2 functionalities that cannot be described with our soap1.2 binding
Tom: we should say that the soap1.2 fault details element will contain an element, but may contain other stuff that is not described
Jonathan: proposal is to add the
list of SOAP functionalities that cannot be described and add
tom comment
... can we close these two issues with this resolution ?
RESOLUTION: close CR92 and CR93 according proposal
Jonathan: CR096 seems editorial
Arthur: I will check the source and let's move on
Jonathan: issue CR095 is about
wsoap:header@element. Being of type QName, it prevents the #any
value
... Why should we allow it?
... proposal is #any is not allowed as it has not a clear utility
<asir> +1 Jonathan's suggestion
Arthur: It seems that we are not
generating message assertions. We need another assertion
table
... we could also have one assertion table, with a "assertion type" column
... as a quick fix, I will add another table but I prefer to have one unique table that combines all current tables
Jonathan: arthur proposal for CR094 is to add another assertion table
RESOLUTION: CR094 closed with arthur proposal
back to CR095
Jonathan: #any is a no op.
RESOLUTION: CR095
is closed with no action, jonathan to answer to
ram
... close CR096 by removing the cited assertion
Issue CR097
Jonathan: it seems that this
assertion is redundant with schema checks
... the order of the desc. children is not captured by the schema. This assertion is about the order.
Roberto: we come up with this schema, looser than expected because of non determinism issues
Jonathan: proposal is close with no action
RESOLUTION: Close CR097 with no action
Issue CR098
<Jonathan>;%20charset=utf-8#Types-1300002
Jonathan: Is the cited assertion already covered by other assertions, as suggested?
<scribe> ACTION: Arthur to look at CR098 [recorded in]
Issue CR099
Jonathan: similar to CR098
<scribe> ACTION: Arthur to look at CR099 [recorded in]
Issue CR100
Jonathan: Import-0001 and Import-0070 seem to adress the same thing, unless any subtlety
Arthur: we should leave the text and retain only one of the two assertions.
RESOLUTION: Close CR100 by dropping Import-0001 assertion (but leaving the text)
<Arthur> I just committed the fix to CR094
Issue CR101
Jonathan: The three assertions seem redundant
Proposal is to keep the assertion in section 121900 and remove the other assertions
RESOLUTION: Close CR101 according this proposal
Issue CR102
Arthur: we should rewrite this
assertion and remove the use of "imported components"
... we want to say that we import a namespace.
... proposal is to rewrite Import-0003 sentence and remove this assertion
RESOLUTION: close CR102 according proposal
Issue CR103
Tony: the second assertion is not complete. We should remove the second assertion and fix the wording.
Jonathan: do we want to remove
the sentence or improve it?
... the proposal is to take the last sentence of the section 3.1.1.2 and move it to section 3.1.2
<Jonathan> reword = reference Schema-0016
Jonathan: in addition, remove the assertion Types-1300001 and rewording it to add a reference to Schema-0016
RESOLUTION: close CR103 according proposal
Issue CR104
RESOLUTION: Close Issue CR104 and remove Description-0024
Issue CR105
Jonathan: proposal is to remove assertion markup from 1204002
RESOLUTION: close CR105 by dropping InterfaceOperation-1204002
Issue CR106
Jonathan: drop InterfaceOperation-1204003
RESOLUTION: Close CR106 by dropping InterfaceOperation-1204003
Issue CR107
Roberto: we could test the case of two styles that are contradictory
Jonathan: but we rely on
something external
... proposal is to close CR107 with no action and add a test case
RESOLUTION: close CR107 with no action
Issue CR108
<Roberto> 14: !unique => present <=> ! ! absent
<Roberto> 6: ! absent => unique
<Roberto> so they are logically equivalent, no?
<Roberto> oops, I mixed up the 14/6 labels
<Roberto> the first one I wrote is 6, the second one 14
<Roberto> I'd assert that they are equivalent and 6 should be removed, as it is the most poorly worded of the two
<scribe> ACTION: Amy to write a proposal for CR108 [recorded in]
This is scribe.perl Revision: 1.127 of Date: 2005/08/16 15:12:03 Check for newer version at Guessing input format: RRSAgent_Text_Format (score 1.00) Succeeded: s/details/soap1.2 fault details/ Succeeded: s/value/utility/ Succeeded: s/the first assertion/one of the two assertions/ No ScribeNick specified. Guessing ScribeNick: youenn Inferring Scribes: youenn WARNING: Replacing list of attendees. Old list: TonyR +1.514.964.aaaa Jonathan_Marsh Allen_Brookes Tom_Jordahl Asir_Vedamuthu Arthur Canon Amelia_Lewis New list: Roberto Tom_Jordahl Amelia_Lewis Jonathan_Marsh Allen_Brookes Asir_Vedamuthu Canon Arthur TonyR m2 Gilbert_Pilz +1.650.786.aaaa Default Present: Roberto, Tom_Jordahl, Amelia_Lewis, Jonathan_Marsh, Allen_Brookes, Asir_Vedamuthu, Canon, Arthur, TonyR, m2, Gilbert_Pilz, +1.650.786.aaaa Present: Roberto Tom_Jordahl Amelia_Lewis Jonathan_Marsh Allen_Brookes Asir_Vedamuthu Canon Arthur TonyR m2 Gilbert_Pilz +1.650.786.aaaa Got date from IRC log name: 7 Dec 2006 Guessing minutes URL: People with action items: amy arthur asir[End of scribe.perl diagnostic output] | http://www.w3.org/2006/12/07-ws-desc-minutes.html | CC-MAIN-2014-35 | en | refinedweb |
The customer was using hand-crafted XmlHttp requests from client scripts to request data from ASPX pages. While the goal was achieved - the amount of information sent to the server was minimal and the user interface was responsive - the coding was not really fun. Also, since the requests were sent to regular ASPX pages, the whole ASPX pipeline was executing unnecessarily, utilizing CPU for nothing.
After quick research I found two great resources that directed me to the solution that would satisfy both requirements:
The first one is from Chris Hay - remix08 UK ASP.NET Front End Performance Slides and the other one is from Jeff Prosise - Power ASP.NET AJAX Programming. They both outline the usage of Script-Callable Web Services. There are three simple steps to follow:
1. Create the web service and mark it with the ScriptService attribute so that a JavaScript proxy is generated for it:
using System.Web.Services;

// The containing namespace becomes part of the client proxy name
// (AJAXCallsWebService.AJAXCallableWebService in the script below).
namespace AJAXCallsWebService
{
    [System.Web.Script.Services.ScriptService]
    public class AJAXCallableWebService : System.Web.Services.WebService
    {
        [WebMethod]
        public string HelloWorld(string name)
        {
            return "Hello, " + name;
        }
    }
}
2. Register the service with the ScriptManager on the page (the path below assumes the service is saved as AJAXCallableWebService.asmx):
<asp:ScriptManager ID="ScriptManager1" runat="server">
    <Services>
        <asp:ServiceReference Path="~/AJAXCallableWebService.asmx" />
    </Services>
</asp:ScriptManager>
3. Call the service from client script through the generated proxy:
<script type="text/javascript" language="javascript">
    function callAjax()
    {
        var text = document.getElementById("Text1").value;
        AJAXCallsWebService.AJAXCallableWebService.HelloWorld(text, onSuccess);
    }
    function onSuccess(result)
    {
        document.getElementById("result").innerText = result;
    }
</script>
<span id="result"></span>
<input id="Button1" type="button" value="button" onclick="callAjax()" />
<input id="Text1" type="text" />
This chapter describes a number of security-related use cases and the typical security life cycle of an ADF application. It also lists code and configuration samples presented elsewhere in this Guide.
This chapter contains the following sections:
Security Integration Use Cases
Appendix - Security Life Cycle of an ADF Application
Appendix - Code and Configuration Examples
The audience for the material presented in this chapter is developers, security architects, and security administrators. The presentation is not feature-driven, as in most topics in this Guide, but use case-driven: a number of use cases that solve typical application security challenges are introduced as a point of departure for solving particular application security requirements. Some of the use cases describe a declarative approach (and do not require changes in application code); others provide a programmatic approach; and others require both approaches.
The top security issues that security architects and developers face include managing users, user passwords, and access to resources. OPSS is a suite of security services that provides solutions to these challenges by supporting:
Externalizing security artifacts and the security logic from the application
A declarative approach to security
A complete user identity life cycle
Policy-driven access controls
Figure 19-1 illustrates how applications access the security stores and the tools to manage those stores.
Links to Related Documentation
Topics explained elsewhere include the following:
The OPSS Security Architecture - see Section 1.2, "OPSS Architecture Overview."
Single Sign On - see Part IV.
ADF applications - see Oracle Fusion Middleware Fusion Developer's Guide for Oracle Application Development Framework.
Oracle Development Tools - see Oracle Fusion Middleware Reference for Oracle Security Developer Tools.
For the list of OPSS APIs, see Appendix H, "References."
This section introduces a number of use cases categorized according to a main security feature or security artifact, in the following sections:
Administration and Management
Each use case contains a brief description of the problem it attempts to solve, the security artifacts required, the features involved, and links to details solving the stated problem. Unless otherwise stated, all the descriptions apply to the Oracle WebLogic Application Server and to the WebSphere Application Server.
The authentication use cases are the following:
Java EE Application Requiring Authenticated Users - Users must be authenticated in order to access a Java EE application.
Java EE Application Requiring Programmatic Authentication - Java EE application requires authenticating a user programmatically.
Java SE Application Requiring Authentication - Java SE application requires authenticating against a domain identity store.
In order to access a Java EE application, users must be authenticated against the identity store in cases where the identity store is any of the following:
Single LDAP-based store
Several LDAP-based stores of the same kind (such as all OID, for example)
Several LDAP-based stores of different kinds; in particular, two LDAP-based stores: one AD LDAP and one OID LDAP
Single DB-based store
Several LDAP- and DB-based stores
This use case requires:
Allowing only authenticated users to access the application
Not modifying the application code, even when customers have user identities in different repositories
This use case features:
Deploying an application to a WebLogic container
Configuring the appropriate authenticators according to the particular set of user repositories
Configuring the OVD authenticator in the case of mixed LDAP types or mixed LDAP and DB types
According to the repository used, the details of this use case are split into the following scenarios:
Single user repository - Configure the appropriate authenticator with the WebLogic console
Multiple user repositories (or split profiles across LDAPs of the same or different kinds) - Configure the OVD authenticator
DB-based repositories - Configure the OVD authenticator
For details, see Section 3.1.2, "Oracle WebLogic Authenticators."
A Java EE application, not using deployment descriptors, must authenticate the user programmatically against the configured identity store(s); it applies only to Java EE applications deployed to the Oracle WebLogic Application Server.
This use case requires using the OPSS public API to authenticate a user, and it features:
Configuring authenticators for a Java EE container
Using the LoginService API to authenticate the user
For details about this use case, see Section 22.1, "Links to Authentication Topics for Java EE Applications."
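Whichever container API performs the login, the user's credentials usually reach the login modules through a standard JAAS CallbackHandler. The JDK-only sketch below shows that handler-side exchange; the class name and the fixed credentials are illustrative, not part of the OPSS API:

```java
import javax.security.auth.callback.Callback;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.callback.NameCallback;
import javax.security.auth.callback.PasswordCallback;
import javax.security.auth.callback.UnsupportedCallbackException;

public class Main {

    /** Supplies a fixed name/password pair to whatever login module asks for them. */
    static class FixedCredentialHandler implements CallbackHandler {
        private final String name;
        private final char[] password;

        FixedCredentialHandler(String name, char[] password) {
            this.name = name;
            this.password = password.clone();
        }

        @Override
        public void handle(Callback[] callbacks) throws UnsupportedCallbackException {
            for (Callback cb : callbacks) {
                if (cb instanceof NameCallback) {
                    ((NameCallback) cb).setName(name);
                } else if (cb instanceof PasswordCallback) {
                    ((PasswordCallback) cb).setPassword(password);
                } else {
                    throw new UnsupportedCallbackException(cb, "Unexpected callback");
                }
            }
        }
    }

    /** Drives the handler the way a login module would; returns "name/password". */
    static String collect(CallbackHandler handler) {
        try {
            NameCallback nameCb = new NameCallback("user: ");
            PasswordCallback passCb = new PasswordCallback("password: ", false);
            handler.handle(new Callback[] { nameCb, passCb });
            return nameCb.getName() + "/" + new String(passCb.getPassword());
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        CallbackHandler handler = new FixedCredentialHandler("jdoe", "welcome1".toCharArray());
        System.out.println(collect(handler)); // prints "jdoe/welcome1"
    }
}
```

An OPSS LoginService, or a plain javax.security.auth.login.LoginContext, would be handed such a handler and invoke it during login().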
A Java SE application must authenticate users against the LDAP identity store in use in a domain; the application code requesting authentication must be same regardless of the specifics of the domain's identity store.
This use case requires configuring the identity store(s) against which the authentication should take place and using the LoginService; note that a Java SE application can use only one id login module.
For details about this use case, see Section 22.2.4, "Using the OPSS API LoginService in Java SE Applications."
The identity use cases are the following:
Application Running in Two Environments - Application, running in two different environments, needs to access user profile information in an LDAP-based store.
Application Accessing User Profiles in Multiple Stores - Application needs to access user profile information stored in multiple LDAP-based stores.
An application that runs in two different environments needs to access user profile information, such as a user's email address, stored in an LDAP-based store; the LDAP server can be of any of the supported types, and the type may differ between environments. For details on supported types, see Section 4.1, "Supported LDAP-, DB-, and File-Based Services."
More specifically, this use case assumes that:
The application uses the method
UserProfile.getEmail().
In one environment, there is an AD LDAP configured as follows:
mail.attr = msad_email
In the second environment, there is an OID LDAP configured as follows:
mail.attr = mail
In order for the application to retrieve the correct information without modifying the code and regardless of the environment (first or second) in which it runs, the identity store provider must be configured with the correct property in each of those two environments.
In the first environment (AD LDAP), the identity store provider is set to have the following property:
<property name="mail.attr" value="msad_email"/>
In the second one (OID LDAP), the identity store provider is set to have the following property:
<property name="mail.attr" value="mail"/>
For details about this use case, see Section 7.2, "Configuring the Identity Store Provider."
An application needs access to user profile information located in more than one LDAP-based stores.
This use case requires configuring the environment for multiple LDAP-based stores.
For details about:
Configuring multiple LDAPs, see Section 7.3.2.6, "Examples of the Configuration File"
Configuring the identity store service, see Section 7.3, "Configuring the Identity Store Service"
The authorization use cases are the following:
Java EE Application Accessible by Specific Roles - Java EE application accessible only by users configured in web descriptors.
ADF Application Requiring Fine-Grained Authorization - ADF application requires fine-grained authorization.
Web Application Securing Web Services - Web services application requires securing web services.
Java EE Application Requiring Codebase Permissions - Java EE application requires codebase permissions.
Non-ADF Application Requiring Fine-Grained Authorization - Non-ADF application requires fine-grained authorization.
A Java EE application needs to be accessible only by users that have been assigned specific roles in web descriptors; the group-to-role assignment must be configurable at deployment based on the customer's environment.
For details about this use case, see sections Using Declarative Security with Web Applications and Using Declarative Security with EJBs in Oracle Fusion Middleware Programming Security for Oracle WebLogic Server.
An ADF application in container requires fine-grained authorization at the level of individual controls on the pages in the web application; while the application initiates the authorization check, the policies need to be externalized and customizable per customer post application deployment.
For details on how to develop and secure Oracle ADF applications, see chapter 30, Enabling ADF Security in a Fusion Web Application, in Oracle Fusion Middleware Fusion Developer's Guide for Oracle Application Development Framework.
For general information about ADF applications, see Section 1.5.2, "Scenario 2: Securing an Oracle ADF Application."
For details about the life cycle of an ADF application, see Appendix - Security Life Cycle of an ADF Application.
A web application requires securing web services with fine grained policies.
For details about web services security administration, see Oracle Fusion Middleware Security and Administrator's Guide for Web Services.
A Java EE application requires codebase permissions to perform specific actions; typical examples are reading a credential from the credential store or looking up policies in the policy store.
For details about creating codebase policies with Fusion Middleware Control, see Section 9.2.3, "Managing System Policies."
A non-ADF application needs to be secured with fine-grained authorization checks.
This use case requires:
Placing checks in the application code at the appropriate places
Configuring the appropriate policies
For details see Section 20.3, "The JAAS/OPSS Authorization Model."
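A check placed in application code is ultimately a permission test. The following sketch models a fine-grained permission using the standard `java.security.Permission` contract; the `ReportPermission` class and its wildcard semantics are assumptions for illustration, not the OPSS API.

```java
import java.security.Permission;

// Hypothetical fine-grained resource permission, modeled on the implies()
// contract used by JAAS-style authorization (not an OPSS class).
public class ReportPermission extends Permission {
    private final String action;

    public ReportPermission(String name, String action) {
        super(name);
        this.action = action;
    }

    @Override
    public boolean implies(Permission p) {
        if (!(p instanceof ReportPermission)) return false;
        ReportPermission rp = (ReportPermission) p;
        // "*" grants every report; actions must match exactly in this sketch
        boolean nameOk = getName().equals("*") || getName().equals(rp.getName());
        return nameOk && action.equals(rp.action);
    }

    @Override
    public boolean equals(Object o) {
        return o instanceof ReportPermission
            && ((ReportPermission) o).getName().equals(getName())
            && ((ReportPermission) o).action.equals(action);
    }

    @Override
    public int hashCode() { return getName().hashCode() ^ action.hashCode(); }

    @Override
    public String getActions() { return action; }
}
```

A grant in the policy store would then carry such a permission, and the code check succeeds only when the executing subject's policies imply the requested permission.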
The credential use case is the following:
Application Requiring Credentials to Access System - Application requires credentials to access a back-end system.
An application requires a credential to connect to a back-end system, such as a database or an LDAP server. The application code should reference this credential in such a way that the specifics of the credential can be changed per customer post deployment without modifying the application code. Furthermore, this use case also requires specifying who can access the credential store and what operations an authorized user can perform on credential data.
This use case features:
Using the credential store to persist credentials
Fetching credentials at runtime with the CSF API in application code
Defining and enforcing system policies on codebase
For details about:
Configuration and code examples, see Section 24.3, "Setting the Java Security Policy Permissions," and Section 24.7, "Examples"
Credential management, see Section 10.3, "Managing the Credential Store"
Packaging, see Section 21.3.2, "Packaging Credentials with Application."
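The map/key indirection that makes credentials swappable after deployment can be sketched with an in-memory stand-in. The map and key names below mirror the CSF naming convention used elsewhere in this chapter, but the class itself is hypothetical, not the OPSS credential store API.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical in-memory stand-in for the CSF map/key lookup pattern:
// application code references only map and key names, so the actual secret
// can be changed per customer without touching the code.
public class CredentialStoreSketch {
    // map name -> (key name -> password)
    private final Map<String, Map<String, char[]>> store = new HashMap<>();

    public void setCredential(String map, String key, char[] password) {
        store.computeIfAbsent(map, m -> new HashMap<>()).put(key, password);
    }

    public char[] getCredential(String map, String key) {
        Map<String, char[]> keys = store.get(map);
        return keys == null ? null : keys.get(key);
    }
}
```

In the real framework, reading a credential additionally requires a codebase grant of the appropriate `CredentialAccessPermission`, which is the system-policy part of this use case.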
The audit use cases are the following:
Auditing Security-Related Activity - An application requires recording security-related activity.
Auditing Business-Related Activity - An application requires recording business activity in the context of a flow.
An application needs to record security-related activity in several security areas; specifically, the application requires logging the following information:
Changes to a policy: what and when
The policies that were evaluated in a particular time interval
Changes to credentials or keys: what and when
The settings explained in this use case apply to all applications and components in a domain.
This use case requires that auditable applications:
Integrate with the Common Audit Framework (CAF)
Have built-in capabilities to log security activities
Set the proper audit filter level to capture activities in specific security areas
This use case features:
Integrating with the Common Audit Framework
Allowing applications to define their own audit categories and events in security areas, and making the application audit-aware
Allowing applications to set the appropriate filter level
For details about:
Integrating with CAF, see Section 28.4, "Integrating the Application with the Audit Framework."
Registering applications, see Section 28.6, "Register Application with the Registration Service."
Logging audit events, see Section 28.7, "Add Application Code to Log Audit Events."
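The filter-level idea can be sketched as follows. The `AuditFilterSketch` class and its threshold semantics are simplified assumptions for illustration, not the Common Audit Framework API, although CAF does use the None/Low/Medium/Custom levels shown here.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: an event in a security area (category) is recorded
// only when the category's configured filter level admits the event's level.
public class AuditFilterSketch {
    public enum Level { NONE, LOW, MEDIUM, CUSTOM }

    private final Map<String, Level> categoryLevels = new HashMap<>();

    public void setFilterLevel(String category, Level level) {
        categoryLevels.put(category, level);
    }

    public boolean isAuditable(String category, Level eventLevel) {
        Level configured = categoryLevels.getOrDefault(category, Level.NONE);
        // NONE disables the category; otherwise capture events at or below
        // the configured threshold.
        return eventLevel != Level.NONE
            && configured.ordinal() >= eventLevel.ordinal();
    }
}
```

With this model, setting a policy-management category to Medium captures both Low and Medium events (such as policy changes), while an unconfigured category captures nothing.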
An application needs to record business-related activity in the context of a functional flow; specifically, the application requires logging the users and the business actions performed by them in a particular time interval.
The settings explained in this use case apply to all applications and components in a domain.
This use case requires that applications:
Create their own audit events based on their business needs
Be able to log business activities with runtime attributes to audit data repository
Generate audit reports from audit events
Manage runtime audit policies
Modify audit event definitions, if necessary
This use case features:
Allowing applications to define business functional areas (as audit categories), business activities (as audit events in categories), and attributes in each category.
Registering applications at deployment; updating audit definitions; deregistering applications after deployment.
Managing audit artifacts with Fusion Middleware Control or WLST scripts.
For details about:
Integrating with CAF, see Section 28.4, "Integrating the Application with the Audit Framework."
Registering applications, see Section 28.6, "Register Application with the Registration Service."
Logging audit events, see Section 28.7, "Add Application Code to Log Audit Events."
A sample
component_events.xml file, see Section 28.5, "Create Audit Definition Files."
Managing audit policies, see Section 13.3, "Managing Audit Policies."
The identity propagation use cases are the following:
Propagating the Executing User Identity - Propagating the executing user identity to a web service over SOAP.
Propagating a User Identity - Propagating a user identity to a web service over SOAP.
Propagating Identities Across Domains - Propagating a user identity across WebLogic domains.
Propagating Identities over HTTP - Propagating a user identity over HTTP.
A client application in container needs to propagate the executing user identity to a web service over SOAP; the web service can be running on a different managed server, in the same domain, or in a different domain.
This use case requires that the current executing user identity be propagated to a web service over SOAP.
The features that facilitate this use case are primarily those of Oracle Web Services Manager (OWSM).
For details about OWSM, see chapter 4, Examining the Rearchitecture of Oracle Web Services Manager in Oracle Fusion Middleware, in Oracle Fusion Middleware Security and Administrator's Guide for Web Services.
For details about propagating identities over SOAP, see chapter 11, Configuring Policies, in Oracle Fusion Middleware Security and Administrator's Guide for Web Services.
A client application in container needs to propagate a user identity (which is not the executing user identity) to a web service over SOAP; the identity to be propagated is stored in the OPSS security store.
This use case requires that an identity of a user, distinct from the current executing user, be propagated to a web service over SOAP.
This use case features:
The OPSS security store, where credentials are stored, from which the application gets the specific identity that needs to be propagated as a PasswordCredential.
Oracle Web Services Manager ability to fetch and propagate the identity to a remote web service.
For details about this use case, see chapter 9, Creating and Managing Policies Sets, in Oracle Fusion Middleware Security and Administrator's Guide for Web Services.
A client application in container in a WebLogic domain needs to propagate a user identity (stored in the OPSS security store) to a different WebLogic domain over RMI.
For details about this use case, see section Enabling Trust Between WebLogic Server Domains.
A client application in container (in a WebLogic domain or a WAS cell) needs to propagate identities over HTTP.
For requirements and details about this use case, see Propagating Identities over HTTP.
The administration use cases are the following:
Application Requiring a Central Store - Application requires a central repository of security artifacts where those artifacts are managed.
Application Requiring Custom Management Tool - Application requires a custom tool to manage a central repository of externalized security artifacts.
Application Running in a Multiple Server Environment - Application requires modifying security artifacts in a multiple node server environment.
An application requires a central repository of policies, credentials, audit configuration, trusts, and keys, and a set of tools to manage that central repository, which is the OPSS security store.
This use case features:
The OPSS security store
Managing security artifacts with Fusion Middleware Control
Managing security artifacts with WLST scripts
For details about:
The OPSS security store, see Section 8.1, "Introduction to the OPSS Security Store."
Managing security artifacts, see:
Section 9.2, "Managing Policies with Fusion Middleware Control"
Section 9.3, "Managing Application Policies with OPSS Scripts"
Section 10.4, "Managing Credentials with Fusion Middleware Control"
Section 10.5, "Managing Credentials with OPSS Scripts"
Chapter 11, "Managing Keys and Certificates with the Keystore Service"
An application requires a custom tool to manage externalized security artifacts in a context that is meaningful to the application's business.
This use case requires building a custom graphical user interface with calls to OPSS APIs to display and manage security artifacts in the OPSS security store in a context that is meaningful to the application.
This use case features:
Managing security artifacts with OPSS API
For details about:
Code sample illustrating the use of the OPSS API to implement some of the operations needed to manage security artifacts, see A Custom Graphical User Interface.
The list of OPSS APIs, see Appendix H, "References."
An application running in a WebLogic domain where several server instances are distributed across multiple machines requires modifying security artifacts; changes must take effect in all components of the application regardless of where they are running.
This use case features:
Propagating changes to security artifacts whenever those changes are initiated on the administration server; data on managed server nodes is refreshed based on caching policies.
Using the MBeans API or Management API to modify security artifacts.
For details about:
Multiple server nodes, see Section 8.2.1, "Multiple-Node Server Environments"
OPSS services and MBeans, see Appendix E, "Configuring OPSS Services with MBeans"
The integration use case is the following:
Application Running in Multiple Domains - Several WebLogic domains sharing a single repository of security artifacts.
A product requires multiple WebLogic domains to run and those domains share a single central OPSS security store.
This use case features:
OPSS support for several domains to share a security store
For details about:
Domains sharing a credential store, see Section 10.2, "Encrypting Credentials"
Using
reassociateSecurityStore to join an existing OPSS security store, see Section 9.3.29, "reassociateSecurityStore"
This section describes the following use cases in some detail:
Propagating Identities over HTTP
A Custom Graphical User Interface
This section explains how an identity can be propagated across containers and domains using the OPSS trust service and the HTTP protocol.
Propagating Identities over the HTTP Protocol
Domains Using Both Protocols
The OPSS trust service allows the propagation of identities across HTTP-enabled applications by providing and validating tokens. The OPSS trust service uses an asserter that is available only on the following platforms:
Oracle WebLogic Application Server - the Identity Asserter
IBM WebSphere Application Server - the Trust Asserter Interceptor (TAI)
Even though the scenarios in this section are illustrated with applications running on WebLogic domains, they also apply to applications running on WebSphere cells; except for the asserter configuration, all other configurations and samples are identical on both platforms. For configuration properties, see Section F.2.6, "Trust Service Properties."
There is one asserter per WebLogic domain or WebSphere cell; the keystore stores digital certificates, private keys, and trusted CA certificates; the storage service used by the keystore is JKS.
Identity propagation using HTTP calls typically runs as follows (see Figure 19-2):
A client application in Domain1 requests a token for an authenticated user from Domain1's OPSS trust service instance.
The trust service accesses Domain1's keystore and issues a token to the client application.
The client application encodes the token in an HTTP header and dispatches an HTTP request to a servlet application in Domain2. Domain2's asserter intercepts the request and extracts the token.
The asserter requests a validation of that token from Domain2's OPSS trust service instance.
The trust service accesses Domain2's keystore to validate the token and returns a response.
Assuming that the validation is successful, the asserter sends the request to the servlet application using the asserted identity.
The servlet application sends an HTTP response to the client application request.
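The token hand-off in steps 3 and 4 can be sketched with plain JDK classes: the client Base64-encodes the issued token into a header value, and the asserter side decodes it back. The "OIT" scheme name comes from the client sample later in this section; the header layout and token contents here are assumptions, not the OPSS wire format.

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Illustrative sketch of steps 3-4: encode a token into an HTTP header value
// on the client side, then extract and decode it on the asserter side.
public class TokenHeaderSketch {
    static final String AUTH_TYPE_NAME = "OIT";

    public static String toHeaderValue(String token) {
        String b64 = Base64.getEncoder()
            .encodeToString(token.getBytes(StandardCharsets.UTF_8));
        return AUTH_TYPE_NAME + " " + b64;
    }

    public static String fromHeaderValue(String headerValue) {
        if (!headerValue.startsWith(AUTH_TYPE_NAME + " ")) {
            throw new IllegalArgumentException("unexpected auth scheme");
        }
        byte[] raw = Base64.getDecoder()
            .decode(headerValue.substring(AUTH_TYPE_NAME.length() + 1));
        return new String(raw, StandardCharsets.UTF_8);
    }
}
```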
The remainder of this section explains and illustrates the configuration required for the above scenario to work, in the following sections:
In this scenario, the client and the servlet applications use the same trust service instance to issue and validate tokens. The following code and configuration samples illustrate a sample client application and a servlet application running in the same domain.
Client Application Code Sample
The following sample illustrates a client application; note that the file
jps-api.jar must be included in the class path for the code to compile.
// Authentication type name
public static final String AUTH_TYPE_NAME = "OIT";

// The authenticated username
String user = "weblogic";

// URL of the target application
URL url = new URL("");

//-----------------------------------------
JpsContextFactory ctxFactory = JpsContextFactory.getContextFactory();
JpsContext jpsCtx = ctxFactory.getContext();
final TrustService trustService = jpsCtx.getServiceInstance(TrustService.class);
final TokenManager tokenMgr = trustService.getTokenManager();
final TokenContext ctx =
    tokenMgr.createTokenContext(TokenConfiguration.PROTOCOL_EMBEDDED);
UsernameToken ut = WSSTokenUtils.createUsernameToken("wsuid", user);
GenericToken gtok = new GenericToken(ut);
ctx.setSecurityToken(gtok);
ctx.setTokenType(SAML2URI.ns_saml);
Map<String, Object> ctxProperties = ctx.getOtherProperties();
ctxProperties.put(TokenConstants.CONFIRMATION_METHOD,
    SAML2URI.confirmation_method_bearer);
AccessController.doPrivileged(new PrivilegedAction<String>() {
    public String run() {
        try {
            tokenMgr.issueToken(ctx);
        } catch (Exception e) {
            e.printStackTrace();
        }
        return null;
    }
});
Token token = ctx.getSecurityToken();
String b64Tok = TokenUtil.encodeToken(token);
Keystore Service Configuration
Assuming that the domain name is
jrfServer_admin, the following command illustrates the creation of the domain keystore, represented by the generated file
default-keystore.jks:
JAVA_HOME/bin/keytool -genkeypair -alias jrfServer_admin -keypass welcome -keyalg RSA -dname "CN=jrfServer_admin,O=Oracle,C=US" -keystore default-keystore.jks -storepass password

cp default-keystore.jks ${domain.home}/config/fmwconfig
Make sure that the keystore service configured in the file
jps-config.xml points to the generated
default-keystore.jks; the following sample illustrates a keystore service configuration:
<!-- KeyStore Service Instance -->
<serviceInstance name="keystore" provider="keystore.provider"
                 location="./default-keystore.jks">
    <description>Default JPS Keystore Service</description>
    <property name="keystore.provider.type" value="file"/>
    <property name="keystore.file.path" value="./"/>
</serviceInstance>
CSF Configuration
Create a map/key pair used to open the keystore and another map/key pair used to issue tokens. The following commands illustrate these operations using the OPSS script
createCred:
// JKS keystore opening password
createCred(map="oracle.wsm.security", key="keystore-csf-key", user="keystore", password="password")

// Private key password to issue tokens
createCred(map="oracle.wsm.security", key="sign-csf-key", user="orakey", password="password")
For details about the OPSS script
createCred, see Section 10.5, "Managing Credentials with OPSS Scripts."
Grant Configuration
Add a grant like the following to the policy store, which allows the client application to use the trust service API:
<grant>
    <grantee>
        <codesource>
            <url>file:${oracle.deployed.app.dir}/<MyApp>${oracle.deployed.app.ext}</url>
        </codesource>
    </grantee>
    <permissions>
        <permission>
            <class>oracle.security.jps.service.trust.TrustServiceAccessPermission</class>
            <name>appId=*</name>
            <actions>issue</actions>
        </permission>
    </permissions>
</grant>
The Oracle WebLogic Server must be stopped and re-started for the above grant to take effect.
Servlet Code
The following sample illustrates how a servlet can obtain an asserted user name:
public void doGet(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException {
    String username = request.getRemoteUser();
    ServletOutputStream out = response.getOutputStream();
    out.print("Asserted username: " + username);
    out.close();
}
web.xml Configuration
Set the appropriate login method in the file
web.xml, as illustrated in the following snippet:
<web-app id="WebApp_ID" …>
    <login-config>
        <auth-method>CLIENT-CERT</auth-method>
        <realm-name>Identity Assertion</realm-name>
    </login-config>
    …
</web-app>
WebLogic Asserter and Trust Service Configuration
To configure the WebLogic asserter, proceed as follows:
Copy the WebLogic identity asserter JAR
jps-wls-trustprovider.jar to the location
${domain.home}/lib/mbeantypes, as illustrated by the following command, and then restart the WebLogic Server:
cp ${common.components.home}/modules/oracle.jps_11.1.1/jps-wls-trustprovider.jar ${domain.home}/lib/mbeantypes
Use WebLogic Console to configure the asserter, as follows:
Login to the console as an administrator.
Navigate to Security Settings > Security Realms > myrealm > Providers Tab > Authentication, and click New to open the Create a New Authentication Provider dialog.
In that dialog, enter
TrustServiceIdentityAsserter in the name box, and select
TrustServiceIdentityAsserter from the pull-down in the type box; then click OK.
Verify that a grant like the following is present in the policy store; this grant is required for the asserter to use the OPSS trust service API; if necessary, use WLST scripts to specify the grant:
<grant>
    <grantee>
        <codesource>
            <url>file:${domain.home}/lib/mbeantypes/jps-wls-trustprovider.jar</url>
        </codesource>
    </grantee>
    <permissions>
        <permission>
            <class>oracle.security.jps.service.trust.TrustServiceAccessPermission</class>
            <name>appId=*</name>
            <actions>validate</actions>
        </permission>
    </permissions>
</grant>
Any changes to the file
jps-config.xml require the server to be restarted before updates take effect.
WebSphere Trust Asserter Interceptor Configuration
For details on this topic, see section Configuring the Trust Association Interceptor in Oracle Fusion Middleware Third-Party Application Server Guide.
In this scenario there are two different domains: Domain1 and Domain2. The client application is running in Domain1; the servlet application is running in Domain2. It is assumed that each of these two domains has a trust service and a keystore properly configured as explained under the heading WebLogic Asserter and Trust Service Configuration in the Single Domain Scenario. In this scenario, the client application uses Domain1's trust service for token generation, and the servlet application uses Domain2's trust service for token validation.
In Domain1, the client sample code and the following configurations are identical to those described in the Single Domain Scenario:
The client application is illustrated by the code under the heading Client Application Code Sample.
The configuration of the keystore is illustrated under the heading Keystore Service Configuration.
The CSF configuration is illustrated under the heading CSF Configuration.
The grant configuration is illustrated under the heading Grant Configuration.
In Domain 2, the servlet sample code and
web.xml configuration are identical to those described in the Single Domain Scenario, but there is some extra setup required:
The servlet application code is illustrated by the code under the heading Servlet Code in the Single Domain Scenario.
The configuration of the file
web.xml is illustrated under the heading web.xml Configuration in the Single Domain Scenario.
The client certificate that is used to sign the token in Domain1 must be present in Domain2's keystore; therefore, the administrator proceeds as follows:
Exports the certificate from Domain 1's keystore, as illustrated by the following command:
JAVA_HOME/bin/keytool -export -alias jrfServer_admin -file jrfServer_admin.cer -keystore default-keystore.jks -storepass password
Imports the certificate into Domain 2's keystore as illustrated by the command below. Note that the alias passed must be the same as the alias used in step 1 for the export; if you overwrite the issuer name on the client side, then that issuer name should be used as the alias.
JAVA_HOME/bin/keytool -importcert -alias jrfServer_admin -file jrfServer_admin.cer -keystore default-keystore.jks -storepass password
Sets the Domain2's keystore password in the (Domain2's) credential store using the OPSS script
createCred as follows:
createCred(map="oracle.wsm.security", key="keystore-csf-key", user="keystore", password="password")
In this scenario, applications use either the HTTP protocol or the SOAP protocol, and not all applications in the domain use the same protocol. In such a scenario, the keystore can be shared by the trust service used by the HTTP protocol and the SOAP service used by Oracle Web Services Manager. But in order for the trust service to work in this case, some special configurations in the file
jps-config.xml are required as explained in the following sections:
In this scenario, there is one keystore. The following snippet illustrates the configuration required:

<propertySet name="trust.provider.embedded">
    <!-- The certificate alias used to get the signing certificate from the JKS keystore -->
    <property name="trust.aliasName" value="orakey"/>
    <!-- The issuer name added to the token; the destination trust service instance uses it as an alias to pick up the corresponding certificate to validate the token signature -->
    <property name="trust.issuerName" value="orakey"/>
</propertySet>
In this scenario, there are two domains and two keystores. The following snippet illustrates the configuration required in the domain that is issuing tokens for a certificate with alias
orakey:
<!-- The issuer domain keystore must have a signing certificate -->
<propertySet name="trust.provider.embedded">
    <property name="trust.aliasName" value="orakey"/>
    <property name="trust.issuerName" value="domain1"/>
</propertySet>
The following snippet illustrates the configuration required in the domain that is receiving tokens for a certificate with alias
orakey:
<!-- Important: the recipient domain must have a token validation certificate for domain1, which is the one used to sign the token -->
<propertySet name="trust.provider.embedded">
    <property name="trust.aliasName" value="domain1"/>
</propertySet>
This use case illustrates some of the operations needed, for example, when implementing a custom graphical UI to manage policies. The samples presented use the OPSS APIs and demonstrate the following operations:
Querying users in the identity store.
Querying application roles in the policy store.
Querying the mapping of users and groups to application roles; specifically, given a user, identifying all the application roles mapped to that user. (Recall that the mapping of users and groups to application roles is a many-to-many relationship.)
Creating, reading, updating, and deleting the mapping of users and groups to application roles.
This use case assumes that:
The identity store is an OID LDAP-based store.
The policy store is an OID LDAP-based store.
The identity store contains the following hierarchy of users and groups (enterprise roles):
The users Mary, John, Tom, and Helen.
The groups IT, Training, and Development.
The groups Training and Development are members of the group IT.
The user Mary is a member of the group Training.
The users Tom and John are members of the group Development.
The policy store contains the following application policies and hierarchy of application roles:
The application policies ApplicationPolicy1 and ApplicationPolicy2.
The roles System Manager, System Developer, and System Analyst are application roles referenced in the policy ApplicationPolicy1; the System Manager role is a member of the System Developer role; the System Developer role is a member of the System Analyst role.
The roles Director, Instructor, and Lecturer are application roles referenced in the application policy ApplicationPolicy2; the Director role is a member of the Instructor role; the Instructor role is member of the Lecturer role.
The mapping of application roles to users and groups is as follows:
The role System Manager is mapped to the user Helen.
The role System Developer is mapped to the group Development.
The role Director is mapped to the user Tom.
The role Instructor is mapped to the groups Training and Development.
Figure 19-3 illustrates the hierarchy of application roles, the users and groups, and the mapping of application roles to users and groups, as assumed in this use case.
Note that the above role hierarchy implies, for instance, that a user in the System Manager role is also in the System Developer role, and similarly with the other roles. Therefore the role membership for each of the four users is as follows:
User Tom is a member of the following application roles: System Developer, System Analyst, Director, Instructor, and Lecturer.
User Helen is a member of the following application roles: System Manager, System Developer, and System Analyst.
User Mary is a member of the following application roles: Instructor and Lecturer.
User John is a member of the following application roles: System Developer, System Analyst, Instructor, and Lecturer.
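The transitive memberships listed above can be computed as a reachability closure over the membership graph, following both group membership and the role hierarchy. This sketch uses the users, groups, and roles assumed by this use case; the class itself is illustrative, not part of the OPSS API.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.HashSet;
import java.util.LinkedHashSet;
import java.util.Map;
import java.util.Set;

// Sketch: compute everything a principal is (transitively) a member of.
// An edge "A member-of B" means members of A are also members of B, which
// models both group membership and the application-role hierarchy.
public class RoleClosureSketch {
    private final Map<String, Set<String>> memberOf = new HashMap<>();

    public void addMembership(String member, String of) {
        memberOf.computeIfAbsent(member, k -> new HashSet<>()).add(of);
    }

    public Set<String> allMemberships(String principal) {
        Set<String> seen = new LinkedHashSet<>();
        Deque<String> todo =
            new ArrayDeque<>(memberOf.getOrDefault(principal, Set.of()));
        while (!todo.isEmpty()) {
            String p = todo.pop();
            if (seen.add(p)) {
                todo.addAll(memberOf.getOrDefault(p, Set.of()));
            }
        }
        return seen;
    }
}
```

Feeding in the use case's data (Mary in Training, Training mapped to Instructor, Instructor a member of Lecturer, and so on) reproduces the four membership lists above.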
The code samples are detailed in the following sections:
Imports Assumed - List of imports
Code Sample 1 - Querying the identity store.
Code Sample 2 - Creating application roles and assigning members to a role.
Code Sample 3 - Querying application roles.
Code Sample 4 - Mapping application roles to users and groups.
Code Sample 5 - Getting all the roles that have a given user as a member.
Code Sample 6 - Removing the mapping of an application role to a group.
The sample codes in this use case assume the following import statements:
import java.security.AccessController;
import java.security.Policy;
import java.security.Principal;
import java.security.PrivilegedExceptionAction;
import java.security.Security;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import javax.security.auth.Subject;
import oracle.security.idm.Identity;
import oracle.security.idm.IdentityStore;
import oracle.security.idm.ObjectNotFoundException;
import oracle.security.idm.Role;
import oracle.security.idm.RoleManager;
import oracle.security.idm.SearchParameters;
import oracle.security.idm.SearchResponse;
import oracle.security.idm.SimpleSearchFilter;
import oracle.security.idm.User;
import oracle.security.idm.UserProfile;
import oracle.security.jps.ContextFactory;
import oracle.security.jps.JpsContext;
import oracle.security.jps.JpsContextFactory;
import oracle.security.jps.principals.JpsApplicationRole;
import oracle.security.jps.service.idstore.IdentityStoreService;
import oracle.security.jps.service.policystore.ApplicationPolicy;
import oracle.security.jps.service.policystore.PolicyObjectNotFoundException;
import oracle.security.jps.service.policystore.PolicyStore;
import oracle.security.jps.service.policystore.PolicyStoreException;
import oracle.security.jps.service.policystore.entitymanager.AppRoleManager;
import oracle.security.jps.service.policystore.info.AppRoleEntry;
import oracle.security.jps.service.policystore.search.AppRoleSearchQuery;
import oracle.security.jps.service.policystore.search.ComparatorType;
import oracle.security.jps.util.JpsAuth;
import weblogic.security.principal.PrincipalFactory;
The following sample code illustrates two queries to users in the identity store:
private void queryUsers() throws Exception {
    // Using IDM U/R to query ID store
    IdentityStore idmStore = idStore.getIdmStore();

    // Query an individual user by name
    User employee = idmStore.searchUser(USER_TOM);
    log("----------------------------------------------------------");
    log("### Query individual user (Tom) from ID store ###");
    log(USER_TOM + ": " + employee.getName() + " GUID: " + employee.getGUID());
    log();

    // Get all users whose name is not "Paul"
    SimpleSearchFilter filter =
        idmStore.getSimpleSearchFilter(UserProfile.NAME,
            SimpleSearchFilter.TYPE_NOTEQUAL, "Paul");
    SearchParameters sps =
        new SearchParameters(filter, SearchParameters.SEARCH_USERS_ONLY);
    SearchResponse result = idmStore.searchUsers(sps);
    log("----------------------------------------------------------");
    log("### Query all users (whose name is not Paul) from ID store ###");
    log("Found the following users:");
    while (result.hasNext()) {
        Identity user = result.next();
        log("\t user: " + user.getName() + ", GUID: " + user.getGUID());
    }
    log();
}
The following sample code illustrates how to create an application role and how to make a role a member of another role:
private void createAppRoles1() throws Exception {
    AppRoleManager arm1 = ap1.getAppRoleManager();
    log("----------------------------------------------------------");
    log("### Creating app roles in app policy1 with hierarchy ###");
    AppRoleEntry sysAnalystRole =
        arm1.createAppRole(APP_ROLE_SYS_ANALYST, APP_ROLE_SYS_ANALYST,
            APP_ROLE_SYS_ANALYST);
    AppRoleEntry sysDeveloperRole =
        arm1.createAppRole(APP_ROLE_SYS_DEVELOPER, APP_ROLE_SYS_DEVELOPER,
            APP_ROLE_SYS_DEVELOPER);
    AppRoleEntry sysManagerRole =
        arm1.createAppRole(APP_ROLE_SYS_MANAGER, APP_ROLE_SYS_MANAGER,
            APP_ROLE_SYS_MANAGER);
    ap1.addPrincipalToAppRole(sysManagerRole, APP_ROLE_SYS_DEVELOPER);
    ap1.addPrincipalToAppRole(sysDeveloperRole, APP_ROLE_SYS_ANALYST);
    log("### App roles in app policy #1 have been created ###");
    log();
}
The following code sample illustrates several ways to query application roles:
private void queryAppRolesInApplicationPolicy1() throws Exception {
    AppRoleManager arm1 = ap1.getAppRoleManager();

    // Get the role that matches a name
    AppRoleEntry are = arm1.getAppRole(APP_ROLE_SYS_MANAGER);
    log("----------------------------------------------------------");
    log("### Query app roles in application policy #1, by name ###");
    log("Found " + are.getName() + " by app role name.");
    log();

    // Get the role that matches a name exactly
    AppRoleSearchQuery q =
        new AppRoleSearchQuery(AppRoleSearchQuery.SEARCH_PROPERTY.NAME, false,
            ComparatorType.EQUALITY, APP_ROLE_SYS_ANALYST,
            AppRoleSearchQuery.MATCHER.EXACT);
    List<AppRoleEntry> arel = arm1.getAppRoles(q);
    log("### Query app roles in application policy #1, by exact query ###");
    log("Found " + arel.get(0).getName() + " by exact query.");
    log();

    // Get roles with names that begin with a given string
    q = new AppRoleSearchQuery(AppRoleSearchQuery.SEARCH_PROPERTY.NAME, false,
            ComparatorType.EQUALITY, APP_ROLE_SYS_DEVELOPER.subSequence(0, 7),
            AppRoleSearchQuery.MATCHER.BEGINS_WITH);
    arel = arm1.getAppRoles(q);
    log("### Query app roles in app policy #1, by begins_with query ###");
    log("Found " + arel.get(0).getName() + " by begins_with query.");
    log();

    // Get roles with names that contain a given substring
    q = new AppRoleSearchQuery(AppRoleSearchQuery.SEARCH_PROPERTY.NAME, false,
            ComparatorType.EQUALITY, "dummy",
            AppRoleSearchQuery.MATCHER.ANY);
    arel = arm1.getAppRoles(q);
    log("### Query app roles in app policy #1, by matcher any ###");
    log("Found " + arel.size() + " app roles by matcher any.");
    for (AppRoleEntry ar : arel) {
        log("\t" + ar.getName());
    }
    log();
}
The following sample illustrates how to map application roles to users and groups:
private void assignAppRoleToUsersAndGroups() throws Exception {
    // Obtain the user/group principals
    IdentityStore idmStore = idStore.getIdmStore();
    User tom = idmStore.searchUser(USER_TOM);
    User helen = idmStore.searchUser(USER_HELEN);
    Role trainingRole = idmStore.searchRole(IdentityStore.SEARCH_BY_NAME, GROUP_TRAINING);
    Role devRole = idmStore.searchRole(IdentityStore.SEARCH_BY_NAME, GROUP_DEV);
    Principal tomPrincipal = PrincipalFactory.getInstance()
        .createWLSUser(tom.getName(), tom.getGUID(), tom.getUniqueName());
    Principal helenPrincipal = PrincipalFactory.getInstance()
        .createWLSUser(helen.getName(), helen.getGUID(), helen.getUniqueName());
    Principal trainingPrincipal = PrincipalFactory.getInstance()
        .createWLSGroup(trainingRole.getName(), trainingRole.getGUID(), trainingRole.getUniqueName());
    Principal devPrincipal = PrincipalFactory.getInstance()
        .createWLSGroup(devRole.getName(), devRole.getGUID(), devRole.getUniqueName());

    // Application policy #1
    log("----------------------------------------------------------");
    log("### Assigning appl roles to users and groups, app policy #1 ###");
    ap1.addPrincipalToAppRole(helenPrincipal, APP_ROLE_SYS_MANAGER);
    ap1.addPrincipalToAppRole(devPrincipal, APP_ROLE_SYS_DEVELOPER);

    // Application policy #2
    log("### Assigning app roles to users and groups, app policy #2 ###");
    ap2.addPrincipalToAppRole(tomPrincipal, APP_ROLE_DIRECTOR);
    ap2.addPrincipalToAppRole(devPrincipal, APP_ROLE_INSTRUCTOR);
    ap2.addPrincipalToAppRole(trainingPrincipal, APP_ROLE_INSTRUCTOR);
    log("### App roles have been assigned to users and groups ###");
    log();
}
The following code sample illustrates how to get all the roles that have a given user as a member:
private void showAppRoles() throws Exception {
    Subject tomSubject = getUserSubject(USER_TOM);
    Subject helenSubject = getUserSubject(USER_HELEN);
    Subject johnSubject = getUserSubject(USER_JOHN);
    Subject marySubject = getUserSubject(USER_MARY);
    Set<String> applications = new HashSet<String>();
    applications.add(APPLICATION_NAME1);
    applications.add(APPLICATION_NAME2);
    log("----------------------------------------------------------");
    log("### Query application roles for Tom ###");
    showAppRoles(applications, USER_TOM, tomSubject);
    log();
    log("### Query application roles for Helen ###");
    showAppRoles(applications, USER_HELEN, helenSubject);
    log();
    log("### Query application roles for John ###");
    showAppRoles(applications, USER_JOHN, johnSubject);
    log();
    log("### Query application roles for Mary ###");
    showAppRoles(applications, USER_MARY, marySubject);
    log();
}

private Subject getUserSubject(String userName) throws Exception {
    Subject subject = new Subject();
    // Query users from ID store using user/role API, for user principal
    IdentityStore idmStore = idStore.getIdmStore();
    User user = idmStore.searchUser(userName);
    Principal userPrincipal = PrincipalFactory.getInstance()
        .createWLSUser(user.getName(), user.getGUID(), user.getUniqueName());
    subject.getPrincipals().add(userPrincipal);

    // Query users from ID store using user/role API, for enterprise roles
    RoleManager rm = idmStore.getRoleManager();
    SearchResponse result = null;
    try {
        result = rm.getGrantedRoles(user.getPrincipal(), false);
    } catch (ObjectNotFoundException onfe) {
        // ignore
    }

    // Add group principals to the subject
    while (result != null && result.hasNext()) {
        Identity role = result.next();
        Principal groupPrincipal = PrincipalFactory.getInstance()
            .createWLSGroup(role.getName(), role.getGUID(), role.getUniqueName());
        subject.getPrincipals().add(groupPrincipal);
    }

    // The subject now contains both user and group principals.
    // In the WebLogic Server, this setting is done by a login module
    return subject;
}

private void showAppRoles(Set<String> applications, String user, Subject subject) {
    // Get all granted application roles for this subject
    Set<JpsApplicationRole> result = null;
    try {
        result = JpsAuth.getAllGrantedAppRoles(subject, applications);
    } catch (PolicyStoreException pse) {
        log(pse.toString());
    }
    if (result.size() <= 1) {
        log(user + " has " + result.size() + " application role.");
        if (result.size() == 1) {
            for (JpsApplicationRole ar : result) {
                log("\tApplication role: " + ar.getName());
            }
        }
    } else {
        System.out.println(user + " has " + result.size() + " application roles.");
        for (JpsApplicationRole ar : result) {
            log("\tApplication role: " + ar.getAppID() + "/" + ar.getName());
        }
    }
}
The following sample code illustrates how to remove the mapping of an application role to a group:
private void removeAppRoleForUserDirector() throws Exception {
    // Remove instructor role from Dev group
    log("----------------------------------------------------------");
    log("### Removing Instructor application role from Dev group ###");
    IdentityStore idmStore = idStore.getIdmStore();
    Role devRole = idmStore.searchRole(IdentityStore.SEARCH_BY_NAME, GROUP_DEV);
    Principal devPrincipal = PrincipalFactory.getInstance()
        .createWLSGroup(devRole.getName(), devRole.getGUID(), devRole.getUniqueName());
    ap2.removePrincipalFromAppRole(devPrincipal, APP_ROLE_INSTRUCTOR);
    log("### Instructor app role has been removed from Dev group ###");
    log();
    log("----------------------------------------------------------");
    log("### Now query application roles for user John, again ###");
    Set<String> applications = new HashSet<String>();
    applications.add(APPLICATION_NAME1);
    applications.add(APPLICATION_NAME2);
    Subject johnSubject = getUserSubject(USER_JOHN);
    showAppRoles(applications, USER_JOHN, johnSubject);
    log();
}
This section explains the phases that the security of an application goes through. During development, an application developer typically secures the application declaratively, and optionally may use OPSS APIs to implement more complex security tasks. Thus, some parts of the application use declarative security, others use programmatic security, and both rely on security features available in the development and run-time environment.
Application developers also define a number of application entitlements and roles (policy seed data) required to secure the application. This policy seed data is kept in a source control system together with the application source code.
Once developed, the application is typically tested, with the policy seed data migrated to a test policy store (the OPSS security store).
In the deployment phase, typically, an administrator migrates the policy seed data to the production policy store (the OPSS security store). Figure 19-4 illustrates the basic flow.
This section lists most of the code and configuration samples found elsewhere in this Guide, as well as a fully-written code example.
Full Code Example of a Java EE Application with Integrated Security
The following list includes typical security-related programming tasks and links to sample code illustrating implementations:
Querying an LDAP identity store - See Section 7.4, "Querying the Identity Store Programmatically."
Querying application roles and the mapping of users and groups to application roles - See A Custom Graphical User Interface.
Invoking the method isUserInRole - See Section 20.2.2.2, "Programmatic Authorization."
Managing policies - See Section 20.3.2, "Managing Policies."
Checking policies - See Section 20.3.3, "Checking Policies."
Using the class ResourcePermission - See Section 20.3.4, "The Class ResourcePermission."
Using the Identity Store Login Module for authentication in Java SE applications - See Section 22.2.3.2, "Using the Identity Store Login Module for Authentication."
Using the Identity Store Login Module for assertion in Java SE applications - See Section 22.2.3.3, "Using the Identity Login Module for Assertion."
The following list includes typical security-related configuration tasks and links to sample configuration:
Configuring an OAM SSO provider - See Section 8.7.3.3, "OAM Configuration Example."
Configuring resource permissions - See Section 20.3.4, "The Class ResourcePermission."
Configuring the servlet filter and the EJB interceptor - See Section 21.1, "Configuring the Servlet Filter and the EJB Interceptor."
Configurations involved with migrateSecurityStore - See Section 6.5.2.1, "Migrating Policies Manually," and Section 6.5.2.2, "Migrating Credentials Manually."
Configuring an LDAP identity store - See Section 7.3.2.6, "Examples of the Configuration File," and Section 22.2.2, "Configuring an LDAP Identity Store in Java SE Applications."
Configuring the policy and credential stores in Java SE applications - See Section 23.1, "Configuring Policy and Credential Stores in Java SE Applications."
ezshare is a full example of a Java EE application whose security has been integrated with OPSS. It uses permission-based grants and is available at the Oracle Network. To locate the example, search for the keyword ezshare.
Help:Templates
A template is simply a page designed for inclusion on other pages. If a template is changed, this is usually reflected on every page that uses the template. As you might begin to imagine, templates are a powerful tool that when put to good use, can reduce work and increase maintainability. On this page you will learn how to use templates and how to make your own templates.
General
Templates may contain any desired wikitext, including calls to other templates. They have some limited programming capacities: customizable values (called parameters), calculation and branching (using parser functions), and access to wiki-specific variables, such as dates, times, and page names.
To use a template in a page, a template tag (in the form {{template name}}, which is the name of the template enclosed in doubled curly brackets) is added where you want the template to appear. When a reader wants to view the page, the servers fetch the contents of the template page, process any variables relevant to the page, and display the result in place of the template tag. Templates are commonly used to repeat the same content, or content with a consistent presentation, across many pages, e.g., a certain “style”.
Typical applications are:
- Messages for displaying information ({{split}})
- Navigation between a book's pages or chapters for easier reading online ({{chapter navigation}})
- Placeholders for displaying visual information. Like tables, figures, diagrams, illustrations, etc.
- Composites composed of a table of simple images which display more complex ideas ({{chess diagram}})
Template pages and namespaces
Templates have their own namespace which uses the prefix "Template:" (similar to the familiar "User:", "Help:", and "Talk:" namespaces).
Most templates exist in the template namespace, and can therefore be found on pages titled "Template:Template name". Sometimes, however, users create templates in other namespaces, such as the User: namespace. The normal use of a template (called transclusion) is possible from any namespace, but the advantage of the template namespace is that the "Template:" prefix is not needed when transcluding. By contrast, in order to transclude content from any other namespace excepting the main namespace (where books reside), the full name is needed, including the namespace prefix. If transcluding from the mainspace (which is rarely done) a colon must be prefixed inside the template syntax, or the software treats the code as an uncreated template. Thus:
Template names are exactly like other page names: case-sensitive except for the first letter, with spaces indistinguishable from underscores. If the symbol # (normally used to link to a section of a page) appears within the curly brackets, then it and any characters that follow it are ignored.
Template pages, like most other pages, have talk pages associated with them, where editors can discuss any issues relating to the template. (For this reason, templates should not be placed in the "Talk:" namespace, even if they are intended for use on talk pages; since talk pages do not have talk pages of their own, there would be no page for discussing the template.)

Using templates

To use a template, insert its name in doubled curly braces, {{...}}. The template tag should be placed in a page at the point where the template's content is to appear. Templates may also accept parameters, which can be identified either by name or by their position in the tag (sometimes called 'positional' parameters).
Basic template usage examples
If you wish to experiment with any of these, please use the template sandbox.
Templates with named parameters
Using named parameters is similar to using unnamed parameters. For an example, see Template:Payoff matrix (which is used in some books on game theory).
Common mistakes
URL problems
URLs can contain equals signs (=), e.g. in query strings. However, the first equals sign in a template parameter listing is always treated as though it sets off a named template parameter; thus {{some template|<URL containing an equals sign>}} will treat the text before the first equals sign as a parameter name, not the start of a URL. There are two workarounds for this problem:
- Use an explicit positional parameter reference: {{some template|1=<URL>}}. The '1=' refers to the first positional parameter explicitly and will obviate the subsequent equals signs.
- Use the special template {{=}}: {{some template|<URL>{{=}}value&key{{=}}value2}}. This template returns an equals sign that is not parsed or interpreted by the wiki software.
Whitespace problems
Named parameters will automatically strip whitespace - spaces, tabs, carriage returns and line feeds, or other non-visible characters - from a parameter. Unnamed parameters will retain all whitespace. This can cause odd formatting issues if you expect one behavior and get the other.
Vertical pipe problems
The vertical pipe character (|) is used in templates (and parser functions) to separate parameters. If you need to provide a pipe character to a template parameter, then use the special template {{!}}, which returns an unparsed pipe character, or use the HTML entity &#124; for the pipe character.
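For instance (using a hypothetical Template:2x2 table that lays its four parameters out in a grid), the two workarounds look like this:

```wikitext
{{2x2 table | a | b {{!}} c | d }}
{{2x2 table | a | b &#124; c | d }}
```

In both cases the second parameter receives the literal value "b | c" instead of being split into two separate parameters at the pipe.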
Creating and editing templates
Templates are created and edited in much the same way as any other page: choose an appropriate name, navigate to that page, then click the Edit tab or create a new page as needed. The only real difference is that templates will normally be placed in the "Template:" namespace rather than the (unprefixed) main namespace. Anything that can be included on a normal page can be included on a template, including other templates. Beyond that, templates have access to programming features—parameters, parser functions, and variables—which allow for context-dependent use, and special tags that help control which information is transcluded and which is not.
When creating templates, it's best to follow a few simple principles:
- Choose a short, descriptive name. Some people suggest that generic template names be entirely in lowercase letters, and those for a specific book use the form Template:Book Name/Template Name, but there is currently no hard guideline.
- Make a quick search of template space or browse templates to make sure that the template you're creating hasn't already been done. It is sometimes easier and better to enhance an already existing template than to create a new template from scratch.
- Make sure your template is properly documented and included in the correct categories. (see below)
Parameters

Parameters in a template take the form {{{...}}}, where the tripled curly braces surround the name of a named parameter or the number of a positional parameter. Values entered for parameters can be quite long. For example, consider these two tags for a template Template:Peoplepets:
- A. {{ peoplepets | John | Mary | age = 6 | Fido | small | kind = dog }}
- B. {{ peoplepets | age = 6 | kind = dog | John | Mary | Fido | small }}
- A. John and Mary own a small dog named Fido who is 6 years old.
- B. John and Mary own a small dog named Fido who is 6 years old.

Note that named parameters may appear in any position, while positional parameters must appear in order. A parameter must be passed by name when a positional parameter value contains an equals sign.
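The tags above presume that the page Template:Peoplepets contains wikitext along these lines (reconstructed here from the outputs shown; the actual template may differ):

```wikitext
{{{1}}} and {{{2}}} own a {{{4}}} {{{kind}}} named {{{3}}} who is {{{age}}} years old.
```

Each {{{n}}} slot picks up the nth unnamed value from the tag, while {{{kind}}} and {{{age}}} pick up the named ones.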
- Values may be empty. Here the second positional parameter and the 'age' parameter have empty values:
- Template tag:{{ peoplepets | John || age = | Fido | small | kind = dog }}
- Produces (note the missing words): John and own a small dog named Fido who is years old.
- Values can remain unprovided. Here the 'kind' parameter has been omitted:
- Template tag: {{ peoplepets | John | Mary | age = 6 | Fido | small }}
- Produces (this displays the missing parameter): John and Mary own a small {{{kind}}} named Fido who is 6 years old.

Default values can be given for parameters using the syntax {{{parameter name|default value}}}; see Template:Peoplepetsd, a variant of the template in which several parameters have defaults:
- {{{1}}} and {{{2|friend}}} own a {{{4|}}} {{{kind|dog}}} named {{{3}}} who is {{{age}}} years old.
A template tag like the following: {{ peoplepetsd | Bill | age = 7 | 3=Queenie }} will then produce the phrase Bill and friend own a dog named Queenie who is 7 years old.
Note that the '3=' before the name of the dog is required, since the second positional parameter is not specified.
An empty value for a parameter will override a default value, producing an empty space. In other words {{ peoplepetsd | Bill || kind=| age = 7 | 3=Queenie }} will then produce the phrase Bill and own a named Queenie who is 7 years old.
- {{{...}}}, where the ellipsis can be the number of an unused positional parameter, or can be a blank string (a null parameter).
A template can call itself, but will stop after one iteration to prevent an infinite loop.
Noinclude, includeonly, and onlyinclude
Several tags are available to control what does and doesn't get transcluded from a template. The three tags are noinclude, includeonly, and onlyinclude.

noinclude

Text within a noinclude block is processed and displayed only when the template page is viewed directly; it is not included when the page is transcluded. This is commonly used for template documentation and for categories that apply to the template page itself.
includeonly
The 'includeonly' tag is the opposite of the 'noinclude' tag. Text within an includeonly block is only included when the page is transcluded, it does not appear on the template page itself. This is frequently used for:
- categories that apply to the transcluding pages rather than to the template itself
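A sketch of how these tags are typically combined on a template page (the category names here are made up for illustration):

```wikitext
This text is transcluded onto every page that uses the template.
<includeonly>[[Category:Pages using this template]]</includeonly>
<noinclude>
Documentation shown only on the template page itself.
[[Category:Formatting templates]]
</noinclude>
```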
Variables
System variables take the format {{...}} where the enclosed text is always entirely capitalized. These provide information directly from the system itself: local dates and times, information about the current page, and even information about the wiki itself.
The PAGENAME and NAMESPACE variables are particularly useful, and frequently used, to change template behavior based on context. For example, if the template includes a category (e.g. cleanup templates, which categorize the page as a page which needs cleanup), it will often check the NAMESPACE variable to make sure that talk pages, user pages, or anywhere else the tag might incidentally be placed do not themselves get categorized as pages needing cleanup.
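As a sketch, a cleanup template might guard its category with a namespace check like the following (this uses the #ifeq parser function, discussed in the next section, and a hypothetical category name); the category is added only when the transcluding page is in the main namespace:

```wikitext
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Category:Pages needing cleanup]] }}
```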
Parser functions
Parser functions are the tools for performing calculations and making context-dependent decisions within a page. There are two kinds:

- Core parser functions built into the MediaWiki software, of the form {{functionName: parameter | parameter | ... }}.
- Parser functions from an extension to the software, of the form {{#functionName: parameter | parameter | ... }}.
Parser functions are primarily used for the evaluation of parameters or variables, in order to produce different results in different contexts.
Core parser functions generally deal with text manipulation and specific project tasks.
The extended features provide more programming-oriented parser functions.
Other editing information
- If the first included character of a template is one of the Wiki markup characters :;*#, then it is processed as though it is at the beginning of a line (even when the template tag is not). This allows the creation of various kinds of lists in templates where the template may not always be in the correct place for a list. To avoid this effect use <nowiki>#</nowiki> or an HTML entity such as &#58; for a colon. This is also useful in conjunction with definition lists.
- When a page called for transclusion is a redirect page, the redirect target is included instead.
ui-harness

Isolate, test and document modular UI with React using familiar describe/it testing semantics.
Quick Start (1-minute)
With the UIHarness you can go from an empty NPM module, to cleanly building isolated React components using best-practices in well under a minute.
See the quick-start sample repo
git clone <ui-harness-sample repository URL>
cd ui-harness-sample
npm install
npm start
Manual Setup
The UIHarness is just a simple NPM module.
npm install --save-dev ui-harness
Simply add it to your package.json file, with a script that runs the UIHarness start module:
{ "name": "my-components", "version": "1.0.0", "scripts": { "uih": "node ./node_modules/ui-harness/start --entry=./src/specs", }, "devDependencies": { "ui-harness": "^3.3.0" }, "engines": { "node": ">=5.5.0" } }
If you are using TypeScript, you will also need to ensure you have type definitions for your dependencies, such as React and React-DOM.
From here you can start developing your React components. All the core dependencies for react and babel transpiling are made available to your module by virtue of the one ui-harness dependency.
Now simply run:
npm run uih
And navigate your browser to http://localhost:3030 (the default port).
Project Structure
The
package.json above assumes a project structure like this:
my-components
|— src
    |— components      # React components here.
    |— specs           # Spec files here.
        |— index.js    # --entry to the "describe/it" visual spec files.
For a working example see the ui-harness-sample repo. To deviate from this typical structure simply organize your module folders as you wish and change the --entry path passed to the start script.
Conceptual Introduction
The UIHarness allows you to add a one-line startup script to your node module that provides a complete visual test and build harness for creating and bundling your components.
Creating

As a tool for crafting your components and their APIs in isolation, the UIHarness dramatically improves the quality and reuse potential of your UI investment. You will be falling into the "pit of success", conforming to best-practices that also make your job faster, easier, and, importantly, more fun.
Documentation

Creating components in this manner has the knock-on effect of providing a visual API and documentation explorer for your component system. Sharing and socializing code and APIs, both within your team, and publicly to the world, is a big part of making software a success. Functional documentation is a by-product of building components in the UIHarness.
Publishing

If you are isolating your UI components into their own discrete modules (a very good idea!), you can add the UIHarness as a dev-dependency (--save-dev), create and test your components within it, and then publish your module to NPM with the UIHarness as the startup option (npm start).
This makes your components easy to explore, evaluate, and understand by other developers. The UIHarness adds no run-time overhead to your module, as consumers of the components will only be requiring the isolated component, not the UIHarness itself, so it never gets built (via Webpack) into the resulting application bundle.
Startup Arguments and Configuration

The following arguments can be passed to the UIHarness at startup as command-line arguments:

--entry  Path to the specs files (comma separated if more than one).
--port   The port to run the harness on. Default: 3030
For example:
"scripts": { "start": "node ./node_modules/ui-harness/start --entry=./src/specs --port=1234" }
.uiharness.yml

These values can alternatively be declared in a .uiharness.yml configuration file in the root of your project, with the following additional values that can only be declared within a YAML configuration:

graphqlSchema  A path to the GraphQL schema.js file. If not specified Relay will not be enabled.
proxy          An object containing { path, host } mappings to proxy server requests to (reference).
images         An object containing { baseUrl, dir } that declares where images are served from. Default image values are:
               baseUrl: <package-name>
               dir: ./images
entry: ./src/specs   # Path, comma-separated paths, or array of paths.
port: 3030
graphqlSchema: ./data/schema.js
proxy:
  /graphql:
images:
  baseUrl: /my-images
  dir: ./assets/images
css:
  baseUrl: /my-css
  dir: ./assets/css
CSS

By default the UIHarness supports the webpack css-loader for *.css files. If however you wish to use css-modules simply declare the file extension of your modules in the .uiharness.yml with a regular-expression like this:
cssModules: .css
If you wish to retain the default css-loader behavior but still want to use css-modules, you can specify css-modules to only work on certain extensions:
cssModules: .module.css
And if you wish to use several different extensions for css-modules you can specify a list:
cssModules:
  - .css
  - .module.css
You may need to pass in some external CSS <style> values to apply to the component:
this
  .style({
    color: 'white',
    '.MyComponent': {
      padding: 20,
      background: 'red',
    }
  })
  .component(
    <MyComponent />
  );
Building

You can use the UIHarness to output your built JS bundles. You can also use this function to keep an eye on the size of your JS before it's too late to do anything about it.

Declare a build section within the .uiharness.yml with the following fields:
build:
  prod: true                        # Minifies if true (default: false).
  outputFolder: ./.build/my-folder
  modules:
    main: ./src/app.js
    single:
      - ./src/components/foo.jsx
      - ./src/util/bar.js
    vendor:
      - react
      - react-dom
Command Line

The build function can be invoked from the command line. For example, you might add these as scripts to your package.json:
"scripts": { "bundle": "node ./node_modules/ui-harness/build", }
This produces a summary of the built bundles in the terminal window.
Building from the API

You can invoke a build via the API by passing an object of the same structure as the build settings within .uiharness.yml to the build function:
import uiharness from 'ui-harness';

uiharness.build({ settings })   // See YAML build settings above.
  .then(result => { ... })
  .catch(err => { ... });
Environment

When running within the UIHarness, the __UIHARNESS__ environment variable is set to true. Use this as a switch if you wish to consolidate UIHarness specs with unit-tests such as Mocha or Jasmine, or BDD feature/step files.
if (__UIHARNESS__) {
  describe('My visual spec', function() {
    // This will load in the UIHarness
  });
}

if (!__UIHARNESS__) {
  describe('My unit-tests', function() {
    // This will run within the server-side test runner.
  });
}
Examples

Simple Example

From within the /ui-harness project folder, to see the core set of specs used to build the UIHarness itself, along with specs for associated component libraries, run:
Relay Example

To see an example of Relay/GraphQL working within UIHarness run:

node example relay

Based on the relay-starter-kit, this takes a command-line argument of --graphqlSchema, which is the path to the GraphQL schema.js file.
uiharness.start({
  entry: './example/relay/specs',
  proxy: { '/graphql': '' },
  graphqlSchema: path.resolve('./example/relay/data/schema.js')
})
From there the UIHarness will build the schema.json output, and compile it into the Webpack output sent to the client using the babel-relay-plugin.

To rebuild the schema (when changes have been made to it), simply delete the generated schema.json file and restart the UIHarness.
Elm has a very fine third-party line chart library which I've enjoyed using in my passion project, Chicago Test Out. It's well-documented as a library, but if I haven't used it in a while, I find myself struggling to get started with it. I end up starting at the middle or near the end and then clumsily working backwards from where I want to end up. For this reason, I'm working on writing a step-by-step guide for working with terezka/line-charts based on my own preferences, assuming a high degree of customization apart from the defaults, and using Chicago Test Out as a model.
One of my preferences is that a line chart almost always be used with time on the x-axis, and you can't use a time without knowing the time zone, so we'll start there.
Step 1: Add a Time.Zone field to your model.
Time zones have to be fetched asynchronously, which means the model needs a field to hold the zone, plus a placeholder value until the fetch completes.
- Open the file that has your model record in it and, if you have not already, import Time.
- Add a field of type Time.Zone to your Model type record.
- In your model initialization, initialize your time zone field to Time.utc.
- Compile.
If you haven't already used elm/time somewhere, the Elm compiler may object and insist that you install the dependency, which it will walk you through. Assuming you don't have a lot of different functions that initialize a new model, this should bring your code to a compiling state. I am including the full source for src/Models.elm from Chicago Test Out below for reference.
module Models exposing (..)

import Hospitalization exposing (Hospitalization)
import TestDay exposing (TestDay)
import Time


type alias Model =
    { days : List TestDay
    , hospitalizations : List Hospitalization
    , mode : Mode
    , zone : Time.Zone
    }


type Mode
    = Test
    | HospitalizationMode


init : Model
init =
    { days = []
    , hospitalizations = []
    , mode = Test
    , zone = Time.utc
    }
Step 2: Add a time zone update message
- Open the file containing your Elm messages. (Mine is in src/Msg.elm.)
- If you haven't already, import Time into that module.
- Add an UpdateZone type that takes a Time.Zone parameter, like so:
module Msg exposing (..)

import Hospitalization exposing (Hospitalization)
import Http
import Json.Encode
import Models exposing (Mode(..))
import RawTestDay exposing (RawTestDay)
import Time


type Msg
    = GotRawTestDays (Result Http.Error (List RawTestDay))
    | SetMode Mode
    | UpdateHospitalizationDays Json.Encode.Value
    | UpdateZone Time.Zone
- Implement your new message in your update method of your Elm application, likely located in src/Main.elm. (You may need to import Time here as well.)
- Compile.
Your code should be once again in a compiling state.
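For reference, a minimal handler for the new message might look like this inside the case expression of update (assuming the zone field from Step 1):

```elm
UpdateZone zone ->
    ( { model | zone = zone }, Cmd.none )
```

The zone simply gets stored in the model; no further command is needed.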
Get the Timezone from a Task
Now, just hook a Task to get Time.here into your new message, and you should be up and running with time zones.

This example shows how to do it from init, but in Chicago Test Out, I want to fetch the time zone after I've run a fetch, so I'm going to hook in from the end of my fetch event. This is what I mean:
update : Msg -> Model -> ( Model, Cmd Msg )
update msg model =
    case msg of
        GotRawTestDays result ->
            case result of
                Ok response ->
                    TestDay.fromRaws response
                        |> \days ->
                            ( { model | days = days }
                            , Task.perform UpdateZone Time.here
                            )

                Err e ->
                    case e of
                        . . .
That's a lot to look at, but as you can see I'm calling Task.perform if the response from GotRawTestDays is Ok. If the Task is successful, the current time zone from Time.here will be passed to the message handler of UpdateZone.
A word on modelling times in Elm
As a reminder, Elm rejects ISO 8601 as a standard for dates. If you want dates along the x-axis of your chart, you need to have those dates passed as a posix number. For line charts, this should be a Float type. The steps to get your data will vary, so I won't enumerate them here, but keep that in mind as you work on fetching your line chart dataset: you need a Float for a date.
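For example, once you have a Time.Posix value, converting it to the Float the chart expects is a one-liner using elm/time's posixToMillis:

```elm
toMillisFloat : Time.Posix -> Float
toMillisFloat posix =
    toFloat (Time.posixToMillis posix)
```

toFloat and Time.posixToMillis are both standard functions, so no extra dependencies are needed beyond elm/time.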
In Conclusion
This pretty well covers our prerequisites for working with line charts. In the next post, I'll start to scaffold a real chart.
This section discusses blocks that will typically appear at the beginning of the YAML of your interview.
If you are new to docassemble, you probably will not need to use “initial blocks” until you attempt something more advanced, so you can skip this section and proceed to the section on questions.
Interview title and other metadata
---
metadata:
  title: |
    Advice on Divorce
  short title: |
    Divorce
  description: |
    A divorce advice interview
  authors:
    - name: John Doe
      organization: Example, Inc.
  revision_date: 2015-09-28
---
A metadata block contains information about the interview, such as the name of the author. It must be a YAML dictionary, but each of the dictionary items can contain any arbitrary YAML structure.

If a title is defined, it will be displayed in the navigation bar in the web app. If a short title is provided, it will be displayed in place of the title when the size of the screen is small.
If a
logo is defined, it will be displayed in the navigation bar in
the web app in place of the
title and
short title. The content of
the
logo should be raw HTML. If you include an image, you should
size it to be about 20 pixels in height.
If a
tab title is provided, it will be displayed as the title
of the browser tab. Otherwise, the
title will be used.
If a
subtitle is provided, it will be displayed as the subtitle of
the interview in the “Interviews” list available to a logged-in
user at
/interviews.
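A sketch combining these title-related specifiers (the values are illustrative):

```yaml
metadata:
  title: |
    Advice on Divorce
  short title: |
    Divorce
  tab title: |
    Divorce Advice
  subtitle: |
    Get advice about your divorce
```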
These titles can be overridden using the
set_parts() function.
The
metadata block and the
set_parts() function can be used to
modify other aspects of the navigation bar.
If an
exit link is provided, the behavior of
the “Exit” link can be modified. (The “Exit” menu option is displayed
when the
show login configuration directive is set to
False or
the
show login metadata specifier in an interview is set to
False.) The value can be either
exit,
leave, or
logout. If
it is
exit, then when the user clicks the link, they will be logged
out (if they are logged in) and their interview answers will be
deleted from the server. If it is
leave, the user will be logged
out (if they are logged in), but the interview answers will not be
deleted from the server. (It can be important to keep the interview
answers on the server if background tasks are still running.) If it
is
logout, then if the user is logged in, the user will be logged
out, but if the user is not logged in, this will have the same effect
as
leave.
If an
exit url is provided, the user will be
redirected to the given URL. If no
exit url is provided, the user
will be directed to the
exitpage if the
exit link is
exit or
leave, and directed to the login page if the user is logged in and
exit link is
logout. The
exit url also functions as an
interview-level default value in place of the system-wide
exitpage, which is used by the
command() function and used on
special pages that show
buttons or
choices that allows users to
exit or
leave.
If
exit label is provided, the given text
will be used in place of the word “Exit” on the “Exit” menu option.
This text is passed through the
word() function, so that it can be
translated into different languages.
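For illustration, a `metadata` block combining these exit-related specifiers might look like the following sketch (the URL and label are placeholders):

```yaml
metadata:
  exit link: leave
  exit label: |
    Quit interview
  exit url: https://example.com
```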
If you set
unlisted: True for an interview
that has an entry in the
dispatch list in your configuration,
the interview will be exempted from display in the list of interviews
available at
/list. For more information about this, see the
documentation for the
dispatch configuration directive.
If you set
hidden: True, then interview
sessions for this interview will be omitted from the “My Interviews”
listing of sessions. (They will still be deleted by the “Delete All”
button, though.)
You can set
tags to a list of one or more “tags”
as a way of categorizing the interview.
metadata: title: Write your will tags: - estates - wills
The list of available interviews at
/list and the list of interview
sessions at
/interviews make use of the metadata
tags for
filtering purposes. Note that the
metadata of an interview are
static, while the tags of a particular session of an interview are
dynamic, and can be changed with
session_tags().
If you set
sessions are unique to
True, then docassemble will resume an existing session for the
user, if the user already has an existing session. This requires that
the user be logged in, so the user will be redirected to the login
screen if they try to access an interview for which
sessions are
unique is set to
True. You can also set
sessions are unique to a
list of roles, in which case uniqueness will be enforced only if the
user has one of the listed roles.
If you set
required privileges to
a list of one or more privileges, then a user will only be able to use
the interview if they have one of the given privileges. If
anonymous is included as one of the required privileges, then users
who are not logged in will be able to use the interview. However,
note that
anonymous is not actually a privilege in
docassemble’s privilege management system; only logged-in users
actually have privileges. If no
required privileges are listed,
then the default is that the interview can be used by anybody.
metadata: title: Administrative interview short title: Admin description: | A management dashboard sessions are unique: True required privileges: - admin - developer - advocate
If there are multiple
metadata blocks in the YAML of an
interview that set
required privileges, the
required privileges
settings of later
metadata blocks will override the
required
privileges settings of earlier
metadata blocks. Setting
required privileges: [] will ensure that the interview can be used,
notwithstanding the
required privileges settings of any earlier
metadata blocks.
The
required privileges
for listing metadata specifier is like
required privileges, except
it only controls whether the interview will be shown in the list of
interviews available at
/list. The
required privileges metadata
specifier also controls whether the interview will be listed. For
more information about the
/list page, see the documentation for the
dispatch configuration directive.
You can set an
error action if you want
your interview to do something substantive in the event that your
interview encounters an error that it would otherwise show to the
user.
A simple application of
error action would be to replace the error
screen with a
question:
When the interview encounters an error, the interview will run the
action given by
error action. In this case,
error action is
on_error, and calling this action shows a
question to the
user.
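A minimal sketch of this pattern (the question text here is hypothetical; the original example may have differed):

```yaml
metadata:
  error action: on_error
---
event: on_error
question: |
  Something unexpected went wrong.
subquestion: |
  You can restart the interview below.
buttons:
  - Restart: restart
```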
An action can also run code that changes the interview logic. For
example, an
error action could skip through the remainder of the
questions and present a final screen:
metadata: error action: on_error --- event: on_error code: | healthy = False --- mandatory: True code: | if not healthy: fail_safe favorite_fruit favorite_vegetable favorite_number final_screen --- sets: fail_safe code: | if not defined('favorite_fruit'): favorite_fruit = '_________' if not defined('favorite_vegetable'): favorite_vegetable = '_________' if not defined('favorite_number'): favorite_number = '____' final_screen
If the attempt to run the error action also results in an error, the latter error is shown on the screen in the usual fashion.
See
error help and
verbose error messages for other ways to
customize error messages.
The
metadata block also accepts specifiers for default content to be
inserted into various parts of the screen.
You can provide different values for different languages by setting each directive to a dictionary in which the keys are languages and the values are content.
metadata: post: en: | This interview was sponsored in part by a grant from the Example Foundation. es: | Esta entrevista fue patrocinada en parte por una beca de la Fundación Ejemplo.
For information about other ways to set defaults for different parts of the screens during interviews, see the screen parts section.
The
metadata block also accepts the specifier
error help. This is Markdown-formatted text that will be included
on any error screen that appears to the user during the interview.
You can also provide this text on a server-wide basis using the
error help directive in the Configuration.
To support multiple languages, you can set
error help to a
dictionary where the keys are language codes and the values are the
error text to be shown:
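For example (the help text and address are illustrative):

```yaml
metadata:
  error help:
    en: |
      If you see this message, e-mail support@example.com.
    es: |
      Si ve este mensaje, envíe un correo a support@example.com.
```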
This will not always be reliable, because an error might happen before the user’s language is known.
The
metadata block also accepts the specifier
show login, which can be
true or
false. This controls whether
the user sees a “Sign in or sign up to save answers” link in the upper
right-hand corner during the interview. If
show login is not
specified in the
metadata, the Configuration directive
show
login determines whether this link is available.
By default, all of the functions
and classes of
docassemble.base.util are imported into the
namespace of a docassemble interview. If you want to load names
manually using a
modules block, you can set
suppress loading
util to
True:
metadata: suppress loading util: True
If
suppress loading util is
True, the only name that will be
imported into your interview is
process_action.
Creating `objects`
--- objects: - spouse: Individual - user.case: Case ---
An
objects block creates objects that may be referenced in your
interview. See objects for more information about objects in
docassemble.
If your interview references the variable
spouse, docassemble
will find the above
objects block and process it. It will define
spouse as an instance of the object class
Individual and define
user.case as an instance of the object class
Case.
The use of objects in docassemble interviews is highly encouraged.
However, the objects you use as variables need to inherit from the
class
DAObject. Otherwise, docassemble might not be able to
find the appropriate
code blocks or questions necessary to define
them. This is because of the way docassemble keeps track of the
names of variables.
A code block like this would effectively do the same thing as the
objects block above:
--- code: | spouse = Individual('spouse') user.initializeAttribute('case', Case) ---
This code is more complicated than normal Python code for object
initialization because the full name of the variable needs to be
supplied to the function that creates and initializes the object. The
base class
DAObject keeps track of variable names.
In some situations, running
spouse = Individual() will correctly
detect the variable name
spouse, but in other situations, the name
cannot be detected. Running
spouse = Individual('spouse') will
always set the name correctly.
Whenever possible, you should use
objects blocks rather than code to
initialize your objects because
objects blocks are clean and
readable.
You can also use
objects blocks to initialize attributes of the
objects you create. For information on how to do this, see the
documentation for the
using() method.
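For example, a common idiom (a sketch; the attribute values are illustrative) is to use `.using()` to set attributes such as `object_type` at creation time:

```yaml
objects:
  - fruit: DAList.using(object_type=Thing, complete_attribute='seeds')
```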
Importing `objects from file`
--- objects from file: - claims: claim_list.yml ---
An
objects from file block imports objects or other data elements
that you define in a separate YAML or JSON data file located in
the sources folder of the current package. If the interview file
containing the
objects from file block is
data/questions/manage_claims.yml, docassemble will expect the
data file to be located at
data/sources/claim_list.yml.
For more information about how this works, and about how to format the
data file, see the documentation for the
objects_from_file() function. The example above is equivalent to
running
claims = objects_from_file('claim_list.yml', name='claims').
If you set
use objects to
True, then the
use_objects keyword
parameter of the
objects_from_file() function will be used.
--- use objects: True objects from file: - claims: claim_list.yml ---
This is equivalent to running
claims = objects_from_file('claim_list.yml', name='claims', use_objects=True).
Incorporation by reference: `include`
--- include: - basic-questions.yml - docassemble.helloworld:questions.yml ---
The
include block incorporates the questions in another YAML
file, almost as if the contents of the other YAML file appeared in
place of the
include block. When the
included file is parsed,
files referenced within it will be assumed to be located in the
included file’s package.
When a filename is provided without a package name, docassemble
will look first in the
data/questions directory of the current
package (i.e., the package within which the YAML file being read is
located), and then in the
data/questions directory of
docassemble.base.
You can include question files from other packages by explicitly
referring to their package names. E.g.,
docassemble.helloworld:questions.yml refers to the file
questions.yml in the
docassemble/helloworld/data/questions
directory of that package.
Images
With attribution: `image sets`
--- image sets: freepik: attribution: | Icon made by [Freepik]() images: baby: crawling.svg people: users6.svg injury: accident3.svg ---
An
image sets block defines the names of icons that you can use to
decorate your questions.
The file names refer to files located in the
data/static directory
of the package in which the YAML file is located.
Since most free icons available on the internet require attribution,
the
image sets block allows you to specify what attribution text
to use for particular icons. The web app shows the appropriate
attribution text at the bottom of any page that uses one of the
icons. The example above is for a collection of icons obtained from
the web site Freepik, which offers free icons under an
attribution-only license.
The
image sets block must be in the form of a YAML dictionary, where
the names are the names of collections of icons. The collection
itself is also a dictionary containing terms
images and (optionally)
an
attribution. The
images collection is a dictionary that
assigns names to icon files, so that you can refer to icons by a name
of your choosing rather than by the name of the image file.
For information on how to use the icons you have defined in an
image
sets block, see
decoration in the question modifiers section,
buttons
in the setting variables section, and “Inserting inline icons” in
the markup section.
Without attribution: `images`
--- images: bills: money146.svg children: children2.svg ---
An
images block is just like an
image sets block, except that it
does not set any attribution information. It is simpler because you
do not need to give a name to a “set” of images.
The above
images block is essentially equivalent to writing:
--- image sets: unspecified: images: bills: money146.svg children: children2.svg ---
Python modules
Importing the module itself: `imports`
--- imports: - datetime - us ---
imports loads a Python module name into the namespace in which your
code and question templates are evaluated. The example above is
equivalent to running the following Python code:
import datetime import us
Importing all names in a module: `modules`
--- modules: - datetime ---
Like
imports,
modules loads Python modules into the namespace in
which your code and question templates are evaluated, except that it
imports all of the names that the module exports. The example above
is equivalent to running the following Python code:
from datetime import *
Storing structured data in a variable
The
data block allows you to specify a data structure in YAML in a
block and have it available as a Python data structure.
For example, in this interview we create a Python list and then re-use it in two questions to offer a multiple-choice list.
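A sketch of the `data` block and one of the questions (reconstructed from the description; the original embedded interview may have differed slightly):

```yaml
variable name: fruits
data:
  - Apple
  - Orange
  - Peach
  - Pear
---
question: |
  What is your favorite fruit?
field: favorite_fruit
choices:
  code: fruits
```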
In Python, the variable
fruits is this:
[u'Apple', u'Orange', u'Peach', u'Pear']
You can also use the
data block to create more complex data
structures. You can also use Mako in the data structure.
variable name: fruits data: Apple: description: | The apple is a tasty red fruit. Everyone on ${ planet } loves to eat apples. seeds: 5 Orange: description: | The orange is, surprisingly, orange-colored. Most people on ${ planet } dislike eating oranges. seeds: 10 Peach: description: | The peach is a fragile fruit. There are 165,323 peach orchards on ${ planet }. seeds: 1 Pear: description: | The pear is variously yellow, green, or brown. The planet ${ planet } is shaped like a pear. seeds: 0 --- question: | On what planet were you born? fields: Planet: planet --- question: | What is your favorite fruit? field: user_favorite_fruit choices: code: fruits.keys() --- mandatory: True question: | Summary of ${ user_favorite_fruit } subquestion: | ${ fruits[user_favorite_fruit]['description'] } The ${ user_favorite_fruit } has ${ nice_number(fruits[user_favorite_fruit]['seeds']) } seeds.
data blocks do not work the same way as
template blocks. The
Mako templating in a
data block is evaluated at the time the
variable indicated by
variable name is defined. The text stored in
the data structure is the result of processing the Mako templating.
The Mako templating is not re-evaluated automatically each time a
question is shown.
You can also import data from YAML files using the
objects_from_file() function.
Structured data in object form
If you set
use objects: True in a
data block, then lists in your
YAML will become
DALists in the resulting data structure, and
dictionaries in your YAML will become
DADicts. The
.gathered
attribute of these objects will be set to
True.
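For example (a sketch), this block defines `fruits` as a `DAList` rather than a plain Python list:

```yaml
variable name: fruits
use objects: True
data:
  - Apple
  - Orange
  - Peach
```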
In addition, when
use objects: True is enabled, any dictionaries in
the data structure will be transformed into a
DAContext object if
the keys of the dictionary are a non-empty subset of
question,
document,
docx,
pandoc.
This is a useful shorthand for creating
DAContext objects. For
example:
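A sketch of this shorthand, in which the inner dictionary becomes a `DAContext` (the variable name and text are illustrative):

```yaml
variable name: fruit_description
use objects: True
data:
  question: |
    a red fruit
  document: |
    an apple
```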
Storing structured data in a variable using code
The
data from code block works just like the
data block, except
that Python code is used instead of text or Mako markup.
variable name: fruits data from code: Apple: description: | ', '.join(['red', 'shiny', 'for teachers']) seeds: 10/2 Orange: description: | capitalize('round') + " and orange" seeds: seeds_in_orange Peach: description: peach_description seeds: 10**6 Pear: description: | "Like an apple, but not like an apple." seeds: 0 --- question: | How many seeds in an orange? fields: - no label: seeds_in_orange datatype: range min: 0 max: 100 --- question: | How would you describe a peach? fields: - no label: peach_description --- question: | What is your favorite fruit? field: user_favorite_fruit choices: code: fruits.keys() --- mandatory: True question: | Summary of ${ user_favorite_fruit } subquestion: | ${ fruits[user_favorite_fruit]['description'] } The ${ user_favorite_fruit } has ${ nice_number(fruits[user_favorite_fruit]['seeds']) } seeds.
Structured data from code in object form
The
use objects modifier can also be used with
data from code.
variable name: fruits use objects: True data from code: - question: | "Apple" document: | "red fruit" - question: | "Orange" document: | "fruit that rhymes " + "with nothing" - question: | "Peach" document: | "juicy fruit" docx: | "peachy peach" pandoc: | "very juicy " + "fruit" --- question: | What is your favorite fruit? fields: - Fruit: favorite_fruit datatype: object choices: fruits
Keeping variables fresh: `reset`
The
reset block will cause variables to be undefined every time a
screen loads.
This can be helpful in a situation where a variable is set by a
code block and the value of the variable ought to be considered
afresh based on the user’s latest input.
--- reset: - client_is_guilty - opposing_party_is_guilty ---
Effectively, this causes variables to act like functions.
Another way to use this feature is to set the
reconsider modifier
on a
code block. This will have the same effect as
reset, but
it will apply automatically to all of the variables that are capable
of being assigned by the
code block.
The
reset block and the
reconsider modifier are computationally
inefficient because they cause extra code to be run every time a new
screen loads. For a more computationally efficient alternative, see
the
`reconsider()` function.
Changing order of precedence
As explained in how docassemble finds questions for variables,
if there is more than one
question or
code block that offers
to define a particular variable, blocks that are later in the YAML
file will be tried first.
If you would like to specify the order of precedence of blocks in a
more explicit way, so that you can order the blocks in the YAML file
in whatever way you want, you can tag two or more blocks with
ids
and insert an
order block indicating the order of precedence of the
blocks.
For example, suppose you have an interview with two blocks that could
define the variable
favorite_fruit. Normally, docassemble will
try the second block first because it appears later in the YAML
file; the second block will “override” the first.
However, if you actually want the first block to be tried first, you can manually specify the order of blocks:
order: - informal favorite fruit question - regular favorite fruit question --- id: informal favorite fruit question question: | What the heck is your favorite fruit? fields: Fruit: favorite_fruit --- id: regular favorite fruit question question: | What is your favorite fruit? fields: Fruit: favorite_fruit --- mandatory: True question: | Your favorite fruit is ${ favorite_fruit }.
Another way to override the order in which blocks will be tried is by
using the
id and
supersedes question modifiers.
Vocabulary: `terms` and `auto terms`
--- terms: enderman: | A slender fellow from The End who carries enderpearls and picks up blocks. fusilli: | A pasta shape that looks like a corkscrew. ---
Sometimes you will use vocabulary that the user may or may not know. Instead of interrupting the flow of your questions to define every term, you can define certain vocabulary words, and docassemble will turn them into hyperlinks wherever they appear in curly brackets. When the user clicks on the hyperlink, a popup appears with the word’s definition.
If you want the terms to be highlighted every time they are used,
whether in curly brackets or not, use
auto terms.
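An `auto terms` block might look like the following sketch, mirroring the `terms` example above:

```yaml
auto terms:
  enderman: |
    A slender fellow from The End who carries
    enderpearls and picks up blocks.
```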
You can also use
terms and
auto terms as question modifiers, in which
case the terms will apply only to the question, not to the interview
as a whole. When you use
terms and
auto terms as initial blocks,
you cannot use Mako templating in the definitions, but when you use
them as question modifiers, you can use Mako templating.
The `template` block
The word “template” has a number of different meanings. If you are interested in how to insert variables into the text of your questions or documents using the Mako templating syntax, see markup. If you are interested in document assembly based on forms or document templates, see the Documents section.
A
template block allows you to assign text to a variable and then
re-use the text by referring to a variable.
template: disclaimer content: | The opinions expressed herein do not *necessarily* reflect the views of ${ company }. --- field: intro_screen question: Welcome to the interview! subquestion: | Greetings. We hope you learn something from this guided interview. ${ disclaimer } To get started, press **Continue**.
The
content of a
template may contain Mako and Markdown.
The name after
template: is a variable name that you can refer to
elsewhere.
The
template block, like
question and
code blocks, offers to
define a variable. So when docassemble needs to know the
definition of
disclaimer and finds that
disclaimer is not defined,
it will look for a
question,
code, or
template block that offers
to define
disclaimer. If it finds the
template block above, it
will define the
disclaimer variable.
Optionally, a
template can have a
subject:
template: disclaimer subject: | Please be advised content: | The opinions expressed herein do not *necessarily* reflect the views of ${ company }. --- field: intro_screen question: Welcome to the interview! subquestion: | Greetings. We hope you learn something from this guided interview. To get started, press **Continue**. under: | ### ${ disclaimer.subject } ${ disclaimer.content }
You can refer to the two parts of the template by writing, e.g.,
disclaimer.subject and
disclaimer.content.
Note that writing `${ disclaimer }` has the same effect as writing `${ disclaimer.content }`. You can also write `${ disclaimer.show() }` (for interchangeability with images).
To convert the subject and the content to HTML, you can write
disclaimer.subject_as_html() and
disclaimer.content_as_html().
These methods take the optional keyword argument `trim`. If `True`, the resulting HTML will not be wrapped in a `<p>` element. (The default is `False`.)
template objects are also useful for defining the content of e-mails. See
send_email() for more information on using templates with e-mails.
You might prefer to write text in Markdown files, rather than in
Markdown embedded within YAML. To facilitate this,
docassemble allows you to create a
template that references a
separate Markdown file.
The file
disclaimer.md is a simple Markdown file containing the
disclaimer from the previous example.
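The `template` block that references the file might look like this sketch:

```yaml
template: disclaimer
content file: disclaimer.md
```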
The
content file is assumed to refer to a file in the “templates”
folder of the same package as the interview source, unless a specific
package name is indicated. (e.g.,
content file:
docassemble.demo:data/templates/hello_template.md)
In the example above, the sample interview is in the file
docassemble.base:data/questions/examples/template-file.yml, while
the Markdown file is located at
docassemble.base:data/templates/disclaimer.md.
If the
content file specifier refers to a dictionary in which the
only key is
code, the
code will be evaluated as Python code, and
the result will be used as the file.
code: | template_file_to_use = 'disclaimer.md' --- template: disclaimer content file: code: template_file_to_use --- field: intro_screen question: Welcome to the interview! subquestion: | Greetings. We hope you learn something from this guided interview. ${ disclaimer } To get started, press **Continue**.
In this example, the
code evaluated to the name of a file in the
templates folder. The
code may also evaluate to a URL,
DAFile,
DAFileList,
DAFileCollection, or
DAStaticFile.
A
template can also be inserted into a
docx template file. This
can be useful when you want to insert multiple paragraphs of text into
a DOCX file. Ordinarily, when you insert text into a
docx template
file, newlines are replaced with spaces. The effect of inserting a
template into a
docx template file is controlled by the
new
markdown to docx directive in the Configuration. If you set
new
markdown to docx: True in the Configuration, then you should insert
a
template using:
{{p the_template }}
However, if you don’t set the
new markdown to docx directive (the
default of which is
False), then you need to insert the
template
using:
{{r the_template }}
In the future, the default will change to
True.
The `table` block
The `table` block works in much the same way as a `template`, except its content is a table that will be formatted appropriately whether it is included in a question or in a document.
This block should be used when each row of your table represents an item in a group; that is, you do not know how many rows the table will contain, because the information is in a list, dictionary, or set. If you just want to format some text in a table format, see the documentation about tables in the markup section.
In the following example, the variable
fruit is a
DAList of
objects of type
Thing, each of which represents a fruit. Each row
in the resulting table will describe one of the fruits.
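A sketch of such a `table` block, consistent with the surrounding description (the original embedded example may have differed in details):

```yaml
table: fruit_table
rows: fruit
columns:
  - Fruit Name: |
      row_item.name
  - Number of Seeds: |
      row_item.seeds
```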
The
table: fruit_table line indicates the name of the variable that
will hold the template for table. The
question block includes the
table simply by referring to the variable
fruit_table.
The
rows: fruit line indicates the variable containing the group
of items that represent rows in the table. The
fruit variable is a
DAList that gets populated during the interview.
columns describes the header of each column and what should be
printed in each cell under that header. Like a
fields list within
a
question,
columns must contain a YAML list where each item
is a key/value pair (a one-item dictionary) where the key is the
header of the column and the value is a Python expression
representing the contents of the cell for that column, for a given row.
In the example above, the header of the first column is “Fruit Name”
and the Python expression that produces the name of the fruit is
row_item.name.
There are two special variables available to these Python expressions:
- `row_item`: this is the item in the group corresponding to the current row.
- `row_index`: this is `0` for the first row, `1` for the second row, `2` for the third row, etc.
You can pretend that the Python expressions are evaluated in a context like this:
row_index = 0 for row_item in fruit: # evaluation takes place here row_index = row_index + 1
In this example, the first column will show name of the fruit
(
row_item.name) and the second column will show the number of seeds
(
row_item.seeds).
The header of each column is plain text (not a Python expression). The header can include Mako and Markdown.
If you have a complicated header, you can use the special keys
header and
cell to describe the header and the cell separately.
(This is similar to using
label and
field within a
fields
list.)
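A sketch of a column using `header` and `cell` (the Markdown in the header is illustrative):

```yaml
columns:
  - header: |
      Name of **Fruit**
    cell: |
      row_item.name
```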
You can use Python to create cells with content that is computed from the items of a group.
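The example referred to here might look like the following sketch (using docassemble's `noun_plural()` and `nice_number()` functions; reconstructed, so the details may differ from the original):

```yaml
table: fruit_table
rows: fruit
columns:
  - Fruit: |
      noun_plural(row_item.name)
  - Seeds: |
      nice_number(row_item.seeds * 2)
```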
The above example prints the name of the fruit as a plural noun, and inflates the number of seeds.
Remember that the Python code here is an expression, not a block of code. If you want to use if/then/else logic in a cell, you will need to use Python’s one-line form of if/then/else:
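For example, a cell can use Python's conditional expression (a sketch; the threshold is illustrative):

```yaml
columns:
  - Fruit Name: |
      row_item.name
  - Seeds: |
      'many' if row_item.seeds > 3 else 'few'
```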
When
fruit_table is inserted into the
question, the result will
be a Markdown-formatted table.
This:
question: | Information about fruit subquestion: | Here is a fruity summary. ${ fruit_table }
will have the effect of this:
question: | Information about fruit subquestion: | Here is a fruity summary. Fruit Name |Number of Seeds -----------|--------------- Apples |4 Oranges |3 Pears |6
For more information about Markdown-formatted tables, see the documentation about tables in the markup section.
Instead of using a
table block, you could construct your own
Markdown tables manually using a Mako “for” loop. For example:
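A sketch of the equivalent hand-written Mako loop:

```
Fruit Name |Number of Seeds
-----------|---------------
% for item in fruit:
${ item.name } |${ item.seeds }
% endfor
```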
The advantages of using the
table block are:
- The
tableblock describes the content of a table in a conceptual rather than visual way. In Markdown, simple tables look simple, but complicated tables can look messy. The
tableblock allows you to map out your ideas in outline form rather than squeezing everything into a single line that has a lot of punctuation marks.
- The
tableblock will attempt to set the relative table widths in a sensible way based on the actual contents of the table. If you create your own tables in Markdown, and the text in any cell wraps, the relative table widths of the columns will be decided based on the relative widths of the cells in the divider row (
----|---------). You might not know in advance what the relative sizes of the text will be in each column.
The
table block acts like a
template block in that the variable it
sets will be a docassemble object. The
.content attribute will
be set to the text of the table in Markdown format.
If the variable indicated by
rows is empty, the table will display
with only the headers. To suppress this, you can add
show if empty:
False to the
table block. The resulting
.content will be the
empty string,
"".
If you would like a message to display in place of the table in the
event that there are no
rows to display, you can set
show if empty
to this message. Mako and Markdown can be used. The message will
become the
.content of the resulting object.
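For example (a sketch; the message text is illustrative):

```yaml
table: fruit_table
rows: fruit
columns:
  - Fruit Name: |
      row_item.name
show if empty: |
  You have not added any fruits yet.
```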
If you include a table in the content of an
attachment, you might
find that the table is too wide, or not wide enough. Pandoc breaks
lines, determines the relative width of columns, and determines the
final width of a table based on the characters in the divider row
(
----|---------).
By default, docassemble will construct a divider row that is no longer than 65 characters. This should work for standard applications (12 point font, letter size paper).
You can change the number of characters from 65 to something else by setting the value of `table width` in a `features` block.
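For example (the value 75 is illustrative):

```yaml
features:
  table width: 75
```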
You can also use
table blocks with
DADict objects:
scan for variables: False mandatory: True code: | income['employment'].receives = True income['employment'].amount = 237 income['benefits'].receives = False income['interest'].receives = True income['interest'].amount = 23 --- table: income.table rows: income columns: - Type: | row_index - Receives: | 'Yes' if row_item.receives else 'No' - Amount: | currency(row_item.amount) if row_item.receives else '' --- mandatory: True question: | Summary of income subquestion: | ${ income.table }
When
rows refers to a
DADict, then in the
columns,
row_index
represents the “key” and
row_item represents the value of each item
in the dictionary.
You can pretend that the Python expressions under
columns are
evaluated in a context like this:
for row_index in sorted(income): row_item = income[row_index] # evaluation takes place here
Note that running `sorted()` on a dictionary returns an alphabetically sorted list of the keys of the dictionary. In Python, dictionaries are inherently unordered. The keys are sorted in this fashion so that the order of the rows in a table does not change every time the table appears on the screen.
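To see this behavior outside of docassemble, here is plain Python (the income figures are made up for illustration):

```python
# sorted() on a dict iterates over its keys and returns them alphabetized,
# giving a stable row order regardless of insertion order.
income = {'employment': 237, 'benefits': 0, 'interest': 23}

for row_index in sorted(income):
    row_item = income[row_index]
    print(row_index, row_item)
# benefits 0
# employment 237
# interest 23
```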
Exporting tables to Excel and other formats
You can call the `export()` method on a `table` to get a `DAFile` representation of the table.
For example, this interview provides a Microsoft Excel .xlsx file representation of a table:
```yaml
objects:
  - fruit: DAList
---
mandatory: true
code: |
  fruit.object_type = Thing
---
mandatory: True
question: |
  Information about fruit
subquestion: |
  Here is a fruity summary.

  ${ fruit_table }

  You can also [download this information].

  [download this information]: ${ fruit_table.export('fruit.xlsx', title='fruit').url_for() }
---
table: fruit_table
rows: fruit
columns:
  - Fruit Name: row_item.name
  - Seeds: row_item.seeds
  - Last eaten: row_item.last_eaten
```
This function uses the `pandas` module to export to various formats. The `export()` method takes a filename, which is parsed to determine the file format you want to use. This can also be provided as the `filename` keyword parameter. If you omit the filename, you can indicate the file format using the `file_format` keyword parameter. The default file format is `'xlsx'`. The valid file formats include `csv`, `xlsx`, and `json`.
The `title` keyword parameter indicates the name of the data set. This is used as the name of the Microsoft Excel sheet.
When the `xlsx` format is used, you can set the `freeze_panes` keyword parameter to `False` to turn off the Microsoft Excel “freeze panes” feature.
Here are some examples of usage:
- `fruit_table.export('fruit.xlsx')`: returns a Microsoft Excel file called `fruit.xlsx`.
- `fruit_table.export('fruit.xlsx', title='Fruits')`: returns a Microsoft Excel file called `fruit.xlsx` where the sheet is named “Fruits”.
- `fruit_table.export('fruit.xlsx', title='Fruits', freeze_panes=False)`: returns a Microsoft Excel file called `fruit.xlsx` where the sheet is named “Fruits” and the “freeze panes” feature is turned off.
- `fruit_table.export('fruit.csv')`: returns a comma-separated values file called `fruit.csv`.
- `fruit_table.export(file_format='csv')`: returns a comma-separated values file called `file.csv`.
- `fruit_table.export()`: returns a Microsoft Excel file called `file.xlsx`.
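The filename-based format selection illustrated above can be sketched in plain Python (the helper name `infer_format` is ours, for illustration only, and is not part of the docassemble API):

```python
import os

def infer_format(filename=None, file_format=None, default='xlsx'):
    """Hypothetical sketch: an explicit file_format wins; otherwise the
    filename extension decides; otherwise the default format is used."""
    if file_format:
        return file_format
    if filename:
        ext = os.path.splitext(filename)[1].lstrip('.').lower()
        if ext:
            return ext
    return default

print(infer_format('fruit.csv'))         # csv
print(infer_format(file_format='json'))  # json
print(infer_format())                    # xlsx
```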
Converting tables to a pandas dataframe
If you want to work with your table as a `pandas` dataframe, you can call `fruit_table.as_df()` to obtain the information for the table as a `pandas` dataframe object. However, note that objects from the `pandas` package cannot necessarily be “pickled” by Python, so it is best to call this method from functions in Python modules, or in such a way that the results do not get saved to variables in the interview.
Using tables to edit groups
You can use a `table` to provide the user with an interface for editing an already-gathered `DAList` or `DADict`.
```yaml
mandatory: True
question: |
  All done
subquestion: |
  The people are ${ person }.

  Your favorite is ${ favorite }.

  ${ person.table }

  ${ person.add_action() }
---
table: person.table
rows: person
columns:
  - Name: |
      row_item.name.full()
  - Fruit: |
      row_item.favorite_fruit
edit:
  - name.first
  - favorite_fruit
```
For more information about this feature, see the section on editing an already-gathered list in the section on groups.
Defining the sections for the navigation bar
You can use the navigation bar feature or the `nav.show_sections()` function to show your users the “sections” of the interview and what the current section of the interview is.
Subsections are supported, but only one level of nesting is allowed.
If your interview uses multiple languages, you can specify more than one `sections` block and modify each one with a `language` modifier:
```yaml
---
language: en
sections:
  - Introduction
  - Fruit
  - Vegetables
  - Conclusion
---
language: es
sections:
  - Introducción
  - Fruta
  - Vegetales
  - Conclusión
---
```
If no language is specified, the fallback language `*` is used.
In the example above, the `section` modifier referred to sections using the same text that is displayed to the user. However, in some circumstances, you might want to use a shorthand to refer to a section, and update the actual section names displayed to the user without having to make changes in numerous places in your interview.
You can do this by using key/value pairs in your `sections` block, and using the special key `subsections` to indicate subsections:
```yaml
sections:
  - intro: Introduction
  - about: About you
    subsections:
      - contact: Contact info
      - demographic: Demographics
  - prefs: Preferences
  - conclusion: Conclusion
---
features:
  navigation: True
---
mandatory: True
question: |
  What is your name?
fields:
  - First Name: first_name
  - Last Name: last_name
section: contact
---
mandatory: True
question: |
  What is your e-mail address?
fields:
  - E-mail: email_address
    datatype: email
---
mandatory: True
question: |
  What is your gender?
field: gender
choices:
  - Male
  - Female
  - Something else
section: demographic
---
mandatory: True
question: |
  What kind of belly button do you have?
field: belly_button
choices:
  - Innie
  - Outie
---
mandatory: True
question: |
  What is your favorite fruit?
fields:
  - Favorite fruit: favorite_fruit
section: prefs
---
mandatory: True
question: |
  What is your favorite vegetable?
fields:
  - Favorite vegetable: favorite_vegetable
---
mandatory: True
question: Thank you.
subquestion: |
  ${ first_name },

  Your answers mean a lot to me.

  I am going to go eat some ${ favorite_vegetable } now.
section: conclusion
```
The keywords for section names need to be valid Python names. When choosing keywords, make sure not to use the names of variables that already exist in your interview.
This is because the keywords can be used to make the left-hand navigation bar clickable. If a keyword for a section is a variable that exists in the interview, clicking on the section will cause an action to be launched that seeks a definition of that variable.
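You can check candidate keywords with ordinary Python before using them; here is a small sketch (the list of interview variables is hypothetical):

```python
# Section keywords must be valid Python identifiers and must not collide
# with variables already defined in the interview.
interview_variables = {'first_name', 'email_address', 'gender'}
section_keywords = ['intro', 'about', 'contact', 'demographic', 'prefs', 'conclusion']

for keyword in section_keywords:
    assert keyword.isidentifier(), keyword + ' is not a valid Python name'
    assert keyword not in interview_variables, keyword + ' collides with a variable'
print('all section keywords are safe')
```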
The recommended way to use this feature is to set up review blocks that have `event` set to the keyword of each section that you want to be clickable.
```yaml
sections:
  - intro: Introduction
  - about: About you
    subsections:
      - contact: Contact info
      - demographic: Demographics
  - prefs: Preferences
  - conclusion: Conclusion
---
event: contact
section: contact
question: |
  Review contact information
review:
  - Edit name: first_name
    button: |
      Name: ${ first_name } ${ last_name }
  - Edit e-mail: email_address
    button: |
      E-mail: ${ email_address }
---
event: demographic
section: demographic
question: |
  Review demographic information
review:
  - Edit gender: gender
    button: |
      Gender: ${ gender }
  - Edit belly button: belly_button
    button: |
      Belly button: ${ belly_button }
---
event: prefs
section: prefs
question: |
  Preferences
review:
  - Edit fruit: favorite_fruit
    button: |
      Favorite fruit: ${ favorite_fruit }
  - Edit vegetable: favorite_vegetable
    button: |
      Favorite vegetable: ${ favorite_vegetable }
```
Note that if you use review blocks in an interview with sections, every question should have a `section` defined. Otherwise, when your users jump around the interview, their section may not be appropriate for the question they are currently answering. Alternatively, you could use `code` blocks and the `nav.set_section()` function to make sure that the section is set appropriately.
By default, users are only able to click on sections that they have visited. If you want users to be able to click on any section at any time, set `progressive` to `False`:
```yaml
sections:
  - intro: Introduction
  - about: About you
    subsections:
      - contact: Contact info
      - demographic: Demographics
  - prefs: Preferences
  - conclusion: Conclusion
progressive: False
---
event: intro
code: |
  force_ask('sees_nav_bar')
---
event: about
code: |
  force_ask('intro_to_about_you')
---
event: contact
code: |
  force_ask('first_name', 'email_address')
---
event: demographic
code: |
  force_ask('gender', 'belly_button')
---
event: prefs
code: |
  force_ask('favorite_fruit', 'favorite_vegetable')
---
event: conclusion
code: |
  force_ask('final_screen')
---
features:
  navigation: True
```
Assisting users with `interview help`
```yaml
---
interview help:
  heading: How to use this web site
  content: |
    Answer each question.  At the end, you will get a prize.
---
```
An `interview help` block adds text to the “Help” page of every question in the interview. If the question has `help` text of its own, the `interview help` will appear after the question-specific help.
You can also add audio to your interview help:
```yaml
---
interview help:
  heading: How to use this web site
  audio: answer_each_question.mp3
  content: |
    Answer each question.  At the end, you will get a prize.
---
```
You can also add video to help text using the `video` specifier. See the question modifiers section for an explanation of how audio and video file references work.
You can also provide a `label` as part of the `interview help`. This label will be used instead of the word “Help” in the navigation bar as a label for the “Help” tab.
```yaml
---
interview help:
  label: More info
  heading: More information about this web site
  content: |
    If you are not sure what the right answer is, provide your best
    guess.

    You are answering these questions under the pains and penalties
    of perjury.  Your answers will be shared with the special
    prosecutor.
---
```
Note that if you provide question-specific `help` and you include a `label` as part of that help, that label will override the default label provided in the `interview help` (except if `question help button` is enabled).
Mako functions: `def`
```yaml
def: adorability
mako: |
  <%def name="describe_as_adorable(person)"> \
  ${ person } is adorable. \
  </%def>
```
A `def` block allows you to define Mako “def” functions that you can re-use later in your question or document templates. You can use the above function by doing:
```yaml
---
question: |
  ${ describe_as_adorable(spouse) }
  Am I right?
yesno: user_agrees_spouse_is_adorable
usedefs:
  - adorability
---
```
Due to the way docassemble parses interviews, the `def` block needs to be defined before it is used.
Note the `\` marks at the end of the lines in the `mako` definition. Without these marks, there would be an extra newline inserted. You may or may not want this extra newline.
Setting the `default role`
```yaml
default role: client
code: |
  if user_logged_in() and user_has_privilege('advocate'):
    user = advocate
    role = 'advocate'
  else:
    user = client
    role = 'client'
  set_info(user=user, role=role)
---
```
If your interview uses the roles feature for multi-user interviews, the `default role` specifier will define what role or roles will be required for any question that does not contain an explicit `role` specifier.
When you use the roles feature, you need to have some way of telling your interview logic what the role of the interviewee is.
If you include `code` within the same block as your `default role` specifier, that code will be executed every time the interview logic is processed, as if it were marked as `initial`. For this reason, any `default role` specifier that contains code should be placed earlier in the interview file than any `mandatory` questions or `code` blocks.
In the example above, the interview has two roles: “client” and “advocate”. The special variables `user` and `role` are set in the `code` block, which is executed every time the interview logic is processed.
In addition, the `set_info()` function is called. This lets the linguistic functions know who the user is, so that questions can ask “What is your date of birth?” or “What is John Smith’s date of birth?” depending on whether the current user is John Smith or not.
Setting the `default language`
```yaml
---
default language: es
---
```
This sets the language to use for all of the remaining questions in the file for which the `language` modifier is not specified. The purpose of this is to save typing; otherwise you would have to set the language modifier for each question. Note that this does not extend to questions in included files.
If your interview only uses one language, it is not necessary (and probably not a good idea) to set a `default language`.
See language support for more information about how to create multi-lingual interviews. See question modifiers for information about the `language` setting of a question.
Translation files
One way that docassemble supports multi-lingual interviews is through the `language` modifier on a `question` and the `default language` block, which sets a default value for the `language` modifier. Your interview can contain `question`s in English that don’t have a `language` modifier, and `question`s in French that have the `language: fr` modifier set. If the current language in an interview (as determined by the `set_language()` function) is French (`fr`), then when docassemble seeks a block to set a given variable, it will search the French blocks first.
This method of creating multi-lingual interviews is good if the person who translates text from English to French is someone who understands how docassemble YAML files work.
There is another method of creating multi-lingual interviews that may be preferable if the translator is someone who does not understand how docassemble YAML files work. This second method extracts the phrases from an interview (specifically, everywhere in the YAML where Mako templating is allowed) and lists them all in an Excel spreadsheet. The spreadsheet can then be given to a French translator, and the translator fills out a column in the spreadsheet with the translation of each phrase. Then the completed spreadsheet can be stored in the sources folder of a package and referenced in an interview using the `translations` block:
```yaml
translations:
  - custody.xlsx
```
Then, if the current language in an interview is French, the interview will use the French version of each phrase.
This allows you to support multi-lingual interviews while having a code base that is all in one language.
To obtain such a spreadsheet for a given interview, visit the Utilities page and go to the section called Download an interview phrase translation file.
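Conceptually, a completed translation file behaves like a lookup table keyed on phrase and language; here is a minimal Python sketch of that idea (not docassemble's actual implementation — the phrases are made up):

```python
# Each spreadsheet row pairs a source phrase with its translation.
# Lookup falls back to the original phrase when no translation exists.
translations = {
    ('What is your name?', 'fr'): 'Comment vous appelez-vous ?',
}

def translate(phrase, lang):
    return translations.get((phrase, lang), phrase)

print(translate('What is your name?', 'fr'))  # Comment vous appelez-vous ?
print(translate('What is your name?', 'en'))  # What is your name?
```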
The `translations` block is only capable of defining translations for blocks that come after the `translations` block. Therefore, it is a good practice to place the `translations` block as one of the very first blocks in your interview YAML file.
See language support for more information about how to create multi-lingual interviews. See question modifiers for information about the `language` setting of a question.
Default screen parts
The `default screen parts` block allows you to write Mako and Markdown to create text that will appear by default in parts of the screen on every page.
```yaml
default screen parts:
  under: |
    You have seen ${ quantity_noun(counter, 'screen') } of this interview so far.
  help label: |
    About
  continue button label: |
    Go to next step
  subtitle: |
    A _groovy_ interview
  pre: |
    The text below **does not** constitute legal advice.
  submit: |
    Please re-read the question before moving forward.
  post: |
    This interview was generously sponsored by Example, Inc.
  css class: normalquestion
```
When using this, make sure you do not cause your interview to go into an infinite loop. If any of your screen parts require information from the user, your interview will need to pose a question to the user to gather that information, but in order to pose the question, it will need the information. To avoid this, you can use the `defined()` function or other methods.
For information about other ways to set defaults for different parts of the screens during interviews, see the screen parts section.
Custom validation messages
The docassemble user interface uses the jQuery Validation Plugin to pop up messages when the user does not enter information for a required field, or if a number does not meet a minimum, or if an e-mail address is not valid, and other circumstances.
The messages that are displayed can be customized in a number of ways.
On a server-wide level, the messages can be customized the same way other built-in phrases in docassemble can be customized: using the `words` directive in the Configuration to make a “translation table” between the built-in text and the values you want to be used in their place.
On an interview-wide level, the messages can be customized using a `default validation messages` block.
Within an individual field in a `question`, you can use the `validation messages` field modifier to define what validation messages should be used. These will override the `default validation messages`.
Each validation message has a code. In the above example, the codes used were `required` and `max`. The complete list of codes is:
- `required` for `This field is required.` There is a default text transformation for language `en` that translates this to “You need to fill this in.” This is the standard message that users see when they fail to complete a required field.
- `multiple choice required` for `You need to select one.` This is shown for multiple-choice fields.
- `combobox required` for `You need to select one or type in a new value.` This is shown for `combobox` fields.
- `checkboxes required` for `Check at least one option, or check "%s"` This is shown for `checkboxes` fields with a “None of the above” option. It is also used for `yesno` fields with `uncheck others` set, which is shown when the user does not check any of the `yesno` fields. `%s` is a code that is replaced with the label of the “None of the above” choice.
- `minlength` for `You must type at least %s characters.` This is shown when there is a `minlength` field modifier.
- `maxlength` for `You cannot type more than %s characters.` This is shown when there is a `maxlength` field modifier.
- `checkbox minmaxlength` for `Please select exactly %s.` This is shown when there is a `checkboxes` field with a `minlength` field modifier that is the same as the `maxlength` field modifier.
- `checkbox minlength` for `Please select at least %s.` This is shown when there is a `checkboxes` field with a `minlength` field modifier set to something other than `1`.
- `checkbox maxlength` for `Please select no more than %s.` This is shown when there is a `checkboxes` field with a `maxlength` field modifier.
- `date` for `You need to enter a valid date.` This is shown for `date` fields when the text entered is not an actual date.
- `date minmax` for `You need to enter a date between %s and %s.` This is shown for `date` fields with `min` and `max` set.
- `date min` for `You need to enter a date on or after %s.` This is shown for `date` fields with `min` set.
- `date max` for `You need to enter a date on or before %s.` This is shown for `date` fields with `max` set.
- `time` for `You need to enter a valid time.` This is shown for `time` fields.
- `datetime` for `You need to enter a valid date and time.` This is shown for `datetime` fields.
- `email` for `You need to enter a complete e-mail address.` This is shown for `email` fields.
- `number` for `You need to enter a number.` This is shown for numeric fields (`number`, `currency`, `float`, and `integer`) when the input is not valid.
- `min` for `You need to enter a number that is at least %s.` This is shown for numeric fields with a `min` field modifier.
- `max` for `You need to enter a number that is at most %s.` This is shown for numeric fields with a `max` field modifier.
- `file` for `You must provide a file.` This is shown for file upload fields.
- `accept` for `Please upload a file with a valid file format.` This is shown for file upload fields with an `accept` field modifier.
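The `%s` placeholders in the messages above are filled in with values such as field limits or labels; in Python terms this is ordinary `%`-formatting:

```python
# A minimal illustration of how a %s placeholder is substituted.
template = 'You need to enter a number that is at least %s.'
print(template % 4)  # You need to enter a number that is at least 4.
```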
Machine learning training data
If you use machine learning in your interviews, then by default, docassemble will use training data associated with the particular interview in the particular package in which the interview resides.
If you would like your interview to share training data with another interview, you can use the `machine learning storage` specifier to point to the training data of another interview.
For example, suppose you have developed an interview called `child_custody.yml` that uses machine learning, and you have built rich training sets for variables within this interview. Then you decide to develop another interview, in the same package, called `child_support.yml`, which uses many of the same variables. It would be a lot of work to maintain two identical training sets in two places.
In this scenario, you can add the following block to the `child_support.yml` interview:
```yaml
---
machine learning storage: ml-child_custody.json
---
```
`ml-child_custody.json` is the name of a file in the `data/sources` directory of the package. This file contains the training data for the `child-custody.yml` interview. The naming convention for these data files is to start with the name of the interview YAML file, add `ml-` to the beginning, and replace `.yml` with `.json`.
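The naming convention can be expressed as a small Python helper (the function name is ours, for illustration only):

```python
def ml_data_filename(interview_yaml):
    # Convention described above: prepend 'ml-' and swap '.yml' for '.json'.
    base = interview_yaml[:-len('.yml')] if interview_yaml.endswith('.yml') else interview_yaml
    return 'ml-' + base + '.json'

print(ml_data_filename('child_custody.yml'))  # ml-child_custody.json
```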
Now, both the `child-custody.yml` and `child-support.yml` interviews will use `ml-child_custody.json` as the “storage” area for training data. In the Training interface, you will find this data set under the name `child_custody`.
If you had run the `child-support.yml` interview before adding `machine learning storage`, you may still see a data set called `child-support` in the Training interface. If you are using the Playground, you may see a file called `ml-child-support.json` in the Sources folder. To get rid of this, go into the Playground and delete the `ml-child-support.json` file from the Sources folder. Then go into the Training interface and delete any “items” that exist within the `child-support` interview.
If you want, you can set `machine learning storage` to a name that does not correspond with an actual interview. For example, you could include `machine learning storage: ml-family-law.json` in both the `child-custody.yml` and `child-support.yml` interviews. Even though there is no interview called `family-law.yml`, this will still work.
If you are using the Playground, a file called `ml-family-law.json` will automatically be created in the Sources folder.
You can also share “storage” areas across packages. Suppose you are working within a package called `docassemble.missourifamilylaw`, but you want to take advantage of training sets in a package called `docassemble.generalfamilylaw`. You can write:
```yaml
---
machine learning storage: docassemble.generalfamilylaw:data/sources/ml-family.json
---
```
For more information about managing training data, see the machine learning section on packaging your training sets.
Optional `features`
The `features` block sets some optional features of the interview.
Whether debugging features are available
If the `debug` directive in the Configuration is `True`, then by default the navigation bar will contain a “Source” link that shows information about how the interview arrived at the question being shown. If the `debug` directive is `False`, then this link will not be shown.
This can be overridden in the `features` by setting `debug` to `True` or `False`, depending on the behavior you want.
The following example demonstrates turning the `debug` feature off. On the server that hosts the demonstration interviews, the `debug` directive is `True`, so the “Source” link is normally shown. Setting `debug: False` makes the “Source” link disappear.
Whether interview is centered
If you do not want your interview questions to be centered on the screen, set `centered` to `False`.
Progress bar
The `progress bar` feature controls whether a progress bar is shown during the interview. You can use the `progress` modifier or the `set_progress()` function to indicate the setting of the progress bar.
If you want the progress bar to display the percentage, include `show progress bar percentage: True`:
By default, if you do not set the `progress` modifier on a `question`, then each time the user takes a step, the progress bar will advance 5% of the way toward the end.
The 5% figure is known as the `progress bar multiplier`, and it is configurable:
```yaml
features:
  progress bar: True
  progress bar multiplier: 0.01
```
The default is 0.05.
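One way to model the default advancement (an assumption based on the behavior described above, not docassemble's actual source) is that each step closes a fixed fraction of the remaining distance to 100%:

```python
def advance(progress, multiplier=0.05):
    # Move 'multiplier' of the remaining distance toward 100%.
    return progress + multiplier * (100 - progress)

p = 0.0
for _ in range(3):
    p = advance(p)
print(round(p, 4))  # after three steps: 14.2625
```

Because each step advances a fraction of what remains, the bar slows down as it fills but never reaches 100% on its own.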
If you set `progress bar method: stepped`, the progress bar advances a different way when there is no `progress` modifier.
```yaml
features:
  progress bar: True
  progress bar method: stepped
```
Instead of advancing toward 100%, it advances toward the next greatest `progress` value that is defined on a `question` in the interview. (Note that docassemble cannot predict the future, so whether the question with the next highest `progress` value will actually be reached is unknown; docassemble just looks at all the `question`s in the interview that have `progress` values defined.) The amount by which it advances is determined by `progress bar multiplier`.
To use the default method for advancing the progress bar, omit `progress bar method`, or set it to `default`.
```yaml
features:
  progress bar: True
  progress bar method: default
```
Navigation bar
The `navigation` feature controls whether a navigation bar is shown during the interview. You can use the `sections` initial block or the `nav.set_sections()` function to define the sections of your interview. The `section` modifier or the `nav.set_section()` function can be used to change the current section.
Note that the section list is not shown on small devices, such as smartphones. To show a smartphone user a list of sections, you can use the `nav.show_sections()` function.
If you want the navigation bar to be horizontal across the top of the page, set `navigation` to `horizontal`:
Back button style
By default, there is a “Back” button located in the upper-left corner of the page. (However, the “Back” button is not present when the user is on the first page of an interview, when the `prevent_going_back()` function has been used, or when the `prevent going back` modifier is in use.)
Whether this back button is present can be controlled using the `navigation back button` feature. This will hide the “Back” button:
```yaml
features:
  navigation back button: False
```
You can also place a “Back” button inside the body of a question, next to the other buttons on the screen, by setting the `question back button` feature to `True` (the default is `False`).
You can also place a “Back” button inside the body of a question on some questions but not others, using the `back button` modifier.
Help tab style
When `interview help` is available, or the `help` modifier is present on a question, the “Help” tab will be present in the navigation bar. When the `help` modifier is present, the “Help” tab is highlighted yellow and marked with a yellow star. When the user presses the help tab, the help screen will be shown.
If you set the `question help button` feature to `True`, users will be able to access the help screen by pressing a “Help” button located within the body of the question, to the right of the other buttons on the page.
When `question help button` is `True`, the “Help” tab will not be highlighted yellow.
Here is an interview in which the `question help button` is not enabled (which is the default):
```yaml
features:
  question help button: False
---
```
Here is the same interview, with the `question help button` feature enabled:
```yaml
features:
  question help button: True
---
```
Note that when `question help button` is enabled, the label for the help tab in the navigation bar always defaults to “Help” or to the label of the `interview help`, and it is not highlighted yellow when question-specific help is available.
Positioning labels above fields
By default, the docassemble user interface uses Bootstrap’s horizontal form style. If you want your interview to use Bootstrap’s standard style, set `labels above fields` to `True`:
```yaml
features:
  labels above fields: True
```
Hiding the standard menu items
By default, the menu in the corner provides logged-in users with the ability to edit their “Profile” and the ability to go to “My Interviews,” which is a list of interview sessions they have started. If you want to disable these links, you can use the `hide standard menu` specifier:
```yaml
features:
  hide standard menu: True
```
If you want to add any of these links manually, or add them with different names, you can do so with the `menu_items` special variable and the `url_of()` function.
```yaml
mandatory: True
code: |
  menu_items = [
    {'label': 'Edit my Profile', 'url': url_of('profile')},
    {'label': 'Saved Sessions', 'url': url_of('interviews')}
  ]
```
Javascript and CSS files
If you are a web developer and you know how to write HTML, Javascript, and CSS, you can embed HTML in your interview text. You can also bring Javascript and CSS files into the user’s browser.
For example, the following interview brings a Javascript file, `my-functions.js`, and a CSS file, `my-styles.css`, into the user’s browser. These files are located in the `data/static` folder of the same package in which the interview is located.
The contents of `my-functions.js` are:
```javascript
$(document).on('daPageLoad', function(){
  $(".groovy").html("I am purple");
});
```
The contents of `my-styles.css` are:
```css
.groovy {
  color: purple;
}
```
You can write whatever you want in these files; they will simply be loaded by the user’s browser. Note that your Javascript files will be loaded after jQuery is loaded, so your code can use jQuery, as this example does.
If you have Javascript code that you want to run after each screen of the interview is loaded, attach a jQuery event handler to `document` for the event `daPageLoad`, which is a docassemble-specific event that is triggered after each screen loads. (Since docassemble uses Ajax to load each new screen, if you attach code using jQuery’s `ready()` method, the code will run when the browser first loads, but not every time the user sees a new screen.) The example above demonstrates this; every time the page loads, the code will replace the contents of any element with the class `groovy`.
This example demonstrates bringing in CSS and Javascript files that are located in the `data/static` directory of the same package as the interview. You can also refer to files in other packages:
```yaml
features:
  css: docassemble.demo:data/static/my.css
```
or on the internet at a URL:
features: javascript:
Also, if you want to bring in multiple files, specify them with a YAML list:
Also, if you want to bring in multiple files, specify them with a YAML list:

```yaml
features:
  css:
    - my-styles.css
    -
  javascript:
    -
    -
```
If you want to include CSS or Javascript code in a specific question, rather than in all questions of your interview, you can use the `script` and `css` modifiers.
The HTML of the screen showing a `question` contains a number of placeholder CSS classes that are not used for formatting, but that are available to facilitate customization:
- If a `question` is tagged with an `id`, the `<body>` will be given a class beginning with `question-` followed by the `id`, except that the `id` will be transformed into lowercase and non-alphanumeric characters will be converted into hyphens. For example, if the `id` is `Intro screen`, the class name will be `question-intro-screen`.
- `<fieldset>`s are tagged with classes like `field-yesno` and `field-buttons`.
- `<div>`s that contain fields are tagged with classes like `field-container`, `field-container-datatype-area`, `field-container-inputtype-combobox`, and other classes.
For more information, use the DOM inspector in your web browser to see what the class names are and which elements have the class names.
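The `id`-to-class transformation described above can be sketched in Python (the function name and the exact regular expression are our assumptions; the real implementation may differ in edge cases):

```python
import re

def question_body_class(question_id):
    # Lowercase the id, convert runs of non-alphanumeric characters to
    # hyphens, and prepend 'question-'.
    slug = re.sub(r'[^a-z0-9]+', '-', question_id.lower()).strip('-')
    return 'question-' + slug

print(question_body_class('Intro screen'))  # question-intro-screen
```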
Example use of JavaScript: charting
Here is an example interview that uses a `javascript` feature and a `script` modifier to draw a doughnut chart using chart.js.
```yaml
features:
  javascript:
---
mandatory: True
question: Your stuff
subquestion: |
  <div class="chart-container" style="position: relative; height:450px; width:100%">
    <canvas id="myChart" width="600" height="400"></canvas>
  </div>
script: |
  <script>
  var ctx = $("#myChart");
  var myDoughnutChart = new Chart(ctx, {
    type: 'doughnut',
    data: ${ json.dumps(data) }
  });
  </script>
---
code: |
  data = {'datasets': [{'data': [how_many[y] for y in things],
                        'backgroundColor': [color[y] for y in range(len(things))]}],
          'labels': things}
---
variable name: color
data:
  - ''
```
Here is an example interview that draws a pie chart using Google Charts.
```yaml
features:
  javascript:
---
mandatory: True
question: Your stuff
subquestion: |
  <div id="piechart" style="width: 100%; min-height: 450px;"></div>
script: |
  <script type="text/javascript">
    google.charts.load('current', {'packages':['corechart']});
    google.charts.setOnLoadCallback(drawChart);
    function drawChart() {
      var chartwidth = $('#piechart').width();
      var data = google.visualization.arrayToDataTable(${ json.dumps(data) });
      var options = {
        title: ${ json.dumps(title) },
        width: chartwidth,
        chartArea: {width: chartwidth, left: 20, top: 20, height: chartwidth*0.75}
      };
      var chart = new google.visualization.PieChart(document.getElementById('piechart'));
      chart.draw(data, options);
    }
  </script>
---
code: |
  title = "Household stuff"
  data = [['Thing', 'How many']] + [[y, how_many[y]] for y in things]
```
Bootstrap theme
Using the
bootstrap theme feature, you can change the look and feel
of your interview’s web interface by instructing your interview to use
a non-standard CSS file in place of the standard CSS file used by
Bootstrap.
The file can be referenced in a number of ways:
lumen.min.css: the file
lumen.min.css in the “static” folder of the current package.
docassemble.demo:lumen.min.css: the file
lumen.min.css in the “static” folder (
data/static/) of the
docassemble.demopackage.
docassemble.demo:data/static/lumen.min.css: the same.
A URL: a file on the internet.
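Putting this together, a features block selecting one of the theme files listed above might look like this (using the docassemble.demo package reference from the list; a sketch only):

```yaml
features:
  bootstrap theme: docassemble.demo:lumen.min.css
```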
For more information about using custom Bootstrap themes, and for
information about applying themes on a global level, see the
documentation for the
bootstrap theme configuration directive.
Inverted Bootstrap navbar
By default, docassemble uses Bootstrap’s “dark” (formerly known
as “inverted”) style of navigation bar so that the navigation bar
stands out from the white background. If you do not want to use the
inverted navbar, set the
inverse navbar feature to
False.
To make this change at a global level, see the
inverse navbar configuration directive.
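A minimal features block turning off the inverted navbar would look like:

```yaml
features:
  inverse navbar: False
```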
Hiding the navbar
By default, docassemble shows a navigation bar at the top of the
screen. To make it disappear, you can set
hide navbar: True.
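For example:

```yaml
features:
  hide navbar: True
```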
Width of tables in attachments
As explained more fully in the tables section, if you include a
table in an
attachment and the table is too wide, or not wide
enough, you can change the default character width of tables from 65
to some other value using the
table width specifier within the
features block.
features:
  table width: 75
Disabling document caching
By default, docassemble caches assembled documents for performance
reasons. To disable the document caching feature for a given
interview, set
cache documents to
False.
features:
  cache documents: False
Producing PDF/A files
If you want the PDF files produced by your interview to be in PDF/A format, you can set this as a default:
features:
  pdf/a: True
The default is determined by the
pdf/a configuration directive.
The setting can also be made on a per-attachment basis by setting the
pdf/a attachment setting.
When using
a docx template file, you also
have the option of creating a “tagged PDF,” which is similar to
PDF/A. You can set this as an interview-wide default:
features:
  tagged pdf: True
The default is determined by the
tagged pdf configuration directive.
This setting can also be made on a per-attachment basis by setting the
tagged pdf attachment setting.
Limiting size of uploaded images
If your users upload digital photos into your interviews, the uploads
may take a long time. Images can be reduced in size before they are
uploaded. To set a default size limit for all uploads in your interview,
set
maximum image size in the
features block of your interview.
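For instance, to reduce uploaded images to at most 100 pixels in each dimension:

```yaml
features:
  maximum image size: 100
```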
In this example, images will be reduced in size so that they are no taller than 100 pixels and no wider than 100 pixels.
Note that the image file type of the uploaded file may be changed to PNG during the conversion process. Different browsers behave differently.
This is just a default value; you can override it by setting the
maximum image size in a field definition.
If you have an interview-wide default, but you want to override it for
a particular field to allow full-resolution camera uploads, you can
set the
maximum image size field modifier to
None.
If you want to use a site-wide default value, set the
maximum image size in the configuration.
Converting the format of uploaded images
If you are using
maximum image size, you can also cause images to be
converted to PNG, JPEG, or BMP by the browser during the upload
process by setting the
image upload type to
png,
jpeg, or
bmp.
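For example, to reduce images to 100 pixels in each dimension and have the browser convert them to JPEG during upload (combining the two settings described above; values chosen for illustration):

```yaml
features:
  maximum image size: 100
  image upload type: jpeg
```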
Going full screen when interview is embedded
It is possible to embed a docassemble interview in a web page using an iframe. However, the user experience on mobile is degraded when an interview is embedded.
If you want the interview to switch to “full screen” after the user
moves to the next screen in the embedded interview, you can do so.
Within a
features block, include
go full screen: True.
features:
  go full screen: True
---
question: |
  Let's go on a quest!
subquestion: |
  How exciting would you like your quest to be?
field: excitement_level
choices:
  - Thrilling
  - Interesting
  - Soporific
---
question: |
  We are nearing the end of the quest.
field: quest_almost_over
---
question: |
  We have finished the quest.
buttons:
  - Return: exit
    url: |
      ${ referring_url() }
need:
  - excitement_level
  - quest_almost_over
mandatory: True
For more information about implementing an embedded interview like this, see the HTML source of the web page used in this example.
Note that in this example, the user is provided with an exit button
at the end of the interview that directs the user back to the page
that originally embedded the interview. This is accomplished by
setting the
url of the exit button to the result of the
referring_url() function.
If you only want the interview to go full screen if the user is using
a mobile device, use
go full screen: mobile.
features:
  go full screen: mobile
---
code: |
  if device().is_mobile or device().is_tablet:
    on_mobile = True
  else:
    on_mobile = False
---
mandatory: True
code: |
  excitement_level
  quest_almost_over
  if on_mobile:
    final_screen_mobile
  else:
    final_screen_desktop
---
question: |
  Let's go on a quest!
subquestion: |
  % if on_mobile:
  I see you are using a mobile device.
  % else:
  I see that you are not using a mobile device.
  % endif

  How exciting would you like your quest to be?
field: excitement_level
choices:
  - Thrilling
  - Interesting
  - Soporific
---
question: |
  We are nearing the end of the quest.
field: quest_almost_over
---
event: final_screen_mobile
question: |
  We have finished the quest.
buttons:
  - Return: exit
    url: |
      ${ referring_url() }
---
event: final_screen_desktop
question: |
  We have finished the quest.
Note that this example provides a different ending screen depending
on whether the user is on a desktop or a mobile device. If a desktop
user is viewing the interview in an iframe on a web site, the
interview should not provide an exit button that takes the user to a
web site, because then the user will see a web site embedded in a web
site. The interview in this example uses the
device() function to
detect whether the user is using a mobile device. Note that the
interview logic looks both at
device().is_mobile as well as
device().is_tablet. This corresponds with the functionality of
go
full screen: mobile, which will make the interview go full screen if
the user has either a mobile phone or a tablet.
Infinite loop protection
The infinite loop protection section of the configuration documentation explains how you can change the default limits on recursion and looping for all interviews on the server.
You can also set these limits on a per-interview basis using the
loop
limit and
recursion limit features.
features:
  loop limit: 600
  recursion limit: 600