text stringlengths 454 608k | url stringlengths 17 896 | dump stringclasses 91 values | source stringclasses 1 value | word_count int64 101 114k | flesch_reading_ease float64 50 104 |
|---|---|---|---|---|---|
import scala.actors.Actor
case class Trade(id: Int, security: String, principal: Int, commission: Int)
case class TradeMessage(message: Trade)
case class AddListener(a: Actor)
class TradingService extends Actor {
def act = loop(Nil)
def loop(traders: List[Actor]) {
react {
case AddListener(a) => loop(a :: traders)
case msg@TradeMessage(t) => traders.foreach(_ ! msg); loop(traders)
case _ => loop(traders)
}
}
}
An implementation of the Observer design pattern using message passing. Interested traders can register as observers and observe every trade that takes place. But without any mutable state for maintaining the list of observers. Not a very familiar paradigm to the programmers of an imperative language. The trick is to have the list of observers as an argument to the
loop()function which is tail called.
Nice .. asynchronous, referentially transparent, side-effect-free functional code. No mutable state, no need of explicit synchronization, no fear of race conditions or deadlocks, since no shared data are being processed concurrently by multiple threads of execution.
7 comments:
Hey, did you ever benchmark Rabbit MQ against other contenders? ActiveMQ, Qpid?
Just a small quibble, but the code is not really referentially transparent since it depends on the side effected react. React takes a partial function as a parameter and passes to it a message which is neither a parameter to react nor a value in the lexical scope.
That's not to say anything bad about the code. This kind of "state via recursion" programming is a great way to keep actors nice and clean.
It's just to recognize that message passing concurrency, and indeed most forms of concurrency that aren't simple parallel computation like data flow variables/futures, are inherently side effected.
Sir,
Just wondered what you thought of this :
and the package on which it is based :
The above to me looks more approachalbe instead of abandoning the Java language (if i take up Scala) or the entire eco-system (if I take up Erlang).
Regards,
@James: Thanks for pointing it out. I have posted a small update here.
@AnActor: I have not yet looked at Functional Java in detail. But why bend Java to do something which it was not designed for. Scala offers a much cleaner programming model - a neat actor implementation in a library, with much usable syntax (thanks to its better type system). And above all it's all on the JVM !
@anactor - though functional java looks quite brilliant, it still relies on closures, which are not yet shipping with java. in the recent neil grafter interview (up on infoQ), neil says he does not think closures will make it into java7. i'm not sure what ramifications that would entail should functional java's notion and dependency on a particular closure model get out of sync with what actually ships.
if you are stuck on java and need an actor framework, kilim might also be worth investigating. it appears very performant and well thought out.
or... scala is here now. the book will probably hit the press within next 30 days, and buzz will surely abound.
@JHerber: Thank you for the directives. Yes I intend to give Kilim a try one of these days.
Its not that I am stuck with Java but just exploring the options. I am trying to soften the communication I would need to deliver to management and still try out FP. If I pitch Scala or Erlang, for sure I would face understandable questions like :
1. Why the heck are we spending so much moolah trying to get our programmers competent on the Java platform ?
2. So we are not going to tap into the wealth of experience and competence gained on the Java ecosystem ?
3. Uhh so would we miss deadlines and therefore revenue targets for the next quarter because of this new stuff ?
4. How are you going to get your teams of young programmers (3-4 yrs of experience) to shift to a totally different way of programming/thinking - not just a new language, and finally deliver performant, maintainable, production quality systems that need to replace well established existing ones ?
5. Are you sure Scala or Erlang are not nitch players and would become mainstream and we would not need to pay a bomb just to hire/retain a bunch of average programmers ? | http://debasishg.blogspot.com/2008/08/asynchronous-functional-and.html | CC-MAIN-2018-13 | refinedweb | 716 | 61.56 |
×3. (the 0×3 is important)
- Tap or click the Slui.exe 0× :
damnn useful!!
Does anyone got successful key from CIC ? if yeas then please tell procedure for tht.
Many thanks
As specified in the post, You can get it in person from CIC.
please specify email address of CIC person
is it open for all Indian residents ? or only for student who in campus ?
Only for students in Campus. Infact, the key would work with Institute LAN only.
CIC is providing activation key for free????
To Specific versions of Windows, yes!
does anyone know the emailid of cic
For obvious concerns, I can’t give the E-mail ID here, If you are a resident of IIT Kharagpur, visit CIC at the earliest.
please tell the email-id of cic .
For obvious concerns, I can’t give the E-mail ID here, If you are a resident of IIT Kharagpur, visit CIC at the earliest.
Can you tell me, how do I do the same from ubuntu?
Ubuntu is free, you don’t need to activate it using licensed key.
am already having windows 8 installed on my PC,but its fake……
do i need a fresh installation or just the key will work???
It would only on the ISOs available on CIC Software Repository.
Where/What is CIC?
Computer and Informatics Center, IIT Kharagpur
can i have email of CIC???
For obvious concerns, I can’t give the E-mail ID here, If you are a resident of IIT Kharagpur, visit CIC at the earliest.
what is the CIC email id
can i know whom to meet in cic
Go there and ask about the Windows Product key, they will guide you.
@theDroidMaster
I followed the instruction but the repository takes a lot of time so search DC with the same name and got the 64 bit win 8 file..Now I’m trying to mount the image with Daemon Tool but its saying can’t copy file because its corrupt..Do you have the file with u? Please help
Since many users were downloading yesterday, the server was slow. Try it now through Software Repo, I am getting a good 8 MBPS now.
Can I visit CIC any time to get the key? Do I need to bring the ID card as well?
Visit during office hours.
hi i am not able to access the software repo by the process u have said the error message is
WINDOWS CANT ACCESS \\144.16.192.212
Check your network settings. Its working fine on my end.
What exactly could be the problem, in the network settings..
I myself am also not able to access it. Getting the same error.
can i update
Are you able to download apps from Microsoft Store in the Metro UI mode? I guess there are some issues when we try to do it over proxy..
I have found a way to downoad apps using our proxy. Will be posting an article regarding the same by tomorrow.
Thank You !!!
Slui.exe 0×3 is not opening ??
typ 0×3 instead of0×3……replace the cross sign with ‘x’ from keyboard.
Is it only for student who in campus ?
Yes
after creating bootable files on pendrive , where to click??
Boot your computer with your pendrive.
Thanks……….i have upgraded it.
for those who cannot open store..
Open command prompt as admin and write:
netsh
winhttp
import proxy source=ie
now click enter..
this worked for me
We can run the apps by using:
netsh winhttp import proxy ie
Can you please clarify one thing:
Do I need to run the same commands as mentioned above (after first disabling proxy in internet explorer) to disable app-proxy when I go home and use my home’s proxy-free wifi ?
To remove the system proxy use this
netsh winhttp reset
Thanks :)
are they providing license for microsoft visual studio registered version?
Yeah.
I have installed the windows 8 and it working properly.. thanks… my doubt is will it work outside campus without any problem…????
It will work fine. :)
Hi
i am now using window8.
But it didn’t asked for ACT KEY during booting.
Will it ask for KEY in future?
Thanks a lot
Check point 4. Activating Windows in the article.
in cic file list they have matlab 2012. is that registered version is for KGP students also?
cannot activate..displaying multiple activation key has exceeded its limit
Well, it is what it is then.
if that is the situation…will we get another key from institute..or not..i have installed windows 8 but did not activate it..what should i do know???
As far as I know, the Key to 64 bit versions have reached the limit. You can try 32 bit or ask CIC people for any solution.
I changed my OS to windows 8 after downloading from CIC and getting the serial key from there…. but now when i use that to activate the windows… it says that it cant be activated and that “The activation server has reported that the multiple activation key has exceeded its limit” …. somebody please help… :(
As far as I know, the Key to 64 bit versions have reached the limit. You can try 32 bit or ask CIC people for any solution.
Yeah Getting the same probloem….multiple activation key has exceeded its limit…What can be done in this case?
its replying that it multiple activation key has exceeded its limit
I have Windows vista basic genuine in my laptop. Can I install M S office 2010? Will it be compatible?
Yes , you can install MS office 2010 .
Hey has anyone done this recently??? Is it still showing “multiple activation key has exceeded its limit” or are we getting other key or some other way out by CIC???
i installed fifa13 and nfs on my windows8 pc but these games are not working. when i try to open a game it shows ” no apps are installed to open this type of link(origin)”. any solution for this problem?
I am getting an error when I am trying to activate it.. it is saying “windows couldnt be activated, the file name, directory name or label syntax is incorrect”. now what is that? plz tell me
try windows activation via phone.. that works :)
Great and interesting articles it is nice info in this post.
I am getting an error when I am trying to activate it.. it is saying “windows couldnt be activated, the file name, directory name or label syntax is incorrect”. now what is that? plz tell me | http://www.comptalks.com/get-licensed-windows-8-from-iitkgp-cic/ | CC-MAIN-2014-10 | refinedweb | 1,103 | 76.42 |
#include <sys/hello.h> Aurelien Jarno <aurelien@aurel32.net> (16/03/2007): > The package is currently building on my machine. I still don't know > why this package fails to build on the two i386 build daemons, while > they builds correctly on my machine or on io.debian.net. It builds > fine on the amd64 build daemon. To help me a bit with strange build systems enabling/disabling some features (and thus some files/directories are missing at the very end of the build and one might have difficulties to see why), I wrote a tiny script to extract the interesting part of a buildd log (which I get from experimental.ftbfs.de or from buildd.debian.org), i.e. the dpkg-buildpackage until dpkg-genchanges part, also replacing some HTML entities and mostly replacing the build directory by a given token, so that the "normalized" build log is independant of the box on which it was extracted. The same script also applies to a dpkg-buildpackage output, on a devel box. I don't know if it can be of some interest to anyone else but in this case, extract-log.pl on io, in my home. If someone knows about this kind of tool, please let me know. If not, I might be interested in developping it a bit to automatically fetch the logs matching a given architecture and version. By diff'ing or wdiff'ing two "normalized" build logs, it is quite easy to see that such directory isn't entered, that such feature isn't available on such arch, etc. > > - porter NMU of jabber-common (#407102) > > I agree to sponsor a NMU, as proposed by Cyril. I think the first > thing to do is to ping the maintainer and propose him a porter NMU. Which I've just done. Cheers, -- Cyril Brulebois
Attachment:
pgpEvkBqN8PXS.pgp
Description: PGP signature | http://lists.debian.org/debian-bsd/2007/03/msg00024.html | CC-MAIN-2013-48 | refinedweb | 312 | 64 |
The Sitecore Web Form for Marketers module offers content editors a flexible way to create data capture forms and then trigger certain actions to occur on submission. There’s a whole host of save action options out the box, such as sending an email, enrolling the user in an engagement plan or updating some user details.
However one save action that is often required is the ability to send the data onto a CRM system so that it can get to the people that need to act on it, rather than staying in Sitecore with the content editors. To do this your best option is to create a custom save action that can send on the information.
Creating a Save Action in Sitecore
Save actions in Sitecore are configured under /sitecore/system/Modules/Web Forms for Marketers/Settings/Actions/Save Actions. Here you can see all the standard ones that come out the box and add your own.
Right click Save Actions and insert a new Save Action. You will need to fill out the Assembly and Class fields so that Sitecore knows which bit of code to execute (details on creating this bit below).
Adding Field Mappings
To really make your save action usable you will want to allow the content editor to map the fields on their form with the ones in the CRM, rather than relying on them both having the same name and hard coding the expected WFFM field names in your save action logic.
On your Save Action Item in Sitecore their are 2 fields to fill out to enable the editor in WFFM (Editor and QueryString). Luckily Sitecore provide a mapping editor out the box so there’s very little effort involved here.
Within the Editor filed add the value:
control:Forms.MappingFields
And within the querystring field, add your list of fields in the format fields=FieldName|FieldDisplayText
fields=FirstName|First Name,LastName|Last Name,EmailAddress|Email Address,CompanyName|Company Name
When the content editor now adds the save action to their form they will now be able to select a form field for each of these fields.
Creating the Save Action in code
To create the save action you will need a class that inherits from either ISaveAction or WffmSaveAction. I’ve used WffmSaveAction as it already has some of the interface implemented for you.
The field list you added to the Querystring property of the save action in Sitecore will need to be added as public properties to your class. Sitecore will then populate each of these with the ID the field gets mapped to or null if it has no mapping.
Then all that’s left is to add an Execute method to populate your CRM’s model with the data coming in through the adaptedFields parameter and send it onto your CRM.
using Sitecore.Data; using Sitecore.WFFM.Abstractions.Actions; using Sitecore.WFFM.Actions.Base; using System; namespace MyProject.Forms { public class SendFormToCrmSaveAction : WffmSaveAction { public string EmailAddress { get; set; } public string FirstName { get; set; } public string LastName { get; set; } public string CompanyName { get; set; } public override void Execute(ID formId, AdaptedResultList adaptedFields, ActionCallContext actionCallContext = null, params object[] data) { // Map values from adapted fields into enquiry model IEnquiry enquiry = new Enquiry(); enquiry.Email = GetValue(this.EmailAddress, adaptedFields); enquiry.FirstName = GetValue(this.FirstName, adaptedFields); enquiry.LastName = GetValue(this.LastName, adaptedFields); enquiry.CompanyName = GetValue(this.CompanyName, adaptedFields); // Add logic to send data for CRM here } /// <summary> /// Get Value from field list data for a given form field id, or return null if not found /// </summary> /// <param name="formFieldId"></param> /// <param name="fields"></param> /// <returns></returns> private string GetValue(string formFieldId, AdaptedResultList fields) { if (string.IsNullOrWhiteSpace(formFieldId)) { return null; } if (fields == null || fields.GetEntryByID(formFieldId) == null) { return null; } return fields.GetValueByFieldID(formFieldId); } } } } | https://himynameistim.com/2017/08/16/creating-a-wffm-save-action-with-field-mappings/ | CC-MAIN-2019-47 | refinedweb | 629 | 50.67 |
Answered by:
Custom MediaStreamSource and Memory Leaks During SampleRequested
Greetings,
I have a nasty memory leak problem that is causing me to pull my hair out.
I'm implementing a custom MediaStreamSource along with MediaTranscoder to generate video to disk. The frame generation operation occurs in the SampleRequested handler (as in the MediaStreamSource example). No matter what I do - and I've tried a ton of options - inevitably the app runs out of memory after a couple hundred frames of HD video. Investigating, I see that indeed GC.GetTotalMemory reports an increasing, and never decreasing, amount of allocated RAM.
The frame generator in my actual app is using RenderTargetBitmap to get screen captures, and is handing the buffer to MediaStreamSample.CreateFromBuffer(). However, as you can see in the example below, the issue occurs even with a dumb allocation of RAM and no other actual logic. Here's the code:
void _mss_SampleRequested(Windows.Media.Core.MediaStreamSource sender, MediaStreamSourceSampleRequestedEventArgs args) { if ( args.Request.StreamDescriptor is VideoStreamDescriptor ) { if (_FrameCount >= 3000) return; var videoDeferral = args.Request.GetDeferral(); var descriptor = (VideoStreamDescriptor)args.Request.StreamDescriptor; uint frameWidth = descriptor.EncodingProperties.Width; uint frameHeight = descriptor.EncodingProperties.Height; uint size = frameWidth * frameHeight * 4; byte[] buffer = null; try { buffer = new byte[size]; // do something to create the frame } catch { App.LogAction("Ran out of memory", this); return; } args.Request.Sample = MediaStreamSample.CreateFromBuffer(buffer.AsBuffer(), TimeFromFrame(_FrameCount++, _frameSource.Framerate)); args.Request.Sample.Duration = TimeFromFrame(1, _frameSource.Framerate); buffer = null; // attempt to release the memory videoDeferral.Complete(); App.LogAction("Completed Video frame " + (_FrameCount-1).ToString() + "\n" + "Allocated memory: " + GC.GetTotalMemory(true), this); return; }
It usually fails around frame 357, with GC.GetTotalMemory() reporting 750MB allocated.
I've tried tons of work-arounds, none of which made a difference. I tried putting the code that allocates the bytes in a separate thread - no dice. I tried Task.Delay to give the GC a chance to work, on the assumption that it just had no time to do its job. No luck.
As another experiment, I wanted to see if the problem went away if I allocated memory each frame, but never assigned it to the MediaStreamSample, instead giving the sample (constant) dummy data. Indeed, in that scenario, memory consumption stayed constant. However, while I never get an out-of-memory exception, RequestSample just stops getting called around frame 1600 and as a result the transcode operation never actually returns to completion.
I also tried taking a cue from the SDK sample which uses C++ entirely to generate the frame. So I passed the buffer as a Platform::Array<BYTE> to a static Runtime extension class function I wrote in C++. I won't bore you with the C++ code, but even directly copying the bytes of the array to the media sample using memcpy still had the same result! It seems that there is no way to communicate the contents of the byte[] array to the media sample without it never being released.
I know what some will say: the difference between my code and the SDK sample, of course, is that the SDK sample generates the frame _entirely_ in C++, thus taking care of its own memory allocation and deallocation. Because I want to get the data from RenderTargetBitmap, this isn't an option for me. (As a side note, if anyone knows if there's a way to get the contents of an RT Window using DirectX, that might work too, but I know this is not a C++ forum, so...). But more importantly, MediaStreamSource and MediaStreamSample are managed classes that appear to allow you to generate custom frames using C# or other managed code. The MediaStreamSample.CreateFromBuffer function appears to be tailored for exactly what I want. But there appears to be no way to release the buffer when giving the bytes to the MediaStreamSample. At least none that I can find.
I know the RT version of these classes are new to Windows 8.1, but I did see other posts going back 3 years discussing a similar issue in Silverlight. That never appears to have been resolved.
I guess the question boils down to this: how do I safely get managed data, allocated during the SampleRequested handler, to the MediaStreamSample without causing a memory leak? Also, why would the SampleRequested handler just stop getting called out of the blue, even when I artificially eliminate the memory leak problem?
Thanks so much for all input!Thursday, May 22, 2014 5:04 PM
Question
Answers
All replies
Just a quick correction: the Windows Runtime (including RenderTargetBitmap and MediaStreamSource) is native code, not managed. It can be called from native C++ with deterministic memory management.
When you project it into C# your C# code is managed and garbage collected and the GC handles when to free the native runtime components. Setting the buffer to null will release the reference, but not immediately the memory.
If you examine this in the memory profiler is the memory rooted? You should be able to see if there are outstanding references holding on to it. If you force a GC does it clean up?
--RobFriday, May 23, 2014 3:24 PMOwner
Hello,
If you want to actively manage your sample memory you should create a sample pool. In other words, you should pre allocate a pool of samples that is just large enough to support your scenario. You should then cycle through the pool round robin style, reusing samples as appropriate. This will allow you to pre allocate and know exactly how much sample memory your app will be using.
I hope this helps,
James
Windows SDK Technologies - Microsoft Developer Services -, May 23, 2014 6:07 PMModerator
Hi Rob -
Thanks for your quick reply and for clarifying the terminology.
In the Memory Usage test under Analyze/Performance and Diagnostics (is that what you mean?) it's clear that each frame of video being created is not released from memory except when memory consumption gets very high. GC will occasionally kick in, but eventually it succumbs.
Interestingly, if I reduce the frame size substantially, say 320x240, it never runs out of RAM no matter how many frames I throw at it. The Memory Usage test, however, shows the same pattern. But this time the GC can keep up and release the RAM.
After playing with this ad nauseum, I am fairly convinced I know what the problem is, but the solution still escapes me. It appears that the Transcoder is requesting frames from the MediaStreamSource (and the MediaStreamSource is providing them via my SampleRequested handler) faster than the Transcoder can write them to disk and release them. Why would this be happening? The MediaStreamSource.BufferTime property is - I thought - used to prevent this very problem. However, changing the BufferTime seems to have no effect at all - even changing it to ZERO doesn't change anything. If I'm right, this would explain why the GC can't do its job - it can't release the buffers I'm giving to the Transcoder via SampleRequested because the Transcoder won't give them up until it's finished transcoding and writing them to disk. And yet the transcoder keeps requesting samples until there's no more memory to create them with.
The following code, which I made from scratch to illustrate my scenario, should be air-tight according to everything I've read. And yet, it still runs out of memory when the frame size is too large.
If you or anyone else can spot the problem in this code, I'd be thrilled to hear it. Maybe I'm omitting a key step with regard to getting the deferral? Or maybe it's a bug in the back-end? Can I "slow down" the transcoder and force it to release samples it's already used?
Anyway here's the new code, which other than App.cs is everything. So if I'm doing something wrong it will be in this module:
using System; using System.Collections.Generic; using System.IO; using System.Threading.Tasks; using System.Linq; using System.Runtime.InteropServices.WindowsRuntime; using System.Diagnostics;.UI.Popups; using Windows.Storage; using Windows.Storage.Pickers; using Windows.Storage.Streams; using Windows.Media.MediaProperties; using Windows.Media.Core; using Windows.Media.Transcoding; // The Blank Page item template is documented at namespace MyTranscodeTest { /// <summary> /// An empty page that can be used on its own or navigated to within a Frame. /// </summary> public sealed partial class MainPage : Page { MediaTranscoder _transcoder; MediaStreamSource _mss; VideoStreamDescriptor _videoSourceDescriptor; const int c_width = 1920; const int c_height = 1080; const int c_frames = 10000; const int c_frNumerator = 30000; const int c_frDenominator = 1001; uint _frameSizeBytes; uint _frameDurationTicks; uint _transcodePositionTicks = 0; uint _frameCurrent = 0; Random _random = new Random(); public MainPage() { this.InitializeComponent(); } private async void GoButtonClicked(object sender, RoutedEventArgs e) { Windows.Storage.Pickers.FileSavePicker picker = new Windows.Storage.Pickers.FileSavePicker(); picker.FileTypeChoices.Add("MP4 File", new List<string>() { ".MP4" }); Windows.Storage.StorageFile file = await picker.PickSaveFileAsync(); if (file == null) return; Stream outputStream = await file.OpenStreamForWriteAsync(); var transcodeTask = (await this.InitializeTranscoderAsync(outputStream)).TranscodeAsync(); transcodeTask.Progress = (asyncInfo, progressInfo) => { Dispatcher.RunAsync(Windows.UI.Core.CoreDispatcherPriority.Normal, () => { _ProgressReport.</param> /// <param name="args"></param> void _mss_SampleRequested(MediaStreamSource sender, MediaStreamSourceSampleRequestedEventArgs args) { if (_frameCurrent == c_frames) return; var deferral = args.Request.GetDeferral(); byte[] frameBuffer; try { frameBuffer = new byte[_frameSizeBytes]; this._random.NextBytes(frameBuffer); } catch { throw new Exception("Sample source ran out of RAM"); } args.Request.Sample = MediaStreamSample.CreateFromBuffer(frameBuffer.AsBuffer(), TimeSpan.FromTicks(_transcodePositionTicks)); args.Request.Sample.Duration = TimeSpan.FromTicks(_frameDurationTicks); args.Request.Sample.KeyFrame = true; _transcodePositionTicks += _frameDurationTicks; _frameCurrent++; deferral.Complete(); } } }
Again, I can't see any reason why this shouldn't work. You'll note it mirrors pretty closely the sample in the Windows 8.1 Apps With Xaml Unleashed book (Chapter 14). The difference is I'm feeding the samples to a transcoder rather than a MediaElement (which, again should be no issue).
Thanks again for any suggestions!
PeterFriday, May 23, 2014 6:31 PM
Hi James -
That's a very interesting idea thank you. Here's one problem I would have though. I want to be able to generate long videos (maybe an hour or more) so obviously I cannot allocate an hour's worth of uncompressed samples, nor would I know in advance how long the output will have to be.
I think what you're saying, though, is that I would allocate a much smaller pool of frames - say 30 - and cycle through them, and when I hit 30, go back to 1. But how will I know for sure that any particular sample is free for re-use? Also, even a buffer of 30 could be huge - for 1080/30p video, that's ~200MB/s, so really something like no more than 5 or 10 samples would be what I would want to have on deck at any given time.
As it happens, I actually tried a version of your suggestion yesterday - I pre-allocated one frame buffer that I would give to MediaStreamSample.CreateFromBuffer each time, copying the new data into that frame buffer with each SampleRequested call.
Sure enough, no crash. I thought I'd solved the problem until I looked at the video, though. It effectively looked like approximately 1 frame per second video. What I think was happening in reality was that the MediaStreamSample buffers were all being linked to the same block of bytes - my frame buffer - and so whenever I updated those bytes, multiple samples got affected.
Bottom line - if I use your approach, how do I know when it's safe for me to start re-using old samples?
Also, please see my last post with the new code sample that reduces the problem to the bare minimum. Maybe you'll see something else that I can't.
Thanks so much. Yours might be the answer if I can solve these questions.Friday, May 23, 2014 6:41 PM
More info, and this warrants its own post for sure.
Thanks to James' suggestion to use a revolving buffer I looked closer at the Processed handler for MediaStreamSample, answering my own question about how to know when the sample is available to be reused. It looks like the right way to do this is to maintain a list of, custom structures containing MediaStreamSamples and "isFree" flags, use the Process handlers to set samples as free, and then create new samples from free ones by calling MediaStreamSample.CreateFromBuffer, passing the buffer from the old sample. Perfect solution. EXCEPT:
Before implementing the idea fully I wanted to get an idea of how many samples I would need to create, so I decided to just maintain a counter of outstanding samples, incrementing the counter on MediaStreamSample.CreateFromBuffer and decrementing it during the Processed handler. So,
In the class:
int _samplesInProcess = 0; int _samplesInProcessMax = 0;
And in _mss_SampleRequested:
MediaStreamSample targetSample = MediaStreamSample.CreateFromBuffer(frameBuffer.AsBuffer(), TimeSpan.FromTicks(_transcodePositionTicks)); targetSample.Processed += (MediaStreamSample sampleSender, object sampleProcessedArgs) => { _samplesInProcess--; }; targetSample.Duration = TimeSpan.FromTicks(_frameDurationTicks); _samplesInProcess++; _samplesInProcessMax = Math.Max(_samplesInProcess, _samplesInProcessMax); args.Request.Sample = targetSample;
The idea is that _samplesInProcessMax would tell me the maximum number of samples that would ever be outstanding, which I had assumed, and figured I was confirming, would correspond to the buffer size set by MediaStreamSource.BufferTime. That turned out not to be the case at all.
Rather, after processing 10,000 frames at low-res (so it wouldn't crash), _samplesInProcessMax reported 2,944. In other words, there was at one point a 2,944 sample lag between SampleRequested getting called, and the sample being processed. That's about 1 GB worth of my 320x240x32 frames.
NO WONDER it's running out of RAM!
Further poking with breakpoints surprised me even more. The first time MediaStreamSample.Processed was called always seems to be based on the BufferTime, which is what I'd expect. That is, if I set the BufferTime to 1s, MediaStreamSample.Processed would be called with 30 _samplesInProcess. But after that things got very ugly. MediaStreamSample.Processed would only get called maybe 10 more times before the next sample was requested (in other words the buffer was not clear before the next sample was requested), but after that there would be a skip of 60 samples before MediaStreamSample.Process was called again, at which point there was an 80 sample backlog. Then Processed would be called a bunch more times, maybe 20, but then there would be a skip of 120 samples requested, so now the backlog was 180 samples. And so on. However, eventually the backlog plateaus at around the point that corresponds to 1GB of RAM.
There's clearly some kind of accumulating error here that is causing the transcoder not to process a sufficient number of frames before it asks for the new ones. It doesn't seem like a coincidence that the backlog roughly doubles for the first several rounds until critical memory limits are reached.
The question remains whether the bug is in my code or the framework. I'm certainly not above stupid and obvious mistakes, but I've been pouring over this problem for a week and in my experience when something is this hard to resolve it means there's a bug in the OS.
I sure hope I'm wrong! But since this is stuff is new to Windows 8.1, it's entirely plausible this is a bug that no one's yet caught because no one's tried to use a MediaStreamSource to create HD renderings with big enough frames to cause a crash.
If it is a bug, what are my options? Also what's my prize for finding it? :)
Peter
Friday, May 23, 2014 8:55 PM
Hello,
I'm not sure what is occurring in your code but hopefully it is just something simple that you are missing. I can look at this further but to expedite the process I would ask that you please create a Visual Studio 2013 sample project that is a simple as possible but still reproduces the problem. Zip up the project and upload it to your OneDrive. Once up loaded please post a link here. I will grab the project and take a look at it next week.
Thanks,
James
Windows SDK Technologies - Microsoft Developer Services -, May 23, 2014 11:37 PMModerator
James and all -
SUCCESS at last (I think). The solution was, in fact, to set MediaStreamSource.BufferTime to zero ticks. If BufferTime is set to zero ticks, MediaStreamSource seems never to ask for more than 1, or at most 2, samples before processing the sample it already has. However, if BufferTime is anything other than zero, MediaStreamSource will continue to ask for samples via SampleRequested, WITHOUT processing (and thus releasing) the ones it already has, at an exponentially increasing rate. This still clearly seems like a bug to me. Maybe it's limited to using MediaStreamSource with the transcoder. Perhaps the MediaStreamSource documentation omitted something the MS programmers just assumed, i.e., that when transcoding there's no need to buffer the MediaStreamSource. If that's the case, it's probably something that needs to be documented. :)
Anyway, I just thought I'd share the balance of my experience for the benefit of others. What I did was make a SamplePool class that was basically a list containing structures consisting of a MediaStreamSample and a"bool IsAvailable" member. The SamplePool also has a member "MakeSample." That member does the following: (1) look to see if there are any available samples in the pool; (2) if not, add one using MediaStreamSource.CreateFromBuffer; (3) copy the new sample data into the buffer of the available one, then re-create it CreateFromBuffer, re-using the old buffer and the new sample time; (4) mark the sample as unavailable.
The other critical step is to handle the Processed event; what that does is go back to the sample pool and free up (by changing IsAvailable) the sample that was released , PLUS (and this seems to be critical too) any earlier-timed sample in the pool. The reason for the latter step is that MediaStreamSource does indeed seem occasionally to skip releasing samples. If a sample's been released via the Processed handler, it appears safe to assume any sample with an earlier presentation time should also be released.
And so, with that, no more memory leak, and smooth video output.
But wait, there's more. As is well known, some graphics algorithms use an inverted Y axis (rather than top down, positive coordinates point up). Guess what? My video was turning out UPSIDE DOWN. I could manually flip the buffer, but the problem is some video cards do not do this, and I can't figure out a way to know from any C#-friendly API. Thus, I re-wrote my SamplePool class in C++ and now create the internal media buffers myself, using MFCreate2DMediaBuffer, and setting the "fBottomUp" flag to false. For some unfathomable reason, using MediaStreamSource.CreateFromBuffer does NOT allow you to control whether the buffer is bottom up or top down, but using the media foundation functions does allow this.
And now, at last, I can properly create computer-rendered video using MediaTranscoder and MediaStreamSource!
I'll post my code shortly after it's cleaned up and more presentable for you and all else to see. Anyway, thanks again for the info on re-using the buffer; I'd have never of figured that out otherwise.
PeterSunday, May 25, 2014 3:29 PM
Alrighty, here's the link to the code:
Now, currently it seems to be working. However, I'm a little less optimistic than I was in my last post because I'm still not exactly understanding what's going on. First and foremost, it seems to work fine with the C++ sample pool I wrote, or without it (i.e. creating a new MediaStreamSample each time). The key seems to be: (1) making the BufferTime 0, and (2) forcing a garbage collection at the end of each SampleRequest. If I do those two things, it apparently doesn't matter whether I use the SamplePool or just create a new sample each frame. Either way memory seems to stay steady and it doesn't crash.
I guess using the C++ SamplePool is better because I can force it not to make the image upside down when I create the MF2DMediaBuffer. But there's a lot about MF I don't understand yet, so I'm a little uneasy using the code in a production environment.
So, whoever is willing to take a look at the project I posted, if you happen to see anything wrong in the C++ SamplePool please let me know. Or, if you know a way to prevent the image inversion without resort to the C++ class (besides manually flipping it), I'd also love to hear it.
But, at least we seem to be making progress. One thing is certain - BufferTime MUST BE ZERO when transcoding from a MediaStreamSource.Sunday, May 25, 2014 11:08 PM
Hello,
Thanks for the feedback on the MediaTranscoder. We will put this on the list to investigate further.
I'm glad that you were able to find a solution. Let us know if you run into any other problems along the way.
-James
Windows SDK Technologies - Microsoft Developer Services -, May 29, 2014 10:01 PMModerator
For sure! Actually my C++ video writer is basically done so I thought I'd share a couple more observations. I realize this really isn't for this forum but I'm not sure how to PM you. :)
I noticed in the MF documentation that you can enable or disable "throttling" on the media sink, and the docs say this is normally enabled to prevent the app from delivering too many samples. That sure seems to be exactly what was going wrong with the transcoder/MediaStreamSource.
The other thing is that if I don't release both the sample and the buffer after IMFSinkWriter->WriteSample, the nasty memory leaks return with a vengence. Fortunately I don't need asynchronous functionality for this particular application (it's a dedicated video render and the app informs the user it's going to be unresponsive until it's done but for the cancel button), but I can easily imagine if you did use MF in asynchronous mode then trying to coordinate the release of used samples, and not overwhelm the garbage collector in the C# thread at the same time, could be a nightmare. Again, just the sort of problem I was experiencing with the RT transcoder.
Hope this helps. If you'd like I can privately send you my C++ code for my frame writer. (I sortof feel like it's a trade secret since I spent so much damn time figuring it out! :))Friday, May 30, 2014 3:26 PM
Please feel free to contact me through my personal blog:
Windows SDK Technologies - Microsoft Developer Services -, June 2, 2014 11:36 PMModerator
- I am running in to the same exact scenario you are, especially with too many requests for samples before they are processed and then too much RAM being consumed before the whole process craps out due to insufficient RAM, even with constant calls to GC.Collect. I think this is a bug and hope they fix it soon.Monday, July 7, 2014 3:16 AM
Hey Scot - Glad to know it wasn't just me. :) For what it's worth, though, using C++ it's working fine. I know the MF extensions are a pain to use with all the COM lingo, but while the code is tedious it's nothing particularly complex. The key (for me at least; recall I was transcoding) was that it let me allocate only one frame, ever, processing it and overwriting it once processing was done. Because I was controlling the encoder directly, I knew exactly when it was done with a frame and when it was safe to overwrite. You just have to make sure to do all the correct COM release calls.
The one drawback is that it has to be done synchronously, which obviously kills your app's performance and is exactly what you want to avoid in a tablet app. It's also inefficient since in theory you should be making the next frame while you're waiting for the GPU/disk to process the previous one. And it's probably not an option if you're trying to feed a MediaElement. The only way I know how to do that is the way they do it in the sample, feeding the sample requested args straight to a C++ module that does all the heavy lifting.
Either way, the bottom line appears to be that you just can't implement a MediaStreamSource entirely in C#. Between the non-deterministic memory allocation and the (apparently) non-deterministic nature of the media foundation libraries insofar as you can't seem to control how many frames it requests before it finishes with the ones it's already been given, you'll always have those two forces competing.Monday, July 7, 2014 9:54 PM
- you deserve a medal!!Monday, March 30, 2015 12:28 PM
Haha, well thank you that's very kind. But amazingly coincidentally, I just had to revisit this problem this past week. It turns out even my C++ based transcoding was having problems when things were going too fast. I was pulling my hair out for several days, trying everything I could think of. Searching the Internet, it's clear that lots of people are experiencing this issue, but no one has an answer.
In the end, nothing could stop the memory leaks. Media Foundation has basically proven unusable to me for writing (it still seems to work ok for read ops, for now). So, just this weekend, I went through the painstaking process of building an LGPL-compatible build of the ffmpeg libraries, and will be using that for my write operations. And gee, what do you know, no memory leaks whatsoever. (Fortunately they released a WinRT build not too long ago).
So that's that.
Monday, March 30, 2015 6:52 PM | https://social.msdn.microsoft.com/Forums/en-US/e82113b0-8b6f-4da9-b1a5-36e81ff30284/custom-mediastreamsource-and-memory-leaks-during-samplerequested?forum=winappswithcsharp | CC-MAIN-2018-30 | refinedweb | 4,354 | 55.13 |
Sorting images based on their position on the pagetusharde Sep 23, 2013 6:19 PM like this. Does anyone have any ideas on how i could create an individual array for Row1, Row2, Row3... etc based on its Y position and then sort them based on their X positon and then create the master array by combining them in sequence?
I am new to javascript so this is a little out of my league. Any help will be much appreciated.
1. Re: Sorting images based on their position on the pageDaveSofTypefi Sep 23, 2013 8:18 PM (in response to tusharde)
At the level of detail you use to describe your script it works on the page you show. I suspect a mistake in the sorting code. Can you share it so we can react to it?
Dave
2. Re: Sorting images based on their position on the pagetusharde Sep 24, 2013 8:42 AM (in response to DaveSofTypefi)
Hi Dave. Thanks for the quick response. The image above is an example of a script i have already written. The script takes all the selected images and creates a tightly fit grid like the image above. That's working perfectly.
Now, i am trying to add in a functionality where in a user can move the position of images just by dragging and dropping them in general area of where the image should be. Then, when you run the script again it sorts the array of the entire selection based on the each items postion on the page and then creates the same tightly fit grid.
I decided to do an individual test to see if the sorting works before i include it in the "image grid" script. Here's where i am with that test.
When i run the follwing script on two layouts (image below) it works perfectly well in "TEST 1" when all the boxes are aligned on the "Y" co-ordinate. But fails on "TEST 2" when they are slightly offset. This is critical because i would like to give the user the ease of use of just roughly placing the images, rather than perfectly aligning them.
var blocs = app.selection; var newArr = blocs.sort(byYX); function byYX(a,b) { var aY = a.geometricBounds[0], bY = b.geometricBounds[0], aX = a.geometricBounds[1], bX = b.geometricBounds[1], dy = aY-bY, dx = aX-bX; return dy?dy:dx; } for(var i = 0; i < newArr.length; i++) { $.write(newArr[i].contents + "\r") } // TEST 1 CONSOLE LOG: 123456789 // TEST 2 CONSOLE LOG: 251346789
I wracked my brain a little more and i think I may have some sort of solution. I added some Math.floor to the x/y values to get an average for the array sort. This seems to work in some cases, but the rounding value is very fickle and needs to be adjusted to get the right setting. Too high and it grabs boxes from rows beyond itself. Too low and it misses items in the same row. Any thoughts on this technique?
var value = .5; var blocs = app.selection; var newArr = blocs.sort(byYX); function byYX(a,b) { var aY = Math.floor(a.geometricBounds[0] * value) / value, bY = Math.floor(b.geometricBounds[0] * value) / value, aX = Math.floor(a.geometricBounds[1] * value) / value, bX = Math.floor(b.geometricBounds[1] * value) / value, dy = aY-bY, dx = aX-bX; return dy?dy:dx; } for(var i = 0; i < newArr.length; i++) { $.write(newArr[i].contents + "\r") }
3. Re: Sorting images based on their position on the pageDave Saunders Sep 24, 2013 9:02 AM (in response to tusharde)
If this doesn't help, I'll look more deeply this evening:
The first part of a a?b:c statement must be logical. But dy is a number.
Dave
4. Re: Sorting images based on their position on the pageMarc Autret Sep 24, 2013 4:07 PM (in response to tusharde)
No time to test, but I think you have to make a important distinction between x-precision and y-precision:
var X_PRECISION = 1, Y_PRECISION = 50, mFLOOR = Math.floor; var byYX = function F(a, b) { a = F.data['_'+a.id]; b = F.data['_'+b.id]; return (a[1]-b[1])||(a[0]-b[0]); }; byYX.data = {}; var value = .5, blocs = app.properties.selection || null, newArr, i, t, k, o; if( blocs ) { o = byYX.data; i = blocs.length; while(i--) { k = '_'+(t=blocs[i]).id; t = t.geometricBounds; o[k] = [X_PRECISION*mFLOOR(t[1]/X_PRECISION), Y_PRECISION*mFLOOR(t[0]/Y_PRECISION)]; } newArr = blocs.sort(byYX); for( i=0 ; i < newArr.length ; ++i ) { app.select(newArr[i]); $.sleep(1000); } }
@+
Marc
EDIT: The code above is just a raw approximation of the algorithm to illustrate my point. With regard to your purpose, comparing x-locations is not a problem as soon as rows (viz. y-locations) are properly computed. Hence, the whole problem is to determine under which condition two items belong to the same row, in terms of y-location constraint. In my routine I use a high Y_PRECISION factor (50) in order to attract elements with a similar y-location to the same row. But this y-similarity is only based on a basic Math.floor, so my algorithm isn't really smart. In fact, your introductory example shows that the property of belonging-to-the-same-row is much more complex and should take into consideration what x-location is already occupied or still empty. Anyway, my code is intended to show you a way to precompute relevant locations before you sort the array. This has two advantages: first, you will significantly speed up the sorting—as the byYX function doesn't need to re-access DOM objects and their geometric bounds. Secund, you can then separate the data to sort and the sort in itself, which allows you to refine the key values—row and col parameters—independently.
5. Re: Sorting images based on their position on the pageMarc Autret Sep 25, 2013 2:19 AM (in response to tusharde)
Hi again,
I found an idea that looks promising to address the issues mentioned above. The main problem, as you've probably noticed, is that we cannot simply rely on (x,y) coordinates 'as they are', even after having rounded the values, to extract the implied rows and columns.
When we study the disposition below:
our eyes instantly detect that there should be 3 columns and 5 rows, but this underlying order isn't instantly reached from just sorting the set of coordinates. We need to improve the) values. So we can compute the final order, i.e. the weights // --> {min:[xLeft, yTop], weight:[x,y], max:[xRight,yBottom], id}[] final weights, clean up data, create ID-to-weight access // --- for( i=0 ; (i < n)&&(t=data[i]) ; ++i ) { w = n*t.weight[1] + t.weight[0]; // final weight (y first)); } }
@+
Marc
6. Re: Sorting images based on their position on the pagetusharde Sep 25, 2013 7:51 AM (in response to Marc Autret)
Marc,
Wow!! What is this, some kind of black magic? Kidding, of course. This is waaay out of my league. I'm blown away. Amazing.
Thank you so much. I tested it with a few scenarios and it worked flawlessly.
Your approach to the problem is smart and makes sense. I understood the logic, now i need to learn how you executed it. I'm going to read your code over and over to figure out what you did there.
Thanks again, Marc. Very cool.
-Tushar | https://forums.adobe.com/message/5711651 | CC-MAIN-2018-30 | refinedweb | 1,256 | 67.35 |
Comment on Tutorial - How to Send SMS using Java Program (full code sample included) By Emiley J.
Comment Added by : Ron
Comment Added at : 2013-06-21 00:58:27
Comment on Tutorial : How to Send SMS using Java Program (full code sample included) By Emiley J.
I need some help. I am a programmer - but quite new to Java.
I have copied the supplied source files to my home directory - and installed the comm module.
I have created this small test wrapper (sendSMS.java in my home directory too):
public class sendSMS {
public static void main(String args[]) {
SMSClient smsc = new SMSClient(1);
int ret = smsc.sendMessage("+61410541281", "Test message");
}
}
But when I try to compile I get this:
/home/ron > javac sendSMS.java
sendSMS.java:3: error: cannot find symbol
SMSClient smsc = new SMSClient(1);
^
symbol: class SMSClient
location: class sendSMS
sendSMS.java:3: error: cannot find symbol
SMSClient smsc = new SMSClient(1);
^
symbol: class SMSClient
location: class sendSMS
2 errors
Can someone guide me as to what I need to do?
Cheers,
R. ok good.But what about plug-in's Information.
View Tutorial By: Ramarao at 2010-01-31 07:32:02
2. thanks this material is very helpful for me
View Tutorial By: Anonymous at 2009-05-09 08:04:31
3. Try this
import java.net.*;
View Tutorial By: Asad at 2014-02-13 05:55:26
4. plz can u give me code for key excange b/n pda &am
View Tutorial By: visnas at 2009-05-04 20:18:32
5. hi, IF ANY ONE REALLY INTRESTED TO RUN THIS PROGRA
View Tutorial By: sunil kumar sahu at 2010-10-09 11:50:56
6. i am getting the following error:
cannot fi
View Tutorial By: santhosh at 2009-09-24 09:13:57
7. Whne i run this code ,it does not return any ports
View Tutorial By: amit at 2009-06-10 02:11:13
8. Can some body throw more light on TRANSIENT ....
View Tutorial By: Anish at 2010-08-03 07:30:58
9. excellent explanation...............Thanks for mak
View Tutorial By: Kiran Leo at 2012-12-08 06:41:26
10. thnx for the nice tutorial. but i need to create a
View Tutorial By: sharath at 2013-09-07 16:29:19 | https://www.java-samples.com/showcomment.php?commentid=39246 | CC-MAIN-2021-49 | refinedweb | 386 | 66.23 |
04/19/2018 by Johannes Schnatterer in Software Craftsmanship
Coding Continuous Delivery—Jenkins pipeline plugin basics.
Continuous Delivery has proven its worth as a suitable approach in agile software development for the release of high-quality reliable and repeatable software in short cycles. Software changes are subject to a series of quality assurance steps before they reach production. A typical Continuous Delivery pipeline could look as follows:
- Build
- Unit tests
- Integration tests
- Static code analysis
- Deployment in a staging environment
- Functional and/or manual tests
- Deployment in production
It is essential here to automate all of the steps; this is typically done using Continuous Integration servers such as Jenkins.
Conventional Jenkins jobs are good for automating individual steps of a Continuous Delivery pipeline. However, because each step builds on the previous one, their order must be retained. If you’ve ever set up and run a Continuous Delivery pipeline using conventional Jenkins jobs (or other CI tools without direct pipeline support), then you’re probably already aware how quickly this can get complicated. Individual jobs - often complemented with countless pre- and post-build steps - are chained. Meaning you have to trawl through job after job to understand what’s going on. In addition, such complex configurations cannot be tested or versioned, and must be set up again for every new project.
The Jenkins pipeline plugin can help here. It enables definition of the entire pipeline as code in a central location using a Groovy DSL in a versioned script file (
Jenkinsfile). There are two styles of DSL to choose from: an imperative, rather scripted style (referred to hereafter as scripted syntax) and, since February 2017, also a declarative style (referred to hereafter as declarative syntax). Declarative syntax is a subset of scripted syntax and, with its predefined structure and more descriptive language elements, offers a basic structure (similar to a Maven POM file) that can make it easier to get started. Despite being more verbose and less flexible, the outcome is build scripts that can be understood more intuitively than those formulated in scripted syntax. While it offers almost all of the freedom that the Groovy syntax brings with it (see (Jenkins Pipeline Syntax Differences to Groovy) for limitations), it may require greater familiarity with Groovy. The decision for one style or the other can currently still be considered a question of taste; it is not foreseeable whether one of the two will prevail and ultimately supplant the other. That being said, more recent official examples are mostly written in declarative syntax and there is also a visual editor designed only for this. To enable readers a direct comparison, the examples provided in this article are formulated in both styles.
Key concepts
The description of a build pipeline with the Jenkins pipeline DSL can essentially be broken down into stages and steps. Stages are freely selectable groups of steps in a pipeline. Points 1. to 8. in the above sample pipeline could each represent one stage, for example. Steps are commands describing concrete build steps that are ultimately executed by Jenkins. One stage therefore contains one or more steps.
In addition to at least one stage with one step, a build executor must also be allocated for a minimum pipeline definition - so a Jenkins build slave, for example. This occurs in the
agent section in declarative syntax and in the
node step in scripted syntax. In both styles, labels can be used to describe the executor further in order to ensure that it fulfils certain conditions (e.g., makes a certain version of Java or a Docker installation available).
Before setting up the first pipeline, we’ll first explain the different types of pipeline jobs that Jenkins offers:
- Pipeline: a simple pipeline job that expects script definition directly via the Jenkins web interface or in a
Jenkinsfilefrom the source code management (SCM).
- Multibranch pipeline: enables specification of an SCM repository with several branches. If a
Jenkinsfileis found in a branch, the pipeline defined there will be executed in the event of changes to the branch. A Jenkins job is created on the fly for each branch.
- GitHub organization: a multibranch pipeline for a GitHub organization or user. This job scans all repositories of a GitHub organization and creates a folder containing a multibranch pipeline for each of the repositories whose branches contain a
Jenkinsfile. So it is essentially a nested multibranch pipeline.
See (Triology Open Source Jenkins), for example.
First steps
In order to familiarize yourself with the possibilities offered by the pipeline plugin, it’s a good idea to start with the simplest of setups. Create a project without a
Jenkinsfile by defining the pipeline script directly in a pipeline job via the Jenkins web interface.
When getting started, it is important to be aware that for every type of pipeline job, links to documentation on the pipeline features available in the current Jenkins instance can be found on the Jenkins web interface. The basic steps (Jenkins Pipeline Basic Steps) available in all Jenkins instances can be complemented with additional plugins. If you click on pipeline syntax in the job, you will be directed to the Snippet Generator. There is also a universal link to this in every Jenkins instance. The URL is.
Example:
The Snippet Generator is a handy tool for the transition from previous Jenkins jobs that you “clicked together” to the pipeline syntax. Simply use the mouse to assemble your usual build job components here and generate a snippet in pipeline syntax (see Figure 1). You can also use the Snippet Generator later on to familiarize yourself with the syntax of new plugins, or if your IDE doesn’t have autocompletion. Speaking of autocompletion, the Snippet Generator contains further links to useful information:
- The global variables available for this instance. This includes environment variables, build job parameters and information on the current build job. Example:
- Detailed documentation for all steps and classes available for this instance, along with the associated parameters. Example:
- An IntelliJ Groovy DSL script (GDSL) file to activate autocompletion.
See this blog post to find out how it works (Jenkins Pipeline Code Completion).
Pipeline scripts
With this knowledge, you can now begin writing pipeline scripts. The basic features of the Jenkins pipeline plugin will be described using a typical Java project. We’ll use the example of WildFly’s kitchensink quickstart, a typical JEE web app based on CDI, JSF, JPA, EJB, JAX-RS and integration tests with Arquillian, here.
In declarative syntax, a minimal pipeline script to build this project in Jenkins looks as shown in Listing 1.
pipeline { agent any tools { maven 'M3' } stages { stage('Checkout') { steps { git '' } } stage('Build') { steps { sh 'mvn -B package' } } } }
Listing 1
The script shown in Listing 1
- allocates a build executor,
- obtains the Maven instance configured in Tools,
- checks out the default branch of the Git URL, and
- triggers a non-interactive Maven build.
The uniform structure of the declarative pipeline is shown here. Each pipeline is enclosed within a
pipeline block, which is in turn comprised of sections and/or directives (Jenkins Pipeline Declarative Syntax). Among others, this reflects the stage and step concepts described above.
Listing 2 shows the pipeline from Listing 1 in scripted syntax:
node { def mvnHome = tool 'M3' stage('Checkout') { git '' } stage('Build') { sh "${mvnHome}/bin/mvn -B package" } }
Listing 2
The concept of stages can also be seen in the scripted syntax, though they then directly contain the steps and there aren’t any sections or directives. The terms nodes, tools, stages, git, etc., are referred to as steps here. This syntax allows significantly more freedom. Unlike declarative syntax that must always be enclosed within a
pipeline block, steps in scripted syntax can also be executed outside of the
node block. This makes clear why declarative syntax is a subset of scripted syntax: the
pipeline block enclosing the declarative pipelines is essentially a step in the scripted syntax.
For these scripts to be executable on a Jenkins instance, a Maven installation with the name
M3 simply needs to be created in Jenkins 2.60.2 on Linux in delivery state under “Global Tool Configuration.” This can then be defined and executed using the
tools declarative or step in the pipeline job. In the background, specifying the
M3 tool leads to Maven being made available on the current build executor. If necessary, it is installed and announced in the PATH.
In the classic Jenkins theme, the build outcome is shown in a stage view detailing the stages and their times - as in Figure 2 below:
The official Jenkins theme Blue Ocean (BlueOcean) offers a view designed specifically for pipelines. The significantly more modern UX enables very straightforward work with pipelines. You are able to see at a glance which steps are executed during each stage and view the console log for specific steps at the click of the mouse, for example. There is also a visual editor for declarative pipelines (BlueOceanVisualEditor). If the plugin is installed, you can use links in each build job to switch between Blue Ocean and Classic. Figure 3 shows the pipeline from Figure 2 in Blue Ocean.
The first pipeline examples shown above will now gradually be expanded to illustrate basic pipeline features. Each of the changes will be shown in both declarative and scripted syntax. The current status of each extension can be tracked and tested in (Jenkinsfile Repository GitHub). There is a branch here containing the full examples in declarative and scripted syntax for each section under the number indicated in the heading. The outcome of the builds for each branch can also be viewed directly in our Jenkins instance (Triology Open Source Jenkins (jenkinsfile)).
Conversion to SCM /Jenkinsfile (1)
One of the biggest advantages of Jenkins pipelines is that you can put them under version control. The convention is to store a
Jenkinsfile in the repository’s root directory. The aforementioned pipeline job can then be converted to “Pipeline script from SCM” (see Figure 4) or a multibranch pipeline job can be created.
The URL for the SCM (in this case, Git) is configured in the job. You can check in the pipeline scripts as shown above, although the repository URL would then be repeated in the repository, which would violate the (Don’t Repeat Yourself!) principle. The solutions are different depending on the syntax:
- Declarative: the checkout is done in the
agentsection by default, meaning that the checkout stage can be completely omitted here.
- Scripted: checkout is not done by default, however there is the
scmvariable containing the repository URL configured in the job as well as the
checkoutstep containing the SCM provider configured for the job (in this case, Git).
stage('Checkout') { checkout scm }
Improving readability with your own steps (2)
Having Groovy as the basis for the pipeline makes it very easy to expand the existing steps. Running Maven can be expressed more clearly in this example by enclosing it in a separate method method (see Listing 3).
def mvn 3
This allows it to be called in both declarative and scripted syntax as follows:
mvn 'package'
Definition of the tools is moved to the method.
This move to a method means that Maven parameters useful to execution in Jenkins (batch mode, issue Maven version, update snapshots, issue failed tests on console) are separated from the Maven parameters of interest to the respective execution (here, the
package phase). This enhances readability as only the key parameters are passed during execution. Furthermore, there’s no need to repeat the parameters, which should be given with every execution anyway.
A specific JDK is also used here. Similar to Maven, this requires installation of a JDK called
JDK8 in the “Global Tool Configuration.” This makes the build more deterministic, though, as Jenkins’ JDK is not implicitly used but rather an explicitly named one.
This Maven method is taken from the official examples Pipeline Examples).
A method such as
mvn is a good candidate for the move to a shared library. This will be described later in this article series.
Division into smaller stages (3)
Similar to when writing methods or functions in software development, small stages also make sense for the maintainability of pipeline scripts. This division will allow you to see at a glance where something has gone wrong with failing builds. The times are also measured by stage, whereby it can quickly be seen which parts of the build require the most time.
The Maven build in this example can be subdivided into a build and unit test. Integration tests are run at this point for the first time, too. In this example, Arquillian and WildFly Swarm are used.
Listing 4 shows how this is done in declarative syntax.
stages { stage('Build') { steps { mvn 'clean install -DskipTests' } } stage('Unit Test') { steps { mvn 'test' } } stage('Integration Test') { steps { mvn 'verify -DskipUnitTests -Parq-wildfly-swarm ' } } }
Listing 4
In scripted syntax, the
steps sections are omitted.
One disadvantage worth mentioning here is that this will make the entire build slightly slower, as different Maven phases are completed in several stages. This has already been optimized here by only calling clean once at the start and the pom.xml is extended by a
skipUnitTests property that prevents repetition of the unit tests during the integration test stage.
There is generally a danger of port conflicts during integration tests. The infrastructure may bind ongoing builds to the same ports simultaneously, for example, which will lead to unexpected build failures. This can effectively be avoided by using Docker, which will be described later in this article series.
End of pipeline run and handling failures (4)
There are usually steps that must always be executed at the end of a pipeline run, regardless of whether the build was successful or not. The best example of this are test results. If a test fails, the build can also be expected to fail. However, the test results should be recorded in Jenkins in each case.
In addition, in the case of build failures or if the build status changes, a special reaction should occur. Sending emails is common here; chat messages or similar are another option.
In both cases, similar syntactic concepts are specified in pipelines. However, different approaches apply for declarative and scripted syntax.
In general, Groovy language features like
try-catch-finally blocksare available in both cases. This is not ideal, though, as caught exceptions then don’t have any impact on the build status.
In declarative syntax, the
post section with the conditions
always, changed, failure, success and
unstable is available here. This enables clear definition of what should happen at the end of each execution.
The above scenario can be depicted as shown in Listing 5.
post { always { junit allowEmptyResults: true, testResults: '**/target/surefire-reports/TEST-*.xml, **/target/failsafe-reports/*.xml' } changed { mail to: "${env.EMAIL_RECIPIENTS}", subject: "${JOB_NAME} - Build #${BUILD_NUMBER} - ${currentBuild.currentResult}!", body: "Check console output at ${BUILD_URL} to view the results." } }
Listing 5
It is worth mentioning here that it is also possible to use the existing Jenkins email mechanism very easily. This is explained later in this section. The origin of the recipients’ email addresses is also relevant here. In the example, these are loaded in the
EMAIL_RECIPIENTS environment variable. This must be determined by an administrator in the Jenkins configuration. Alternatively, you can of course also write the recipients directly in the
Jenkinsfile. They will also be checked into the SCM then though.
In scripted syntax, only the
catchError step is available that essentially works like a
finally block. To depict the above scenario, you’ll need to work with
if conditions. For reasons of maintainability, we also recommend defining an individual step here (see Listing 6).
node { catchError { // ... Stages ... } junit allowEmptyResults: true, testResults: '**/target/surefire-reports/TEST-*.xml, **/target/failsafe-reports/*.xml' statusChanged { mail to: "${env.EMAIL_RECIPIENTS}", subject: "${JOB_NAME} - Build #${BUILD_NUMBER} - ${currentBuild.currentResult}!", body: "Check console output at ${BUILD_URL} to view the results." } } def statusChanged(body) { def previousBuild = currentBuild.previousBuild if (previousBuild != null && previousBuild.result != currentBuild.currentResult) { body() } }
Listing 6
As previously mentioned, the subject of emails can be simplified in both declarative and scripted syntax by using the existing Jenkins email mechanism. The familiar “Build failed in Jenkins” and “Jenkins build is back to normal” emails will be sent here. The
Mailer class will be used for this for which there is no dedicated step. This is possible via the generic
step step. If you also wish to receive “Back to normal” emails, you will need to note one particularity: the
Mailer class reads the value from the
currentBuild.result variable. In case of success, this will be placed right at the end of the pipeline. Meaning that the
Mailer class will never learn of this. Implementation as a separate step is therefore advisable here. In scripted syntax, this can be realized as shown in Listing 7. The same solution can also be used with declarative syntax though.
node { // ... catchError und nodes mailIfStatusChanged env.EMAIL_RECIPIENTS } def mailIfStatusChanged(String recipients) { if (currentBuild.currentResult == 'SUCCESS') { currentBuild.result = 'SUCCESS' } step([$class: 'Mailer', recipients: recipients]) }
Listing 7
With regard to notification with HipChat, Slack, etc., we recommend reading the following Jenkins blog entry: (Jenkins Notifications).
Properties and archiving (5)
Countless more minor settings are available for conventional Jenkins jobs, which can be executed via the web interface. These include the size of the build history, prevention of parallel builds, etc. These are of course described in the
Jenkinsfile if the pipeline plugin is used. In declarative syntax, these settings are known as
options and structured as depicted in Listing 8.
pipeline { agent any options { disableConcurrentBuilds() buildDiscarder(logRotator(numToKeepStr: '10')) } stages { /* .. */ } }
Listing 8
In scripted syntax, the
options are known as
properties and are set using the step with the same name (see Listing 9).
node { properties([ disableConcurrentBuilds(), buildDiscarder(logRotator(numToKeepStr: '10')) ]) catchError { /* ... */ } }
Listing 9
Another useful step is
archiveArtifacts. This saves artifacts created during the build (JAR, WAR, EAR, etc.) so that they can be viewed in the Jenkins web interface. This can be useful for debugging or to archive versions if you don’t use a Maven repository. In declarative syntax, it is formulated as shown in Listing 10.:
stage('Build') { steps { mvn 'clean install -DskipTests' archiveArtifacts '**/target/*.*ar' } }
Listing 10
In scripted syntax, the
steps sections are omitted. This saves all JARs, WARs and EARs generated in one of the Maven modules.
Tips for getting started
There are a few other basic directives such as
parameters (declared build parameters) and
script (to execute a block in scripted syntax within the declarative syntax) - we recommend reading the information on (Jenkins Pipeline Declarative Syntax). There are also many other steps, most of which are made available via plugins - see the official documentation (Pipeline Steps Reference). For these to be available in the pipeline, developers must use the according API. The pipeline compatibility of individual plugins is summarized here (Pipeline Compatibility). At the current time, most of the common plugins support the Jenkins pipeline plugin.
We also recommend reading the Top 10 Best Practices for Jenkins Pipeline Plugin (Pipeline Best Practices).
And now for a few handy tips on working with the
Jenkinsfile. When you first set up a pipeline, we recommend you start with a normal pipeline job and only add the
Jenkinsfile to version control when the build works. You otherwise run the risk of bloat your commit history.
Use the “Replay” feature when making changes to an existing multibranch pipeline. This temporarily enables editing for the next execution of the pipeline without changes being made to the
Jenkinsfile in the SCM. One final tip: you are also able to view the workspace in the Jenkins web interface for pipelines. The
agent section or
node step allows you to allocate several build executors. This is described in greater detail with the subject of parallelism later in this article series. There can therefore also be several workspaces. These can be viewed in the classic theme:
- Click on “Pipeline Steps” in a build job.
- Click on “Allocate node: Start”.
- The familiar “Workspace” link will appear on the left-hand side.
Conclusion and outlook
This article provides insights into the basics of the Jenkins pipeline plugin. It describes the key concepts and terms as well as the different types of jobs, provides an introduction to the
Jenkinsfile syntax in theory and by using examples, and offers practical tips for working with pipelines. If the Continuous Delivery pipeline described at the start of this article is taken as the common theme, then this article ends at 5.. The example described configures Jenkins, builds the code, runs unit and integration tests, archives test results and artifacts, and sends emails. All with a script of around thirty lines in length.
With regard to Continuous Delivery, steps such as static code analysis (e.g., with SonarQube) as well as deployments on staging and productive environments are of course missing here. A number of tools and methods can be used to implement these, such as nightly builds, reuse in different jobs, unit testing, parallelism, and Docker.
This will be described later in this article series. | https://cloudogu.com/en/blog/continuous_delivery_1_basics | CC-MAIN-2019-04 | refinedweb | 3,514 | 54.42 |
The awesome thing about Python is that if you can think of something you’d like to code, there’s probably already a library for it! In fact, Python provides many tools, utilities and other resources to make any coding job easier. From dictionaries to collections to itertools to generators, there are lots of ways to write better, more concise Python code, many of which you may be under-utilizing or not even aware of.
But all of these great options mean there might be dozens of ways to write a unit of business logic. This really comes into effect when you’re reading other developers’ code. Ever feel like this?
It’s not just other people’s code, though, I sometimes feel like this when I read my own code after a couple of weeks off!
Fortunately, writing better Python code isn’t hard. As a precursor to this article, I encourage you to choose a well-accepted style guide if our goal is to write better code. Style guides help you stick to a format, making it easier to read code written by people on your team – and your own code, as well! I personally prefer “The Hitchhiker’s Guide To Python.” It’s extensive and well written.
It’s also a good idea to abide by well-known software principles like DRY (Don’t Repeat Yourself) and KISS (Keep It Simple, Stupid!). Finally, never skimp out on tests and documentation. They might be boring, but they will save you in the long run.
This article will introduce you to the top 10 ways you might not be fully taking advantage of to write better Python code, including:
- Lambda Function Dictionaries
- Dictionary Access
- Counter Collections
- Default Dictionaries
- Combinations and Permutations
- Groupby
- Generator Functions
- UUIDs
- Arrays
- Unpacking Arguments With “_” And “*”
Now, let’s get into the fun bit! It’s time to write better Python code!
How To Write Better Python Code
All of the methods outlined below make you a better Python developer, but don’t feel like you have to tackle all of them at once. You’ll get more mileage out of consistently applying some of them, rather than inconsistently applying all of them.
1 — Lambda Function Dictionaries
Did you know that dictionaries can also store Lambda functions? Lambda functions are those single-line, nameless functions, which can prove quite useful when performing minor alterations to data.
Normally, you would just store a Lambda function in a variable to be called later. For example:
square = lambda num: num * num square(5)
Output:
25
However, if you want to group multiple common Lambdas together, they can be stored in dictionaries:
lamb = {'sum': lambda x, y: x + y, 'diff': lambda x, y: x - y} lamb['sum'](6,5)
Output:
11
Or try:
lamb['difft'](25, 16)
Output:
9
2 — Access Dictionary Elements Elegantly
As wonderful as dictionaries are, I always fear the day my code crashes because I accessed an unavailable key. The problem most often crops up when interfacing with external APIs because in many cases, keys are only present when certain conditions are met. If you access a key that is not present an error will be thrown.
I would usually write:
s = {“name”: “swaathi”, “id”: “O24851”, “emp”: True} n = {“name”: “nick”, “emp”: False} if s.has_key(“id”): print s[“id”] else: print “not an employee”
However, the more idiomatic way to write this is:
s.get(“id”, “not an employee”)
Output:
O24851
Or try:
n.get(“id”, “not an employee”)
Output:
not an employee
3 — Counter Collections
Python collections are a powerful data structure that can be very helpful if used properly. One of the simplest ways to make use of a collection is as a counter. In other words, use a collection to count the number of elements in a list, or the number of letters in a list of words, and so on.
For example:
import collections A = collections.Counter([1, 1, 2, 2, 3, 3, 3, 3, 4, 5, 6, 7]) A
Output:
Counter({3: 4, 1: 2, 2: 2, 4: 1, 5: 1, 6: 1, 7: 1})
This definitely beats looping over and incrementing a counter each time! You can also query it for more information, such as the highest element occurrence:
A.most_common(1)
Output:
[(3, 4)]
Or try:
A.most_common(3)
Output:
[(3, 4), (1, 2), (2, 2)]
4 — Default dictionary
This has been my savior so many times! In our second point, we saw how to use the “get” function to read dictionary keys safely. Well, if we had used a default dictionary, that wouldn’t have even been necessary!
With default dictionaries, you can set the default data type for null values. This is helpful when you’re analyzing data and need everything to be of a particular data type. Otherwise, you’d be writing more ‘if statements’ than actual code!
import collections t = collections.defaultdict(int) t['a']
Output:
0
You can also change the default value to be a null string instead:
t = collections.defaultdict(str) t['a']
Output:
“”
5 — Combinations and Permutations
The itertools module is a collection of functions that enhance processing of iterators. Itertools contain many prebuilt methods useful in data analytics or machine learning. For example, you can easily create a matrix of all possible combinations and permutations using:
import itertools shapes = ['circle', 'triangle', 'square',] itertools.combinations(shapes, 2)
Output:
('circle', 'triangle') ('circle', 'square') ('triangle', 'square')
You can also create permutations:
import itertools shapes = ['circle', 'triangle', 'square',] itertools.permutations(shapes)
6 — Groupby
Sometimes when interfacing with external APIs or parsing data, you might need to group a list of items. Traditionally, you would use multidimensional loops to achieve this, but in true Pythonic fashion, you can use a function instead. In my opinion, this is Itertools’ most useful function. For example:
from itertools import groupby things = [("animal", "bear"), ("animal", "duck"), ("plant", "cactus"), ("vehicle", "speed boat"), ("vehicle", "school bus")] for key, group in groupby(things, lambda x: x[0]): for thing in group: print("A %s is a %s." % (thing[1], key)) print("")
Output:
A bear is an animal. A duck is an animal. A cactus is a plant. A speed boat is a vehicle. A school bus is a vehicle.
Groupby takes in a list and a function that returns the value to group by. In the above example, the Lambda function returns the first value of the set. This is used to group the list.
An important aspect of the groupby function is that you need to pass it a sorted list. It simply creates a new group when it encounters a new key. It will not retroactively update a previously created group.
7 — Generator Functions
Python generator functions are computations that are performed when called upon, rather than at invocation. Generator functions return lazy iterators which are commonly used when interfacing with I/O devices, such as reading from a file.
Generator functions utilize memory efficiently, and avoid memory leaks at the source. For example:
def my_gen(): n = 1 print('This is printed first') # Generator function contains yield statements yield n n += 1 print('This is printed second') yield n n += 1 print('This is printed at last') yield n for item in my_gen(): print(item)
Output:
This is printed first 1 This is printed second 2 This is printed at last 3
8 — Generating UUIDs
Sometimes you just need a random string in order to tag some information. If you use a timestamp or a random function, there are chances that it will clash and generate duplicate tags.
In these cases, you should be generating a UUID instead. UUIDs are randomized 128-bit numbers, guaranteed to be unique every time you call them. In fact, there are over 2¹²² possible UUIDs that can be generated. That’s over five undecillion (or 5,000,000,000,000,000,000,000,000,000,000,000,000).
import uuid user_id = uuid.uuid4() user_id
Output:
UUID('7c2faedd-805a-478e-bd6a-7b26210425c7')
9 — Commonly used array functions
Here are some of my favorite array functions that I frequently use:
Range: used to create a sequence of numbers by specifying start index, end index, and step:
list(range(0,10,2))
Output:
[0, 2, 4, 6, 8]
Sum, min, max: used to sum elements in an array, or else find the min and max values:
min(array), max(array), sum(array) print(array.min(), array.max(), array.sum())
Any and all: useful in performing quick checks, such as checking if either any or all elements meet a truth condition:
any(a % 3==0 for a in range(0,10,2))
Output:
True
Or try:
all(a % 3==0 for a in range(0,10,2))
Output:
False
Or try:
all(a % 3==0 for a in range(0,10,2))
Output:
True
10 — Unpack Arguments With _ And *
If you only need the first few elements of an array, you can use the underscore operator to extract it:
numbers = [1, 2, 3] a, b, _ = numbers a
Output:
1
Alternatively, if you need to extract the first few and last few elements, you can use the star operator as a catchall:
long_list = [x for x in range(100)] a, b, *c, d, e, f = long_list [a, b, d, e, f]
Output:
[0, 1, 97, 98, 99]
Here the *c acts as a catchall and stores any length of elements.
Summary
And there you have it, my top ten tips that will have you writing better Python code in no time!
- Download a copy of ActivePython for free to get started right away.
But there’s one last thing before you go. In order to maintain the zen of this article, it is only reasonable to end the article with a meme, so here’s an easter egg for you. Go to your Python console and type in:
import antigravity
Output:
| https://sweetcode.io/top-10-ways-to-write-better-python-code/ | CC-MAIN-2021-25 | refinedweb | 1,647 | 60.14 |
csPluginLoader Class ReferenceThis utility class helps to load plugins based on request, config file, and commandline. More...
#include <csutil/plugldr.h>
Detailed DescriptionThis utility class helps to load plugins based on request, config file, and commandline.
Definition at line 65 of file plugldr.h.
Constructor & Destructor Documentation
Initialize.
Deinitialize.
Member Function Documentation
Load the plugins.
A shortcut for requesting to load a plugin (before LoadPlugins()).
If you want this class to register the plugin as a default for some interface then you should use the interface name as the tag name (i.e. 'iGraphics3D'). Note that plugins requested with some tag here get lowest precendence. The commandline has highest priority followed by the config file. If after this no plugin with the given tag exists then RequestPlugin() will work.
The documentation for this class was generated from the following file:
Generated for Crystal Space 1.0.2 by doxygen 1.4.7 | http://www.crystalspace3d.org/docs/online/api-1.0/classcsPluginLoader.html | CC-MAIN-2013-48 | refinedweb | 152 | 50.84 |
Hello everyone,
im just getting a little practice on working with strings recently so i decided to try my hand at a chatbot, i wrote a super simple example and one of the responses works... but none of the others do.
thanks in advance.thanks in advance.Code:#include <iostream> #include <string> using namespace std; //version 0.0.0.2 basic chatbot string pb[3] = {"hello","how are you","what is your name"}; string rb[3] = {"Hi there","I am fine","My name is 01-1 Tranquil."}; void init() { //to be later filled with things. } void respond(const string inp) { int n = sizeof(pb) / sizeof(string); for(int i = 0; i < n; i++) { if(!pb[i].compare(inp)) { cout << rb[i] << "\n"; } } } int main() { init(); int run = 1; string inp; while (run == 1) { cin >> inp; respond(inp); } return 0; } | https://cboard.cprogramming.com/cplusplus-programming/138703-simple-chatbot-problems.html | CC-MAIN-2017-22 | refinedweb | 139 | 75.3 |
Content-type: text/html
curs_get_wch, get_wch, wget_wch, mvget_wch, mvwget_wch, unget_wch - Get (or push back) a wide character from Curses terminal keyboard
#include <curses.h>
int get_wch(
win_t *wch
);
int wget_wch(
WINDOW *win,
win_t *wch
);
int mvget_wch(
int y,
int x,
win_t *wch
);
int mvwget_wch(
WINDOW *win,
int y,
int x,
win_t *wch
);
int unget_wch(
const wchar_t wch
);
Curses Library (libcurses)
Interfaces documented on this reference page conform to industry standards as follows:
get_wch, wget_wch, mvget_wch, mvwget_wch, unget_wch: XPG4-UNIX
Refer to the
standards(5)
reference page for more information
about industry standards and associated tags.
The get_wch, wget_wch, mvget_wch, and mvwget_wch functions read a character from the terminal associated with the current or specified window. In no-delay mode, if no input is waiting, these functions return ERR. In delay mode, the program waits until the system passes text through to the program. Depending on the setting of cbreak, the program waits until it receives one character (in cbreak mode) or the first newline (in.
The following function keys, defined in <curses.h>, may be returned by get_wch and related functions if keypad has been enabled. Note that a particular terminal may not support all of these function keys. In other words, the routines do not return a function key if the terminal does not transmit a unique code when the key is pressed or if the definition for the key is not present in the terminfo database,.
Note that
get_wch,
mvget_wch,
and
mvw.
Functions: curses(3), curs_ins_wch(3), curs_inopts(3), curs_move(3), curs_refresh(3)
Others: standards(5) | http://backdrift.org/man/tru64/man3/mvwget_wch.3.html | CC-MAIN-2017-09 | refinedweb | 258 | 59.23 |
ROC or CAP CURVE for a multiclass classification in python
I am unable to plot ROC curve for a multiclass problem.})
- Making ROCs curves
I am trying to plot the ROC curve for a diagnostic test. I believe I got the correct values, but my ROC is so smooth. Do I need to randomly generate my values 100 times to get a jagged ROC curve? I cannot figure out how to do this
dat <- as.table(matrix(c(8,32,2, 30), nrow = 2, byrow = TRUE)) colnames(dat) <- c("Dis+","Dis-") rownames(dat) <- c("Test+","Test-") rval <- epi.tests(dat, conf.level = 0.95) print(rval); summary(rval) predicted_prob<-predict(lab_file2$risk_score,type="response") roccurve <- roc(lab_file2$complication, lab_file2$risk_score) plot(roccurve)
- precrec, Sensitivity and normalized rank of a perfect model
I have some problem interpreting the following graphs that plot Sensitivity vs Normalized Rank of a perfect model.
library(precrec) p <- rbinom(100, 1, 0.5) # same vector for predictions and observations prc <- evalmod(scores = p, labels = p, mode="basic") autoplot(prc, c("Specificity", "Sensitivity"))
I would expect that a perfect model would generate values of Specificity = Sensitivity = 1 for all the retrieved ranked documents and thus, a line with slope 0 and intercept 1. I am clearly missing something and/or misinterpreting the x axis label. Any hint?
Thanks
- What's the rationale behind this optimization to ROC plotting?
I am reading this Rnews document from June 2004, and the article Programmers’ Niche from page 33 presented a way to draw the Receiver Operating Characteristic curves and optimization to it.
The first code snippet is trivial and consistent with the definition
drawROC.A <- function(T, D) { cutpoints <- c(-Inf, sort(unique(T)), Inf) sens <- sapply(cutpoints, function(c) sum(D[T>c])/sum(D)) spec <- sapply(cutpoints, function(c) sum((1-D)[T<=c]/sum(1-D))) plot(1-spec, sens, type = "l") }
Then the author says (with minor edits from me),
There is a relatively simple optimization of the function that increases the speed substantially, though at the cost of requiring
Tto be a number, rather than just an object for which
>and
<=are defined
drawROC.B <- function(T, D){ DD <- table(-T, D) sens <- cumsum(DD[ ,2]) / sum(DD[ ,2]) mspec <- cumsum(DD[ ,1]) / sum(DD[ ,1]) plot(mspec, sens, type="l") }
I have spent quite a while reading the optimized version, but got stuck on the very first line: it looks like the negative sign
-preceding
Tis used to perform the cumulative sums in reverse order, but why?
Confused, I plotted the ROC produced by the two functions together to check if the results are the same.
The left plot is produced by
drawROC.Awhereas the right one is the outcome of
drawROC.B. At first sight, they are not identical, but if you look closely, the range of the Y-axis is different, so they are actually the same plot.
Edit:
Now I have understood what the result of
drawROC.Bis correct (see my answer below), but I still have no idea where the substantial performance boost comes from...
- Regarding MLMC ( multi label multi class ) classification
I am trying to use Allennlp framework based in pytorch to do MLMC ( multi label multi class ) classification.
1.Can any suggest me datasets available for MLMC classification. ( Example: Document can have multiple labels and class. ). I am not able to find a good datasets for this. Any link would be helpful. 2.Which loss function can we use for this type of classification in python and how ? Any article or link would be helpful. 3.Any GitHub link which uses Allennlp for MLMC classification for documents which can be reused ?
These are abstract questions so I haven't provided any solution or approach to it
- Python: How to make a multiclass classifier model (3 classes) in Keras for a input with shape (256, 1989, 2)?
(I usually just read in english, sorry for miswriting)
Python 3.6
I think I'm not understanding the input_shape part. What I want is a classifier for a sensor, so I'm trying to train with a input of 256 sensors, each one having 1989 samples with 2 features (2 integers values).
The code:
def createTrainModel(x_train, y_train): # x_train.shape = (256, 1989, 2). y_train = keras.utils.to_categorical(y_train) # y_train.shape = (256, 3). # 3 classes to predict. model = Sequential([ Dense(32, input_shape=x_train.shape[1:], activation=tf.nn.relu), Dense(32, activation=tf.nn.relu), Dense(3, activation=tf.nn.softmax) ]) model.compile(loss='categorical_crossentropy', optimizer='sgd', metrics=['accuracy']) model.fit(x_train, y_train, epochs=5) if input("\nSalvar modelo ?: ").lower() == 's': model.save(join(Path.home(),'Modelos', input('\nNome do modelo: '))) else: print('\nModelo deletado.')
The error:
`Traceback (most recent call last):File "Supervised_ML/Supervised_Test.py", line 712, in <module> main() File "Supervised_ML/Supervised_Test.py", line 699, in main prepareInputsLabels(d) File "Supervised_ML/Supervised_Test.py", line 86, in prepareInputsLabels createTrainModel(x_train, y_train) File "Supervised_ML/Supervised_Test.py", line 107, in createTrainModel model.fit(x_train, y_train, epochs=5) File "/home/desenvolvimento/.local/lib/python3.6/site-packages/keras/engine/training.py", line 952, in fit batch_size=batch_size) File "/home/desenvolvimento/.local/lib/python3.6/site-packages/keras/engine/training.py", line 789, in _standardize_user_data exception_prefix='target') File "/home/desenvolvimento/.local/lib/python3.6/site-packages/keras/engine/training_utils.py", line 128, in standardize_input_data 'with shape ' + str(data_shape)) ValueError: Error when checking target: expected dense_3 to have 3 dimensions, but got array with shape (256, 3)`
Also, the number of sensors and samples isn't always the same, only the
2in the last part of
x_train shape(256, 1989, 2) is constant. Is this going to be a problem ?
x_train = x_train.reshape(256,-1)in the beginning made it work, but the accuracy is lower than 0.6 (and don't change at all after any number of epochs, shouldn't it increase ?).
- Keras sample_weight for train and validation not improving minority class classification
I am working on a sequential labeling problem with unbalanced classes. I use keras sample_weight to improve the detection of the minority class but it does not help. What am I missing?
My imbalanced output classes balanced with class_weight:
class_weights = {0: 0.0 # ignore padding, mask_zero = True 1: 1.6 2: 0.44 3: 11.0 train_sample_weight = np.array([class_weights[cls] for cls in y_train]) val_sample_weight = np.array([class_weights[cls] for cls in y_val])
I am setting the required params in model.compile and model.fit
model.compile(optimizer="rmsprop", loss="categorical_crossentropy", sample_weight_mode="temporal", metrics=["accuracy"]) model.fit(X_train, y_train, batch_size=32, epochs=20, sample_weight=train_sample_weight, validation_data=[X_val, y_val, val_sample_weight])
But my classification results are not changing. Important for me is class 1. I want to improve its detection.
With sample weight:
precision recall f1-score support 0 0.00 0.00 0.00 0 1 0.54 0.91 0.68 2354 2 0.97 0.77 0.86 8214 3 0.61 0.83 0.70 333 micro avg 0.80 0.80 0.80 10901 macro avg 0.53 0.62 0.56 10901 weighted avg 0.86 0.80 0.81 10901 [[ 0 0 0 0] [ 0 2132 209 13] [ 1 1739 6309 165] [ 0 42 16 275]]
Without sample weight
precision recall f1-score support 0 0.00 0.00 0.00 0 1 0.65 0.73 0.69 2354 2 0.91 0.89 0.90 8214 3 0.98 0.71 0.82 333 micro avg 0.85 0.85 0.85 10901 macro avg 0.64 0.58 0.60 10901 weighted avg 0.86 0.85 0.85 10901 [[ 0 0 0 0] [ 1 1723 630 0] [ 2 896 7312 4] [ 0 29 68 236]]
These clf reports are produced on validation data. Similar results on test data as well as with different model architectures.
For class 1 I see that, with sample weights, the recall is higher, but the precision decreases. Overall - f1 stays the same.
Which one is better? Am I missing something to add for better results with sample weight? Thanks!
- How does a system with an LRU cache in a layer before actually accessing the database maintain the most updated information?
Assume you have a system that primarily reads large amounts of data, like pinpoint locations in coordinate form. You can set up an LRU cache in some layer before he actual database to prevent the database from being accessed constantly and redundantly.
However, in typical implementations of such a cache, what are the mechanisms in place to the deal with, for example, if data infrequently but definitely will be modified and/or deleted? Say this happens at a rate of 1/100 or 1/1000 where such a cache is very efficient. Are there any common solutions to this? Does the application communicate with the cache to check and see if such an item would be dropped? Many caches I see do not seem to have any mechanism in place. Is there some kind of reoccurring cache validation that the server runs?
- How does an LRU cache fit into the CAP theorem?
I was pondering this question today. An LRU cache in the context of a database in a web app helps ensure Availability with fast data lookups that do not rely on continually accessing the database.
However, how does an LRU cache in practice stay fresh? As I understand it, one cannot garuntee Consistency along with Availibility. How is a frequently used item, which therefore does not expire from the LRU cache, handle modification? Is this an example where in a system that needs C over A, an LRU cache is not a good choice?
- CAP theorem - Availability
I was reading some artciles regarding CAP theorem, and I do have diffuculties understanding what Available means in the theorem.
From my understanding we never can have 100 % availability if we do have only one server.
Availability: Every request receives a (non-error) response – without the guarantee that it contains the most recent write | http://quabr.com/49722561/roc-or-cap-curve-for-a-multiclass-classification-in-python | CC-MAIN-2019-13 | refinedweb | 1,667 | 58.38 |
0
Contrary to what the title implies, my real problem lie in returning a vector pointer, and then use it.
Here's what I tried:
#include <vector> #include <string> #include <cstring> //? Needed? #define R_OK 0 #define R_ERROR 1 using namespace std; vector<string>* tester(){ vector<string> v; v.push_back("Lolzer"); vector<string> *ptr = &v; return ptr; } int main(){ vector<string>*ptr = tester(); cout<<ptr->at(0); return R_OK; }
It compiles, and runs, but as soon as I try to access the vector, the compiler says it's out of bounds (probably because the vector it is pointing to is empty)
Feel free to answer any of these questions :D
- Anyone know why this doesn't work?
- Anyone have a link explaining how to point to vectors correctly?
'cause I've seen something like vector<string*> *ptr as well, and it looks confusing.
- Advice about vectors?
- Suggestions on how to further improve the code, giving the same result, but in a clear, clean and understandable (and perhaps, fast) way? | https://www.daniweb.com/programming/software-development/threads/192431/pointing-to-a-vector | CC-MAIN-2018-30 | refinedweb | 168 | 69.01 |
The official blog of the Microsoft SharePoint Product Group
Hi, this is Steve Peschka from the SharePoint Rangers team again, and in this blog entry I’ll discuss customizing My Sites across an organization. There’s a good deal of confusion out there about how best to achieve this, which is partly caused by functional differences between SharePoint Portal Server 2003 and SharePoint Server 2007.
Before I get started, here’s a quick primer. My Sites in SharePoint have two sites, so to speak – a public site and private site. The same dynamic web page is used to generate everyone’s public site. You can see this when you go to look at an individual’s My Site, and the page you navigate to is called person.aspx. SharePoint appends information about the user whose details you want to see onto the query string portion of the URL. By default, this information is in the form of “accountname=domain\user”. So, if you were going to view the details for a user with a login name of “speschka” in the “steve” domain, you would navigate to\speschka. Since that page is shared by all users, if a site designer makes changes to that page, then public information about all users will reflect those changes. In this respect, MOSS 2007 works the same as SPS 2003.
Modifying the private My Site is where things begin to work differently. In SPS 2003, a site administrator could go into his or her private site, edit the home page in Shared mode, and save their changes. This would update the layout and web parts for all My Site users, so everyone’s private site would have the same layout and web parts. In MOSS 2007, this is no longer possible – there are a number of more powerful customization tools than what SPS 2003 had, but some tasks such as customizing all private sites have unfortunately become a bit more difficult. So, how can you customize the private My Sites in MOSS 2007?
First, let’s start with how NOT to customize My Sites. As with SPS 2003, some people might think “Hey, I can modify things pretty quickly if I just go to the file system and change the template for My Sites there.” This is absolutely the wrong approach, and it will leave your site in an unsupported state. This means modifying any of the .aspx pages or onet.xml or any of the other out of the box templates files is off limits.
Instead, we’re going to take advantage of several components of the core SharePoint platform to solve this problem – features, feature site template associations (also known as “feature stapling”), master pages, and our old friend the ASP.NET web control. Before getting into the details, here are a few definitions to make sure that we’re on the same page:
· Feature: A feature is a package of SharePoint elements that can be activated for a specific scope (such as a site or web) and that helps users accomplish a particular goal or task. For example, a feature may deploy a list definition, populate it with data, and add a custom web part to work with the list data. Individually, those elements may not be particularly interesting, but when combined into a cohesive group as a feature they provide a mini-application or solution. For more information, go to the Working with Features section of the WSS 3.0 SDK.
· Feature site template association: Allows you to associate new features and functionality to existing templates such that when those sites are provisioned, the associated features automatically get added as well. To understand feature stapling, you need to understand that there are.”
· Master pages: A master page is an ASP.NET page that has the file name extension of .master and allows you to create a consistent appearance and layout for the pages in your SharePoint site.
· ASP.NET web control: For this solution we are talking about an ASP.NET server control, which consists of a .NET assembly and a set of tags that are added to a page to instantiate an instance of our control. Note that it is not a user control (.ascx file).
So, those are the key components of the solution. How do you put them all together?
A common set of requirements for customizing My Sites across an enterprise includes a) using a custom master page and b) adding, removing, and/or moving web parts around the page. Those are the only items that I will address in this blog entry, but the approach taken is flexible enough that you can do virtually anything else needed by just plugging your code into the appropriate location.
Here’s how the components described above can help you achieve this. The first feature, called “MySiteStaplee” is really where most of the work occurs.
The MySiteStaplee feature includes the following functionality:
· File upload – the feature is configured to automatically upload a custom master page into the master page gallery in the new My Site. We include this section in the feature.xml file:
<ElementManifests>
<ElementFile Location="steve.master"/>
<ElementManifest Location="element.xml"/>
So, here the feature is saying that it wants to include a file called “steve.master”, which is the custom master page. It’s also saying that there is additional configuration information in a file called “element.xml”. Now let’s look at a section of element.xml:
<Module Name="MPages" List="116" Url="_catalogs/masterpage">
<File Url="steve.master" Type="GhostableInLibrary" />
</Module>
The Module and File elements are describing where the master page should be uploaded. In the Module element, the List attribute defines the type of list to which the item should be uploaded, and the Url attribute defines the list in which it will be placed. In the File element, the Url attribute defines where the file is that is going to be uploaded. GhostableinLibrary is a little more esoteric, but essentially when you are uploading a file that is going to land in a document library, you need to include this attribute in your File element because it tells SharePoint to create a list item to go with your file when it is added to the library. If you were instead provisioning a file outside a document library, you would specify Type=”Ghostable".
· Change the Master Page setting for the site – changing the master page setting for the site requires some code to be run. For this solution, you will use something often referred to as a “feature provisioning code callout.” All that really means is that when the feature gets activated, it will run some code. To do that, you’d have to write a new .NET assembly, using a class that inherits from SPFeatureReceiver. With that class, you get four events that you can override: FeatureActivated, FeatureDeactivating, FeatureInstalled, and FeatureUninstalling. For this solution, we will override the FeatureActivated event to change the master page.
Since we’re working with a fairly simplistic scenario here, you’re just going to look at the current master page and change it to use the one that will be uploaded in the feature. To do that, use the following code in the FeatureActivated event:
try
{
using (SPWeb curWeb = (SPWeb)properties.Feature.Parent)
//got the root web;now set the master Url to our
//master page that should have been uploaded as part
//of our feature
if (curWeb.MasterUrl.Contains("default.master"))
curWeb.MasterUrl = curWeb.MasterUrl.Replace(
"default.master", "steve.master");
curWeb.Update();
}
The FeatureActivated event has a signature that looks like this:
public override void FeatureActivated(SPFeatureReceiverProperties properties)
The properties parameter provides access to a lot of useful information; in this case, you’re able to get a reference to the SPWeb associated with the My Site, so you can change the master page.
In order to get this code callout to execute, you need to configure the feature so that it uses this assembly. You’d do that in the feature.xml file for the staplee feature, by defining the assembly and class that are associated with it:
ReceiverAssembly="MySiteCreate, Version=1.0.0.0, Culture=neutral,
PublicKeyToken=c726fa831b98198d"
ReceiverClass="Microsoft.IW.MySiteCreate"
Some of you are now wondering why we didn’t make any changes to the home page for the site in the code callout, such as adding, deleting or moving web parts. The issue is that when you are provisioning a feature via the stapling mechanism, most of the document libraries and lists don’t exist at the time your provisioning code is executed. That includes the Pages library, where the default.aspx page lives that is used for the home page. Since it doesn’t exist yet, you can’t change it in the code callout, so you’ll need another way to do that.
It is also another important reason why you need to use a custom master page. This solution includes a custom ASP.NET server control that is going to be used to make changes to the home page. The way to get that control added and used in the site is to add it to the custom master page. When the custom master page is loaded, it contains an instance of the ASP.NET server control and that control can then finish off the customization work for us. You add one tag to the custom master page to register the control:
<%@ Register Tagprefix="IWPart" Namespace="Microsoft.IW" Assembly="MySiteCreatePart, Version=1.0.0.0, Culture=neutral, PublicKeyToken=cb1bdc5f7817b18b" %>
You need to add a second tag to instantiate an instance of the control when the page is loaded:
<IWPart:PartCheck
· Change web parts – the custom ASP.NET control is used to modify web parts and their layout on the page. Since you’d only want the provisioning code to run once, the first thing to do is check to see if the code has been run before by storing a value in the My Site’s SPWeb Property Bag and then checking it:
//get the current web; not using "using" because we don't want to
//kill the web context for other controls that need it
SPWeb curWeb = SPContext.Current.Web;
//look to see if our code has already run
if (! curWeb.Properties.ContainsKey(KEY_CHK))
The next thing is to get a reference to the home page in the site:
//look for the default page so we can mess with the web parts
SPFile thePage = curWeb.RootFolder.Files["default.aspx"];
With the home page, you can get the web part manager for it:
//get the web part manager
SPLimitedWebPartManager theMan = thePage.GetLimitedWebPartManager
(System.Web.UI.WebControls.WebParts.PersonalizationScope.Shared);
Once you have the web part manager, you can work with the web parts on the page. The SPLimitedWebPartManager has a collection of all the web parts on the page as well as methods to add, close, delete and move web parts. One important note is that in most cases trying to change individual web parts as you enumerate through the web part collection will not be successful. Anything that changes the nature of the collection during enumeration causes problems, but you can normally work around this by copying the web parts you want to change into an array, hashtable, or some other kind of collection.
For this example, we are going to do three things:
· Close the Welcome web part
· Move the RSS Feeder web part to the bottom zone
· Add the This Week in Pictures web part to the middle right zone
The code will enumerate through the web part collection, find the parts you want to work with, and capture the part and operation you want to do with it (delete or move) to a hashtable. I chose a hashtable in this case because of personal preference, but you can use some other collection type as well. To determine whether the current part is one you need to do something with, we check the System.Type of the part. That’s a simple language-agnostic way of finding them:
//create a hashtable to store our web parts
hshWp = new Hashtable();
foreach (WebPart wp in theMan.WebParts)
//close the welcome part; WebPartAction is a custom class
//I wrote to keep track of web parts and their properties
if (wp.GetType().Equals(typeof(PersonalWelcomeWebPart)))
hshWp.Add(wp.StorageKey.ToString(),
new WebPartAction(wp,
WebPartAction.ActionType.Delete));
//etc
You then create a new web part, set some properties, and also add it to the hashtable of web parts:
//add a new ThisWeekInPictures web part
ThisWeekInPicturesWebPart wpPix = new ThisWeekInPicturesWebPart();
wpPix.ImageLibrary = "Shared Pictures";
wpPix.Title = "My Pictures";
//add it to the hash so it gets put in the page
hshWp.Add(Guid.NewGuid().ToString(),
new WebPartAction(wpPix, WebPartAction.ActionType.Add,
"MiddleRightZone", 10));
Finally, the code enumerates through the hashtable and makes all of the web part changes:
foreach (string key in hshWp.Keys)
WebPartAction wpa = (WebPartAction)hshWp[key];
switch (wpa.Action)
case WebPartAction.ActionType.Delete:
theMan.DeleteWebPart(wpa.wp);
break;
case WebPartAction.ActionType.Move:
theMan.MoveWebPart(wpa.wp, wpa.zoneID, wpa.zoneIndex);
theMan.SaveChanges(wpa.wp);
break;
case WebPartAction.ActionType.Add:
theMan.AddWebPart(wpa.wp, wpa.zoneID, wpa.zoneIndex);
All of the web part changes have been made now. Since you wouldn’t want to execute this code more than once, a flag is needed so we’ll know that this work has already been done. If you recall, the code checks this flag up above:
Now, we’re going to update the property bag with the flag, so when the page is loaded from this point forward, your code branch will not execute:
//add our key to the property bag so we don't run
//our provisioning code again
curWeb.Properties.Add(KEY_CHK, "true");
curWeb.Properties.Update();
curWeb.AllowUnsafeUpdates = false;
Note as well that since the code that modifies the site has now completed, the AllowUnsafeUpdates property is also changed back to its default value of false.
The code is just about complete now, and there’s only one other thing to do. If you were to go directly to the page, you might think that the code didn’t work; the page would render exactly as it came out of the box, with all of the default web parts intact and in place. You need to refresh the page to get the changes to show up – to do that, we simply issue a redirect back to our page:
//force a page refresh to show the page with the updated layout
Context.Response.Redirect(thePage.Url);
The MySiteStaplee feature has all this great functionality, but how do you get it to execute? This is where the feature stapler comes in. It does only one thing – it establishes an association between a site template and a feature. That means that whenever a new site is created based on a specific template, the staplee feature will get activated. When it does, your code callout will execute the FeatureActivated code.
Here is the feature.xml file for the stapler feature:
<Feature
Id="4457E66E-6FCD-4352-AD4D-B870600B4696"
Title="My Site Creation Feature Stapler"
Scope="Farm"
xmlns="" >
<ElementManifest Location="elements.xml" />
</ElementManifests>
</Feature>
There are a couple of things to note:
· Id – this is a GUID just for this stapler feature; it is not related to the Id for the staplee feature
· Scope – the scope is Farm because you’d want to execute it anytime a My Site is created in the farm
The feature.xml file also references a second file called elements.xml; here are the contents of that file:
<Elements xmlns="" >
<FeatureSiteTemplateAssociation
Id="4DEFA336-EDC4-43cb-9560-FE2E27E76DFB"
TemplateName="SPSPERS#0"/>
</Elements>
This one is pretty simple to understand. The Id attribute is the GUID of the staplee feature; the TemplateName attribute makes a connection between the staplee feature and a site template called SPSPERS#0. To get the site template name you should use, look in the C:\Program Files\Common Files\Microsoft Shared\web server extensions\12\TEMPLATE\1033\XML directory (assuming you installed to this path). In that directory, there are a number of xml files; when you install MOSS it adds one called webtempsps.xml. If you open that file up you will see an entry for a template with a name of SPSPERS; the default configuration for that template has an ID of 0. You combine the two and you get SPSPERS#0.
Now that you have all the code and features created, you’ll need to install and activate the features, then update the site’s configuration. Here are steps that need to be taken to get everything properly installed:
· Copy the MySiteStaplee and MySiteStapler folders to C:\Program Files\Common Files\Microsoft Shared\web server extensions\12\TEMPLATE\FEATURES (assuming this is your installation directory); each feature folder contains the files that define that feature
· Add the two assemblies that were created (the feature activation assembly and the custom ASP.NET web part assembly) to the global assembly cache
· Install and activate the MySiteStaplee feature; when you activate it, do so to the web application that hosts My Sites. Use the installfeature and activatefeature switches with stsadm to do this
· Install the MySiteStapler feature; since its Scope is Farm it activates automatically. Use the installfeature switch with stsadm to do this" />
This entry allows the custom ASP.NET control to be instantiated on the master page.
That’s it – you’re now ready to start creating My Sites with your customizations! One other thing worth noting – this ONLY applies to new My Sites. If you’ve already created My Sites then these features won’t be used.
In the next couple of months or so, I’m going to work on making the solution described here a little more generic. My goal is to make it more of an open framework for My Site customizations that can be reused without you having to rewrite code just for your implementation. This solution is now part of the larger Community Kit for SharePoint: Corporate Intranet Edition effort taking place on CodePlex, so the Visual Studio solution file containing all of the current source code is available for download there. [Update April 2, 2007: The production release of the MySiteCreate 1.0 solution file is now available here.]
Although this is likely the longest entry ever posted on this blog, I do hope that you’ll find this to be a useful solution for customizing My Sites in your MOSS environment. If you have questions, ideas, or suggestions, please leave a comment.
Steve Peschka
If you would like to receive an email when updates are made to this post, please register here
RSS
Hi,
Can you tell me how I can change the name of the private My Site from My Home to something else (like My page)? I'm working with a Dutch version of MOSS and in the translation to Dutch My home has become a not-so-common word...
Thanks, Willem
Hi Willem. Unfortunately this will be difficult to do. This string is not part of the standard navigation that is used in other sites, it is actually contained in a resource file. I'm not sure if it's in a resx or actually embedded in Microsoft.Office.Server.Intl.dll. In either case it is probably going to be more effective to modify it with some client-side script included on the master page.
I am using Forms Based Authenticaion (FBA) with the AD Membership Provider. I set the default zone to use FBA. I CAN authenticate to my site, and CAN add site admins from the site admin page, but CANNOT get users have added the membership provider to the SSP site web.config.
Any Ideas?
Hi Bob. You need to add your FBA membership provider information to the web.config for the SSP's web application. Then you need to go into the SSP, Personalization Services Permissions and add your FBA users and/or groups to have the Personal Features and Personal Site rights. After you do that your FBA users should have the My Site link show up.
Hi Steve,
I looked in the resource files in C:\Program Files\Common Files\Microsoft Shared\web server extensions\12\Resources. I couldn't find the text 'My Home' there... Is this the only place for resource files?
Greetings, Willem
How can we apply new features and functionality to existing My Sites? One way could be to loop through all My Sites and i.e. add/remove a web part as in your blog. Is that the best approach to update existing sites?
Hi, do you have the code of both assemblies? I don't seem to be able to create them with your instructions.
Hi Willem. We look at 12\RESOURCES\foobar.XYZ.resx for the strings we will use to replace $Resources tokens. If DefaultResourceFile is not specified (or is set to _res), we will look at 12\TEMPLATE\Features\<featurename>\Resources\Resources.XYZ.resx for the resource file.(where XYZ is the culture, e.g., en-us.) Our default resource file is at Resources\Resources.resx.
Hi Wendy. As far as modifying existing My Sites you are correct - you basically need to enumerate all of the sites and then apply your changes on each one. There unfortunately isn't a real easy or sophisticated way to do it another way.
Hi Wouter. The source code for everything should be on the CodePlex site at the location referenced in the blog. If you are not finding it there then let us know and we will check into it.
Hi Steve. Is there a way to display the Top Navigation Tabs of the main portal page on all MySite pages? - Instead of the default My Home and My Profile tabs.
Hi Jeff. The navigation in the home page is hard-coded right now in the master page. You would need to look at changing it there, and inserting one of the navigation controls. However, more specific to your point, since the main portal page is in a separate site collection, there isn't a way to get the navigation from it and display it in another site collection, since each My Site is its own site collection.
Steve, I have a few questions regarding this approach.
1. Where do we define the ContentPlace holders for the My Home and My Profile page
2. Would I also have to follow the same approach to modify the "My Profile" page.
3. If I do want to revert back to my original master page after the site creation how do I do that
4. How do I enable "Audit Logging" for all the My Sites that are created.
5. How do I add custom pages all the sites
As I understand it, the webcontrol is loaded every time the user visits the MySite page. Are there any performance problems if we have 1000 or 10.000 users visiting the MySite? Since we always have to check if the code has already run.
Is it better (faster) to store a boolean in a session variable and only get the key property once?
Hi Vijay, some answers are below. I'm not sure what your exact question is around ContentPlaceholders, but if you are saying you are building a new master page and asking where you should put these controls, I would look at the default master page for My Sites to figure that out. The My Profile page is the shared public page, so you don't need to necessarily go through these same gyrations. You can change it once and it affects all users that "My Profile". If you want to change to a different master page just go into Site Settings and select the one you want to use. If you want to do it retroactively for every single My Site then you would need to write code that enumerates all personal sites and changes the master page programmatically, just as the code sample does. To change auditing settings just go into site settings. To add custom pages to the sites look at the sample code - it demonstrates uploading a new master page and you can use the same approach for uploading other files as well.
Hi Martin, you are correct that the web control is loaded every time a user hits a page in his or her My Site. I haven't done any capacity planning around this to determine the completely impact, but it would be an interesting exercise. If I ever get some free time I may try and do some measurements around that. I personally don't favor session state because I think that breaks down scalability faster than most other state options.
is there an easy way to automatically configure all My Sites in MOSS 2007 to have the 'Portal Site Connection' pre-defined? in sps 2003 you could do this globally but i can't find this in MOSS? thx!
Hi Jay, you can go into the Site Settings for a My Site and configure the portal connection in there. All new My Sites that are created after that will use the same link.
i tried as you suggested (set it once and new my sites will have the same values), but this did not work on our system. do you have to do this to the template instead (how?) or with a certain user (such as the farm system account)?
thx.
This is very useful information. I have a need to implement some javascript that examines a profile property and executes some logic when a user's profile page (person.aspx) is rendered. How would I accomplish that using this approach?
An article on how to customize the "My Site" feature within your organization. Very detailed article, you will find it helpful.
Hi Jay. That is all you should have to do, I have done that many times and had it work without issue. You should try doing it as a site collection administrator, but other than that there should not be anything special you need to do to get it to work.
Hi Sekou. I would add your javascript (either directly or by reference) to the master page, then use the method described in this blog to replace the default master page.
Personally I would just reference the javascript file in the master page, that way if your javascript changes you don't need to go back and change the master page in all the existing My Sites.
Very cool....exactly what i was looking for..thanks a million
I am about to roll out MOSS 2007 across my company and as part of preparing for this have developed a specific layout for My Site home pages.
Have downloaded the code, modified the xml for my custom layout, built the solution, installed the assemblies to my test server etc, but whenever I try creating a new My Site I get a File Not Found error from MOSS.
I have also tried using the original source files but got the same result.
Could you use the feature stapler to prevent the creation of subsites in mysites. I've read todd baginski blog on preventing creation of subsites - I'm just trying to figure if you could use the feature stapler to prevent creation of subsites. Thanks for sharing your thoughts.
I have already tried the stapling feature and I have checked the code sample available online. This sample worked successfully, but the issue is that the My Home sub site uses one and only one master page for both the application and system pages. What was done in the stapling feautre is changing the master page for the My Home sub site. Therefore, they are using the same master page for both type of pages (application and system pages). By applying the stapling feature and replacing the default master page with our custom master page, the design of application pages was good as expected, but the design of the system pages was totally corrupted. I need to have different master pages used by the application pages from the one used by the system pages. How can I do that?
Hi Rob. I would look in the event log for errors. Also make sure that you are getting the pages uploaded correctly that the feature callout and web part are supposed to be working with. File Not Found is often a pretty generic error, so also make sure that any custom web parts you reference have been added to the Global Assembly Cache and are deployed to the BIN directory for the web application if you intend to use them from there.
Steve
Hi cafearizona (my favorite state btw). I think it would be awkward to try and prevent creating sites from a handler that is only invoked after a site has been created. That particular requirement is difficult though because there isn't a specific web creation right that you could somehow pull out of the list of rights a My Site owner has. So, that being said, the options are pretty limited. If you float a pointer to todd's blog I'll take a look and see if there's something more creative we can come up with. No promises, but we can try.
Hi Marc. Can you define what you mean by "totally corrupted"? Like pages wouldn't render or ??
Hi Steve, The pages are rendering, but the layout of the system pages (for example the page that displays the list items of a custom list)is messed up. For example, the out-of-the-box controls such as navigation, the SharePoint ASP Menu control and so many other controls which are not used in the MyHome, I have removed them from the master page. But these controls are recommended in the system pages that miss these controls.
Hi Steve. I have put this approach in place but I get an error (in event log) saying that unable to cast object of type SPSite to SPWeb. (MysiteCreate.dll) I would assume that this is from the using (SPWeb curWeb = (SPWeb) Properties.Feature.Parent Line. Do you have an idea what the problem might be.
Thanks
I want to globally change the Theme used in My Sites sites. How can I accomplish this?
Hi Steve, I want to be able to add custom WF's to the pages lists of new sites instead of having to add them manually (there will be hundreds); I have created the feature and at this point have just entered some test code in the featureactivated handler - it fires on activating the feature as it should but not when a site is created - this is what it should do right? Or have I misunderstood? How can I get the FeatureActivate to run everytime a site is created and I take it, it is possible to associate WF's (custom and out-of-the-box ones) to the pages list pls?
Steve here is the url for the way that tbaginski did it......Yeah, I know it is "wrong" to limit the creation of subsites in the "MYSITE"....however, this customer thinks they want to turn that off for the first 6 weeks....until they have a handle on user profiles, people searches, etc. Then they want to turn it back on without having to re-create all the mysites
Steve, I've got the mysitestaple and mysitestaplee installed and activitated. When I click the mysite on the portal....I just get 3 prompts for user id and then a 401 unauthorized message. Did I miss something in the installation?
Steve, short question: Is it possible to follow your code in Debug, and is so, how is this done?
Steve Peschka wrote an excellent article about " Customizing SharePoint 2007 MySites Within the enterprise
Marc,
I worked on a solution for you that consists of only modifying the master page of "My Home" and not the whole site.
The solution is based on the code provided by Steve in this post (Thank you Steve).
Regards,
Please tell me this is only a temporary thing with MOSS. With SP 2003 it was zero effort to make a change in the Shared View of a "My Site" and have that shown on all user's private My Site page. I appreciate the effort shown in this blog to deal with customizing My Sites in MOSS but if this is a permanent solution then Microsoft has truly taken five steps back on this one.
Is there a way to have the same template for all mysites and make it non-modifiable?
Thank you in advance.
Hi, All!!! I would like to modify some "functions" of the site components. For example: I need to change menu apperance: I need to make "tree-view" menu (when user click to main menu item, subitem must drop down). Can I change exists menu component and how I can do this? Also I need to modyfy the code, which executed on some event, (or write new code on some event), but I do not know, where it placed :( I tried to use SharePoint Designer and VS 2007, but in Designer I did not found this option. VS, as I see, can't work with SharePoint(It can edit some files separated from each other, but I do not think that it's right). What I do wrong? Or what else I need ? May be SharePoint SDK could help me?
Hi Maanda. Have you stepped through the debugger to verify where the error is occurring? Assuming you are using the same code included with this post you shouldn't get that error on any of the code included. I've run it probably close to a hundred times during testing and haven't had that error.
Hi Steven. To set the theme for a site you want to get a reference to the web and call the ApplyTheme method on the SPWeb object. I haven't tried that specifically so I'm not sure if it will work in the feature stapling receiver, or if you would need to do it in the other web set up stuff included in the asp.net control.
Hi Marc. I'm not sure I completely understand the issue with your forms pages, but I don't have an image available right now to check it out on. I will give it a try later to see if I can reproduce it with the latest version of this code (from up on CodePlex), or if you are doing something different that I'm not understanding let me know.
Hi Dave. The code described in this article will run every time a My Site is created (that's what the feature stapled feature does). So if you're not seeing that behavior then something else is messed up I would guess. In terms of attaching workflows to lists, I haven't done a whole lot with that so I can't really say for sure how to tie it all together without doing some more research. You might check out the ECM Workflow Starter Kit for examples.
Hey cafearizona, thanks for the link to Todd's suggestion for working around creation of subsites in My Sites. Overall it looks pretty good, the only thing I would be a little squirrely about would be modifying the out of the box newsubwb.aspx page since that would affect your supportability. But if you kept a backup of the file then it should be pretty easy to switch them back and forth if you had an issue that you needed help resolving.
Hi cafearizona. If you are getting three prompts followed by a 401 error it's almost always the case that you accidentally configured the web application to use Kerberos authentication, but Kerberos is not properly configured. I've seen a few fringe cases where it involved SQL rights not being set up correctly, but I would really chase down the Kerberos angle first.
Hi Dirk, it is possible to step through this code in the debugger. What I do after I update my code is:
a. re-GAC my code
b. IISRESET
c. Hit a different page in the site
d. Use Visual Studio .NET to Attach to Process and pick out the W3WP process.
e. Create a new My Site.
When it's time to test again I just deleted the My Site I created in step e. and started the whole process all over again.
Hi Mark. Your feedback on the effort required for this type of customization has not fallen on deaf ears. :-)
I will make sure it gets routed to the correct person, and the timing is right as we start planning the next version.
Hi Paola. Not sure exactly what you want to make non-modifiable here (the master page, the site template, the web parts in the page created by the template, etc.). But...My Sites in particular are pretty difficult to effectively lock down in that respect. That's because the premise of a My Site is that it's the personal site of one individual so...all the behavior is geared towards letting that indiviual who owns the site control it as they wish. That's also why we take a different approach with the shared Public view of My Site (or profile information really, in person.aspx). We contain that to a single page and single site to ensure it's consistent across the enterprise. But within an individual My Site, we don't distinguish the owner of that Site Collection differently from the owner of any other Site Collection - they can both inherently customize at will.
Hi Ilya. I'm not sure what all you want to do, but things like custom menus would be controlled in your master page. So you can create a custom master page that uses the navigation as you've customized, and then make it the default My Site master page using the techniques described in this blog.
Following my previous posting (April 16) I have made some progress, but after many iterations of modifying MyStaplee.xml, rebuilding, copying to the server and doing uninstall/reinstall I am finding that more than 50% of the time the code is not executing, resulting in an Out-Of-The-Box My Site.
The event log simply states:
Error: Failure in loading assembly: MySiteCreatePart...
Any thoughts on why it works only part of the time? It may be coincidence but it seems that it doesn't work when I include actions involving the same web part (ie Move OWACalendarPart then set the properties of that webpart).
If I can get it working the feature is exactly what I need!
Hi ,
I had created a custom master page for Sharepoint 2007 using SP designer and able to depoy it DEV server by uplaoding the custom master page and images into respective folder sof the new site and then able to create further sites using that site template (.stp).
But, why am I not able to use the .stp file that was created on my local machine?
What all does a .stp file contain? Does it take the Custom master page also with it...
Please share any details regarding this ASAP...
Thanks.
Smitha C
Steve, how can Feature stapling be used to A) Add a custom profile page (e.g. personCust.aspx) to My Site - both publically and privately viewable like person.aspx and B) create a hyper link in person.aspx to personCust.aspx? Is Feature stapling the right approach for this?
Hi Rob M. I have not experienced that particular problem myself. After you uninstall/install, are you doing an IISRESET? This is required because of the way that IIS caches GAC'd assemblies.
Hi Smitha. I'm not sure how your issue is related to My Sites, but generally speaking STP files are diffs - the difference between a site definition and the customizations that have occurred to that site. If all you want to do is use your custom master page and images in your My Sites, then you don't need a site template. You can upload your files as demonstrated in this blog and set the master page, also as described in the blog.
Hi Sekou. First, it's important to distinguish between person.aspx and another page you put in a My Site. Person.aspx is hosted in the My Site host site collection, i.e.. A custom page as you described would need to be uploaded to each site collection at site creation time. If you put it in the Shared Documents folder then everyone will that has access to the site will have access to the page (by default). You can use the feature file included in this blog as an example of how to upload files.
In terms of creating a hyperlink in person.aspx to persCust.aspx, it's possible, but depends on how complicated you want it to be. Meaning if you wanted a generic link it's no big deal, but if you want a link that goes back to the My Site of the person whose details are currently being viewed, then you may need to write a custom web part to accomplish that. It's just a guess, I haven't tried doing this specific scenario myself.
I am familiar with administering the SharePoint farm and using STSADM etc, but not with coding.
Could you possibly provide download links to complete files with explicit instructions about what goes where etc
I think I understand your code snippets, but am at a loss as how to use it all.
Thanks in advance.
I autogenerate a number of libraries and lists in our My Site.
Can I also put these on the default.aspx with your staplee functions? Does you app make a difference between webparts and Libraries/lists?
Or can I put them on My Site some other way (except manually of course ;))
Thanks Steve. Actually, I intend for the custom page (personCust.aspx) to be hosted by the My Site host collection like person.aspx. Would the approach be any different in getting it added/uploaded?
I have modified the install and uninstall batch files slightly to cope with some specifics of paths, but there is an iisreset at the end of each of these.
The only other point (which shouldn't affect the result) is that I am running my development server under Virtual PC 2007.
If I add an entry to MySiteStaplee.xml that moves the OWACalendarPart to another zone it works. If I replace this with an action that sets the server and mailbox properties that works (even my modification to partcheck.cs to use the full email address works!). However if I have both actions together the whole feature falls over and I get an OOB My Site.
First, thanks for taking the time to put this post together. As the comments demonstrate, it's a big help.
I would like to remove the "Create Blog" tab on the private MySite page. It seems like the approach you lay out above is the way to go. Would you agree or is there a simpler way?
Steve,
First of all, thanks for tackling this challenging issue for us newbies. However, I am getting an error when I try to create a mysite. Here is the message.
"An error occurred during the processing of /personal/dccoleman/_catalogs/masterpage/mzac.master. Could not load file or assembly 'MySiteCreatePart, Version=1.0.0.0, Culture=neutral, PublicKeyToken=cb1bdc5f7817b18b' or one of its dependencies. The system cannot find the file specified. "
I did change the master from steve to mzac. I think I changed it in all of the appropriate places also. Please advise. Thanks!!
hi, i want to create different type of mysites depend on the type of user logged in. i will check user credential in my database and depend on user right respective webpart will be visible or accessible. can any body give idea to achieve this.
Hi Marcus. If you go to CodePlex using the link provided in the blog, you can get everything - the complete source code as well as some instructional documentation for how to set it up.
Hi Dirk. If you have a number of other lists and libraries that should be created with a My Site, then you can staple those as well (assuming you create them as features, which is what I would recommend if possible).
Hi Sekou. Sorry, I didn't understand the first time that you were planning on putting personcust.aspx in the My Site host site as well. In that case you should just upload the file there manually. It's a one-time operation and is independent of whether you have 1 or 1,000 My Sites because they all share the same host site.
Hi Rob M. I'll have to take a look at it, there may just be a bug in the code (I'm assuming you are using the 1.0 version from CodePlex). I'm short on free time right now but I will get to it as soon as I can.
Hi Jason. Glad you found the blog useful. For removing the Create Blog tab, look at the default master page in SharePoint Designer. There should be a control in the page that is rendering that, so you could remove it, save it as a custom master page, and then use the process in this blog to use that master page on your new My Sites.
Hi Terry Morris. It sounds like the assembly for the MySiteCreatePart has not been registered in the Global Assembly Cache (GAC). I would double-check that.
Hi praful. I would use the code for the MySiteCreatePart as your starting point. Add some code in it to do your DB lookup to see if the current user should get the webpart and if they should, use the methods in this blog to add it to the page for them.
Is the specific step in your instructions I missed for registering the assembly? If not, can you give me some direction. Thanks!!
Rob M....same same with my use of the project. It worked....but now more often than not I get OOB. Actually, I get the branded template...but the web parts are OOB. I am running VPC 2007 on vista (64 bit).
in the app log for event viewer...I get "Error checking web parts. Value cannot be null. Parameter name: webPart
also get:
Error checking web parts. Access denied (I am on as a domain admin). Sure wish I could track this down.
So, if i create a my site - then delete it with stsadm -o deletesite ......and then I create it again...does the provisioning code for the web parts fail - is the valude of
still true even if I delete the site?
Thanks.....
Thanks for your quick response. The problem however is, I make a document library in the Onet.xml. In order to put that on the screen, it uses a ListViewWebPart. But when I define it in your MySiteStaplee, I get an error from your app when making a new MySite stating the list does not exist. I don't know which property I need to use or how to set it to the correct library since each new MySite creates its own library. Any thoughts, hints, suggestions?
Our extra geek on the geekSpeak was Dan Attis. Because we didn't get his bio and headshot in time for
Hi Terry Morris. To register the assembly in the global assembly cache use the gacutil utility. Try searching on live.com if you need help finding it doing the registration (gacutil -i foo.dll).
Hi cafearizona. If you delete the site and then recreate it all of the code will run again. The key it checks is stored in the property bag for the SPWeb at the root of the My Site. So when you delete the My Site, that key goes away with it.
Hi Dirk. Yes, if your list is part of onet.xml then it will be more difficult to work with (for example, if you need to change the list definition after sites are created based on that definition you won't be able to do so out of the box, versus if you deploy it as a feature). I'm not sure exactly what you're doing, but in all likelihood the list will not be created at the time your stapled feature fires. That is the same reason that we had to create the MySiteCreatePart - so it could do things after the site was finished being provisioned.
Our My Site requires a number of items to be displayed. Among these are 3 libraries, 1 for personal documents, 1 for personal images and 1 for personal tasks. At present, I create these libraries via the onet.xml. But I still need to display them.
By placing them manually and using our MySiteCreatePart, I found out a ListViewWebPart
is used. Then I tried to include the ListviewWebPart in the SiteStaplee.xml but that gave the error that the library did not exist.
So, according to you, I have to generate the Libraries via your app as well. Can you tell me how to go about doing that or where I can find the info to do it?
Hi Rob M. and cafearizona. As luck would have it, my lousy laptop was broken again this morning so I had time to look at the issue you describe of trying to Move a web part and also set properties on it. I verified that it does not work, but it is by design. Fortunately, there is a workaround. Here's what's going on.
When the list of actions for an existing web part is enumerated (such as Move, Delete or SetProperties), each action is added to a dictionary. The key for that dictionary is the Type name of the web part assembly. That is done so that when we are getting a reference to the actual web part on the page later in the code, we can compare what we need to change with what's in the page in a language-agnostic way (i.e. works the same in Spanish as it does in English). The problem then is if we had more than one action for a single existing web part, the process will choke because it says hey, you've already added an entry to the dictionary with that key. Fortunately, I think the Move in combination with the SetProperties are the only two actions that would occur for the same web part. So...the work-around is to delete the web part with one action, and then add it back in with another action. When you add the part back in you can put it in whatever zone you want and also set the properties - so you effectively accomplish both tasks in a single action. The reason this works is because when you choose to Add a web part we use a new random GUID as the key in the dictionary of actions. Since it's a new web part we don't need to compare it to the list of web parts on the page, that's why we use the random GUID as the key. It also allows you to add multiple instances of the same web part on the page.
So, to wrap it up, here's a snippet of the MySiteStaplee.xml file that I used to verify that this approach works - hope it helps.
<WebPartAction>
<assemblyName></assemblyName>
<className></className>
<zoneID>BottomZone</zoneID>
<zoneIndex>5</zoneIndex>
<typeName>Microsoft.SharePoint.Portal.WebControls.OWACalendarPart</typeName>
<Action>Delete</Action>
</WebPartAction>
<assemblyName>Microsoft.SharePoint.Portal, Version=12.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c</assemblyName>
<className>Microsoft.SharePoint.Portal.WebControls.OWACalendarPart</className>
<typeName></typeName>
<Action>Add</Action>
<Properties>
<Property Key="Title" Value="Outlook Stuff by Steve"/>
<Property Key="OWAServerAddressRoot" Value=""/>
<Property Key="MailboxName" Value="[ACCOUNTNAME]"/>
</Properties>
Is there any way to guarantee a stapled feature to fire after the MySite has been completely provisioned? A way I can be sure the lists and libraries created in Onet.xml are in place and ready to be used?
I only need to add them when making a new MySite anyway.
Steve, its urgent for me, please look into it:
i have one .net page(as page viewer webpart) in MOSS, thier i am capturing user details. and validating from my custom database. When user click on sumbit button i want to take him to MySite pagee. if user is first timer then new MySite should create and if not then existing MySite should open. where can i write code to check for existance of MySite.
waiting for ur Reply
Hi Dirk. There is not a way to fire a stapled feature after the site has been completely provisioned - sorry.
Hi praful. You should be able to redirect to the My Sites host site collection /_layouts/mysite.aspx. If the My Site is not created yet it will create it automatically; if it is created it will just redirect the user. That way you don't need to check for anything. If you really want to check for some reason then you can look at the UserProfileManager class to get an individual's profile, and then check their PersonalSite property. If it's null (C#) or Nothing (VB.NET) then they haven't created a My Site yet.
Page Viewer web part is not an option for me on MySite.
Do we have something misconfigured? I can add page viewer web parts to the site home page, but not in my sites.
Steve - thanks for the update. Not sure why I didn't think of the workaround myself.
While it has helped considerably, I still find that some of the time the feature doesn't run. Up to now I had not been attempting to debug, and having now installed Visual Studio 2005 directly on the server I find that 98% of the time the debugger does not load symbols for the project, thereby preventing any breakpoints from being hit. Although I am a VB6 person rather than .Net or C# if I can get the debugger to link to the process correctly I can sort out the remaining issues.
I have attached to both instances of w3wp.exe. Any other suggestions?
Steve - Mea Culpa - Once I fixed the type I had (as in I had deleted it) on the Register Tagprefix="IWPart" - mysitecreatewebpart started working like a charm. I was switching back and forth between your sample master page and my own....since I can be slow at times - it took a while to see the relationship. Now....I can still get the feature to through an exception error in the app event log...but right now this seems to be related to my multiple vpc images on VPC 2007 gettting out of sync on the time. I am running a vpc image of a dc, another for SQL 2005, andother for EX2k3, and finally, another for MOSS 2007. When they get more than 900 msec abpart on time..stuff quits working. But that is obviously because of my virtual netwwork...not your feature.
NICE JOB and even BETTER JOB pinging us all back!
Now if I could just load some content into a cewp when I usue your feature to drop it on the private mypage...I would be in heaven.
CafeArizona
Hi Jim. I don't have an image to look at right now but I'm wondering if it's just not showing up because it's not one of the suggested web parts for that page. Can you export the Page Viewer web part to a .webpart file locally, and then import it into a My Site? I'm assuming that will work (I could be wrong), but if it does you could follow the same process to import as was demonstrated in this project.
Hi Rob M. A couple of things to try:
1. Once you've attached to the w3wp.exe process(es), you might also want to look in the Modules windows (under Debugging...Windows) and try loading your pdb manually.
2. My favorite fallback is just to insert a single line of code where you want it to break:
System.Diagnostics.Debugger.Break
When it hits that line it will throw a standard "you gotta problem" dialog and let you open up an instance of Visual Studio (or use an existing open instance). So you can have VS.NET open with your source code and just step right in and start walking through the code.
Hi CafeArizona. As far as your CEWP, I don't have an image handy to examine the web part. However, I do know that the content for the web part is actually stored as a property on the part. Is it possible for you to use the SetProperties action to set the content on the part? If not I will try and take a look at that particular part when I get some free time.
If I were to make another XML file besides your MyFileStaplee and used the following code:
<Module Name="DWS" Url="" Path="dws">
<File Url="default.aspx">
<View List="104" BaseViewID="3" WebPartZoneID="Top"/>
<View List="103" BaseViewID="3" WebPartZoneID="Right" WebPartOrder="2"/>
<View List="101" BaseViewID="6" WebPartZoneID="Left">
<![CDATA[
<WebPart xmlns="">
<Title>Members</Title>
</WebPart>
]]>
</View>
</File>
Would it work in putting additional information on MySite? And do I need to modify your code to include these changes?
Steve - thanks again for the quick response. After further testing (and hunting around on the newsgroups) I found that putting VS2005 into Debug rather than Release mode for both projects will hit my breakpoints, but the errors about symbols not being loaded only disappears when the creation of a new My Site actually triggers the feature. I will try your suggestion when I have time.
I have successfully installed the feature on my live server and it is working perfectly, setting the OWACalendarPart properties when adding it back has indeed solved the issue I had previously.
A couple of final notes - one of the web parts I would like to add in is the My Links part. This does not appear in the Object Browser, so I assume that it is not accessible via the feature. I would also like to set the Height of the web parts I am adding, but this property seems to be ignored. Is there any documentation on the LimitedWebPartManager which would help to explain these issues?
Steve, I'm getting ready to try and set the property for the url of the source on the CEWP. That - of course - is a better way to populate the CEWP - as I am using them on the private & public mySITE's that I will provision with your feature. Thanks for the feedback. I will post results of the CEWP within next couple of days.
Hi Dirk. If you encapsulate your xml above into a feature that you get working successfully, then you can make another stapler feature for it like this blog did to have it added to your My Sites. You can have more than one stapled feature for a site definition.
Hi Rob. Don't worry about parts that don't show up in the browser, you can add any part as long as you know it's type name, fully-qualified assembly name, and it's in a location where SharePoint can find the bits (like in the GAC). You might be able to intuit those values for the My Links part from other stuff in this blog, or you can also try using Reflector to open up the assembly where the My Links part lives (should be microsoft.sharepoint.dll). For LimitedWebPartManager my only real suggestion is to look in the SDK on MSDN.
I get a "Failure adding assembly to the cache: unknown error" message when I'm trying to add mysitecreatepart.dll to the GAC. Do you know how I can fix this?
Erik
Steve....can you think of a solultion that would allow me to "configure/customize" the quck launch at the same time I am using the mysite feature. My application wants some static links (urls) added to the quick launch - and they want to hide the "documents" ---everything but myprofile and mysites....on the quick launch at the mysite level. Of course I realize I can do that site by site...but then the mysite owner needs to be taught how...in this case....I want to have a preset configuration for quick launch. TNX
cafearizona
Hi Steve.
I am having a problem using the ADD action. All other actions work - i.e. I can delete, move and set properties of web parts but I cannot add any web parts. Using the example in the MySiteStaplee.xml all the actions were performed except for the first one - adding the Pictures Web Part.
Any ideas?
Jean-Pierre
Hi Erik. Are you adding the dll to the GAC via command line or File Manager? Are you adding it on the SharePoint server? I haven't had any problems of this type yet, but I normally just do it from the command line with <path to my .net 2.0 files>\gacutil -i foo.dll
Hey cafearizona, I don't have precise code for you off the top of my head, but if you get the SPWeb for the My Site root you should be able to get at the SPNavigationNodeCollection off the spweb.navigation.quicklaunch, or something like that. And then hopefully the SDK should be obvious enough about how to add or remove SPNavigationNodes from the collection.
Hi Jean-Pierre. Are you sure your typename is correct for your ADD? Have you looked in the event log and ULS logs for error info?
This article is very useful, great job!
Are you familiar with Knowledge Network? We have created a custom site definition for the My Site Host and created a site using that template. After installing Knowledge Network, we tried to activate the KN Feature in the My Site Host but the web parts and pages that were supposed to be added were not added to the site. After doing some investigation, it turned out that the KN feature is not activated properly if SPSMSITEHOST is not the template used to create the site.
We are planning to delete the existing My Site Host that used the custom site definition and create a new one using the OOB site definition. We'll just create Features that will apply our customization to the My Site Host. Do you know how we can smoothly move the existing personal sites hosted in the My Site Host into the new one without causing harm to them?
Thanks in advance! =)
Here is a great article on the right way to customize MOSS MySites. Thanks to Jingmei for bringing this
Hi anne. I'm familiar with KN but haven't done anything with it recently. As far as your scenario I would test it out in a lab obviously, but I would think about just using a database detach and then reattach once you've got your my site host reconfigured. Beitrag und Beispiel zum Löschen eines Webparts von Steve Peschka
I often get asked how one should approach adding features to the out-of-the-box site definitions. On
I have your solution installed, but I am having a little problem with configuring OWA web parts. My webparts require the mailbox to be the entire email address. When I just use just the account name, I get page cannot be displayed. Please advise. Thanks!!
Steve Peschka one of SharePoint Rangers team, he is writing in the official blog of the SharePoint Product...
Hi, and thank you so much for this feature. I've resently installed it with a customer, and it's solved my two main issues: Change default masterpage, and automatically set the correct properties to the calendar webpart. :)
In adition the customer wants to change a property for the Collegue Tracker webpart. They do not want to track membership changes. I've edited the xml file, adding another WebPartAction section as follows:
<WebPartAction>
<assemblyName></assemblyName>
<className></className>
<zoneID></zoneID>
<zoneIndex></zoneIndex><typeName>Microsoft.SharePoint.Portal.WebControls.ContactLinksMicroView</typeName>
<Action>SetProperties</Action>
<Properties>
<Property Key="ShowMembershipChanges" Value="false"/>
</Properties>
</WebPartAction>
I can't get this to work. After a personal page has been created, this property is still checked. It seems like the property is not available until the user has accessed "Edit shared webpart" and "OK". When I "Export..." the webpart right after the site has been created, this property is not present in the dwp xml file. If I do another export after "Edit shared webpart" and manually uncheck the property, the property is present in the dwp xml.
Do you have any tips for me?
Regards
Elin Kolloen
Experts!
Does anybody know how to customize the welcome text in My Home, i.e. "Describe yourself and help others find you and know what you do ..." to something else? It seems that the webpart responsible for this display welcomewebpart.dwp is pulling the data from the content database. Is it possible to have it look up data from somewhere else? Our client wanted the text customized to reflect the organization culture. I needed to make it work in two days. Please help.
Thanks in advance!
Weitong
Thanks for putting this together. This post has obviously helped a lot of people.
I ran into a little gotcha and wanted to share a hack-around to the release from CodePlex. If you try to set the Height property on some web parts via the config file you may run into an 'ambiguous match' error.
Here is some code, added to the SetWebPartProperties method, right at the top of the try block:
// NOTE: Hack to get around the 'ambiguous match' error
// for the Height property.
if (String.Compare("Height",
wpa.Properties.Property[p].Key, true) == 0)
{
xWp.Height = wpa.Properties.Property[p].Value;
continue;
}
It isn't pretty but it does work.
Steve, I'm working on an installaton where I'm using your mysite create and my site create part. I seem to be getting some errors when I am doing a full crawl of the content. The errors are in the application log. Most of them are related to a cewp where I am tryhing to set the frame property and the web part visibility property. It happents on both my test syhstem and my production system. Have you experienced any situation where the search content crawl is impacted with the mysitecreatepart stuff?
Hey Weitong, I had a customer that didn't want one of the lines ine the Welcome web part...so I used Steve's example of deleteing that web part and added a content editor web part with the text that Iwanted. The url's are relative to the site the user is one.....so my cewp displays the same thing as the welcome web part but doens't display the line that tells them how to add web parts (customer requirement) I set the name (title) of the web part to the same text as the welcome part....so no one knows the diff.
Following up on my question regarding the mysitecreatepart errors. I found that setting the frame and visibility and content for a cewp in the mysitestaplee.xml throw errors in the application event log. They also did not set the properties on the part- so I deleted those propeerteeis from the xml and the errors that the were generated when I run the content crawler went away. I still don't know why the content crawler would catch/create erros from the mysitecreatepart....but the corrective action seems to make the issue go away.
How do you set the feedurl property of the rss aggregator web part from the provisioning script. Do you have to delete it first and then add it setting the property value? If so, what is the class name of the rss aggregator web part? Thanks
Hey Cafearizona, thanks very much for your suggestion on using a custom CEWP to display the custom welcome message in My Home instead of the OOTB personalwelcomewebpart. I am battling with setting the properties in the xml file for CEWP to load the content, i.e. text and links in there. If you have a working sample, could you possibly share the knowledge by posting the code here? guys like me in the SharePoint land would certainly appreciate that!!
Thanks a lot, Cafearizona!
<WebParts>
<className>Microsoft.SharePoint.Portal.WebControls.ThisWeekInPicturesWebPart</className>
<zoneID>MiddleRightZone</zoneID>
<zoneIndex>10</zoneIndex>
<Property Key="ImageLibrary" Value="Shared Pictures"/>
<Property Key="Title" Value="My Pictures"/>
</WebPartAction>
<WebPartAction>
<className>Microsoft.SharePoint.Portal.WebControls.ContactFieldControl</className>
<zoneIndex>12</zoneIndex>
<Property Key="Description" Value="Use to display details about a contact for this page or site"/>
<Property Key="Title" Value="My Contacts"/>
<assemblyName>Microsoft.SharePoint, Version=12.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c</assemblyName>
<className>Microsoft.SharePoint.WebPartPages.ContentEditorWebPart</className>
<zoneID>Bottom</zoneID>
<zoneIndex>13</zoneIndex>
<Properties>
<Property Key="Title" Value=" "/>
<Property Key="ContentLink" Value=""/>
<zoneID>MiddleLeftZone</zoneID>
<zoneIndex>3</zoneIndex>
<typeName>Microsoft.SharePoint.Portal.WebControls.RSSAggregatorWebPart</typeName>
<Action>Move</Action>
</WebPartAction>
<WebPartAction>
<assemblyName>Microsoft.SharePoint.Portal, Version=12.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c</assemblyName>
<className>Microsoft.SharePoint.Portal.WebControls.OWACalendarPart</className>
<zoneID>MiddleLeftZone</zoneID>
<zoneIndex>2</zoneIndex>
<typeName></typeName>
<Action>Add</Action>
<Properties>
<Property Key="Title" Value="My Outlook Calendar"/>
<Property Key="OWAServerAddressRoot" Value=""/>
<Property Key="MailboxName" Value="[ACCOUNTNAME]"/>
</Properties>
<zoneID></zoneID>
<zoneIndex></zoneIndex>
<typeName>Microsoft.SharePoint.Portal.WebControls.PersonalWelcomeWebPart</typeName>
<typeName>Microsoft.SharePoint.Portal.WebControls.SiteDocuments</typeName>
<typeName>Microsoft.SharePoint.Portal.WebControls.BlogView</typeName>
<zoneIndex>1</zoneIndex>
<Property Key="Title" Value="Get Started with My Site"/>
<Property Key="ContentLink" Value=""/>
</WebPartAction>
</WebParts>
Thanks a million, Cafearizona! I tried your xml file and it worked great except that I had some trouble displaying the content in the CEWP.
I got this error: "Cannot retrieve the URL specified in the Content Link property. For more assistance, contact your site administrator."
I've tried pointing it to SharePoint pages such as /Pages/mypage.aspx and /_layouts/mypage.aspx and /_layouts/mypage.htm and none of them worked.
I've also tried creating a dummy site with an html page on the same web server which is probably what you did in your sample. It didn't worked for me either.
What am I missing here?
Thanks again and look forward to your reply.
Actually, it worked when I pointed it to an html page on a different site, the last approach I tried in the previous post. Is there any way to make pointing to SharePoint pages work so that site admins can maintain the content themselves?
Thanks,
Yeah, I had similar problems. I wanted to point it to a doc lib...so everything would be in moss. with an OOTB cewp..you could put the link to file in a doc lib...and press the test link ..and it would display what you want. But it would through that same error you see when you viewed the page. So I put it in a virtual web...the guy at the site I am working on had to use a moss svc account for anynoum access to make it work on his production system.
If you get it working for a sps 2007 library...let me know. I had to drop that.
Is there a way to automatically load the "Personal Documents" web part? I've been experimenting on what web parts I can load when the My Site is created, but I cannot find a way to have the Personal Documents added. I know you can click on that item in the left side navigation, but I also need it on the main section. Do you know if that's possible and if so, what is the name of the web part? Thanks!
<className>Microsoft.SharePoint.Portal.WebControls.SiteDocuments</className>
Terry, the way Steve has programed the mysitecreate...I believe you need to enter something like "" for the value on the key: Key="OWAServerAddressRoot"
In this case....you replace moss2007_ex2k3 with the name of your exchange server.
Also, when you use the accountname key and value...remember, this is the AD accountname...if your app uses a different name for the actual email address...you will get results like you have described. ie...account name is baker.craig but the email address is craig.baker. In the latter case, you will need to apply a reciepant policy to add an email alias to all of your mail boxes so owa can find the email address.
Hi Elin....I had trouble trying to set properties on the cewp...I think I saw somewhere that Steve's code will only handle simple string properties.
I seem to have a similar problem to what Rob M was having. I can deploy your master page without a problem. I can deploy the minimal.old page without a problem.
But the minute i try removing the quick launch out of the steve.master and deploying it, when it creates a mysite, it comes with the 'file not found' error.
Also, when I try modifying the minimal.old.master page, i also get the file not found error when i try to deploy.
I tried looking at the event log but i can't find anything.
Sun...a work around is to put the following style in your working pages. It will make the quick launch invisible. Put this above the body tag in your master page that works.
<style>
.ms-quicklaunch {
visibility:hidden;
</style>
Steve, Your feature and staple are Great! I am trying to understand a quirk. If I don't have the feature installed...I can view the profile of anyone that has had their profile imported - regardless of weather they have created a my site. HOWEVER - if I install and activate the feature - only a user with "full control" can view the profile of a user that has not created a my site. user's with read but not full control permission on my site get access denied screen instead of the actual profile. Thoughts?
Thank you cafearizona for your quick response on using "SiteDocuments"! I had looked into that before and it's very close to what I need, but unfortunately, it's missing the "Add new document" link which we need. If there was a way to get that to show for SiteDocuments and remove the "tasks" portion for that webpart, that would be perfect.
Basically what I need is a listed of documents for the user with the ability to add a new document right on the webpart. I found this (Microsoft.SharePoint.WebPartPages.UserDocsWebPart), which again is close, but it still does not have that "add new document link". I believe it's because that web part just rolls-up all documents for a user and that's it.
Any other idea? Thanks!
Cheers,
Derek
I've set up a clean portal....did an import of usser profiles from ad.....did a full crawl....and then (without installing mysitecreate feature) searched for a user that did not have a personal site created but was in the profile import. the search results yield the name of the user, and when you click on the link...the public profile page for that user comes up - even though the user has not created a my site yet.
Next - I installed and activated the feature. Without creating a my site - simply searching for a user that was in the profile import - you get the expected search results - however - when you click the link you get a sharepoint page that says Error: access denied and some additional verbage that says request access or sign in as different user. So activiating the mysitecreate - when activated - prevents you from viewing public profile of a user that has not yet created a my site. For large enterprises - with several thousand profiles - this is not desirable.
So - after you activate the my site create feature and staple - you can only view the public prfile of users that have actually created their own my site.
after I uninstall the feature...and do a search for a user - i get the search results page..when I click on the link - I get
An error occurred during the processing of /_catalogs/masterpage/steve.master. Could not load file or assembly 'MySiteCreatePart, Version=1.0.0.0, Culture=neutral, PublicKeyToken=cb1bdc5f7817b18b' or one of its dependencies. The system cannot find the file specified.
Thoughts
Thanks for this. Now can we add, say an announcement web part with pre-populated message in it, in the xml manifest file.
Thanks, I will try it. We got bitten when an early user created a My Site, and now he can't access it.
I am trying to create a KPI list on MY Site but the option is not available. It is available on any other site except My Site. Is it possible?
Create a top level site and add the kpi stuff you want on that. Then go to central admin....Shared Services Administration: SharedServices1 > Personalization site links .add the url of the top level site that has your kpi stuff...add permissions...and a short nanme...and you will find this page shows up as a tab on the horizonatal nav bar at mysite (between the my home & my profile tab). BTW...I think this is the same way they do the Role Based mysite stuff for the splendid 7.
I am having issues with updating mysite templates and applying them to mysites that were created prior to the update. Any thoughts on this.. basically, how do we update mysites templates on the fly for both existing and new mysites created on the
Hey Saumya...I thought that any change to the master page (steve.master or whatever) was reflected in your next page refresh. However - if you change the web part provisioning - in the xml file....this only applies to new sites created...not the ones you have already created. Is this what you are asking about?
Where can I find the assembly and class names of the different Microsoft buit-in web parts I can add to the manifest xml? Any links would be appreciated.
Peter, the *.dwp and *.webparts are the place to find this info. For example in the hive...at C:\Program Files\Common Files\Microsoft Shared\web server extensions\12\TEMPLATE\FEATURES\MySiteLayouts\DWP you will find a dwp for each OOB my site web part. Open one of the dwp's and you will find the class name and the unique assembly name. You may have to poke around the hive to find the rest of the webparts...I have not had much success installing 3rd part web parts...even though it would appear that if they are in the gallery of my sites you should be able to install them.
Also...if you export any existing web part at any location on the portal...the resulting dwp or *.webpart will show the class and assembly. Hope this provides some assistance to your query.
I cannot get my RSS working in My Site
It gave me this error:
"An unexpected error occured processing your request. Check the logs for details and correct the problem."
although it is working in other pages outside My Site
helllllp !
My master page relies on a css file for style layout, etc. We're using this same master page for the top level site collection. When updating the master page viaa Site Settings, there is an option to include the CSS file used for that master. Would you know how to include this CSS for the MySite customizations via Features? Thanks!
I deleted a users mysite via stsadm. Now when the user clicks on the mysite link to recreate his site the creation process ends with a webpage cannot be found error.
I have the problem only when the stapling feature is installed. Any ideas?
Issue with MySiteStaplee and a proposed Workaround
I have posted here and at codeplex on an issue I encountered with the MySiteStaplee feature. I found - with my installation - that once I activated the MySiteStaplee feature - sharepoint users could no longer view the public profiles of users imported via AD - unless said user created a mysite. In large enterprises - this is not desirable - one of the great OOB stories on search and profiles is you can see them as soon as they are imported and a new crawl has occured.....people search OOB is great. My experience with the MySiteStaplee feature is that it broke this feature - and I have not been able to get a response - Believe me - I have tried.
So - as a novice to Steve's code - I think the issue lies in the master page. I would like to take the web part provisioning part of Steve's code - and combine it with Steve Hillier's Themechanger feature. It has been my experience that the themechanger feature can be deactivated without adverse affect to your my site collections...you can do most of the branding with the theme changer - specifiying your own css. Then if I could get the web part provisioning to work as stand alone feature - I could get the best part of Steve's effort to be reflected in a branded corporate enterprise my site.
I haven't seen much of Steve on this blog recently - I need to get some cycles so I can try this out - would be very curious what the rest of you think about this. In reality I had to deploy an alternative site provisioning at a corporate enterprise - the mysitestaplee had one to many issues. Steve...what are your thoughts?
Jake...try using Steve Hillier's ThemeChanger feature on Codeplex () . It allows you to specify a default theme for a site definition...as well as a default (alternative) css. I've used it with my site OOB provisioning without any undesirable consequences. Nothing breaks if you deactivate the feature - nothing breaks if you change the xml code for the feature either. LMK if this is what you are looking for.
Hi Mike, I got this when I change the xml to a new master file ( say from steve.master to cafearizona.master) and then deleted steve.master (steve.master was used to provision some sites). I am very impressed with Steve's direction and contribution to the community - but this part mysitestaplee is a bit quirky. See my post on June 7th about the public profile for some more details. At any rate, just put back a master file with the name that was originally used ...and that error will go away...doesn't even have to be the orignal master file (i think)..just the name. LMK if this addresses your issue.
We want to do a bulk "upload" employee Profile pictures, and not allow the users to change them.
Would this involve a customization of My Site's, or could it be done by modifying _layouts/EditProfile.aspx ?
We could either remove the Picture part of Edit Details page, OR disable/remove the Choose Picture and Remove buttons.
I've looked into this for a client. You may want to check out central admin -->
Shared Services Administration: SharedServices1 > Manage Policy
you can mess with the policy for the picture property for the enterprise
you've got 3 choices...enabled, none, disabled.
However, you may ALSO want to check central admion - shared services ...
_layouts/MgrProperty.aspx
bread crumb -->
Shared Services Administration: SharedServices1 > User Profile and Properties > View Profile Properties
and map the url for the picture to a field that comes from the AD import. That way you can bulk upload pictures to the AD profile info (a url) and then pass the url to the picture property when the profile is imported.
You may find you don't have to modify editprofile.aspx (microsoft will like that) and just pump a url into a field in AD..map that to the picture property.....bingo bango....upload all pictures and disallow the user from changing it. LMK if this helps.
Thank you for the suggestion, CafeArizona. I will try the ThemeChanger feature. As a workaround for MySite customization, I simply embedded my entire stylesheet within my master page in a <style> tag. I did have to update the paths to image files, but it seems to be fine so far. It's not the best solution since the master page is developed seperately as it is the master page for the entire Site Collection. Thanks again! -JakeJ
I kept getting an error in FeatureActivated at line "SPWeb curWeb = SPContext.Current.Web;"
To get to the current web object I had to use this:
"SPSite currentSite = (SPSite)properties.Feature.Parent; SPWeb currentWeb = currentSite.OpenWeb();"
As shown here:
and here:
I checked out your reference to the sridhara blog. Interesting...but I'm just wondering why the sridhaRa post doesn't do stapling to the SPSPERS#0 site definition. I am not an expert - however - I can't imagine the scaleabilty of that code if it executes every time a site is created. The nice thing about Steve's example is he shows us how to staple it to a site definition. I've tried posting to sridhra's blog....but it never shows up. Also, why "if" for the mysitehost - it seems the administrator can change the layout of the public facing site and everyone else then sees those changes......so again...why add the overhead to the enterprise if you can just do this once and be done ? What do the rest of you think?
Great blog,
How would I create a list(calendar) in the My Site area for all new users?
Thanks for the help!
Hey there,
i'm using the german edition of sharepoint, we did a full install on a single server, and installed the sql server separately after the installation of sharepoint.
when i take the example code and xml files everything works fine, but if i slightly change the xml files by changing the mastersite name or rename my masterfile as steve.master an unexpected error cccurred creating the mysite. also tried different codings for the xml files but no chance to get the feature running. is there any workaround? what am i doing wrong?
thanks in advance!
Olaf
So I have followed your instructions to the tee, but after everything is done nothing happens. When I look in the list of deployed features these features are not even listed as being depoyed or anything. What do you think the issue could be?
if so, did you make sure it has the tags to the webpartcreate.dll or whatever the name is. I've always done his install manually, without the *.bat files. that way I can see the command working or not in the cmd window. Hope this helps.
Are you wanting to populate the list...or just have it included as an empty list?
Corey, can you look at your event log files and see if you are catching some errors when you try to create a site.
hey cafearizona, i'm new to sharepoint, and haven't any clue what you exactly mean by populating the list?
you think of filling a doclib with data, or show specific webparts, or the list of sites, or a list definition?
at first i just wanted to brand the mysite with the design of our company. therefore i need to change the masterpage, step two would be to change the webparts and menu items...
if i take steve's masterpage and just add two images save it, install die dll's in gac, install and activate the features i get an unexpected error during the creation of the mysite.
thanks for helping me out!
A collegue of mine prepared a list in her SharePoint 2007 MySite. Her question was how she could make
Many thanks for the post, it is very useful and it looks exactly the sort of thing I wish to achieve. I am looking to modify MySites depending on persona's or role profiles. I have a couple of questions regards the stapler feature, not being a coder myself:
1) Would a .Net developer be able to easily modify the code so that it will build the mysite features based on audiences or user groups and then build a page with web parts, lists and libraries based on that role profile.
2) Without attempting to restrict the users too much, after all the purpose of mysites is an individual space, but can we remove the availability of web parts for them to manually add to their site.
3) Can this feature also be used in tandem with the 7 MySites role profiles that have been developed by Microsoft?
Kind regards
Darren
Hello
I can only access a myprofile page, when I'm logged in as administrator.
Any thoughts on that?
Thank you very much
MikeD
Thank you for creating this.
How would I create a list (calendar) for all new users on there MySite.
Can I use MySiteCreate 1.0 to do this?
Thanks for any info!!
Ok, so my new issue is the following: All I want is the UI to change currently for mysite's, so I do not need the stapler just the staplee. I added my masterpage instead of steve.master and I made the appropriate code changes. Now when I install and activate it renders the entire portal useless. I get an error 403 for everything. I then have to force uninstall to get back to where I was. Any thoughts to this?
THanks for such a great post. I have the feature stapeling working great with the excpetion of 2 issues.
1) is showing the custom master page but\myname is not.
2)\myname is not publicly accessible to the outsite world.
Does anyone have a thought on why these things would be happening?
thanks,
dcp
I am having problems adding a Page Viewer Web Part. It is not a web part that is naturally within a My Site but I have added it so that it appears within the Web Part gallery, but when I try to put it on the page at creation it throws errors.
Error checking web parts: Value cannot be null.
Parameter name: webPart
Any help on this would be appreciated. I am trying to add the page viewer to pull in outlook web access as we don't like the views of My Inbox web part.
This is a good article. We are in the process of planning for MOSS 07 deployment. In our situation, we don't plan to use MySite. Instead, we want to incorporate some of my MySite items such as the MyProfile page onto the main portal. In short this is what we want to do within the main portal:
1) Add a navigation tab on top to say "My Page". Under this, I would have "My Mail", "My Profile". Note, the My Porfile page can be exactly the same as the one in MySite so that users can also modify hi/her profile.
2) The My Mail page would be pre-populated with the Exchange email portlets and it is locked down so that users won't be able to customize. The challenge here is, how do your automatically populate each user's login ID, password, Exchange server name for each of the mail portlets?
To sum things us, we don't want to use MySite but want to incorporate the features in the main portal instead. We don't want to give the user the ability to customize these pages. Has anyone done something similar to this or have suggestions how I might be able to accomplish this?
Ok, I have this feature running and it's great, but I have one question that I can't seem to get to the bottom of.
I want to now add in addition to the other web parts the OWA tasks part and a link part populating it with links. Please any thoughts?
-Corey
I have the problem. I would like to show calendar on Mysite which type of the calendar is ListviewWebpart (Do not use OWACalendar)on middle page to show all day and all event. I wish when fist time user click on mysite it will generate all webpart and show calendar automatically.
I used to add webpart 'ListViewWebpart' and set property into MySiteStaplee.xml but do not success.
Thank you so much,
Hi Steve. Create Blog tab is rendered from default.aspx page of MySite (from MiniConsole), not from master page. Also, it is not in WebParts collection, so the above code does not help. Nonetheless, somehow we have to get rid of it. Any ideas?
Thank you,
Viktor
Hi steve,
Unable to apply "portal site connection" setting to all.
Below is how i setup:
1. go to My Site: Site Actions: Site Settings,
under Site Collection Administrator, select Portal Site Connection. (** is this the correct place to do the setting? because from the Site Information, i can see the URL points to my site folder. not default one for all)
2. login using my own account. and i site collection administrator.
3. tick the connect to portal site and enter the address detail.
after this setting, i get the breadcrumb correctly. then i go to add a new user and go to my site, this new user have no breadcrumb.
please help.
thanks.
Hi.
I have run into a problem with broken MySite links after runnin the stsadm migrateuser command to rename a web after the corresponding user is renamed in AD.
After running the command (stsadm -o migrateuser -oldlogin "ACTGOV\ian ipsox" -newlogin "ACTGOV\ian ipsoy" -ignoresidhistory) we found that the user was still able to log on with the same Sharepoint access (site membership etc) as before, except that clicking on the MySite link first showed the error "An unexpected error has occurred.
Web Parts Maintenance Page: If you have permission, you can use this page to temporarily close Web Parts or remove personal settings. For more information, contact your site administrator."
It appears that the MySite was not migrated. Subsequent attempts to migrate this manually using the "stsadm renameweb" command generated the error "Cannot move the root web of a site collection."
Further to my last post, when I repeated the exercise today, logging on as the user after the migrate, then clicking on the "My Site" link, I got the following error:
"Your personal site cannot be created because a site already exists with your username. Contact your site administrator for more information."
... and furthermore, the user profile was able to be manually altered to fix the reference to the "my site".
Hi
I saw some talk about removing "Create Blog" tab on mysite. Was anyone successful? If so, would you mind sharing it?
thanks
Dee
I want to populate list web parts associated with lists based on my selection from drop down in another web part. Can sme1 guide me how to achieve this functionality
Great article on how to modify the "out of the box" MySites across the organisation using Master
This continues to come up during customization discussions. This is a good place to start. Another great
The buzz these days at least in some circles seems to be social networking. What's cool about this is
I am still tried a thousand of things to make it work. But with no success.
Please help
Where are the flags persisted for the web.Properties.Add(KEY_CHK, "true"); ? Can I examine the state of this property through the SharePoint admin web UI?
Also, I assume they are not commited until the update method is called. Is this correct?
I ran into a problem where the the key check code was returning true for a MySite that was in the process of being instantiated. I had deleted the site collection for this user but it seemd like the property was still hanging around. This is a dev box so it could be some corruption due to wild a** programmers at work.
Finally, why not set the property and call update immediately after the check to ensure that the code is logically grouped with with close proximity? Is it that you can only call the update method once per AllowUnsafeUpdates session? Should you read and apply the property update in a serialized code section?
Perhaps I am over-thinking this thing, but my concern is that that this code is hit at least 3 times during provisioning and I fear that this could set up a race condition. Is all provioning and execution of your code serial and synchronized?
kg
I would like to set the default to "yes" for requiring check-out when you open a document for edit when someone creates a new document library. Is this possible? If so, how?
Also, I would like to know if there is a way to globally change this setting on existing document libraries. Again, is this possible? If so, how?
Finally, I would like to have all newly uploaded documents automatically checked-in when they are uploaded. This would apply to existing and new document libraries. Is this possible? If so, how.
Thanks
Mark Smith
I've implemented your mysite customization and it works really well (I used the source of codeplex). However I've run in to an issue when the code is unistalled.
My scenario is.
1. Install the MySiteCreate feature (All OK)
2. Create my MySite based on the custom Mysite (All OK)
3. Delete my Mysite site (Deletes OK)
4. Uninstall the MySiteCreate feature (Uninstalls OK)
5. Create my MySite with should now be based on the standard OOTB Mysite. (Error occurs and cannot be created.
6. Create a Mysite for someone who did not create a site based on the MySite create feature (Create OK)
Is this a know issue with this method of mysite modification or is there something I can do to remove the dependency on the MysiteCreate feature for those individuals that have used it to create their site.
The error that occurs is as follows:-
Description: An error occurred during the parsing of a resource required to service this request. Please review the following specific parse error details and modify your source file appropriately.
Parser Error Message: Could not load file or assembly 'MySiteCreatePart, Version=1.0.0.0, Culture=neutral, PublicKeyToken=cb1bdc5f7817b18b' or one of its dependencies. The system cannot find the file specified.
Source Error:
Line 5: <%@ Register TagPrefix="wssuc" TagName="Welcome" src="~/_controltemplates/Welcome.ascx" %>
Line 6: <%@ Register TagPrefix="wssuc" TagName="DesignModeConsole" src="~/_controltemplates/DesignModeConsole.ascx" %>
Line 7: <%@ Register Tagprefix="IWPart" Namespace="Microsoft.IW" Assembly="MySiteCreatePart, Version=1.0.0.0, Culture=neutral, PublicKeyToken=cb1bdc5f7817b18b" %>
Line 8:
Line 9: <HTML dir="<%$Resources:wss,multipages_direction_dir_value%>" runat="server" xmlns:
Source File: /_catalogs/masterpage/FrondeMySite.master Line: 7
I hope it's just something I've missed.
Duncan
So, I have been looking into customizing the My Site pages for my company's site for a few days now and this is the first page I have found to actually give me some good information. But, I am slightly lost. I am not new to Sharepoint but I am new to coding.
So, here are my questions:
Could you please tell me, will these instructions make it so that all the my site pages will look the same as my company's website that I have created a new master for?
Where do I find the feature.xml file to modify?
I am completely lost on these instructions since the master for my collaboration site looks completely different. Am I better off modifying the my site master page to look like my collaboration site?
Ruth
I am unable to find any reference document on steps to rename a mysite. I am sure users changing names would be a common scenarion for companies but not sure how SharePoint would handle this. I understand that MOSS profile gets updated automatically (as a part of AD sync) and running migrateuser command updates all references but does the URL of my site change or is a new site created or URL remains the same?
Algunas preguntas relacionadas: ¿Cómo personalizo mi sitio de trabajo de SharePoint? ¿Cómo puedo personalizar...
Algunas preguntas relacionadas: ¿Cómo personalizo mi sitio de trabajo de SharePoint? ¿Cómo puedo personalizar
I have a problem whith
SPFile thePage = curWeb.RootFolder.Files
it only contains a file (Blog.xsl)
Do you know why I don't have the default.aspx file ?
I started modify MySiteCreatePart\PartCheck.cs
you find Begin Edit , End Edit
in this code snippet demonstate how to add sitedocument webpart and Calendar to mysite.
The first time user click mysite it will generate My Calendarr automatically to mysite ,you will see on left menu in submenu Lists
SPLimitedWebPartManager theMan =
thePage.GetLimitedWebPartManager
(System.Web.UI.WebControls.WebParts.PersonalizationScope.Shared);
foreach (WebPart wp in theMan.WebParts)
{
//check each web part to see if matches our typeName
if (wpList.ContainsKey(wp.GetType().ToString()))
wpList[wp.GetType().ToString()].wp = wp;
}
// Begin Edit
SiteDocuments site = new SiteDocuments();
site.Title = "SharePoint Sites";
site.ChromeType = System.Web.UI.WebControls.WebParts.PartChromeType.TitleAndBorder;
site.ShowTasks = true;
site.UserControlledNavigation = true;
site.\" xmlns:xsd=\"\"><SerializableTab Type=\"UserChoice\"><Pair Text=\"Document Center\" Url=\"\" /></SerializableTab></ArrayOfSerializableTab>";
wpList.Add(Guid.NewGuid().ToString(), new WebPartAction(site, WebPartAction.ActionType.Add, "MiddleLeftZone", "0"));
Guid folderId = curWeb.Lists.Add("My Calendarr", String.Empty, SPListTemplateType.Events);
SPList folderList = curWeb.Lists[folderId];
folderList.OnQuickLaunch = true;
folderList.Title = "My Calendarr";
folderList.Update();
// end Edit
//now enumerate items in hash; can't do it in WebPart collection
//on SPLimitedWebPartManager or it fails
foreach (string key in wpList.Keys)
{
wpa = wpList[key];
switch (wpa.Action)
{
case WebPartAction.ActionType.Delete:
theMan.DeleteWebPart(wpa.wp);
break;
case WebPartAction.ActionType.Move:
theMan.MoveWebPart(wpa.wp, wpa.zoneID, int.Parse(wpa.zoneIndex));
theMan.SaveChanges(wpa.wp);
case WebPartAction.ActionType.Add:
theMan.AddWebPart(wpa.wp, wpa.zoneID, int.Parse(wpa.zoneIndex));
case WebPartAction.ActionType.SetProperties:
SetWebPartProperties(wpa, wpa.wp);
}
}
All I like to do is apply a different style sheet to My Site. How do I do this?
Body: I’ve been working in my demo VM again today and had an issue where the creation of a MySite was
Adding,moveing,deleting .dwp is fine but when it comes to .webpart then PartCheck gives an error when checking the typeName. Any idea what might be wrong?
I manage to apply the MySiteCreate with success. Now I am trying to add a pageviewer webpart and a xmlformview webpart using the feature stapling method but it seems to give problems saying that the webparts are either deleted or invalid. I do know those webparts are they but just couldn't get them to show up. Any ideas?
I have a problem when I want to create mysite.
I get an error: "An error occured during the compilation of the requested file, or one of its dependencies. 'Microsoft.IW.PartCheck' is inaccessible due to its protection level.
Do you have an idea why ?
Thx
Hi, isn't there a way for the admin to just centrally change the default theme of the profile page?
Hi,
when I tried to load Asp.Net WebParts I always got errors. I am new to SharePoint 2007 and did not realize the differece between System.Web.UI.Controls.WebParts and Microsoft.SharePoint.WebPartPages.
Steve uses Microsoft.SharePoint.WebPartPages in his example so WebPart in his example is a Microsoft.SharePoint.WebPartPages.WebPart.
To be able to use Asp.Net webpart in this example I just took out using Microsoft.SharePoint.WebPartPages and set instead using System.Web.UI.Controls.WebParts and one can use both "old"-webparts and "new"-webpart in this example because all is derived from Asp.Net-framework.
Hope this will help those how are new to SharePoint.
Hi
In my company managers want to prevent administrators from accessing their sharepoint personal sites(My site) and aslo sites that have sensetive data (salary, stock ...)
is this applicable with sharepoint 2007 or not? if yes how?
Does anyone know of a workaround for the issue with missing support for automatic feature activation on site | web scope?
While I think feature stapling is a great approach to customizing, if you dont like XML hell,i also think that its a potential problem that alot of novice sharepoint users have to navigate to the "Site Features" section of the SiteSettings admin page to accomplish the desired customization.
can you tell me how to change the Quick Launch headings - My Profile, Documents, Pictures, etc...to something else (e.g. Documents to My Documents) on My Sites page?
I want to know how Organizational Hierarchy web part works on MyProfile page of MySite. Is there any way to hide that control?
Matt
In the FeatureActivated method you can access the nodes in quicklaunch with the following
SPNavigationNodeCollection quickLaunchNodes = curWeb.Navigation.QuickLaunch;
You can look throught the node collection and change the Title property of the nodes
do curWeb.Update(); after updating the nodes.
Do you guys know how to make onet.xml template dinamyc for mySite Creation???
Hi, I'm trying something similar to Matt except instead of renaming I want to delete nodes. I've been successful in deleting everything except Pictures, Discussions, Sites and MyProfile. We don't want to delete MyProfile and Sites. I only need to figure out how to delete Pictures and Discussions.
Here is the code that has been put into the FeatureActivated method.
SPNavigationNodeCollection spNavCollection = curWeb.Navigation.QuickLaunch;
foreach (SPNavigationNode spNode in spNavCollection)
{
UpdateLog("Nodes URL: " + spNode.Url, EventLogEntryType.Information);
spNavCollection.Delete(spNode);
UpdateLog("Node deleted URL: " + spNode.Url, EventLogEntryType.Information);
}
curWeb.Update();
I really should have found this before submitting. A co-worker found the problem after I realized everything was deleted if I call the routine three times. Here is the code that works, notice it starts at spNavCollection.Count and decrements.
//Delete the left side menu items called
//documents, pictures, discussions, surveys,lists, and sites
for (int x = spNavCollection.Count - 1; x >= 0; x--)
spNavCollection.Delete(spNavCollection[x]);
I faced problem here is the error An error occurred during the compilation of the requested file, or one of its dependencies. The type or namespace name 'SPWeb' could not be found (are you missing a using directive or an assembly reference?)
Note: i program the page in the sharepoint using inline code how can i add reference in the page to define the missing assembly refernce.
Thank you for your support
Hi Steve
I installed MySiteCreate 1.0 follow your guide but I receive a error when I create new mysite :
I used gacutil to add MySiteCreatePart.dll to GAC.
Please help me fix the error. Thanks Steve.
Hi. I am trying to adjust the alternate access mapping in the Central Admin portion of Sharepoint. I would like to use the fully qualified domain names for the URLs. Everything works fine until I click on "Mysite," where I get an error. The mysite link does not display the FQDN. What can I do so that I can use the FQDN for the portal and the Mysite?
-
"hey, you've already added an entry to the dictionary with that key"
I wonder if that is why I get the IDictionary SearchService error
OWSTIMER.EXE (0x12F8) Office Server Office Server Shared Services SearchServiceInstance (fb57240e-0614-4ef3-b9b0-3d56ca3927d7).
An item with the same key has already been added. Techinal Support Details:
at Microsoft.Office.Server.Search.Administration.SearchServiceInstance.SynchronizeDefaultContentSource(IDictionary applications)
Thanks Steve!
I am able to move ,delete the webparts. but I can't add a custom webparts extended from System.Web.UI.WebControls.WebParts.WebPart class. I deployed the dll in to GAC.Following i have added to the xml file, and added as safe control in wbconfig.
<assemblyName>HelloWorld</assemblyName>
<className>HelloWorld.HelloWorld</className>
<Action>Add</Action>
</WebPartAction>
what I am i doing wrong? it's been deployed and I can see in the site gallery and i can add this webpart to other pages, it works fine,but i can't add it to my site. any help is appreciated!
Introduction This post covers a sample technical design for the most common branding task you’ll encounter
I can add the custom webpart now. In the code i was checking for Microsoft.SharePoint.WebPartPages.WebPart type, and my webpart was extendng from System.Web.UI.WebControls.WebParts.WebPart
As a sharepoint admin, is it possible to edit information on other user's personal sites such as about me, picture, etc? The path for my site is
If i try to go to another user, it redirects me to their public my site page. Thanks!
Hello,
I'm a great cut and paste expert, and I'm sure I could get the staple, staplee thing working but I'm not sure how to get started. Where is the master page for the 'MySite'? I am using the blueband.master and have modified it for the portal and want to use pretty much the standard BlueBand look and feel on the Mysite (i.e. large blue top nav. bar) . The standard MySite is fine for now, so I don't need to add parts or custom code or anything like that. I guess my question is; where is the .master for the mySite so that I may modify it in Designer? Thanks
I have excactly the same question as Bob. I've customized the default.master and the application.master and everything looks fine except for the MySites.
I've searched the complete Server for *.master but couldn't find a mysite.master or something like that.
hello, i've been trying to customize "my site" in my MOSS2007 portal without success. I'm working with a spanish version of sharepoint in a server farm configuration. I've followed all the steps described in the blog and it just doesn´t work. However, i tried the same using a english version of sharepoint in a single server configuration and it worked. Does anyone knows how to make it work in the first scenario???
Sorry my english is so bad, but I will try to explain you. I have a problem, I deployed this feature in a farm and eveything it seems ok, when I search a person i have an error abiut permissions, but this error only happens if tha person's site it doesn't exist, when this person create his site, I search a person and the mistake dessapears. Please, any idea?
I am interested to know how to allow acccess for users eg WWW to see my sharepoint site without being a user or in a group. The stages of anonymous and the other two settings to allow this have been set but the link i have given my friends still comes up with access error from over the internet plz help me in solving this problem as it has become anoying thx Adam
I have created a site in MOSS 2007 using Colloboration site template. It has a "My Site" feature for personalization. These "My Site" have standard "Getting Started" web part to assist users with the use of SharePoint. The Getting Started web part should be removed automatically from a users "My Site" after two weeks if, it has not been manually removed by the user. Once the user has removed the web part, it can only be added back manually by the user.
Kindly Request you provide the guidelines, any links, or best practices to
achieve this functionalities.
I did a clean install of SP, then used the codeplex batch files (which include a gacutil in the beginning, while the manual states I should do it in the end of the procedure!) and ended up with a "File not found" error when my new user tried to access his mySite.....
This obviously does not work!!
This is the longest blog discussion I have seen in my life !
"I like"
Hi Steve
I have installed both the feature as you mentioned in your blog like,
stsadm -o installfeature -name MySiteStaplee
stsadm -o activatefeature -name MySiteStaplee -url
stsadm -o installfeature -name MySiteStapler
Its installing properly now problem is that the feature is activated for a particular user (samiran)
is there any way to activate the feature for all my site users?
Today we will look at the option of Site Definitions and Features for modifying the Master Pages....
i've got a application created in visual studio for moss2007 as a .stp file. now that i've uploaded it to site template gallery it is displayed there as template, but when i try to create a new site it's not listed under "custom".
Do you have any idea what went wrong?
Greetings,
Johannes
Help me;
I have a sharepoint site where there is a group of users. User clicks on mysite, one of the web part displayed is the getting started web part with other few default web parts.
I want to hide/delete/close that getting started web part after a period of time(say 2 days). How best i can achive this task.
Thanx in advance.
Sreejesh.
Hi, I need some help. I inherited a test enviroment with the "portal site connection " customized to link to the test portal. When I move the main portal to other enviroments staging and production and put the mysitecreate code. When the users create their sites the portal site connection is still pointing to test. I have searched the master page and cannot find the link. I have also following the instructions above and still not successful.
This is a very Good blog.....
Thanks for Sharing
I find very useful information in your site, about customising MySite.
In my organisation, I have designed a customised masterpage and a number of websites underneath it, in Sharepoint 2007.
Howvere, I can't get 'MySite' to look identical to all my other sites.
Do I just need to use the two feature files, which you have listed earlier (stapler and staplee), to achieve a 'MySite' which is idententical to all my other sites, regardless of who the user is, at any given time?
Dan
Environment: MOSS 2007
portal adress:
my site adress:
I don't know when this started happen or what triggered it, but "My Site" link on top right section of the screen, prompts users for credentials three times and fails eventually with "401 Unauthorized" text on the screen.
After realising the problem I went to Shared Services Administration and have modifed My Site Settings, and change personal site provider to . This worked but all previously created "My Sites" were missing due to change in URL, so I revert back to original My Site setting which is . Now, the direct URL to works but "My Site" link on the top right section of portal takes all users to this exact URL: and prompts for user credentials three times and finally gives "401 unauthorized" error.
Could anyone please point me to right direction?
I was wondering where I can find the source code for "WebPartAction" class. I don't seem to find on this blog.
Nick
Hi;
I am restricting the user not to upload their picture on thier own ,instead I want to write an application which code does the part in uploading the pictures . do u have any ideas how to acheive this???
It looks like it worked for me, but when i look in the log file i can see the following error:
Description:
Error serializing MySiteStaplee.xml file: There is an error in XML document (0, 0).
What could go wrong here?
MySites are very interesting on many levels. When you start to think about how to architect, deploy or
MySite Mark Arend hat in einer Visio-Grafik die Zusammenhänge der MySite grafisch aufgearbeitet Schon
I just came across an error that I saw mentioned here back in August:
In our situtation, it turned out that someone had added "Administrator" to the SharePoint users and then created a "MySite". Evidently they "unknowingly" added the local administrator and not the domain admin (as they thought). When we logged on as "Administrator" and clicked on MySite, we would get the error because of SharePoint thought the "/personal/adminstrator" site already existed (which it did for "local" admin), but it was still trying to create a new MySite for the current user ("domain" admin).
I deleted the MySite site collection for the "local" administrator and recreated as the correct user. Everything went back to normal.
its not run
how can i change default to steve
I've installed and activated the feature but upon failing, I deactivated and uninstalled. My fear is I've done something else behind the scenes because now I'm getting a "Page cannot be displayed" error for MySite.aspx.
When I create a new collection, I click on "Set as MySite Host" but then get the aforementioned "Page cannot be displayed" message; I find this odd because the URL for "Set as Host" is the same.
Upon getting this error, I opened MySite.aspx in VS2005 and noticed "application.master" couldn't be found; I opened "application.master" and it said "TopNavBar.ascx" couldn't be found. This leads me to believe there's something wrong with a web.config file somewhere but I'm not sure where to look.
On my development box, MySites reside on port 80 and in the wwwroot folder of Inetpub, there is a web.config file. On my production box, where these errors are occuring, we've developed MySite as a seperate web app under port "42264." In the wwwroot folder of Inetpub on port 42264, there isn't a web.config file or folders such as "_app_bin" or "bin."
Is the web.config file in wwwroot supposed to exist on "42254?" Could this be why I'm getting an error?
if i want to add anew webpart example owainboxpart or owainboxtasks its not run
plz give me the correct xml of this two webparts
give me correct example of how add myinboxpart and tasks to mysitestaplee.xml
I refactored all the code.
I also changed the MySiteCreatePart to load web controls inherited from asp webcontrols, as opposed to sharepoint web controls.
Also has sample code for serializing web part, this so that you can put the web parts details in the MySiteStaplee.xml. And code for CDATA for content query webparts in the MySiteStaplee.xml.
dig it here...
For my first post I am going to illustrate how someone can limit the web parts available to users on
I was wondering where to find WebPartAction class code because I just couldn't find it.
I am new in SharePoint but I must remove one web part from 'my site'.
Please help.
I installed your feature successfully, which is Big thank YOU!
do you know how to modify links which inside <SPSWC:PersonalWelcomeWebPart> on the default mySite page to make all the mysites(other user mysites) can see it.
and add one more link to the <asp:SitemapPath>
how can i use link and rss webpart in my site
plz give me property of link and link to auto configure
thnks
Hi, I have a few questions:
1) My master page references a custom CSS located under _layouts/1033/Styles (as per Heather Solomon's SharePoint 2008 conference). The home page and profile page look great, but when I create a blog under the my site, it referts to the default.master template. How do I make this use my masterpage?
2) I noticed that My Sites have a lot of application pages. What's the best way to brand these. I thought that perhaps defining an alternate CSS would do this (through code), but it doesn't seem to work. I understand that Alternate CSS comes with MOSS (not WSS), so should I activate the standard MOSS/Publishing features in the My Site to do this?
3) Have you looked at moving your sample code into Ted Patterson's STSDEV utility? That is, through a solution package? If so, would you recommend creating two separate solution packages, or just one (with both features included)?
I downloaded the project from CodePlex but there does not seem to be a manifest.xml. Is there another download?
I am Using Web Single Sign-On (Web SSO) With ADFS. I set default zone to Windows Authentication and Extranet Zone for Web Single Sign-On follow by technet article
In sharepoint web, I can get and add user from Web Sigle Sign-On, But cannot get users from Web Single Sign-On followed your instructions about creating a customised MySite and when I log on as a new user, for the first time and try to create a personalised MYSite I get an message saying ' An unexpected Error has occurred'.
I am using Sharepoint 2007 and I really need to be able to customise MYSites.
After upgrading SPS 2003 to MOSS 2007, the mysite link is working fine. But when I try to amend my profile in MySite in MOSS 2007 I get this error message:
An unknown user profile error has occurred. Try recreating this user profile or updating this user profile from the directory service to resolve this problem.
That mean no one is able to amend their private MySite area.
Do you any idea why?
Body: I was finding it hard to find a specific answer on how to write a Solution Package with a Feature
I have noticed in my testing that following activating and using the feature, the IWPart user control is causing the following error in Edit Mode whenever users try to add ANY web part:
Unable to add selected webpart(s)
My Tasks: exception occurred. (Exception from HRESULT: 0x80020009 (DISP_E_EXCEPTION))
There needs to be a check around the code to see if we are in edit mode. I tried to use SPControlMode to see if we are in Edit, Display, Invalid or New mode. Unfortunately, whether I am in Edit mode or not, the mode returned is 'Invalid'. It appears that this check only works for Publishing sites, not My Sites.
Any ideas anyone?
I have deployed your solution and I am having this Access denied problem and i cannot connect to the sharepoint with the users which has not full control. Please help i've tried almost everything. Help please.
i am having problem related to breadcrumb within in masterpage (I am using my own portal design) its, coming fine in sharepoint designer (editor) but when i check the masterpage within in subpages in the browser than the position of the breadcrumb is not right it coming the body content section area, else everything working fine in terms of functionality etc... Also, when i check my document library page the breadcrumb position is coming fine the place where I need.
Tafseer
--------------------------------[END QUOTE]------
How do we know that default.aspx wasn't customized on site to a different page name?
Is there a way to retrieve the default page file name for the Personal site (SPWeb)?
Steve. This is a great article. I am new to SharePoint and am wondering what is the best way to modify or replace the public MySite. We want to change some stuff that is actually on the person.aspx page (mainly the ProfileViewer). I have done some research and I see many people are customizing the actualy person.aspx page but I am worried that we will run into issues when applying future hot fixes etc. Is there a solution where I can create a MyCompanyPerson.aspx to replace the OOB person.aspx and have all the links point to my new public mysite?
Hi, do you have the code of both assemblies? I don't seem to be able to get them from the blog.
--Vamsee
We are using this method to remove all web parts from the mysite but we have a problem with the owacalendar. There seems to be something calling it after we have removed it as we get this error message in the logs and an error was shown to the user:
Microsoft.SharePoint.Portal.WebControls.OWACalendarPart, Microsoft.SharePoint.Portal, Version=12.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c. Failed to set SaveProperties=true on GET. Exception occurred. (Exception from HRESULT: 0x80020009 (DISP_E_EXCEPTION))
This happens after our control have deleted the web part so my guess is that when it's added in first place some process starts that then tries to reconnect to the calendar web part. The only solution we have found this far is to manually remove the calendar from onet.xml but that is as you all know a crappy solution. Have any one had the same problem?
//Niclas
Hi there,
How do i disable a user from creating document libraries and list in his mysite only.
Any help
Sriniko
Our My Site public profile page does not display a users Shared Documents lirary. I can't find any way to customize persons.aspx to show that on all users sites as the web part doesn't seem to be available when trying to customize. The Shared Documents folder on a users profile can be found by hunting (clicking on View All Site Content in left nav) but can't seem to find anyway to provide access to this directly from the Public Profile Page. All documentation seems to suggest this should be there by default.
Please help - no one is answering my posts on other message boards about this.
Hi Steve. I have changed html code (layouts, css) of your steve.master but it didnt apply to the My Profile page. But I can see new format on the My Home page. Please give me some tips
Hi Steve:
I'm trying to add a new popup option from the dropdown list of each item in a ListViewWebPart, where it shows: View Properties, Edit Properties, etc; I'd like to add my own option. Where could I get info about this (Always in a ListViewWebPart).
Thanks!
My Site Recommendations The following My Site recommendations are a composite of best practices taken
Thank you, this a huge help and asset to the community of sharepoint developers and consultants.
Great Post.
Thanks for sharing.
I have two set of query, could you throw some light on how to add cutom web parts to Web Parts Gallery of My Site using Staplee.Then add this web part to My Site so that it could be viewed by all users.(Assuming none of the site were provisioned before).
Hi Steve, I have been struggling for a week to get my custom page layout populated with custom webparts when an instance of this page layout is created. I have adopted your approach and using your web control to place the web parts on the page for which I have to make certain changes to take page from the context rather than the default page and add the key to page properties instead of the web.
Finally, the solutions seems to be working for me but it doesn't show up until I refresh the page. I have tried Response.redirect and Server.Transfer with relative and full url's but none of the thing is working. Also I have to checked in the newly created page otherwise I also requires me to check in the page from UI and then shows up the web parts which I have been able to resolve by thePage.checkin("Draft"). But for this page refresh I am unable to display webpart when the until i press F5.
Great article. Kudos.
This is one of the best article for My Site customization.
I had a simple requirement.
1. Adding a custom web part to My Site, When the site is provisioned.
Although it looked only when i got your code.
SPList xList = curWeb.GetCatalog(SPListTemplateType.MasterPageCatalog);
/// this code has to run under elevated
///priviledge for adding web parts to gallery
/// for all users
SPSecurity.RunWithElevatedPrivileges(delegate()
{
AddWebPartToGallery(curWeb);
});
Now the function AddWebpartToGallery(SPWeb web) is given below.
private void AddWebPartToGallery(SPWeb web)
string strWpPath = @"C:\Program Files\Common Files\Microsoft Shared\web server extensions\12\TEMPLATE\FEATURES\MySiteStaplee\MySiteBasicWP.webpart";
SPList wpGallery = web.GetCatalog(SPListTemplateType.WebPartCatalog);
SPFolder wpGalFolder = wpGallery.RootFolder;
try
{
if (wpGalFolder != null)
{
FileInfo wpFile = new FileInfo(strWpPath);
if (wpFile.Exists)
wpGalFolder.Files.Add(wpFile.Name, wpFile.OpenRead());
wpGalFolder.Update();
UpdateLog(wpFile.Name + "Web part Added at site " + web.Title, EventLogEntryType.Information);
else
UpdateLog("Web part Could not be at File path mentioned " + strWpPath, EventLogEntryType.Warning);
}
}
catch (Exception ex)
UpdateLog( "Web part Added at Gallery has caused this error: " + ex.Message, EventLogEntryType.Error);
}
Schema XML Contains.
<assemblyName>MySiteBasicWP, Version=2.0.0.0, Culture=neutral, PublicKeyToken=71f19b5f0214456d</assemblyName>
<className>MySiteBasicWP.MySiteBasicWP</className>
<zoneID>TopZone</zoneID>
<zoneIndex>0</zoneIndex>
Still i am unable to perform it fully.
I am able to add the custom web part to Web Part Gallery(this action is performed before the MysiteStaplee.Xml is read). But the code don't add Custom Web Part which is mentioned in the schema XML (MysiteStaplee.Xml).
I tried to figure out the error but couldn't get/resolve the upto the root cause.
I got the error the Parameter Web part Couldn't be null.
I checked the Web part i was using was from System.Web.UI.WebControls.WebPart.
I tried testing with the Microsoft.SharePoint.WebPartPages.WebPart also. But with out success.
Could please any-one tell how to solve this.
I can think only of some error in my Schema XML next.
Thanks in Advance.
Anticipating to hear from some one.
Saroj
sarojk@ocbc.com
I am facing one problem,
i have actually followed your
blog to customize MySite. There we set the SPWeb.Properties[key] value to avoid unnecessary execution of the code.
the problem i am facing is, i have created a feature and on activation of that feature i loop through all the personal sites and reset the SPWeb.Properties[key] to null. This works fine. But once the feature is activated, if i open my personal site, it throws the following error - the error is comming at the point where i access the SPWeb object.
this is the) "
any idea what could be causing the problem ???
Hello Steve
I read and learnt a lot about mysite. Superb Article !!!
Lets discuss my scenerio
I have 200+ users and everyone want their own site to maintain their information, their site name with their logos.
I reviewed their information and thne decided I should go for My Site. So I have used ADAM based FBA authentication successfully. Now every user having their own sie and they are quite happy. Hoever still it is beta testing, so we can do changes any time
My first question to provide my site to 200+ user is right approach? What problem we can face in future as far as maintenance? If this not the right approach what should be alternate approach to maintain? We are assuming in next 1 year user will grow up to 1000+.
2. How can an admin see their site and their contents to know what information user are maintainig? We want to restrict user if he is violating any information.
3. My Logo and Site title behaviour is inconsistent? When user update these information in site setting using title descripton, that time it show perfectly but when they go to my host or my profile page, Logo and Title replaced with the default information?
4. What is best way to do user maintainence like resetting their password, new user creation and enable for my site? Or is there any tool available for this. I reviewed one that is ePok, but not happy?
Sorry for long posting.
I am waiting for your and other jentlemen valuable advise.
Ani
Fantastic article, this was the first thing I came across when I ran into trouble working with branding My Sites. Steve, when the feature is deactivated and uninstalled, do the My Sites revert back to default.master or do they remain with steve.master? Also I cannot seem to delete steve.master from SPD nor modified versions of steve.master. Any help from you or your fellow
I have MySites on and SSP Admin site on, i.e. they are in different web application, I have a feature to change the web.config of to add to <SafeControls> list, but to my surprise when I activate the feature on and feature updates the web.config of and not off
Here is how I get reference of SPWebApplication
SPWeb web = properties.Feature.Parent as SPWeb;
using (SPSite site = web.Site)
UpdateWebConfig(site.WebApplication);
Ok so I know we can change the master page and web parts but how do we change the layout without modifying the system file, i.e. the default.aspx page used for the private view. I have a custom master page with several placeholders that I need to plug web parts into and I don't want to touch the system file as it is advised not to.
Hi Steve, great article and responses.
Just a quick question - we have a need to disable users editing certain fields, in particular not to allow them to upload their own photographs (HR to manage).
How might we accomplish this - perhaps a combination of this method, javascript etc.?
Many thanks,
This feature is pulling a default.aspx file from somewhere, but I can't seem to find it. Does anyone have an idea where I could find it? I want to change the column layout.
This is a great blog, but I am thinking that it needs to be continued else where. Is there a URL that is expanding on this feature? A place that others are using it and that others can share.
I have done everything to track this error down:
Has anyone been able to determine where this is coming from? I am able to use a custom masterpage with no issues, however not able to move,add,delete or modify a webpart. Any help is appreciated!
Thank you for the wonderful article.
Im trying to delete and add some NavigationNodes to the QuickLaunch.
I added these Lines in PartCheck.cs:
SPNavigationNodeCollection QuickLaunchNodes = curWeb.Navigation.QuickLaunch;
QuickLaunchNodes.Delete(QuickLaunchNodes[2]);
And its not working!
You answered before that you need to get the "SPWeb for the My Site root", can you please show me how that is accomplished?
Thanks in advance
Moses
Sous entendu comment modifier les My Site de MOSS 2007 avant création et leur maintenance après création.
Hi all,
When I try to enter MySite with any user credentials, I am redirected to the same Mysite (different user) each time..
For example -
for hytem it is Mysite for hytem
and for mossadmin its again Mysite for hytem,
Has anyone had difficulty, when they add the following line to the master page?
I added it, as per instructions, then when I try to open the master page, later, I am prevented.
I get an error message saying: -
"That <IWPart:PartCheck control type is not allowed on this page. It is not registered as safe"
What can be done to overcome this?
regards,
Whenevr, I try to create MySite, following the above instructions, I get the following error:
You do not have permissions to have lists and pages within My Site.
What is the cause of this?
Can anyone help me here in regards to this article. I am having error on the masterpage.
the error is it can not find the assambly file.
can someone let me know what i can try ?
Hi there, Is there anywhere on the web that gives a full detailed account on how to customise 'MySite'.
I have tried various approaches, which I found on the web, and they all failed.
I am using MOSS 2007 and I can't find adequate support, anywhre, for the customising of MySite.
It is my opinion that this issue needs to be addressed urgently, by Microsoft. I think that some solution needs to be found and incorporated into the next release of Sharepoint.
Despite trying various methods, they all failed, for me.
Dan,
Have you checked the Personalization permission to see , if you have permission to create mysite.
Hello,
We are facing an issue that's causing Mysite to sometimes timeout during creation.
We tried the following:
-Create my site for 10 users at the same time.
Result: 7 users created successfully and 3 timeout(not corrupted, you can refresh to create them again)
-Disabled the features we are activating and repeated the same scenario.
Result: 7 users created successfully and 3 timeout(not corrupted, you can refresh to create them again)
-Disabled the stapler feature(feature that override default SharePoint My Site and apply our webparts and master page, so My Site now are created with default SharePoint)
Result: 7 users created successfully and 3 timeout(not corrupted, you can refresh to create them again)!
Any ideas about the problem root cause?
In MOSS 2007 as part of the SSP provisioning process “SPSMSITEHOST” site definition is being used to
I installed this feature, but am having an issue with the web.config.
I get the following page when trying to go to a mysite after installation. Please note, when I replaced the web.config with the original version, this error does not display.
Can someone tell me what I did wrong? I followed the instructions without any issues, and everything appeared to install correctly.
Please note, I was not creating a new My Site, but navigating to my existing My Site when this error displayed. I have not tried creating a new site yet, as I wanted to ensure there were no errors with existing sites first.
--------------------------------------------
Runtime ErrorServer"/>
This removes the blog button on the MySite, but none of my other master page changes are appearing in the MySite. However, my changes appear in the MyProfile page for each user. What could I be doing wrong? Btw, I have MySites setup in their own web application.
I should have mentioned in my previous post that to remove the blog button, all I did was add the change that Gary Lapointe suggests here:
I still don't understand how that change worked, but none of my other modifications applied to the MySite.
When you use site definitions to create sites it's often pretty unclear what the order is in which | http://blogs.msdn.com/sharepoint/archive/2007/03/22/customizing-moss-2007-my-sites-within-the-enterprise.aspx | crawl-002 | refinedweb | 23,239 | 64.51 |
Controller
Controller¶
A controller is a PHP function you create that takes information from the
HTTP request and constructs and returns an HTTP response (as a Symfony22 controller in action.
The following controller would render a page that simply prints22,2 takes advantage of PHP 5.3 namespace functionality to namespace the entire controller pattern2 uses a flexible string notation to refer to different controllers.
This is the most common syntax and tells Symfony2:
The controller has a single argument,
$name, which corresponds to the
{name} parameter from the matched route (
ryan in the example). In
fact, when executing your controller, Symfony2 matches each argument of
the controller with a parameter from the matched route. Take the following
example:
- YAML
- XML
- PHP
The controller for this can take several arguments:_name}parameter matches up with the
$last_nameargument._nameweren't important for your controller, you could omit it entirely:
Tip
Every route also has a special
_route parameter, which is equal to
the name of the route that was matched (e.g.
hello). Though not usually
useful, this is equally available as a controller argument.
Creating Static Pages¶
You can create a static page without even creating a controller (only a route and template are needed).
Use it! See How to render a Template without a custom Controller.
The Base Controller Class¶
For convenience, Symfony2:
This doesn't actually change anything about how your controller works. In
the next section, you'll learn about the helper methods that the base controller
class makes available. These methods are just shortcuts to using core Symfony2
functionality that's available to you with or without the use of the base
Controller class. A great way to see the core functionality in action
is to look in the
Controller class
itself.
Tip
Extending the base class is optional in Symfony; it contains useful
shortcuts but nothing mandatory. You can also extend
ContainerAware. The service
container object will then be accessible via the
container property.
Note
You can also define your Controllers as Services.
Common Controller Tasks¶
Though a controller can do virtually anything, most controllers will perform the same basic tasks over and over again. These tasks, such as redirecting, forwarding, rendering templates and accessing core services, are very easy to manage in Symfony2.() method is simply a shortcut that creates a
Response
object that specializes in redirecting the user. It's equivalent to:
Forwarding¶
You can also easily forward to another controller internally with the
forward()
method. Instead of redirecting the user's browser, it makes an internal sub-request,
and calls the specified controller. The
forward() method returns the
Response
object that's returned from that controller::
And just like when creating a controller for a route, the order of the arguments
to
fancyAction doesn't matter. Symfony2 matches the index key names
(e.g.
name) with the method argument names (e.g.
$name). If you
change the order of the arguments, Symfony2 will still pass the correct
value to each variable.
Tip
Like other base
Controller methods, the
forward method is just
a shortcut for core Symfony2 functionality. A forward can be accomplished
directly via the
http_kernel service. A forward returns a
Response
object::
This can even be done in just one step with the
render() method, which
returns a
Response object containing the content from the template::
Note
It is possible to render templates in deeper subdirectories as well, however be careful to avoid the pitfall of making your directory structure unduly elaborate:
Accessing other Services¶
When extending the base controller class, you can access any Symfony2 service
via the
get() method. Here are several common services you might need:
There are countless other services available and you are encouraged to define
your own. To list all available creates a special
NotFoundHttpException
object, which ultimately triggers a 404 HTTP response inside Symfony.
Of course, you're free to throw any
Exception class in your controller -
Symfony22 provides a nice session object that you can use to store information about the user (be it a real person using a browser, a bot, or a web service) between requests. By default, Symfony2 a PHP
abstraction around the HTTP response - the text-based message filled with HTTP
headers and content that's sent back to the client:
The Request Object¶
Besides the values of the routing placeholders, the controller also has access
to the
Request object when extending the base
Controller class:. | http://symfony.com/doc/2.0/book/controller.html | CC-MAIN-2016-36 | refinedweb | 738 | 50.06 |
This is your resource to discuss support topics with your peers, and learn from each other.
04-07-2010 09:09 AM
I get the context? menu (the one with Show/Hide keyboard, and Full Menu options) when I press (touchscreen) or click a field in a FieldList. I don't know why.
I extended ListField as follows:
public class TableField extends ListField { TableFieldCallback callback; public void setTableCallback(TableFieldCallback callback) { this.callback = callback; } protected boolean trackwheelClick(int status,int time) { App.DEBUG("CLICK START " + time); if (null != callback) callback.onclick(this.getSelectedIndex()); App.DEBUG("CLICK CONSUMED"); return true; } }
The Field Manager that contains this field implements TableFieldCallback and sets the callback object.
The list field does not contain any child fields, rather the "data" is drawn for each field.
I press (click) a field (row) in the field list and trackwheeClick() is called which calls my onclick() method in my object and is processed (it loads another view), however the context menu is also being displayed, even tough I returned true from trackwheelClick() signaling that I consumed the event.
This behaviour is ofc unacceptable, I need a way to be able to disable the context menu when the field is clicked.
Note: sometimes the full menu appears but I assume that's because I also accidentally click / press the full menu option in the resulting context menu... perhaps its because the context menu gets the release event from the press/click don't know.
Solved! Go to Solution.
04-07-2010 01:12 PM - edited 04-07-2010 01:13 PM
04-07-2010 03:51 PM
return true from navigationclick.
04-07-2010 05:20 PM - edited 04-07-2010 05:23 PM
I have tried using navigationClick() instead (basically renaming trackwheelClick to navigationClick()) and it's still doing it.
What I have is several layers of VerticalFieldManagers/HorizontalFieldManagers then a ListField in one of those managers.
A | VerticalFieldManager B | -> VerticalFieldManager (taking up most the display) C | --> VerticalFieldManger (taking up 50px at top of display) D | --> VerticalFieldManager (taking up rest of its container - ie main part of display) E | ----> VerticalFieldManager (taking up all its container) F | -------> ListField (taking up all its container) G | -> HorizontalFieldManager (70px hight at bottom of display) H | ---> CustomField I | ---> CustomField J | ---> CustomField K | ---> CustomField L | ---> CustomField
Focus starts in the HorizontalFieldManager (G) on one of the CustomFields (H), the following sequence happens:
1. Press one of the ListField (F) entries (rows).
2. The Show Keyboard/Switch Application/Full menu appears
3. I press the list field entry again
4. The Show keyboard/Switch Application/Full menu disappears
5. I press the list field entry a third time
6. navitationClick() (or trackwheelClick) is called on my ListField
7. the program swaps in another Manager in place of E, i.e. it deleteAll() from (D) and adds a new possibly different kind of (E)
8. I return true from navigationClick() (to consume the click)
9. the full menu appears over the screen (and new manager)
Ok, so I can kind of understand the first menu popping up, because my program didnt trap and process the press. What received the press (click) event?
But why when I return true from navigationClick() is a full menu still displayed?
04-08-2010 02:29 AM
hey for touch return true on nothandled unclick also
and for non touch returning true from not handled navigationClick should do the job.
04-08-2010 07:32 AM
04-08-2010 08:12 AM - edited 04-08-2010 08:14 AM
From some tests I just performed, i.e. I added navigationClick and trackwheelClick it would seem the default implementation of these for a Manager is to return true (consume) the click.
Specifically I had a horizontalfieldmanager (G) with some custom fields (H-L) inside. I originally was not overriding these methods, and trapping clicks via navigationClick() on the custom field, and that was working, no extra unwanted menu. If I override the navigationClick() and trackwheelClick() methods on the manager containing those fields and return false, the full menu then appears. return true and the menu disappears again.
So the default implementation on a manager for navigationClick() and trackwheelClick() would seem to be to return true (consume the click). So I should not need to add these methods to my managers. Good, because I wasn't.
I am already returning true from navigationClick() on my custom ListField so I don't understand why I see the menu when I press it.
04-08-2010 08:42 AM
Are you CustomFields buttons? if so, extends the ButtonField and pass CONSUME_CLICK as a style to the constructor.
In regards to the ListField, my list fields are based on ObjectListField and as long as I return true from navigationClick, I dont get the contextual menu. I saw that your ListField is taking up all the height so it is not possible that you are clicking in an unhandled region.
During debug, do you see the click coming through your navigationClick method?
04-08-2010 08:45 AM - edited 04-08-2010 08:47 AM
Grr this is annoying. Ok, if I do the following:
protected boolean navigationClick(int status, int time) { return true; } protected boolean navigationUnclick(int status, int time) { return true; } protected boolean trackwheelClick(int status, int time) { return true; } protected boolean trackwheelUnclick(int status, int time) { return true; }
I don't get the full menu popping up, neither do I get a response to my click (i.e. because I don't call my callback). If I change this to
protected boolean navigationClick(int status, int time) { if (null != callback) callback.onclick(this.getSelectedIndex()); return true; } protected boolean navigationUnclick(int status, int time) { return true; } protected boolean trackwheelClick(int status, int time) { return true; } protected boolean trackwheelUnclick(int status, int time) { return true; }
The press is actioned, but I also now get the full menu. The only difference is the call to callback.onclick(this.getSelectedIndex());
What callback.onclick() does is create a new manager, remove the current manager (F) from its container (E), and adds the new manager (F) to container manager (E). The removed manager is not destroyed, just removed from the display.
04-08-2010 08:49 AM - edited 04-08-2010 08:51 AM
> During debug, do you see the click coming through your navigationClick method?
Yes, my custom ListField navigationClick() is getting called.
I wonder if should I defer calling callback.onclick() until after this event has completed, perhaps its because I remove the Manager+ListField from the display and add a new Manager (with its own content) during the click event that is confusing things. | https://supportforums.blackberry.com/t5/Java-Development/Preventing-context-menu-from-appearing-on-click/m-p/477798 | CC-MAIN-2016-36 | refinedweb | 1,106 | 54.02 |
How to unpack a tuple in Python
In this tutorial, we will learn how to unpack a tuple in Python.
In Python, tuples are similar to lists and are declared by parenthesis/round brackets. Tuples are used to store immutable objects. As a result, they cannot be modified or changed throughout the program.
Unpacking a Tuple in Python
While unpacking a tuple, Python maps right-hand side arguments into the left-hand side. In other words, during unpacking, we extract values from the tuple and put them into normal variables.
Let us see an example,
a = ("Harry Potter",15,500) #PACKING (book, no_of_chapters, no_of_pages) = a #UNPACKING print(book) print(no_of_chapters) print(no_of_pages)
Output:
Harry Potter 15 500
Also, note that the number of variables on the right as well as the left-hand side should be equal.
If we want to map a group of arguments into a single variable, there is a special syntax available called (*args). This means that there are a number of arguments present in (*args). All values will be assigned in order of specification with the remaining being assigned to (*args).
This can be understood by the following code,
a, *b, c = (10, 20 ,30 ,40 ,50) print(a) print(*b) print(c)
Output:
10 20 30 40 50
So, we see that ‘a’ and ‘c’ are assigned the first and last value whereas, *b is assigned with all the in-between values.
Unpacking can also be done with the help of a function. A tuple can be passed in a function and unpacked as a normal variable.
This is made simpler to understand by the following code,
def sum1(a, b): return a + b print(sum1(10, 20)) #normal variables used t = (10, 20) print(sum1(*t)) #Tuple is passed in function
Output:
30 30
You may also read: | https://www.codespeedy.com/how-to-unpack-a-tuple-in-python/ | CC-MAIN-2021-17 | refinedweb | 305 | 67.59 |
Difference Between Phalcon vs Laravel
Phalcon is referred to as a web framework. It is a PHP framework based on the model view controller architecture or pattern. It was mainly developed by Andres Gutierrez. It was initially released in the year 2012. It is written in C and PHP. It supports the different platforms like Unix, Linux, Mac OS X, and windows.
Phalcon is also referred to as Zephir/C extensions that are loaded together with PHP one time on web server’s daemon start process. The code is not interpreted as it is already compiled to a specific platform and processor. In this classes and functions are ready to use for any application. Phalcon has some basic features like low overhead, which help in less memory consumption and CPU compared to other frameworks. In Phalcon, MVC and HMVC are used with help of models, views, components, and controllers. The other features are dependency injection, Rest, AutoLoader, and Router.
Laravel is referred to as a PHP web framework. It is mainly based on the MVC pattern. It was developed by Taylor Otwell and initially released in the year 2011. Laravel has some features like a modular packaging system, different ways to access the database management system and application deployment and maintenance. It is written in PHP 7 language.
Laravel is robust and easy to understand. It reuses the existing components of different frameworks which helps in creating the web application. Laravel has excellent features to enhance the functionality and incorporates the basic features like Codeigniter, Yii and other programming languages like Ruby on Rails. With the help of Laravel, the web application becomes more scalable and owing to the laravel framework. It helps in saving time while designing the web application and it includes namespaces and interfaces.
Head to Head Comparison between Phalcon vs Laravel (Infographics)
Below is the top 6 difference between Phalcon vs Laravel :
Key Differences between Phalcon vs Laravel
Both Phalcon vs Laravel are popular choices in the market; let us discuss some of the major Difference Between Phalcon vs Laravel :
- Phalcon has one of the fastest PHP frameworks as the framework extension built in C which is extremely fast and efficient. Laravel comparatively slow framework as its mainly built on PHP and Symfony.
- Phalcon use the volt template engine, which is mainly embedded into phalcon itself and takes its inspiration from Jinja template engine. It has a very clear and understandable syntax. It complies very fast and it avoids the bottleneck for frameworks overall speed. In laravel, we have Eloquent ORM which is simple and fast. ORM helps in organizing the application database and it supports most of the databases like MySQL, Postgres etc.
- Phalcon has good performance and speed whereas laravel has poor performance and less speed.
- Phalcon requires good programming skills to understand and you need to have knowledge of C programming as well. For laravel, there is need of programming skills to understand and write the code.
- Phalcon has loosely coupled components and customizable with Zephir. Laravel comes with its command line interface called Artisan. With help of this different task can be performed like database migration, seeding database etc. It is mainly used for building REST APIs, resource routing and intuitive Eloquent CRUD and it takes less time to write as well.
- Phalcon is more flexible in terms project structure. Laravel is not that flexible like phalcon.
- Phalcon does not have good community and documentation as compared to Laravel. Laravel has a good community and its documentation is thorough and very good. It covers everything and very helpful to experienced and new users alike. It makes easy to write web apps with authentication capabilities and fully powered authorized class.
- Phalcon is difficult to learn but it has less learning curve. Laravel is easy to learn but it has a steep learning curve as sometimes features are updated in the new version but there is no online document and support provided, which makes it difficult to understand and work it with.
- Phalcon uses Volt template system. Laravel has a very powerful template system called Blade.
- Phalcon uses good design practices whereas laravel follows the bad design practices.
- Phalcon need root access to install the PHP extension and framework. Laravel does not have such a problem. Laravel sometimes complicates debugging and autocompletion.
Phalcon vs Laravel Comparison Table
As you can see there are many Comparison between Phalcon vs Laravel. Let’s look at the top Comparison between Phalcon vs Laravel –
4.5 (2,944 ratings)
View Course
Conclusion -Phalcon vs Laravel
Phalcon vs laravel both are web frameworks and based on PHP. It follows the same pattern or architecture only that is Model View Controller. PHP is being used as a programming language in both frameworks when the things come to development. Laravel has a rich template system which is a robust template. It has built-in ORM which works on traditional object-oriented programming or relational scheme. Phalcon used Volt template engine which is faster than ORM. Phalcon is mainly used for its faster execution.
Laravel is being popular than phalcon as it is having better documentation available, which help the beginners or the new developers to understand and develop the web application in the same framework. As laravel uses the basic features of PHP framework which gives the edge to this framework over the phalcon. It has greater and variety of collection of libraries to work and develop the app. It is having a higher and bigger community to reach out whenever any help is required.
Both Phalcon vs laravel are almost the same but having different pros and cons. It can be said that Laravel is mainly used over phalcon because it is being widely used and popular. Some developers preferred to work with those frameworks, which are having larger community support and quick in fixing the defects. There is no harm in using the other because till the time we won’t explore the technology, will not be able to work with or cannot be comfortable with that. So, it depends on the developer requirement and time to select the framework for the web application.
Recommended Articles
This has a been a guide to the top difference between Phalcon vs Laravel. Here we also discuss the Phalcon vs Laravel key differences with infographics, and comparison table. You may also have a look at the following articles to learn more – | https://www.educba.com/phalcon-vs-laravel/ | CC-MAIN-2020-10 | refinedweb | 1,069 | 56.35 |
This is part 1 of a series of SDL game programming tutorials that I am going to release each week if possible. In this part I will cover the basics of getting SDL set up, writing an SDL hello world and the basics of a game.
What is SDL?
SDL is a cross-platform multimedia library designed to provide low-level access to audio, keyboard, mouse and joysticks. SDL also gives you access to 3D hardware via OpenGL.
One of the best things about SDL is that it is cross-platform which means you can write code for Linux, Windows, Windows CE, BeOS, MacOS, Mac OS X, FreeBSD, NetBSD, OpenBSD, BSD/OS, Solaris, IRIX, and QNX.
In this tutorial we are going to focus on writing a few classes to create the basis for all of your 2D games, I am going to use C++ and lots of OOP concepts.
Setting up SDL
This can be very different for each individual OS or IDE so I will just give you a link here which should get you all started.
SDL Hello World
Here is a basic SDL program that loads a window then draws a bitmap to it. Use this to test you have set up SDL properly.
#include "SDL.h" // include SDL int main(int argc, char *argv[]) { // the screen we will draw to. SDL_Surface *screen; // the surface to draw the bitmap on. SDL_Surface *bmp; // area to draw the bitmap to. SDL_Rect targetarea; // initialize SDL. // I use everything as it will load video as well. SDL_Init(SDL_INIT_EVERYTHING); /* set up the screen pass in screen width,height,bpp and set SDL to software rendering */ screen = SDL_SetVideoMode(640,480,32, SDL_SWSURFACE); // load a bitmap. bmp = SDL_LoadBMP("test.bmp"); / targetarea.x = 10; // target x targetarea.y = 20; // target y targetarea.w = bmp->w; // target width targetarea.h = bmp->h; // target height // Draw the bitmap to the target area SDL_BlitSurface(bmp, NULL, screen, &targetarea); // show the bitmap // double buffering SDL_Flip(screen); while(1); }
Get a bitmap and name it whatever you like, make sure to change my test.bmp to the filename you have.
Hopefully you should now have a development environment for making an SDL program.
The basis of a game
There are things that every game has, some are for making it more fun and some are integral to the running of the game.
Every game program has a main loop, this can be something like this
// Main Loop Update Draw Check for Collisions
This is quite a simple loop, checking for collisions could also be implemented into the update function.
So thats the main loop, along with this future tutorials will deal with writing an object creation and management class, writing a sound manager, dealing with collisions, drawing tile based maps. I might also go for writing a game object factory tutorial for you guys if the demand is there.
Thanks for reading, the next tutorial will be here very soon. | https://www.dreamincode.net/forums/topic/109001-beginning-sdl-part-1/ | CC-MAIN-2019-43 | refinedweb | 491 | 70.84 |
Overview
Houdini supports two different but compatible systems for versioning assets, supporting different use cases:
You can incorporate the version number as part of the asset’s internal name. For example,
mynode::2.0. This makes each version an entirely different asset.
This allows different versions to exist in the same scene file simultaneously. When the user creates a node, they get the new version, but existing nodes use the older version and continue to work in the old way without changes.
You can make large-scale changes to how the node works, how it interprets inputs, its parameter interface, and so on, without breaking anything.
You don’t need to manually update old nodes.
There is no provision to upgrade old nodes automatically.
You can use the Version field in the asset definition. If an asset instance loads and notices the asset’s version string has changed since the scene file was saved, it will run an upgrade handler script (if it exists). The script can edit the node instance to transform it into the new version.
This can be useful if you usually make minor, backwards-compatible changes to an asset rather than big breaking changes, especially new parameters. It is mostly useful in a studio environment, where a TD does the work so the artists' tools are automatically upgraded.
You can automatically upgrade existing assets with new (backwards-compatible) features.
You need to manually update the upgrade script and script the work of changing an old instance into a new instance, every time you update the asset. You may need to maintain the ability for the script to upgrade one of several old versions to the latest edition (for example, the script might need to be able to update any of version 1, 2, or 3 to version 4).
The second system was the historical versioning solution before namespaces were added. It is retained for cases where it’s still useful.
Note
Both systems will only upgrade other users automatically if your studio has a system where central changes to an asset are propagated to users (for example, network drives or version control). | http://www.sidefx.com/docs/houdini/assets/versioning_systems.html | CC-MAIN-2019-13 | refinedweb | 356 | 52.9 |
Assertion to Verify Repeating Attribute has Specific Value Domain?
Assertion to Verify Repeating Attribute has Specific Value Domain?
Hey!
I was wondering if there's a way of asserting that a repeating attribute in a json response has a specific value domain - say there are the possible values of "food", "wines", "spirits", "aromatized wines"
Rao provided the following whizzy script assertion that asserts that a repeating attribute has a specific value
assert context.response, 'Request parameter is correct' def json = new groovy.json.JsonSlurper().parseText(context.response) assert json.data.VersionNumber.every{1== it}
//or //assert json.data.VersionNumber.every{context.expand('${REST Request#VersionNumber}').toInteger() == it}
I've been trying to play with this - but I don't know enough about the iterate and every methods
obviously I tried
assert context.response, 'Request parameter is correct'
def json = new groovy.json.JsonSlurper().parseText(context.response)
assert json.data.Name.every{['Food', 'Wines', 'Aromatised Wines', 'Spirits'] == it}
but I knew this wasn't going to work even before I tried. Done a bit of searching google/the forum - and I'm pretty sure I can't use the it method in this way.
This time, I remembered to attach the .json response for my request. As you can see - $[data][Name] is repeated 4 times and holds the values of either 'Food', 'Wines', 'Aromatised Wines', 'Spirits'.
Can anyone advise?
thanks!
richie
Try:
assert json.data.Name.every{ ['Food', 'Wines', 'Aromatised Wines', 'Spirits'].contains( it ) }
| https://community.smartbear.com/t5/ReadyAPI-Questions/Assertion-to-Verify-Repeating-Attribute-has-Specific-Value/m-p/179555/highlight/true | CC-MAIN-2022-40 | refinedweb | 244 | 50.84 |
Empty A Gmail Label With FireWatir
When you have a huge label with thousands of messages, Gmail can't handle deleting all of them at a time (it says it can, but for me it's never worked). This solves the problem. I wrote it more as a practice to get familiar with FireWatir, so it's not very pretty or anything and it's a bit slow. If you have doubts on how to use or suggestions on how to improve it please feel free to contact me. By the way, it assumes Gmail is in German. Replace that string with the one that comes up for your language.
def empty_label(label) # you need a Firefox instance already running with JSSh installed and listening ff = Firefox.new # Goes to Gmail in HTML ff.goto('{label}') a = [] # doing 'c.set' didn't work for me here so I had to do this hack of getting the value # and then acquiring the element thorugh ff.checkbox(:value,value) ff.checkboxes.each {|c| a << c.value} while a.length > 0 do c = [] a.each {|v| c << ff.checkbox(:value,v)} # checks all checkboxes c.each {|e| e.set} # clicks the drop-down entry for deleting ff.select_list(:name,'tact').select('In den Papierkorb verschieben') ff.button(:name,'nvp_tbu_go').click a = [] ff.checkboxes.each {|c| a << c.value} end end
Topics:
{{ parent.title || parent.header.title}}
{{ parent.tldr }}
{{ parent.linkDescription }}{{ parent.urlSource.name }} | https://dzone.com/articles/empty-gmail-label-firewatir | CC-MAIN-2016-07 | refinedweb | 240 | 70.39 |
The question is answered, right answer was accepted
This happens only when I try to build the project, when I run Unity no error shows.
(Actually it crashes)
Any idea? Thanks.
platform dependent compilation
Answer by gilley033
·
Apr 19, 2013 at 03:16 AM
Hello, have you tried using platform dependent compilation?
I was having a similar problem with one of my scriptable objects. My scriptable object was not an editor script, but it had OnInspectorGUI code in it (used so I only had to write the inspector gui for the scriptable object once).
I fixed the issue by putting #if UNITY_EDITOR before the using UnityEditor directive and #endif after it. You will also need to surround your GUI code with these same tags.
Hope this helps!
I like your walkaround, if it works, than thanks for sharing.
it works perfectly, actually the "correct" answer is just stupid and does not solve all issues you might have, for example handles ... you just need them on the object
Agreed, changed correct answer
I did not understand, can someone explain it for me please? Yes since I put these commands it builds without errors but while apk running the part which I put command does not run. I just want to collect materials from a folder. Is there a way to make it? The thing that I cant understand is why do they put if it wont be usefull. How can I collect all materials in runtime, anyone knows how to do it?
I just want to clarify for people coming along trying to understand this. This question has two answers. 1. Unity intends for you--as part of your workflow--to put all scripts that modify the editor or include the UnityEditor namespace to be in a folder (or sub folder) in your project named "Editor". Folders with this name are special. On compile, Unity checks for this detail and it will overlook these scripts on compilation and not compile them into the finished build. 2. However, Gilley033's answer here is quite clever. Apparently if you want to avoid moving your scripts (for whatever reason) you can in some cases uses the platform dependent compilation code described here. I never thought about doing that. Very interesting and thanks.
Answer by softrare
·
Sep 12, 2012 at 10:36 AM
You have to place this script in a directory called "Editor" in the root of your project hierarchy.
I got same error and it solved by placed the script to Editor directory. thank u
This worked for me, it is a good solution to separate game/editor files
By the way. The "Editor" folder doesn't even have to be in the root (anymore?). It can be a subfolder.
Solved by putting every scripts that uses UnityEditor into folders called Editor. and yeah those folders don't have to be in the root folder. It seems that Unity will ignore every scripts inside Editor folder when building.( got a build error after moving one of my essential scripts into them
Thank you so much!
Answer by Kryptos
·
Sep 12, 2012 at 11:33 AM
Built game cannot use the UnityEditor namespace. This namespace comes with the UnityEditor.dll assembly that is not shipped (and not compatible) with any build made by Unity.
Scripts that use this namespace are only meant to be executed inside Unity Editor.
Ok... so where should I put this script? Or what should I do to avoid build errors?
Right now I removed it, the better solution?
Don't use any editor capabilities inside a script that is meant to be shipped.
Try first by removing using UnityEditor;. Then cut/paste the code using this namespace into another script.
using UnityEditor;
So glad I found this thread. I thought I was clever by using UnityEditor.AnimationUtility to get animation data from characters into an array without assigning them in the inspector. So much for that idea. Thanks @Kryptos!
Put in a folder in the main assets directory called Editor
Seth got it. Works perfectly. Also kudos to Kryptos for getting the first part.
Answer by IPT
·
Oct 16, 2013 at 09:48 AM
Had the same problem, cause I have changed the location of the Orthello to be under plugins folder, when I moved it back to be under the 'Assets' folder, it fix it.
Answer by Niklasi17102000
·
Jun 29, 2014 at 04:10 PM
The problem is: The namespace "UnityEditor" can only be used if you are in the Unity Editor program. I have the same Problem, too. But I need the "using UnityEditor" for one of my scripts to load files from assets.
Guys, Then how do I implement a radio button (using v4.6 and OnGUI code) if I cannot have UnityEditor in my standalone? I cannot move my script, nor can I simply use #if UNITY_EDITOR tags to disable that part of code, because I want it to run in the standalone... If I put the CrossPlatformInputInitialize.cs script in my StreamingAssets meant to be built and shipped, how can I reference it?
v4.6
OnGUI
UnityEditor
#if UNITY_EDITOR
CrossPlatformInputInitialize.cs
StreamingAssets
@HamFar: I don't think that the radio buttons in the UnityEditor namespace were really meant for production use - only for building and testing. You'll want to go through the new GUI system documentation. I unfortunately don't have time to look, but you can start at which will give you a good, solid start with using the new GUI system the way it was meant to be used. Hopefully that will make life easier for you.
@VCC_Geek: Thank you. I have to maintain a big software app that was developed with Unity 4.6 and OnGUI code, so upgrading to Unity 5 and using the new UI system is not an option for me. But I thought if I can figure how to reference the CrossPlatformInputInitialize.cs script that enables Editor capabilities, that will be a good hack for this dilemma! So, here is what I have written so far, and I just need to figure out what code to put instead of the commented line:
Unity 4.6
Unity 5
UI
Editor
#if UNITY_EDITOR ];
#elif UNITY_STANDALONE
string standalonePath = Application.streamingAssetsPath;
string editorPath = standalonePath + Path.DirectorySeparatorChar + "Editor";
// What goes here ??? = GetComponent<CrossPlatformInputInitialize>(); ];
@HamFar: You shouldn't need Unity 5 to have the new GUI system. I believe it was introduced in 4.6. The old system is still present in 4.6 (and 5?) for backward compatibility. That said, you might want to set up a sandbox environment with a complete copy of your codebase and Unity 5 just to see how much work it would be. My own experience has been that my projects update pretty seamlessly to newer versions. If you just tiptoe through it in the sandbox, you should be able to back out easy enough if it turns out to be too started rejecting my scripts?
1
Answer
Editor crashes after loading a project
3
Answers
Unity UI not refresh.
0
Answers
Error in my runtime. Namespace error
0
Answers
The type or namespace name could not be found. Are you missing a using directive or an assembly reference?
3
Answers | https://answers.unity.com/questions/316805/unityeditor-namespace-not-found.html?sort=oldest | CC-MAIN-2019-43 | refinedweb | 1,205 | 65.93 |
Bruno Haible <address@hidden> writes: > Bastien ROUCARIES wrote: >> > getaddrinfo documentation >> > Portability problems not fixed by Gnulib: >> > On Windows, this function is declared in <ws2tcpip.h> rather than >> > in <netdb.h>. >> > >> > but it is fixed by netdb module. > > Actually, since the declaration in lib/netdb.in.h is inside a > #if @GNULIB_GETADDRINFO@ > ... > #endif > block, and @GNULIB_GETADDRINFO@ evaluates to 1 only if the 'getaddrinfo' > module > is present, it is fixed by the 'getaddrinfo' module, not by the 'netdb' > module. > In other words, users who ask for 'netdb' but not for 'getaddrinfo' will not > get the fix. > > Simon Josefsson wrote: >> Thanks, I've removed the sentence. > > Removed? Why not moved to the section "fixed by Gnulib"? > > Any objection to this patch? Yes, good point. Please push it. Thanks, Simon > > 2011-03-29 Bruno Haible <address@hidden> > > getaddrinfo: Doc fix. > * doc/posix-functions/getaddrinfo.texi: Mention Windows problem in the > section "fixed in Gnulib". > > --- doc/posix-functions/getaddrinfo.texi.orig Tue Mar 29 14:19:03 2011 > +++ doc/posix-functions/getaddrinfo.texi Tue Mar 29 14:18:47 2011 > @@ -11,6 +11,9 @@ > @item > This function is missing on some platforms: > HP-UX 11.11, IRIX 6.5, OSF/1 5.1, Solaris 7, Cygwin 1.5.x, mingw, Interix > 3.5, BeOS. > address@hidden > +On Windows, this function is declared in @code{<ws2tcpip.h>} rather than in > address@hidden<netdb.h>}. > @end itemize > > Portability problems not fixed by Gnulib: | https://lists.gnu.org/archive/html/bug-gnulib/2011-03/msg00296.html | CC-MAIN-2019-35 | refinedweb | 236 | 62.95 |
Novice in Sikulix and Jython
I just downloaded Sikulix 2.0.4 and Jython 2.7.1 and installed them.
I tried a script with this:
--
discount = 0
amount = input("Enter Amount")
if amount>1000:
discount = amount*0.10
elif amount>500:
discount = amount*0.05
else:
discount = 0
print 'Discount = ',discount
print 'Net amount = ',amount-discount
--
Only an error occurs when run:
---
Exception in thread "Thread-39" Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Users\
from Sikuli import *
File "C:\Users\
from __future__ import with_statement
ImportError: No module named __future__
---
Is the problem here? My name is Bjørnar (with a norwegian letter ø )
This seems to be translated til \xf8
How do I check that my installation is OK?
Question information
- Language:
- English Edit question
- Status:
- Solved
- For:
- SikuliX Edit question
- Assignee:
- No assignee Edit question
- Solved:
-
- Last query:
-
- Last reply:
- | https://answers.launchpad.net/sikuli/+question/694791 | CC-MAIN-2021-39 | refinedweb | 148 | 63.09 |
Re: openssl-0.9.8g makedepend 'warnings' -- *are* these a problem?
- From: snowcrash+openssl <schneecrash+openssl@xxxxxxxxx>
- Date: Wed, 26 Mar 2008 15:57:15 -0700
hi jonathan,
I'd say "Yes, they are a problem". I'm not sure what you need to fix.
Ultimately, if you don't have the programs you need out of the build, there
is a problem. It's a weird error, too - because normally configure would
detect the problem. I'd try to track down where -arch ppc (as opposed to
-arch=ppc) came from; that would deal with a lot of the trouble.
poking around a bit, i found
cat /usr/include/stdarg.h
/* This file is public domain. */
-> /* GCC uses its own copy of this header */
#if defined(__GNUC__)
#include_next <stdarg.h>
#elif defined(__MWERKS__)
#include "mw_stdarg.h"
#else
#error "This header only supports __MWERKS__."
#endif
whereas,
cat /usr/lib/gcc/powerpc-apple-darwin9/4.2.1/include/stdarg.h
/* Copyright (C) 1989, 1997, 1998, 1999, 2000 Free Software Foundation, Inc.
This file is part of GCC.
GCC is free software; you can redistribute it and/or modify
...
/*
* ISO C Standard: 7.15 Variable arguments <stdarg.h>
*/
#ifndef _STDARG_H
#ifndef _ANSI_STDARG_H_
#ifndef __need___va_list
#define _STDARG_H
#define _ANSI_STDARG_H_
...
i tried mod'ing CPPFLAGS with the .../4.2.1/include/stdarg.h path --
no dice. likely hard-coded somewhere (will look).
so, as workaround, prior to configure
mv /usr/include/stdarg.h /usr/include/stdarg.h.ORIG
cp /usr/lib/gcc/powerpc-apple-darwin9/4.2.1/include/stdarg.h
/usr/include/stdarg.h
successfully removes the
makedepend: warning: cryptlib.c (reading /usr/include/stdarg.h, line
4): cannot find include file "stdarg.h"
warnings, but not the,
makedepend: warning: cannot open "ppc"
which are still there.
that said, mmake/install complete fine. and downstream usage of the
resultant openssl bins/libs seems ok.
so, the "ppc" warning may be just a syntactical warning ... but would
be good to know for sure. now to try to find the reason/src of the
warning.
cheers.
- References:
- openssl-0.9.8g makedepend 'warnings' -- *are* these a problem?
- From: snowcrash+openssl
- Prev by Date: openssl-0.9.8g makedepend 'warnings' -- *are* these a problem?
- Next by Date: Re: SQL Server through SSH
- Previous by thread: openssl-0.9.8g makedepend 'warnings' -- *are* these a problem?
- Next by thread: ssh-keygen only
- Index(es): | http://www.derkeiler.com/Mailing-Lists/securityfocus/Secure_Shell/2008-03/msg00035.html | CC-MAIN-2015-35 | refinedweb | 397 | 71.51 |
Opened 10 years ago
Closed 10 years ago
#6193 closed (invalid)
clean_* not working with modelform
Description
Well, i tried to write some custom validation with model form using the pattern clean_fieldname. Well,
with latest django from svn, it didn't work. Example:
class MyModel(models.Model):
title = models.TextField(max_length=20)
class FormForMyModel(newforms.ModelForm):
class Meta:
model = MyModel
def clean_title(self):
print "test"
yield ValidationError("Clean_title is never called!")
Change History (1)
comment:1 Changed 10 years ago by
Note: See TracTickets for help on using tickets.
Please don't post usage questions to Trac. Take this to django-users or #django on freenode. Create a new ticket with specific information on how to reproduce the bug. | https://code.djangoproject.com/ticket/6193 | CC-MAIN-2018-22 | refinedweb | 119 | 60.72 |
Here is my code:
public class Player{ public static void main(String [] args) { int myBriefcase; int myCaseIndex; int[] cases = {1 , 2 , 5 , 10 , 25 , 50 , 75, 100 , 200 , 300 , 400 ,500,750, 1000 , 5000 , 10000 , 25000 , 50000 , 75000 , 100000 , 200000 , 300000 , 400000 , 500000 , 750000 , 1000000 }; System.out.println("Choose:"); myCaseIndex = SavitchIn.readLineInt(); myBriefcase = cases[myCaseIndex]; System.out.println("You chose briefcase number" + myCaseIndex); int i=26; int remove; while ( i != 1){ System.out.println("Remove what"); remove = SavitchIn.readLineInt(); if ( remove > 25 ) System.out.println("Out of Bounds"); else if ( cases[remove] == 0 || cases[remove] == cases[myCaseIndex]) System.out.println("It has been chosed already."); else cases[remove] = 0; i--; } System.out.println("Inside briefcase is" + cases[myCaseIndex]); } }
I am a total beginner in JAVA and it would be great if someone can help me correcting the flaw of my code.
My first problem is that when the user inputs a larger value in myCaseIndex than 26 it says ArrayIndexOutofBounds. I tried fixing it with an if statement but still I cant seem to get it working.
My second problem is that this program allows the user to remove case in just 25 times, if you repeat the number lets say you inputed 21, then you inputed 21 again, then 21 again, it is already counted as three times. So how do I fix that? I tried putting an i++ if he/she inputed the number again but it does not compile.
Oh and another question... Is it possible to make the start of the index of Array as '1' not '0'? If you observe my code it is from a gameshow, and in the gameplay of that show the Cases(Array) starts at '1'. Is it possible to start at '1' or not?
Hope someone can help me.
Sorry if my english sucks, it's not my mother tongue, hope you understand my grammar.Sorry if my english sucks, it's not my mother tongue, hope you understand my grammar. | http://www.javaprogrammingforums.com/whats-wrong-my-code/10692-looping-outofbounds-code-problem.html | CC-MAIN-2015-48 | refinedweb | 331 | 66.23 |
The .NET System.Array class has many uses and is often encountered when you interact with other .NET classes and even user developed C# or VB.NET classes.
System.Array
In this article, I will show the syntax for using Arrays in Micro Focus COBOL.NET as well as some examples of usage. Once you know this syntax, you can then:
Most importantly, you will be able to very easily use .NET Arrays as any other .NET language would. The System.Array class along with members, methods, and properties is completely documented on Microsoft’s MSDN site at.
Again, the purpose of this article is to provide information on using the Array class with Micro Focus COBOL.NET.
Array
If you are a COBOL programmer, you know that COBOL has had syntax since its beginning to create something similar to an array. Using the OCCURS keyword allows COBOL programmers to create a table and even nested tables of simple to very complex structures of data. This classic COBOL syntax is still very much supported in Micro Focus COBOL.NET, but the data structures it supports should not be confused with the .NET System.Array class.
OCCURS
However… to support .NET Arrays as well as interact with other .NET classes and languages, Micro Focus has extended the syntax of OCCURS. One key difference (among others) between .NET Arrays and OCCURS tables in “traditional” COBOL is that OCCURS tables in “traditional” COBOL are either fixed size or variable with a maximum size. In either case, the size is essentially established at compile time. Aside from not being limited by the fixed nature of OCCURS tables, Arrays bring a number of other capabilities including a rich set of methods and properties that are very easy to use in COBOL.NET.
Arrays
OCCURS
.NET Arrays can be created as fixed size or dynamic. Even Arrays created initially as fixed can be resized as needed at runtime.
Micro Focus COBOL.NET supports the creation of both types of Arrays. Here are some samples of Single and Multi-Dimensional Arrays:
Arrays
To create an array of Strings with an initial size of 5:
String
5
01 myString1 String occurs 5.
To create an array of Strings with no initial size:
01 myString2 String occurs Any.
To create a multi-dimensional (in this case 2 dimensions) Array of Decimals:
01 myDecimal Decimal occurs 5 10
To create a Jagged Array (An Array of Arrays):
01 myDates type “Datetime” Occurs 10 Occurs Any.
Note the use of the new keyword Any. The definition of the Array has the following general format:
Any
01 dataname TYPEoccurs [size | Any]
The Type specified determines what type of items the Array will hold. The size is an integer that specifies the initial size of the Array. If Any is used, the array will then be initialized later by setting the size of it at runtime using the set size of… syntax, initializing it with items using set content of… syntax, or by setting it to another array of the same type (Decimal, String, Class Type, etc.).
Type
Any
set size of
set content of
The dataname must be a top level item of either 01 or 77 and Type must be any .NET data type or class. If you need to store traditional COBOL PIC items or groups of items in an Array, the recommended approach is to define a class which contains that data and then specify the Class as the type. Alternatively you can look at using .NET Collections which is a whole other topic.
dataname
Once an Array is created in Micro Focus COBOL.NET, it can be accessed in several ways and all of the System.Array methods and operations are supported in Micro Focus COBOL.NET and other .NET languages… because it IS a .NET System.Array. From a COBOL syntax standpoint, all Arrays begin at index 1. This is purely a syntax difference with C# and other .NET languages where the first index position is zero. If you pass a Micro Focus COBOL.NET Array to a C# class, C# would still access the first position with an index of zero. Likewise, if the Array is created in C#, Micro Focus COBOL.NET would access the first position with an index of 1. The Arrays work as expected in each language.
System.Array
1
To iterate through an Array similar to the C# foreach statement, you use:
foreach
“perform varying object thru ArrayObject”.
This allows you to process a number of COBOL statements for each item in the Array without the need for an index. Example 3 uses this technique. If you have an Array of Strings, then object would be a data item defined as a String. Through each iteration, the Array item is moved into object and then can be processed by subsequent COBOL statements. In this case, we are using an Array, but actually perform varying syntax can be used against any instance of a .NET Collection class.
Collection
I have attached a sample solution with this article and have also included below the source of the 2 modules. Here are 5 examples of Array usage in Micro Focus COBOL.NET. Hopefully these should provide a good reference of different techniques and also serve as reference for the syntax needed in different scenarios when using .NET Arrays. The samples were coded in Micro Focus COBOL.NET in Visual Studio 2008.
The soon to be released Micro Focus Visual COBOL (shortly after the Microsoft Visual Studio 2010 launch) simplifies and improves syntax for COBOL and .NET. This will be covered in future articles!
set size of DecimalArray to 5
perform varying idx from 1 by 1 until idx > DecimalArray::"Length"
move DecNum to DecimalArray(idx)
add 100.50 to DecNum
end-perform
set ArrayObj to new "Class1"()
invoke ArrayObj::"ProcessArray"(DecimalArray)
Resize
set size of…
set content of StringArray to ("Str1" "Str2" "Str3")
invoke type "Array"::"Resize"[String](StringArray, StringArray::"Length" + 10).
set size of StringArray to 20
ToCharArray
String
AppendChar
SecureString
set PswdString to "mypassword"
set CharArray to PswdString::"ToCharArray"()
set SecStr to new "SecureString"()
perform varying CharVal thru CharArray
invoke SecStr::"AppendChar"(CharVal)
end-perform
set content of…
set content of ArrayofArrays to (DecimalArray StringArray CharArray)
set size of DateArrays to 2
set size DateArrays(1) to 3
set size of DateArrays(2) to 5
set DateArrays(2 3) to type "DateTime"::"Now"
$set ilusing"Article02CSharp"
program-id. Article02 as "Article02.Article02".
data division.
working-storage section.
01 DecNum pic s9(5)v99 comp-3 value 123.45.
*> Traditional COBOL packed decimal field
01 DecimalArray Decimal Occurs any.
*> Array of System.Decimal
01 StringArray String occurs 3.
*> Array of System.String with initial size of 3
01 CharArray Character Occurs any.
*> Array of System.Character with no initial size
01 PswdString String.
*> .NET System.String
01 CharVal Character.
*> .NET System.Character
01 SecStr Type "SecureString".
*> System.Security.SecureString
01 ArrayObj Type "MyClass".
*> A data item of type Class1 C# class in this sample
01 idx binary-short value zero.
*> .NET Int16 or Short
77 ArrayofArrays Type "Array" occurs any.
*> An Array of DateTime instances
77 DateArrays Type "DateTime" occurs any occurs any.
*> A Jagged Array of DateTime objects
procedure division.
*> Example 1
set size of DecimalArray to 5
perform varying idx from 1 by 1 until idx > DecimalArray::"Length"
move DecNum to DecimalArray(idx)
add 100.50 to DecNum
end-perform
set ArrayObj to new "MyClass"()
invoke ArrayObj::"ProcessArray"(DecimalArray)
*> Example 2
set content of StringArray to ("Str1" "Str2" "Str3")
invoke type "Array"::"Resize"[String](StringArray,
StringArray::"Length" + 10).
set size of StringArray to 20
*> Example 3
set PswdString to "mypassword"
set CharArray to PswdString::"ToCharArray"()
set SecStr to new "SecureString"()
perform varying CharVal thru CharArray
display CharVal
end-perform
*> Example 4
set content of ArrayofArrays to (DecimalArray StringArray CharArray)
*> Example 5
set size of DateArrays to 2
set size DateArrays(1) to 3
set size of DateArrays(2) to 5
set DateArrays(2 3) to type "DateTime"::"Now"
goback.
end program Article02.
using System;
namespace Article02CSharp
{
public class Class1
{
public void ProcessArray(Decimal[] myArray)
{
myArray[0] = 543.21. | http://www.codeproject.com/Articles/64075/Using-Arrays-with-Micro-Focus-COBOL-NET | CC-MAIN-2014-41 | refinedweb | 1,357 | 56.55 |
Project Log : Arduino USB
Description
Project log for developing USB expansion shield for Arduino and associated code.
See also: Learning About Arduino and AVR-USB
. See also . : Featured as a chapter called Virtual USB Keyboard in the new book Practical Arduino by Jon Oxer complete with readable schematics and all.
Code
- arduinousb_release_004.tar.gz -- Fourth alpha release. Added generic USB device support. Added Python wrapper and demos. Same caveats as 002.
- arduinousb_release_003.tar.gz -- Third alpha release. Upgrade to version 2009-08-22 of V-USB driver. No new functionality, same caveats as 002.
- arduinousb_release_002.tar.gz -- Second alpha release. (Compatible with Arduino 0016 (not 0017!) and PCB design but not original protoboard design.)
- (old) arduinousb_release_001.tar.gz -- First alpha release. (Patched version of UsbKeyboard.h for Arduino 0012)
Notes
- ( 7 March 2008 )
- Have started construction of "mini" expansion shield on a piece of strip board.
- Connected USB "B" socket to board and wired ground and +5V (Vbus) from USB to Arduino to power it successfully.
- Helpfully found the required 2.2KOhm and 68 Ohm resistors needed in the latest box of electronics bits I purchased. (Of course they were in the last set of the resistors I looked through though.)
- Started drawing schematic in KiCad. Used USB "B" socket symbol from con-usb.lib from. Used Arduino pin layout from.
- Also using Arduino Atmega168 pin mapping details. Note: INT0 == PD2 == IC Pin 4 == Arduino Digital Pin 2, INT1 == PD3 == IC Pin 5 == Arduino Digital Pin 3
- FIX: Schematic should probably have a diode to prevent powering Vbus accidentally.
- The source for usbdrv.h says, regarding the hardware:
USB lines D+ and D- MUST be wired to the same I/O port. We recommend that D+ triggers the interrupt (best achieved by using INT0 for D+), but it is also possible to trigger the interrupt from D-. If D- is used, interrupts are also triggered by SOF packets. D- requires a pullup of 1.5k to +3.5V (and the device must be powered at 3.5V) to identify as low-speed USB device. A pullup of 1M SHOULD be connected from D+ to +3.5V to prevent interference when no USB master is connected. We use D+ as interrupt source and not D- because it does not trigger on keep-alive and RESET states.
- For a 5V source it seems the pullup on D- needs to be 2.2K to 5v.
- Other circuits: usbasp, usbtinyisp, avrusb
- Socket pin outs: USB overview and Plug and Receptacle pinouts Update : August 2008 A possible pinout for the PCB part of the USB connector. Thanks Mr Spatial. :-)
- Soldered up zener diodes and pull up resistor.
- Plugged into linux host, produced this from tail -f /var/log/syslog :
Mar 7 04:52:38 localhost kernel: [685132.128973] usb 1-2: new low speed USB device using uhci_hcd and address 9 Mar 7 04:52:38 localhost kernel: [685132.252884] usb 1-2: device descriptor read/64, error -71 Mar 7 04:52:38 localhost kernel: [685132.480711] usb 1-2: device descriptor read/64, error -71 Mar 7 04:52:39 localhost kernel: [685132.696583] usb 1-2: new low speed USB device using uhci_hcd and address 10 Mar 7 04:52:39 localhost kernel: [685132.816506] usb 1-2: device descriptor read/64, error -71 Mar 7 04:52:39 localhost kernel: [685133.040335] usb 1-2: device descriptor read/64, error -71 Mar 7 04:52:39 localhost kernel: [685133.256203] usb 1-2: new low speed USB device using uhci_hcd and address 11 Mar 7 04:52:40 localhost kernel: [685133.663904] usb 1-2: device not accepting address 11, error -71 Mar 7 04:52:40 localhost kernel: [685133.775828] usb 1-2: new low speed USB device using uhci_hcd and address 12 Mar 7 04:52:40 localhost kernel: [685134.183550] usb 1-2: device not accepting address 12, error -71
- Above result "new low speed device" indicates pull up is in correct place. The error messages are because it's only the bare board plugged in. (Apparently strip board doesn't accept addresses.) :-)
- I'm using 3.6V, 0.5W Zener Diodes (1N5227) although I hear .25W is preferred, and apparently 1W don't work (according to a forum post).
- FIX: Change diodes in schematic to zeners.
- Modified PowerSwitch usbconfig.h file. Compiled with make on Ubuntu 7.10 machine.
- Before plugging in, remember to: upload firmware, change power jumper to "none", disconnect Arduino usb socket cable, attach shield, attach cable to expansion usb socket.
- Doesn't work. :-(
- Used this to upload (from OS X):
hardware/tools/avr/bin/avrdude -Chardware/tools/avr/etc/avrdude.conf -v -v -v -v -pm168 -cstk500v1 -P/dev/tty.usbserial-<id> -b19200 -D -Uflash:w:<path>main.hex:i
- Need to edit Makefile to set DEVICE to atmega168!
- Need to edit main.c
- Works!
- Using lsusb shows the device appearing/disappearing:
Bus 001 Device 040: ID 16c0:05dc
- Using sudo lsusb -v gives something that includes:
iManufacturer 1 iProduct 2 PowerSwitch
- Edited EasyLogger enough to init okay.
- ( 8 March 2008 )
- I've now got the powerswitch usb echo demo working from an Arduino sketch as I'm flashing an LED every second. The Usbduino shield is in the house. :-)
- BTW an error message of the form "error: invalid conversion from `void*' to ..." is because C++ requires that casts from void * are explicit not implicit.
- BTW also, an error message of the form "error: expected initializer before int" or "error: expected initializer before int" may be because PROGMEM isn't recognised because you need to include the pgmspace.h file. Use gcc's -save-temps command to look in the .ii file to see if PROGMEM is replaced.
- ( 18 March 2008 )
- Trying to get IDE to compile library correctly from bare source—but it doesn't compile '.S' files at least...
- Try this in library directory first:
hardware/tools/avr/bin/avr-g++ -Wall -Os -I. -DUSB_CFG_CLOCK_KHZ=16000 -mmcu=atmega168 -c usbdrvasm.S
- Hmmm, so it seems the IDE doesn't do the compile the other things correctly either, so we need also:
hardware/tools/avr/bin/avr-g++ -Wall -Os -I. -DUSB_CFG_CLOCK_KHZ=16000 -mmcu=atmega168 -c usbdrv.c
- I guess that's why I compiled the other test by hand... :-)
- So the above commands will produce usbdrv.o and usbdrvasm.o files.
- Still not getting working keyboard data on OS X...
- On Ubuntu I can get some data out by doing: (depending on the usb device address)
sudo cat /dev/input/by-path/pci-0000\:00\:14.2-usb-0\:2\:1.0-event- | hexdump
- This produced when pressing the button: (index 1 // KEY_A // 4 ?)
0000000 86c7 47de 05cc 000b 0001 002a 0001 0000 0000010 86c7 47de 05d7 000b 0001 001e 0001 0000 0000020 86c7 47de 05d9 000b 0000 0000 0000 0000 0000030 86c7 47de da7f 000c 0001 002a 0000 0000 0000040 86c7 47de da89 000c 0001 001e 0000 0000 0000050 86c7 47de da8c 000c 0000 0000 0000 0000
- This produced when pressing the button: (index 2 // KEY_B // 5 ?)
0000000 88e5 47de a4c7 000b 0001 002a 0001 0000 0000010 88e5 47de a4d0 000b 0001 0030 0001 0000 0000020 88e5 47de a4d2 000b 0000 0000 0000 0000 0000030 88e5 47de 9ec5 000c 0001 002a 0000 0000 0000040 88e5 47de 9ece 000c 0001 0030 0000 0000 0000050 88e5 47de 9ed1 000c 0000 0000 0000 0000
- ( 21 March 2008 )
- ( 28 March 2008 )
- Discovered today that on Linux it's now working okay for HID, seemingly. (Although the first few key presses seem not to be recognised.)
- ( Site down for around a month )
- ( 9 July 2008 )
- During the downtime I discovered that the Arduino did function as an USB HID keyboard (with one button!) under 10.4 on a PowerBook G4 so it would seem maybe the 10.2 and/or the iBook didn't like it for some reason.
- I managed to send standard characters, modified keystrokes (e.g. Command-B), arrow keys and function keys—the latter of which I used to bring the Dashboard up on screen whenever I pushed the button attached to the Arduino.
- So, the short answer does seem to be that at least at some level AVRUSB and Arduino can be compatible—I'm not sure if there is a point at which one or other of them will break as I haven't tried anything very sophisticated.
- It would seem the "missing first keystroke" issue I noted is a known AVRUSB issue: First key is not sent (hid keyboard) Apparently you can send a dummy empty keystroke to work around it at least.
- After much delay, here is the full top and bottom views of my USB mini-shield (on strip- or vero-board) for Arduino, it should be enough for you to reconstruct it:
- Note that the traces are cut between the four pins on the USB connector and there are two traces that have drill bit induced breaks.
- Oh, ok, and it turns out I also had a schematic drawn up in KiCad, so I exported it to SVG for you (note that I haven't actually verified it's accurate): schematic for (AVRUSB) USB mini-shield for Arduino in SVG (Hmmm, Inkscape opened it okay, but Firefox 2 doesn't seem to like it.)
- ( 23 July 2008 )
- Have noticed AVRUSB firmware is now a download separate from the example projects. The code has also been reorganised. Not sure which version I should first release.
- ( 26 July 2008 )
- A couple of days ago I pulled the original PowerSwitch and HIDKeys source I had downloaded into SVN and re-applied the modifications I had made and documented compilation. I'm now working on tidying up the HID Keyboard sample with the aim of uploading "library" and sample sketch.
- ( 6 August 2008 )
- Last week I got a reasonable API/library design implemented.
- Added link to PCB pinout of USB socket to 7 March 2008 entry above.
- ( 12 August 2008 )
- Have released a first alpha release 001, by request. See Code section above.
- Realised I had planned to change UsbKeyboardDevice method named update to refresh but won't change it for this release.
- ( 19 August 2008 )
- Blog entries found after an email received with the UsbMouse code: BoarduinoUSB, UsbMouse library for Arduino. Thanks Michel! [ Update : UsbJoystick library for Arduino ]
- ( 13 September 2008 )
- It was produced more as an test of a KiCad Mac OS X nightly binary but here's an initial rough, untested and potentially inaccurate circuit for an Arduino-based USB "keyboard" device: arduino_usb_keyboard_circuit_001.pdf Note that the current code won't work on it due to moving some of the connections around.
- ( 19 September 2008 )
- Started Project Log USB Stealth Twiddler page.
- ( 27 January 2009 )
- Thanks to xSmurf the apparent cause of the instability has been identified. It appears the timer0 interrupt routine causes the USB side of things to barf. Interim work-around is to disable the timer in setup with:
// disable timer 0 overflow interrupt (used for millis) TIMSK0&=!(1<<TOIE0);
- Note that this "fix" will cause delay and millis to no longer function.
- With this workaround in place I can reliably repeat typing.
- In the interim I am using this for delays:
void delayMs(unsigned int ms) { /* */ for (int i = 0; i < ms; i++) { delayMicroseconds(1000); } }
- xSmurf suggested modifying the pre-scaler value for the timer as a possible solution.
( 18 February 2009 )
- Earlier in the month I managed to put together a PCB design and etch a PCB with it but haven't uploaded anything until now. The first version had a bunch of connections in reversed order (thanks to using generic connectors on the schematic) but the second revision seems to be functional. The second revision still needs a couple of modifications, mainly to provide support and isolation for the USB connector. The current board did work initially but now seems to have an issue which I need to verify—I think it's a construction issue rather than a design issue.
- @@ TODO : Need to upload the most recent code with the new pinout.
- Uploaded schematic pdf and kicad source files. Here's a screenshot of the board layout:
- Here's an etchable image as produced by but keep in mind the socket support holes aren't great and I still have a small problem with my etched board which I haven't 100% confirmed.
- @@ TODO : Add photographs.
( 23 February 2009 )
- Until I upload the correct code in an archive, here's a library patch that should work:
--- arduinousb_release_001/libraries/UsbKeyboard/usbconfig.h 2008-08-12 13:38:53.000000000 +1200 +++ arduino-0012/hardware/libraries/UsbKeyboard/usbconfig.h 2009-02-05 17:19:58.000000000 +1300 @@ -27,7 +27,8 @@ /* This is the port where the USB bus is connected. When you configure it to * "B", the registers PORTB, PINB and DDRB will be used. */ -#define USB_CFG_DMINUS_BIT 3 +#define USB_CFG_DMINUS_BIT 4 +//#define USB_CFG_DMINUS_BIT 3 /* This is the bit number in USB_CFG_IOPORT where the USB D- line is connected. * This may be any bit in the port. */ @@ -39,13 +40,13 @@ /* ----------------------- Optional Hardware Config ------------------------ */ -/* #define USB_CFG_PULLUP_IOPORTNAME D */ + */ +#define USB_CFG_PULLUP_BIT 5 /* This constant defines the bit number in USB_CFG_PULLUP_IOPORT (defined * above) where the 1.5k pullup resistor is connected. See description * above for details. --- arduinousb_release_001/libraries/UsbKeyboard/UsbKeyboard.h 2008-08-12 13:38:53.000000000 +1200 +++ arduino-0012/hardware/libraries/UsbKeyboard/UsbKeyboard.h 2009-02-05 14:48:47.000000000 +1300 @@ -7,9 +7,18 @@ #define __UsbKeyboard_h__ #include <avr/pgmspace.h> +#include <avr/interrupt.h> +#include <string.h> #include "usbdrv.h" +// TODO: Work around Arduino 12 issues better. +//#include <WConstants.h> +//#undef int() + +typedef uint8_t byte; + + #define BUFFER_SIZE 4 // Minimum of 2: 1 for modifiers + 1 for keystroke @@ -122,6 +131,11 @@ PORTD = 0; // TODO: Only for USB pins? DDRD |= ~USBMASK; + cli(); + usbDeviceDisconnect(); + usbDeviceConnect(); + + usbInit(); sei();
- And here is the demo patch:
--- arduinousb_release_001/examples/UsbKeyboardDemo1/UsbKeyboardDemo1.pde 2008-08-12 13:54:45.000000000 +1200 +++ arduinousb_release_001/examples/UsbKeyboardDemo1/UsbKeyboardDemo1.pde 2009-02-23 03:25:29.000000000 +1300 @@ -5,12 +5,25 @@ void setup() { pinMode(BUTTON_PIN, INPUT); digitalWrite(BUTTON_PIN, HIGH); + + // disable timer 0 overflow interrupt (used for millis) + TIMSK0&=!(1<<TOIE0); // ++ +} + +void delayMs(unsigned int ms) { + /* + */ + for (int i = 0; i < ms; i++) { + delayMicroseconds(1000); + } } void loop() { UsbKeyboard.update(); + digitalWrite(13, !digitalRead(13)); + if (digitalRead(BUTTON_PIN) == 0) { //UsbKeyboard.sendKeyStroke(KEY_B, MOD_GUI_LEFT); @@ -32,7 +45,7 @@ UsbKeyboard.sendKeyStroke(KEY_ENTER); - delay(200); + delayMs(200); } }
( 23 May 2009 )
- A while back a helpful correspondent sent me an annotated image of the proto-board shield:
( 24 May 2009 )
- Unfortunately I've discovered today that the code which seems to run reliably on Linux is still having problems on OS X. The result is only semi-reliable functioning... :-/ The errors result in:
May 24 01:31:54 ComputationDevice kernel[0]: USBF: 118274.167 [0x44d7400] The IOUSBFamily is having trouble enumerating a USB device that has been plugged in. It will keep retrying. (Port 2 of hub @ location: 0x1a000000) May 24 01:32:01 ComputationDevice kernel[0]: USBF: 118280.814 AppleUSBUHCI[0x421a000]::Found a transaction which hasn't moved in 5 seconds on bus 0x1a, timing out! (Addr: 0, EP: 0) May 24 01:32:05 ComputationDevice kernel[0]: USBF: 118285. 74 [0x44d7400] The IOUSBFamily was not able to enumerate a device.
- With the above PCB schematic (not the stripboard shield) this hex file should work (as well as it does): UsbKeyboardDemo1_20090524.hex
- Changing the demo to only send one character and reducing the delay from 200 to 20 doesn't cause the communication to stall, but then has repeated characters.
( 13 September 2009 )
- As part of the Learning About Jog Wheel project I have just discovered something. On a MacBook Pro (which I've mostly been testing on) if I plug a AVRUSB device into the left USB port it fails to enumerate but if I plug it into the right USB port it works right away. This would seem to match what appeared to be inconsistent behaviour I've observed while working on the project... Looks like I need to re-test the things I thought weren't working. :-/
( 16 October 2009 )
- Uploaded alpha release 002 of the code (see top of page). This incorporates all the patches to the original code mentioned above AFAICT. It is compatible with Arduino 0016 (not 0017) and the PCB design—either on a PCB or if you modify the protoboard design to work the same way.
- Oh, also, I have confirmed that the PCB design is working for me (once you ensure the socket legs don't short out things) so the problem I seemed to witness earlier was presumably due to the USB port I was testing it on. (Which is convenient because I broke a solder joint on the "fixed" one. :) )
( 17 October 2009 )
- Worked on upgrading the version of V-USB (formerly known as AVRUSB) used to vusb-20090822. I seem to have merged my changes okay and it appears to work as before. This should in theory incorporate some bug fixes although nothing noticable so far. Also should enable easier use of later examples. Still need to upload the changes.
( 19 October 2009 )
- Have begun to port the hid-data example from the latest VUSB code to work on the Arduino to function as a generic UsbDevice capable of receiving and sending data. Have managed to get the device recognised but communication appears to fail.
- Have for the hid-data example working on the Arduino—I moved the timer disable code out of the constructor and that seemed to fix things. Code not uploaded yet.
( 20 October 2009 )
- As partially documented at Learning About Python and USB I've now got a Python + libusb-1.03 + pyusb-1.x script reading and writing over USB to the ported hid-data Arduino UsbDevice with no driver required on OS X. Code not uploaded yet.
( 22 October 2009 )
- I've now got a generic "streaming" usb device implemented on the Arduino. Also a Python interface to it. A couple of demos including one for turning pins (and thus LEDs) on and off. The other implements a "decrypting" dongle. Code not uploaded yet. :)
- Example Python code:
from arduino.usbdevice import ArduinoUsbDevice theDevice = ArduinoUsbDevice(idVendor=0x16c0, idProduct=0x05df) theDevice.write(0x01) print theDevice.read()
- Example Arduino code:
#include <UsbStream.h> void setup() { UsbStream.begin(); UsbStream.write(0xff); } void loop() { UsbStream.refresh(); if (UsbStream.available() > 0) { int data = UsbStream.read(); } }
- Want to look into putting a Processing wrapper together too.
- Oh, I just realised I didn't mention that I actually uploaded the release 003 the other day—that's the one that upgraded to the latest V-USB.
- Uploaded release 004 code (but forgot to update the release notes). Needs better documentation. Will be interested in hearing if it works okay for other people.
( 24 October 2009 )
- Modified streaming device to have dynamic device descriptor stored in RAM. This enables, for example, vendor and product IDs to be changed on the device rather than at compile time. Tried a couple of known pairs and the vendor was recognised and listed. Note: This approach doesn't allow changing something that changes the size of the descriptor. To do that requires a fully dynamic descriptor created/returned at runtime (which should be possible).
( 27 October 2009 )
- Awesome, I've just learned that assembler support in the Arduino IDE patch in Issue 110 has been applied to SVN. I just tested it and the code now compiles out of the box—I tested it with the library in the sketchbook directory. So, this should mean that once IDE version 0018 is released it's fully supported.
( 6 April 2010 )
- Link to a talk about a similar project: and web page (via hackaday)
( 28 July 2010 )
- Thanks to some pushing from I've finally sorted out the issues with Arduino 0018 and the ATmega328p. You want to add this around "usbPoll" and "usbInit" in "usbdrv.h":
#ifdef __cplusplus extern "C"{ #endif ... #ifdef __cplusplus } // extern "C" #endif
And then from add this to "usbconfig.h":
#define USB_INTR_VECTOR INT0_vect | http://code.rancidbacon.com/ProjectLogArduinoUSB | CC-MAIN-2014-52 | refinedweb | 3,344 | 65.93 |
Introducing Nagios-Dropwizard
Super simple Nagios checks via Dropwizard Tasks.
information for our purposes. It was also not idiomatic to Nagios.
How does it work?
The Nagios-Dropwizard framework is built on top of the Dropwizard
Task mechanism. The assumption is that Nagios (i.e.
check_url.py) will call specific tasks on the Dropwizard admin port and that task will return a properly formatted Nagios health check. The output will be parsed by
check_url.py, which will translate the message into the appropriate exit code.
Writing a Nagios Check Task in Dropwizard.
The framework provides a convenient super type,
com.bericotech.dropwizard.nagios.NagiosCheckTask, that developers can extend.
NagiosCheckTask requires subtypes to implement the
performCheck method, which provides request parameters and expects a Nagios
MessagePayload object returned.
For example, say we had a task queue we wanted to monitor:
public class QueueCheckTask extends NagiosCheckTask { static final int CRITICAL = 80; static final int WARNING = 50; Queue queue; public QueueCheckTask(Queue queue) { // Tasks must have names in Dropwizard and this // is a constructor requirement of the framework. // I wish I could make it more obvious. super("check-queue"); this.queue = queue; } @Override public MessagePayload performCheck( ImmutableMultimap<String, String> requestParameters) throws Throwable { itemCount = queue.size(); Level level; if (itemCount > CRITICAL) level = Level.CRITICAL; else if (itemCount > WARNING) level = Level.WARNING; else level = Level.OK; String message = String.format( "Queue is %s at %s items.", level, itemCount); return MessagePayload.builder() .withLevel(level) .withMessage(message) .withPerfData( PerfDatum.builder("count", itemCount).build() ) .build(); } }
To register the Nagios check task with Dropwizard, you simply add it to the environment as a Dropwizard
Task:
environment.addTask(new QueueCheckTask(queue));
Or, if you our using the Fallwizard framework, you simply need to define it as Spring Bean and it will be automatically registered with Dropwizard.
<bean class="my.namespace.QueueCheckTask" c:
Using the
check_url.py Nagios check script.
Checking the status of the Nagios check task is easy using the
check_url.py script. Assuming you have the Dropwizard server running, simply execute:
python check_url.py -u admin -p password -H localhost -P 8081 / -U tasks/check-queue
Calling this, you should receive a message like:
Ok - Queue is OK at 25 items. | count=25
The exit code will also be mapped to the appropriate value (in this case,
0).
Passing parameters.
Your status checks don't have to be static. If you need to, you can pass parameters to the status check which will be available in the
ImmutableMultimap<String, String> requestParameters parameter of the
performCheck method.
I've included a couple of utility functions that will allow you to pull the first parameter out of the
requestParameters, one that even throws an error if the parameter does not exist:
// Use the Guava Optional wrapper to indicate a possible null value. Optional<String> queueName = getParameter(requestParams, "queueName"); // Throws an UnsatisfiedParameterException if the parameter // is not found. String queueName = getMandatoryParameter(requestParams, "queueName");
Error Handling.
The
NagiosCheckTask does not require derived classes to trap exceptions (
throw away!). If an exception is thrown by a derived class, the
NagiosCheckTask will wrap the exception and return a
MessagePayload indicating the service (or at least this check) is
Level.CRITICAL.
This, however, is not the same behavior for the
check_url.py script. Instead, we take the convention that if an error occurs, the status of the check is
UNKNOWN. I've taken this convention because it's completely possible that the
check_url.py script is misconfigured or there's a connection problem. I don't believe this to be the same case with a failure in a
NagiosCheckTask which tends to indicate some sort of failure within the system.
That's it for now. I would love to hear what you think.
Unrepentant Thoughts on Software and Management. | http://rclayton.silvrback.com/introducing-nagios-dropwizard | CC-MAIN-2017-26 | refinedweb | 624 | 50.12 |
Hi,
I have a webservice I have created, and it works fine.
However, I am now wanting to load a file (xslt file in my case), to pass it through to Xalan for transformation.
However... I do not know how to load the file.
The main issue I am having, is I don't know where the file should be...
I am currently using getResourceAsStream() function... is this correct?:
InputStream loInputXSLT = getClass().getResourceAsStream("/xslt/file.xslt");
@Name("myWebService") @Stateless @WebService(name = "MyWeb", serviceName = "MyWeb") public class MyWeb implements MyWebRemote {
I can load the file using the following code:
FileInputStream loInputXSLT = new FileInputStream("/var/opt/transformation.xslt");
Try:
URL f = Thread.currentThread().getContextClassLoader().findResource("transformation.xslt"); FileInputStream xxx = new FileInputStream(f);
Thanks for that, thats what I was looking for :) | https://developer.jboss.org/message/338465 | CC-MAIN-2019-30 | refinedweb | 129 | 51.65 |
I should point out that this article starts out with me thinking it was going to be a blog post, nothing more, and in reality it is not much more than a blog post, but I have a rethink and although it is very small I think it may be useful to some folks, which is why I decided to release it as an article in the end. As I think that there may be some readers that take this the extra step and use it in their own apps.
So enough of the self flagellation, what does the thing attached actually do.
Well ok, here it is, for ages I have marvelled at the Web addin by CoolIris called "PicLens", which I have to say is the best add in I have ever seen. I have even tried to write my own version of this which myself and fellow coder Marlon Grech did for WPF, we did not do as good a job as CoolIris.
We called ours MarsaX, which you can look at right here.
Ours looks like this:
Whilst CoolIris PicLens looks like this:
Now as CoolIris PicLens is a browser addin, what makes it even more impressive is the fact that it runs in the browser. How the hell is this possible. I decided to have a look at whether it would be possible to host the CoolIris PicLens addin in my own WPF app, and to see if I could manipulate what was shown on the 3d wall, and that is what this small article is all about.
The only things you will need are:
This section will explain how the demo WPF app hosts the CoolIris PicLens browser addin.
The very cool thing (at least I think so, there are folks who detest Flash but I love it), is that the CoolIris PicLens browser addin is just a standard Flash SWF file. That is kind of cool, as it means I can embed it in my own page. I know people hate Flash but as I stated I like it, and CoolIris has fully exposed its functionality via JavaScript. You can read more about how to embed the CoolIris PicLens addin using their documentation link, which you can find at.
It is quite detailed. But for me, all I wanted to achieve was to be able to host the addin in my own web page which itself was hosted inside my own WPF app. So let's have a look into that, shall we.
Step 1: Creating some XAML to host the HTML page that will in turn host PicLens
This is by far the easiest part, all we have to do is create a WebBrowser (.NET SP1) control, that we can use to host an arbitrary web page. This is the relevant XAML:
WebBrowser
<Window x:Class="EmbeddedPicLensWpfApp.Window1"
xmlns=""
xmlns:
<WebBrowser x:Name="browser"
Grid.
</Grid>
</Window>
And then in the code behind, all we have to do is set the WebBrowser document to be our HTML page that is in turn hosting the CoolIris PicLens browser addin. This too is easily achieved as follows: EmbeddedPicLensWpfApp
{
public partial class Window1 : Window
{
public Window1()
{
InitializeComponent();
this.Loaded += Window1_Loaded;
}
void Window1_Loaded(object sender, RoutedEventArgs e)
{
String fullPath = System.IO.Path.Combine(
System.IO.Directory.GetCurrentDirectory(),
@"PicLensHostPage.htm");
browser.Navigate(
new Uri(fullPath));
}
}
}
Step 2: Creating the HTML
The moderately hard part of all of this is setting up the CoolIris PicLens addin. CoolIris actually exposes a express wall creator, but that uses an EMBED object tag, which is bit more rigid and less flexible that creating the CoolIris PicLens addin using JavaScript. I chose to use JavaScript as it offered me greater flexibility.
The entire page to host the CoolIris PicLens addin looks like this (I will explain some of this later):
<!-- saved from url=(0014)about:internet -->
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<head>
<style type="text/css">
Html, body
{
Overflow:auto;
}
#wall
{
background-color: #121212;
Overflow:auto;
}
</style>
<script type="text/javascript"
src="">
</script>
<script type="text/javascript">
function LoadPicLensWithSearchWord(keyword)
{
try
{
var flashvars = {
feed: "api://"
+ keyword,
showEmbed: "false",
showSearch: "false"
};
var params = {
allowFullScreen: "true",
allowscriptaccess: "always"
};
swfobject.embedSWF(
"",
"wall", "600", "600", "9.0.0", "",
flashvars, params);
}
catch(err)
{
alert(err);
}
}
</script>
</head>
<body onload="javascript:LoadPicLensWithSearchWord('robots')"
bgcolor="#121212">
<div id="wall" >
<!-- 3D Wall Goes Here -->
</div>
</body>
</html>
But for now, all you need to understand is that the CoolIris PicLens addin is a Flash SWF file and we can create an instance of it using JavaScript.
FlashVars
As you can see from the above example HTML file, there are really 2 main parts. One of them is the FlashVars, which is really a dictionary of configuration values, that can be used to configure the CoolIris PicLens addin.
Let me explain what I am setting there for the CoolIris PicLens addin.
feed
showEmbed
showSearch
For a complete list of available options, you can check out the CoolIris PicLens addin documentation at the following URL:
SwiftObject
The next thing that the JavaScript makes use of is the swfobject.embedSWF library. This small but of JavaScript allows the embedding of Flash SWF files. This is a standard library and can be obtained from the following URL:
swfobject.embedSWF
You can see that it sets various options such as the URL to the SWF and various other bits and pieces such as height/width, etc.
Media Feeds
The more eagle eyed amongst you will notice that I am using the Flickr API, which has a URL similar to this "api://". Now according to the CoolIris PicLens addin documentation, you should be able to create your own custom RSS media stream.
This is described at. To do this, you would need to create an actual web site hosted in IIS, with an HTML file which hosted the CoolIris PicLens addin in it (similar to the example file included), and also create a RSS XML document which is also part of the IIS web site. The documentation suggests that you just also create a crossdomain.xml file which MUST also be located at the root of you IIS installation. I tried this and almost got it working, to the point I knew it was using my own RSS stream, but it still complained about a missing crossdomain.xml. I am not the most patient of people when it comes to the web, it just ain't my bag man, so I leave this as an exercise for the reader.
The demo app simply allows the user to type a keyword, which is then used against the Flickr API. This still obviously requires some interaction between the managed C# code and the browsers JavaScript, more on this in just a minute.
As I just eluded, the attached code makes use of the Flickr media stream API. So all the attached demo code really does, is allow the user to input a new keyword and creates the appropriate Flickr media stream API URL, based on the new keyword, and then passes that to the CoolIris PicLens addin via a JavaScript call. This does mean we need some way of getting stuff from managed code (WPF in this case, though this would be the same in WinForms I would think), into the browser to call some JavaScript.
So let's have a look at that, it is fairly easy to do, we just need to use the System.Windows.Controls.WebBrowser.InvokeScript() method. Here is how the demo app does this:
System.Windows.Controls.WebBrowser.InvokeScript()
private void Search_Click(object sender, RoutedEventArgs e)
{
if (String.IsNullOrEmpty(txtKeyWord.Text))
{
MessageBox.Show("You need to enter a search word", "Error",
MessageBoxButton.OK, MessageBoxImage.Error);
return;
}
//Invoke the JavaScript in the Browsers page
browser.InvokeScript("LoadPicLensWithSearchWord", new Object[] { txtKeyWord.Text });
}
Along the way while I was looking for the best way to invoke JavaScript inside the WebBrowser, I came across a rather cool DLL/Namespace, which is as follows:
The Microsoft.mshtml DLL can be referenced to give you access to some rather rich HTML interaction types.
Microsoft.mshtml
Then you just need to include this namespace using mshtml; and then look what you can do with the WebBrowsers document. You can basically do any of the typical DOM things you would expect like adding child nodes, adding attributes, injecting JavaScript, etc.
using mshtml;
This however is not used in the demo app, I just thought it was cool, and could be very useful. You can read more about the Microsoft.mshtml DLL and its exposed types over at:
If you want to know more about this approach, I found several very good links:
One interesting thing that nags the crap out of me is when I open a web page up and I get the security bar. You may be interested to know that you can get rid of this, by using some special markup in your web content. I have done just that. This is called "Mark Of The Web", which you can read more about right here:.
This is what MSDN has to say about it:
"The Mark of the Web (MOTW) is a feature of Windows Internet Explorer that enhances security by enabling Internet Explorer to force Web pages to run in the security zone of the location the page was saved from—as long as that security zone is more restrictive than the Local Machine zone—instead of the Local Machine zone. The role of the MOTW is more prominent with Microsoft Internet Explorer 6 for."
Again, this is from.
The effect of this is quite apparent, for example here is the attached demo apps HTML file without the special "Mark Of The Web" markup.
Notice the annoying nag notice.
And here it is again, this time with the special "Mark Of The Web" markup supplied in the demo apps HTML file.
All you have to do is add this to your HTML file.
<!-- saved from url=(0014)about:internet -->
Here is a screen shot of the finished thing working inside a WPF Window:
Anyways folks, that is all I have to say for now. I know this is a very small article (and would have made a nice blog posting), but I just felt that some of you may find it useful, and I would rather a wider audience receive something useful.
So apologies that this article is so small and not my normal mammoth of an article (remember mammoths died out).
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
I am lucky enough to have won a few awards for Zany Crazy code articles over the years
MDL=>Moshu wrote:Up till now I used to read them and almost at the very first XAML you lost me
or I lost you. It doesn't matter....
General News Suggestion Question Bug Answer Joke Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | http://www.codeproject.com/Articles/68025/Embed-PicLens?fid=1565933&df=90&mpp=25&noise=3&prof=True&sort=Position&view=Expanded&spc=Compact&select=3425672&fr=26 | CC-MAIN-2015-40 | refinedweb | 1,846 | 66.88 |
Hi Fred -- Your number line program sounds like fun -- especially for the person learning a programming language. For a 7 year old, it might be just as instructive to use felt pens or colored pencils though. As mentioned at my "Getting Inventive with Vectors" at , I like to go for a more complete vector concept, and not make the number line too front and center initially. Space first, planes and lines second, is my approach (take freedoms away later, but start with what's most familiar and real i.e. volume). The way the curriculum is now, we associate numbers with lengths, and then take this away later, saying you need vectors to extend in space, whereas the real numbers only *scale* vectors (are not independently geometric). So to kids, it seems like real numbers start out acting like vectors, but then vectors come along and start doing the number line thing like they've already been doing -- but now using this new terminology. Confusing. Recapitulates the historical sequence (Euclid -> Grassmann) more than presents a coherent conceptual logic.[1] The idea of translating a pencil through space without rotation should be communicable to a 7 year old. If it slides around in space without change in orientation, that's what we call translation (I bet with whole body movements the idea of motion without rotation would get across -- e.g. have kids slide across the room without changing the direction they're facing). So you have a bunch of these pencils pointing in various directions (various lengths too, if you like), and you get to slide them anywhere so long as you don't change direction. Put them tip-to-tail and that's what we mean by "adding the pencils" (or call them arrows -- then it's just a short step to "vectors" -- as a key term). Here, a computer could be helpful, giving a graphical notion of different segments adding up tip-to-tail. It's hard to hold a lot of pencils in space. Pictures help. You could talk about a swimming sea turtle that swallowed a die (random chooser device). At each turn to play, it could pick any of the (6) plus/minus XYZ vectors and move in that direction, with each hop being a segment (a tail-to-tip interval corresponding to a pencil in the above discussion): >>> import turtles, povray, functions # Python 101 modules >>> myfile = povray.Povray("ocean.pov") # open a povray file >>> seaturt = turtles.Turtle(turtles.xyzrays(),myfile) # xyz freedoms >>> seaturt.randomwalk(10) # take 10 random hops >>> functions.xyzaxes(myfile,3) # add xyz axes to picture >>> myfile.close() # close file >>> myfile.render() # render (or do it manually) Result: (notice how the turtle back-tracks at one point). Because these kids start in space, they already know what a tetrahedron and an icosahedron are too, and so you can intelligibly have your turtle swim in one of the 4 directions defined by the tetrahedron's 4 vertices (or one of the icosahedron's 12). The turtles.py module already defines tetrays and icorays for this purpose. See "Random Walks in the Matrix": Once a fully spatial sense is developed, THEN I'd reduce it to a single line on which you can only point left or right (or whatever we call the two directions (your left or my left?)). I do it this way because I think space/volume is actually more natural and easier to understand than these artificially constrained "number line" domains. At this number line level, with degrees of freedom highly (artificially) restricted, you could, if you wanted, do some simple programming. Maybe with slightly older kids. For example (the methods in action): >>> a = arrow(10) >>> b = arrow(-5) >>> c = arrow(8) >>> d = arrow(-3) >>> e = arrow(9) >>> aprinter([a,b,c,d,e]) 10 ----------> -5 <----- 8 --------> -3 <--- 9 ---------> Result 19 >>> aprinter([a,c,d,d]) 10 ----------> 8 --------> -3 <--- -3 <--- Result 12 ========================= Simple source code behind the above: def arrow(n): line='' if n<0: line = '<' line+=abs(n)*'-' if n>0: line += ">" line += '\n' return (n,line) def aprinter(arrows): sum = 0 for a in arrows: print "%#3i %s" % a sum += a[0] print "Result %#3i " % sum [1] | http://mail.python.org/pipermail/edu-sig/2000-December/000859.html | crawl-002 | refinedweb | 699 | 57.61 |
OPC DRIZO, Houston , TX use 3D and bi-directional benefits of CADWorx/PIPE and CAESAR II to boost
efficiency, accuracy and design quality.
Providing Global Solutions for Emerging States
The break up of the former Soviet Union has presented many of the
emerging states with mammoth problems. Namely how to
fund sustained grow through the exploitation of their rich natural
resources and how to bring these commodities to a world market.
Getting online as quickly as possible and with the least
capital outlay has been a constant goal of the world’s developing
nations.
Therefore governments are looking to the global companies to provide
the funding and expertise needed to bring these resources to
market for a share of the profits.
One such an alliance has been formed for the development and
management of Kazakhstan’s Karachaganak field. This alliance between
the Kazakhstan government and the ABTL consortium. ABTL is made up of
the following: AGIP – Italy, British Gas – UK, Texaco – USA
and LUKoil – Russia.
The Karachaganak field is estimated to contain recoverable
reserves of 300 million tonnes of oil and gas condensate and 500
billion
cubic meters of natural gas. Initial plans are 3.6 million tonnes of
oil and condensate in the first year and raising this to 12
million tonnes after the year 2001.
The Karachaganak field produces very sour gas with high
concentrations of H2S and CO2 putting a premium on the glycol
dehydration.
Thorough dehydration would be the key to the removal of these
contaminants therefore making these elements benign and easing their
removal.
Reducing the Environmental Impact
Engineering management of the project was awarded to (BESP) Bechtel-Snamprogetti, London in the UK. BESP chose OPC DRIZO Inc.,
Houston, Texas in the USA to design the glycol contacting and regeneration unit using OPC’s DRIZO patented process.
Specifications for the dehydration of the gas required no more than 1 part per million volume of water @ 70barg. Using OPC’s
DRIZO patented technology meant that the unit could be made with lower capital expenditure count than conventional molecular
sieves due to their low equipment. Also the DRIZO units produce virtually zero emissions.
What also makes the OPC designed DRIZO units unique is that they can run using diethelyne glycol (readily available in the East)
or the more efficient triethelyne glycol (readily available in the West). This means that in the future triethelyne can be used
when it becomes more readily available. The DRIZO unit is the only process that can offer this flexibility.
The water-rich glycol that is fed into the regenerator contains a portion of the sour elements that are present in the wet gas.
The first stage in extracting these elements is to feed this liquid into a flash drum where the flashed off gas is recompressed
and fed back into the plant’s wet gas stream for subsequent dehydration.
The DRIZO process is virtually emission free because there is no
flaring off of contaminated gas by-products as with many glycol
regeneration processes. The by-products of the DRIZO process are glycol
at purity levels of 99.99%+ and sour produced water. This
very high level of glycol purity greatly enhances the dehydration
efficiency when reintroduced into the wet gas stream.
Design Considerations
Requirements were that the unit could be fabricated and shipped from anywhere in the world. With overall dimensions of 13.5m
long x 7m wide x 21.2m high, it was evident that the skid could not be fabricated and transported as a single unit.
During the design process steelwork and pipe runs had to be configured to enable the unit to be broken down into boltable
sections no larger than 2m x 2m x 10m for transportation by sea, road or rail after fabrication. This approach cut out over
70 tonnes of steel – a significant saving in fabrication and shipping costs.
CADWorx PIPE the Key to Efficient 3D Design
Early in the design process OPC decided that they needed to design the unit using plant design software that would provide
them with powerful 3D modeling tools. After much searching OPC settled on CADWorx/PIPE from COADE, Inc.
CADWorx/PIPE which gave OPC the ability to produce 3D, 2D drawings, fabrication isometrics, bills of material and
provided bi-directional links to COADE’s CAESAR II stress analysis package.
OPC found CADWorx/PIPE easy to learn, implement and use. In fact OPC’s designers were up and running after just two
days of training. OPC also found great value in the one-year technical support and program updates that COADE
included with the purchase of the package.
CADWorx/PIPE’s specification driven routines meant that each
fitting was placed to project specifications, which ensured
that only components with the correct material and dimensions data were
placed in the model. This drastically reduced human error.
The workgroup features of CADWorx/PIPE also made it easy for multiple designers to work on different parts of the model
whilst viewing each other’s work at any point in the design. Thus reducing man-hours and increasing output.
Deliverables the True Benefit of 3D Design
Although there were great benefits in working in 3D this would not be much use to those without access to computers or the
software. Design reviews and field workers needed 2D drawings and reports that kept to traditional forms of representation.
These ‘deliverables’ included 2D plans, elevations, sections, fabrication isometrics and spool drawings and bills of
material. Within CADWorx/PIPE’s single module format OPC were able to create these items automatically from the 3D model
with very little effort, modification or clean up.
Another benefit was that any adjustments to the model could be automatically reflected in the 2D drawings. This in it self is
not unique but what really set CADWorx/PIPE apart was the ability to pass modifications to the fabrication isometrics
or sections back to the model. True ‘round trip’ engineering.
CADWorx/PIPE and CAESAR II
Integrating Stress Analysis into the Design
OPC were also looking for other ways to save on the duplication of work and realized that the potential for doing so on the
stress analysis portion of the job was enormous.
Typically the designer would draw a stress isometrics from the plant layouts. The stress engineer would then use these to
input the information for stress analysis. The drawing would then be marked up and then returned to the designer for review.
If all was correct the designer would add the changes to the layout and modify the stress isometrics accordingly.
Because CADWorx/PIPE had the ability to bi-directionally link to COADE’s CAESAR II
pipe stress analysis program, OPC saw the chance of saving hundreds of engineering and design hours. An unexpected benefit was
that during the preliminary design stages the designer could send the piping layout complete with proposed support points,
hangers or restraints to CAESAR II for analysis. Therefore catching potentially large problems early in the design.
Once the analysis had been performed by CAESAR II the stress engineer would make recommendations for layout changes
or support placement based on the results. CADWorx/PIPE could read those results directly into 3D model allowing the
designer to review the proposed modifications for layout feasibility. Once all parties were satisfied CADWorx/PIPE would
import the CAESAR II modified information into the CADWorx/PIPE piping model including any routing or
support recommendations.
The bi-directional benefits did not stop there. CADWorx/PIPE is also able to take the analysis results and interactively
produce stress isometrics. The designer chooses what analysis information appears on the drawing.
Looking Forward
OPC feel that they have not yet fully realized all of the benefits of using CADWorx/PIPE and CAESAR II and feel
that there are still greater efficiencies that can be gained through the use of 3D modeling in their design process.
Many of the mundane tasks that designers had come to accept as part of their job are eliminated by the use of this design tool.
The results of these mundane tasks were still required, but CADWorx/PIPE made the results an automatic by-product of
efficient and safe design.
For OPC using CADWorx/PIPE has meant working smarter, offering clients more and delivering a better quality product for
their design dollars.
In the past OPC may just have produced plant layouts and sections leaving the fabrication isometrics and spool drawings as the
responsibility of the fabricator. Because OPC can now produce these drawings automatically and using CADWorx/PIPE'x
ability
to detect clashes they have put themselves in the position of virtually
proving the design before the first piece of steel is cut
or the first arc is struck with obvious benefits.
OPC believe the decision to implement an integrated engineering CAD system such as CADWorx/PIPE and CAESAR II
has been a worthwhile and painless move. OPC DRIZO feels that the project and they have greatly benefited from the use of these
design tools.
© Copyright 2017. Hexagon PPM. All Rights Reserved. Powered by Nodus Solutions | http://www.coade.com/coadeapplication/OPC | CC-MAIN-2017-47 | refinedweb | 1,504 | 51.78 |
These are chat archives for nextflow-io/nextflow
nextflow config, I have two questions / feature requests ;)
nextflow.configfiles?
x=$(do_something_here)
$(id)in the script.
nextflow config -flatthat dump the config as dot separated attribute that's straightforward to parse, but I like the idea of a yaml / json output, a PR is welcome for that ! :)
process foo { script: def cpus1 = task.cpus * .75 def cpus2 = task.cpus * .25 """ bwa ... etc """ }
def cpu_bwa = Math.floor( task.cpus * 0.75) def cpu_samtools = task.cpus - cpu_bwa
No such variable:mean that there is an error in the script, or that the data does not come from a channel like it should? The error message references an output section,
set id, file "${id}_sort.bam" into sorted_bam
file("${id}_sort.bam")
-flatthing is great, that's super easy to parse
1) not possible, the script execution depends on the config object, therefore it must be parsed before the script itself
I understand this - I want the config object and the script config though. So can still be done in order. I just want to see stuff that's defined in the pipeline script as well. eg. version
aws-abatchand
localexecutors within the same nf script. I did some limited testing and it seem that they can not be mixed, mainly due to the inclusion of
-w s3://bucketto the command line, i.e. the local tasks try to use the s3 work space and fail. | https://gitter.im/nextflow-io/nextflow/archives/2018/03/08 | CC-MAIN-2019-43 | refinedweb | 241 | 74.59 |
Odoo Help
Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps:
CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc.
How to update domain set in fields_view_get?
For instance, take the account_check_writing module, in which we have fields_view_get on account.voucher. Here a domain is set for journal_id field.
Now, I have another module, in which I need to set domain to domain for journal_id.
Should I set the domain available in account_check_writing in my module, so that both the conditions are satisfied or Is there any other better method to solve this problem.
In the custom module inherit the class and add fields_view_get method default code with additional add your domain conditions.
Example
class account_voucher(osv.osv):
_inherit = 'account.voucher'
def fields_view_get(self, cr, uid, view_id=None, view_type=False, context=None, toolbar=False, submenu=False):
"""
Add domain 'allow_check_writting = True' on journal_id field and remove 'widget = selection' on the same
field because the dynamic domain is not allowed on such widget
"""
if not context: context = {}
res = super(account_voucher, self).fields_view_get(cr, uid, view_id=view_id, view_type=view_type, context=context, toolbar=toolbar, submenu=submenu)
doc = etree.XML(res['arch'])
nodes = doc.xpath("//field[@name='journal_id']")
if context.get('write_check', False) :
for node in nodes:
node.set('domain', "[('type', '=', 'bank'), ('allow_check_writing','=',True),('your_field','=','value')]")
node.set('widget', '')
res['arch'] = etree.tostring(doc)
return res
yes, I have done added the domain in my own module. But, logically speaking, my new module does not have anything to do with account_check_writing. and I have to add Account Check writing module in the depends. So How to solve this dependency issue?
If you using account_check_writing module and if you need to change the account_check_writing functionality (domain filter) then override the account_check_writing module fields_view_get method based on your requirement. otherwise create your own fields_view_get method in custom module.
so, I can set domain based on Account check writing module and my own module without adding dependency... Am I right?
In the custom module if you using _inherit = 'account.voucher' then need to dependency account.voucher in the custom module.
yes, So i have to add account_voucher module in depend instead of account_check_writing
About This Community
Odoo Training Center
Access to our E-learning platform and experience all Odoo Apps through learning videos, exercises and Quizz.Test it now | https://www.odoo.com/forum/help-1/question/how-to-update-domain-set-in-fields-view-get-54408 | CC-MAIN-2017-30 | refinedweb | 390 | 51.34 |
0
Hello All!
I was just wondering if C++ classes could internally be represented in the following way:
#include <stdio.h> struct A { int a; void (*ptr) (A *); }; void display(A *ptr) { ptr->a = 2; printf("ptr->a = %d", ptr->a); } int main() { A obj; obj.ptr = display; obj.ptr(&obj); return 0; }
The C++ equivalent could be something like:
#include <stdio.h> class A { public: int a; void display(); }; void A::display() { this->a = 2; printf("a = %d", this->a); } int main() { A obj; obj.display(); return 0; }
So, do C++ compilers could actually convert C++ code to valid C code(something like the above example)? Or how exactly is it done(if done in any other way). This was just a simple class example. I couldn't think how access specifiers could be implemented in C. Any inputs would be helpful. I was just curious if the compilers actually did it this way.
Thanks a lot! | https://www.daniweb.com/programming/software-development/threads/425485/regarding-internal-represantaion-of-classes | CC-MAIN-2017-13 | refinedweb | 158 | 68.97 |
Installing Qt toolkit version 4.2.3 on Linux
Trolltech
Trolltech is a Norwegian software company. It was founded in 1994. Since then, the company has grown rapidly. Trolltech has got two product lines. The famous Qt toolkit and the Qtopia. Qtopia is an application framework for embedded linux devices. Today Trolltech has about 200 employees and more than 4400 customers in 60 countries. Their headquarters is in Oslo. Trolltech is a new kind of a company. Together with MySQL AB, they are the most renowned companies that use the open source business model. Trolltech's motto is Create more, Code less.
Qt toolkit
Qt is a cross-platform application development framework. The most famous applications using Qt are KDE, Opera, Google Earth and Skype. Qt was first publicly released on May 1995. It is dual licenced. That means, it can be used for creating open source applications as well as proprietary ones. Qt toolkit is a very powerful toolkit. Only the Java Swing toolkit can match its capabilities. The great advantage of the Qt is that it is a way faster than Swing. The look and feel of the Qt is also superior to Swing. On the other hand, the Swing toolkit is completely free of charge. For commercial development Qt is not free. Qt is well established in the open source community. Thousands of open source developers use Qt all over the world.
In June 2005, Trolltech has released the latest major release. The long awaited 4th version. The new version has a lot of new features, changes and improvements.
Trolltech introduced five new technologies.
- Tulip - a set of template container classes
- Interview - a model/view architecture for viewing items
- Arthur - the painting framework
- Scribe - the Unicode text renderer
- Mainwindow - a modern action-based mainwindow, toolbar, menu, and docking architecture
The most exciting technology is definitely the Arthur painting framework.
Download
For non-commercial development Qt is free of charge. We can easily download the toolkit. We go to their download page. As we are talking about Linux installation here, we choose the Qt/X11 Open Source Edition. The file name is qt-x11-opensource-src-4.2.3.tar.gz. The size of the file is 35.9MB. The file is archived and zipped. To unzip the file, we type the following command.
tar -zxf qt-x11-opensource-src-4.2.3.tar.gz
The command will unzip all the files to a directory qt-x11-opensource-src-4.2.3. The size of the directory is now 114.5 MB. Now it is time to carefully read the README and the INSTALL file. There we will find detailed installation instructions. The installations is easy and straightforward.
Install
We install the library the classic way. On Unix systems, installation of a software is divided into three steps.
- Configuration
- Building
- Installation
First we run the configure script. The script will configure the library for our machine type. By default, the qt will be installed in /usr/local/Trolltech/Qt-4.2.3 directory. This can be changed by the -prefix parameter of the configure script. And it is the only option, that I used. I decided to install the library into the /usr/local/qt4 directory. Note that the installation word has two meanings here. It is the whole process consisting of all three steps. And it also means 'moving files to specific directories', which is the last, third step.
./configure -prefix /usr/local/qt4
This is the Qt/X11 Open Source? yes
The script will ask for licence acceptance. After we type yes, the script continues.
Qt is now configured for building. Just run 'make'. Once everything is built, you must run 'make install'. Qt will be installed into /usr/local/qt4 To reconfigure, run 'make confclean' and 'configure'.
After a short period of time, the script will nicely inform about the outcome.
The building of the qt toolkit takes several hours. It depends on the power of your processor. During the building my system went suddenly down. The temperature of the processor reached the critical value. This was due to my inefficient cooling probably. I always look at events optimistically. I figured out how to check temperature on the command line
cat /proc/acpi/thermal_zone/THRM/temperature
I also realized, that when you restart the building, it will continue where it has finished.
After the process finished, I saw no message like 'building finished successfully'. This is common, but in my opinion not correct.
The last step is installing, or moving files to the directories.
sudo make install
This command finishes the installation process. The library is now installed in /usr/local/qt4 directory. The size of the directory is 361.5 MB. As we can see, Qt is a huge library.
The last thing that we do, is adding the qt4 path to the PATH system variable. bash users, which are majority of linux users, need to edit the .profile file.
PATH=/usr/local/qt4/bin:$PATH export PATH
The changes will be active after another login.
Testing a small example
Finally we will write a small code example.
#include <QApplication> #include <QWidget> int main(int argc, char *argv[]) { QApplication app(argc, argv); QWidget window; window.resize(250, 150); window.setWindowTitle("Simple example"); window.show(); return app.exec(); }
To build this example, we will use a very handy tool called qmake.
qmake -project qmake make
If the qt4 installation directory is not a part of the PATH variable, we can provide the full path to the qmake tool.
/usr/local/qt4/qmake -project /usr/local/qt4/qmake make
Installation finished OK. | http://zetcode.com/articles/qt4/ | CC-MAIN-2019-22 | refinedweb | 936 | 69.99 |
perlmeditation mstone <p><b>Meditations on Programming Theory <br> meditation #4: Identification, part 2: functions </b></p> <p>In [id://222451|MOPT-03,] I said that the theoretical difference between functions and variables was arbitrary, but useful. This time I'll explain why, and show how functions and variables work together.</p> <readmore> <h1>Functions:</h1> <h2>functions and variables:</h2> <p>A function has all the same features a variable does: A function is an <b>entity</b> with an <b>abstraction barrier,</b> and that barrier defines a <b>scope.</b> Every function can be <b>bound</b> to an <b>identifier,</b> and every function produces a <b>value.</b> The set of all values a function can produce defines a <b>type.</b></p> <p>Function notation actually makes those pieces easier to see than variable notation. For a simple function declaration:</p> <pre> sub function { return ("value"); } </pre> <ul> <li> The keyword <tt>'sub'</tt> defines an entity. <li> The braces <tt>'{'</tt> and <tt>'}'</tt> mark that entity's abstraction barrier. <li> The string <tt>'function'</tt> is the identifier bound to that entity. <li> The keyword <tt>'return'</tt> marks the entity's value. <li> And the string <tt>'value'</tt> is the value itself. </ul> <p.</p> <p>Theoretically, the code:</p> <pre> sub func { return "original value.\n" } sub print_func { print func(); } sub redefine_func { sub func { return ("local value.\n"); } print_func(); } print_func(); redefine_func(); </pre> <p>could produce the output:</p> <pre> original value. local value. </pre> <p>which would make <tt>func()</tt> behave like a dynamically-scoped variable. It doesn't, though. The actual output is:</p> <pre> local value. local value. </pre> <p>which shows us that the second definition completely replaces the first.</p> <p.</p> <p>(1) - <b>BIG HONKIN' CORRECTION:</b> [adrianh] correctly showed, below, that you can dynamically redefine functions by assigning an anonymous subroutine to a local typeglob:</p> <code> sub redefine_func { local (*func) = sub {return ("local value.\n")}; } </code> <p>makes a function behave exactly the same way as a local variable. Kudos and thanks adrianh!</p> <p>Officially, variables are <b>degenerate functions.</b> The term 'degenerate' indicates the limiting case of an entity, which is equivalent to some simpler entity. We usually find degenerate entities by setting some attribute to zero. A point is a degenerate circle, for instance: a circle with radius zero.</p> <p>In the case of functions and variables, the thing we set to zero is the function's <b>parameter list.</b></p> <h2>functions, parameters, and formal structures:</h2> <p>A <b>parameter</b> is a variable whose value affects a function's value, but not its <b>formal structure.</b></p> <p>A formal structure is the set of operations that make a function do whatever it does. Those operations will always boil down to a pattern of substitutions, and we can always build a <b>formal system</b> (see [id://220362|MOPT-02] for more about formal systems) that uses those substitutions as rules.</p> <p>A formal structure isn't a complete formal system, though. It's just a computation that happens <i>within</i> a formal system. Officially, a formal structure describes a family of 'generally similar' computations that can take place in a given formal system, and a function's parameters are axioms on which those computations are performed.</p> <p>Which is great, as long as we know what 'generally similar' means. To explain that, we need more vocabulary.<p> <p>The rules that make up a formal system fall into one of two categories.. <b>equivalence rules</b> and <b>transformation rules:</b></p> <ul> <li> In an <b>equivalence rule,</b> the replacement string represents the <b>same meaning</b> as the original string. <li> In a <b>transformation rule,</b> the replacement string represents a <b>different meaning</b> than the original string. </ul> <p>The concepts of equivalence and meaning are <b>equipotent:</b> they have <b>equivalent power.</b> We can define either one in terms of the other, so if we have one, we automatically have both:</p> <ul> <li>If we know that two strings represent the same meaning, we can write an equivalence rule that replaces one string with the other (sometimes.. we'll get to the details in a minute).<p> <li>If we know that a rule is an equivalence rule, we know that the strings on either side of that rule are alternative representations of the same thing.. even if we don't have any idea what that thing happens to be. </ul> <p>The process of assigning meanings to symbols is called <b>interpretation.</b>:</p> <pre> print ( "2 + 2 ", (2+2 == 4) ? "is" : "is not", " equal to 4.\n" ); print "- but -\n"; print ( "'two plus two' ", ('two plus two' eq 'four') ? "is" : "is not", " equal to 'four'.\n" ); </pre> .</p> <ul> <li>An equivalence that's well enough behaved to use as a substitution rule is called a <b>formal equivalence.</b><p> <li>An equivalence that we acknowledge as humans, but don't represent formally (i.e.: with a rule) is called an <b>informal equivalence</b> or an <b>interpretation.</b> </ul> <p>We use both ideas when programming, but only certain substitutions count as equivalence rules. The others are just transformations where both sides of the rule happen to have similar interpretations.</p> <p>Formal equivalence is <b>transitive.</b> The transitive property says that <b>if 'a' equals 'b' and 'b' equals 'c', then 'a' equals 'c'.</b> In other words, we can condense any sequence of equivalence rules into a single rule. In programming, we call that process <b>reduction</b> and say that two strings are <b>reducible</b> if:</p> <ol> <li> we can derive one string from the other <li> we can derive both strings from each other <li> or we can derive a third string from both original strings </ol> <p>The set of all strings we can derive from a single axiom by repeatedly applying a single rule is called a <b>transitive closure,</b> or just a <b>closure.</b> An equivalence rule breaks a language into a set of mutually-exclusive closures called <b>partitions.</b> Any two strings from the same partition are formally equivalent, and no string in a partition is reducible to any string in any other partition.</p> <p>Those partitions mark the difference between equivalence rules and transformation rules. Only an equivalence rule can partition a language. The closure of a transformation rule contains the entire language.</p> <p>We use the term 'reduction' because formal equivalence is slightly different from logical equivalence. Logical equivalence is <b>symmetric: if 'a' equals 'b', then 'b' equals 'a'.</b> Substitution rules aren't symmetric, though. They only work in one direction. To make a formal equivalence symmetric, we'd need two rules, each going the opposite direction. Only the second reduction principle, above, is symmetric. The first and the third are <b>asymmetric.</b></p> <p. <p>To get back to functions, with those new terms in our arsenal, we can say that:</p> <ul> <li> A function's <b>formal structure</b> defines <b>a sequence of transformation rules,</b> along with <b>any equivalence reductions</b> necessary to turn the result of one transformation into a target for the next. </ul> <p>In practice, that means code tends to gather in chunks where we change something, then rearrange the result. That <b>change, then rearrange</b> sequence is a low-level design practice called an <b>idiom.</b></p> <p.</p> <ul> <li>A function's <b>parameter list</b> is the set of original values that get transformed and rearranged until they become the function's value. </ul> <p>A variable, being a degenerate function, takes no parameters. Any axioms necessary are hardwired into the function, so the formal structure produces the same value every time.</p> <p>Parameters share the same strange, half-way existence as values, since both parameters and values can cross an abstraction barrier. Parameters are visible in two different scopes at the same time, and making that happen takes special machinery.</p> <h2>functions and bound variables:</h2> <p.</p> <ul> <li> A statement that calls a function and supplies it with parameters is officially known as an <b>invocation context.</b><p> <li> A free variable that represents a parameter is officially known as a <b>bound variable</b>, or <b>formal parameter.</b><p> <li> The entity in the evaluation context that gets bound to the formal parameter is called the <b>actual parameter.</b><p> <li> The system that handles those bindings is called the <b>parameter passing mechanism.</b></p> </ul> <p>Parameter passing mechanisms come in three basic flavors: <b>positional, named</b> and <b>defaulted:</b></p> <ul> <li><b>Positional</b> parameters are the most common. The formal parameters appear in a specific order when the function is defined, and actual parameters are bound to the appropriate names based on their order in the invocation context.<p> <li><b>Named</b> parameters are less common. The actual parameters can occur in any order, because you list them with their names in the invocation context.<p> <li><b>Defaulted</b> parameters are tricky. The default value is defined as part of the function's definition, and the compiler creates a behind-the-scenes entity for that value at compile time. At runtime, the formal parameter starts off being bound to that entity, but gets re-bound to the actual parameter, <i>if</i> an appropriate entity exists in the invocation context. </ul> <p>Perl happens to use an interesting variation on positional parameters. Every function takes exactly one parameter: a list. The contents of that list are defined in the invocation context.</p> <p>At first glance, that seems like a cheap way to avoid building a 'real' parameter passing mechanism, like C or Java have, but it's actually quite elegant from a theoretical standpoint. A Perl function can also <i>return</i> a list, so 'the list' forms a standard interface between any two functions. When we add the rest of Perl's list-handling tools, that interface makes Perl very good at handling <b>signal processing logic,</b> where each function acts like a filter on an arbitrarily long stream of input:</p> <pre> sub sieve { my ($prime, @list) = @_; if (@list) { return ($prime, sieve (grep { $_ % $prime } @list)); } else { return ($prime); } } print join (' ', sieve (2..50)), "\n"; </pre> <p>The code above implements the <b>Sieve of Eratosthenes</b> with signal processing logic. The function <tt>sieve()</tt>:</p> <pre> 2 3 5 7 11 13 17 19 23 29 31 37 41 43 47 </pre> <p>Signal processing is another idiom, and functional programmers use it heavily. Signal processing programs tend to be easy (okay.. <i>easier</i>) to analyze mathematically, so you can prove that the program will behave properly if you're willing to do the work.</p> <p>Perl also makes it easy to simulate named parameter passing:</p> <pre> sub use_pseudo_named_parameters { my (%args) = @_; ... } use_pseudo_named_parameters ( 'arg1' => 'val1', 'arg2' => 'val2', 'arg3' => 'val3', ); </pre> <p>And defaulted parameter passing:</p> <pre> sub use_pseudo_defaulted_parameters { my (%args) = @_; my %params = ( 'param1' => 'default 1', 'param2' => 'default 2', 'param3' => 'default 3', ); @params{ keys %params } = @args{ keys %params }; undef (%args); ... } use_pseudo_defaulted_parameters ( 'param1' => 'invocation value 1', 'param2' => 'invocation value 2', ); </pre> <p>so instead of sticking us with a single parameter passing mechanism, Perl makes it reasonably simple to simulate any mechanism we want.</p> <h2>functions and identification:</h2> <p.</p> <p>By that reasoning, the string <tt>'func(1)'</tt> would be the name of a specific variable. the <tt>'(1)'</tt> part would just be a naming convention, not an invocation context that establishes a binding. I used a similar naming convention for hash keys in the code samples, above.</p> <p>Yes, the 'machine in a factory' version is generally easier to implement in a real computer, but the 'family of variables' version is also possible, and is occasionally useful. I'll explain how in next week's meditation, which will cover <b>lvalues.</b></p> <p.</p> <p.</p> | http://www.perlmonks.org/?displaytype=xml;node_id=224813 | CC-MAIN-2016-26 | refinedweb | 2,058 | 54.73 |
Text occurs in every diagramming application, for instance as labels on nodes and links. In JViews, you have the choice between IlvZoomableLabel and IlvText; IlvZoomableLabel is more a text picture that allows fancy gradient paints inside the glyphs while the solid colored IlvText is usually faster and allows more traditional text manipulations (wrapping mode and so on). Both classes provide plenty of features that allow to adapt them to all possible situations.
Plenty of features ... sigh! Features cost memory and performance. Each text parameter that can be customized needs to be stored in the text object, and if you have many features, you also have many parameters. Indeed, IlvText for instance uses quite a bit of memory, and if you have 100000 text objects, you can run into memory problems ... that are avoidable.
What if we don't need all these customizable parameters? Let's assume all text objects in our application display a single line each, and the only variable parameters are the color of the text and of the background rectangle, and of course the label and position of the text object. Then we need to store 2 colors, a string and a position in the text object, but not parameters such as antialiasing, font, wrapping mode, margins and so on, because these can be hard coded and are fixed for all text objects. This is the essential idea of a lightweight text: use only memory for parameters that are customizable, and avoid memory for parameters that we never want to customize in our application.
Good idea, but it sounds like a lot of effort! My new LightText class needs to render text to display it. How can I do this without writing a text rendering engine from scratch? The idea is to use the text rendering engine of IlvText. Whenever I need to draw the LightText, I allocate a new IlvText object, draw it instead, and then give it free. That is, I delegate the rendering of the LightText to a temporary IlvText object allocated on the fly. The Java garbage collection will make sure that the memory footprint remains low. The first sketch is this:
public class LightText extends IlvGraphic {
Color foreground;
Color background;
IlvRect bounds; // the current position
String label; // the label text
...
public void draw(Graphics dst, IlvTransformer t) {
getDelegate(true).draw(dst, t);
}
private IlvText getDelegate(boolean includeBounds) {
IlvText text = new IlvText(new IlvPoint(), label));
// hard coded parameters
text.setAntialiasing(true);
text.setLeftMargin(4);
text.setRightMargin(4);
text.setTopMargin(4);
text.setBottomMargin(4);
text.setFillOn(true);
text.setStrokeOn(false);
// customized parameters
text.setForeground(foreground);
text.setFillPaint(background);
if (includeBounds)
text.moveResize(bounds);
else if (bounds != null)
text.move(bounds.x, bounds.y);
return text;
}
}
Now LightText uses less memory, because it stores less parameters than IlvText, but of course this is not fast. When drawing the objects, delegates must be allocated, and this needs time. This does not hurt much when we draw only 10-30 objects, but when we have to draw 100000 objects, it will be much too slow.
Well, 100000 objects on the screen? We only need to draw 100000 objects at the same time if the view is demagnified. Otherwise all these objects would not fit on the screen. But when the view is demagnified, each single object occupies only a tiny area and you cannot recognize the details of the object anyway. Therefore it is not needed to draw all details. In this case, it is sufficient to draw only a tiny rectangle instead of the fully rendered text. Only when the view is magnified, we need to draw the text in detail, but in this case we never need to draw many objects. Hence, when the view is magnified, the slowdown caused by the few delegate drawing does not hurt so much, and when the view is demagnified, we don't need to allocate many delegates. Here is the modified draw routine:
public void draw(Graphics dst, IlvTransformer t) {
IlvRect bbox = boundingBox(t);
if (bbox.width <= 2 || bbox.height <= 2) {
// draw only a tiny filled rectangle
dst.setColor(foreground);
int w = Math.max(1, (int)bbox.width);
int h = Math.max(1, (int)bbox.height);
dst.fillRect((int)bbox.x, (int)bbox.y, w, h);
} else if (bbox.width <= 4 || bbox.height <= 4) {
// draw border and inner, but no text
int w = Math.max(1, (int)bbox.width);
int h = Math.max(1, (int)bbox.height);
// fill on is hardcoded to true
dst.setPaint(background);
dst.fillRect((int)bbox.x, (int)bbox.y, w, h);
// stroke on is hardcoded to false, hence only the text label,
// not the border is drawn
dst.setColor(foreground);
int x = (int)bbox.x;
int middle = (int)(bbox.y + 0.5f * bbox.height);
dst.drawLine(x, middle, x + w, middle);
} else {
// draw the text via delegate
getDelegate(true).draw(dst, t);
}
}
This draw method has 3 levels of details:
I wrote a small application that displays 10000 text objects. When using IlvText directly, the application allocates 13 MB memory. When using LightText, it requires only 2.5 MB of memory. This technique hence allowed me to reduce the memory load by a factor of 5.2. When I zoom out, the LightText actually draws faster than IlvText. When I zoom in, only few objects get drawn, hence the slowdown of LightText is in this case neglectable.
The full code of the example is available here: lighttext.zip. It compiles with JViews 8.0 and 8.1. My small application has a check box at the top to choose between IlvText and LightText, and it displays the history of the memory footprint at the bottom. When you switch from IlvText to LightText, you can easily see how the memory footprint is reduced.
LightText is not just yet another text object. It is rather a programming technique, since it depends on your application which parameters of IlvText must be customizable and which can be hard coded with fixed values. The same idea can be applied to other IlvGraphic subclasses as well. JViews has plenty of features in general purpose IlvGraphic subclasses, but when memory is critical and the purpose is more special and less general, optimizing graphic classes might be worth the effort. | https://www.ibm.com/developerworks/community/blogs/javavisualization/entry/fasttextwithlessmemory?lang=en | CC-MAIN-2015-35 | refinedweb | 1,039 | 56.86 |
Convert Images using C# Image Processing Library
Converting Images to Black n White and Grayscale
Sometimes you may require to convert colored images to Black n White or Grayscale for printing or archiving purposes. This article demonstrates the use of Aspose.Imaging for .NET API to achieve this using two methods as stated below.
- Binarization
- Grayscaling
Binarization
In order to understand the concept of Binarization, it is important to define a Binary Image; that is a digital image that can have only two possible values for each pixel. Normally, the two colors used for a binary image are black and white though any two colors can be used. Binarization is the process of converting an image to bi-level meaning that each pixel is stored as a single bit (0 or 1) where 0 denotes the absence of color and 1 means presence of color. Aspose.Imaging for .NET API currently supports two Binarization methods.
Binarization with Fixed Threshold
The following code snippet shows you how to use fixed threshold binarization can be applied to an image.
Binarization with Otsu Threshold
The following code snippet shows you how Otsu threshold binarization can be applied to an image.
Grayscaling
Gray-scaling is the process of converting a continuous-tone image to an image with discontinues gray shades. The following code snippet shows you how to use Grayscaling.
Convert Image to grayscale with Setting 16bit
Gray-scaling is the process of converting a continuous-tone image to an image with discontinues gray shades. The following code snippet shows you how to use Grayscaling with 16bits.
Convert GIF Image Layers To TIFF Image
Sometimes it is needed to extract and convert layers of a GIF Image into another raster image format to meet an application need. Aspose.Imaging API support the feature of extracting and converting layers of a GIF Image into another raster image formats. Firstly, we will create instance of image and load GIF image from the local disk, then we will get the total count of layers in the source image using Length property of GifFrameBlock class and iterate through the array of blocks. Now we will check if the block is NULL then ignore it else convert the block to TIFF image. The following code snippet shows you how to convert GIF image layers to TIFF image.
Converting SVG to Raster Format
See :
Convert SVG to Raster image
Converting Raster Image to PDF
See :
Convert Raster Image to PDF
Converting Raster Image to Svg
See
Convert Raster Image to Svg
Converting RGB color system to CMYK for Tiff file Format
Using Aspose.Imaging for .NET, developers can convert RGB color system file to CMYK tiff format. This article shows how to export/convert RGB color system file to CMYK tiff format with Aspose.Imaging. Using Aspose.Imaging for .NET you can load image of any format and than you can set various properties using TiffOptions class and save the image. The following code snippet shows you how to achieve this feature.
Working with animation
See
Converting Open document graphics
See
Converting Corel Draw images
See Convert CDR
Converting webp images
Converting eps images
Converting cmx images
Converting dicom images
Exporting Images
Along with a rich set of image processing routines, Aspose.Imaging provides specialized classes to convert images to other formats. Using this library, image format conversion is as simple as changing the file extension to desired format in the Image class Save method and by specifying the appropriate ImageOptions values. Below are some specialized classes for this purpose in ImageOptions namespace.
It is easy to export images with Aspose.Imaging for .NET API. All you need is an object of the appropriate class from ImageOptions namespace. By using these classes, you can easily export any image created, edited or simply loaded with Aspose.Imaging for .NET to any supported format. Below is an example that demonstrates this simple procedure. In the example, a GIF image is loaded by passing the file path as a parameter to the Load method. It is then exported to various image formats using the Save method. The examples here show how to load a GIF and save it to BMP, JPEG, PNG and finally TIFF using Aspose.Imaging for .NET with C# and Visual Basic.
Convert compressed vector formats
Aspose.Imaging supports next compressed vector formats: Emz(compressed emf), Wmz(compressed wmf), Svgz(compressed svg). Supported read of these formats and export to other formats.
Combining Images
This example uses Graphics class and shows how to combine two or more images into a single complete image.To demonstrate the operation, the example creates a new Image canvas in JPEG format and draw images on the canvas surface using Draw Image method exposed by Graphics class. Using Graphics class two or more images can be combine in such a way that the resultant image will look as a complete image with no space between the image parts and no pages. The canvas size must be equal to the size of resultant image. Following is the code demonstration that shows how to use Draw Image method of the Graphics class to combine images in a single image.
Expand and Crop Images
Aspose.Imaging API allows you to expand or crop an image during image conversion process. Developer needs to create a rectangle with X and Y coordinates and specify the width and height of the rectangle box. The X,Y and Width, Height of rectangle will depict the expansion or cropping of the loaded image. If it is required to expand or crop the image during image conversion, perform the following steps:
- Create an instance of RasterImage class and load the existing image.
- Create an Instance of ImageOption class.
- Create an instance of Rectangle class and initialize the X,Y and Width, Height of the rectangle
- Call Save method of the RasterImage class while passing output file name, image options and the rectangle object as parameters.
Read and Write XMP Data To Images
XMP (Extensible Metadata Platform) is an ISO standard. XMP standardizes a data model, a serialization format and core properties for the definition and processing of extensible metadata. It also provides guidelines for embedding XMP information into popular image such as JPEG, without breaking their readability by applications that do not support XMP. Using Aspose.Imaging for .NET API developers can read or write XMP metadata to images. This article demonstrates how XMP metadata can be read from image and write XMP metadata to images.
Create XMP Metadata, Write It And Read From File
The release of Aspose.Imaging for .NET 3.3.0 contains the Xmp namespace. With the help of Xmp namespace developer can create XMP metadata object and write it to an image. The following code snippet shows you how to use the XmpHeaderPi, XmpTrailerPi, XmpMeta, XmpPacketWrapper, PhotoshopPackage and DublinCorePackage packages contained in Xmp namespace.
Export Images in Multi Threaded Environment
Aspose.Imaging for .NET now supports converting images in multi threaded environment. Aspose.Imaging for .NET ensure the optimized performance of operations during execution of code in multi-threaded environment. All imaging option classes (e.g. BmpOptions, TiffOptions, JpegOptions, etc.) in the Aspose.Imaging for .NET now implement IDisposable interface. Therefore it is a must that developer properly dispose off the imaging options class object in case Source property is set. Following code snippet demonstrates the said functionality.
Aspose.Imaging now supports SyncRoot property while working in multi-threaded environment. Developer can use this property to synchronize access to the source stream. Following code snippet demonstrates how the SyncRoot property can be used. | https://docs.aspose.com/imaging/net/converting-images/ | CC-MAIN-2022-27 | refinedweb | 1,264 | 54.12 |
odbx_result_finish man page
odbx_result_finish, odbx_result_free — Closes the result set and frees its allocated memory
Synopsis
#include <opendbx/api.h>
int odbx_result_finish
(odbx_result_t* result);
void odbx_result_free
(odbx_result_t* result);
Description
odbx_result_finish() closes the result set which may also include dropping the non-fetched rows sent by the server. It releases all resources allocated by odbx_result() and by the native database library which are attached to
result as well as the memory the first parameter is pointing to. Trying to free
result manually using free() will create memory leaks because it contains more dynamically allocated structures and also the memory of the result set allocated by the native database library. odbx_result_finish() must be called even if the statement was not a SELECT-like statement which returned now rows as it may be necessary to commit the changes done by the statement.
odbx_result_free() performs the same tasks as odbx_result_finish() but is unable to return an error if the task couln't be completed. It shouldn't be used in applications linking to the OpenDBX library version 1.3.8 or later and it will be removed from the library at a later stage.
result must be valid a result set created by odbx_result() which is returned via its second parameter. After feeding it to odbx_result_finish() it becomes invalid and must not be feed to it again. Otherwise a "double free" may occur and the application may be terminated.
Return Value
odbx_result_finish()
- -
ODBX_ERR_PARAM
The
resultparameter is invalid
See Also
odbx_result() | https://www.mankier.com/3/odbx_result_finish | CC-MAIN-2017-47 | refinedweb | 245 | 51.38 |
Mixcloud API wrapper for Python and Async IO
Project description
Mixcloud API wrapper for Python and Async IO
aiomixcloud is a wrapper library for the HTTP API of Mixcloud. It supports asynchronous operation via asyncio and specifically the aiohttp framework. aiomixcloud tries to be abstract and independent of the API’s transient structure, meaning it is not tied to specific JSON fields and resource types. That is, when the API changes or expands, the library should be ready to handle it.
Installation
The following Python versions are supported:
- CPython: 3.6, 3.7, 3.8, 3.9
- PyPy: 3.5
pip install aiomixcloud
Usage
You can start using aiomixcloud as simply as:
from aiomixcloud import Mixcloud # Inside your coroutine: async with Mixcloud() as mixcloud: cloudcast = await mixcloud.get('bob/cool-mix') # Data is available both as attributes and items cloudcast.user.name cloudcast['pictures']['large'] # Iterate over associated resources for comment in await cloudcast.comments(): comment.url
A variety of possibilities is enabled during authorized usage:
# Inside your coroutine: async with Mixcloud(access_token=access_token) as mixcloud: # Follow a user user = await mixcloud.get('alice') await user.follow() # Upload a cloudcast await mixcloud.upload('myshow.mp3', 'My Show', picture='myshow.jpg')
For more details see the usage page of the documentation.
License
Distributed under the MIT License.
Project details
Release history Release notifications | RSS feed
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/aiomixcloud/ | CC-MAIN-2021-10 | refinedweb | 245 | 50.02 |
Python Project on Typing Speed Test – Build your first game in Python
Project in Python – Typing Speed Test
Have you played a typing speed game? It’s a very useful game to track your typing speed and improve it with regular practice. Now, you will be able to build your own typing speed game in Python by just following a few steps.
Keeping you updated with latest technology trends, Join TechVidvan on Telegram
About the Python Project
In this Python project idea, we are going to build an exciting project through which you can check and even improve your typing speed. For a graphical user interface, we are going to use the pygame library which is used for working with graphics. We will draw the images and text to be displayed on the screen.
WAIT! Have you worked on the 1st Python project in 20 project series by TechVidvan – Python Game Project on Tic Tac Toe
Prerequisites
The project in Python requires you to have basic knowledge of python programming and the pygame library.
To install the pygame library, type the following code in your terminal.
pip install pygame
Steps to Build the Python Project on Typing Speed Test
You can download the full source code of the project from this link:
Typing Speed Test Python Project File
Let us understand the file structure of the Python project with source code that we are going to build:
- Background.jpg – A background image we will use in our program
- Icon.png – An icon image that we will use as a reset button.
- Sentences.txt – This text file will contain a list of sentences separated by a new line.
- Speed typing.py – The main program file that contains all the code
- Typing-speed-open.png – The image to display when starting game
First, we have created the sentences.txt file in which we have added multiple sentences separated by a new line.
This time we will be using an Object-oriented approach to build the program.
1. Import the libraries
For this project based on Python, we are using the pygame library. So we need to import the library along with some built-in modules of Python like time and random library.
import pygame from pygame.locals import * import sys import time import random
2. Create the game class
Now we create the game class which will involve many functions responsible for starting the game, reset the game and few helper functions to perform calculations that are required for our project in Python.
Let’s go ahead and create the constructor for our class where we define all the variables we will use in our project.
class Game: def __init__(self): self.w=750 self.h=500 self.reset=True self.active = False self.input_text='' self.word = '' self.time_start = 0 self.total_time = 0 self.accuracy = '0%' self.results = 'Time:0 Accuracy:0 % Wpm:0 ' self.wpm = 0 self.end = False self.HEAD_C = (255,213,102) self.TEXT_C = (240,240,240) self.RESULT_C = (255,70,70) pygame.init() self.open_img = pygame.image.load('type-speed-open.png') self.open_img = pygame.transform.scale(self.open_img, (self.w,self.h)) self.bg = pygame.image.load('background.jpg') self.bg = pygame.transform.scale(self.bg, (500,750)) self.screen = pygame.display.set_mode((self.w,self.h)) pygame.display.set_caption('Type Speed test')
In this constructor, we have initialized the width and height of the window, variables that are needed for calculation and then we initialized the pygame and loaded the images. The screen variable is the most important on which we will draw everything.
3. draw_text() method
The draw_text() method of Game class is a helper function that will draw the text on the screen. The argument it takes is the screen, the message we want to draw, the y coordinate of the screen to position our text, the size of the font and color of the font. We will draw everything in the center of the screen. After drawing anything on the screen, pygame requires you to update the screen.
def draw_text(self, screen, msg, y ,fsize, color): font = pygame.font.Font(None, fsize) text = font.render(msg, 1,color) text_rect = text.get_rect(center=(self.w/2, y)) screen.blit(text, text_rect) pygame.display.update()
4. get_sentence() method
Remember that we have a list of sentences in our sentences.txt file? The get_sentence() method will open up the file and return a random sentence from the list. We split the whole string with a newline character.
def get_sentence(self): f = open('sentences.txt').read() sentences = f.split('\n') sentence = random.choice(sentences) return sentence
5. show_results() method
The show_results() method is where we calculate the speed of the user’s typing. The time starts when the user clicks on the input box and when the user hits return key “Enter” then we perform the difference and calculate time in seconds.
To calculate accuracy, we did a little bit of math. We counted the correct typed characters by comparing input text with the display text which the user had to type.
The formula for accuracy is:
(correct characters)x100/ (total characters in sentence)
The WPM is the words per minute. A typical word consists of around 5 characters, so we calculate the words per minute by dividing the total number of words with five and then the result is again divided that with the total time it took in minutes. Since our total time was in seconds, we had to convert it into minutes by dividing total time with 60.
At last, we have drawn the typing icon image at the bottom of the screen which we will use as a reset button. When the user clicks it, our game would reset. We will see the reset_game() method later in this article.
def show_results(self, screen): if(not self.end): #Calculate time self.total_time = time.time() - self.time_start #Calculate accuracy count = 0 for i,c in enumerate(self.word): try: if self.input_text[i] == c: count += 1 except: pass self.accuracy = count/len(self.word)*100 #Calculate words per minute self.wpm = len(self.input_text)*60/(5*self.total_time) self.end = True print(self.total_time) self.results = 'Time:'+str(round(self.total_time)) +" secs Accuracy:"+ str(round(self.accuracy)) + "%" + ' Wpm: ' + str(round(self.wpm)) # draw icon image self.time_img = pygame.image.load('icon.png') self.time_img = pygame.transform.scale(self.time_img, (150,150)) #screen.blit(self.time_img, (80,320)) screen.blit(self.time_img, (self.w/2-75,self.h-140)) self.draw_text(screen,"Reset", self.h - 70, 26, (100,100,100)) print(self.results) pygame.display.update()
6. run() method
This is the main method of our class that will handle all the events. We call the reset_game() method at the starting of this method which resets all the variables. Next, we run an infinite loop which will capture all the mouse and keyboard events. Then, we draw the heading and the input box on the screen.
We then use another loop that will look for the mouse and keyboard events. When the mouse button is pressed, we check the position of the mouse if it is on the input box then we start the time and set the active to True. If it is on the reset button, then we reset the game.
When the active is True and typing has not ended then we look for keyboard events. If the user presses any key then we need to update the message on our input box. The enter key will end typing and we will calculate the scores to display it. Another event of a backspace is used to trim the input text by removing the last character.
def run(self): self.reset_game() self.running=True while(self.running): clock = pygame.time.Clock() self.screen.fill((0,0,0), (50,250,650,50)) pygame.draw.rect(self.screen,self.HEAD_C, (50,250,650,50), 2) # update the text of user input self.draw_text(self.screen, self.input_text, 274, 26,(250,250,250)) pygame.display.update() for event in pygame.event.get(): if event.type == QUIT: self.running = False sys.exit() elif event.type == pygame.MOUSEBUTTONUP: x,y = pygame.mouse.get_pos() # position of input box if(x>=50 and x<=650 and y>=250 and y<=300): self.active = True self.input_text = '' self.time_start = time.time() # position of reset box if(x>=310 and x<=510 and y>=390 and self.end): self.reset_game() x,y = pygame.mouse.get_pos() elif event.type == pygame.KEYDOWN: if self.active and not self.end: if event.key == pygame.K_RETURN: print(self.input_text) self.show_results(self.screen) print(self.results) self.draw_text(self.screen, self.results,350, 28, self.RESULT_C) self.end = True elif event.key == pygame.K_BACKSPACE: self.input_text = self.input_text[:-1] else: try: self.input_text += event.unicode except: pass pygame.display.update() clock.tick(60)
7. reset_game() method
The reset_game() method resets all variables so that we can start testing our typing speed again. We also select a random sentence by calling the get_sentence() method. In the end, we have closed the class definition and created the object of Game class to run the program.
def reset_game(self): self.screen.blit(self.open_img, (0,0)) pygame.display.update() time.sleep(1) self.reset=False self.end = False self.input_text='' self.word = '' self.time_start = 0 self.total_time = 0 self.wpm = 0 # Get random sentence self.word = self.get_sentence() if (not self.word): self.reset_game() #drawing heading self.screen.fill((0,0,0)) self.screen.blit(self.bg,(0,0)) msg = "Typing Speed Test" self.draw_text(self.screen, msg,80, 80,self.HEAD_C) # draw the rectangle for input box pygame.draw.rect(self.screen,(255,192,25), (50,250,650,50), 2) # draw the sentence string self.draw_text(self.screen, self.word,200, 28,self.TEXT_C) pygame.display.update() Game().run()
Output:
Summary
In this article, you worked on the Python project to build your own game of typing speed testing with the help of pygame library.
I hope you got to learn new things and enjoyed building this interesting Python project. Do share the article on social media with your friends and colleagues.
How to launch and test this project in python editor or pycharm ?
Please refer:
great information,thnx for sharing it
check Secrect Messaging in Python
Can anyone please share use case diagram for this course!
Can anyone please share use case diagram for this Code!
I also want the usecase diagram and modules
This project is not running properly. First time run gives correct output but after reseting it, it doesn’t show any value in wpm and time value is displayed like 1545925769 seconds while in reality, the time taken is very less. Also, while closing the game window, a prompt showinhg:”python not responding” is popped. Kindly help.
U can change the code if ur window is not close..
In this function
def run(self) :
for event in pygame. event. get() :
if event.type == QUIT:
self. running = False
pygame.display.quit()
pygame.quit()
quit()
U can change the code if ur window is not close..
In this function
def run(self) :
for event in pygame. event. get() :
if event.type == QUIT:
self. running = False
pygame.display.quit()
pygame.quit()
quit() | https://techvidvan.com/tutorials/project-in-python-typing-speed-test/ | CC-MAIN-2020-45 | refinedweb | 1,875 | 69.07 |
Introduction: Humidity + Servo + Arduino
Hey, guys!!I'm Sridhar Janardhan back with another tutorial.In today's tutorials, i am going to teach you how to control a servo with the data thrown by the humidity sensor.
Objectives: To know how the valve of a door is opened is to water the land when the soil has less moisture content. To know about this you must be familiar with the DTH-11 sensor which is a humidity sensor for the electronics hobbyist.The DTH-11 sensor is a device designed to measure the humidity of it's surrounding. Now let's start to gather the components required for this project.
Step 1: Components Required:
The Components required for this projects are:
- Arduino Uno
- DTH-11 sensor
- Jumper wire
- Servo Motor
- Breadboard
Now let's get into the interfacing part of the Arduino.
Step 2: Interfacing the DTH-11 Sensor:
As I explained in the initial part of the Ibles. DTH-11 is a humidity sensor that measures the humidity content around it's surrounding.This sensor throws a data at a regular ping.These data are used by the hobbyist to implement their action as here I have used these data to control the movement of the servo.
Key features of the DTH11 sensor:-
- Operating Voltage: +5 Volts (Can be powered by Arduino)
- the range of temperature: 0 t0 50 °C (error of ± 2 °C)
- Humidity percentage: 20 to 90% RH ± 5% RH error
- Interfacing Medium: Digital
The three pins of the DTH11 sensor are:
- VCC pin: The power supply pin needed to operate
- GND pin: The ground pin needed to ground the components in the circuit
- Signal pin: The pin that sends the data to the Arduino
The connection of the sensor is as follows:
- VCC Pin: The power supply is connected to the positive railing of the breadboard.
- GND Pin: This pin is connected to the negative railing of the breadboard.
- Signal Pin: This pin is connected to the Digital pin 3 of the Arduino.
Now let's start interfacing the Servo motor.
Step 3: Servo Motor Interface:
The servo motor is a specially designed motor whose speed and acceleration can be controlled in both the direction.This speed finds its application in the major physical mechanism.
The pin description of the servo motor are:
- Red wire: The VCC pin of the servo.
- maroon wire: The GND pin of the servo.
- Orange wire: The Signal wire.
The Servo connection is as follows:
- Red wire: The VCC pin is connected to the positive railing of breadboard.
- Maroon wire: The GND pin is connected to the negative railing of the breadboard.
- Orange wire: The signal pin is connected to the digital pin 5 of Arduino.
Step 4: Coding
#include "DHT.h" #include Servo myservo;
int pinDHT11 = 2;
SimpleDHT11 dht11;
void setup() {
myservo.attach(5);
Serial.begin(115200); }
void loop() {
Serial.println("=================================");
Serial.println("Sample DHT11...");
byte temperature = 0;
byte humidity = 0;
if (dht11.read(pinDHT11, &temperature, &humidity, NULL)) {
Serial.print("Read DHT11 failed.");
return;
}
Serial.print("Sample OK: ");
Serial.print((int)temperature);
Serial.print(" *C, ");
Serial.print((int)humidity);
Serial.println(" %");
if (humidity <= 50) { for (pos = 0; pos <= 180; pos += 1) {
myservo.write(pos); delay(15); }
}
else { for (pos = 180; pos >= 0; pos -= 1) {
myservo.write(pos); delay(15);
} delay(1000);
}
Participated in the
Makerspace Contest 2017
Be the First to Share
Recommendations
10 Comments
1 year ago
Please could you check the code as it appears to have many errors
Reply 1 year ago
Actually the code itself is full of errors , since the value declaration are not part of the library I will update a new clean code , that work !
Reply 4 months ago
could you send me the code to pls
Reply 6 months ago
Could you send me the code please?
Question 4 months ago on Step 4
hi! sorry to disturb but at the second #include there is no file comming after so the compiler gives me this error:
Arduino: 1.8.12 (Windows 10), Board: "Arduino Mega or Mega 2560, ATmega2560 (Mega 2560)"
humidity_sensor:3:10: error: #include expects "FILENAME" or <FILENAME>
#include
^
exit status 1
#include expects "FILENAME" or <FILENAME>
This report would have more information with
"Show verbose output during compilation"
option enabled in File -> Preferences.
1 year ago on Step 4
I have error when comply " SimpleDHT11" does not name a type " please help.
3 years ago
error message be like
Arduino: 1.8.4 (Windows 8.1), Board: "Arduino/Genuino Uno"
C:\Users\user\Documents\Arduino\humidity_servo\humidity_servo.ino:1:17: fatal error: DHT.h: No such file or directory
#include "DHT.h"
^
compilation terminated.
exit status 1
Error compiling for board Arduino/Genuino Uno.
This report would have more information with
"Show verbose output during compilation"
option enabled in File -> Preferences.
Reply 2 years ago
separately download the library of DHT.h
2 years ago
hey! I'm new to this so i know almost nothing. i have learned a little but in school and wanted to do this for my project. i just want to ask to be sure if i dont need a resistor or anything like that?
3 years ago
There is some error in your code plz do some corrections in your code | https://www.instructables.com/Humidity-Servo-Arduino/ | CC-MAIN-2021-25 | refinedweb | 878 | 56.25 |
Warning: this page refers to an old version of SFML. Click here to switch to the latest version.
Using OpenGL
Introduction
This tutorial is not about OpenGL, but only the way to use SFML window package to interface with OpenGL. As you know, one of the most important feature of OpenGL is portability. However, OpenGL requires you to create a rendering context first. And rendering context is all but portable ; each operating system as its own way to create it. That's why people usually use a portable library to get a portable windowing / event system that will be able to run OpenGL under every system. Most famous libraries for doing so are SDL and GLUT, but they are written in C and not always convenient to use in C++, especially if you have an object-oriented approach. They also lack essential features, like being usable in multiple windows or in existing interfaces.
Initialization
To use OpenGL, you only have to include Window.hpp : the OpenGL and GLU headers will be automatically included by it. This is to prevent you from having to use preprocessor, as OpenGL headers have different names on each operating system.
#include <SFML/Window.hpp>
No extra step is requiered for SFML initialization when you want to use OpenGL. So let's create a window as we learnt before :
sf::Window App(sf::VideoMode(800, 600, 32), "SFML OpenGL");
If you want more control on the OpenGL context creation, you can pass an additional structure when creating the window,
which contains extra graphics settings like depth buffer, stencil buffer and antialiasing. The structure to use
is
WindowSettings :
sf::WindowSettings Settings; Settings.DepthBits = 24; // Request a 24-bit depth buffer Settings.StencilBits = 8; // Request a 8 bits stencil buffer Settings.AntialiasingLevel = 2; // Request 2 levels of antialiasing sf::Window App(sf::VideoMode(800, 600, 32), "SFML OpenGL", sf::Style::Close, Settings);
Every member of the
WindowSettings structure has a proper default value, so you can only specify
the settings you care about. Depending on the hardware configuration, the requested settings may not all be available.
In this case, SFML will choose the closest settings supported by the machine. To know what has actually been chosen,
you can get the window configuration back :
sf::WindowSettings Settings = App.GetSettings();
Once a window is created, we have a valid rendering context. So this is a good time for doing our OpenGL specific initializations. Here we setup a perspective view with Z-Buffer enabled :
//);
All OpenGL contexts are shared, this means that you can create a texture while WindowA is active, and use it to draw things in WindowB.
Main loop - drawing a cube
Main loop starts as before, with event processing :
while (App.IsOpened()) { sf::Event Event; while (App.GetEvent(Event)) { // Some code for stopping application on close or when escape is pressed... }
But here we have to handle one more event :
Resized. This is because when
window size changes, we have to adjust the OpenGL viewport to match the new size. Viewport is the area
of the window where the scene will be displayed, so if you don't adjust it to fit the new window size,
your scene will display in a small sub-rectangle of the window.
if (Event.Type == sf::Event::Resized) glViewport(0, 0, Event.Size.Width, Event.Size.Height);
You can now start rendering a new frame. Before calling any OpenGL command, you have to make sure
that the proper window is active. Here we don't care because we have only one window, but if you handle
multiple SFML windows you must take this in account. To make a window active, you can call
its
SetActive function :
App.SetActive();
Then, the first thing to do is to clear the color and depth buffers to erase previous frame's content :
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
We are now ready to draw a cube. First, we define its position and orientation. Orientation will change according to the current time, to add a little bit of motion.
glMatrixMode(GL_MODELVIEW); glLoadIdentity(); glTranslatef(0.f, 0.f, -200.f); glRotatef(Clock.GetElapsedTime() * 50, 1.f, 0.f, 0.f); glRotatef(Clock.GetElapsedTime() * 30, 0.f, 1.f, 0.f); glRotatef(Clock.GetElapsedTime() * 90, 0.f, 0.f, 1.f);
Then we draw the cube :
glBegin(GL_QUADS);End();
Finally, we can end our main loop by displaying rendered frame on screen :
App.Display();
And that's it, you should have a white rotating cube on a black background. As usual, no clean up is needed after main loop exits.
Conclusion
Using OpenGL with SFML is straight-forward, and does not require extra step compared to regular SFML use. You can get a robust, portable and object-oriented OpenGL windowing system with only a few lines of code.
You now know almost everything about the SFML window package. You have learned how to install SFML API, open a window, handle properly inputs, events and time, and interfacing with OpenGL. You can now jump to another section to learn a new package. | https://www.sfml-dev.org/tutorials/1.6/window-opengl.php | CC-MAIN-2018-05 | refinedweb | 839 | 64.3 |
Tom Miller's BlogXna Game Studio and Managed DirectX<BR>These postings are provided "AS IS" with no warranties, and confer no rights. <BR>Use of included script samples are subject to the terms specified <a href="">here</a>. Server2006-08-30T08:17:00ZMake money from your amazing game creations!<P>Today we announced the <A class="" href="" mce_href="">Xbox LIVE Community Games</A>. You can read more information about it (including a FAQ) <A class="" href="" mce_href="">here</A>.</P> <P><SPAN style="FONT-SIZE: 10pt; FONT-FAMILY: 'Arial','sans-serif'">What is the big thing in the announcements that could be considered exciting? You'll be able to make money from your creations. <?xml:namespace prefix = o<o:p></o:p></SPAN></P> <P><SPAN style="FONT-SIZE: 10pt; FONT-FAMILY: 'Arial','sans-serif'".<o:p></o:p></SPAN></P> <P><SPAN style="FONT-SIZE: 10pt; FONT-FAMILY: 'Arial','sans-serif'" <A class="" href="" mce_href="">Schizoid</A>? A game that has been released and making money. Now we have this announcement today. Almost anyone has the potential to make money by writing a game and getting it on the service. It couldn't be simpler.<o:p></o:p></SPAN></P> <P><SPAN style="FONT-SIZE: 10pt; FONT-FAMILY: 'Arial','sans-serif'"> As for me, I'm celebrating this announcement by taking a trip to Vegas. (Ok, that's just coincidence, but still).<o:p></o:p></SPAN></P> <P mce_keep="true"> </P><img src="" width="1" height="1">tmiller Games!<P>For those of you who don't read the <A class="" href="" mce_href="">XNA Team Blog</A> the Community Games on Xbox LIVE feature has gone into a beta release!</P> <P>Plus, the <A class="" href="" mce_href="">web site</A> looks much cooler now.</P> <P!</P><img src="" width="1" height="1">tmiller XNA Game Studio Mind Edition...<P>I'm probably going to get in trouble for announcing this before any press release has went out publicly, but I find it hard to hold back my excitement! The next version of the XNA Game Studio product line will be the "XNA Game Studio Mind Edition".</P> <P.</P> <P!</P> <P.</P> <P.</P> <P.</P> <P.</P> <P>I have to admit, I feel excited to get all of this off of my chest now. I haven't felt this good in at least a <A class="" href="" mce_href="">year</A>. Must be something in the <A class="" href="" mce_href="">air</A>.</P><img src="" width="1" height="1">tmiller just tried to use four render targets on my Xbox 360 and it failed!!<P>Dear Mr Miller, you suck. I'm running in 1080p resolution, and needed to have four simultaneous render targets, and a depth buffer, and I can't do it.</P> <P>--</P> .</P> <P>Let's take a look at an example. Your Xbox is rendering in wide screen 720p mode with a resolution of 1280xx.</P> <P>As you can see, even this simple scenario we overstepped our bounds. Luckily, we have a mechanism (called "tiling") that allows us to work with this data anyway. We essentially break our large 1280x720 set of pixels into a series of smaller sets of pixels that *do* fit within the memory constraints. In the example above, the back buffer could be broken down into two separate "tiles" of size 640x720 with 2X multisampling, and things would work just fine. With only two possible consumers of this chunk of memory in version one (the back buffer and the depth buffer), you could easily fit the largest possible back buffer sizes within this chunk of memory.</P> <P."</P> <P>There are two big issues with this. First, performance would be horrible. Each tile that is rendered actually will have your <STRONG>*entire*</STRONG>. </P> <P>Knowing that, what size of render targets can you make? What if you have a 2048x2048 render target (surface format of Color) with no multisampling? Would that fit if you had four of them at the same time? A quick and dirty way of figuring it out would be this:</P> <P>2048 * 2048 = 4MB * 4bytes (color) = 16MB (total amount of memory needed) / 2MB (maximum size) == 8</P> <P>So, you'd need 8 tiles for this buffer to render correctly, 8 is less than 15 so you'd be safe. What if you added 2x multisampling though?</P> <P>2048 * 2048 == 4MB * 8 bytes (color+ms) == 32 MB (total amount of memory needed) / 2MB (maximum size) == 16</P> <P>Nope, 16 tiles is too many, this wouldn't fit. What if you didn't have a depth buffer though? Depth buffer counts as a buffer in the EDRAM so without it, you'd have 2.5MB per surface available:</P> <P>2048 * 2048 == 4MB * 8 bytes (color+ms) == 32 MB (total amount of memory needed) / 2.5MB (maximum size) == 13</P> <P>13 tiles, you'd fit again!</P> <Px720 surface and 3 tiles used in that, you can be assured that you don't have 3 separate tiles each of size 426.67x720. Each tile is rounded up to the next size it can be (normally a mutliple of 32). In the case above, that 1280x720 surface would really be three tiles of size 448x720 with the third title having some "empty" space in it. So if you do your little math above and it comes to exactly 15 tiles, you still may be too big if the tiles aren't the size you'd expect.</P> <P>Do I really expect someone to try to create and use four simultaneous 2048x2048 Color render targets with 4x multisampling and a 32bit depth buffer? Well no, because as you've seen here, it would fail! However, if it does fail, at least this hopefully explains why!</P> <P.</P><img src="" width="1" height="1">tmiller CornflowerBlue?<P>This post will be purely speculation by me! </P> .</P> <P.</P> <P.</P> <P.</P> <P>If you've read my Kickstart book, the vast majority of sample code in there clears all of the backgrounds to CornflowerBlue. Every sample I've written since that point in time does as well.</P> <P. =)</P><img src="" width="1" height="1">tmiller quite what I had in mind...<P>Well, so far my promise to myself to write new blog posts more often has been working out well. I've almost written as much this week as I did the entirety of last year (and I have to admit, four blog posts over the course of a year is pretty pathetic, which is what I had last year).</P> <P>This post though doesn't have any interesting anecdotes from our latest release, and in fact isn't even about XNA Game Studio or game development at all!</P> <P>As part of my exercise of "writing more" one of the things I had to do was go back and see what I was writing about back in the day when I was enjoying my blog posting, so over the course of today I went back and re-read every post I ever made. It was an interesting experiment to say the least, I'm surprised at the number of things I read that I just had completely forgotten.</P> <P>A couple of the posts I found were related to World of Warcraft, which is what this post will be about. They were about a year apart, and they were each essentially scathing reviews of the state of the game. While I most likely really did feel so annoyed when I wrote them (and had cancelled my account each time), the fact remains I still play the game to this day. While many people bemoaned the expansion and how much it "changed the game", I for one, enjoyed the changes. There is much less of a brick wall when you hit the level cap than there was before, and there are plenty of things to do regardless of what type of player you are or were.</P> <P>In my original WoW life, I was a "hardcore raider". We raided every day from 6:30 until 11,12, whenever we felt finished for the night. We cleared Molten Core, Onyxia, Blackwing Lair, ZG, AQ20/40, and Naxxramus (although to be fair, I left the guild after BWL). My schedule became to the point where I couldn't meet those demands and amongst other reasons I left that guild.</P> <P>That began my second WoW life, where I decided to start over on a PvP server. I won't get into my philosophies on the pvp servers because I don't want to be inundated with a series of "l33t speak" that I don't understand telling me how much I suck. I maxed out multiple characters on the pvp server, killed thousands upon thousands of my "enemies" (there were no cross server battle grounds back then), and there is absolutely no doubt in my mind that the pvp ruleset isn't for me. I am a carebear through and through apparently.</P> <P>Around the time I was realizing this and getting frustrated though, my original "hardcore" characters had the option to move. My old server was too big and had queue's every night, and all characters were invited to move to another new server. Sensing this was a great time to see if I still enjoyed the game without the pvp annoyance and removing myself from the pressure of raiding constantly, I moved all of my characters to the new server and started playing again. I joined a little guild, had a bit of fun.. They merged with a huge guild and I considered that a big mistake for a variety of reasons. Shortly after that merge, I left the big guild, and reformed the old little guild, bringing most of the original members back.</P> <P>So now I'm the guild leader of some random guild on a brand new server. We were the definition of casual, and happy that way. Well, most of us were. Some were annoyed we weren't doing the big 40man raids (I'd already seen so much of Molten Core i never wanted to step foot in it again), and they left. The beta for the expansion was just around the corner though, and that's where we all started playing more again. Everyone in the "core" of the guild was in the expansion beta as well.</P> <P>Once the expansion came out, I decided the name of the old guild was stupid, and we needed to find a much more stupid name to call ourselves, so I kicked everyone out of <Dark Crusade> and created the guild <Less Than Three>. For those who are "unaware" Less Than Three is <3, which is an emoticon for a heart. Naturally, our guild tabard is pink with a heart. That still annoys some people, but eh, I get a kick out of it. So anyways, the expansion came out, and we were invigorated, so many new quests, so many new dungeons, so much to do!</P> <P>So we all leveled up with varying levels of speed, and ran all kinds of dungeons along the way. It was good fun. Eventually we got to the point where we decided we were ready, we were going to run a "heroic". We were going to collect epic loots and slaughter the place. We decided we would try heroic Slave Pens because at the time you needed revered reputation to run a heroic and that was the easiest faction to get rep with, and on top of that was supposedly one of the easiest heroics in the game. We gathered at the entrance, we walked in, we were ready to do great things.</P> <P>We wiped on the first pull. Twice. We wiped on the second pull. We were all red before we got to the first boss. We did kill that first boss though before we decided we weren't ready for heroics yet.</P> <P>Looking back at it now, it's pretty funny, but at the time, the thoughts were "Holy crap, this is impossible." We yawn through most of them nowadays, but I always look back at that first wipe fest with fond memories..</P> <P>Anyway, finally around November of last year I decided as a guild we needed to start doing more, and that meant Karazhan. As I stated earlier, we are an extremely small casual guild, but I figured that doesn't mean we were bad players (half of us came from the hardcore raids back in the day), we were just not focused on being #1 anymore. When I say small though, I mean not even large enough to run Karazhan (ie, less than 10 players keyed). We had all the core classes covered though. We had enough tanks and healers, all we needed was DPS classes. I figured we could just find random pick up group (PUG) members to fill out our DPS slots, and that's what we started doing.</P> <P>We had some success too. Soon we were killing multiple bosses and things were going well, when we eventually hit a brick wall. We had gotten to the point where we could take just about any random PUG and clear through Curator. When I say any random PUG, i mean just that as well. We've had a PUG who did half the damage I did (I'm the main tank). We've had a PUG who went out of his way to break any crowd control that existed. We've had a PUG who decided to use his pet to pull a group of mobs that weren't even in the same room as us. Actually, that was all the same guy. He didn't understand why I didn't invite him back next week because of how awesome he was. Where was I? Oh yeah, brick wall. Shade of Aran was our brick wall. </P> <P>This is what Shade of Aran had taught me. At the time, the majority of our raids were made up with 8-9 guild members and 1-2 PUGs. I learned that we could 8 man every boss up until Curator, and that we could 9man Curator. I also learned that a random pug in all greens who doesn't understand the need to stand still during Flame Wreath is a recipe for disaster. Unlike any boss before it, Aran punishes players for doing things wrong, and if there is one thing you can count on a pug doing, it's "something wrong".</P> <P>So, I started trying to recruit players to help us out. I talked to all of the PUG members we had with us that went well, but most of them were in established guilds and were with us on an alt for easy/free badges and loot. The ones interested in joining were the ones who were horrible. This is where I learned that I am a horrible recruiter. I simply cannot do it at all!</P> <P>Which brings us to this extremely long winded post! Consider this my last lame attempt at recruitement. Our raid times are pretty inflexible (given our small size and the schedule requirements of some of our members), but our normal raid nights are Friday and Saturday nights, from 8pm to Midnight pacific time. We have killed all of the bosses in Karazhan with the exception of Prince, Nightbane and Netherspite (see my note on pugs above), but hope to have those done soon.</P> <P>So, if you're interested in raiding Karazhan (and eventually Zul'Aman soon) on those days, and random other dungeons throughout the week (all in the evenings, we're all at work during the day you know), you should contact me! If you're a warlock, even better, since all our current warlocks are alts of characters already in Kara and some bosses (I'm looking at you Illhoof) could really use a warlock. I really don't care what class you are though. Do you want to tank? Awesome, that means I could bring an alt sometimes. Want to heal? I'm sure our healers would like to bring an alt sometimes too! Got DPS, that's awesome, we always need DPS. </P> <P>Now, we're a pretty accomodating guild for the most part. I'm entirely too helpful for my own good most times. Given the extremely public nature of this post though, I'm going to have to be a little more particular on the things I am looking for. I understand the concept that beggars can't be choosers, but I don't feel like being taken advantage of and helping someone get ready for raids with us just to have them disappear. If you want to create a new character on the server, we'll help you level up when it's convenient for us, but it won't be all that often. I'd much rather if you were already level 70 and keyed. If you want to tank in Kara, you'll need to be uncrittable, and have a minimum of 12k hps and 12k armor (20k if druid). If you want to heal, you'll need at least 1000 +heal, and preferably more, along with a decent mana regen and mana pool. If you want to DPS, I hope you can sustain 400+ the entire run. </P> <P>To be honest, I hate putting any kind of preconditions on it at all. However, i think what i have is pretty reasonable. I don't care what class you are, what spec you are, so long as you are a good player, you'd be welcome. So after all that, if you're still interested, you should contact me, either here on these boards, or in game. We play on the Eitrigg server as Alliance, and you can contact any character that starts with "Miller" in the guild <A class="" href="" target=_blank<Less Than Three></A>. </P> <P>We have a web site as well at <A href=""></A> (as you can see our old guild name remnants still exist), but it's been flaky the last few days to say the least. Hopefully it is resolved soon.</P> <P>P.S. How sad is it that I'm posting a recruiting post on my blog? I almost feel dirty.</P><img src="" width="1" height="1">tmiller purple?<P>One? <A class="" href="" mce_href="">Shawn</A> normally goes a great job staying ahead of the curve so the possibilties of topics diminishes some. =)</P> <P>So for now I'm just going to post a random anecdote that was asked in our internal alias one day.. Why does the framework clear the targets (render targets, back buffer) to purple when RenderTargetUsage.DiscardContents is turned on?</P> .</P> <P?</P> <P>So, why purple? To avoid the hate mail that would have come from the original neon green!</P><img src="" width="1" height="1">tmiller Announcements!<P>Well,!</P> <P> Nevertheless, some great things annouced today that are quite exciting. You can read all about them on the <A class="" href="'ve%20neglected%20this%20blog%20for%20so%20long%20that%20it's%20been%20since%20last%20August%20that%20I%20wrote%20anything.%20%20This%20is%20probably%20because%20we%20have%20so%20many%20other%20team%20members%20who%20are%20doing%20such%20a%20great%20job%20with%20their%20own%20blogs,%20mine%20almost%20seems%20redundant!" target=_blankXNA Team Blog</A> as well.</P> <P.</P> <P.</P> <P!</P> <P.</P><img src="" width="1" height="1">tmiller (or "I never seem to write blog posts anymore")<P>Well, Gamefest started today, and with that our <A class="" title="XNA Team Blog" href="" mce_href="">team blog</A> posted a new <A class="" title="XNA Game Studio 2.0" href="" mce_href="">announcement</A>. Exciting stuff!</P> <P.</P> <P.</P><img src="" width="1" height="1">tmiller Game Studio Express Update month we'll be releasing an update with a small sampling of some of things we've been working on. You can read more about it on our team blog <A class="" title=Updatehere</A>.<img src="" width="1" height="1">tmiller? Books? A web site? A blog post?<P).</P> <P> Anyway, enough random complaints from me, and onto what I originally was going to write about..</P> <P.</P> <P>If you haven't heard recently (sometimes I am a little slow on the announcements) we have launched a new <A class="" title="Xna Creators Club" href="" mce_href="">creators club web site</A>. It has a new <A class="" title="Starter Kits" href="" mce_href="">starter kit</A> for you to download and enjoy, a number of <A class="" title=Samplessamples</A> to look at, and even new <A class="" title=Forumsforums</A>. Naturally, there is more there as well, so go check it out!</P> <P>We also have our <A class="" title=ContestDream. Build. Play</A>. contest going on, and we announced some of the <A class="" title=Prizesprizes</A> which are simply amazing in my opinion ($10,000 cash, a new computer, a chance to have your game published on Xbox Live Arcade and more?). There was also a 'warm up' contest using the Spacewars starter kit that had some <A class="" title=Winners!amazing entries</A>. I'm excited to see the things that will come out of the real contest now. Like I've mentioned before, I've always wanted to run a contest for something like this.</P> <P>As part of the GDC list of <A class="" title=Announcementsannouncements</A>, we also have a lot of great information. Such as Creators Club members getting a license to the <A class="" title="Torque X" href="" mce_href="">Torque X engine</A>. None of this even hints at some of the things we are hoping to have done in the not too distant future. For anyone who thought we were going to be resting on our laurels as they say, rest assured we are doing nothing of the sort. </P> <P>Now if I could just get myself to be more like <A class="" title="Shawn Hargreaves" href="" mce_href="">Shawn</A> and actually write some technical posts here once in a while...</P><img src="" width="1" height="1">tmiller<P class=MsoNormal<FONT face="Times New Roman" size=3>With a title like that I’m bound to start discussing the intricacies of one of the CLR features am I not?<SPAN style="mso-spacerun: yes"> </SPAN>Well given the day, I have to say of course not.<SPAN style="mso-spacerun: yes"> </SPAN>I’m going to take this time to remind myself of the things that have happened over the last year.</FONT></P> <P class=MsoNormal<?xml:namespace prefix = o<o:p><FONT face="Times New Roman" size=3> </FONT></o:p></P> <P class=MsoNormal<FONT face="Times New Roman" size=3>Have you heard?<SPAN style="mso-spacerun: yes"> </SPAN>We shipped the Xna Game Studio Express!<SPAN style="mso-spacerun: yes"> </SPAN>Honestly, I thought I had written a short post stating that, but apparently I’m losing my memory in my old age, because it certainly doesn’t appear to be here!<SPAN style="mso-spacerun: yes"> </SPAN>For that, I apologize, but it wasn’t a secret that it was coming out, so hopefully this isn’t a surprise to anyone!<">As I insinuated in a post a few months ago, this release has been quite ‘special’ for>This time a year ago I was still in the DirectX team.<SPAN style="mso-spacerun: yes"> </SPAN>Such a thing as “Xna Game Studio Express” didn’t exist, and the amount of people even aware such a thing was being considered was very small.<SPAN style="mso-spacerun: yes"> </SPAN>This team has gone from essentially nothing to having a product out that will changes the rules in this industry. This was all done in the space of a single year’s time (really in just a few short months).<SPAN style="mso-spacerun: yes"> </SPAN>This is an absolutely remarkable achievement, and I hope every single person who helped make this possible feels the same sense of pride that I do.<SPAN style="mso-spacerun: yes"> </SPAN>The hard work shown by this team and how we pulled everything together to allow all of our anxious customers to start using Xna was a sight to behold.<">It’s not often in life you get to be a part of something that not only can change the industry you work in, but can spawn an entirely new one as well.<SPAN style="mso-spacerun: yes"> </SPAN>What this team has accomplished in such a short time speaks volumes about the passion and dedication they have and bodes well for the future of this product.>We’ve accomplished all my original goals for managed code in gaming.<SPAN style="mso-spacerun: yes"> </SPAN>We’ve released a product with plenty of support.<SPAN style="mso-spacerun: yes"> </SPAN>We’ve opened up the Xbox 360 for development using managed code.<SPAN style="mso-spacerun: yes"> </SPAN>We even have a <A class="" title="Dream. Build. Play." href="" target=_blankcontest</A> going for you to enter to win great prizes.<SPAN style="mso-spacerun: yes"> </SPAN>I’ve been <A class="" href="" target=_blanktrying to set up a contest</A> for a very long time.<>Knowing some of the features we have planned, and seeing what the community already has done in such a short time, I couldn’t be more proud of where we’ve ended up.<SPAN style="mso-spacerun: yes"> </SPAN>The excitement is just beginning.<SPAN style="mso-spacerun: yes"> </SPAN>The rest is up to you!</FONT></P><img src="" width="1" height="1">tmiller Game Studio Express Beta 2<P>Yay, Beta2!</P> <P> Check out our <A class="" title="XNA " href="" mce_href="">newly updated site on MSDN</A>.</P><img src="" width="1" height="1">tmiller Games? Of course!<P>You may remember me talking about <A href="">Koios Works</A> before. They wrote one of the first retail games using Managed DirectX. They have an entirely new 3D game out now called <A href="">Panzer Command: Operation Winterstorm</A></P> <P>Not only are they still using Managed DirectX and having a full 3d game going, they won <A href="">first prize</A> in a contest sponsored by Intel with a nice chunk of change reward!</P> <P>I wonder if and when they'll be moving over the Xna Game Studio Express? =)</P><img src="" width="1" height="1">tmiller the Xna Game Studio Express (Beta) now..<P>Just click this <A href="">link</A>.</P><img src="" width="1" height="1">tmiller | http://blogs.msdn.com/tmiller/atom.xml | crawl-002 | refinedweb | 4,534 | 71.04 |
If you’re wondering how to trigger the GPIO of a Raspberry Pi remotely with another device, then this tutorial is for you. Set your Pi to open an LED or even a door with an SMS from your android device. With IFTTT, the sky is the limit! this:
Alternatively, you can use a different trigger to set off an HTTP request with a webhook. Our sample project will trigger an LED connected to the Raspberry Pi using an SMS from an Android device. The Raspberry Pi will act as a simple webserver. We can specify a keyword in IFTTT wherein IFTTT will make an HTTP GET request to the Raspberry Pi server if the SMS matches the keyword given. Depending on the URL of the web request, the LED will turn on or turn off.
Don’t worry if it’s still quite blurry. You’ll understand how it works as we build the project. Let’s begin by setting up the hardware!
Connecting the Components
This project only involves 1 LED and a Raspberry Pi. Connect the long side of the LED to pin 3 and the LED’s shorter side to the ground. After this tutorial, you can replace the LED with any digitally controlled component or module and it should still work.
><<
Next, search for “Android SMS”.
In Android SMS, choose “New SMS sent matches search.” This trigger activates every time you send an SMS on your android device that matches a certain keyword you specify later. Note that any phone number should work as long as the keyword is in the message.
Next, specify the keyword you want to use to trigger the LED. I’ll use “LED ON” for mine.
Let’s proceed with the effect. Click “Add” after That.
Select make a web request.
Next, specify the type of web request you want to make. How do we know what URL to use for this part? This is where Bottle comes in.
Bottle is a micro web framework in Python. It is designed to be fast, simple and lightweight, and is distributed as a single file module with no dependencies other than the Python Standard Library. First, download Bottle into your raspberry Pi with:
wget
Next, copy the code below into any editor or IDE.
import RPi.GPIO as GPIO import time from bottle import route, run, template GPIO.setmode(GPIO.BOARD) GPIO.setup(3, GPIO.OUT) GPIO.output(3, False) @route('/LED/:ledstate') def ledtrigger(ledstate=0): if ledstate == '0': GPIO.output(3, False) return 'LED OFF' elif ledstate == '1': GPIO.output(3, True) return 'LED ON' run(host='0.0.0.0', port=8081)
The code is pretty straightforward. First, import the required libraries. Then, set the PIN of your LED as an output and set the default signal to LOW or False. Next, set up your web server by handling the requests to specific URLs.
@route('/LED/:ledstate') handles the requests in 192.168.100.24:8081/LED/1 or 0.
Note that the preceding IP address is the IP address of your Raspberry Pi. If you don’t know your Pi’s IP address, enter
ifconfig in the terminal and search for the wlan0 section. The address that comes after inet is your IP address.
We then use if statements to control the status of the LED. The return value is printed to your web browser once it is triggered.
You can test the URLs on any web browser from any device connected to your home network. For instance, if I visit 192.168.100.24/8081/1, the LED turns on, and I get the message “LED ON” in my web browser. If I enter 192.168.100.24/8081/0, the LED turns off.
Now, go back to IFTTT and enter the URL into your webhooks web request. One downside of this configuration is that you will need two applets —one for turning the LED on and another for turning it off. For the turn-off applet, you can use LED OFF as the keyword.
Love this Website. Interesting Article. Lots of technical stuff
Will this wort with iPhone/ipad | https://www.circuitbasics.com/how-to-control-the-raspberry-pi-gpio-using-ifttt/ | CC-MAIN-2021-39 | refinedweb | 693 | 76.32 |
Enumerations?
Table of Contents
Enumerations, or enum's for short are a handy way of representing a fixed set (at compile time) of related states wrapped up as an object type. All values/states of an enum are considered public. Values may not have accessibility modifiers applied to them.
When declaring an
enum, a name and a comma separated list of values are required. The values do not have to be capitalized, but it is often conventional to do so. For example:
enum Color{ RED, GREEN, BLUE }
Enums can be used as follows:
aColor Color = Color.RED anotherColor = Color.BLUE
Only one instance of an enum is created per isolate, thus the following holds true:
enum Color{ RED, GREEN, BLUE } c1 = Color.RED c2 = Color.RED assert c1 &== c2 //c1 and c2 resolve to the same object
In other words, the same enum object is created only once and shared.
Enum items may have state associated with them, this can be initialized as follows:
enum Color(hexcode int){ RED(0xFF0000), GREEN(0x00FF00), BLUE(0x0000FF) }
We're able to assign state to enums because the individual entries are in fact subclasses of the enum holding them (
Color in this example). This allows us to write code like this:
enum MyEnum(~a int, ~b int){ ONE(9){ this(a int){ super(a,8) } }, TWO(22, 33) override toString() => "[{a} {b}]" } res = [MyEnum.ONE MyEnum.TWO] //res == "[[9 8] [22 33]]"
Given that the enum and enum items are themselves classes, we are afforded access to the likes of abstract methods etc. we're able to write code like the following:
public enum Operation { PLUS { public def eval(x double, y double) { x + y; } }, MINUS { public def eval(x double, y double) { x - y; } }, TIMES { public def eval(x double, y double) { x * y; } }, DIVIDE { public def eval(x double, y double) { x / y; } } def eval(x double, y double) double } res = Operation.PLUS.eval(1, 1) //res == 2
Note that although enums can have state, it is not recommended that this state be mutable given that enum items are shared per isolate.
Enums specify two extremely useful methods:
valueOf. This method can be called on an enum type and enables us to find the item instance for a specified name as a String:
enum Nums{ONE, TWO, THREE, FOUR} item Nums = Nums.valueOf('TWO')
A variant of this method is to use the one exposed on the class Enum as follows:
from java.lang import Enum enum Nums{ONE, TWO, THREE, FOUR} item = Enum.valueOf(Nums.class, 'TWO')
values. This method is useful for listing all items of an enum type. For example, we could rewrite our previous example as:
enum MyEnum(~a int, ~b int){ ONE(9){ this(a int){ super(a,8) } }, TWO(22, 33) override toString() => "[{a} {b}]" } res = MyEnum.values() //res == "[[9 8] [22 33]]" | https://concurnas.com/docs/enums.html | CC-MAIN-2022-21 | refinedweb | 477 | 69.21 |
?
A bootloader is a program which is able to load another program (the application program). Typically the bootloader program is not changed, and is kept in the microcontroller. That way the bootloader can load again and again a different program.
💡 Architecturally there can be a ‘mini’ or ‘micro’ bootloader which can load the ‘real’ bootloader. E.g. the OpenSDA bootloader on the Freedom boards have this capability.
The Bootloader Code and the Bootloader Vectors are programmed into a new part (e.g. with a debugger or a standalone flash programmer (e.g. with USBDM). Then the Bootloader can be used to load or change the Application Code and Application Vectors. With this, the Bootloader remains the same, while the Application can be updated.
Bootloader Sequence
A typical bootloader is doing something like this
- The bootloader decides at startup if it should enter bootloader mode or if it shall run the application. Typically this is decided with a button or jumper set (or removed). If it shall run the application, the bootloader calls the application and we are done :-).
- Otherwise, the bootloader will reprogram the application with a new file. S19 (S-Record) files are often used for this, as they are easy to parse and every tool chain can produce them.
- The bootloader needs to use a communication channel to read that file. That can be RS-232, USB or an SD card file system (e.g. FatFS).
- Using that file, the bootloader programs the flash memory. Special consideration has to be taken into account for the application vector table. As the bootloader runs out of reset, it is using its own (default) vector table, and needs to relocate the vector table if running the application.
💡 It would be possible to use the reset button on the FRDM-KL25Z board as a user button (see this post). To keep things simple, I’m using a dedicated bootloader push button on PTB8.
So writing a bootloader requires the following parts:
- Communication Channel: File I/O or any other means to read the Application File.
- File Reader: A reader which reads the Application File.
- Flash Programming: to program the Application.
- Vector Redirection: To switch between the Bootloader and Application Vector Table.
- User Interface: Showing status and information to the user, and to switch between application and bootloader mode at system startup.
Processor Expert comes with Flash Programming and Communication components (USB, SCI, I2C, …) installed. I have a Shell user interface already, plus an S19 file reader component created. Combining this with my other components should enable me to make a bootloader :-).
Flash Memory of the Bootloader
To make sure the bootloader gets linked only into its space, I reduce the FLASH memory for it. With the settings below I limit the FLASH memory from 0x0000 (vector table) up to 0x3FFF. That means my application memory area starts at 0x4000.
So I change the available flash for the bootloader in the CPU properties, and cut the available FLASH size on the KL25Z128 from 0x1FBF0 (~128 KByte) in the Build Options tab to 0x3FB0:
With this, the bootloader occupies the space from address 0x0000 (vector table) up to 0x3FFF.
Flash Protection
My bootloader resides in the first lower flash pages. To avoid that it might get destroyed and overwritten by the application, I protect the bootloader flash blocks. There is a setting in the CPU component properties where I can protect 4 KByte regions:
Terminal Program
For my bootloader I need a way to send a file with a terminal program. As my serial connection has only Tx and Rx, but no RTS/CTC lines for flow control, it is useful if the terminal program either implements software flow control (XON/XOFF), or a delay value for sending a file.
After some searching the internet, I have found an open source terminal program which exactly can do this:
It supports sending a file with a delay (shown above with 1 ms delay), and supports XON and XOFF. I used it successfully with my bootloader.
💡 Using a zero delay did not work in all cases. Not yet sure why. What worked was sending a file with a 1 ms delay setting.
Bootloader Shell
The bootloader features a shell with following commands:
-------------------------------------------------------------- FRDM Shell Bootloader -------------------------------------------------------------- CLS1 ; Group of CLS1 commands help|status ; Print help or status information BL ; Group of Bootloader commands help|status ; Print help or status information erase ; Erase application flash blocks restart ; Restart application with jump to reset vector load s19 ; Load S19 file
The ‘BL status’ command shows the application flash range, and the content of the application vector table (more about this later):
App Flash : 0x00004000..0x0001FFFF @0x00004000: 0xFFFFFFFF 0xFFFFFFFF
With ‘BL restart’ it starts the user application (if any), and with ‘BL erase’ the application flash can be erased:
CMD> Erasing application flash blocks...done!
Bootloading an Application
With ‘BL load s19’ a new application file can be loaded. It will first erase the application flash blocks, and then waits for the S19. To send the file, I use the ‘Send File’ button:
It writes then the address of each S19 line programmed to the shell console:
CMD> Erasing application flash blocks...done! Waiting for the S19 file... S0 address 0x00000000 S1 address 0x00008000 S1 address 0x00008010 ... S1 address 0x00009420 S1 address 0x00009430 S1 address 0x00009440 S9 address 0x00009025 done! CMD>
Bootloader Details
If I enter ‘BL Load S19’, it executes the function BL_LoadS19() in Bootloader.c:
static uint8_t BL_LoadS19(CLS1_ConstStdIOType *io) { unsigned char buf[16]; uint8_t res = ERR_OK; /* first, erase flash */ if (BL_EraseAppFlash(io)!=ERR_OK) { return ERR_FAILED; } /* load S19 file */ CLS1_SendStr((unsigned char*)"Waiting for the S19 file...", io->stdOut); parserInfo.GetCharIterator = GetChar; parserInfo.voidP = (void*)io; parserInfo.S19Flash = BL_onS19Flash; parserInfo.status = S19_FILE_STATUS_NOT_STARTED; parserInfo.currType = 0; parserInfo.currAddress = 0; parserInfo.codeSize = 0; parserInfo.codeBuf = codeBuf; parserInfo.codeBufSize = sizeof(codeBuf); while (AS1_GetCharsInRxBuf()>0) { /* clear any pending characters in rx buffer */ AS1_ClearRxBuf(); WAIT1_Waitms(100); } do { if (S19_ParseLine(&parserInfo)!=ERR_OK) { CLS1_SendStr((unsigned char*)"ERROR!\r\nFailed at address 0x", io->stdErr); buf[0] = '\0'; UTIL1_strcatNum32Hex(buf, sizeof(buf), parserInfo.currAddress); CLS1_SendStr(buf, io->stdErr); CLS1_SendStr((unsigned char*)"\r\n", io->stdErr); res = ERR_FAILED; break; } else { CLS1_SendStr((unsigned char*)"\r\nS", io->stdOut); buf[0] = parserInfo.currType; buf[1] = '\0'; CLS1_SendStr(buf, io->stdOut); CLS1_SendStr((unsigned char*)" address 0x", io->stdOut); buf[0] = '\0'; UTIL1_strcatNum32Hex(buf, sizeof(buf), parserInfo.currAddress); CLS1_SendStr(buf, io->stdOut); } if (parserInfo.currType=='7' || parserInfo.currType=='8' || parserInfo.currType=='9') { /* end of records */ break; } } while (1); if (res==ERR_OK) { CLS1_SendStr((unsigned char*)"\r\ndone!\r\n", io->stdOut); } else { while (AS1_GetCharsInRxBuf()>0) {/* clear buffer */ AS1_ClearRxBuf(); WAIT1_Waitms(100); } CLS1_SendStr((unsigned char*)"\r\nfailed!\r\n", io->stdOut); /* erase flash again to be sure we do not have illegal application image */ if (BL_EraseAppFlash(io)!=ERR_OK) { res = ERR_FAILED; } } return res; }
It first fills a callback structure of type S19_ParserStruct:
typedef struct S19_ParserStruct { uint8_t (*GetCharIterator)(uint8_t*, void*); /* character stream iterator */ void *voidP; /* void pointer passed to iterator function */ uint8_t (*S19Flash)(struct S19_ParserStruct*); /* called for each S19 line to be flashed */ /* the following fields will be used by the iterator */ S19_FileStatus status; /* current status of the parser */ uint8_t currType; /* current S19 record, e.g. 1 for S1 */ uint32_t currAddress; /* current code address of S19 record */ uint16_t codeSize; /* size of code in bytes in code buffer */ uint8_t *codeBuf; /* code bufffer */ uint16_t codeBufSize; /* total size of code buffer, in bytes */ } S19_ParserStruct;
That structure contains a callbacks to read from the input stream:
static uint8_t GetChar(uint8_t *data, void *q) { CLS1_ConstStdIOType *io; io = (CLS1_ConstStdIOType*)q; if (!io->keyPressed()) { #if USE_XON_XOFF SendXONOFF(io, XON); #endif while(!io->keyPressed()) { /* wait until there is something in the input buffer */ } #if USE_XON_XOFF SendXONOFF(io, XOFF); #endif } io->stdIn(data); /* read character */ if (*data=='\0') { /* end of input? */ return ERR_RXEMPTY; } return ERR_OK; }
Parsing of the S19 file is done in S19_ParesLine() which is implemented in a Processor Expert component which I already used for another bootloader project:
This parser is calling my callback
BL_onS19Flash() for every S19 line:
static uint8_t BL_onS19Flash(S19_ParserStruct *info) { uint8_t res = ERR_OK; switch (info->currType) { case '1': case '2': case '3': if (!BL_ValidAppAddress(info->currAddress)) { info->status = S19_FILE_INVALID_ADDRESS; res = ERR_FAILED; } else { /* Write buffered data to Flash */ if (BL_Flash_Prog(info->currAddress, info->codeBuf, info->codeSize) != ERR_OK) { info->status = S19_FILE_FLASH_FAILED; res = ERR_FAILED; } } break; case '7': case '8': case '9': /* S7, S8 or S9 mark the end of the block/s-record file */ break; case '0': case '4': case '5': case '6': default: break; } /* switch */ return res; }
Of interest are the S1, S2 and S3 records as they contain the code. With
BL_ValidAppAddress() it checks if the address is within the application FLASH memory range:
/*! * \brief Determines if the address is a valid address for the application (outside the bootloader) * \param addr Address to check * \return TRUE if an application memory address, FALSE otherwise */ static bool BL_ValidAppAddress(dword addr) { return ((addr>=MIN_APP_FLASH_ADDRESS) && (addr<=MAX_APP_FLASH_ADDRESS)); /* must be in application space */ }
If things are ok, it flashes the memory block:
/*! * \brief Performs flash programming * \param flash_addr Destination address for programming. * \param data_addr Pointer to data. * \param nofDataBytes Number of data bytes. * \return ERR_OK if everything was ok, ERR_FAILED otherwise. */ static byte BL_Flash_Prog(dword flash_addr, uint8_t *data_addr, uint16_t nofDataBytes) { /* only flash into application space. Everything else will be ignored */ if(BL_ValidAppAddress(flash_addr)) { if (IFsh1_SetBlockFlash((IFsh1_TDataAddress)data_addr, flash_addr, nofDataBytes) != ERR_OK) { return ERR_FAILED; /* flash programming failed */ } } return ERR_OK; }
The Flash Programming itself is performed by the IntFLASH Processor Expert components:
This component is used for erasing too:
/*! * \brief Erases all unprotected pages of flash * \return ERR_OK if everything is ok; ERR_FAILED otherwise */ static byte BL_EraseApplicationFlash(void) { dword addr; /* erase application flash pages */ for(addr=MIN_APP_FLASH_ADDRESS;addr<=MAX_APP_FLASH_ADDRESS;addr+=FLASH_PAGE_SIZE) { if(IFsh1_EraseSector(addr) != ERR_OK) { /* Error Erasing Flash */ return ERR_FAILED; } } return ERR_OK; }
Bootloader or Not, that’s the Question
One important piece is still missing: the bootloader needs to decide at startup if it shall run the Bootloader or the application. For this we need to have a decision criteria, which is typically a jumper or a push button to be pressed at power up to enter bootloader mode.
In this bootloader this is performed by
BL_CheckForUserApp():
/*! * \brief This method is called during startup! It decides if we enter bootloader mode or if we run the application. */ void BL_CheckForUserApp(void) { uint32_t startup; /* assuming 32bit function pointers */ startup = ((uint32_t*)APP_FLASH_VECTOR_START)[1]; /* this is the reset vector (__startup function) */ if (startup!=-1 && !BL_CheckBootloaderMode()) { /* we do have a valid application vector? -1/0xfffffff would mean flash erased */ ((void(*)(void))startup)(); /* Jump to application startup code */ } }
The function checks if the ‘startup’ function in the vector table (index 1) is valid or not. If the application flash has been erased, it will read -1 (or 0xffffffff). So if we have an application present and the user does not want to run the bootloader, we jump to the application startup.
Below is the code to decide if the user is pressing the button to enter the startup code:
static bool BL_CheckBootloaderMode(void) { /* let's check if the user presses the BTLD switch. Need to configure the pin first */ /* PTB8 as input */ /* clock all port pins */ SIM_SCGC5 |= SIM_SCGC5_PORTA_MASK | SIM_SCGC5_PORTB_MASK | SIM_SCGC5_PORTC_MASK | SIM_SCGC5_PORTD_MASK | SIM_SCGC5_PORTE_MASK; /* Configure pin as input */ (void)BitIoLdd3_Init(NULL); /* initialize the port pin */ if (!BL_SW_GetVal()) { /* button pressed (has pull-up!) */ WAIT1_Waitms(50); /* wait to debounce */ if (!BL_SW_GetVal()) { /* still pressed */ return TRUE; /* go into bootloader mode */ } } /* BTLD switch not pressed, and we have a valid user application vector */ return FALSE; /* do not enter bootloader mode */ }
I’m using
BitIOLdd3_Init() to initialize my port pin, which is part of the BitIO component for the push button:
💡 When creating a BitIO component for Kinetis, Processor Expert automatically creates a BitIO_LDD component for it. As I do not have control over the name of that BitIO_LDD, I need to use in my bootloader whatever Processor Expert has assigned as name.
I’m using PTB8 of the Freedom board, and have it connected to a break-out board (pull-up to 3.3V if button is not pressed, GND if button is pressed):
You might wonder why I have to initialize it, as this is usually done automatically by
PE_low_level_init() in
main()? The reasons is: I need to do this *before*
main() gets called, very early in the
startup() code. And that’s the reason as well why I need to set the
SIM_SCGC5 register on Kinetis to clock the peripheral.
Inside the CPU component properties, there is a Build option setting where I can add my own code to be inserted as part of the system startup:
To make sure it has the proper declaration, I add the header file too:
These code snippets get added to the
__init_hardware() function which is called from the bootloader startup code:
This completes the Bootloader itself. Next topic: what to do in the application to be loaded…
Application Memory Map
As shown above: the bootloader is sitting in a part of the memory which is not available by the application. So I need to make sure that application does not overlap with the FLASH area of the bootloader. My bootloader starts at address 0x0000 and ends at 0x3FFF:
While the application can be above 0x4000. These numbers are used in Bootloader.c:
/* application flash area */ #define MIN_APP_FLASH_ADDRESS 0x4000 /* start of application flash area */ #define MAX_APP_FLASH_ADDRESS 0x1FFFF /* end of application flash */ #define APP_FLASH_VECTOR_START 0x4000 /* application vector table in flash */ #define APP_FLASH_VECTOR_SIZE 0xc0 /* size of vector table */
The application just needs to stay outside the FLASH used by the bootloader:
To make this happen, I need to change the addresses for
m_interrupts and
m_text in the CPU build options:
That’s it 🙂
💡 As for the ARM Cortex-M4/0+ do not need to copy the vector table in the bootloader to a different location, I can debug the application easily without the bootloader.
S-Record (S19) Application File Generation
The bootloader loads S19 or S-Records. This post explains how to create S19 files for Kinetis and GNU gcc.
Code Size
The bootloader is compiled with gcc for the FRDM-KL25Z board. Without optimization (-O0), it needs 13 KByte of FLASH. But optimized for size, it needs only 8 KByte 🙂 :
text data bss dec hex filename 8024 24 2396 10444 28cc FRDM_Bootloader.elf
Summary
With this, I have a simple serial bootloader for only 8 KByte of code. The bootloader project and sources are are available on GitHub here.
And I have several ideas for extensions:
- Using a memory stick to load the appliation file (USB MSD Host).
- Using a SD-Card interface with FatFS.
- Using a USB MSD device to load the file.
- Performing vector auto-relocation: the bootloader should detect the vector table at address 0x00 of the application file and automatically relocate it to another location in FLASH. That way I can debug the Application without change of the vector table.
- Making sure it runs on other boards and microcontroller families.
- Creating a special component for the bootloader.
While the bootloader only needs 8 KByte for now, I keep the reserved range at 16 KByte, just to have room for future extensions.
Happy Bootloading 🙂
hello Erich,
i download your project from github but when i run that project on my CW IDE threre are some error like
../Sources/ProcessorExpert.c:32:17: fatal error: Cpu.h: No such file or directory
../Sources/Events.c:31:17: fatal error: Cpu.h: No such file or directory
../Sources/Bootloader.h:11:22: fatal error: PE_Types.h: No such file or directory
../Sources/Shell.c:10:18: fatal error: CLS1.h: No such file or directory
mingw32-make: *** [Sources/Bootloader.o] Error 1
mingw32-make: *** Waiting for unfinished jobs….
mingw32-make: *** [Sources/Events.o] Error 1
mingw32-make: *** [Sources/Shell.o] Error 1
mingw32-make: *** [Sources/ProcessorExpert.o] Error 1
how i can solve this error?
please suggest me some solution for that.
The error messsages indicate that the code for the Shell and other Processor Expert components have not been generated (or cannot be found. Have you generated code already with Processor Expert?
I hope this helps,
Erich
hello Erich,
i just download your project and import in CW IDE. i think this project has already generated in PE.
and one thing more as you said error show me that some component is not available in this version of PE.
thanks.
The projects on GitHub do not have the generated code uploaded (does not make any sense), so you have to generate code. And you have to install the latest McuOnEclipse components first from SourceForge, see and
I hope this helps,
Erich | https://mcuoneclipse.com/2013/04/28/serial-bootloader-for-the-freedom-board-with-processor-expert/ | CC-MAIN-2017-47 | refinedweb | 2,750 | 54.12 |
DispatchAction in Struts
By: GrenfelDispatch <html:link>.
One way of dealing with this situation is to create three different Actions – DispatchAction. Let us assume that CreditAppAction, a sub-class of DispatchAction is used to implement the above-mentioned presentation logic. It has four methods – reject(), approve() and addComment(). The CreditAppAction class definition is shown in Listing below.
You might be wondering why all the three methods take the same
four arguments – ActionMapping, ActionForm, HttpServletRequest,
HttpServletResponse. Don’t worry, you will find the answer
soon.
For a moment, look at the four URLs submitted when the bank staff perform the three actions as mentioned before. They would look something like this.
Figure: –’t. You will have to tell it
explicitly. And this is done in the ActionMapping for /screen-credit-app.do. The
ActionMapping for the URL path “/screen-credit-app.do†is declared in struts-config.xml
as shown
in Listing below.
The section highlighted in bold is what makes this Action different from the rest. The type is declared as mybank.example.list.CreditAppAction – you already knew that. Now, let us look at the second attribute in bold. This attribute, named parameter has the value “stepâ€. Notice that one of the HTTP request parameter in the four URLs is also named “stepâ€..
//Example DispatchAction
public class CreditAppAction extends DispatchAction {
public ActionForward reject(ActionMapping mapping,
ActionForm form, HttpServletRequest request,
HttpServletResponse response) throws Exception
{
String id = request.getParameter(“idâ€);
// Logic to reject the application with the above id
... ... ...
mapping.findForward(“reject-successâ€);
}
public ActionForward approve(ActionMapping mapping,
ActionForm form, HttpServletRequest request,
HttpServletResponse response) throws Exception
{
String id = request.getParameter(“idâ€);
// Logic to approve the application with the above id
... ... ...
mapping.findForward(“approve-successâ€);
}
public ActionForward addComment(ActionMapping mapping,
ActionForm form, HttpServletRequest request,
HttpServletResponse response) throws Exception
{
String id = request.getParameter(“idâ€);
// Logic to view application details for the above id
... ... ...
mapping.findForward(“viewDetailsâ€);
}
...
...
}\
DispatchAction can be confusing in the beginning. But don’t worry. Follow these steps to setup the DispatchAction and familiarize yourself with the steps.
- Create a subclass of DispatchAction.
- Identify the related actions and create a method for each of the logical actions. Verify that the methods have the fixed method signature shown earlier.
- Identify the request parameter that will uniquely identify all actions.
- Define an ActionMapping for this subclass of DispatchAction and assign the previously identified request parameter as the value of the parameter attribute.
- Set your JSP so that the previously identified request parameter (Step 3) takes on DispatchAction subclass method names as its values.
//Action Mapping the image buttons). Just name all the buttons same. For instance,
<html:submit property=â€stepâ€>Update</html:submit>
<html:submit property=â€stepâ€>Delete</html:submit>
and so on. Image buttons is a different ball game. Image button usage for form submission and DispatchAction are exclusive. You have to choose one. In the above example we used the DispatchAction and used methods that has ActionForm as one of its arguments. As you learnt in the last chapter, an ActionForm always existed in conjunction with the Action. In the Listing above,.. this tutorial is very nice but if it contains more
View Tutorial By: JOHN.J at 2007-11-29 23:56:06
2. What then, is the purpose of the unspecified() met
View Tutorial By: Bryan at 2011-01-12 09:33:27
3. Really nice and concise article.
Thanks a l
View Tutorial By: vishwajeet at 2011-02-23 22:38:14
4. Very nice to understand the concept.It also help t
View Tutorial By: Hrusikesh jena at 2011-05-26 00:23:28
5. Hi Friends,
I got a very interestin
View Tutorial By: Rohit Kapur at 2011-07-27 20:56:57
6. Following question to this topic:
- when i
View Tutorial By: Alistair at 2012-01-14 10:59:52
7. nice tutorial,but if provided with the complete ex
View Tutorial By: Prasant Kumar Sahu at 2013-01-07 13:17:20 | https://java-samples.com/showtutorial.php?tutorialid=581 | CC-MAIN-2022-33 | refinedweb | 676 | 51.95 |
from PIL import Image im = Image.open('dead_parrot.jpg') # Can be many different formats. pix = im.load() print im.size # Get the width and hight of the image for iterating over print pix[x,y] # Get the RGBA Value of the a pixel of an image pix[x,y] = value # Set the RGBA Value of the image (tuple) im.save('alive_parrot.png') # Save the modified pixels as .png
Here is what the above code is Doing:
1. Opening an image from the hard drive
2. Loading the image into memory
3. Getting the width and height of the image
4. Getting the RGBA value of a pixel
5. Setting the RGBA value of a pixel
6. Saving the image to the hard drive | https://myedukit.com/coders/python-examples/python-get-pixel-color/ | CC-MAIN-2022-21 | refinedweb | 123 | 70.5 |
04 March 2011 23:25 [Source: ICIS news]
HOUSTON (ICIS)--?xml:namespace>
The February range widened at the high end by 9 cents/lb ($198/tonne, €143/tonne), as assessed by ICIS, from 102-103 cents/lb in January.
Participants described the market in February as “muddy”, and several sources expected the market to calm during March, noting that no new price initiatives had yet surfaced.
Buyers said February price-hike initiatives largely began in a range of 10-12 cents/lb, but moderated slightly downward among some producers. One producer sought no increase for the period, buyers said, widening the contract range.
Buyers suggested some producers were attempting to sell off older inventories at the low end of the range, while other sellers pursued higher pricing driven by some stronger feedstock values.
One buyer said there were too many layers of inventory to define the market price more narrowly. “The range is so crazy because one of the producers has fallen way behind on its increases,” another buyer said.
US MIBK suppliers include Dow Chemical, Sasol, Eastman, Celanese and Haltermann.
( | http://www.icis.com/Articles/2011/03/04/9441161/us-mibk-contract-range-moves-higher-for-feb-on-supply-feedstocks.html | CC-MAIN-2014-42 | refinedweb | 181 | 52.29 |
We are about to switch to a new forum software. Until then we have removed the registration on this forum.
Hello,
This is my first post on this forum, and i'm in need of your help. I'm working on a project with an android that has a set of animatronic eyes and a facetracking feature. I'm using the opencv library and Processing 3.3.6 to execute the tracking, and an arduino t control the eyes. At the moment i've created a script where the eyes are following only ONE face, but sometimes the eyes 'jump' when a new face enters the webcam. I would like to avoid this, so my reasoning was to always get the biggest width of the faces detected and send the x and y coordinates to the arduino. I found similar questions on the forum to 'Get the largest element from an array' but although i understand the logic my sketch keeps on outputting all sets of x and y coordinates detected. A supplementary note is in place that i work heuristically with code and have very basic knowledge of programming languages. Any push in the right direction is highly appreciated. Below the processing code:
import gab.opencv.*; import processing.video.*; import java.awt.*; import processing.serial.*; Capture video; OpenCV opencv; Serial myPort; // Create object from Serial class int newXpos, newYpos; //These variables hold the x and y location for the middle of the detected face int midFaceX = 0; int midFaceY = 0; void setup() { size(640, 480); video = new Capture(this, 640/2, 480/2); opencv = new OpenCV(this, 640/2, 480/2); opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE); //println(Serial.list()); // List COM-ports (Use this to figure out which port the Arduino is connected to) String portName = Serial.list()[1]; //select first com-port from the list (change the number in the [] if your sketch fails to connect to the Arduino) myPort = new Serial(this, portName, 19200); //Baud rate is set to 19200 to match the Arduino baud rate. video.start(); } void draw() { scale(2); opencv.loadImage(video); image(video, 0, 0 ); noFill(); stroke(0, 255, 0, 40); strokeWeight(3); Rectangle[] faces = opencv.detect(); int maxValueFace = 0; int maxIndex = -1; for (int i = 0; i < faces.length; i++ ) { if (faces[i].width > maxValueFace) { maxIndex = i; maxValueFace = faces[i].width; //println(maxValueFace); rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height); // midFaceX = faces[i].x + (faces[i].width/2); // middle of the face midFaceY = faces[i].y + (faces[i].height/2); // middle of the face float xpos = map(midFaceX, 0, width, 90, 120); //maps range of servos L->R float ypos = map(midFaceY, 0, height, 90, 120); //maps range of servos U->D int newXpos = (int)xpos; //converts position X float into integer int newYpos = (int)ypos; //converts position Y float into integer myPort.write(newXpos+"x"); // send X coordinate to Arduino myPort.write(newYpos+"y"); // send Y coordinate to Arduino println(midFaceX + "," + midFaceY); } } } void captureEvent(Capture c) { c.read(); }
Answers
Use a separate loop for finding the max, remember the index.
The block starting at line 51 then needs moving outside the loop and should just use the remembered index.
Thank you for your response, i'm struggling my way through it bit by bit. I've placed from 51 line onward in a separate loop like this:
Still no success. If I comment out from line 51 onward and print the 'maxIndex' it still prints multiple numbers when multiple faces are detected. Is there something incorrect in the way i've set up the max declaration?
lines 8 to 20 are STILL within the loop (which runs from line 1 to the end) | https://forum.processing.org/two/discussion/25492/get-biggest-face-from-opencv | CC-MAIN-2019-43 | refinedweb | 617 | 63.39 |
.
If you use the JumpStart installation method, the process might use a system identification configuration (sysidcfg) file. This file is used to generate a specific Xsun configuration file for a system. The Xsun configuration portion of a sysidcfg file is created by the command kdmconfig -d filename. However, on systems that use the default Xorg server, the command does not create a file with any Xorg configuration information. Consequently, you cannot use the JumpStart method on these systems without some additional preparatory steps.
Workaround: Before using the JumpStart installation method on a system that uses the Xorg server, perform the following steps.
Prepare a specific xorg.conf file to be used on the system. Store this file in the JumpStart directory of the JumpStart server.
Create an xorg.conf file with one of these commands:
/usr/X11/bin/Xorg -configure
/usr/X11/bin/xorgconfig
/usr/X11/bin/xorgcfg
Create a finish script that copies the xorg.conf file to the /etc/X11 directory in the system that you want to install. For example, the script might include the following line:
In the custom JumpStart rules file, include the finish script in the rules entry for systems of the type that you want to install.
Perform the custom JumpStart installation.
For instructions about how to perform a custom JumpStart installation, see the Solaris 10 10/08 Installation Guide: Custom JumpStart and Advanced Installations. Chapter 4 includes information about the JumpStart rules file, while Chapter 5 contains a section about finish scripts.
The Removable Media auto run capability in the CDE desktop environment has been temporarily removed from the Solaris 10 software.
Workaround: To use the auto run function for a CD-ROM or another removable media volume, you must do one of the following:
Run the volstart program from the top level of the removable media file system.
Follow the instructions that are included with the CD for access from outside of CDE.
After.
The following file system bugs apply to the Solaris 10 release.
Do not take the primary disk offline in a mirrored ZFS root configuration. The system will not boot from a disk taken offline in mirrored root-pool configuration.
Workaround: To detach a mirrored root disk for replacement or take it offline, boot from another mirrored disk in the pool. Choose one of the following methods:
Bring the primary disk in the mirrored ZFS root pool back online. For example:
If the primary disk has failed or needs to be replaced, boot from another disk in the pool.:
ata driver timeouts might occur during system boot on Intel multiprocessor systems. These timeouts occur when the root device is on a drive with the HBA controller bound to the legacy ata driver. These timeouts lead to a momentary hang,:
If you are performing a system boot, proceed to Step 10, otherwise install the Solaris 10 10/08 software.
At the end of the installation, reboot the system. Repeat steps 1 through 7.
To make this change permanent so that the above steps do not need to be repeated for subsequent boots, do the following:
Become the super
If you use the fdisk -E command to modify a disk that is used by a ZFS storage pool, the pool becomes unusable and might cause an I/O failure or system panic.
Workaround:
Do not use the fdisk command to modify a disk that is used by a ZFS storage pool. If you need to access a disk that is used by a ZFS storage pool, use the format utility. In general, disks that are in use by file systems should not be modified.
The/08 release to a system that runs a pre-Solaris 10 6/06 release, which does not have the embedded_su patch, the ZFS Administration application wizards are not fully functional.
If you attempt to run the ZFS Administration application on a system without the embedded_su patch, you will only be able to browse your ZFS configuration. The following error message is displayed: you upgrade an NFSv4 server from Solaris Express 6/05 to Solaris Express 7/05 or later (including all Solaris 10 updates), your programs might encounter EACCES errors. Furthermore, directories might erroneously appear to be empty.
To prevent these errors, unmount and then remount the client file systems. In case unmounting fails, you might need to forcibly unmount the file system by using umount -f. Alternatively, you can also reboot the client.
NFSv4 Access Control List (ACL) functions might work improperly if clients and servers in the network are installed with different previous Solaris 10 releases. The affected ACL functions and command-line utilities that use these functions are the following:
acl()
facl()
getfacl
setfacl
For more information about these functions and utilities, see their respective man pages.
For example, errors might be observed in a network that includes the following configuration:
A client that is running..
The mkfs command might be unable to create a file system on disks with a certain disk geometry and whose sizes are greater than 8 Gbytes. The derived cylinder group size is too large for the 1-Kbyte fragment. The large size of the cylinder group means that the excess metadata cannot be accommodated in a block.
The following error message is displayed:
Workaround: Use the newfs command instead. Or, assign a larger fragment size, such as 4096, when you use the mkfs command..
If you use the smosservice command to add OS services to a UFS file system, a message that there is insufficient disk space available is displayed. This error is specific to UFS file systems on EFI-labeled disks.
Workaround: Complete the following workaround.
Apply the SMI VTOC disk label.
Re-create the file system.
Rerun the smosservice command.
The).
The following issues involve the kernel debugger.
A system that is running the Solaris kernel debugger to debug a live system might loop with incomplete error messages. This loop occurs when the OpenBoot PROM's master CPU is changed. A system reset restores the system to operation. However, the traces of the original failure are lost. Consequently, you cannot perform a diagnosis of the fatal reset.
Workaround: When the system is at the PROM level, the OpenBoot's ok prompt is displayed. In a system with multiple CPUs, the ok prompt is preceded by a number that is enclosed in curly braces. This number indicates the active CPU in the system. To run your debug session while at the PROM level, use the following steps.
Raise pil to f by typing the following command: sort capability in the European UTF-8 locales does not work properly.
Workaround: Before you attempt to sort in a FIGGS UTF-8 locale, set the LC_COLLATE variable to the ISO–1 equivalent.
Then start sorting.
The following networking bugs apply to the Solaris 10 release.
The Broadcom NetXtreme II 5709 (BCM5709) chipset is not supported in the Solaris 10 10/08: Solaris Containers-Resource Management and Solaris Zones.
If all the network interfaces in the IPMP group fail, a zone does not boot if it has an IP address that is part of the IPMP group.
The following example illustrates the result if you attempt to boot the zone.
Workaround: Repair at least one network interface in the group.
Internet SCSI (iSCSI) targets might report cyclic redundancy check (CRC) errors if DataDigests are enabled. User applications that update input/output buffers after transmitting to the iSCSI initiator might cause a miscalculation of the CRC. When the target responds with a CRC error, the iSCSI Initiator retransmits the data with the correct DataDigest CRC. Data integrity is maintained. However, data transfer performance is affected. No error message is displayed.
Workaround: Do not use the DataDigest option.
If.
The following security issues applies to the Solaris 10 release.
After the account management PAM module for LDAP (pam_ldap) is enabled, users must have passwords to log in to the system. Consequently, nonpassword-based logins fail, including those logins that use the following tools:
Remote shell (rsh)
Remote login (rlogin)
Secure shell (ssh)
Workaround: None..
If following section describes behavior changes in certain commands and standards in Solaris 10 OS..
The command ping -v fails when the command is applied to addresses that use Internet Protocol version 6 (IPv6). The following error message is displayed:
Workaround: None. To obtain the same ICMP packet information that ping -v provides, use the snoop command.
The following Solaris Volume Manager bugs apply to the Solaris 10 release.
If you have a Solaris Volume Manager mirrored root (/) file system in which the file system does not start on cylinder 0, all submirrors you attach must also not start on cylinder 0.
If you attempt to attach a submirror starting on cylinder 0 to a mirror in which the original submirror does not start on cylinder 0, the following error message is displayed:
Workaround: Choose one of the following workarounds:
Ensure that both the root file system and the volume for the other submirror start on cylinder 0.
Ensure that both the root file system and the volume for the other submirror do not start on cylinder 0.
By default, the JumpStart installation process starts swap at cylinder 0 and the root (/) file system somewhere else on the disk. Common system administration practice is to start slice 0 at cylinder 0. Mirroring a default JumpStart installation with root on slice 0, but not cylinder 0, to a typical secondary disk with slice 0 that starts at cylinder 0, can cause problems. This mirroring results in an error message when you attempt to attach the second submirror. For more information about the default behavior of Solaris installation programs, see the Solaris:
You can access the disk at the new location during this time. However, you might need to use the old logical device name to access the slice.
Workaround: Physically move the drive back to its original slot.
If you remove and replace a physical disk from the system, and then use the metarecover -p -d command to write the appropriate soft partition specific information to the disk, an open failure results. The command does not update the metadevice database namespace to reflect the change in disk device identification. The condition causes an open failure for each such soft partition that is built on top of the disk. The following message is displayed:
Workaround: Create a soft partition on the new disk instead of using the metarecover command to recover the soft partition.
If the soft partition is part of a mirror or RAID 5, use the metareplace command without the -e option to replace the old soft partition with the new soft partition.
This section describes issues that apply to the Sun Java Desktop System (Java DS) in the Solaris 10 OS.
This section describes issues related to Email and Calendars..
You might be unable to complete the online registration of the StarOffice software if the software cannot find Mozilla on the system. The software must be able to locate the Email and Calendar application to successfully send documents.
Workaround: Add /usr/sfw/bin to your PATH. Perform the following steps.
Open a terminal window.
Issue the following command:
To start the StarOffice software, issue the following command:.
The command creates the file xorg.conf.new in the root (/) directory.
Copy the new configuration file to the /etc/x11 directory and rename the file xorg.conf.
Modify the configurations in the file by using the following sample configurations:
Add a new monitor section.
Add a new device section..
You might need to adjust the resolution value for your particular system setup.
Look for the following line under the ServerLayout section:
Insert the following line below the line in the previous step:.
The File Manager might fail if you use the following View options:
View as Catalog
View as Image Collection
Depending on the View options that you use, the following error messages might be displayed:
Error:
Error:
Error:
Workaround: None. Every time these problems occur, restart File Manager or click the Restart Application button on the crash dialog box..
When you attach a zone, if the original host and the new host have packages at the same patch level but at different intermediate patch histories, the zone attach might fail. Various error messages are displayed. The error message depends on the patch histories of the two hosts.
Workaround: Ensure that the original host and the new host machines have had the same sequence of patch versions applied for each patch..
After modifying the contents of snmpd.conf, you can issue the command kill -HUP snmp Process ID. This command stops the snmp process. The command then sends a signal to the System Management Agent's master agent (snmpd) to reread snmpd.conf and implement the modifications that you introduced. The command might not always cause the master agent to reread the configuration file. Consequently, using the command might not always activate modifications in the configuration file.
Instead of using kill -HUP, restart the System Management Agent after adding modifications to snmpd.conf. Perform the following steps:
Become superuser.
Type the following command:
# /etc/init.d/init.sma restart
You are booting a Sun LX50 which has a Service partition and Solaris 10 OS on x86 is installed. Pressing the F4 function key to boot the Service partition, when given the option, causes the screen to go blank. The system then fails to boot the Service partition.
Workaround: Do not press the F4 key when the BIOS Bootup Screen is displayed. After a time-out period, the Current Disk Partition Information screen is displayed. Select the number in the Part# column that corresponds to type=DIAGNOSTIC. Press the Return key. The system boots the Service partition.
If you choose to use the com.sun application programming
interface rather than the
javax application
programming interface to develop your WBEM software, only Common Information
Model (CIM) remote method invocation (RMI) is fully supported. Other protocols,
such as XML/HTTP, are not guaranteed to work completely with the com.sun application programming interface.
The following table lists examples of invocations that execute successfully under RMI but fail under XML/HTTP: | http://docs.oracle.com/cd/E19253-01/820-5245/6nglfuqds/index.html | CC-MAIN-2016-22 | refinedweb | 2,359 | 55.54 |
{-# OPTIONS_GHC -fglasgow-exts #-} ----------------------------------------------------------------------------- -- | -- Module : Data.Array.Parallel.Distributed.Gang -- Copyright : (c) 2006 Roman Leshchinskiy -- License : see libraries/ndp/LICENSE -- -- Maintainer : Roman Leshchinskiy <rl@cse.unsw.edu.au> -- Stability : experimental -- Portability : non-portable (GHC Extensions) -- -- Gang primitives. -- -- /TODO:/ -- -- * Implement busy waiting. -- -- * Benchmark. -- -- * Generalise thread indices? module Data.Array.Parallel.Unlifted.Distributed.Gang ( Gang, forkGang, gangSize, gangIO, gangST, sequentialGang, seqGang ) where --import GHC.Prim ( unsafeCoerce# ) import GHC.IOBase import GHC.ST import GHC.Conc ( forkOnIO ) import Control.Concurrent.MVar ( MVar, newEmptyMVar, takeMVar, putMVar ) -- import Control.Monad.ST ( ST, unsafeIOToST, stToIO ) import Control.Exception ( assert ) import Control.Monad ( zipWithM, zipWithM_ ) -- --------------------------------------------------------------------------- -- Requests and operations on them -- | The 'Req' type encapsulates work requests for individual members of a -- gang. It is made up of an 'IO' action, parametrised by the index of the -- worker which executes it, and an 'MVar' which is written to when the action -- has been executed and can be waited upon. type Req = (Int -> IO (), MVar ()) -- | Create a new request for the given action. newReq :: (Int -> IO ()) -> IO Req newReq p = do mv <- newEmptyMVar return (p, mv) -- | Block until the request has been executed. Note that only one thread can -- wait for a request. waitReq :: Req -> IO () waitReq = takeMVar . snd -- | Execute the request and signal its completion. execReq :: Int -> Req -> IO () execReq i (p, s) = p i >> putMVar s () -- --------------------------------------------------------------------------- -- Thread gangs and operations on them -- | A 'Gang' is a either group of threads which execute arbitrary work -- requests. A /sequential/ 'Gang' simulates such a group by executing work -- requests sequentially. data Gang = Gang !Int [MVar Req] -- ^ The number of 'Gang' threads, and an -- 'MVar' per thread; empty for sequential -- 'Gang's. -- To get the gang to do work, write Req-uest values to its MVars -- | The worker thread of a 'Gang'. gangWorker :: Int -> MVar Req -> IO () gangWorker i mv = do req <- takeMVar mv execReq i req gangWorker i mv -- | Fork a 'Gang' with the given number of threads (at least 1). forkGang :: Int -> IO Gang forkGang n = assert (n > 0) $ do mvs <- sequence . replicate n $ newEmptyMVar zipWithM_ forkOnIO [0..] (zipWith gangWorker [0 .. n-1] mvs) return $ Gang n mvs -- | Yield a sequential 'Gang' which simulates the given number of threads. sequentialGang :: Int -> Gang sequentialGang n = assert (n > 0) $ Gang n [] -- | Yield a sequential 'Gang' which simulates the given one. seqGang :: Gang -> Gang seqGang = sequentialGang . gangSize -- | The number of threads in the 'Gang'. gangSize :: Gang -> Int gangSize (Gang n _) = n -- | Issue work requests for the 'Gang' and wait until they have been executed. gangIO :: Gang -> (Int -> IO ()) -> IO () gangIO (Gang n []) p = mapM_ p [0 .. n-1] gangIO (Gang n mvs) p = do reqs <- sequence . replicate n $ newReq p zipWithM putMVar mvs reqs mapM_ waitReq reqs -- | Same as 'gangIO' but in the 'ST' monad. gangST :: Gang -> (Int -> ST s ()) -> ST s () gangST g p = unsafeIOToST . gangIO g $ unsafeSTToIO . p instance Show Gang where showsPrec p (Gang n []) = showString "<<" . showsPrec p n . showString " threads (simulated)>>" showsPrec p (Gang n _) = showString "<<" . showsPrec p n . showString " threads>>" {- Comes from GHC.IOBase now... -- | Unsafely embed an 'ST' computation in the 'IO' monad without fixing the -- state type. This should go into 'Control.Monad.ST'. unsafeSTToIO :: ST s a -> IO a unsafeSTToIO (ST m) = IO $ \ s -> (unsafeCoerce# m) s -} | http://www.haskell.org/ghc/docs/6.12.2/html/libraries/dph-prim-par-0.4.0/src/Data-Array-Parallel-Unlifted-Distributed-Gang.html | CC-MAIN-2014-41 | refinedweb | 534 | 60.72 |
Reparenting cleanup
this wiki
AuthorEdit
Timo Korvola
SynopsisEdit
This fixes reparenting fights that occur between Sawfish and the KDE system tray. Both try to reparent system tray icons as they are mapped, leading to a lot of flicker and an unpredictable end result. After the patch, Sawfish will reparent windows to their frames at MapRequest time, never at MapNotify. Also, windows that are unmapped by the client should normally be reparented to the root, but if the unmapping was caused by the window being reparented by some other client, problems ensue. So we check for that.
The patch also fixes an exotic race condition triggered at least with old versions of Monodevelop #308155 and Gnome Power Manager. Current versions of both programs don't expose the issue anymore. It was discussed on the mailing list on february 2007 and the follow up march 2007. There's a proof of concept available that should demonstrate the bug, a window isn't decorated if unmapped during the first reparenting.
PatchEdit
Index: src/windows.c =================================================================== --- src/windows.c (revision 4194) +++ src/windows.c (working copy) @@ -514,8 +514,9 @@ return w; } -/* Remove W from the managed windows. If DESTROYED is nil, then the - window will be reparented back to the root window */ +/* Remove W from the managed windows. If DESTROYED is nil and + the window is currently reparented by us, it will be reparented back to + the root window */ void remove_window (Lisp_Window *w, bool destroyed, bool from_error) { Index: src/events.c =================================================================== --- src/events.c (revision 4194) +++ src/events.c (working copy) @@ -689,10 +689,13 @@ Lisp_Window *w = find_window_by_id (id); if (w == 0) { + /* Also adds the frame. */ w = add_window (id); if (w == 0) + { + fprintf (stderr, "warning: failed to allocate a window\n"); return; - + } if (w->wmhints && w->wmhints->flags & StateHint && w->wmhints->initial_state == IconicState) { @@ -736,10 +739,13 @@ if (ev->xreparent.parent != root_window && ev->xreparent.parent != w->frame) { - /* Not us doing the reparenting.. */ + /* The window is no longer on our turf and we must not + reparent it to the root. -- thk */ + w->reparented = FALSE; + XRemoveFromSaveSet (dpy, w->id); + + /* Not us doing the reparenting. */ remove_window (w, FALSE, FALSE); - XReparentWindow (dpy, ev->xreparent.window, ev->xreparent.parent, - ev->xreparent.x, ev->xreparent.y); } Fcall_window_hook (Qreparent_notify_hook, rep_VAL(w), Qnil, Qnil); } @@ -757,6 +763,10 @@ { /* arrgh, the window changed its override redirect status.. */ remove_window (w, FALSE, FALSE); +#if 0 + fprintf(stderr, "warning: I've had it with window %#lx\n", + (long)(w->id)); +#endif } else { @@ -765,11 +775,11 @@ w->attr.height = wa.height; w->mapped = TRUE; + /* This should not happen. The window should have been + framed at the map request. -- thk */ if (w->frame == 0) - create_window_frame (w); - install_window_frame (w); - if (w->visible) - XMapWindow (dpy, w->frame); + fprintf (stderr, "warning: window %#1x has no frame\n", + (long)(w->id)); Fcall_window_hook (Qmap_notify_hook, rep_VAL(w), Qnil, Qnil); } } @@ -782,6 +792,9 @@ if (w != 0 && ev->xunmap.window == w->id && (ev->xunmap.event == w->id || ev->xunmap.send_event)) { + int being_reparented = FALSE; + XEvent reparent_ev; + w->mapped = FALSE; if (w->reparented) { @@ -790,11 +803,32 @@ XUnmapWindow (dpy, w->frame); reset_frame_parts (w); } - /* Removing the frame reparents the client window back to - the root. This means that we receive the next MapRequest - for the window. */ - remove_window_frame (w); - destroy_window_frame (w, FALSE); + /* Careful now. It is possible that the unmapping was + caused by someone else reparenting the window. + Removing the frame involves reparenting the window to + the root. Bad things may happen if we do that while + a different reparenting is in progress. -- thk */ + being_reparented = XCheckTypedWindowEvent (dpy, w->id, + ReparentNotify, + &reparent_ev); + if (!being_reparented) + { + /* Removing the frame reparents the client window back to + the root. This means that we receive the next MapRequest + for the window. */ + remove_window_frame (w); + destroy_window_frame (w, FALSE); + } + + /* Handle a possible race condition: if the client + withdrew the window while we were in the process of + mapping it, the window may be mapped now. -- thk */ + if (ev->xunmap.send_event && !w->client_unmapped) + { + before_local_map (w); + XUnmapWindow (dpy, w->id); + after_local_map (w); + } } Fcall_window_hook (Qunmap_notify_hook, rep_VAL(w), Qnil, Qnil); @@ -802,7 +836,10 @@ /* Changed the window-handling model, don't let windows exist while they're withdrawn */ - remove_window (w, FALSE, FALSE); + if (being_reparented) + reparent_notify(&reparent_ev); + else + remove_window (w, FALSE, FALSE); } }
Community's reasons for inclusion or rejectionEdit
vote: yes. For a client to reparent a mapped top-level window is a violation of the ICCCM. Unfortunately KDE gets it wrong and maps system tray icons before reparenting them, thus briefly making them top-level windows. This reparenting then interferes with Sawfish trying to reparent the window out of its frame. The solution implemented in the patch is to peek for ReparentNotify events at unmap_notify. It has been copied from Fluxbox. I have also cleaned up reparenting done by Sawfish a bit. - Timo Korvola
vote: yes. Seems to fix the race condition mentioned above, with apparently no regressions. - Aav 10:00, 16 January 2008 (UTC)
vote: yes. patch applied, thanks. Janek Kozicki 11:44, 19 January 2008 (UTC) | http://sawfish.wikia.com/wiki/Reparenting_cleanup | CC-MAIN-2017-04 | refinedweb | 818 | 57.06 |
J to insert a JSP file in another file.
<jsp:include> vs include directive :
It has the same difference which I mentioned at the beginning of the article (directive vs action). In <jsp:include> the file is being included during request processing while in case of include directive it has been included at translation phase.
Syntax of <jsp:include> :
<jsp:include
Here page URL is: the location of the page needs to be included & flush value can be either true or false (Boolean value).
Example:
<html> <head> <title>Demo of JSP include Action Tag</title> </head> <body> <h3>JSP page: Demo Include</h3> <jsp:include </body> </html>
page: Page value is sample.jsp which means this is the page needs to be included in the current file. Just the file name mentioned which shows that the sample.jsp is in the same directory.
flush: Its value is false, which means resource buffer has not been flushed out before including to the current page.
Read more: jsp include action tag.
2. <jsp:forward> Action
<jsp:forward> is used for redirecting the request. When this action is encountered on a JSP page the control gets transferred to the page mentioned in this action.
Syntax of <jsp:forward> :
<jsp:forward
Example:
first.jsp
<html> <head> <title>Demo of JSP Forward Action Tag</title> </head> <body> <h3>JSP page: Demo forward</h3> <jsp:forward </body> </html>
Now when JSP engine would execute first.jsp (the above code) then after action tag, the request would be transferred to another JSP page (second.jsp).
Note: first.jsp and second.jsp should be in the same directory otherwise you have to specify the complete path of second.jsp.
Read more: JSP forward action tag.
3. <jsp:param> Action
This action is useful for passing the parameters to Other JSP action tags such as JSP include & JSP forward tag. This way new JSP pages can have access to those parameters using request object itself.
Syntax of <jsp:param>:
<jsp: param
Now considers the same above example –
first.jsp
<html> <head> <title>Demo of JSP Param Action Tag</title> </head> <body> <h3>JSP page: Demo Param along with forward</h3> <jsp:forward <jsp:param <jsp:param <jsp:param </jsp:forward> </body> </html>
In the above example first.jsp is passing three parameters (data, time & data) to second.jsp and second.jsp can access these parameters using the below code –
Date:<%= request.getParameter("date") %> Time:<%= request.getParameter("time") %> My Data:<%= request.getParameter("data") %>
4. <jsp:useBean> Action
Read more here – <jsp:useBean>, <jsp:setProperty> and <jsp:getProperty> in detail.
This action is useful when you want to use Beans in a JSP page, through this tag you can easily invoke a bean.
Syntax of <jsp:useBean>:
<jsp: useBean
Example of <jsp:useBean>, <jsp:setProperty> & <jsp:getProperty>:
Once Bean class is instantiated using above statement, you have to use jsp:setProperty and jsp:getProperty actions to use the bean’s parameters. we will see both setProperty and getProperty after this action tag.
EmployeeBeanTest.jsp
<html> <head> <title>JSP Page to show use of useBean action</title> </head> <body> <h1>Demo: Action</h1> <jsp:useBean <jsp:setProperty <h1> name:<jsp:getProperty<br> empno:<jsp:getProperty<br> </h1> </body> </html>
StudentBean.java
package javabeansample; public class StuBean { public StuBean() { } private String name; private int rollno; public void setName(String name) { this.name=name; } public String getName() { return name; } public void setRollno(int rollno) { this.rollno=rollno; } public int getRollno() { return rollno; } }
5. <jsp:setProperty> Action
This action tag is used to set the property of a Bean, while using this action tag, you may need to specify the Bean’s unique name (it is nothing but the id value of useBean action tag).
syntax of <jsp:setProperty>
<jsp: useBean .... .... <jsp:setProperty
OR
<jsp: useBean .... .... <jsp:setProperty </jsp:useBean>
In property_name, you can also use ‘*’, which means any request parameter which matches to the Bean’s property will be passed to the corresponding setter method.
6. <jsp:getProperty> Action
It is used to retrieve or fetch the value of Bean’s property.
syntax of <jsp:getProperty>
<jsp: useBean .... .... <jsp:getProperty
OR
<jsp: useBean .... .... <jsp:getProperty </jsp:useBean>
Other Action Tags
The below action tags are not frequently used so I haven’t covered them in detail.
7. <jsp:plugin> Action
This tag is used when there is a need of a plugin to run a Bean class or an Applet. | https://beginnersbook.com/2013/06/jsp-tutorial-actions/ | CC-MAIN-2021-10 | refinedweb | 736 | 56.66 |
Key Takeaways
- Java 16, and the imminent Java 17 release, come with a plethora of features and language enhancements that will help boost developer productivity and application performance
- Java 16 Stream API provides new methods for commonly used terminal operations and help reduce boilerplate code clutter
- Record is a new Java 16 language feature to concisely define data-only classes. The compiler provides implementations of constructors, accessors, and some of the common Object methods
- Pattern matching is another new feature in Java 16, which, among other benefits, simplifies the otherwise explicit and verbose casting done with instanceof code blocks
Java 16 was released in March of 2021 as a GA build meant to be used in production, and I covered the new features in my detailed video presentation. And Java 17, the next LTS build, is scheduled to be released this September. Java 17 will be packed with a lot of improvements and language enhancements, most of which are a culmination of all the new features and changes that were delivered since Java 11.
In terms of what’s new in Java 16, I am going to share a delightful update in the Stream API and then mostly focus on the language changes.
From Stream to List
List<String> features = Stream.of("Records", "Pattern Matching", "Sealed Classes") .map(String::toLowerCase) .filter(s -> s.contains(" ")) .collect(Collectors.toList());
The code snippet you see above should be pretty familiar to you if you are used to working with the Java Stream API.
What we have in the code is a stream of some strings. We map a function over it and then we filter the stream.
Finally, we are materializing the stream into a list.
As you can see, we usually invoke the terminal operation
collect and pass a collector to it.
This fairly common practice of using the
collect, and passing the
Collectors.toList() to it feels like boilerplate code.
The good news is that in Java 16, a new method was added to the Stream API which enables us to immediately call
toList() as a terminal operation of a stream.
List<String> features = Stream.of("Records", "Pattern Matching", "Sealed Classes") .map(String::toLowerCase) .filter(s -> s.contains(" ")) .toList();
Using this new method in the code above results in a list of strings from the stream that contain a space. Note that this list that we get back is an unmodifiable list. Which means you can no longer add or remove any elements from the list returned from this terminal operation. If you want to collect your stream into a mutable list, you will have to continue using a collector with the
collect() function. So this new
toList() method that is made available in Java 16 is really just a small delight. And this new update will hopefully make the stream pipeline code blocks a little bit easier to read.
Another update to the Stream API is the
mapMulti() method. Its purpose is a bit similar to the
flatMap() method. If you typically work with
flatMap() and you map to inner streams in the lambda that you pass to it,
mapMulti() offers you an alternative way of doing this, where you push elements to a consumer. I won't go into much detail about this method in this article as I would like to discuss the new language features in Java 16. If you're interested to learn more about
mapMulti(), I definitely recommend looking at the Java documentation for this method.
Records
The first big language feature that was delivered in Java 16 is called records. Records are all about representing data as data in Java code rather than as arbitrary classes. Prior to Java 16, when we simply needed to represent some data, we ended up with an arbitrary class as the one shown in the code snippet below.
public class Product { private String name; private String vendor; Private int price; private boolean inStock; }
Here we have a
Product class that has four members. This should be all the information that we need to define this class. Of course, we need much more code to make this work. For instance, we need to have a constructor. We need to have corresponding getter methods to get the values of the members. To make it complete, we also need to have
equals(),
hashCode(), and
toString() implementations that are congruent with the members that we defined. Some of this boilerplate code can be generated by an IDE but doing so has some drawbacks. You can also use frameworks like Lombok but they come with some drawbacks as well.
What we really need is a mechanism within the Java language to more precisely describe this concept of having data-only classes. And so in Java 16 we have the concept of records. In the following code snippet we redefined the
Product class as a record.
public record Product( String name, String vendor, int price, boolean inStock) { }
Note the introduction of the new keyword
record. We need to specify the name of the record type right after the keyword
record. In our example the name is
Product. And then we only have to provide the components that make up these records. Here we provided the four components by giving their types and the names. And then we are done. A record in Java is a special form of a class that only contains this data.
What does a record offer us? Once we have a record declaration, we will get a class that has an implicit constructor accepting all the values for the components of the record. We automatically get implementations for
equals(),
hashCode(), and
toString() methods based on all the records components. In addition, we also get accessor methods for every component that we have in the record. In our example above, we get a
name method, a
vendor method, a
price method, and an
inStock method that respectively return the actual values of the components of the records.
Records are always immutable. There are no setter methods. Once a record is instantiated with certain values, that is it, you cannot change it anymore. Also, record classes are final. You can implement an interface with a record, but you cannot extend any other class when defining a record. All in all, there are some restrictions here. But records offer us a very powerful way to concisely define data-only classes in our applications.
How to Think About Records
How should you think about and approach these new language elements? A record is a new and restricted form of a class used to model data as data. It is not possible to add any additional state to a record, you cannot define (non-static) fields in addition to a record’s components. Records are really about modeling immutable data. You can also think of records as being tuples, but not just tuples in a generic sense that some other languages have where you have some arbitrary components that can be referenced by index. In Java, the tuple elements have actual names, and the tuple type itself, the record, also has a name, because names matter in Java.
How Not to Think About Records
There are also some ways that we may be tempted to think about records that are not completely appropriate. First and foremost, they are not meant as a boilerplate reduction mechanism for any of your existing code. While we now have a very concise way of defining these records, it does not mean that any data like class in your application can be easily replaced by records, primarily because of the limitations that are imposed by records. This is also not really the design goal.
The design goal of records is to have a good way to model data as data. It's also not a drop-in replacement for JavaBeans, because as I mentioned earlier, the accessor methods, for example, do not adhere to the get standards that JavaBeans have. And JavaBeans are generally mutable, whereas records are immutable. Even though they serve a somewhat similar purpose, records do not replace JavaBeans in any meaningful way. You also should not think of records as value types.
Value types may be delivered as a language enhancement in a future Java release where the value types are very much about memory layout and efficient representation of data in classes. Of course, these two worlds might come together at some point in time, but for now, records are just a more concise way to express data-only classes.
More About Records
Consider the following code where we create records
p1 and
p2 of type
Product with the exact same values.
Product p1 = new Product("peanut butter", "my-vendor", 20, true); Product p2 = new Product("peanut butter", "my-vendor", 20, true);
We can compare these records by reference equality and we can also compare them using the
equals() method, the one that has been automatically provided by the record implementation.
System.out.println(p1 == p2); // Prints false System.out.println(p1.equals(p2)); // Prints true
What you will see here is that these two records are two different instances, so the reference comparison will evaluate to false. But when we use
equals(), it only looks at the values of these two records and it will evaluate to true. Because it is only about the data that is inside of the record. To reiterate, the equality and hashcode implementations are fully based on the values that we provide to the constructor for a record.
One thing to note is that you can still override any of the accessor methods, or the equality and hashcode implementations, inside a record definition. However, it will be your responsibility to preserve the semantics of these methods in the context of a record. And you can add additional methods to a record definition. You can also access the record values in these new methods.
Another important function you might want to perform in a record is validation. For example, you only want to create a record if the input provided to the record constructor is valid. The traditional way to do validation would be to define a constructor with input arguments that get validated before assigning the arguments to the member variables. But with records, we can use a new format, the so-called compact constructor. In this format we can leave off the formal constructor arguments. The constructor will implicitly have access to the component values. In our
Product example, we can say that if the price is less than zero, let's throw a new
IllegalArgumentException.
public record Product( String name, String vendor, int price, boolean inStock) { public Product { if (price < 0) { throw new IllegalArgumentException(); } } }
As you can see from the code snippet above, if the price is above zero, we don't have to manually do any assignments. Assignments from the (implicit) constructor parameters to there record’s fields are added automatically by the compiler when compiling this record.
We can even do normalization if we want to. For example, instead of throwing an exception if the price is less than zero, we can set the price parameter, which is implicitly available, to a default value.
public Product { if (price < 0) { price = 100; } }
Again, the assignments to actual members of the record, the final fields that are part of the record definition, are inserted automatically by the compiler at the end of this compact constructor. All in all, a very versatile and very nice way to define data-only classes in Java.
You can also declare and define records locally in methods. This can be very handy if you have some intermediate state that you want to use inside of your method. For example, say that we want to define a discounted product. We can define a record which takes
Product and a
boolean that indicates whether the product is discounted or not.
public static void main(String... args) { Product p1 = new Product("peanut butter", "my-vendor", 100, true); record DiscountedProduct(Product product, boolean discounted) {} System.out.println(new DiscountedProduct(p1, true)); }
As you can see from the code snippet above, we won't have to provide a body for the new record definition. And we can instantiate the
DiscountedProduct with
p1 and
true as arguments. If you run the code, you will see that this behaves exactly the same way as the top level records in a source file. Records as a local construct can be very useful in situations where you want to group some data in an intermediate stage of say your stream pipeline.
Where Would You Use Records
There are some obvious places where records can be used. One such place is when we want to use Data Transfer Objects (DTOs). DTOs are by definition objects that do not need any identity or behavior. They are all about transferring data. For example, starting with version 2.12, the Jackson library supports serializing and deserializing records to JSON and other supported formats.
Records will also be very useful when you want the keys in a map to consist of multiple values that act as a composite key. Using records in this scenario will be very helpful since you automatically get the correct behavior for equals and hashcode implementations. And since records can also be thought of as nominal tuples, a tuple where each component has a name, you can easily see that it will be very convenient to use records to return multiple values from a method to the caller.
On the other hand, I think records will not be used much when it comes to the Java Persistence API. If you want to use records to represent entities, that is not really possible because entities are heavily based on the JavaBeans convention. And entities usually tend to be mutable rather than immutable. Of course, there might be some opportunities when you instantiate read-only view objects in queries where you could use records instead of regular classes.
All in all, I think it is a very exciting development that we now have records in Java. I think they will see widespread use.
Pattern Matching With instanceof
This brings us to the second language change in Java 16, and that is pattern matching with
instanceof. This is a first step in a long journey of bringing pattern matching to Java. For now, I think it's already really nice that we have the initial support in Java 16. Take a look at the following code snippet.
if (o instanceOf String) { String s = (String) o; return s.length(); }
You will probably recognize this pattern where some piece of code checks whether an object is an instanceof a type, in this case the
String class. If the check passes, we need to declare a new scoped variable, cast and assign the value, and only then can we start using the typed variable. In our example, we need to declare variable
s,
cast o to a
String and then call the
length() method. While this works, it is verbose, and it is not really intention revealing code. We can do better.
As of Java 16, we can use the new pattern matching feature. With pattern matching, instead of saying
o is an instance of a specific type, we can match
o against a type pattern. A type pattern consists of a type and a binding variable. Let’s see an example.
if (o instanceOf String s) { return s.length(); }
What happens in the above code snippet is that if
o is indeed an instance of
String, then
String s will be immediately bound to the value of
o. This means that we can immediately start using
s as a string without an explicit cast inside the body of
if. The other nice thing here is that the scope of
s is limited to just the body of
if. One thing to note here is that the type of
o in source code should not be a subtype of
String, because if that is the case, the condition will always be true. And so, in general, if the compiler detects the type of an object that is being tested is a subtype of the pattern type, it will throw a compile time error.
Another interesting thing to point out is that the compiler is smart enough to infer the scope of
s based on whether the condition evaluates to true or false as you will see in the following code snippet.
if (!(o instanceOf String s)) { return 0; } else { return s.length(); }
The compiler sees that if the pattern match does not succeed, then in the
else branch we would have
s in scope with the type of
String. And in the
if branch
s would not be in scope, we would only have
o in scope. This mechanism is called flow scoping where the type pattern variable is only in scope if the pattern actually matches. This is really convenient. It really helps tighten up this code. It is something that you need to be aware of and might take a little bit of getting used to.
One more example where you can very nicely see this flow typing in action is when you rewrite the following code implementation of the
equals() method. The regular implementation is to first check whether
o is an instance of
MyClass. If it is, we cast
o to
MyClass and then match the name field of
o with the current instance of
MyClass.
@Override public boolean equals(Object o) { return (o instanceOf MyClass) && ((MyClass) o).name.equals(name); }
We can simplify the implementation using the new pattern matching mechanism as demonstrated in the following code snippet.
@Override public boolean equals(Object o) { return (o instanceOf MyClass m) && m.name.equals(name); }
Again, a nice simplification of explicit, verbose casting in the code. Pattern matching abstracts away a lot of boilerplate code when used in appropriate use cases.
Pattern Matching: Future
The Java team has sketched out some of the future directions of pattern matching. Of course, there are no promises on when or how these future directions will actually end up in the official language. In the following code snippet we will see that in the new switch expression we can use type patterns with
instanceOf like we discussed previously.
static String format(Object o) { return switch(o) { case Integer i -> String.format("int %d", i); case Double d -> String.format("int %f", d); default -> o.toString(); }; }
In the case where
o is an integer, flow scoping kicks in and we have variable
i immediately available to be used as an integer. Same holds true with the other cases and the default branch.
Another new and exciting direction is record patterns where we might be able to pattern match our records and immediately bind to the component values to fresh variables. Take a look at the following code snippet.
if (o instanceOf Point(int x, int y)) { System.out.println(x + y); }
We have a
Point record with
x and
y. If the object
o is indeed a point, we will immediately bind the
x and
y components to the
x and
y variables and immediately start using them.
Array patterns are another kind of pattern matching that we might get in a future version of Java. Take a look at the following code snippet.
if (o instanceOf String[] {String s1, String s2, ...}) { System.out.println(s1 + s2); }
If
o is an array of strings, you can immediately extract the first and the second parts of the string array to
s1 and
s2. Of course, this only works if there are actually two or more elements in the string array. And we can just ignore the remainder of the array elements using the three dot notation.
To sum up, pattern matching with
instanceOf is just a nice, small feature, but it is a small step towards this new future where we may have additional kinds of patterns that can be used to write clean, simple and readable code.
Preview Feature: Sealed Class
Let’s talk about the sealed classes feature. Note that this is a preview feature in Java 16, though it will be final in Java 17. You need to pass the
--enable-preview flag to your compiler invocation and your JVM invocation in order to use this feature with Java 16. The feature allows you to control your inheritance hierarchy.
Let's say you want to model a super type
Option where you only want to have
Some and
Empty as subtypes. And you want to prevent arbitrary extensions of your
Option type. For example, you do not want to allow a
Maybe type in the hierarchy.
So you basically have an exhaustive overview of all subtypes of your
Option type. As you know, the only tool to control inheritance in Java at the moment is via the
final keyword. This means that there cannot be any subclasses at all. But that is not what we want. There are some workarounds to be able to model this feature without sealed classes, but using sealed classes, this becomes much easier.
Sealed classes feature comes with new keywords
sealed and
permits. Take a look at the following code snippet.
public sealed class Option<T> permits Some, Empty { ... } public final class Some extends Option<String> { ... } public final class Empty extends Option<Void> { ... }
We can define the
Option class to be
sealed. Then, after the class declaration, we use the
permits keyword to indicate that only
Some and
Empty classes are allowed to extend the
Option class. Then, we can define
Some and
Empty as classes as usual. We want to make these subclasses
final as we want to prevent further inheritance. No other class can now be compiled to extend the
Option class. This is enforced by the compiler through the sealed classes mechanism.
There is a lot more to say about this feature that cannot be covered in this article. If you are interested to learn more, I recommend going to the sealed classes Java Enhancement Proposal page, JEP 360, and read more about it.
And More
There are a lot of other things in Java 16 that we could not cover in this article. For instance, incubator APIs like the Vector API, the Foreign Linker API and the Foreign-Memory Access API are all very promising. And a lot of improvements have been made at the JVM level. For example, ZGC has had some performance improvements. Some Elastic Metaspace improvements have been made in the JVM. And then there is a new packaging tool for Java applications which allows you to create native installers for Windows, Mac, and Linux. Finally, and I think this will be very impactful, encapsulated types in the JDK will be strongly guarded when you run your application from the
classpath.
I highly encourage you to look into all these new features and language enhancements since some of them can have a big impact on your applications.
About the Author
Sander Mak is a Java Champion who has been active in the Java community for over a decade. Currently, he is Directory of Technology at Picnic. At the same time, Mak is also very active in terms of knowledge sharing, through conferences but also on online e-learning platforms.
Community comments | https://www.infoq.com/articles/java-16-new-features/?topicPageSponsorship=bbdd1116-15e9-4ebf-9918-4516668c5053&itm_source=articles_about_java&itm_medium=link&itm_campaign=java | CC-MAIN-2022-05 | refinedweb | 3,895 | 62.27 |
A set of over 1250 free MIT-licensed high-quality SVG icons for you to use in your web projects.
A set of over 1250 free MIT-licensed high-quality SVG icons for you to use in your web projects. Each icon is designed on a 24x24 grid and a
2pxstroke.
If you want to support my project and help me grow it, you can become a sponsor on GitHub or just donate on PayPal :)
Icons search:
npm install @tabler/icons --save
or just download from Github.
All icons are built with SVG, so you can place them as
,,
background-imageand inline in HTML code.
If you load an icon as an image, you can modify its size using CSS.
You can paste the content of the icon file into your HTML code to display it on the page.
... Click me
Thanks to that, you can change the size, color and the
stroke-widthof the icons with CSS code.
.icon-tabler { color: red; width: 32px; height: 32px; stroke-width: 1.25; }
Add an icon to be displayed on your page with the following markup (
activityin the above example can be replaced with any valid icon name):
Import the icon and render it in your component. You can adjust SVG properties through React props:
import { IconAward } from '@tabler/icons';
const MyComponent = () => { return }
@tabler/iconsexports it's own type declarations for usage with React and Typescript.
All files included in
@tabler/iconsnpm package are available over a CDN.
To load a specific version replace
latestwith the desired version number.
To compile fonts first install fontforge.
When compiling the font it will look for a json file
compile-options.jsonin root folder (same folder as the
package.json) In this file you can define extra options:
The default settings if you have not defined the file will be:
JSON { "includeIcons": [], "fontForge": "fontforge", "strokeWidth": 2 }file.
{ "fontForge":"/Applications/FontForge.app/Contents/MacOS/FontForge" }
To compile the fonts run:
sh"] }
tabler-icons-svelteto use icons in your Svelte projects (see example):
All icons in this repository have been created with the value of the
stroke-widthproperty, so if you change the value, you can get different icon variants that will fit in well with your design.
Tabler Icons is licensed under the MIT License. | https://xscode.com/tabler/tabler-icons | CC-MAIN-2021-43 | refinedweb | 379 | 69.52 |
DLLARS ER ANNM. PUBISHED B A COMITTEE F MINI ERS, ORE THE H. E. CHURCH, SOUTH. E eMES D DTR Vol.XXVII.-N 18 ~e.Maco, G ., T ursdy, Sptem er :4, 865. N wSre. N .1 1 From he Cristin Trasurr notwort one esisof potage Thisone act ut thre ae retibutons i thi word, as Speaing o hisown person hitory ncon PERSNAL IETY Thee in the ring's fresh and life dred t ) b ~born andi educated in the Episcopal Oburch, vrue ytecneso ftosnso Theein te auumn' melow bush of he oungeducted.He hs wiely mplat- oety He ay b themidngh hieftol uo r hard f, i my outh So ar a I kow, iththe ietyof tousads o theprofsse The inwiner' aow :.ed n te ear of= 6 aar-nt sron nturl dr yu o yut roprty -le ay e te I* heywer inrodce ino Bsto ad Mss. Chistan ofthelan. Te ani fo spcu Lif isnotlf wihou TheLor, afecio, wich gude byete patil e. cndirytha fo soe magnedwrng oul chsets y D. Jfrs, ho ametoBoson n ltin; he ree fr ginan te uscrpu Life s no ligh wito~ut hy lve*- retcednes or ice, romps unt secring ay b the runke rowy to mbrol you son been he 1dh In 8a ampofsu e dleett o t merT e A Bla k hi b ti dls ou orhi ofs ri g s oo e uc ti na a va ta m fgh t at ma e d n isdigra e r is th m n 81 id pd toth irco se vaiv q ie c ns qu nc er m ny wh t he be in Look ona mit, voi, ablo; ragemnt, o esenial o te ciiliatin an tht shll edue yor sn ino g mblig> reahingof he gspe, t whih w wer ina i thechuch a me ofearnstsimpe pety Thee Lor, wihout thn heringear rogrss f th rac, ha conribted olitletodisspaton, rleoery~r blghtthe air ame reatd gee srangrs bfore an of argeis fuenc fo goo, ar nowcol Rear, ye herethnot!kee outof vew he rsponibiity f an oter o you .dughtr. H ma be he vnde of eIn is etird paish in Bis o, R ode nd lfelss, nd i may la mental nstaces No~ojhe tpe14 o@ sn prty han he aren, tohav thechil edca- rdet spritsto ll te lo an theviefus n IsandBisop Giswld'sminitryhad een vemadeit tl s wr of th Thevat No, ot te go ofthe eavns-- ted Evn no, i themids ofall ur lght you neghbohood kepingyou n cnstat vey rmarkblyblesed wth ev vls od bledin bosm ofthechuih, ad sckin th Saveas sen i~heee may a ne ill e strtle by he dctrie anoyace ad alrm. h, hw muh ceape ligon. is poe te wee mch aquetme dlifebloo fro it tor andlaceatedarteies But hyslf i thse! f eucaing he dunghaseencknoleded, fay, ad mke or yurslf ad cilden, ne hortselctin frm te PryerBook th theicdtie andresonsbiliiesof te oisi But hyisl I hem out catd a to ende himself susainig wen i theretrbutve Nmesi, cmmisione to arvs inBostn, hichwerewriten y hi. hotanes, e arh a o a th minst punih tose ho averepuiatd th climsThe Bishon deferided himself' in some e~ssra, a sa osneune Eart's oly sn, OLord ar Tho- buden r a uisace o soiety itfrompunisoc vte negectig th preiou publish in Trat onPraer-Metins a any astos hve ben trownout f th re Be of in ow ndGod, ile elf potecio s bond afb 1 im~th tomr tru d eschesi Adoae chut tht: ~~ah fe Ca h r m I d a intallthigedtbse y a .. -s; sbjet. T e prtecion f soiet is ver im ight a rghtbelo gingto an, ot ee s (i e. Iam hank ul t sa it has eve sue by he contain f s ecultion whih ha ee Most le~sd Lor, grat Go of ll mnis areak oe lmo anme prdo of Gd. I is tme w all elt he obigaton itent yste oem e a oc a ery me scuzark d s hantwo dd wil b Hy dan, m noon my ay, m eve impses, ot uon paentPmerel, bu upo ByBishp GriwoldI wa prepred or mydestoyed heireffilencyin te pulit, r, s Giveto m evey da an~hou, mesure forther abhmen, ismoretha we o Go tha itspreiousimmotal ind hal ticd, ad-whch Ihav endeavor faithful le t re-nterthe astratewithut ebar The ernestto m longng hert, ratio tha eventhe potecion o socatintr- enerlly wre th s m thenas nw. Ifmy ben brken *o, crcuit disogani4, an som ----.-----***- ---- prsonlit. Wth al tat as ben aid r sug D T NG'SREP Y T BIS OP OTT R. o Bibop Grisold Inthe eeuseof eh ad dsolae. hol a fernce tha wee a For he outhrn hrlaionAdvcate of he geatess nd alu of he umansou, Th reentPastralLettr o Bisop ottr, tmpoaneos payeron ll theroccsion beutiflla a ell ultvate 'rrden ar no In th enu eraton o thenatual rghtsof vry eroneus. he geatnss o gret me is goittl s of cl gymenwho ten n rechus b irsiariin here r hewas hip arte een aid n as a i manmad bymorl ndplitcal hiosoher, oseved ad ire, onoedeve toexese- o epscpaladmniton Th Re. D yng othr curhesto reah i hs curca, n wot t enerainthepasorand susainth publish ~ ~ ~ ~ ~ ~~wt simla ansers an nedDr ygsbsq vtcrer. ed, and ca cacl b eognzesinmn mans ert ly el-beng ha e een fr dst nt erona afecio s, f he enime tsof om p efceshi lete wih t e tat me t t atit s Fory ouryers goI c mm nce m mi d ocaitesfo th w nt f ifier an tach rs ::::.~~~an :: hp s r ,rto d n n oae s d Neb ou eho ortizs eihp to heageofinoleaneof erectio! hu ofa ary o akigdo, hevale f apatiu rviwsthehitoy o te ntrdutin o wsdstigushd b aettrfomBisopKep ae efesh wtha lvey ens o prsoalso shoingtha th rght ofspech ofopiion la mn i an ofthse espctsma beappe- ighChishvies itotheAmeica Eisc- wom ha nversee, o tis ubjct.Itwascepanc. Epeialy des heminstr ned of cnscehe, are ut he iscver ofyeser-ciaed.Som ma aprecatehisdaiy til;palChuch.Bor an eduate intheEpi. eoug fo hi tht Ihadcom trm Bsho frsh aptsm f te Siri--are-onscraion day. Buherisaentarigtjutasrecous-othrsthevale ofhisvot; oher, wht h coal~urc, haingspet frtyfouryeas i Grswod. Tis as he eginingof wafar an renweddedcaton o thir ighandhol jus asresonbleasanyofthoe mntone, aysto uie o chrc; ohes my aprxi.he miistyhavngdurngtatimeoccpid fr ear, aoud te smegret rinipls allng Aninceae fpesonl pet inth of te peple:the ightto b eduated Yetit i on his onsieraton hiefy, tat w thehighst vlue.bretren enshw, JhnsMellaine an It mst e cnceed health igh isnotbas th riht o ma toeduatin. ereis a Dr.Tyn beinsby tatng tat he lais Hwle, ad may oher ofsimlarchaacte, p I n ry uo mOh Lor, fr lm i trubl*"- quit soobvousas he igh totheair towa ind gited~ithpowre hattrasced alma prsse intheBisop' leter onsitues hatI ws clle tostat mdefnceof te Gspe Pryeris lwas apriilee, ut t i a oia ter toligt, o lcomtio, t th asof imb teialfores;capcites hatastnis usmor whch as eenknwn s te Hgh-hurh i it dotries nd ts ibety.It as y frstpriilee i tie o truol, Wat hoy w sa bvousasths, s heequll cmprhesie ait ad feeagncy tismin, e syha a omnaion o Crisias n rliios wrsip through is harctritsspri, is enenc, ik snw;an th rvelosd ve ou lve was deferred on the ground of hinbeing unable to attend the London exammations, and six- to w end it e th the name of one of the candidates a very animated conversation took place on the subject of literary qualifloation for our ministry. Dr. Osborn, W. Arthur, W. Mh atnad de rsemaint ned the an ought tobe received into the Wesleyan minis- try who was ignorant of the common elements of English education. Dr. Rule, T. Vasey, J. H. James, and some others, expressed their fears that such literary and educational tests as were applied to candidates by the London July Committee would lead to the rejection of some who were called of God to preach, and upon whom He had set his broad seal by giving them grace, gifts, and fruit. Mr. Punshon said that .the July Committee had never yet declined a span on the ground of literary dis- qualification alone* EXAMINATION OF CHARACTER. The session of Monday "was chiefly taken up with the Examination of Character, Every man's name was read. The cases of delinquency were very few, but seNeral of them were of a pa8m ud hara eer. Con ree epe a wlds e standard of Christian urilty and propriety mn si inb r fashion. Grave transgressions are visited pith condign justice. The larger the body of min sisters becomes,-the greater becomes the neces- sity for this." EXAMINATION OF CANDIDATES, The candidates for ordination are examined i their totolog cal acqui ements u oaT edition to this a pubhc meeting as held at which they are called on to relate their personal ex Criencte as t th itr conversionaccaju t f hne nu hereof candida s, this service has been divided between two or more churches. On the present occasion i wa nhe u nd on t leven- "'E ly: Cherr -street, Birmingham; Darli - oanm8ereet, Wo verhampton; Wesley Chapne , Walsall; and at the Chapel at Hill Top. The theological examination of the candid estoTfk, na d do n eedbp ts e Dr. Hannah. Mr. Vasey inquired whether any of thbse young men were in the habit of read- ing their sermons, a question which gave rise tobi hsohmewh atilengthen d were An eix mon-reading habit was very stangly depreca ted and spoken against. It was evidently the mh nommshf a fi tombee ecoe vtehdeoconfe nue theepym dwspoe i2nooctc es ,a j tempore and from his heart. All the young ho dd rk fuHtoo nbe ton an The ordinationonmNA Id on Wednesdeay mp e e ific ,es yt, Ch hat hd n ash'ery rain, and the restriction of admission to those T bu b t iit s e or a dpart- forty-nine. The services are much the same as those of our own church on similar occasions. 'Pe cand dapersesiedaeme Tax prieniduccessive the secretary of the conference took Lart -in the laying on of bands in each instance, assist- ed by Bishop Janes, and several of the older nxembers of the conference, two of the assist- ing ministers retiring on eagg successive occa tion, and two others taking their places. After the ordination, the Lord's Supper was admmis tered to the newly ordamed ministers, and the ex-president, Rev. Dr. Osborn, delivered the charge, which was founded on I Tim. iii, 1-7. straiZED CHILDREN AND THE CHUncu. On Saturday Mr. Joshua Mason introduced L rr ag e, m dn1 s" requires special care to be taken of those whom we baptize; directs that, when the returns of the schools are laid before the quarterly meet- ings of the circuits in March, the number who have been baptized during the year, in con- nection with each of our congregations, should be included, and opportunity be sought for de. nt n meat 8*eodr andmb o of the Lord,' and for leading them early to an intelligent and conscientious choice of formal membership with our society." He sustained it by a forcible speech, closing with the following suggestions; L Let parents and guardians of children know that those who are dedicated to God in mr gen dow hC iend blo emdm tion by the ministers and at least the officers of oyr societies. "2.Aset our day-achool teachers, and espe, ciply our Sunday-school teachers, be informed h b is na relrded asaso8meah5rc r assist those who have charge of them at home in Wading them early to take deliberately upon thk se vheis a rmatlo ts on fteris to know Jesus, and to love and serve him. 3. Let the preachers meet the baptized as such as soon as the children can discern their right hand Trom their left, at least once a quar. ter and oftener if they can, either on the Sab= ba*h or the week day, and talk to them, exam.. ine them in the catechism, and pray for them, inthe presence of their parents. "4. Let them be classified under three ages, say from five to ten years, from ten to fifteen, and from fifteen to twenty. . "5. Let a selection be made once a year of those who may be thought worthy to be united to "W e A .di the resolution pressed the point that hapjgedj children musi be regarded as members of the church, and trained as such. He express& his regretat eth t la eee right-a Tme i reian force the objections oPthose who took excep, tion to their presence. Dr.Rule maintained that not only those who b ann so e 5du asP a solicitude if they came within our reach. pThey were nobbaptized into Methodism, butinto the hurch f rist; sad it was our duty to take Af Arthur thought the subject was of the utmost importance. First to baptize and then to neglect, could not be a part of the faith of any Chrishian Church, and it ought not to be a h Tu t done to make them members of the Church of God. He dou'oted, however, whether the par cular meaps n esmeddnrMr. Hason's re ings-was desirable. He mov that the sub 3ect be remitted to a small committee, to meet uring the year, and report td the next con Tnt an interesting and powerful address from Mr. S. R. Hall, in whic7he dwelt on the relationship of the present proposal to the rule on the admission of members, the conference dra fleave the matter open to further manor JANEs. if he is, that he has been misled by the esser- We do not find in the reports of the proceed- tions of some unaeruptdous defamer of the ings, much else of interest to our readers gene- Southern churches, and has listened to tales rally. There are long speeches reported from that hie uenal caution would have rejected, and Bishop Janes and others, but we do not find his Christian moderation would nossordinarily anything in them that we think it worth while have reproduced in so offensive a connection. to copy. The Methodist Recorder, speaking of the If our own speech be plain, and our declara- Bishop's speech, says: lions strong, let it be understood that we feel "The president seemed rather afraid of the that the malignity of the alanderer to whops he bishop a politics, for he reminded him rather has listened deserves reprobation in the IRoet pointedly that politics were not generally dis- pointed language. cussed in the meetings of con erence. But the The writer seems sensible that h bishop quietly took no notice of it, and said all e is over that was in his heart to say. He said little, if steping the bounds of Christian charity when, anything, from which an Englishman ought, after a broad, we must say an infamous, acen- under any circumstances, to dissent. Still, station, he asks, "is this hard ?" We answer about a great deal of his speech there was a do as hard as a local de arture from the truth cided political coloring; at which, indeed, we P can cannot wonder, when we think of the tremen. make an accusation. We defy The Methodist to dous crisis through which his people have been point to a single utterance of the churches to pRBBlug, and thenbsorbinginterestevery Amer- sustain its assertion that "the Southern church- can has had in the political situation of his es 'o day would rejoice in the power to hunt down country. The bishohy'sreferencesto Mr. Thorn- the freedmen and convert them intachattles;" nthT s t o avernd se8r i 1 that "they would restore the laws of ignorance, a storm of cheers; while his generous tribute- concubinage, and trade in blood." Is it to be to the excellence of the Queen (a tribute found in any of the utterances of their bishops, wh ,to beta aioye go nri wa or in those of their official bodies? Can a line of delighted t cheering."P at ben g f b pe 33 "We have I orally conquereli a peace. That that it is found in the indisposition of the there is now no rebel army pn our soil is not Southern churches to consent to their ab- due to any change in the Southern mind, to sorption by those of the North, vie deny that any return of the Southern heart to the old the latter have said to their Southern breth- 1110mOrl08 Of illeU OdH 8 Jb plyOtuor oult rou, "Come," except upon such humiliating rs hav wrested peace out of the fiery jaws terms as no man who has a soul in him can es earnd 01 c prtun Mtimnaorm ynd he accept, and by yielding to the false principle that political tests are to be shade a condition tio tal tthhey canubts sist us as th form1. of church membership--< principle that a free perhaps more so. All we ask of the conquered people have always and everywhere resisted, South is loyalty ; they nicet our demand with These are reasons, enough for rejecting any the empty shell of the thing required. They such overtures as Northern churches have e2ths. andh r b i eisupbr 8ef' made, not to Southern churches, as such, but their franchises are at our mercy; they sub- toindividualministersand members of South. mitthemselves to our law; they allow the old ern churches--a policy which e understood flag to take its place again; but the arms of The Methodist to have pronounced unwise. In oe old ,etN edcree otto le, thi pmelame- the case of the M. E. Church, South, there are stringent forms of swearing, calmot comm d other reasons, growing out of past relations and the heart. Our peace is hollow and deceitful the ecclesiastical action of the N rather Gene- -it is simply conquered. ral Conference, sufficient to var fusion, "until "Whether or Imb the S uthernepdople are that Conference shall have made confession of iis tn eis d a ce que woultdhbe illshy the wrong it once souglit to perpetrate against great folly not to recognize and remain fully us-a wrong from which it was restrained by awake to it. We must not allow ourselves the Supreme Court. These reasons are fully elm a ve o ha cod n ae enu y set fourth he ecent address of our bishops, PP re everywhere disposed to submit o We do not feel called upon to give the proofs new stateof things-that they consider slavery at hand that the great body of Southern Chris t a n see he n at e a ch ti c oo c means that these things are horribly disagree per to prove the truth of his unsustained alle able fac smw ch the bli bi ythae ht gation, before we say more, hileUt 3 Ut t the 9a kteh nunt oerersi1n n A CONTROVERSY IN THE P. E. CHURCH. willing and deeply hostile peop e; that they Bishop Potter, of New York, has recently will embarrass the GoTrnmenta esroe an excited a war of no small proportions among at a venture. his clergy. He addressed an important pastor-- "But if this is the aspect of the purely politi. al letter to the clergy of his diocese in which, Cl seprheninh oio ,eit su br e. fit and after reviewing the prinolples and law of the- of rejoicingin the overthrowof slavery and the church, he lays down the following rules: liberation of millions of men and women, "1. The church makes a fundamental dis- many of them their fellow--commumoants at tinction between ministers episcopally ordain- the same altars the Southerachurches todaysoodl ed and ministers not episcopally ordained; for o rejoice in the power to hunt down the freedmen, and when she admits them to serve at her altars, convert them again into chattles. They would restore she'does not re-ordain the former. .. -. ..3 "2. The church requires of all who minister ., 1 .. . Jo her congregations two things: first, that they s. It her a 0 ., ,, ,, ,4. be episcopally ordained, and second, that they' (er, and calculated to increase the strife which beepiscopallyordained ministersof this church. ought to be hushed, if it cannot entirely be Non-episcopal divines are therefore, doubly ex- healed? We deny that it is hard-much less cluded-first, because they are not epieco- is it unjust. It is only hard, if at all, as truth pally ordained, and second, because they are i t o al t bet e th rnt i o p)cxoc ld od minister and Southern Churches is slavery. We wanted only from administermg the sacrament, but to blot it out of existence as a crime against also from teaching within her fold, holding God and man, and they wished and labored to them to be incompetent; for she requires them save it as a divine institution. We have sue- to be regularly admitted as candidates for holy needed; the bondman is free; he has posses- orders, to pass a probation of six months; those sion of himself; he can choose his own em- examinations having especial reference to player and keep his own earnings; and we now ints of difference between the church and g 1 fto ourl"outhern broth Comeu*eGgrd oTyh ch cho fhae rm gm .novel- plough has fi ed it up; let us be friends; let ty or varietyin her devotional servicesis severe us restore the old relations.' The answer to in the provnnon which she makes for securing this invitation is a flat negative. The crime of absolute umformity of worsh p. She wall not dve%, on whd Iths ahp Irdh sehoef heMOtoC alll head fd on bebyd Ai rianio Southern churches, sad they guard their hatred novel forms or expressions. She leaves noth- of the North with the very sanetities of re, ing to the fancy or caprice of the ofliciating ligion, minister. If he become lax or unsound in his wi th el re o e ph le anm a a on dd batotti at ethe t deu sight greets us. Almost the whole white hu- raises, theeffleesforbaptism, for co firmation manityof the South hates the Union, and sub- or the holy communion, for matrimony, and mits to.the embrace of the Washington gov for the burial of the dead in Christ--these will ernment only with suppressed and despairing rebuke him, and help to sustain the faith and maE 0 Ptt afe li is f 0. SNo de tiond oshpitte of his gnor- is quite natural and to be looked for; if so, it clear and absolute than the I w which the is equally natural that we should take care of church has ordained, and evidently means to iwherest oP stuarsoom ani y, ft n i IIseEver minister, r ,ea 11 le ions of mankind,'the Southwillnurseits wrath, occasions of public worship, use the book of - we must take care of the country and of liber- common prayer as the same is or may be estab- ty. The spirit thattoillinsistin electing leading rebels wished by the authority of the general conven, to ofice may claim the merit of candor, but it must tion of this church: and performing such ser- nevertheless beguardedbysoldiers: Without them, it vice,.no other prayers shall be used than those would trample the authority of the nation in the mud prescribed by the said book.' The only excep- to-morrow. We have a conquered peace. Theforce tion to this rule is the permission gaven to the schich won must conserve. This, doubtless, is a Bishop, to set forth, temporarily, prayers or great evil, but a less one than anarchy or re, thanksgivmgs for certam special and extraor- :.e.1--.1 rebellion. The military armeaknothewith- diary occasions. .... Jr. until disloyalty shallyive place Finally, we have seen that the church re- to patriotism? peatedly, and in the most -solemn manner, .: The extract above is from a leading editorial a ec seeiene of ery ministedrosh or- in 1'ke Methodist. Last week we took occasion disciple and worship. She holds God to be a to commend the tone of that paper, as evine- God in order, and not of confusion. She leaves ing a better spirit toward the Southern Metho- I others to employ their own methods: but within dist Church than th th f th I herown fold she will endure no irregularity, Northern Church, it spoke r eace nontrhlisfg tehawhabte up t bi bde ev suggested that to live and Mt live was the true truth, and nothing but the truth, of God. policy of both Churches. But its tone is won. We have seen if reported also, that the Rev. derfully changed in a fortnight, as the above I Dr. Muhlenberg, of the Church of the flolly Me a tC it eve r Dommutn hhr i k a es 1 zeo eteh Church in the South. We have marked a few Broadway Tabernacle, on a Sunday evening- passages in italics. Some of these passages in, affiliating with Presbyterians, Congregational- dicate the political tendencies of the paper. its, etc.-notwithstanding this admonition of They show that its idea is that, in the South, Bishop Potter, against such practices by the "the Yankees must govern"-that to effect clergy of the Episcopal Church-the Bishop in- this, we "must be guarded by soldiers;" that stituted the requisite preliminary proceedings "the military arm cannot be withdrawn from to bring the offending minister to trial. ri y ch ce ,a e n **the arms of the soldier cannot comment) the it has been deemed expedient to make his the heart." What object then can it have in test case. In close sympathy with him are the keeping up the military rule ? Not to "com- Rev. John Cotton Smith, of the Church of the mand the heart," which it says carinot be done Aseensiion; the Rev. Dr. Taylor, of Grace; the in that way, butto crush us out. We turn, how.. Rev. Dr. Canfield, of Brooklyn; the Egy. Alvah ever, these politicaldinestions over to those Guion, of Williamsburg; and the Rev. B. F. appointed to deal with them; and revert to the De Costs, editor of the Christials Emes-all em- false scousation against the Southern churchein inent Churchmen of the Low Churob, or Evans But before we go further, let us say, that we gehcal party. cannot resist the conviction, that the amiable We find also the following paragraph bearing Dr. Cro a is not the author of this article; or, on thil subject, indicating a movement, that, if ~ ( I _I __L I__ __ bv@# one-third-are membereof soolety, being [ \\ (ft 10 $11 & an increase of 1,401 to our membership from thehSabbath 8 htools ofrthePeincip lit alo e MACON, GA., SEPTEMBER 14, 1865* I remo adult remaining ITthese schools ------- ------ == is the fact that the adult classes are conducted DIFFIOULTIES much on the same principle as the English It is rarely that we receive a letter, in these Bible-olass. Dyes no ra ar e IBiblet days of restricted mail foollities. We are 1 ou94utg knr ut present with th reent thereby denied that knowledge of the general of the school,,at worship and addressee, are the diti f the Church an} our section, with inost likely means of preserving elder scholars con on o tous, and of bria ing them into fellowship with which the editor of a Church papers supposed theGhurch of 3hrist ?"-A thought that de., to be familiar,- There is nothing torblate of an consideration, encouraging character in the Churches in this serves HOME MISSIONS. city. Indeed, we have been troubled to see The Committee of the Home Mission and, the indifference, indeed, the apathy of many of Contingent Fund met on 25th. The "Contin- our members here to the interests of religion. gent Fund" has long been an institution in The s' is bad for theChurch, but worse for II ign Wesleyan ethodiam, and was raised by a themselves. "yearly collection" through all thecircuitsand As we said, we know but little of the progress the proceeds devoted chiefly toward meeting of the Church elsewhere-nothing, indeed, so expenses connected with the ministry on the definitely afs to make what information wehcan circuits which were unable to defray them out give worth much to any body. mors a of their regular inconice. It is now blended reached us of revivals Inr t iro a pil hief- with the new oHome Mission",operations, for yei 80 rm ixtn mand west ofus. In one which there are special congregational collect or twoinarts as hhe e t on . Methodist revivals. We'shouldrejoice to know work in Great Britain. In 1858 six ministers were so appointed; in 1859, seventeen; in l800, that the entire Southern Church is partaking thirty four; in 1861, forty-five; in 1862, fifty- in these tokensof divinefavor-and we entreat four; in 1863, flfty-nine; and in 1864 seventy- our brethren to communicate to us whatever of two Home Hissionaries were appointed. In interest they can tollrespecting the work under addition eight miniiders are appointed for the theiroo le whlVleen Christi s on so- benefit of Wesleyans in the army in Great Britain and Ireland. among the people elsewhere, they are edified Generally these"home missions"are distinct and encouraged; and are themselves stimula- from the regular elronits, and the Missionaries tod to now religious zeal. He who tells the are not membeth of the Conferehoe. As soon Church of a gracious revival preach@ to it a however as the mission stations are strong newosermo that ought to be, and often is, enough they are received as regular circuits. Smeel859fifteenHomeMissionstationshaving Because we are ignorant of the condition of answered their purpose in raisingcongregations the Church, we know not how to shape our and churches, have been satisfactorily merged .discourse on subjects of practical religion, into circuit arrangements. Thisi*esult was to Whether thempre"6hershneed re-aniimatingu or be desired and expected following successful 173 labor; and it is encouraging that several other whether the Sunday School interests should be stations will this yearbesimilarly incorporated. pressed, or whether neglect of discipline is to the report stated that the income had great. nestrTert n rorwa i or e co's hermHne of ly improved during the past yearthough it was, still insufHeient to meet the demands of the exhortation should be taken, one cannot know wo k. It appeared also that wherever Home mn tryeak owha ur th difficu e b s p Missitoens had been establ hi is se may he. It s impossible, therefore, under greatly enlarged. The marked success at Bow, present circus ances, to get up a paper satla- a new circuit in Londbn, received special men- see, me.... q. n,.,dk b krasi .imBta- Mon emhel a e 1 a including the minister'efamilymetfor worship m ti na nued e asu it mhetminister'achoon tTohn y prepared to direct his n to those "live" sub- eircuitamountingtobetween7,000 and 8,000, si rofitn1 y mos erest, an especially free from all debt. m w 11 be se8en h stle-s for once our reap* CHILDREN 8 FITND. pearance we hav4 Deen 1= It Mr....si alone to Among the Wesleyans, the entire support of carry the bullen of the Contribut a preacher's large family of children does not and correspondents are bpuatpf w; and mayors altogether devolve upon the circuit in which those, who have heretofore honored our col- labors. Twelve pounds (about $60) is allowed umns," are for the present dumb. We wish to the supportof each child; but it is not paid Id ch th abd to the preachers directly by the circuits in we ocu rea em again, rouse them to which they labor, but from a general fund, con- thinking and writing as they have done afore- time. We trust that this hint #ill be consider- tribute by the circuits in proportion to the ed a standing invitation to them to renew their number of their membership. By thizarrange- lab the C ment the difEculty'sometimes found manoug up ors or hurch, and their regular inter- tati ' course w2th our subscribers. If the Advocate is in a oning ministers with large families,.is not what it has been in other days, it is not entirely obviated, as the circuit pays just the because its editor has declined in seal or in- same whether their preacher be childless or dusty as we belie e the fr da have a score of "olive 'plants around his table." th::::::&:: k'Tund u u a e e an dea nds o lieve time will remove. On the present scale every ninety members furmsh the amount fprthesupport ofone child. THE BRITISH WESLEYAN CONFERENCE. THE MIBSIONARY COMMITTEE OF REVIEW The preparatory Committees of this Confe- In the reading of the minutes it was men- rence, composed of an equal number of minis-- tioned that the Emperor of the French had ters and laymen, met at Birmingham a few promised to prevent the persecution of Prot.. days before the- Conference session, (July 27), estants in the Loyalty Islands.. It was also and reviewed all the financial work of the Cons stated that the Missionary stations were pro' section. Here the subjects of Eduention, ducing an increasing amount for the support Sunday Schools, Missions, Building Chapelsate of their own religious establishments, and their were discussed, and the arrangements made contributions to the Missionary cause had risen deenfed necessary in these departments of the from about 9,000 in 1844 to 41,000 in 1884. work. The following brief abstract made up Mr. Arthur said, after referring to the Mission- from the Methodist and ChristiaR Advocate and ary cause in various places, he was glad to .Tournal shows what is doing in that Conferepee. recognize other laborers in the Mission field.' EDUCATION, There were many fields in which the Methodists "There had beezi some decrease in the num- had no Missionary. He felt=great symliathy,. ber of male students in the 'Training College> moreover, for the multitudesof negroes recent. at Westminister, but this, it was believed, ly freedin the United States of America. And ould n dconti ne b 0 ad e umb rbof then, Napoleon's letter! Who, in 1813, would the year 19" studentskere trained, all f whom have dared to prophesy that in our Missionary had )gssed the Govertiment examination at Committee of 1865 a letter would be received ch m .haSvee bnehTaed i dh at mixing protection and toleration, and that tuition and sent out as teachers. The receipts er signed apoleon ? God bless the for the year had been 4,908, and the pay- missions, and God bless the Emperor ! mente 6,053, leaving a deficiency of 1,145 to Bishop Janes, of America, when introduced, be slipplied from the General Fund. referred to the condition of the freedmen of "The number of Wesleyan Day-schools was the Southern State b elf now 579; scholars, 88,525; averagenttendance, h s, expressing ims very 61,563, being an increase of 17 schools and hopefully as to their future, and claiming 0,192 scholars. The total income of the schools English sympathy and help while they were was 68,084, of which 33,507 was paid by the passing through the present crisis. scholars, and 28,302 received from the Gov- ernment. The total cost of (he schools for the THE CONFERENCE* year was 60,902. The Rev. William Shaw, many years a Mis- "The Sunday Schools numbered 4,986, and sionary in South Africa, was elected President hol ,1boi a innrdas a chainld of the Conference by a large majority. He and 4,792scholars. The average attendance of received 206 votes; the Rev. W. Arthur 58 scholars at the morning sessions was 266,487, votes* in the afternoon, 353,528. Of the scholars, Twenty-six preachers had died during the ,2 n lesomelmbsee ; ,9t0h7e iCh z*b o a913 year--five of them over eighty years oft age' chism classes. Of the officers and teachers. e ages of seventeen of them ranged from 70,426 were members of the Church. The total sixty--five to eighty-six years, and they had cost of sustaining these school for the year been travelling preachers from thirty-six to C had r cappointedras ta five supernumeraries were added to "I h bt d th epo me r it limited period, in conse- above 7feeeon mers ofea umberoof holaars quence of temporary failureof health, but most Schools, and, though some allowance must be of them from permanent disability caused by made for its being the first year in which the age and infirmity. Sixteen of them had ren- re rn has been soughththe hur mdy be re- dered effective service for periods ranging from ve 86,483 inantpoptal of 537,311, or aboutThey thirty-three to forty-eight fears. One of them, sixth of the whole. We have the lar at nu e the Rev. Williant Wedlock, had lost his sight .2-i;- h co u n thqy are attended by adults, who never leave regu r work of the ministry for twenty-six them until they are disabled by sickness or old years. age. Many who cannot read by reason of age, Bishop Janes was present as the representa- 11 aettnddto h tdhe word o Gobd reae ron of dh M E. Church. When he was in- and one of the schedules reports that 'one of welc 4 conferencee he was cordit lly our very faithful scholars died happy inme ome We copy the following : Lordbin December last, in her seventy-fourth CANDIDATES 702 THE MINISTRY. 1sh so oes he m bi w @ th co thOne hundred and forty sixcandidatesoffered tribute to our Church membership. Of 22,995 oneFh nd d Ofdbase five withdrew their offer; scholars, 10,726, or nearl one-half, are above were accepted on 211 were nace ed; six fifteen years of age, an 8,086-considerably emitted into the Theologie% t1fat tutionbe e I I no ammalammisla 0 PLAN OF EPISCOPAL VISITATIONS FInsT DIsancT-Bresor K MI ri Genter "Annan, ason em.*, at Elemilton, Mo, 16th Aug n neenary, St Louis 28d Aug. Louisvill at Russe n Ky., 6th Se8pt Western Va. o Parkersburg, Va, let Nov KansasMission 820 D -B own awards Isnor Entr. rkshT Conference at Joneeboro', Ark. 4th Oct, Virginis ** at El Dorado, 18th Oct North Carolins a at Da ll*, Va.,2195hh For gham, ov. Turn Disaler-Branor Paxil. Mm 8 oriference, ovin n, an., 8NOct, Montgomery at Lowndesboro' Ala, 15th Nov. Mobile Fous.rn sa 'sAs 29th Nov. 8. ORTOlills Goldefence, 44 Marlon, B. O ]St NOY. Georgla at Macon, 18th Nor: Florida at Madisen C. H., 29Nov. Firm asynice-BIsnor ANDREW. Indian Missiott Conference, 4th O t. o Grade 1 hNOct. Ene exas at Mansflold, La 5 N . California 11sh Oct. The next General Conference will meetn New Orleans on 1st Wednesday in April, 1866. needer a hat this ists place seketed, 80uthern Christian Advocate, The re ular re-publication of thisJosig and well.. kndwn erligion an Fa 1 1\Ee ecpospp r*- shnuoth Southp2hns been resumed AT MAcox, GA. Those who want this paper from the beg nLg of P"t a net r.b IPi <5Tito. The blaural re .*1 abe M. E 01 uset- ti..roughout the South are agents of the paper, and are empow- ered to take subscriptions and to give receipts rams: For three months, One Dollat For seven months, Two Dollare For one year, Three Dollars. e e yo a n DeoHamm may be unwise to insist upon our raile, tha[ythe m a cars hde eejr speerdat Ta those persons, for the payment of whose sizbscrip, tions rev th. rn<. Line ( th.* Annual Conference, they w *1 t*,. ,, fe r at Ir na s ep Any person sending 530,00 for subscribers, shall e oh a he ,in ES EYAN PtE Eass oO LnE e5b 24 1865. The Faed I o plete. Ther or the First e ,a ar it a c$ go CI e ... .Ts so on French(op onal). .. ..elase...... . xwic' (with use Ti2s2medif. 22. Drawi ................................................ Is no Painti ............................................. 22 so an ,d Whi g, a e ra3 o of ', it. is theproportionsh the ro low that, by Oct.1st, the bills of patrons will be correspondingly re ngedPisy a dl th as emam ath no omrts, corienrfe ,ermirwar mepeq room roe e a weH as cup, plate, knife, Ar'ka hdr ti thene are no n er own room. Fo J.M.BONNE ,a le a HE CHRISTIAN WITNESS. THE second 0 iTezTs nTIPha so a oe a n I a hbberal c e :ee to 8 Adade ciret to e ex ee 1 as eioug poroprietor is he proprietor of this splr had been a mithister for six- teene a ornt eo ending orhurdches ethe n ee we obl ions anofath a a my a leth es thr als c en a gy pe haitian a not at sh 0 r thed a onqwoh ow rvl so largely under thenameand o rt e oeonuneneman of this ub othem ftheCh tianUnion-amovementwit whichtheeditor EfuHy identifle andwhichwill continueto berepresented an dbe wholl ePnallpolitical disonesions, ri er no titjoC e o beo yo eifin e ha p (2 las yearsadpd ew#a a a e na rom g mt I et wi r to k I a one a informed in re- a obto theleadin herentseTthe do 6 b In as they orga scannes eeia h$c d'ery, who usayn many access to her erit will give a weekly report of the MarketsEastan est. "*Ia fto? I ouroldsubscribersandalsotoadd Come now, friends of re g on, Hberty and a pure C 8- 'rnaus.-TwoDollarsayear inadvan:-, Ar nee--na..s ce)n) at or res g .year, with the ca r.._ a live me. or.. t..."amou-.Ordea. flENTRAL PRESBYTERIAN.---OFFICE 1.1 tr o.= a -r -re.,r wm.D.cooke inthe8toreofloo. N.to.so-, s = ., Prurteenth-street,'between Main and a k u- Ee., ,, ..ar, a ps as .r. Adsun*.r /Two Dollars for .x ..r. arm., m,.v. ..a 1.... rre I. E. .-lt..*as, should be ads w. w anodwad For Subser iKBOWLEDGc oE< eT September 12th, 1865. -- \. L < ..... b. Is r... I an, $10 to debit; W.'P. 1- 3 F r e de<,, fi-3 o r us. Far rs an 0: .0Holt, 3*..),,';",. ' u..aua . eredit re-c 3 m..rive so a ..coets, E-TV T F an i. I n |";y s. HE EPIBOO PALM ETHOI*IST-W ILL non.. .. $s oo At subscriptions are standed to expire on the 1st or January,1soade as to commence a new yournewisthe objects in view, Bad so state on the oath itself, or in pubile, and then continue to preach, it will be Jtistly regarded by every holiorable erson as a shallow artifice, and an unmanly ge. It is better to take the oath outright be done with it, than to p y the sneak for the p rpose whom this address will meet, though we cannot be persuaded that there is even one in the State, we obmmend to him the prophetic words of the Marquis of Ar yle on the eye of his exe. oution-'Mind that tell you; myskillfailame if you who are ministers will not either suffer much or. sin much; for though you go alozi with these men in part, if you do not do stin al thbngs, you aresbut where you were said must tx arh; a if go notat all times with them, "Ruling Elders, and members of the Presby. terian Church in Missouti: you, too, have prin- ci le m iintajhand du les nrforbm th direct infringement your religious prin- ciples, and a direct I tPe7erence with the rights of conscienchecontrary to its own provisions, for it drives you from the pastors of your choice, and forces ypy, if you attend church akall, to listen to preachers who you believe have denied the Lord that bought them. Meet the issue, like christian men. Abstain from all violence and abusive lankguage, but let your purpose be, firm and unsha en." A WONDERFUL CONVERSION. Dr. Parsons has left us-returned to the bo som of the "Northern Church." Kis converse sion was thorough-so fully has he put off "the sold man," that'the very memory of his antece, dentrlife is gone. It is a case ofmetempsyco- als. Methodist Episcopalian, North and South, Protestant Episcopalian, then Methodist Epis- copglian, South, at length he is only a Hetho: manner of man he was. See how he repudi. ates his former self, as reported in the Western Christian Advocate: On the Conference floor, Bishop Morris*said: Brot r Parsons, I have two questions to ask A u, n no 0 wM scribe fixity and faithfully to the anti-slavery pri ip fdhe M.oE. Chu5e ?""Most cheer- *fillly, Bishop. When, yeark ago, the separation took place, and I was thrown on the south side, I felt that I was wrongly placed. I have never of vue t g r once more to be at the id homestead door.- ih p 'The great question' referred to, is the aboli- tion of very. 'The action of theM. E. Church , meeting 'his heartiest approbation,' is the ap- proiral by theGenenal Conference of theeman, citation proclamation, which was endorsed by the Conflirence which the Doctor now joined. Now to say nothing of Dr. Parsons' private letters, his. conversations and speeches, all of hich se mp rgo enies how md nh the church in 1944, he "adhered" South, with the Kent Confe H ble to outPfikr t Gene 1 onfeee e ,just eligi- elected over Dr. Tomlinson, left out for his op position to the separation. He was senge yeaseto so. Louis in affoTTIEFUoiikance, as the mart best able to resist the eneronebments of Northern Methodism, and was afterwards trans. feared to Soqle Chapel, Cincinnati, beyond the border, for the same reason. In many edito- rials in the Nashville Advocate, of which he WR8 corresponding editor, he belabored the North- ern Church for its dishonest repudiation of the plan of separating. He was one of the three commissioners appointed to settle the property que ofoh withrthetN hn Ohu 1 band we obtained the decision of the Supreme Court in our favor. And yet, wonderful to say, this very firm, candid and consistent divinity dog- , tor, felt, "years ago,"thathe "was wrongly .'-he "always Awd a desire to return (o the M L. Church." Why, then, for the honor of christianity, did he not go? Did the "loaves and fishes"outweigh principle? What can be thought of a man who should publicly proclaim himself a hypocrite of many years standing T-- WecanilccountforDr.Parsons'perpetualmash- ings and seene-shiftings, only on the bypothe. sis of a transmigration of the ori pal The *,an intoeverysubsequent phase of histre. **All the world's a stage," and Dr. Parsons is-only enTanz Armers or Missounz.-The St. Louis Re- publicanof the 2d says: The thirteenth annual meeting of the General Association of Missouri Baptists was held in Booneville on the 19th and 21st ult. About fifty members were present, and agreed to decline takir g the all, r...4u.r.v of mip- igters and teachers by Lim rms...unanutear. The reasons for this action are set forth in a lengthy document, which has been sent to us for publican tion. Some of these regons are, in brief, 1st. That the oath is in conflict with the constitution of the United States, as interfering with the freedom of worshipping.God, as ere post/acto in its operations, and as Making every minister who refuses to take it, become a witness against himself. Ed The oath 18 unjust and unequal in its operations. Sd. It prom to nu Isat is real no me to acknowledge an authority in in-s .71.1 that I .. not belong to it, and that human authority is above divine- ** Msanosiss in FRAxes.-The Freneh Methodist Conference tras held in June. On looking over the state of the worJr, it was ascertained that the cause of religious liberty is gaining ground in France - sw 6 imeo or pou ph n 0 e r ee go2 o to students preparing to enter one of the State Churches. In two eases only are there any difficul- ties in opening a new place of worship, aud it is hopedthesedificulties will soon be overcome.-- There are now under the care of the French Con. feretice, 193 places of worship, 28 ministers, 14 ' teachers and colporteurs, 89 local preachers, 1658 members, 168 probationers, 6 day schools, with 215 scholars, 37 Sunday*achools, 258 Sunday-school teachers, with 1859 scholars. There is an increase of 11 laces of wonhi 52 membersdl7 p robnLic Ray. Josurn Waras, 1ate pastor of the Dayles- town Methodist EpiscopalOhureh, has beenappoin- ted by Bishop Bimpson, misalonary to Texas. We REV. E. H.RYTHERFORD (PreSUNOMAD) has been visitffig his former charge InVickaburg. Their ex- collent church edlice was not injured by the born' bartment of the city. Its preservation seems a special providepee of God, for it was greatly ex- posed. The basement was used for some time as a and the divine blessi we b e the congregation ng, op fli will soon be restored to its former vigor.- nee writing the above, we learn that Mr. Rutherford is now in Kentucky, on account of his wife's hosith; and that he has received information from one of his elders of thb attempted occupation of the church by a ininister from the North, and formerly Achap- lain in the Federal army. -Rev. Jno Neill, of MoWe la.,Mwas c Hed he Presbyterian. , *PRESBYTERIAN CHURCHEB IN Nzw ORLEANS - A correspondent writes us: MAll our people here have clung to their former pastors with the noblest fidelity and warmest attachmbat, and they have welcomed ni all back with open arms. They are united, devoted, and earnest, Andthespiritualpros- pect is very gratifying Rev. Wm. A. Hall, formerly chaplain to the Washington Artillery, has returned to his ebarge in New Orleans. They have provided for his support, and rall...J wan crest br.- ergy to the rdenseitation of the cl nr. & tr a r,.. nan mut le ro os6e n a Presbyterian, DAWSON CT. GA. CoxPERENCE -The OV. T. B. Christian writes: We hive much to encourage us rethis en eet'Phe valh vneu cis wide Dawson, seventy were added to the church. Among the number were three of the Federgl garrison- the commandantof the post, and two of his men. At Chidrasawhatchee I have received22; at Salem 12; at Mt. Zion 7; at Macedonia 15, and several meetings are yet to be held. w. Tax Ray. W. H. OAnna, says the Christia? Advocate and Journal, has been appointed by Bish- op Ames to the work at New Orleans. He is a ca 0 eu uA en under Dr. Newman's direction. was hurch nh soood he ehiseign and left the C What irill he do now for a house of worship since our Zion has been made to surrender in Now Orleans ? AtifiECESSION CHuxcad-The C ch of St. A 1 been reorganized in some more contiguous place, and has also given a call to the Rev. O. O. Rey, -nolds, of Hunter, New York.-Evangelist. To Ricy. DE. 8. K. TALMAus, of the Presby- terian Churely so long and favorably known as the President of the Oglethorpe University, near Mil. ledgeville, Ga., died very agddenly at homeon the 2d September. His health had- been very poor for a long time. He was a most estimable christian homan a7sith 1 1 emia to . el9 " THE FIEST PREBBYTERI 1 (AEURCH at N Ille formerly of Texas, on a season or rarress i -as.mpersons evil addedto the churchon profession of their faith. 'I'he people are much en- coxiraged, and our latest information is that they are still continuing their meetings, as many appe to be interested upon the subject of religion.-Ge tral Presbyterian- THE USURPED UKURCHas.-We find in t he pa pers this paragraph:-"The President has ordered the Southern Methodist churches, which have been in the hands of Northern ministers by military agency, to be restoredto the afipisters of the church Neworleann,7t M is djisn flutiesin Mrssisexert III ORGAlilZI G.-Gov. Sharkey is proceeding eneMetically in the re-organization of has. 8 Un adjo as ppx. heo@a 0 prtiiB thepriticipalitemsofitstransactionsso faraswe have seen them reported. An ordinance WA passed declaring null and void the ordinances of esp.:. ar..J rep al / z./." nees eaceteodrdi nance, v.has:t. r on J..r th,- ast r. of the Legisla @ and Bi ed pa ed ra repugnant to the Constitution of tha, Ur. 3 fast , and of Mississippi, prior to January 1861, except laws concerning crimes; enabling railroads to pay monies borrowed by them; repea s all laws author- mer g the payrnerst of due.s to the State .n 4."* af..,],.. ral.,z.: .,, i..audinallaton ...r spent. ..n statear. urst lantisi all all, tal i.ua, pr. ee..JInga, Jun - ot..as I 11.@ ral valu...:.1 prop rty, for wbr.t. ueb h at, were grown : see.,rs.--.. a.ul testirn<*r ? 1 lay takers 1:- J.ra. Tr? Whath*)r .:.r resA L Pratrat*,8 Ware contemplated in specie or currency : ratified all marriages consummated sance January, 1861, whenor m-l..ttus.d t.v e .c.I I:>.us ar...i .>..,as.,n-a .:-r r...t --'fIs., full..wir=r C.:.an-titut....rml amand. moral ps:s.1 L*\ n ball,0 ad Sh 1 li Th-3 Instituts: re or us.,.7 traw.cag beers a true.1 .0 _use State .:-f Mr i app. renter asser y n. r ar.v...Instary ... to .us .. U..s. U.x, as p n.:,nn not ..1 cr.r..e her.... I" 0*.<. s...,r, .I. iv c....1sJ. wbrall "" x. ar. a .xt.. fri .1 r.,. quire, shaltprovide by law for the protection and .... ra .:.s n.... per ...o had y....p, Ir.,-.Jr... r LI f t, .as. and g....a f r., rs. no J tr. us, against no .* )tint rang at m. ... r ,-.pa ad." 1* pa 1.r ratin...t.... 1. L -- 8 tr., a r, .j Ours a ..ru .1 e 3... .:. u .1.1 the onvention in its progress in paying the way to readmission into thoUnion. All obstacles will soon be removed, and says he will restore the writ of habeas corpus, and remove the troops at the earliest moment when the State makes sufficient rogress to have entirely returned toherallegiance. by a theSexample of Migsippi will be followed ulg m nd e r g the P. E. carried to a eucces h b Church into kindlier relations to Le o= --r churches in the land. The coming General Conveintion of the Epiacopal Church, in Oosolaer next, will be one to Rev Dr. John p'otton Smith, to his reply to the Bishop's Pastotal. an effort will be made to introduce, a new canon, intended to allow the clergy of this church more liberal relations with thoseof other denominations. This move- ment will meet with rigorous resistance as well as hearty supportr.E The clergymen identified with it are well known for their perseverance and independence, as well as for a tendency to Tens p tr8t1dqisotubesionu whit thhe B la wI , the vast body of has clergy." In this controversy Dr. Tyng*a reply to Bishop Potter holds a prominent place. As part of the history of the-progrees of Ifigh-churchism it deserves not.ee: and we therefore give ii this paper an culline of its subject matter, ta- ken from The Methodist. Had not the P. E. Church departed so widely from its original principles, as they are set forth in Dr.- Typg's letter, its influpace over the mass of the people would have been far greater thanit now is. As a first elep toward recovering a strong position It utli be v Itlu e-1 v eu of Bishop Potter, and out. I) n he r. Ir. l..eet use Ministers of olher churches on that over plat form that allows of what our Baptist brethren have Isamed pulpit communion," as has al- ways been done in Virginia, and we believe in Kentucky. 'lilE MISSOURI CONFERENCE We gave last week from the Canton Press, the appointments in this Conference for the ensu- aug year. We have since received a pamphlet copy of the proceedings, and add the following items: Samuel J. Huffaker, Samuel Alexander, Jaa. =:::::::.:.dA a b s Taylor, George Penn, C. W. Collect, John F. Shores, R. N. T. Holliday, and Alexander Al- aright, elders in the travelling connection Joseph Metoalf and George Primrose were ad- mitted on trial. E. K, Miller wgs transferred an K on is I cStalir "E Wood were superannuated* W. G. Caples, Edwin Robinson, Johzi F. Young, Geo. L. Sexton and David Reed Smith have died since the Conference last met. A. Monroe, P. M. Pinekard, Wm. M. Rush, C. L. Vandeventer, and B. H. Spencer were elected as delegates to thegeneral Conference; and Horace Brown, Wm. A. Mayhew and Wn, M. Newland, as reserves. ' The foUowing is the Report of the Conference ON THE STATE OF THE CHURCH. In calling attention to this subject, the first f"E::C''s:'v'"th:*."tim=g*=??;, ig:: persecution of the Church and ministry of God more manifest. Never since the days of bloody persecution has opposition to the Church and ministry, and to our common Christianity, been so I dmdedtee A lie iswelo aga e ern times have combinations against the char userbolaimstand cause of the Redeemer seem- aspect.e sTruly h ork n anofe earth la sne themselves, and the rulers take counsel against the Lord and against his Christ." Psa. 2. 2._ Truly1the hople of Ga dwill bedcalled upon to tthe ulersp f tFe dar ne otpo er o Indd and th opir nu8dof cu e f a of 7. butnmihehtTt3rough Gh b h he Church of GMiTsMissouriaver mneom 0 on has had double share; and in persecutions yet tocomewearesxngledoutforspecialattention, We have been denoupeed as "secession, traitor- ousandarebelorganization; asanworthyof giv- iblprotection;danddi ado y dh eG en ment.P'p And this has been done not only by thos1in lowdut also8bmehos n nd c m- ship have been burned, others dismantled and otherwise destroyed. Of the use of otliers we have been forcibly de rived for months, and even for years; and to most painful fsot in this bill of co plaints i that the latter h been done by men prof singtobeChristina ministers! These men have come into our pul. our h oi*- Someefourministersand me shavefallen by the hand of violence, while others have had to flee for their hves. . The principal reason assigned for the bitter persecution with which we have been assailed, as the word &uth, ailized to our ecolesiastical name. Thechargehasbeenmadeandrepeat. ed until it has grown threadbare, that this word in our name, means secession, treasonandrebellion, and hence there is nothing more common than for our enemies to call us The Rebel Church.- And what is more surptising than all is, that y me of intelligence or candor should pro- by, an all oor 848*anmoment he influended mischievous in its design. It seems no t havo occurred to some men, of whom we had expec teddbetter thin abth itdstphoss le orbc te slandep those who are better than themselves It has been said by men in high official positions in this State, tigatif these chargea-are not true, that we are to blame for not having made some efort to set ourselves before the community. Now the fact is si this: we have made : frequent efforts to de ne our ecclesiastical post- . tron, but for the last fewdeals passion and prej- i t m Mass d app dvPha d le simply an alls to the name of our Church, in- dicating the geographical limits of our ecclesiastical JuOisd(ctron. In the t5lan of as ation of the gs,4xualMethodia placopal hurchinAmer- 8," o "two General conference Jurisdic- applied a ro thhe w dh'Soubth h sh lay in the South," to distin art which lay the agaish it from that understanding land 1"Natha withhthe m nal two should invade to tritory of the oth ese was eCplo o di on Church b nameksom aliteen or seventeen years before it the.political meaning which some so k o ve it' Church, we do not wish to excusear justify any conduct on the part of any ministers or mem. bers that may have given just cause for com- laid but do disapprove and regret the same, he truth fe, there is no Christian denomination in MissourI, or in the Southern States, of equal frtth ours. Now,,how can this fact beaccontat- ed for, if this word in our name, be the mla- chievous thing that some say it is ? It is not of doctrinalor political but of ge raphicalandjuris- diedonalim ort. But it may inquired, if the doctrine an discipline of the two churches be the same, why keep up the Southern organization at all ? Why not all be one? We answer, the reasons for keeping up our organization are toe gh edt alf a million of souls look to us for the word of life, and hold us in honor and duty bound to give it to them.. 2. Thousands 9ponbthou wi 1 obonl be I the 1 t Christ and to heaven, if we do not sustain oar organization, as the means qf promoting theirsalva tion. 3. There are multiplied thousands in the Southern States, and elsewhere, who will perish in their sins, if they are not saved through our instrumentality, for they will hear no other mints- ters, and can be reached by no other organiza- tion; and in this view of this subject, no Chris- tran, or philanthrophist, or lover of his country, (m ht wish o ask us, o veith p.thWe t r wall ch vedee adec s it for to agaej *De of the case compelled the organization, rel -r.II require its continuance, But others may say, sustain the organization, ut "q' 0 se name I a Ina 1 ithe w rd youtly wish it waseotherwise, yet a charge our name would not change the hearts and com duct of our persecutors towards us. 2. A change nT necan 8nlby free edmby $1 llGeA' 1886. 3. A change of name would involve the loss of all our church property, or involve a ca it *L I of separate State legislation in order ,, q,, ,, the transfer. 4. A change of name would now inevitably produce strife anddissat- isfaction in our ranks. Hence, however desi- rable such a change may abde, it cannot now be 0 19aD hmmi n I a wisdom deavor to sustain our organization as the means nin*olznontingr ,e so Messiah's Kin e should he the more encouraged to dtt this, in view of the fact that, notwithstanding the sore trials and persecutions of the last five years, by I e a n am h ho a d I cof that the "Lord of Host th d " sailed, but Gopd, He sustaind its, for which we thank His holy name, and take courage I As t.o the temporal condition of the Church, u..t-rac.ng ur .;r.pl.a.... .. as antere.:-le com ittFees on those sFueb5ects. e req...-r..- As it respects our spiritual state, we regret to say, that it is fan from what it ought to be and far from what we wish it to be. And yet, with grat. itude to God, we Fecord the fact, that we em. ble withhnsmarhoor Inzamio Missoutr ,nthe gent and devoted Christiansm the State* * In conclusion, your Committee would offer "I'".:oad had >Hoew uAnan., ces, it is our duty, by every proper, prudent or Christian means, to endeavor to sustam our or ganization, as the meansof sustaining the piety 2Go 's children and the salv vows we forward, as much as lieth In us, queerness, yeace, and love amopg all shka tha scpeoyme & especially% those thittare, or 3. Resolved, 'That, as we have ever done, we still heartily endorse the23d Article of Religion, as sat forth in our book of Disci)pdline, together m en te e b ob dvate /hat we brethren, viz: AsseT lie fs e the Congrestabt Gera El a asU t 8 o}kA @anaeched e the division of power made to them by the Con- no h ir 17niteMt tes, and etens id tadtes are a sovereign and independent nation, aindi ight not to be subject to any foreign ju" 4. Resolved, That we will, with reviewed dili- gence and fidelity, labor for the improvement ofn thiedp itutabco d taboen ofuse onedcoa deZum, cessful against us. 5: Resolved, That we still hold on to our ecs clesiastical platform, viz: 1. Obedience to all proper authority, whether human or divine.- 2. No ecclesiastical interference with political questions. 3.-Theobservanceof allthedlitie% E.rowing out of the established relations of so. hde reachinA 1 fthe ch8piel Twhbout . THE MISSOURI TEST OATH. The minisiters of the different churches in Missouri tesist the- action of the convention ' which requires that those -who have been preaching unchallenged for many years shall now, before they cian perform any clerical func- t ion, take an outh that they have been "always truly and loyally on the sideof the United States against all enemies thereof." The Roman CathoHe ArchbishopofSt. Louis, tells his clergy not to take the oath, but if it be pressed upon thism, to report the circumstances to him. Else- where we publish the deliverances of the Bap- tists and Methodists of Missouri. We coDy the following from an address to the ministers and members of the Presbyterian church in asouri. "And a bretly of all theo statenennso etam us ew is your duty in the premises ? Can ouin od conscience take the oath prescribedin thego constitution Can you take it without dhnew a hu 6, heutLo ,te y n a a o lingl put your hand to an instrume tut a would extort from you the shameless confession of spitatate lersel in the days of the Sairiour, "we havestoking tCanart" Willyousufferthe idle epithets rebel and traitor,'( or the threats of a wicked persecution, to lifake you reetful of your solemn ordination vows, and of the glorioutrecord of your fathers, and of the purity of the chureb; and thus be driven to the commission of an act which will be a foul nud% 11 o bcehlae a telre pale an plicity withp this late rebellion, or to8r tain com- places, may be induced to take th their Trustees of institutions of learn. e on as 1 1 TNI1 It AM PRINTING HOUSE, s... r..l Street, opposite Post011100, MAc ON, GA. W. BURKE & CO., RESPECTFULLY rO OtE h it 5 ab415dthenna eed stan Padv a K-BMD n R e, PA 8,CA'0 eOkGUdES r Merchants, Ba era Broke shers, Traders, 4gents, Me. ete. eOnK-e D ha@nRt e NnGd t eates e. Aug 31-2 POST OFFICES OF THE BISHOP8, PP Erns ur th n nb nERR o tOffice is Lynchbn Va. B18HOP KAVANAUGH reside& before th war at Ver- sailles, Ky. We do nol know that he has changed his resi- deBI HOPSOULE'SPaintOfBeelsNashalleTenn. aw. METHODIST BOOK DEPOSITORY. weiroztlvans, MAcow, oA, TM have on hand a good supply of ELi MEN T.LR SPELLING 9 -olf? }TH RAM NOTE PAPE'it. Ba l il FT)ST EbrYELOPE.S OE VARINis KINE 5. PENs, PENCES, PERHOLDER: DIES [RAlf.. the a ch@o r parents, art that obe THE BALTIMORE CONFERENCE IN 1 IR- I see the spirit like a winged dragon, having Who that has ever trad the screetes is gr.at Phim c..s 1,aran of TAssmar .ie Glass. 18th we c was realating His wl 1: that father and mother GINfA. a long sail. drawing drolet, and flying in the commerual ally base "fuelled to observe she Day of dayor wonder* felt that they had no right to annul his law* Ms. Euron: 1 bove bden reading with much rair, in search of a dwelling plage. Casting has earner.inese .11 (be swelling throng 7 We see When to w rid shall r611 ssander. I So the alght wore away, and the morning broke; interear, the articles in TA.* 1fer40 iss on the re sler s look upon a certain neighbor hoo# bes, fr ex pre- v.1 .n the eye and gall, and hurried Quersenedinfireandsmokeandunur.d r but it brought no peace to the household* oonistruction of our onurch la theSouth.=rnsp,,...soungmaninthebloomof).6.1,,>> .1 **--ch or sh.>buy multitude On our ar.at a usu.t terro wild heart-readinB weighe'G down by the perversenessof he oun8 Water.. But me a Baltimorean, I feel a special is, ti... enror.gthof his pon**F, earturag..o II..- .. x *E*.usians. 01 travel the use a orarness .par.z Of that hourF'wner. earth la ending* rebel. She woke worn and almost ick. ut no ;raterest in that portion of the Baltimore Con- oI in etart. going lor IIme. TI., is. ... .. ., ,. man.Irel .ud Iri all direenorse mm. are .n- And hear jealous Jo age desearlino slubborn as ever. forence wblah has been erreatert Irom her 1.9 the the old hellish dresgon b,. ... n. .,as Ital I as ar, ab. c.ursuis of some real or landed r]ru, Free wisI indeed! What sh graud, suful as ws, L-loo.1. and use bones are full .11 malrow I <.11 *I la closracterseas the age m which as 198 0 LisallumpeLa ob0.3870 Fadstis tely 1( Is! Boar, shrinedina daint deliorate it must be conceded that many or Ibe insuls erasr the gark.inlo his loo=om and wount .u . .. 1... accomplashed great results so she. 14 soundeth' moral of Hesh, it can look out an defy the I are who went oil under the serion of the mrs- I his lustson fire: I will lead laius at .:.m ba.,,,, acer..=e s>f malerialwealth, of knowicdge, and ornrn.,rse world terrible aBent of evil l Glorious worker lorty at the.Truser ... C q,-rence, wele men of ad worse, until he commile ever y ain I all sa y... .1 r...m.an I ap iness. In this ines the abief Death attonished, nature shakesI at govi l Kindest power in creation I .1 sover- enI and prev.ous good standing le the church him a murderer, sad wall plung alas eval sc., <.es I weda y success. 11 has distinguished 5.-3..11creature, as theybyale n e.gn human will What wonder heaven and anas the membership are..vadence of genuine ever beneath the boiling b.ilose 01 th, great triose also have sequired imme or lortune, or To sma dire trolunst soon liell contend lor little Kitty's will. .*0 they do poety, and that mue of the.r territory wr,-., fiery s.,r. ar..:. With thus a hara le-c. ani.r.g -shou.---J abe cause of truth and virrue, as L -! Line B..ok where all la hi.ard. On every one. Erappy the child whose parents .rser to the war, unsurpassed in IIchnese nami wals b ..-1emence --1 bi- claur-car -b.n L.r. 0,. old hurtful error. Nat useret unrecorderl* st-adfamily keep the ri bt side in the conflict I senty. Nor abould 11 be forgotten lbast is:.ee =br. close by the lad use".I.ig.... I.. 1.:,,, .rr. emes *IIIed so religion no given lu'1 Every doom is thenconviarded Keny found an ally the morning. A wo- presobers and peojile land especially the p. 1- a oc emprner.r to enaracter;and conferred rich, 8> the .1ulge:, wla.,r; h-a arra;grath man who ocau; ied the adjoining tenement plel evinced the.r allachment to the claumb of hen ozi the creds the Saviour bting, I mar gooddru many generation=. 1-*=rv revinen thing unplansth lawing learned the strate of thing from time their farbers by mainlanning connection wills n ,. r Jay sons in m.Ja.icist a com we a so uence .n true Uses of those who. Nuct..c.g a.navenges remaineth II ru meban a fkil ghte e Ir Istjon h rf ilap de ul 12gtt7.j swer. r de" on a sause stab I of 1 hbMan a r. Ir.w &3 .,stem., was or hteeriT, to er6 e r was leTda in he fier for n or t a (H an i crI b the drag >n, ri<- al. This place er im i\l b rdd no w hinM Wo. e LI.e gust hath nearce salersu util-r 11 she could help it .111 this fell on sticalanayo I caldearagogues.had Fountof Tove, dreqdKingauperna sore and aching best. The methos had al- directed against **Isaltimore Conference Meth I ee Man again, gsee..nd time hovering in broad and massue fabrio of adolairy as by th., free giv aglifeaternal, reely beers normented with fears, that the heal* odsom." When, therefore, we review In the .r, ar..I seklug lor a Ivong place in a throw of a rurghly convulsion: What but ar. Save the paine infernal! and thirer, and exeitement woubt really be the light oS charity the circumstances of the i ri true low, by nr rat of clear saw he earnest fear, and purpose made she herole Ere thy wrath's last ereoution to her thirty IIpa l then suddenly ser. it aown ed of th e taunton r-r moomy 910 us a nds her singing the allowing staniss. its blooming and prodweiive gardensewherelin are Lo I stand with face suflused, sub 0 riteous look and went away morning 11 Confer ea.:: for much of the same [- as star y and E en L.I. ***** *.rung frame of mind, and with trees of righteousness and flowers of rare beau- Grows..; ... rn; e.via aceawa *,.8, cruel battle between desire and hang for a will be almoul .mpo-adde to avoid a cer.diel vote.. 11..0 m.ght almost m-- bdirock ,- ty and fragrance? Spa- e. .. 8 :.. -s . ,,. ... .s..ela on.* ur. and li.ok winifeth into =0,.:4,. belong v. th. .'ll 1 <.rs sub. I ut to 4 '* ser .o., and of the world. The Church has enough of B 5 n ., r..t.1, it.- rnue tall of imistle milk-el.s.ke 1.e-r Lewi area stan. t.y le..; 1 process from the ople n .. I st...rjoy they see the Lord, intelligence, and no deficieindy of numerical Itoocherish hopeof heaven. 0 0 01bally, read turn any Einy would nest pac her.*slanysi held them in quiet p-:-i-asion, Withoutavoilbetween, strength, or appliancesforabunantsuccessia slink out of the difRealty, though her parent ..;il be attended with much unusppy strare. Then from the grave I shall arise. .her holy enterpriseofanbduing thewoildwilto Though any pray-ar.i ar.: full, would let her. She or tWey must openly sur- 2. The war has wrought great obsagear but And take my3oyful stand, her King. \\nal a most needs is a fresh ISave m--, M TE.; gram are.elor; render. This liul.: display of character a d* if I am correctly informed, there is no anger Among the saints whodweb on high, baptism from on High, a larger infusion of abe From ties pe...i endles waling eleafer than ever man they should do the ch.hi of their uniting with the M. E. Church,.South; Received at God's right had." Spirit of tan-t. and oI th a 2. rd which 'ed Osi Thyright a de-me a cruel wrong in helping to break down thede' their antipathies sin thate direction are well 'This lace is too dry for me," ..se, an.- d. Itimon through or me, and ral imJ sulf -ring. With tiny eno-en +1.16 m mandsof her conscience known, and ig.is said, are unchanged. -. and oPif he flies to the accur ei de 11 rI he cross. Ihe re- to Fr..ire th ag Ala. god r 1 div..i lit the course of the glorning ])(ra. Hart, was 3. Such is their devotion to Methodism, and onFrom the meadow he ascends, like a great Hnish the war k winc h we g.con H.ru to do. Kneelin r he t ro r he 4 ot at igm h in 4 ." by in indications which it is.neePdless to I sany a nG of p adIlkattem tin antici In an f r.C ed a little stratagem for bringing her to terms. 4. I fear ab are not prepared at this time her cot, a ning on her Intle wheel. Ah ness. It was a great treat for any of the children to to return to the Baltimore Conference. If in she is ripe destruction,?' says (Be dra ( 19 The children of t is world are, in their KITTY'S REBELLIOLT ride with him, and one to which Kitty had this opinion I maistakepilehallrejetee; but will give her a tasteof theburni all of am- generation, wiser than the children of light." A TRifE STORY 7 " ".that, when he proposed if not, have a suggestion to make: Let one or nation, and will cast her intout lake that Their eager chose of ear bly good shames our o .... br 1 m man shebduke ow h neor eo uro i 3sio eme Maheapt nd burne U ndbdanet Uo thi 8 p r n ni fr a6drink of cool emonade whirb od on Run ask our mother to please pit on your ad th di n brethren a met return av eh einworn fee 8, rirembl so a .ut the d no word r bide u hake oil **Please, man.. sand her rootl. she h, then?' said th ce}@le face imied She e nh t yaf re cee, 8non r beau at passage Perr the mountain. nall o m d Ilaj mt r le t step as thirsty as Estore for it mu sir, warm, sh ish I sh IItul little girl ho tch where %.0 may alight and liud a welcome. He FED BY GOD. presently tlielittle feec<:am.- patingry back* ed to Ifuln a was making hers if d all THE TEMPLE OF THE HEART. ,.es in a -mail ratings a neat and .le-cent h<*u IfOhristians had more faith, they wortld of- and the thirsty red lips were puttip again for she re and how;I was grieving the dear his- The its a tempis tra the Christian'sheart, or refreathment. **her.=," says he, ***rill I dw. I tener receive direct answere to their prayeiv. a drink. v.our then she knelr. and, ash strong cry- And every st...ught and f -onog worships there 60.2 Inad to bondage every one shal shtil cro The same God whole1 E=2,pab by ravens, and "Kitty; say please." .q ne *.mis. I pkFred that ides=ed spirst who Each, sweetly ..anctined. maints.ns its part the threshold, and make him less in earnal supplied the children of Israel with magna, "Tan't say pe.uo -, lia. L*t*L*t st. r.I ..*'af can alrib lery eart IU 40t=dne rise assistar.ars IC. elevat*d pr..neur humble prs or terrors." I[eflies down like lightning, enters stillhearathepetitionsiofHispeople, andairp. thirsty agailic will, and lady, baby them he.r arm on...ile..I E 3 4 One adT 0 Choir a the house, RBd WAlkBi@toth6)BTEDr; but there plies their wants from He infinue re ources.- playfully as it deemed, but as the wee rebel The grateful mother covered he. wid. 1.-..:,,, There Meditation nders; Mem tands, are talkIng about the victory of Calvary, and A poor minister, witItalarge family depend? began setually to suffer from heat and thirst and bases, and earned her down so the sin*DE The works and p tiderts of her & to trace exebancing appointra nts an ea. ha her.- ingon him, was suddenly left without employ rather thin say "please," it became a various room, where she sprang into her father's arms, Devotion strengthens; glowing Zeal ezopands f he us. Red parat cannes may w= main also sound ment, in the depth of a severe winter. The last chair- lose and pace. The other children came run- 111= ut ward it.:.rm car. rend, us. fr..s ess enter th- r is to dry for me I all resurn to rr y he a scanty m. al frr rhe hungry children. Tae '* thatus, lift Kiny us naug in to see how Krtly coul I -sy please. She from whiob I came out 1"-* ?.ns #<.2. L ,,, a J.atrees ed mother record to rent with her ** Please, mamrun, lift Eq.;, said the moth was ready to hug and hise every body.>; The ORIME OVERREAGRINGalTSELF. Larle ones, but th.* prod m;marer could not, er gently* whole innely Mood around laughing and cry- As needote is related of John Eyre, a man THE THEOLOGY OF (ROVERBS deep. and so in abe darkness of that mid win- the rest of the children, sat acwn to the it tale ,.uda-.i R.ght had triumpned In had I.een as ,s n, which, shows in a strikir. .lory of proverbs in thiiCblieff highest aspec along faith sprung up in his heart, that from but who could eat supper while that pore 1,rric s,-subleerruggle. but in was once lor all Corn narrai ra parity of th. numan nesarr.ani I and that whub makes manylot them an lull of some souge, then an Een to hun, that his outlaw stood usek by the wall meantag oth aba do, 1... rinc., Kuty Hart harmbown bo ok- may help to see unt for me.meanness of anne face.ng to those whow..rd 8 ly befept Ehem, 8 HenVEDif failel HOuid furnish food for them hunger an<{ tburer .' The mother yearnEd I ...,ithn [.1 in-sist sightful author;ty. Her mil crime or which e -tood ours.. [<.1 in unds he conviction of whreh they rare full, the, Wub thli Lhought uppermost in be heart, he take her in her arms a.nri pre her load ra. I cr* s not usaken-that la an ugly phra..-- st to a ] 1 of considerable -see us- all r.ppearances to rise contiary. tb too, suchL repose t"1bjabow e J sh r o it 1 L it r se G 31e dna nu 1.iMesar as ta all I dair f I $ all she wanted, yer she refused to a .t ra -re wilfulness. In the rest or the fail}. to the cust..Jy of the 1 pro perang 1..< ar. hour". th re nothin. in ruirn.:.st burs.ting bestle, 194 pareme told them The question .vM. f..arly as issue-dral.I st'" (r...se purrule usi used sh along to mass, ,iain,- However, not long before bra .lesib. them -0 pr cious as their ini n abat ir, the long theybadaone to give child obey the parents or she parents .ut-ma in Ir ced.I under tax d na[ so real ta win we hav Itered his solu.t with ze ard to the run a well pprove neels tobe .=mb wh.ch be- "I would put on the kerrie, des..".ial.1 me w the, load? [F Le hD Od ADJ COfilUGH 8116ft- [.t Trafatad\ Ph@ 1 I El IB 1508 @fa sH ( IIrit in* L 1 of his wealth, he made ano tier will, in dog no, its it must he well m abs and with rhe father, "sind spread the cloIb ra. usual Tne '61/";!!":0815!$.9"1 E."!i'" :'"0 b so in I b r bl, I ker f h tr d a) as9n.1 bre last yell ettle boiled, she r..uk her 1.xty no her ourn room, and asa te. staid n somell to a ding to them at .ne d entlems teach. Mr. Evre. rimm it. reseng. ." In e impers ble v relimate ro True low ag father eased beside the fire, and :er .1 all, K.aty could nor **say pose," am uses, orv rar Knty so be a soverted or el abe g man, wi o 7 tae f 1 5 n (1el fe 01 each one of us rure (*8 this de strangers. But as the Lord per oth fl;? chil- bell- r man Defore .ks legal., distrieval and .. year *.nd a lialf old is ppea...i that conse.ence hu mur5E re an tools po pen, wbcthe-r me staril separate our elves from dren.es the Lord pitieth them abar fear Ham." ured our, and fairly Rhf[DO'1 HE*Oft file illlic' s gIll"PMel ABA MOWH641 tholf pl.2)* f* '.Il*} Un? II 20ph of a aQ De lone .sks 6000 Arad wil .eats] rio vagorous A knock reas heard at the door, and a kner creause, who 65.1 not inded drink since noon- ehav adil ended abaL long coniser by ch.rd clerg mar co# to town e.:.cm air-, and L.,s anet then.- on a...yo --.eo .n.J 00 was brand..J in I.r the mini.ter then she gen- she canned her to her father, and I..egged him g he Israrr of alone to is besit of II-sh, and ins.n snto t carcumrstances 01 bl.: ol..I cur el Emaibed by them tieman who brought ermaiked a sy. *.io open- -to aske time ute in hand. Mr. Hart began vi n-line b Spent into it cryoner, .4r.1,a. f ther' qr.4-nd' death a kid if he had made a w.11 be- L sun to proverbs such as them arely ine) ing ic general bank tellie were found, which he talk to me young culprd playfully, non mM Its neese cries in ch.Id.heart, meant. naose Isre he daed. On been answered by E.:<- in are ..usetrated with the assurance that, one, was Myaw.t-d to an ..pt. Words cannot de. doubung he abould br.ng her round. He gave ,,r. e an think? Liernal issues me p..ndinW sh at the cler man re-r co.:.U us be ha, Himself be ng the trunk a II make rrurb I .cr be an emorson- or the lamly as each a bl61r ar ormeeTy a 110 pendkal ll EU dd agg e*(. gg|,.d.eam of II -C ..agr. ..2 "' ha . has poekes n5 pulled us the ormer nd mi vial" eds tr sus ret, to r. n ann thn a n chbe B ES!!,!? (HE ???,0'? e.,e:^"th I.:.I:.e., oe m u n r I no n6bedd -- should have to whip her if she dd not mind. aween Federack 111. oI Denme.rk smi tressh-s pe and eil*r n ,s le .00 to 1,?, u vt.a af may or a ne.ghborboad for a day, or It from de a e ra.ned abat me utle.. * MY%";-: -1. Win:ill"G'"ifi "B":'9'"??2 "' ,:ix E on largr ru b b no in a a r elam I hi and nobody had ever guessed how much b.nuelr, ere retir;ng to harabia sounded dre ss..t. Man go.:.d peopl who feel that the gue.... a anne weru to blipshe weald, rand lo carr y every so b.gh, at enlis of mone igh not be unse grit waslatentinthatsoftlittlebosom. Noth, with a drau;L.r of near from a wood..n bottle, 4. rea.s .9 oc.w *r-utlic.ent for then., are never- thrug a tr.umph before th-m Sun the lie. copu=bt ; but behadito 5 ught of the straits sq else would asonal, bowever and [be whip b.-si when an imploring cry from a wounded Sweae itseb.e. J.e.4u...ud walls a fearial appresseusion in that it is a lie, always carrie. n;rbin neell to whch new were reduced. to come. Sail the b.aby remained resout-hears lying on the Seld, made him turn, and with n.r.r an dear. ** the.r hope will to as the givrr-4 st.., 4 unt ..f ar:- over dissolutign. It isbure to Inus on rkably does Gr.d overrule efea the ed, sud far Irom righteousness. the very words of 8 ody, ''Thy need is greater up ot true gboat." II sa related 01 801.011 tries ,destroy itself at last. Ifgpriests may prop it hearts of men to accomplish His purposes of Feverish and e.11causted, with peached I, than mxne," he kn own by the fallen ene- r, use or the mar spre ELar fx reveal di up frotra wirbout. may ... t it on its feet again, lode and inercy f eward an:.ar rello some Hink oryingfordrinkprinfl.:rublyrclassageosp,-.s. mytopourghe liquorin hismouth. Hisre- tret.:.iah..death i.ewas simost overrchelmed afterithaeonefillentrarouthepresenceorthe the little wora wrn- h would brug is, abe was quital was a pastol-shot in the shoulder from us;sh to.= pi.2epeer or martyrdom, and earnestly truth.1st all this wal be Isbor Ir. vain it will &.sr to KEEF TEiL BE4it ARU.-GOLinlO the put to bed in her crats. AH thr.1ugh me narm the treacherousSwede. "Ruscall'' he cried, "I supplicated I.,r he I.ght el God's ounces nee, onir L.1, bb, Ingen, agan to fall, and more sun-thin a i. blur. Under the barras of tbia night she tossed and meanedin h@r anquiet would have befriended you, and youswould enhout any sense of comfort 11.6 darkne shamefullyand irretrievably than before. On talecess...iSun of riarbteousnes there are warmth sleep, or woke crying from thirds but eved murderms10 return! Now willlptanish you. conunued up to the parandol bmarraysug warb as...,,ater hand, the vivacity of 11.4 truth, as and comfort. Walk 10 she fire-that is, t> the then, sleepy and miserable as ishe was, she I would have given you the whole bottle; but in sight of abe stake, when suddenly has whole centuated with thu short lived character of sh.. Word of God ** I. not my word line fire?" would only sob "Tan't say pease," when the now you shall have only half." And drinking soul ssasso Illied whh CODEOlaison, that be could Im.,< well expresseel in a Sw.es proverb: Ir How many wartaing and comforting passages water came near. For the father another, off half himself, he gave the rest to the Swede- not sortat.r clapping has hands, and cryingour, -.2.ee a.2,,,.7 mars, s*elful...r end, r.. oury sh arurA. are there. Keep in motion and action, sur- that was a night of sleepless wretchedness, ry... The king, hearing the story, said foh the He so came' He is some! fle appeared to for. bury at a;.deep as m--a ma<. It will base a ring up ourselves and the gift of God shaz is in lieged only by prayer. They really began to burgherand asked him hd@ be came to spare go up to Heaven to a ebernor of fare, usaggitle ... Junect.on isorwaharandang. They may roJI sp... Chriour.n converse and communIon. How fekr that the child would corner dre than give the life of suchm rascal- or my app seat sensitility of his or atI death a great don -, and seal 1118 sepulebre in abich can one he warm alone. ap Oh pshawl never mind the ease; have no lesid th dho our jet ad TERHED at LILoans -.'>bu N r.>n assys, a La . to rbb stir say r The great thinglor which weenouldseek and her drink," many a father woul have said. to be made a noble," thy king sad. .ed on an- n,.,terr. Beidom come.- to Chr;alians with grant ants.I bout. It cannot die, being of an im- p.ra). .&, that abatever be on lot, it may be a "Poor littlethingl Imust let the mindinggo ted him one immediately, giving n.m as arts, wraputalone or usak e, temperson to comrun a mortal rm-: for. 7 lhi Spanish r.ross-st.nobly ..anonised one. to another time," most mothers watxid have rial bearings a wooden bo= II* ps.-rs,-J on, ass grear ,ain You bring e. green log and a candle declares. TP .. n! .0gger. 0.1. thought: Lut Mr. and Mrs. Kart did ziotsee it arrow! The family only lately k...*am.= extinct logether and they are very male ne.ghbors; hut "I I so. If it was like death for a will to field after in the person ofan old ma iden Rady F;.v.4 bring e. In abanage and set them alight, and IM dd * eighteen months' growth, what would it be af, G'oldenl)eede men br.r.g a few easil suc ks, and in usesn TilE VINE Or 80DOM.-Ill 06 VAN OH M- This orgaird the M. E. Obur. h, Suath, it PU B- ter months and years of indigence? God take fire, and the log be in the mad.t oI them. dan in the ne) hbothood of Jericho; not far LISBED WEEKLY at 110&, to be trasued to Himself. If abe could When Bishop Latimer was on trial s sklifat it le wnb link i.ins. You will be unrded with abundance, the vine of Sodom, a at from v ] lous famil not be made to obey her father whom she had answered carelessly. But presently he heaid th.. Idea or commmin a great san, and so the the ridds around abou dev.:.ted city hfah pro- It is an old and well estabi. ejefug y seen, how abould one become obedient 5 her the pen going behind the tapestry, which was deval brags you a hall tempration, and leave. duces grapes as t.nter sc. gall and winb as blew aper, and hasor, or a .susperamon, re F.ats*-r so heaven, whom abe had not seen* taking down his words: then he war. eareful you ro sudulge yourself "Therw is no him in aeudly to be poleon oi a serporal This dele- EU e a sacher. throgra.:.at the Santh Travery fact abil her WIII BBBoatrong.mble [L vbit he?"ekid tras, two great pers! its abar;" and so by [base tenous Unic &.3 menuoned by Moses id terms are as Ag. nus, n$ will receive, sand g we receipts unperseatrea thear mindathatt it abould b. There fun ali n.consisag g..en beharadis eur lairlochipo we are Grst eamly lighted up, and at vbw .. Fully jusrdy rne tweaersion For their for, aub rgtier... On under abe control of her conses-ace. tainof the skied, taking door. words and aists lau she green log is burned. Watch and pray rin., soor the via.ie of Sodom E.nes of the Selds The termiEnfe. L 8 tr. adnance. bat Ir cone.iders, too h id way b gr h1de folductgmen .u at arose. -16* .un 2,( Jud.u. is Ass ye enter 0.:.[ auto templation ordo sh h rape at grapy g t en oi[ans gr at warettymor me mMt costaratr as protracted, the more likely n seemed, ab a written wit a pen of iron, and rhe point of a RELY urox Yovasarr.--Nover ask a favor. It poison or dragons and the ornel venom of asps. pynsent, of whose ad.acripLlors, BL thealming 0011* e result would be a 'inal one, and the more dirminand." It graves. deep its record on the as better to sulfar than to supplicase; and ask- In c. probably the wild vine a speci-=a of gourd, forenow, they will become respoulble mportant that it should be right, Then the imperiabable tablets of eternits-a seeard of ing a favor even from your dearest friend, or whion produGEA CGIDquintidH, B fruit tD 0200 T ER M 8: anereb.Idren wisobsd been metabing this new every thought, word, and a 1. How ought wq your nearest relative, to only a mild form of ively batter thant cannot be eaten, and when one Dollar for Three Months; 1 r so nich a ouha slabey learn that the authoriry coding pen ng every flour, annoe we know uns ded exertions, or go without it. There in It was of abi d rine thesonsol the ropheta Three Dollars for One Tear s d "rodes eenanunug's tod V on b217d 2 e earned 5 epagued in the bodksh that moor guiT In nu ydno man r ho n I sr kdu led no7f le art or w sh FI oilrare for Two a ; E. H. TESS y explained to the little one cord is a perishable as @to ty. comfort gained though the grams of a ravor, secounts for th r alarm ** * * Contact Us | Permissions | Preferences | Technical Aspects | Statistics | Internal | Privacy Policy © 2004 - 2010 University of Florida George A. Smathers Libraries.All rights reserved. Acceptable Use, Copyright, and Disclaimer Statement Last updated October 10, 2010 - - mvs | http://ufdc.ufl.edu/UF00102121/00026 | CC-MAIN-2018-43 | refinedweb | 16,359 | 71.85 |
Eugen Paris1,064 Points
create a function called square
import math def square(number): return number square
Jamison Habermann11,690 Points
The challenge is asking you to create the function yourself, not use the math import.
You must create a function that creates a square of whatever number is passed in as an argument. A square of a number is the number multiplied by itself. Hope this helps.
1 Answer
Chris FreemanTreehouse Moderator 59,461 Points
You are on the right path. There are a few simple ways to "square a number" in Python
# Let's set number to 5 # to represent the argument passed to the function >>> number = 5 >>> number * number # A number times itself 25 >>> number ** 2 # A number raised to the power 2 25
Post back if you need more help. Good luck!
<noob />17,037 Points
<noob />17,037 Points
Hi! in this challange u being asked to create a function named square that take a paramater, and return the square of the parameter u do it by multiply the parameter by it self. gl | https://teamtreehouse.com/community/create-a-function-called-square | CC-MAIN-2020-24 | refinedweb | 179 | 66.57 |
I have enjoyed my time here at geekswithblogs (even the green monster) but I will be moving my blog to codebetter.com. Topics the same, URL different.
This blog will be moving to
posted @ Friday, June 09, 2006 4:30 PM | Feedback (3)
Some you have probably seen a post from last Tuesday entitled Floating Point Fun. If you have not read this I would recommend going back and reading it before continuing. In this post I discuss some of the interesting things that can happen when dealing with floating point math in C#, it is important to note that these items did not happen in version 1.x of the framework.
The root of these problems is that when in a register the floating point is treated with a different precision than when it is being held in memory. As such you can run into cases where you are comparing a Float32 or a Float64 against an 80 bit register based float. These equality comparisons (or conversions to other types such as an integer) can obviously fail due to the difference in precision.
After tracing through the generated assembly, I found a great reference on the subject at David Notario's Blog. David correctly points out that this is not a CLR/JIT issue, in fact changes like this were eluded to in the CLR spec (there is a quote from the ECMA spec on his blog) or here
There was some documentation on this breaking change in 2.0. Here is the listing from the breaking changes documentation:
What makes these changes particularly nasty is that you are forced to second guess how the JIT works in order to provide consistent results. In my previous post I used the example of
float f = 97.09f; f = (f * 100f); int tmp = (int)f; Console.WriteLine(tmp);
This code will work in either debug or release mode when a debugger is attached, having the debugger attached will disable the JIT optimizations that cause the problem. It does as I describe in the previous post fail when run without the debugger. If we wanted it to work all of the time we would need to write it in the form.
float f = 97.09f; f = (float)(f * 100f); int tmp = (int)f; Console.WriteLine(tmp);
The explicit cast to a float forces it to be narrowed back to a float32, without the narrowing it will actually be in a register as an 80 bit float. As such we end with a predictable behavior of always producing the correct result of 9709.
The problem I have with this behavior is that it is a leaky abstraction. In order to have our code work properly (and to be efficient) we need to know exactly how the compiler and the JIT intends to optimize our code. This introduces a logical problem though as by its very definition we do not know how the JIT will optimize our code. The JIT very well could place this into a register at some times and not at others or the JIT run on a different platform could offer a different behavior than the JIT we tested with.
This becomes especially nasty when dealing constants, consider the following code.
float f = 97.09f; f = (f * 100f); bool test = f == 97.09f * 100f; Console.WriteLine(test);
What is the value of test? The abstraction leaks for both the compiler and the JIT. To start with, are the floats actually being calculated at runtime or is the compiler smart enough to realize that they are constants? In this particular case the C# compiler generates instructions for the first floating point operation but recognizes that the second is a constant value and as such pre-computes the value. These types of scenarios are exactly the type of thing that compilers look for when optimizing.
If the compiler did not recognize the constant expression this might work as both of the calculations would have been done with their result being saved in a register, at that point we would actually have to look at how the JIT handled this case. Both of these items may change based upon environment.
The CLR has basically left the choice to the language as to how it wants to handle these cases. Visual C++ has handled this by providing compiler switches. The link is also interesting as it deals with how the switches apply as well to optimizations that occur within the compiler that can cause further issues. C# does not have many such optimizations at this point but it is only a matter of time before they get introduced.
I would therefore propose that C# should be given switches as well (similar to those available for C++) which could allow for the automatic narrowing of floating point values.
It is often brought up that C# does it the way it does it for performance reasons; it is obviously faster to leave values in registers when possible as opposed to narrowing them. The only way consistent way of doing it is through the use of the narrowing. From all of the studies I have seen, C# is primarily used for business applications where consistency (and reduction of programmer thought) is the primary goal and quite often run-time speed is sacrificed in order to better meet these goals (think abstractions). If an argument can be made for C++ to have an option of a precise switch, I would imagine a better argument can in fact be made for C#.
Based upon this I would also propose that the default behavior of the compiler should be to support consistent operations (/fp:precise in C++).
This switch would not eliminate people from writing code that was dependent upon how the compiler/JIT treated things; it would however force the programmer to make a conscious decision by setting the switch that they were assuming the risks associated with the performance gains. VC++ by default runs with /fp:precise so I would not think it a large jump to make the C# compiler consistent.
As a note for the people I am sure will say, “don't do this .. use a precision range or round instead“. These are simply examples ... I am fine with using these solutions (in fact I normally use range checks). The problem is that code like this crops up regularly and it creates a very subtle problem (that did not exist in 1.x). That and there are times when you actually want (validly) to do an equivalence test on two floating point numbers that should have a consistent value (i.e. results of the same calculation). If these operations are to be disallowed, that is fine as well .. but lets completely disallow them and have the compiler generate an warning/error in the circumstance.
This issue is known by very few, if you agree with the concepts here I ask you to either leave a comment below or to link to the post on your blog. Hopefully getting this knowledge more into the mainstream will both reduce the number of bugs caused by this subtlety and bring more focus on it by those with the power to change it.
posted @ Monday, June 05, 2006 6:41 AM | Feedback (7)
I
posted @ Sunday, May 28, 2006 5:24 AM | Feedback (0)
slick!
This is looking useful!
posted @ Saturday, June 03, 2006 12:32 AM | Feedback (1)
I knew there was a reason I kept Junfeng Zhang's Blog on my list (even during the slow months). I hadn’t checked the blog in a few weeks but reading it now just made my day.
There are two new items listed on the blog. The first is that someone fixed a huge security hole, I have actually run into this particular security hole. Junfeng calls it kernel object name squatting. I have never heard it called by this name but it is a pretty simple problem. Shared objects between process are shared the question is who protects them from prying eyes?
Let’s propose you have the following code (note this is a trivial example and likely has bugs in it but it should illustrate the point).
static void Main(string[] args) { bool Created; Mutex m = new Mutex(false, "MutexWeWillSteal2", out Created); if (Created) { Console.WriteLine("MutexCreated"); } for (int i = 0; i < 100; i++) { Thread.Sleep(200); bool havelock = false; try { havelock = m.WaitOne(5000, false); if (havelock) { Console.WriteLine("acquired lock"); Thread.Sleep(500); } else { Console.WriteLine("Unable to acquire lock"); } } finally { if (havelock) { m.ReleaseMutex(); } } } }
As we can see this application is simply starting up, creating a mutex if it does not exist already then simply obtaining and releasing the lock. You can quite easily bring up two of these applications to notice that they are synchronizing with each other. Using a named mutex like this is extremely common in order to synchronize two processes.
The problem with mutex is not as great as some objects as I can apply an ACL to prevent people not at a certain level from accessing it. Unfortunately I still suffer from a denial of service attack from applications at my own level. One can quite easily use the debugger (or other tools) to find out the names of the objects I am using (!handle in windbg will bring this right up for me). Once I have that name I can write a bit of code such as the following.
static void Main(string[] args)
{ bool Created; Mutex m = new Mutex(true, "MutexWeWillSteal2", out Created); m.WaitOne(-1, false); Thread.Sleep(int.MaxValue); }
Providing this code access the mutex before our other processes, our other processes will just fail. We have effectively made the other application unable to do anything (the basis of a denial of service attack).
What is being introduced in LH is the ability for me to make my two processes share a namespace. As such their namespace can be protected. The malicious program can start but one can avoid it from having access to the mutex explains exactly how it works but basically the processes that want to share the data define a boundary (and requirements to get into the boundary that they both share)
This function requires that you specify a boundary that defines how the objects in the namespace are to be isolated. Boundaries can include security identifiers (SID), session numbers, or any type of information defined by the application. The caller must be within the specified boundary for the create operation to succeed.
The second post I found really interesting was condition variables. Condition Variables are one of the 3 locking mechanisms (along with mutex and semaphore). I do not quite understand the excitement over it (perhaps POSIX compatibility?). I was under the possibly misinformed impression that it was roughly equivalent to Events in windows.
Basically I can in POSIX use
Pthread_signal() which alerts one thread waitingPthread_boradcast() which alerts all of my threads waiting
In Win32 I can use
PulseEvent() but there are two types of events
AutoResetEvent which will only let one thread though and ManualResetEvent which I can use to let some or all through
I am now morbidly curious on the subject; it has been way too long since I was at this level
posted @ Thursday, June 01, 2006 7:04 PM | Feedback (0)
I.
posted @ Wednesday, May 31, 2006 9:23 PM | Feedback (3)
I!
posted @ Monday, May 29, 2006 9:56 PM | Feedback (9)
I am sure by now that most know how floating point approximations work on a computer.. They can be quite interesting. This has to to be the weirdest experience I have ever had with them though
Open a new console application in .NET 2.0 (set to build in release mode /debug:pdbonly should be the default) it is important for me to note that all of this code runs fine in 1.x.
Paste the following code into your main function
float f = 97.09f; int tmp = (int) (f * 100.0f); Console.WriteLine(tmp);
Output: 9708
Interesting eh? It gets more interesting!
float f = 97.09f; float tmp = f * 100.0f; Console.WriteLine(tmp);
Output: 9709
This is very interesting when taken in context with the operation above. Let’s stop for a minute and think about what we said should happen. We told it to take f and multiply It by 100.0 storing the intermediate result as a floating point, and to then take that floating point and convert it to an integer. When we run the second example, we can see that if we do the operation as a floating point, it comes out correctly. So where is the disconnect?
Let’s try to explicitly tell the compiler what we want to do.
float f = 97.09f; f = (f * 100f); int tmp = (int)f; Console.WriteLine(tmp);
Output: 9709 (with a debugger attached, 9708 without in release mode!!) DEBUG:PDBONLY (even with no debug information through advanced settings)
Wow this has become REALLY interesting. What on earth happened here?
Let’s look at some IL to get a better idea of what’s going on here.
.locals init ( [0] float32 single1, [1] float32 single2) L_0000: ldc.r4 97.09 L_0005: stloc.0 L_0006: ldloc.0 L_0007: ldc.r4 100 L_000c: mul L_000d: stloc.1 L_000e: ldloc.1 L_000f: call void [mscorlib]System.Console::WriteLine(float32) L_0014: ret
This is our floating point example that prints the correct value (as a float)
.locals init ( [0] float32 single1, [1] int32 num1) L_0000: ldc.r4 97.09 L_0005: stloc.0 L_0006: ldloc.0 L_0007: ldc.r4 100 L_000c: mul L_000d: conv.i4 L_000e: stloc.1 L_000f: ldloc.1 L_0010: call void [mscorlib]System.Console::WriteLine(int32) L_0015: ret
This is our floating point example that came out wrong above
.locals init ( [0] float32 single1, [1] int32 num1) L_0000: ldc.r4 97.09 L_0005: stloc.0 L_0006: ldloc.0 L_0007: ldc.r4 100 L_000c: mul L_000d: stloc.0 L_000e: ldloc.0 L_000f: conv.i4 L_0010: stloc.1 L_0011: ldloc.1 L_0012: call void [mscorlib]System.Console::WriteLine(int32) L_0017: ret
This is our floating point example that gets it right when debugger is attached but not without
Interesting, the only significant difference between the one that never works and the one that does but only in when a debugger is attached is that the one that does work stores and then loads our value back onto the stack before issuing the conv.i4 on the value.
L_000c: mul L_000d: stloc.0 L_000e: ldloc.0 L_000f: conv.i4
Basically these instructions are telling it to take the result from the multiplication (pop it off of the stack) and store them back into location0 which is our floating point variable. It then says to take that floating point variable and push it onto the stack so it can be used for the cast operation. This is probably something that should be handled for us (by the C# compiler) in the case of our first example so that it works as well as the 3rd example.
The “debugger/no debugger” problem is still our big problem though. The fact that JIT optimizations are changing behavior of identical IL is frankly kind of scary. My initial thought upon seeing the changes we just identified was that the operation was being optimized away by the JIT (storing and loading the same value on the stack seems like just the thing the JIT optimizer would be looking for) thus causing the problem.
The next step in tracking this down will be to look at the native code being generated.
Note: In order to do this you have to enable “Native Debugging” in Visual Studio.
00000000 push esi 00000001 sub esp,8 00000004 fld dword ptr ds:[00C400D0h] 0000000a fld dword ptr ds:[00C400D4h] 00000010 fmulp st(1),st 00000012 fstp qword ptr [esp] 00000015 fld qword ptr [esp] 00000018 fstp qword ptr [esp] 0000001b movsd xmm0,mmword ptr [esp] 00000020 cvttsd2si esi,xmm0 00000024 cmp dword ptr ds:[02271084h],0 0000002b jne 00000037 0000002d mov ecx,1 00000032 call 7870D79C 00000037 mov ecx,dword ptr ds:[02271084h] 0000003d mov edx,esi 0000003f mov eax,dword ptr [ecx] 00000041 call dword ptr [eax+000000BCh] 00000047 call 78776B48 0000004c mov ecx,eax 0000004e mov eax,dword ptr [ecx] 00000050 call dword ptr [eax+64h] 00000053 add esp,8 00000056 pop esi 00000057 ret
This is our native code when started without the debugger (attach to process when its running) 9708
00000000 push esi 00000001 sub esp,10h 00000004 mov dword ptr [esp],ecx 00000007 cmp dword ptr ds:[00918868h],0 0000000e je 00000015 00000010 call 79441146 00000015 fldz 00000017 fstp dword ptr [esp+4] 0000001b xor esi,esi 0000001d mov dword ptr [esp+4],42C22E14h 00000025 fld dword ptr ds:[00C51214h] 0000002b fmul dword ptr [esp+4] 0000002f fstp dword ptr [esp+4] 00000033 fld dword ptr [esp+4] 00000037 fstp qword ptr [esp+8] 0000003b movsd xmm0,mmword ptr [esp+8] 00000041 cvttsd2si eax,xmm0 00000045 mov esi,eax 00000047 mov ecx,esi 00000049 call 78767DE4 0000004e call 78767BBC 00000053 nop 00000054 nop 00000055 add esp,10h 00000058 pop esi 00000059 ret
This is our native code when started with the debugger 9709
(I am fairly certain this disables at least some forms of JIT optimizations)
Unfortunately when looking at the native code it does not appear that this push/pop is being removed. I have to admit that I am very rusty on my assembly language but my uneducated guess here would be that the difference is being seen due to the change from dword values to qword values . In the version that does not work, the operation is being done on QWORD values, in the version that does work it is being done on DWORD values.
If we look we can see that in the working example, it is done in dwords; then changed to be a qword
0000002b fmul dword ptr [esp+4] 0000002f fstp dword ptr [esp+4] 00000033 fld dword ptr [esp+4] 00000037 fstp qword ptr [esp+8]
In the non-working example all operations are done with qwords
00000010 fmulp st(1),st 00000012 fstp qword ptr [esp] 00000015 fld qword ptr [esp] 00000018 fstp qword ptr [esp]
My (again uneducated) guess is that what is happening is that the higher precision of the qword is picking up a small residual causing the result to be off (just slightly i.e. 98.9999999997). This could easily cause the behavior being seen.
Basically this is not so much a bug, as it is an oddity. The CLR is treating floats internally (when its time to do calculations) as if they were float64s (I would imagine since context switching from floating point to MMX is kind slow?? (again not my area of specialty)). This can cause other issues as well if you have something in a register (fresh from a calculation) and something in memory as they are in different formats, the one in the register is still in a native 64bit format where as the memory one will get widenned to 64 bits in order to be compared (as such they will not be equal)...
Back to our first example .. you remember how it was missing the
L_000d: stloc.0 L_000e: ldloc.0
before the conversion to an integer? It is failing because it is using the 64 bit version of the float value (still in a register) that has not yet been converted back to a 32 bit version.
I took my best uneducated guess, hopefully someone smarter than I can come through here and either confirm what I have said or identify the real problem :)
update: I finally found a resource on this and it seems I am in the right ballpark
Another good question is, why is this doing anything at runtime :) Couldn't we multiply the two constants at compile time?
posted @ Tuesday, May 30, 2006 12:35 PM | Feedback (6).
posted @ Monday, May 29, 2006 1:02 PM | Feedback (0)
Eralier I posted (and deleted) about the Queue class not implementing ICollection from some research this is by design.
From:"Some collections that limit access to their elements, like the Queue class and the Stack class, directly implement the ICollection interface."If you look, the interfaces are also very different from each other in what they include. The generic one includes methods such as ... Add, Remove, and Clear which do not exist on the non-generic ICollection.So conclusion .. Queue is correct but I have to say that having the generic and non-generic ICollections that represent completely different things is a bit confusing at best :-/
posted @ Tuesday, May 09, 2006 10:19 PM | Feedback (0)
Be.
posted @ Sunday, May 28, 2006 12:17 AM | Feedback (14)
Hoping that this works…
posted @ Sunday, May 28, 2006 10:17 PM | Feedback (0)
In the creation of my dynamic proxy I ran into some “interesting” fringe conditions....
I started off as all do needing a very simple interceptor generator, this is btw very easily done if anyone is thinking about attempting it. I later decided to add mixin support which was a bit more interesting. That said lets get into some background information to help explain the issues.
Mixins for those who are not aware involves the dynamic aggregation (either at compile, link, or runtime) of multiple objects. When I first did mixins I only supported interface/implementor pairs (I'll explain why after the example). Here's a basic hand done example of what occurs when we are implementing an interface/implementor pair.
posted @ Tuesday, January 24, 2006 12:25 PM | Feedback (0)
Sorry for spamming but I have had a few posts stuck in my head for a few weeks and I am sitting at my desk in a position I have resigned waiting out my time. Obviously I will not be receiving any new projects and all old projects are on wait of other resources = Bored Greg.
Continuing with the dynamic proxy I created a DSL for runtime based aspect assignment.
One thing that I did which was a bit different than anything I had seen done before was that I allowed attributes to map aspects.
In the assignment language you can say
Assign to Attribute
or
Assign to Method Level Attribute
Assign to Class Level Attribute
The first statement will assign an aspect to any method that has the attribute defined at either it's method level or at the class level that contains it.
The second statement will assign an aspect to any method that specifically declares the attributte at it's method level
The last statement will assign an aspect to any method that has the attribute declared at the class level.
One can also add an optional Having clause where you can access the public properties of the attribute
i.e. Having RoleType=“Admin“ and UseSecurity=true
This simply allows you to filter the assignment.
This addition was invaluable to my dynamic proxy and IMHO really helps bridge the gap between aspect based code and attribute based code. The gives me the big pro of having a simple code based declarative method of defining a behavior and at the same time it keeps me from having to pay the penalty of going through reflections (and a special handling object) in order to process the attributes.
There are some pitfalls to this methodology, one of the largest is that since the metadata is defined as an attribute a re-compile is required to change it. I am at this point leaving it to the developer to decide when it is good to use the attribute and when it is bad :)
What is really nice about this is it allows me to define “Types“ of join points via the use of attributes. In other words this allows me to group method types which are similar points to simplify my aspect assignment (1 line instead of hundreds or thousands). It also allows me to go and refactor those places that were using attribute based programming to use aspects instead quite easily :D
Anyone else have any thoughts on this?
Greg
posted @ Tuesday, January 24, 2006 1:01 PM | Feedback (0)
I
posted @ Tuesday, January 31, 2006 12:20 PM | Feedback (0) | http://geekswithblogs.net/gyoung/Default.aspx | crawl-002 | refinedweb | 4,064 | 69.31 |
fdopen()
Associate a stream with a file descriptor
Synopsis:
#include <stdio.h> FILE* fdopen( int filedes, const char* mode );
Arguments:
Library:
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
Description:
The fdopen() function associates a stream with the file descriptor filedes, which represents an opened file or device.
The filedes argument is a file descriptor that was returned by one of accept() , creat() , dup() , dup2() , fcntl() , open() , pipe() , or sopen() .
The fdopen() function preserves the offset maximum previously set for the open file description corresponding to filedes.
Errors:
- EBADF
- The filedes argument isn't a valid file descriptor.
- EINVAL
- The mode argument isn't a valid mode.
- EMFILE
- Too many file descriptors are currently in use by this process.
- ENOMEM
- There isn't enough memory for the FILE structure.
Examples:
#include <stdio.h> #include <fcntl.h> #include <unistd.h> #include <stdlib.h> int main( void ) { int filedes; FILE *fp; filedes = open( "file", O_RDONLY ); if( filedes != -1 ) { fp = fdopen( filedes, "r" ); if( fp != NULL ) { /* Also closes the underlying FD, filedes. */ fclose( fp ); } } return EXIT_SUCCESS; }
Classification:
Last modified: 2013-12-23 | http://developer.blackberry.com/native/reference/core/com.qnx.doc.neutrino.lib_ref/topic/f/fdopen.html | CC-MAIN-2014-10 | refinedweb | 190 | 69.79 |
Asking problem - Java Beginners
java problem Write a program that could be used to help children practise their subtraction problems. The
problems involve only integers between 10 and 99, inclusive. The program should start by
asking the user how many
Open Jgraph Problem - Java Beginners
Open Jgraph Problem Hi,
I have go through the example given...;
GraphLayoutManager layoutManager;
// Get a VisualGraph
editor = new GraphEditor... the user select ?add new node? the new node will appear. Following the above example
Java Example program to get the current working directory
Java Example program to get the current working
directory
This example java... object we can get
two types of paths in java. These are as follows
java problem - Java Beginners
java problem Hotel.java
This file declares a class of object which... will represent the room number in the hotel. For example room numbered 2... from a small Java program
java problem - Java Beginners
java problem Room.java
This file defines a class of Room objects... with beds, tariff , and is
vacant.
Example:
Room with 2 beds, tariff 100.00... will represent the room number in the hotel. For example room numbered 2
problem
problem Hi,
what is java key words
Hi Friend,
Java Keywords are the reserved words that are used by the java compiler for specific... information, visit the following link:
Java Keywords
Thanks
Java get System Locale
Java get System Locale
In this section, you will learn how to obtain the locale. We are
providing you an example which will obtain the locale of the system
problem
problem hi'sir mai niit student hu.mujhe java ka program samaj me nhi aata mai kya karu and mai kaise study karu please help me.
Learn Java from the given link:
Java Tutorials
Problem with display of images in applets - Applet
information. with display of images in applets Hi all,
When I run... void init() {
img=getImage(getDocumentBase(), getParameter("img"));
}
public
Java interval problem - Java Beginners
Java interval problem I want to create a problem that finds the common interval of some periods of hours.
For example if i have the time period... interval of these two (which in the example is 11.00-12.00).
Which is the best way
Java Get Method
Java Get Method
In this example you will learn how to use the get method in Java.
Java... to use get method in Java.
The example is going to show the date details Get Memory
Java Get Memory
In this section of Java Example, you will see how to get the memory... can
easily learn it.
In this Java Get Memory Size example, we have created
java programming problem - Java Beginners
/java/java-tips/data/strings/96string_examples/example_count.shtml
http.../java-tips/data/strings/96string_examples/example_countVowels.shtml
follow...java programming problem Hello..could you please tell me how can I
Java Servlet : Get URL Example
Java Servlet : Get URL Example
In this tutorial, You will learn how to get url... can add query
string parameters.
Example : In this example we are using
getRequestURL() method to get current URL.
GetURLExample.java
package
code problem - Java Beginners
code problem Dear sir, my problem is given below:
suppose a file...; Hi friend,
Code to help in solving the problem :
import java.io....);
FileInputStream fstream = new FileInputStream("textfile.txt");
// Get
Java program problem
is a path to a directory.
Example:
java DuplicateFinder c:\Documents...Java program problem Hi,
I took the chance to login here in your educational page due to my questions in mind about my java programming assignment
Java example to get Object class name at runtime
Java example to get Object class name at runtime
java get Object class name
In java...; by calling the method getName() we
can get the name of the object class.
In our example
Java example to get the execution path
Java example to get the execution path
get execution path
We can get the execution path of the system in java by
using the system property. For getting execution
java programming problem - Java Beginners
java programming problem we are given a number ,num. ( 0<2 )
we have to represent the number(num) in the form of a,b,c,d... get the solution.(for BSc Math Hon's Student
coding problem - Java Beginners
coding problem hi friend!
Im new to jasper reports.how can i start that coding inorder to generate reports.can u send some sample programs for reporting?im badly need some clearly mentioned example because im new to jasper
java swing problem - Java Beginners
java swing problem i doesn't know about the panel in swings here i had created one frame then created a panel and i added that to my frame but which... CreatePanel () {
super("Example Class");
setBounds(100,100,200,200 get windows Username
Java get windows Username
In this section, you will learn how to obtain the window's username. We are
providing you an example which will obtain the window's username
java servlet connectivity problem with access
java servlet connectivity problem with access Import java.sql... an SQL query, get a ResultSet
ResultSet rs = st.executeQuery("SELECT NAME...=con.createStatement();
^
i am confused what is the problem example program to get extension
Java example program to get extension
java get extension
To get the file name and file... "filename.ext"
is some filename with extension is provided. To get
Get IP Address in Java
Get IP Address in Java
In this example we will describe How to get ip address in java. An IP address
is a numeric unique recognition number which....
In this example We have used for method:
InetAddress().
getLocalHost
java Problem
java Problem I want to create a binary tree for displaying members in Downline. i am creating a site for MLM(Multi-Level MArketing). tree must be dynamically populated from database. is there any help for me.
Thanks in advance
Java program to get the desktop Path
Java program to get the desktop
Path
In this example program we have to get the desktop path
of the system. In the java environment we can get the desktop path also
Get Image Size Example
Get Image Size Example
This Example shows you get the image size... "java.awt.headless" is set to true.getDefaultToolkit is used for get
Java Get Memory
problem on marker interface - Java Beginners
problem on marker interface i want to know about marker interface....
Here is simple example:
public interface roseDemo{
}
public class...";
}
}
-------------------------------------------------------
Visit for more information:
concept Understatnding problem - Java Beginners
concept Understatnding problem Even though I have studied in detail inheritance & interfaces in java , I fail to understand "How Interfaces Help in Multiple Inheritance ?" . Pls. Supplement ur ans. with an example. Thanx
Java example program to get the environment variable
Java example program to get the environment variable
java get environment variable
The getenv() method of the java.lang.System
provide us the functionality to get
Get form value in same page - JSP-Servlet
Get form value in same page Hello friends,
Can we get a form field value in the same to be processed in java coding. For example
problem regarding autoboxing - Java Beginners
problem regarding autoboxing hello all ,
i have a problem... args[]) throws IOException {
System.out.println("Example of value comparing...://
Thanks.
Amardeep
Java Get Host Name
Java Get Host Name
In this Example you will learn how to get host name in Java. Go through...;
Java code to get host name
import java.net.
Get Image
Get Image
This Example shows you get the image.
Description of the code....
Toolkit.getImage() :
getImage method return a Image class object and this object get
servlet program problem - Java Beginners
servlet program problem
i used ur servlet example prg with xml file of helloworld program and i run dat program in tomcat, it shows only... file to run it Hi Friend,
Please clarify your problem.
Thanks
JComboBox Display Problem - Java Beginners
(true);
Image img = Toolkit.getDefaultToolkit().getImage("logo.jpg
printout problem
information, visit the following link:
java type casting problem - Java Beginners
java type casting problem what is type casting? what is the type of type casting? explain with example? Hi Friend,
Please visit the following link:
get absolute time java
get absolute time java How to get absolute time in java
Get Usage Memory Example
Get Usage Memory Example
... in understanding Get Usage
Memory. The class Memory Usage include the main method... ( ) - This method return you
the total amount of memory that your java
Servlet Problem - Java Interview Questions
Servlet Problem I need to get an alert message when the submit button is clicked again while processing...
how is it possible...??
Thanks in advance
How to get the output of jsp program using Bean
How to get the output of jsp program using Bean Hello my Roseindia... i created in Java and compiled
<%@ page language="java" import="beans" %>
<HTML>
<HEAD>
<TITLE>Use Bean Counter Example <
java get file permission
java get file permission How to check the get file permission in Java
printing example - Java Beginners
printing example Is it possible to print java controls using print method?
My problem is to print a student mark list using java?
The mark list should like that of university mark list
Java file get name
Java file get name
In this section, you will learn how to get the name of the file.
Description of code:
You can see in the given example, we have created... get the name of any file.
Output:
File name is: out.txt get byte array from file
Java get byte array from file what is the code example in Java... code in Java. Following tutorials will teach you how to achieve this.
Java example for Reading file into byte array
Reading a File into a Byte Array
Thanks
Java example program to get Operating System type or architecture
Java example program to get Operating System type or
architecture
java get OS type
In java... by using the system property.
Here this java example code displays how one can get
Get Environment Variable Java
Get Environment Variable Java
In this example we are getting environment variable. To
get... of the System class and gets the information about environment variable.
get
Java program to get current date
Java program to get current date now
In this example program we have to get...:mm:ss format.
Here is the full example code of GetDateNow.java
Java file get size
Java file get size
In this section, you will learn how to get the size of a file.
Description of code:
You can see in the given example, we have created... is: " + filesizeInKB + " KB");
}
}
Through the method length(), you can get User Home
Java get User Home
In this section, you will study how to get the user home. We are providing
you an example which will obtain the home directory by using
Java Get Time in MilliSeconds
Java Get Time in MilliSeconds
... the particular time, go
through the below given example that illustrate how to get...;
Java Syntax to get time in milliseconds
import
Java get Next Day
Java get Next Day
In this section, you will study how to get the next day in java using
Calendar class.
In the given example, we have used the class get File Type
Java get File Type
This section illustrates you how to get the file type. In the given example,
we have create an instance of the class File and passed the file 'Hello.txt
Java problem - Java Beginners
Java problem what are threads in java. what are there usage. a simple thread program in java Hi Friend,
Please visit the following link:
Thanks
Get current working directory in Java
to get current directory in Java with Example
public class...Get current working directory in Java
In this section we will discuss about how to get the the current working directory in
java. Current working
java input problem - Java Beginners
java input problem I am facing a Java input problem
Java Strings problem - Java Interview Questions
Java Strings problem How can you find a word which is repeating in maximum number of times in document, which is in text format in java? ...
FileInputStream fstream = new FileInputStream("Filename");
// Get the object
How to get Java SDK
How to get Java SDK Hi,
I have purchased a new computer and installed windows 7. Now I have to get the Java SDK and install on my computer for developing and testing Java programs.
So, can anyone tell me How to get Java SDK
Get computer name in java
Get computer name in java
We can get the computer name by the java code program....
Here is the full example code of GetComputerName.java
as follows:
Java Get Month
and how
to get the current date and time. But in this example, we are going to show the
current month of the existing year. This Java get month example is simply...
Java Get Month
Java get middle character
Java get middle character
In this tutorial, you will learn how to get middle character from the word.
Here is an example of getting the middle letter of the word entered by the user
using Java. If you have a word string 'company
Help with cypher and key problem in Java - Java Beginners
Help with cypher and key problem in Java So far I have a program that asks a user for a message and a key amount and displays an encrypted message... message: "); //get the message and the key from the user
message=reader.nextLine How to create executable file of a java program.That is steps to create a Jar File of a Java Program Hi Friend,
Try the following code:
import java.io.*;
import java.util.jar.*;
public class
Jsp/java-script, spring combination problem
Jsp/java-script, spring combination problem Hi Friends.......
I am developing one application using jsp,spring,java script,hibernate,Pojo... same method get called.
for that when i select element from location list | http://www.roseindia.net/tutorialhelp/comment/82989 | CC-MAIN-2015-14 | refinedweb | 2,333 | 55.03 |
16 July 2008 09:59 [Source: ICIS news]
SHANGHAI (ICIS news)--China-based Ningbo LG Yongxing Chemical would likely cut operating rates at its acrylonitrile-butadiene-styrene (ABS) plant due to squeezed margins, a company source said on Wednesday.
“Skyrocketing feedstock prices and low demand plagued our margins severely,” the source said, declining to elaborate how much it would cut the operating rate.
"We will not rule out the possibility of price hikes if the operating rate reduction does not offset losses."
BD has surged around $1,180-1,250/tonne to $3,050-3,150/tonne CFR CMP (China main port) from early May, while SM also increased $205-230/tonne to $1,645-1,690/tonne CFR CMP.
The 500,000 tonne/year ABS plant, located in ?xml:namespace>
The company is a joint venture of LG Chem and
($1 = €0.63)
Paris Lv from CBI contributed to this article. | http://www.icis.com/Articles/2008/07/16/9140517/lg-yongxing-may-cut-abs-rate-on-margin-squeeze.html | CC-MAIN-2013-20 | refinedweb | 152 | 61.26 |
[Date Index]
[Thread Index]
[Author Index]
Re: basic namespace question, and access to BinarySearch?
It is not ideal, but it is current intended. When version 8 was released, it
was deemed that the new Graph functionality was sufficiently advanced and
useful to release, but it did not replicate all the functions of
combinatorica. A decision was therefore made to keep Combinatorica but
deprecate it. With each release more functionality is built into the kernel
and eventually combinatorica will be removed.
>.)
I don't think that BinarySearch does what you want; it is much more similar to
Position for a list than a tree search algorithm. As for that, I'm not aware
of built in functionality. I think you'd have to roll your own solution, since
a graphs need not be trees, or even directed/acyclic. Others may know better
than I on this specific point.
> 3. I see that a BinarySearch function is in the GeometricFunctions package,
> which in turn is available by default. But I cannot find any related
> documentation. What do I make of this arrangement?
GeoemtricFunctions` is an internal package (which probably needs a better name
to make this clearer) which is used to support the plotting of splines and
such. It is not intended for public consumption.
--
Itai Seggev
Mathematica Algorithms R&D
217-398-0700 | http://forums.wolfram.com/mathgroup/archive/2013/Sep/msg00050.html | CC-MAIN-2015-11 | refinedweb | 222 | 56.96 |
The simplest possible starting point for scalable logging on your Kubernetes cluster
So you’re the proud new owner of a Kubernetes cluster and you have some microservices deployed to it. Now, if only you knew what the heck they were all doing.
Implementing log aggregation is an imperative in the world of microservices. Things can get out of hand quickly — as our application grows, it’s easy to become overwhelmed trying to figure out what’s going on.
You’ve probably read other blog posts on this subject. They are usually about Fluentd and Elasticsearch. That seems to be the default solution, but the setup is usually overly complicated and difficult.
What if you want to understand what’s going on under the hood? Good luck with that. What if you’d prefer a light-weight solution? Try again.
In this blog post, we’ll explore what it takes to build your own Kubernetes log aggregator. We’ll look at an example Node.js microservice called Loggy that is designed to forward logs from Kubernetes to an external log collector. This won’t be enterprise-ready, but there’s no reason we can’t achieve that later. For the moment, we’ll focus on building something that’s suitable for a small application (possibly an MVP) and along the way we’ll learn about Kubernetes logging architecture and DaemonSets.
Logging architecture
How does logging work in Kubernetes?
Figure 1 gives you a graphical depiction.
Figure 1: A conceptual log aggregation system on Kubernetes
We have a Kubernetes cluster with multiple nodes. Each node runs multiple containers (themselves each contained within a Pod). Our microservices application is composed of all the containers across the whole cluster. Each container produces its own output, which is automatically collected by Kubernetes and stored on the node.
We need a log aggregation system to merge these log files and forward the output to an external log collector. We can implement such a system on Kubernetes using a DaemonSet. This is how we’ll run our Loggy microservice on every node and from there give it access to the node’s accumulated log files.
Figure 2 shows how the output from each container is collected to separate log files that are then picked up by Loggy:
Figure 2: Our log aggregator microservice (Loggy) can access the logs from every container running on the node
Getting the example code
Example code for this blog post is available on GitHub at
The example code is designed for you to replicate the results in this blog post.
To follow along you’ll need:
- A Kubernetes cluster to experiment on (preferably not your company’s production cluster).
- A Linux-style terminal to run commands from:
- I use Ubuntu Linux,
- MacOS should work, or
- Git Bash on Windows
- Docker installed so you are ready to build and push images.
- Kubectl installed and authenticated so you are ready to interact with your cluster.
Basic logging
If you have not already tried the more basic approaches to logging with Kubernetes, you should try them first before delving into more advanced log aggregation.
If you are already setup to use the Kubernetes dashboard, that’s probably the simplest way to view recent logs from any pod.
Otherwise, use the kubectl logs command to pull logging from any pod:
kubectl logs <pod-name>
These basic approaches are a good starting point. You can get a lot of mileage with them in the early days of building your application on Kubernetes.
Sooner or later, though, you’ll want to aggregate your logs into a single place where it’s easy to view and search them.
Something to log
Before we can experiment with log aggregation, we need to have a container running and generating output.
Listing 1 is a YAML file that deploys a counter pod to Kubernetes. This creates a container that generates a continuous stream of output.
apiVersion: v1
kind: Pod
metadata:
name: counter
spec:
containers:
- name: count
image: busybox
args: [/bin/sh, -c,
'i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done']
Listing 1: YAML for a Kubernetes counter pod to generate output
Deploy the counter pod to Kubernetes with the following command:
kubectl apply -f ./scripts/kubernetes/counter.yaml
Give the counter pod a few moments to start and then check that it’s generating output:
kubectl logs counter
You should see a continuing sequence of output from the counter pod.
Exploring Kubernetes log files
Now let’s create a test pod that we’ll use to explore the log files collected on a node.
Listing 2 is a YAML file that deploys our new pod as Kubernetes DaemonSet so that the pod runs on every node in the cluster. We’ll use this to get access to the log files.
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: test
namespace: kube-system
labels:
test: test
spec:
template:
metadata:
labels:
test: test
spec:
serviceAccountName: test
containers:
- name: test
image: ubuntu:18.04
args: [/bin/sh, -c, 'sleep infinity']
volumeMounts:
- name: varlog
mountPath: /var/log
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
volumes:
- name: varlog
hostPath:
path: /var/log
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
Listing 2: YAML for a Kubernetes testDaemonSet to explore log files on a node:
In Listing 2, we’re simply starting an Ubuntu Linux container and using the command sleep infinity to make the container run forever (doing nothing).
Note how the directories /var/log and /var/lib/docker/containers are mounted into the test pod. This is what gives us access to the directories containing the log files.
Deploy the DaemonSet to the cluster with this command:
kubectl apply -f ./scripts/kubernetes/test-run.yaml
Give it a few moments to start and we’ll have the test pod running on each node in our cluster. Run the get pods command to list the pods:
kubectl --namespace kube-system get pods
We are only listing pods from the kube-system namespace, which is where we deployed the new DaemonSet. In the list you should see the test pods.
There should be one pod running for each node in your cluster. If you have three nodes, you’ll see three of these test pods.
Pick the first test pod and run a shell in it like this:
kubectl --namespace kube-system exec -it test-4hft2 -- /bin/bash
Just be sure to replace test-4hft2 with whatever the name of your pod actually is. That is the generated name for the pod and it will be different in your cluster. This opens a command line shell in your container so you can issue commands to it.
Let’s use this to explore the log files.
Change directory to the mounted volume that contains the log files:
cd /var/log/containers
Now view the content of the directory:
ls
You should see a bunch of log files here. Find the log file for the counter pod. If you don’t see it, it probably means you are connected to the wrong test pod.
Because it’s a DaemonSet, you have one of these running on each node in your cluster and you might have connected to one that’s running on the wrong node. If that’s the case, try connecting to the other test pods in turn until you find the one that has the counter pod’s log file.
Print the content of the counter pod’s log file like this:
cat counter_default_count-7a6a001c407ef818ea85f28685b829f51512d70b18a4bf01f.log
You’ll need to replace the name of the log file with the name that you actually see in your container, because the unique ID it generates will be different for your cluster than it is for mine.
The content of the counter pod’s log file will look something like this:
{"log":"0: Sun Dec 1 03:33:17 UTC 2019\n","stream":"stdout","time":"2019-12-01T03:33:17.223963224Z"}
{"log":"1: Sun Dec 1 03:33:18 UTC 2019\n","stream":"stdout","time":"2019-12-01T03:33:18.224897474Z"}
{"log":"2: Sun Dec 1 03:33:19 UTC 2019\n","stream":"stdout","time":"2019-12-01T03:33:19.225738311Z"}
{"log":"3: Sun Dec 1 03:33:20 UTC 2019\n","stream":"stdout","time":"2019-12-01T03:33:20.226719734Z"}
You can see immediately that each line of the log file is a JSON object. Scanning the structure of this log file gives us some clues on how we should go about parsing it.
When you are done exploring the log files in your test pod, delete the DaemonSet like this:
kubectl delete -f ./scripts/kubernetes/test-run.yaml
Tackling log aggregation
Before we can really tackle log aggregation, we have a bunch of questions we must answer.
How will we find the log files?
Globby is a great npm package for finding files based on globs.
We’ll use the following glob to find all the log files on each node:
/var/log/containers/*.log
How will we eliminate system log files?
The log file directories on any given node contain logs not just for our application but also for all the pods that make up the Kubernetes system.
We usually just care about output from our application, so we need a way to exclude the system log files (including the Loggy microservice itself).
Fortunately, we can also do this with Globby by using a glob to exclude the system log files:
!/var/log/containers/*kube-system*.log
The exclamation mark says that we’d like to exclude log files that match that pattern.
Putting both our globs together gives us a specification that identifies the log files of interest:
/var/log/containers/*.log
!/var/log/containers/*kube-system*.log
How will we track a log file?
As a log file is updated we’d like to be notified of the new output.To do this, we’ll use the node-tail npm package.
There’s actually a bunch of options for this kind of thing, but I’ve gone with the one that has the most stars on GitHub.
How will we be notified of new log files as they are created?
As new pods are deployed to the cluster, new log files are created.
We need to be watching the file system to know when new log files become available to be tracked.
To do this, we’ll be using the npm module chokidar.
Again, there’s other options for this kind of thing, but I’ve used chokidar before and am happy to use it again.
How will we parse the log file?
This part is easy. We already determined that each line in the log file is a JSON object.
We’ll be using the node-tail library to receive new lines as they come. We’ll parse each incoming line of output with the builtin JavaScript function JSON.parse.
Where will we send each log entry?
This part is entirely up to you and depends on where you want to store your logs.
You could store the logs in a database in your cluster, but that’s not a great idea — if you have problems with your cluster, you may not be able to retrieve the logs to diagnose the issue.
It’s best to forward your logs to an external log collector. A good way to do that is sending batches of logs by HTTP POST request. In this blog post, we’ll only go as far as printing out the logs as they arrive in Loggy.
Loggy: possibly the world’s simplest log aggregator for Kubernetes
This brings us to Loggy, possibly the simplest and smallest log aggregation microservice for Kubernetes.
Listing 3 shows the full code for Loggy.
Read through it and notice the following:
- globby identifies the log files;
- tail tracks each log file for new output; and
- chokidar watches for new log files.
const tail = require("tail");
const globby = require("globby");
const chokidar = require("chokidar");
//
// The directory on the node containing log files.
//
const LOG_FILES_DIRECTORY = "/var/log/containers";
//
// A glob that identifies the log files we'd like to track.
//
const LOG_FILES_GLOB = [
// Track all log files in the log files diretory.
`${LOG_FILES_DIRECTORY}/*.log`,
// Except... don't track logs for Kubernetes system pods.
`!${LOG_FILES_DIRECTORY}/*kube-system*.log`,
];
//
// Map of log files currently being tracked.
//
const trackedFiles = {};
//
// This function is called when a line of output is received
// from any container on the node.
//
function onLogLine(containerName, line) {
//
// At this point you should forward your logs to an
// external log collector.
// For this simple example we'll just print them
// directly to the console.
//
// The line is a JSON object so parse it
// first to extract relevant data.
const data = JSON.parse(line);
const isError = data.stream === "stderr"; // Is the output an error?
const level = isError ? "error" : "info";
console.log(`${containerName}/[${level}] : ${data.log}`);
}
//
// Commence tracking a particular log file.
//
function trackFile(logFilePath) {
const tail = new tail.Tail(logFilePath);
// Take note that we are now tracking this file.
trackedFiles[logFilePath] = tail;
// Super simple way to extract the container
// name from the log filename.
const containerName = logFileName.split("-")[0];
// Handle new lines of output in the log file.
tail.on("line", line => onLogLine(containerName, line));
// Handle any errors that might occur.
tail.on("error", error => console.error(`ERROR: ${error}`));
}
//
// Identify log files to be tracked and start tracking them.
//
async function trackFiles() {
const logFilePaths = await globby(LOG_FILES_GLOB);
for (const logFilePath of logFilePaths) {
if (trackedFiles[logFilePaths]) {
continue; // Already tracking this file, ignore it now.
}
// Start tracking this log file we just identified.
trackFile(logFilePath);
}
}
async function main() {
// Start tracking initial log files.
await trackFiles();
// Track new log files as they are created.
chokidar.watch(LOG_FILES_GLOB)
.on("add", newLogFilePath => trackFile(newLogFilePath));
}
main()
.then(() => console.log("Online"))
.catch(err => {
console.error("Failed to start!");
console.error(err && err.stack || err);
});
Listing 3: The simplest Node.js microservice for log aggregation on Kubernetes
The only thing you have to do with listing 3 is decide what to do with your logs. You could, for instance, use HTTP POST requests to send batches of logs to your external log collector.
Listing 4 shows the YAML file to deploy Loggy as a DaemonSet to Kubernetes.
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: loggy
namespace: kube-system
labels:
test: loggy
spec:
template:
metadata:
labels:
test: loggy
spec:
serviceAccountName: loggy
containers:
- name: loggy
image: <yourcontainerregistry>/loggy:1
volumeMounts:
- name: varlog
mountPath: /var/log
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
volumes:
- name: varlog
hostPath:
path: /var/log
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
Listing 4: YAML file to deploy Loggy to Kubernetes
The most important thing to see in listing 4 is how the volumes are mapped from the node into the container. The directories /var/log and /var/lib/docker/containers on the node contain the collected logs from all the pods on the node. These directories are mounted into Loggy’s container so that Loggy can access the log files.
Before deploying the DaemonSet in listing 4, you’ll need to set <yourcontainerregistry> to the URL for your own container registry.
You have to build listing 3 into a Docker image, the Dockerfile for it is provided in the example code repository, and push the image to your container registry. Now you can deploy Loggy to your Kubernetes cluster with this command:
kubectl apply -f ./scripts/kubernetes/loggy.yaml
Loggy is now collecting logs from all your microservices. To view the pods that were created run this command:
kubectl --namespace=kube-system get pods
You can pick out each instance of Loggy from the list of system pods (you’ll have one for each node in your cluster). You can then use the logs command to view the aggregated logs for each node (because Loggy is simply outputing to the console). For example:
kubectl --namespace=kube-system logs loggy-7h47q
Just change loggy-7h47q to be one of the instance names that you pick from the output of get pods.
Now you just need to update Loggy to make it send your logs somewhere outside the cluster!
Conclusion
In this blog post we explored the basics of log aggregation on Kubernetes.
We deployed Loggy to the Kubernetes cluster as a DaemonSet. That’s to make it run on every node in our cluster. We mounted the node’s filesystem into Loggy’s container and this gives Loggy access to the log files on that node. From there the logs can be stored in your database or HTTP POSTed to another service.
To complete this system for yourself, you must now augment listing 3 and store your logs in a place where it will be easy for you to view and search them.
Resources
Create your own Kubernetes cluster quickly:
You can learn more about building with applications with microservices with my new book Bootstrapping Microservices:
Example code for this blog post is available here:
Here is the documentation for the Kubernetes logging architecture:
Kubectl cheat sheet: | https://codecapers.com.au/kubernetes-log-aggregation/ | CC-MAIN-2022-21 | refinedweb | 2,838 | 62.78 |
Archives
Improved CodeSmith Templates for ORMapper
The following is news from Paul Welter, who's been gracious enough to create and update CodeSmith templates for my ORMapper:
I've updated the tempates on the CodeSmith site above. This update contains many new features. The biggest new feature is merge support. Now you can author new code in the generated entity class file and not loose your changes when the templates are re-run. The merge works by updating marked regions in the file. All your modifications will be saved as long as you don't edit regions marked as 'do not modify'. Another improvement is that it now generates a DataManager class that contains a singleton instance of the ObjectSpace class. Having this common singleton allows for generation of some default methods for each entity like Retrieve, RetrieveAll, Save and Delete. The goal of these updated templates was to abstract the ORMapper from the UI layer by creating methods off the entity objects. This leads to more of a true object oriented design. No longer does a consumer of the entity objects have to know any ORMapper syntax. Working with the entity now looks like ...
Customer cust = new Customer();
cust.FirstName = "Paul";
cust.LastName = "Wilson";
cust.Save();
All the ORMapper tracking and persisting is handled internal to the customer class. This is a much more intuitive syntax to work with.
There is also a new set of templates to generate NUnit test classes for each entity class. The test classes are very basic and designed as only a starting point. Another advantage of the improved way of working with the entity classes is that it is much easier to author tests for.
The templates should be used in the following order ...
1)MappingFile.cst - This generates the mapping file for the WilsonORMapper.
2)ClassGenerator.cst - This generates or updates the entity classes defined in the mapping file. This template also generates a DataManager class if it doesn’t already exist.
3)TestGenerator.cst - This generates NUnit test classes for each enity class. This is an optional template to use.
Thanks
Paul Welter
I did not know SQL Server Views are Static
I have a SQL Server view defined to be "SELECT * FROM Table WHERE MyCriteria". I intentionally coded it with "SELECT *" since I wanted it to be all fields no matter what -- just a subset of records. A new field was added to my table recently -- no problem -- at least that's what I thought anyhow. But this new field did not show up in my view afterall -- and Enterprise Manager still shows my view as being defined with "SELECT *". So I dropped the view and recreated it -- that did the trick -- my new field is now in my view where it should be. What's up with this behavior?
The Latest Crap on Code-BePart in ASP.NET v2.0
Fritz Onion has a new post about the latest twist to the ASP.NET v2.0 Code-BePart (code-behind, code-beside, code-bewhat, ...) saga. This really makes me think they should have just left code-behind as it was in v1.*. I was initially opposed to these code-behind changes when it was introduced back at the first private preview in October 2003 to a few of us. But most of us relented when the ASP.NET team insisted the code-behind model was too "brittle", as well as too difficult. While there was certainly truth in this, and certainly beginners in OO have issues understanding code-behind, it just doesn't look like they've succeeded in the end here. Now it appears to be a monstrosity that's not OO, not simple, and not even as functional as the original model which at least allowed you to pre-compile the code-behind while leaving the design non-compiled so designers could modify small things in it if necessary. Yes, they've added the ability to pre-compile everything, including the aspx design pages, which is important to many, but just as many of us depend on the aspx design being left as is while still protecting our code-behind. Code-behind was at least pretty standard OO, and now that you get full intellisense in a single file there isn't much of a reason to try to bring simplicity to the code-behind model, so why oh why did we muck with this?
Bug Fix: WilsonORMapper v3.1.0.1
I inadvertently broke most custom providers in my WilsonORMapper v3.1 -- this is fixed in v3.1.0.1. Those that were manually adding their own parameter names in their mapping files were not affected, which was also why I didn't notice this in my testing. Of course the recent change in the MySql provider did not help make this any easier to spot. Thanks to David Dimmer for helping me track down the bug in my code -- and just when I was bragging about stability.
MySql ADO.NET Provider Change
MySql recently acquired ByteFX which made the ADO.NET provider for MySql. They recently released a new version of their provider with the MySql namespace instead of the ByteFX namespace which also introduces a small but significant change that will affect all .NET MySql users. You now have to prefix your parameters with "?" instead of "@"! For backwards compatibility you can add "old syntax=yes;" to your connection string to force "@", but I'm not sure if this will be supported forever or not. Note that with the WilsonORMapper this means that you specify yourCustomProvider.ParameterPrefix = "?", or add the "old syntax=yes;" to your connection string (but don't do both).
UI Mappers Making News -- OR Mappers Are Common
Jimmy Nilsson recently hosted a small architecture workshop in Lillehammer that is making the news now. Oh how I wish I could have went, but alas it was a little too far way and costly for me. Anyhow, so far I've found two things of interest to me in Mats Helander's posts. First, they didn't spend much time discussing O/R Mapping because its pretty much a given now! Its been that way in Java for quite some time, and I do believe its getting there in .NET with your top architects too. Unfortunately, I don't think any such claim should be really made in .NET just yet, since most typical .NET developers still haven't even heard of O/R Mappers in my opinion. But it is nice to see that at least among my peers that you don't have to spend time justifying O/R Mappers any more.
The other item of interest to me was the discussion of "UI Mappers" -- apparently both Patrik Löwendahl and Roger Johansson are building their own UI Mappers now (update: Roger is building an O/O Mapper). Of course, as my readers know, I've had a UI Mapper in beta for some time, and in production at a client, so this really makes me think I need to find the time to finish it up. :) By the way, I'm sorry for tooting my own horn, but I'm just thrilled to see them use the term "UI Mapper", especially with the likes of Martin Fowler being present, since I invented the term on this blog 6 months ago (at least to my knowledge) ! Oh well, now I need to find the time to finish it up, adding support for other O/R Mappers and 3rd party controls -- what do my readers think about the concept?
SQL Server 2005 and Limitations to Assembly Loading
I've recently started reading "A First Look at SQL Server 2005 for Developers" in my spare time (yea, I'm not getting very far since I don't have much spare time) and I came across something rather limiting I think. It says that you must be logged into SQL Server using an integrated security login, as opposed to a sql server account, in order to create a .NET assembly in SQL Server 2005. The rationale given was that this was necessary in order to check if the user should have access to the file system location where the .NET assembly is to loaded from. That does make sense, but it seems that implies that shared web hosts won't be able to easily allow us to use .NET assemblies on their SQL Servers -- am I missing something here? Of course I'm not convinced that I would actually want a shared SQL Server on a shared web host allowing .NET use anyhow, since I don't want my data access slowed down by someone else playing with .NET stored proc, but I hadn't realized this limitation would exist either.
Unusual Chinese Fortune Cookie
My son Zack (nearly 7) got the following fortune in his cookie:
"You have an unusual equipment for success, use it properly."
Hmmm . . .
The Best O/R Mappers: NHibernate, LLBLGen Pro, and EntityBroker
Frans calls me to task on my last blog post where I said:
."
Sorry Frans if this offends you, as that was not my intent at all. I've tried most of the mappers out there, and there's a reason why I mentioned LLBLGen Pro, NHibernate, and EntityBroker -- they are the best out there in my opinion! I think I've also proven that I do in fact recommend many people to your mapper and the others, so I was not trying to say anything negative at all. Do I think your mapper is easy to use -- absolutely -- and I also think most other mappers are easy to use. LLBLGen Pro is also probably unique in that it actually gets easier to use over time, due to the code gen approach that you take which makes intellisense possible. But I still think that too many people totally new to O/R mapping get frustrated and quit when it takes longer than 30 minutes to get working the first time. Is that fair? No, its not, but that's the type of developer that the MS community often brings us -- they download our cool products and get frustrated when our products can't read their minds and tell them what they are doing wrong -- then they quit and go on their way content in their belief that O/R mappers are not the right approach. And that's been one of my goals -- too give people an entry point that is simple enough that anyone can use it in 30 minutes or less -- then if they need more they will be much more willing to consider the other mappers. Are there people that get yours working in 30 minutes? I'm sure there are, but I seriously doubt that the average MS developer can get most O/R mappers, and many other cool tools for matter, in 30 minutes or less -- and I don't think that's a negative statement about your mapper. As I said earlier, LLBLGen Pro is probably unique in its code gen approach, which probably makes it actually get easier to use over time -- and that's very cool. Your mapper is hands down the only mapper I would recommend to anyone that those that prefer code gen -- yours and not mine -- although that's not my personal preference.
Next Frans, you asked how many databases my mapper supports really supports? MS SQL Server, Oracle, Access, MySql, PostgreSql, Sqlite, Firebird, DB2, VistaDB, Sybase, and lastly I think SqlCE. All of those have people that I know for a fact are using my mapper with them, except for SqlCE which I know some people were interested in using but I never heard back to know if they succeeded or not. Furthermore, unlike other mappers, if you work with another database that I have not listed, you can probably get it to work with my mapper without writing or modifying any driver code -- no recompile necessary. But yes, you are absolutely correct too when you say that I don't "really" support all theses databases, if you what you mean by that is supporting features that are peculiar to individual databases. Do I support sequences? Yes, but it does require that you know how to set it up in your database, which yours probably does automatically. Do I support joins, aggregates, group by, having clauses? No, not even on one database -- as I have said on many occasions, LLBLGen Pro has far more features than mine, as does NHibernate and EntityBroker -- mine simply targets the most common 80-90% (or more) of CRUD, with or without stored procs, while giving the user a decent DAL for the other cases. Many people may read that and immediately choose your mapper, or one of these others, and that's absolutely the right thing to do if you need these features, but many people have also apparently decided that they were quite content with a mapper like mine too. For instance, I actually do work with joins, aggregates, group by, and having clauses -- in my databases -- that's right, I'm quite comfortable writing a view or stored proc and mapping it. That's a "heresy" too many purists -- but I like databases -- my mapper doesn't shield me from the database -- it simply allows me to avoid writing all the boring and repetive CRUD and start working with objects right away.
I thought about ending here by saying when I would recommend LLBLGen Pro vs. NHibernate vs. EntityBroker -- but that would probably just cause more issues since I would be making some generalizations to some degree. So instead I'll end it by just challenging everyone that reads these blogs but still hasn't tried an O/R mapper to just try one and see for yourself for a change. And I'm very sorry if any of my statements, here or earlier, are generalizations that may be debateable -- that was not my intent and I apologize sincerely. I consider myself an O/R mapping evagelist more than an O/R mapping vendor (and I'm certainly not a fulltime vendor, nor do I make enough money to quit my day job) -- but there is a fine line that sometimes I inadvertently cross in my comments.
How do you decide what features to add or cut?
Ever since my WilsonORMapper hit v3.0 back a few months ago things have been very smooth. By that I mean both that there have been very few bugs and very few new feature requests. In other words, its reached a mature point and it meets most expectations. Lately I've been readying v3.1, and I had to make some decisions on what to include.
Some feature requests are easy to decide to include -- they are easy to code, affect little else, and are often requested. Examples of this were the desires to map properties (as opposed to just member fields) and to have a public ExecuteScalar method. Note that I actually don't like mapping properties and ExecuteScalar isn't really necessary, but they were still included. There were also some other requests that were easy to decide to include, even though they were not often requested. These were still easy to code and affected little else, but they also provided some real value even if they were seldom requested. Some examples of these were adding support for multiple mapping files (or multiple embedded resources) and output parameters for stored procs.
Next there were a few requests that were easy to not include -- these aren't easy to categorize, so lets look at examples. One case was Whidbey generics and nullable types -- these were often requested, and they may be easy to code and affect little else. But's let be realistic -- these are still in beta 1, changes may be possible, and few really need them yet. But note that these will be one of my top priorities for v4.0, probably in the beta 2 timeframe when there is a go-live license. Another case is that a few people don't agree with my assumption about unchanging primary keys, also called surrogate keys. I try not to force my personal tastes on others, thus I decided to allow properties to be mapped now, but this is one assumption that is to integral to my mapper. I hesitate to say that those that disagree are doing something wrong, but there must be a few basic assumptions, especially with a "simple" product.
But then there are the requests that are hard to decide on whether they should be included or not -- these are especially difficult when people send you code. My mapper does support entity inheritance, but only the most minimal database inheritance -- this would be a huge plus to add to my list of features. But this is a big change, probably affecting a lot, no one has sent me any code, and this is a minor point update. So this did not make the cut -- it might make sense in v4.0 though. Another thing my mapper supports is composite keys, but not composite key relationships -- at least not until now. This was also a big change, and it certainly did affect a lot, but someone did send me some code on this one. Now I should point out that just because someone sends me code does not mean its done -- someone else's code usually solves their cases, but not all the other generic cases, so it can still be a lot of work.
Finally, there was one case that really required me to make a difficult call -- one-to-one relationships. My mapper supports one-to-many, many-to-one, and many-to-many relationships, but not one-to-one relationships. This is not trivial to implement, and it also affects a lot of things, but it would be a big plus on my feature list -- and someone sent me some code! But this did NOT make the cut -- that's right one-to-one relationships are still not supported by my mapper, and likely never will. Why you ask? First, there is an easy work-around -- one side of a one-to-one relation is actually a many-to-one relation, and the other side is a one-to-many relation where the many is always equal to one! If you don't like to see that, then that's what a property is for -- leave the member field an IList, but make the property be the strongly typed object with the getter and setter hiding the fact that you are actually always working with the 0th index object in a list.
But isn't this requested enough to justify making it easier? That's where the hard call came in -- and I decided that it is not worth the additional complexity. That would be one more thing to have to explain on the end-use side, and it would complicate the codebase greatly. That's because every relationship type has to be handled for new and existing objects, lazy-loaded and not, dynamic sql and stored procs, and now for single and composite keys. That's a lot of cases -- a lot to code (the code sent me handled only the few pertinent to that person), a lot to test (I still haven't tested all the cases of composite key relationships), and a lot for the next person to worry about! And that last part is one of the most important things for my mapper -- the simplicity of the codebase itself. This is why I get so many user contributions -- they find it easy to extend when there is something else they want.
So I have consciously chosen to keep my mapper "simple", although I think that I can safely say that it does meet most people's needs already -- far beyond the most common 80-90% that I was originally shooting for..
WilsonORMapper v3.1 Released One Year Exactly After v1.0
WilsonORMapper v3.1.0.0 (released on 1/7/2005) includes the following:
New Features:
Map Properties or Member Fields -- My Preference is Member Fields
Mappings can be in defined in Multiple Files or Embedded Resources
New ExecuteScalar Method and Stored Proc Override for ExecuteCommand
Relationships for Composite Keys -- Warning: Very Little Testing
Added Support for Output Parameters with Stored Procedure Options
Improvements:
Improved Embedded Objects -- Multiple Levels, Interface Optional
Mapper attempts to Resolve Mapping Paths or Load Embedded Resources
Better Exception Handling, No longer Catching and Eating Exceptions
Improved Parameter Typing necessary for Providers that do not Check
Recursive BaseType Check for Inheritance Support in ObjectSet Add
Bug Fixes:
Added Date Delimiter for Access, Date Format for Access and Oracle
Support Providers that do not support Timeouts, like MySql ByteFX
Default Parameter Names for Fields with Spaces/Dashes in their Names
ObjectHolder Key Setter for Null Values, Other Isolated Bug Fixes
ORHelper: Small Improvements and Fixes, VB Code for Initialization
And a special thanks to each of the following contributors to v3.1:
Chris Schletter () -- Mapping Properties, Embedded Objects
Nick Franceschina () -- Help with Composite Key Relations
Stephan Wagner () -- Command Timeouts, Parameter Names
Ken Muse () -- Date Delimiter, Date Format
Alister McIntyre () -- Stored Proc Output Parameters
Stephen Roughley () -- Default Parameter Names
Gerrod Thomas () -- ObjectHolder Key Setter
I am: Not nerdy, but definitely not hip.
Beautiful Weather in Atlanta -- Great for Rock-Climbing
We took the kids to the Atlanta Zoo today -- beautiful weather to get out -- so much for winter. They had a kids area with a small rock climbing area big enough for adults too (barely) -- so I gave it a try and made it to the top -- a lot harder than it looks, but great exercise. The monkeys seemed well behaved, and of course they know how to groom each other, but there didn't seem to be a way to trade my monkeys for theirs. :)
My Highlights of 2004 and Goals for 2005
Professional Highlights in 2004:
- WilsonORMapper Released and Matured -- Simplest O/R Mapper and Supports Most Databases
- WilsonXmlDbClient Released OpenSource -- Work with Xml Data using ADO.NET and SQL Syntax
- WilsonWebForm moved to GDN OpenSource -- Multiple Server Forms and Non-Postback in ASP.NET
- WilsonUIMapper in Development and Beta -- Runtime UI based on Mappings to Business Classes
- Mixed ASP.NET Security Article on MSDN -- Mix Forms and Windows Authentication in Single App
- Quit Corporate Job and went Independent -- Work From Home and Control Application Architecture
- Develop App with OR/UIMappers for Client -- Doing an ASP.NET Application the Right Way Finally
- Large Successful Server Installation for Client -- About 20 Servers at MCI supporting Tons of Traffic
- Worked with .NET WinForm App on Citrix -- More than I wanted to know about GC and .NET Memory
- Attended MVP Summit and Met Many MVPs -- IIS7 Goes Modular and ObjectSpaces Delayed til 20??
- Update WilsonORMapper Periodically -- v3.1 ASAP has Composite Relations, Properties, Multiple Files
- Release and Mature WilsonUIMapper -- Need to Add 3rd Party Control Support and WinForm Runtime
- Develop the next Killer .NET Project :) -- I've got an idea for this one although how much time will I have ?
- Update Site and Projects to Whidbey -- Add Generics and Nullable Types to the WilsonORMapper v4.0
- Articles, MVP/PDC, User Group, etc. -- Attend MVP and/or PDC Conference and More User Groups too
- Sold our own House without an Agent
- Built and Moved into our new House
- Setup our Home Office with new Desks
- Family Vacation at Discovery Cove
- Worked at Home with Kids in Summer
- Zack's 6th Birthday at Hobby Store
- Tori's 7th Birthday at Horse Farm
- Tori and Zack Finished Kindergarten
- Read and Helped in School Classes
- Jenny Diagnosed with Breast Cancer
- Support Jenny to Beat Breast Cancer
- Celebrate with Nice Family Vacation
- Exercise More and Get in Better Shape
- Help Read More with School Classes
- Start Finishing Basement and Yard | http://weblogs.asp.net/pwilson/archive/2005/01 | CC-MAIN-2016-07 | refinedweb | 3,997 | 66.37 |
> From: Matt Benson [mailto:gudnabrsam@yahoo.com]
>
> --- Jose Alberto Fernandez <jalberto@cellectivity.com>
> wrote:
>
> > All this discussion about roles brings me back to
> > the
> > proposal/implementation
> > of Roles that I made a long time ago and that was
> > rejected.
>
> If it works and solves this problem, it's okay with
> me. My only concern was the complexity of the code
> needed to make such a thing work. I suppose if we
> introduced this it would be simplest to
> comma/space-delimit classnames in a task|typedef
> properties file; e.g.:
>
> and=org.apache.tools.ant.types.selectors.AndSelector,
> \
> org.apache.tools.ant.types.resources.selectors.And
>
The main reason for allowing multiple things with the same name
is that they can be defined that way by different providers.
You could have that if you load several 3rd party antlibs on the
same namespace (which I think you can do by loading explicitly).
One could think on adding all your antlibs explicitly to the ant:core
and remove the usage of NS completely, but that would be a user choice.
(this could be a way to deal in a BC way with optional stuff, I.e.,
put it in an antlib that loads into ant:core NS and hence uses default
prefix).
When you have more than one lib on the same NS you have the posibility
of name clashes
and hence the issues about role resolution. So it is not about one
property file
is about multiple sources of definintions.
Jose Alberto
---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@ant.apache.org
For additional commands, e-mail: dev-help@ant.apache.org | http://mail-archives.apache.org/mod_mbox/ant-dev/200504.mbox/%3CEA9D95F907E2924A9A32F274114BCDEF0D4618@perth.Cellectivity.local%3E | CC-MAIN-2014-49 | refinedweb | 271 | 55.74 |
Daniel John Debrunner <djd@apache.org> writes:
> Knut?
I don't know the details about how the optimizations are performed in
those cases, but it shouldn't be a problem for the methods in
ArrayInputStream though, since they barely have method calls. A
typical method looks like this:
public final char readChar() throws IOException {
int pos = position;
byte[] data = pageData;
if (pos >= (end -1))
throw new EOFException(); // end of file
int c = ((data[pos++] & 0xff) << 8) | (data[pos++] & 0xff);
position = pos;
return (char) c;
}
And I would assume that this equivalent method would run just as fast:
public final char readChar() throws IOException {
if (position >= end - 1)
throw new EOFException();
return (char) (((pageData[position++] & 0xff) << 8) |
(pageData[position++] & 0xff));
}
Although I doubt that it would run any faster. But perhaps the jar
file would be a couple of bytes smaller! :)
--
Knut Anders | http://mail-archives.apache.org/mod_mbox/db-derby-dev/200611.mbox/%3Cx7y7pv7840.fsf@Sun.COM%3E | CC-MAIN-2019-09 | refinedweb | 142 | 52.73 |
Jonathan Coxhead wrote OSLib as a private project during he time that he was employed by Acorn. He is the sole copyright owner of the library, together with the OSLib maintainers who share the copyright of certain aspects of the package. OSLib is now in the public domain under the GNU General Public Licence (GPL), nonetheless, we ask you to respect Jonathan's copyright.
The GNU General Public Licence stipulates that any program which embodies any GPL code must itself be distributed under the terms of the GPL. In particular this means that source code must be made available for any such program. This is clearly unacceptable for any proprietary, commercial, code.
The GNU Lesser General Public Licence relaxes this requirement somewhat in only requiring that binary object files be freely available, thus protecting investment in proprietary code to some extent.
The OSLib copyright owner has relaxed even this requirement, as explained elsewhere and any code linked with OSLib may be distributed in whichever way the developer sees fit. In other words, OSLib may be freely used in the construction of proprietary software. However, do please consider joining the ever increasing free software movement, by placing your work in the public domain. Free source is good; hoarded source bad!
OSLib originally defined file handles to be 8 bits wide. This was based on inside knowledge of the OS, and even under RISC OS 4, no file handle greater than &FF is ever issued. However, the PRMs do specify that file handles should be 32 bits wide for future compatibility.top Home
This has left the OSLib maintainers with a bit of a dilemma. It is our faithful promise to never break an existing interface, but, clearly, the present situation could not be allowed to continue.
OSLib V6.0 went some way to resolving the problem by defining a 32 bit file handle, OS_FW, and a set of functions to use it. The intention is that they would be used in place of the legacy OS_F and its associated functions. However, many users felt this was non-intuitive behaviour, and pleaded for the return of OS_F, but in a 32-bit guise. Unfortunately, it was impossible to simply change the type of OS_F, because that would cause many programs to break if they relied on 8 bit file handles.
The problem was finally resolved in OSLib V6.3, by adding extra header files which, by default, makes OS_F a synonym of OS_FW, and does likewise with their associated functions. However, it leaves OS_F and its friends as symbols in the library, to allow legacy code to be linked correctly. Therefore, any new compilations will, by default, use 32-bit file handles, but 8-bit compatibility is assured.
One further refinement to this scheme is that this name translation can be disabled by defining the constant OSLIB_F8, which will cause the headers to revert to their previous behaviour, and thus allowing anyone who particularly needs to retain 8-bit handles to do so. This is best achieved by passing -DOSLIB_F8 in any makefile or command line when invoking the compiler.
A rule has been adopted throughout OSLib that any 32-bit field upon which arithmetic can be performed is typed as a signed int. This makes it straightforfard to do comparisons. It is acknowledged that using an unsigned int gives an extra bit of information in the absence of a long long int, but this would be at the expense of usability, and therefore considered a bodge. If you really need the extra range, cast it to unsigned yourself.top Home
Regarding os_t, code such as the following from PRM 3-185, is made much simpler with a signed int, and correctly handles wrap-around - which an unsigned wouldn't:os_t newtime = os_read_monotonic_time(); while ((newtime-oldtime) > 0 ) oldtime += 100;
There is really nothing special about using OSLib with C or assembler programs.top Home
A generalised command line for Acorn (Norcroft) C using standard C, OSLibSupport, and OSLib would be something like this:
or:or:
cc -IC:,OSLibSupport:,OSLib: C:o.stubs OSLib:o.OSLib OSLibSupport:o.OSLibSupport -o program.o program.c
cc -IC:,OSLibSupport:,OSLib: -c program.c -o program.o
link program.o C:o.stubs OSLib:o.OSLib OSLibSupport:o.OSLibSupport -output program
For GCC (with UnixLib) this becomes somewhat simpler with the compile and link steps merged:
or:or:
gcc program.c -I OSLib: -I OSLibSupport: OSLib:o.OSLib OSLibSupport:o.OSLibSupport -o program
gcc -I OSLib: -I OSLibSupport: -c program.c -o program gcc program.o OSLib:o.OSLib OSLibSupport:o.OSLibSupport -o program
ObjASM doesn't seem to be able to expand path variables in the command line, so it is necessary to bodge things a bit:
do ObjASM -I <OSLibPath> program.s program.o
link program.o -output program
The GCC assembler can get invoked by:
gcc -I OSLib: -I OSLibSupport: -c program.s -o program.o
gcc program.o OSLib:o.OSLib OSLibSupport:o.OSLibSupport -o program
OSLib is, with one exception, completely compiler independent. It was originally built and used using the Acorn (Norcroft) compiler, and many people are successfully using it with GCC and GC++.top Home
The Acorn (Norcroft) compiler reserves the special symbol "__swi", as an optimizing hint. Other compilers don't recognise it, and fault it. Also, for some reason best known to themselves, when using Acorn's compiler through CFront, it also faults __swi. In these instances you need to define the symbol to nothing by placing "#define __swi" in your code files before #including any OSLib headers, or by putting -D__swi in any command line or makefile command when calling your compiler.top Home
This problem is evident wen using CathLibCPP (map.h) with OSLib (os.h) under CFront, when the map field clashes. There appears to be a problem with Cfront (its template handling was never very good) which causes a template name to clash with other symbols. As a work-round try the following:top Home#define map addr
#include "os.h"
#undef map
#include "map.h"
#define map addr
top Hometop Home
OSLib can be built using APCS-R and APCS-32 ABIs. The former can only be used for running on 26-bit architectures, while the latter can run on both 26-bit and 32-bit architectures. APCS-R is deprecated and it is recommmended to have your programs built using APCS-32 for future compatibility with new ARM hardware.
The OSLib source code is identical for either. The binary distribution contains two ALF images: OSLib (APCS-R) and OSLib32 (APCS-32). The 32 bit version does not preserve the caller's processor flags, the 26 bit one does; this may raise compatibility issues when migrating from one to the other so choose the correct one you're linking.
Note that the last version of APCS-R OSLibSupport is 6.70. All later versions of OSLibSupport library are APCS-32 only.
It is expected that OSLib 7.00 release will be APCS-32 only and that APCS-R support will be dropped for good.
Never do that! Whilst the syntax of the OSLib toolbox calls may look similar to those provided by Acorn's tboxlibs, they are subtly different in the parameters they pass. Unless you really know what you're doing, and can guarantee which one is called, never mix the two libraries. OSLib provides a complete, and better, coverage of the toolbox API, and there is no need to use tboxlibs.top Home | http://ro-oslib.sourceforge.net/faq.html | crawl-001 | refinedweb | 1,253 | 63.39 |
Working in a team of developers, the following might happen:
Developer 1 creates a piece of UI and adds a method to load data into said UI. The method he creates is called GetUserInfo(…)
Here comes Developer 2 and he also creates a piece of UI that displays user info. Developer 2 however, needs to display more than just the user info, he also needs the requests made by the user. Thus GetUserInfoWithRequests(…) is created next to the already existing GetUserInfo method.
Developer 3 wants to display some user info as well, but not just the info by itself, and also without the requests but WITH the users team information. Can you guess what happens? … Right. GetUserInfoWithTeamMembers(…) is added to our list.
So we now have three methods that do something similar, but not quite the same:
- GetUserInfo(…)
- GetUserInfoWithRequests(…)
- GetUserInfoWithTeamMembers(…)
If you let this run uncontrolled it will turn in to a maintenance nightmare! We could simply end the discussion and say it’s a lack of discipline within the team, but there is a nice way to help prevent this through code!
What is needed is a query-building object that will handle the loading of all this data for us. Without resorting to separate methods that all touch the database.
This query-building object has at least one method: Fetch(). The fetch method is the only method that will touch the database. The constructor of our object will take in the key on which to filter. In our example, this would probably be the primary key for the user in our database. Lets call it a UserInfoRetriever to match the example.
public class UserInfoRetriever { readonly int _userId; public UserInfoRetriever(int userId) { _userId = userId; } public User Fetch() { using (var context = new DbContext()) { return context.Users.Where(u => u.UserId == _userId).SingleOrDefault(); } } }
The code above is what this might look like for the first method in our set of three. But we have more! Lets expand the class a bit further to support the other scenarios.
public class UserInfoRetriever { readonly int _userId; bool _withRequests; bool _withTeamMembers; public UserInfoRetriever(int userId) { _userId = userId; } public UserInfoRetriever WithRequests() { _withRequests = true; return this; } public UserInfoRetriever WithTeamMembers() { _withTeamMembers = true; return this; } public User Fetch() { using (var context = new DbContext()) { var users = context.Users; if (_withRequests) { users.Include("Requests"); } if (_withTeamMembers) { users.Include("TeamMembers"); } return context.Users.Where(u => u.UserId == _userId).SingleOrDefault(); } } }
This version of the class implements two extra methods. They only change the bools on our class and then returns itself. This is where the cool stuff is at. Because we return to ourselves in the methods, we can chain them together to form queries as we see fit like this:
var userInfo = new UserInfoRetriever(10).Fetch(); var userInfo = new UserInfoRetriever(10).WithTeamMembers().WithRequests().Fetch(); var userInfo = new UserInfoRetriever(10).WithRequests().Fetch(); var userInfo = new UserInfoRetriever(10).WithTeamMembers().Fetch();
As you can see, we can load the data any way we like and our data-access code is still all in one place (in the Fetch() method). Adding new scenarios is very easy and it prevents (with a little discipline of course :-)) willy-nilly methods that all handle their own data-access. | https://itq.nl/fluent-data-access/ | CC-MAIN-2018-34 | refinedweb | 529 | 56.25 |
95178/how-to-import-a-module-given-its-name-as-string
I'm writing a Python application that takes as a command as an argument, for example:
$ python myapp.py command1
I want the application to be extensible, that is, to be able to add new modules that implement new commands without having to change the main application source. The tree looks something like:
myapp/
__init__.py
commands/
__init__.py
command1.py
command2.py
foo.py
bar.py
So I want the application to find the available command modules at runtime and execute the appropriate one.
With Python older than 2.7/3.1, that's pretty much how you do it.
For newer versions, see importlib.import_module for Python 2 and Python 3.
You can use exec if you want to as well.
Or using __import__ you can import a list of modules by doing this:
>>> moduleNames = ['sys', 'os', 're', 'unittest']
>>> moduleNames
['sys', 'os', 're', 'unittest']
>>> modules = map(__import__, moduleNames)
You can use the code below:
lis = ...READ MORE
You can use re module of python ...READ MORE
The following code might help -
mystring = ...READ MORE
Hello,
For install biopyhton with proper instruction you can ...READ MORE
You can also use the random library's ...READ MORE
Syntax :
list. count(value)
Code:
colors = ['red', 'green', ...READ MORE
can you give an example using a ...READ MORE
You can simply the built-in function in ...READ MORE
For Python 3.5+ use:
import importlib.util
spec = importlib.util.spec_from_file_location("module.name", ...READ MORE
Assuming module foo with method bar:
import foo
method_to_call = getattr(foo, 'bar')
result ...READ MORE
OR
Already have an account? Sign in. | https://www.edureka.co/community/95178/how-to-import-a-module-given-its-name-as-string?show=95179 | CC-MAIN-2021-21 | refinedweb | 275 | 69.79 |
Dieter Maurer was as kind as he always is one the Zope mailing list and explained to me the difference between using the
Cache-Control and the
Expires header for inducing browsers to cache your web elements. He explains:
Cache control is the modern (HTTP 1.1) way, "Expires" the old (HTTP 1.0) one.
My problem is that I've know about each but never bothered to find out why there are two ways of doing the same thing. In most of my projects I've used the oldschool method of setting RFC822 formatted future dates on the
Expires header, but from now on I'll use both. Dieter continues on my question "What is best to use and for what?":
Use them both:
HTTP 1.1 clients will honour "Cache-Control" (which is easier to use and much more flexible).
HTTP 1.0 clients will ignore "Cache-Control" but honour "Expires". With "Expires" you get thus at least a bit control for these old clients.
Now, let's speak code. Here's what I now do to set caching on my pages when I want to:
def doCache(self, hours=10): """ set cache headers on this request if not in debug mode """ if not self.doDebug(): response = self.REQUEST.RESPONSE now = DateTime() then = now+int(hours/24.0) response.setHeader('Expires',then.rfc822()) response.setHeader('Cache-Control', 'public,max-age=%d' % int(3600*hours))
This I can then use in say a stylesheet like this:
<dtml-call "doCache(48)">
Follow @peterbe on Twitter
How do I add this to my site. I would like it so when the user goes back you get an Expired page appear. I have looked over the web for a solution but cannot seem to find one. Can I simply put the above method in a script and call it each time a page is loaded.
Could you kindly explain what the DateTime object is? Perhaps it's because I'm using Python 2.7, but the datetime class in the datetime module has no rfc822 method.
Thanks!
DateTime was a utility library used in Zope2. | https://api.minimalcss.app/plog/cache-control_or_expires | CC-MAIN-2020-45 | refinedweb | 353 | 82.34 |
Provided by: alliance_5.0-20110203-4_amd64
NAME
savelofig - save a logical figure on disk
SYNOPSYS
#include "mlu.h" void savelofig(ptfig) lofig_list ∗ptfig;
PARAMETER
ptfig Pointer to the lofig to be written on disk
DESCRIPTION
savelofig writes on disk the contents of the figure pointer to by ptfig. All the figure lists are ran through, and the appropriate objects written, independently of the figure mode. The savelofig function in fact performs a call to a driver, choosen by the MBK_OUT_LO(1) environment variable. The directory in which the file is to be written is the one set by MBK_WORK_LIB(1). See MBK_OUT_LO(1), MBK_WORK_LIB(1) and mbkenv(3) for details.
ERRORS
"∗∗∗ mbk error ∗∗∗ not supported logical output format 'xxx'" The environment variable MBK_OUT_LO is not set to a legal logical format. "∗∗∗ mbk error ∗∗∗ savelofig : could not open file figname.ext" Either the directory or the file are write protected, so it's not possible to open figname.ext, where ext is the file format extension, for writting.
EXAMPLE
#include "mlu.h" void save_na2_y() { savelofig(getlofig("na2_y")); }
SEE ALSO
mbk(1), mbkenv(3), lofig(3), addlofig(3), getlofig(3), dellofig(3), loadlofig(3), flattenlofig(3), rflattenlofig(3), MBK_OUT_LO(1), MBK_WORK_LIB(1). | http://manpages.ubuntu.com/manpages/precise/man3/savelofig.3.html | CC-MAIN-2019-47 | refinedweb | 200 | 57.98 |
:
acidblue wrote:
> OK a couple of questions: First is there a calender ctrl in boa?
Yes. But you should enable UserCompanions plug-in first:
Explorer->Preferences->Plug-in Files->UserCompanions
> and second what is the formula for adding dates, example i want to add 90
> days to todays date: 90days+today=ansewer.
It has nothing to do with boa. It is general python question.
If you have python 2.3.x - use built-in datetime module, and timedelta
object. For example:
import datetime
print datetime.date.today() + datetime.timedelta(days = 90)
If you have python older than 2.3 - use mxDateTime module.
--
Oleg Deribas,
OK a couple of questions: First is there a calender ctrl in boa? and =
second what is the formula for adding dates, example i want to add 90 =
days to todays date: 90days+today=3Dansewer. | https://sourceforge.net/p/boa-constructor/mailman/message/7141646/ | CC-MAIN-2018-17 | refinedweb | 139 | 60.21 |
Introducing PyRXP
February 11, 2004
PyRXP is a DTD validating XML parser developed by ReportLab. It is Python wrapper around RXP, a C parser developed by Richard Tobin and Henry Thompson of the Edinburgh Language Technology Group as the core of LT XML, "an integrated set of XML tools and a developers' tool-kit, including a C-based API". ReportLab is a vendor of database reporting software and very well known and respected in the Python community. PyRXP is a core component of many of ReportLab's open source and commercial components. PyRXP focuses on performance above all things by using a fast C parser and by strictly building a bare-bones Python structure of tuples and string buffers from XML source. RXP and PyRXP are both distributed under the GNU General Public License.
I downloaded the full tar/gzip distribution of PyRXP 0.9 for running on Python 2.3.2. Note: the archive does not create its own directory when unpacked, so you'll want to do so by hand:
$ mkdir pyRXP-0-9 $ cd pyRXP-0-9 $ tar zxvf ../pyRXP-0-9.tgz [SNIP] $ python setup.py install [SNIP]
Source XML for the documentation comes in the distribution, but I didn't see an obvious way to build it so I just downloaded the PDF documentation.
Character trouble in tag land
PyRXP builds a bare bones tuple-based Python structure from an XML instance. To get a flavor of this structure, I tried to parse the same document I've been using in recent explorations of Python-XML tools (Listing 1).Listing 1: Sample XML file (labels.xml) containing address labels
<>
My attempt was the code in listing 2:Listing 2: Simple parse of XML in a file
import pyRXP parser = pyRXP.Parser() fobj = open('labels.xml').read() #Introspection doesn't reveal any "parseFile"-like method doc = parser.parse(fobj)
The result of this attempt was rather hair raising:
$ python listing2.py Traceback (most recent call last): File "listing2.py", line 4, in ? doc = parser.parse(fobj) pyRXP.Error: Error: 0x2026 is not a valid 8-bit XML character in unnamed entity at line 6 char 61 of [unknown] error return=1 0x2026 is not a valid 8-bit XML character Parse Failed!
The problem, besides the fact that the parser seemed to fail parsing a perfectly well-formed XML document, is that the error message is unhelpful. The phrase "valid 8-bit XML character" is meaningless. The XML character set is Unicode, with the restriction that some characters are not allowed. But there is no concept of "bits" in the idea of an XML character. Each character is merely an abstract code point. A character can be encoded into a storage format associated with a standard bit length such as UTF-8 (8 bit), but this really has nothing to do with the XML character model. To be fair, this and other concepts relating to Unicode can be rather arcane; but there are excellent resources to help clear things up, including Mike Brown's article "XML Tutorial--A reintroduction to XML with an emphasis on character encoding". For a very friendly discussion of Unicode focusing on the Python implementation there is " Unicode Support in Python (PDF)" by Marc-Andre Lemburg. I gather a lot of relevant notes on these matters in my Akara article "XML Character issues in Python".
At any rate, I pored over the PyRXP documentation expecting to find something I must
have
missed. I found a few properties that can be set on the parser and the closest I found
was
ExpandCharacterEntities. In effect it returns a character entity such as
…, the one in the sample document, as the literal sequence of seven
separate characters, starting with the ampersand and ending with the semicolon. This
is a
serious violation of the basic principles of XML, in which
… is strictly
one character rather than seven; further, it doesn't help me parse the sample file
properly. I then checked the ReportLab mailing lists and found others who had run
into the
same problem. The responses from the developers were, more or less, that PyRXP raises
a
fatal error when presented with XML characters with Unicode ordinal greater than U+256,
regardless of how they are represented. The unfortunate upshot of this is that PyRXP
0.9 is not an XML parser.
I only cover XML processing tools in this column; and, frankly, such a fundamental case of non-conformance would have been to my mind more than enough to disqualify PyRXP from discussion. Nevertheless, there was no way I was going to throw up my hands at this point. I have heard a lot of good things about PyRXP, and I'd like to be sure there is fair coverage of as broad a selection of Python-XML tools as possible. I pored through the docs again and found a bit that I'd overlooked the first time. Earlier on, in searching on whether users of the core C RXP parser also had this problem, I came across Norm Walsh's simple instruction to one such user: "I think you need to rebuild or reconfigure RXP with Unicode support. XML isn't 8-bit."
It turns out that the PyRXP developers have provided a start toward this. From the manual, "PyRXPU is the 16-bit Unicode aware version of pyRXP. It is currently only available the source distribution of pyRXP, since it is still 'alpha' quality. Please report any bugs you find with it."
It's still odd to tie the idea of bit width of a character encoding to the foundation of an XML parser (the phrase "16-bit Unicode" is almost as meaningless as "8-bit XML character") but PyRXPU seems well worth a try.
A Conformant Version of PyRXP?
It appears that, contrary to the note in the manual, PyRXPU is only available in CVS. I grabbed and built the CVS version like so:
$ cvs -d :pserver:anonymous@cvs.reportlab.sourceforge.net:/cvsroot/reportlab login [SNIP] $ cvs -d :pserver:anonymous@cvs.reportlab.sourceforge.net:/cvsroot/reportlab co rl_addons/pyRXP [SNIP] $ cd rl_addons/pyRXP $ python setup.py install [SNIP]
I just hit "Enter" at the "CVS password" prompt.Listing 3: Simple parse of XML in a file, reprise
import pyRXPU parser = pyRXPU.Parser() fobj = open('labels.xml').read() #Introspection doesn't reveal any "parseFile"-like method doc = parser.parse(fobj)
This time the parse is successful, and I was able to start digging into the resulting data structure as illustrated by jumping into the interpreter after running the script:
>>> import pprint >>> pprint.pprint(doc) (u'labels', None, [u'\n ', (u'label', {u'added': u'2003-06-20'}, [u'\n ', (u'quote', None, [u'\n \n ', (u'emph', None, [u'Midwinter Spring'], None), u' is its own season\u2026\n '], None), u'\n ', (u'name', None, [u'Thomas Eliot'], None), u'\n ', (u'address', None, [u'\n ', (u'street', None, [u'3 Prufrock Lane'], None), u'\n ', (u'city', None, [u'Stamford'], None), u'\n ', (u'state', None, [u'CT'], None), u'\n '], None), u'\n '], None), u'\n ', (u'label', {u'added': u'2003-06-10'}, [u'\n ', (u'name', None, [u'Ezra Pound'], None), u'\n ', (u'address', None, [u'\n ', (u'street', None, [u'45 Usura Place'], None), u'\n ', (u'city', None, [u'Hailey'], None), u'\n ', (u'state', None, [u'ID'], None), u'\n '], None), u'\n '], None), u'\n'], None)
I knew that the result would be a structure of Python primitives; thus, as in the
article, I used the
pprint module to produce a representation I could follow
easily. It's easy to see the basic pattern: elements become tuples with the node name
as the
first (Unicode) item, a dictionary of attributes or None as the second, and a list
of
contents or None as the third. The fourth is reserved for customized use. This data
structure is quite simple, which is one of the attractions of PyRXPU; but it might
be a bit
cumbersome to navigate in order to extract patterns of data, especially in comparison
to
data binding tools.
As you can see, all strings are Unicode objects, which is very good. From my understanding,
using the production version of PyRXP you only get "classic" string objects, which
I do not
recommend mixing into XML processing. You can see the character that was giving the
production version such fits, that
\u2026. Here it is properly treated.
Nevertheless, the strange bit about "16-bit Unicode" made me wonder whether there
were also
any such conformance problems in PyRXPU. Certainly XML allows numerous characters
above code
point 65535. The following is the relevant production from the XML 1.0 spec:
Character Range [2] Char ::= #x9 | #xA | #xD | [#x20-#xD7FF] | [#xE000-#xFFFD] | [#x10000-#x10FFFF]
The accompanying comment is "any Unicode character, excluding the surrogate blocks, FFFE, and FFFF." Note that this permissiveness will open up even more now that XML 1.1 has just become a full W3C recommendation. Some formerly forbidden characters including the range from #x1 through #x8 have been allowed, strictly in the form of character references.
I tested the treatment of very high Unicode characters in PyRXPU, and it does seem to handle them well enough. If you're an archaeologist with an interest in the Mycenaean culture you might have an interest in Unicode character U+10000, "LINEAR B SYLLABLE B008 A", which is used in the XML document parsed in the following snippet:
>>> import pyRXPU >>> p = pyRXPU.Parser() >>> p.parse("<spam>Very high Unicode char: 𐀀</spam>") (u'spam', None, [u'Very high Unicode char: \U00010000'], None)
As you can see the character value becomes
\U00010000 in Python. Python gets
most Unicode matters right and deals with such high characters with aplomb whether
you
compile Python to store Unicode in 16 bits or 32 bits (again the bit width is not
relevant
to the Unicode character whatsoever but is merely a property of the chosen storage
or
encoding). It's good to have this confidence that PyRXPU is a conforming XML parser.
Benchmarks: A Lawyer's Best Friend
ReportLab bills PyRXP as "the fastest validating XML parser available for Python, and quite possibly anywhere..." David Mertz in an independent review also lauds PyRXP's speed but does not seem to have discovered its erroneous handling of characters. I think this is a good example of why benchmarking is a very slippery exercise. It's really inappropriate to even compare PyRXP to any other XML parser: it's not a conformant XML parser and thus not an XML parser at all. As many implementors tell you, it is often the odd corners of conformance that are behind the most significant performance losses. Standardization means we sacrifice some local optimization in order to gain flexibility and interoperability. By refusing to accept a very large class of quite valid XML instances, PyRXP rather does a disservice to the entire idea of XML. I have produced tools that do not fully conform to a target standard, but in such cases I follow the usual convention that such deviations are treated as bugs. I take a rather dim view of the situation in PyRXP given that
- the developers have publicly refused to remedy the non-conformance; and
- the developers trumpet the speed and low memory footprint of PyRXP, even though these advantages are only made possible by scorning conformance
I found threads discussing the development of the PyRXPU variant, which actually does seem to be XML conformant. As I expected, it is some two times less efficient in speed and memory footprint than PyRXP. The only difference is in proper treatment of Unicode, and this demonstrates my point about the cost of conformance. I have a lot of respect for the developers of PyRXP, and I hate to be so sharp about this matter, but I think it's quite serious and merits very unambiguous statement.
I'd also like to mention that if anyone is working on benchmarks of XML processing, which are useful if well done, that they run the tests on a variety of hardware and operating systems, and that they don't focus on a single XML file, but rather examine a variety of XML files. Numerous characteristics of XML files can affect parsing and processing speed, including:
- The preponderance of elements versus attributes versus text (and even comments and processing instructions)
- Any repetition of element or attribute names, values and text content
- The distribution of white space
- The character encoding
- The use of character and general entities
- The input source (in-memory, string, file, URL, etc.)
I do want to point out that I'm one of the developers of cDomlette, which one might consider a competing package. This might seem a temptation to take an especially hard line with competing tools, but then again in this column I have covered the likes of ElementTree, gnosis.xml.objectify, and libxml and have never before had such a fundamental problem with any package.
Conclusion
My recommendation is to consider PyRXPU, but to avoid plain PyRXP. I hope that the former version becomes the default so that this confusing situation can be resolved. PyRXPU produces a simple and highly Pythonic data structure, though one that might be a bit tricky to navigate correctly in code. It operates quickly and offers a low memory footprint.
Development activity seem to be picking up again in the Python-XML world. Peter Yared announced Python XML Marshaller 0.2, a new Python data binding for XML available under the PSF Python license. It includes some WXS support and can generate WXS from Python data structures for round-trip support. It also has some features for customizing the binding. See the announcement.
Walter Dorwald announced XIST 2.4. Billed as an "object oriented XSLT", XIST uses an easily extensible, DOM-like view of source and target XML documents to generate HTML. This release features some API improvements, bug fixes, and a new find function for searching attributes. See the announcement.
Magnus Lie Hetland announced Atox 0.1 which allows you to write custom scripts for converting plain text into XML. You define the text to XML binding using a simple XML language. It's meant to be used from the command line. See the full announcement.
Arnold deVos announced GraphPath a little XPath-like language for analysing graph-structured data, especially RDF. The implementation is Python and works with rdflib or the Python binding of Redland. It includes a query evaluator and a goal-driven inference engine. I found this annoucement interesting because GraphPath is reminiscent of our early proposals while developing the Versa RDF query language at Fourthought. I think this is an important approach to RDF query and superior to the many SQL-like query languages. It's good to see more than one development along these lines. | http://www.xml.com/pub/a/2004/02/11/py-xml.html | CC-MAIN-2017-09 | refinedweb | 2,483 | 60.65 |
This chapter provides information about performing general storage administration maintenance tasks with Solaris Volume Manager.
This is a list of the information in this chapter:
Solaris Volume Manager Maintenance (Task Map)
Viewing the Solaris Volume Manager Configuration
Working with Configuration Files
Changing Solaris Volume Manager Defaults
Expanding a File System Using the growfs Command
Overview of Replacing and Enabling Components in RAID 1 and RAID 5 Volumes
The following task map identifies the procedures needed to maintain Solaris Volume Manager.
The format of the metastat command:
-p specifies to output a condensed summary, suitable for use in creating the md.tab file.
-i specifies to verify that all devices can be accessed.
component-name is the name of the volume to view. If no volume name is specified, a complete list of components will be displayed.
The following example illustrates output from the metastat command.
The following example illustrates output from the metastat command for a large storage volume (11 TB).
For more information, see metastat(1M).
The.
When used with the -x option, the metarename command exchanges the names of an existing layered volume with one of its subdevices. This exchange can occur between a mirror and one of its submirrors, or a transactional volume and its master device.
You must use the command line to exchange volume names. This functionality is currently unavailable in the Solaris Volume Manager GUI. However, you can rename a volume with either the command line or the GUI.
The metarename -x command can make it easier to mirror or unmirror an existing volume, and to create or remove a transactional volume of an existing volume.
You cannot rename a volume that is currently in use. This includes volumes that are used as mounted file systems, as swap, or as active storage for applications or databases. Thus, before you use the metarename command, stop all access to the volume being renamed. For example, unmount a mounted file system.
You cannot exchange volumes in a failed state, or volumes using a hot spare replacement.
An exchange can only take place between volumes with a direct parent-child relationship. You could not, for example, directly exchange a stripe in a mirror that is a master device with the transactional volume.
You must use the -f (force) flag when exchanging members of a transactional device.
You cannot exchange (or rename) a logging device. The workaround is to either detach the logging device, rename it, then reattach it to the transactional device; or detach the logging device and attach another logging device of the desired name.
Only volumes can be exchanged. You cannot exchange slices or hot spares.
Solaris Volume Manager transactional volumes do not support large volumes. In all cases, UFS logging (see mount_ufs(1M).
Solar.
The Solaris Volume Manager configuration has the following default values:
128 volumes per disk set
d0 through d127 as the namespace available for use by volumes
4 disk sets
State database replica maximum size of 8192 blocks
The default values of total volumes, namespace, and number of disk sets can be changed, if necessary. The tasks in this section tell you how to change these values.
The.
This.
After.
Solar). See Enabling a Component.. | http://docs.oracle.com/cd/E19683-01/817-5776/6ml784a7v/index.html | CC-MAIN-2015-14 | refinedweb | 533 | 54.73 |
jQuery's animation functions are built on the use of the default function queue. They are easy to use, but can sometimes be confusing because of the way they mix different ways of showing and hiding elements.
In the previous chapter we looked in detail at the jQuery function queue - a way of running a set of asynchronous functions one after the other in such a way that each is completely finished before the next starts. This is the basis of jQuery's animation system. Each special effect is an asynchronous function that animates some part of the UI for a specified time. These animation functions automatically make use of the default function queue to run sequentially without blocking the UI while they are working.
As well as a set of predefined animation functions there is also a custom function that you can use to create your own animations.
Let's start with one of the predefined functions and see how it works.
There are a set of animation functions that change the opacity of an element - generally called the "fade" animations. The fade animations modify the CSS opacity property, but some of them do a little more than this and this can be confusing if you don't know exactly how they work. The purest and the best example of an animation function is fadeTo as this only modifies the opacity.
The opacity property has a value between 0 and 1 with 0 being completely transparent and 1 completely opaque. Of course elements that you create in HTML have their opacity set to 1 by default. If you want to see an opacity directly you can use
style="opacity: value"
where value is between 0.0 and 1.0.
The simplest of the fade animation functions is fadeTo which will animate the opacity from its current value to the specified value. You have to at least specify the speed of the animation and the final value. For example:
.fadeTo(1000,0.5);
animates the elements that match so that their opacity is 0.5 after 1000 milliseconds, i.e. 1 second.
To see this in action try:
<body> <button id="button1" style="opacity: 0.0"> mybutton </button> </body> <script> $("#button1").fadeTo(1000,0.5); </script>
Notice that the animation proceeds from whatever the opacity is to the final value. In this case the effect is a fade in as the opacity goes from 0.0 to 0.5 in 1 second. If you start with an opacity higher than the final value then it will be a fade out as the value reduces to the final value.
It is also important to realize that each element is animated independently from the rest. For example, if you have two buttons to animate, one set to opacity 1 and the other opacity 0 then animating them to 0.5 sees the first fade out and the second fade in.
You can also use "slow" and "fast" for the duration of the animation and by default these are 600 and 200 milliseconds. The default speed is 400 milliseconds. These can be changed by assigning to the fx.speeds property. For example
$.fx.speeds.slow=1000;$.fx.speeds.fast=100;$.fx.speeds._default=500;
sets slow to 1 second, fast to 100 milliseconds and default to 500 milliseconds, i.e. half a second.
You can also specify a function that will be called when the animation is complete.
The fadeTo function is very simple in that it changes the opacity from what it currently is to the target, taking the time you specify to do the job. However, there a number of other parameters that you can use to control the animation and these are general to all of the animation functions. The most important is easing, although unless you also use jQuery UI you have only two choices of easing.
Easing is simply the speed at which the animation occurs. For example you could change the opacity slowly at first and then ever more quickly. There are two easing function provided by jQuery - linear which changes the opacity or other animated property evenly through the time period and swing which starts and finishes gradually:
This chart of swing easing has percentage time along the horizontal axis and property value on the vertical axis.
To specify easing all you have to is specify "linear" or "swing". For example:
$("#button1").fadeTo(3000,1,"linear");
You can add your own easing functions directly to jQuery, but the recommended way to do this is using jQuery UI.
For example, you could specify an easing function that changes the property from a maximum to 0 and then back to the maximum:
You could do this with:
$.easing.twinkle = function (p) { return (p - 0.5) * (p - 0.5)*4; }
and to use it:
$("#button1").css("opacity",0) .fadeTo(3000, 1, "twinkle");
The css opacity setting is done to give the animation something to do in case the button is already set to opacity 1.0.
In general you can add an easing function by adding a property to $.easing that implements the easing characteristic you want.
The full range of possiblities for calling fadeTo are:
fadeTo(duratio,opacity,easing,complete)
and you can leave out the easing parameter, i.e. the name of the easing function and/or the complete parameter - a callback to use when the animation is over. | http://www.i-programmer.info/programming/jquery/10461-jquery-3-animation.html | CC-MAIN-2017-26 | refinedweb | 902 | 63.7 |
C++ can be quite confusing for me... Haven't touched this file in weeks and it has been compiling with no probs,now today after having been growing my project by modifying other classes, i get an error from a class which hasn't been modified and has been compiling
here's part of the class
I mean i never this problem, and this stream has been used this wayI mean i never this problem, and this stream has been used this wayCode:#include "Fleet.h" #include <fstream> #include <iostream> #include <algorithm> #include <conio.h> using namespace std; struct CAR_MAKE { string make; string model; }; Fleet::Fleet(): fleet() {} Fleet::CarFleet Fleet::getFleet() { return fleet; } // --------------------------------------------------------------- // Implementing function to read fleet of cars from text file void Fleet::ReadFleetFromFile() { char filename[MAX_PATH] ; puts("\n\nPlease enter the text file to open:\n"); cin >> filename; istream fleetin; // Problem is here fleetin.open (filename); /* Ensure file was openned, otherwise send an error */ if ( fleetin.fail() ) { puts(""); perror("ERROR! while trying to open file"); exit(1); } else { Car car; // Read in fleet records while ( !car.Read(fleetin) ) { fleet.push_back(car); SortAlpha(); } // Did we read something? if( fleet.size()==0) { puts("\nData was unsuccessfully read !!..."); } else puts("\nProcessed reading..."); } fleetin.close(); } /* ERROR : error C2512: 'std::basic_istream<_Elem,_Traits>' : no appropriate default constructor available with [ _Elem=char, _Traits=std::char_traits<char> ] : error C2039: 'open' : is not a member of 'std::basic_istream<_Elem,_Traits>' with [ _Elem=char, _Traits=std::char_traits<char> ] : error C2039: 'close' : is not a member of 'std::basic_istream<_Elem,_Traits>' with [ _Elem=char, _Traits=std::char_traits<char> ] */ | https://cboard.cprogramming.com/cplusplus-programming/106614-error-no-appropriate-default-constructor-available.html | CC-MAIN-2017-51 | refinedweb | 261 | 51.89 |
This article will help you to understand the Nullable type implementation in C#. This article also explains about Coalescing Operator and how CLR has special support for Nullable value type.
As we all know, a value type variable cannot be null. That's why they are called Value Type. Value type has a lot of advantages, however, there are some scenarios where we require value type to hold null also. For instance, If you are retrieving nullable integer column data from database table, and the value in database is null, there is no way you can assign this value to an C# int. Let's havea look at another scenario: In Java, java.Util.Date is a reference type, and therefore, the variable of this type can be set to null. However, in CLR, System.DateTime is a value type and a DateTime variable cannot be null. If an application written in Java wants to communicate a date/time to a Web service running on the CLR, there is a problem if the Java application sends null because the CLR has no way to represent this and operate on it.
null
null
int
java.Util.Date
System.DateTime
DateTime
To get rid of these situations, Microsoft added the concept of Nullable types to the CLR. To Understand this, have a look over the definition of System.Nullable<t> Type:
Nullable
System.Nullable<t>
[Serializable, StructLayout(LayoutKind.Sequential)]
public struct Nullable<t> where T : struct
{
// These 2 fields represent the state
private Boolean hasValue = false; // Assume null
internal T value = default(T); // Assume all bits zero
public Nullable(T value)
{
this.value = value;
this.hasValue = true;
}
public Boolean HasValue { get { return hasValue; } }
public T Value
{
get
{
if (!hasValue)
{
throw new InvalidOperationException(
"Nullable object must have a value.");
}
return value;
}
}
public T GetValueOrDefault() { return value; }
public T GetValueOrDefault(T defaultValue)
{
if (!HasValue) return defaultValue;
return value;
}
public override Boolean Equals(Object other)
{
if (!HasValue) return (other == null);
if (other == null) return false;
return value.Equals(other);
}
public override int GetHashCode()
{
if (!HasValue) return 0;
return value.GetHashCode();
}
public override string ToString()
{
if (!HasValue) return "";
return value.ToString();
}
public static implicit operator Nullable<t>(T value)
{
return new Nullable<t>(value);
}
public static explicit operator T(Nullable<t> value)
{
return value.Value;
}
}
From the above definition, you can easily make out that:
Nullable<t>
struct
struct
Boolean
HasValue
Nullable<t>
Nullable<T>
boolean
nullable
T
boolean
To use Nullable type, just declare Nullable struct with a value type parameter, T, and declare it as you are doing for other value types. For example,
Nullable<int> i = 1;
Nullable<int> j = null;
Use Value property of Nullable type to get the value of the type it holds. As the definition says, it will return the value if it is not null, else, it will throw an exception. So, you may need to check for the value being null before using it.
Value
Console.WriteLine("i: HasValue={0}, Value={1}", i.HasValue, i.Value);
Console.WriteLine("j: HasValue={0}, Value={1}", j.HasValue, j.GetValueOrDefault());
//The above code will give you the following output:
i: HasValue=True, Value=5
j: HasValue=False, Value=0
C# also supports simple syntax to use Nullable types. It also supports implicit conversion and casts on Nullable instances. The following example shows this:
// Implicit conversion from System.Int32 to Nullable<Int32>
int? i = 5;
// Implicit conversion from 'null' to Nullable<Int32>
int? j = null;
// Explicit conversion from Nullable<Int32> to non-nullable Int32
int k = (int)i;
// Casting between nullable primitive types
Double? x = 5; // Implicit conversion from int to Double? (x is 5.0 as a double)
Double? y = j; // Implicit conversion from int? to Double? (y is null)
C# allows you to use operators on Nullable types as you can use it for the containing types.
true
false
false
See the example below:
int? i = 5;
int? j = null;
// Unary operators (+ ++ - -- ! ~)
i++; // i = 6
j = -j; // j = null
// Binary operators (+ - * / % & | ^ << >>)
i = i + 3; // i = 9
j = j * 3; // j = null;
// Equality operators (== !=)
if (i == null) { /* no */ } else { /* yes */ }
if (j == null) { /* yes */ } else { /* no */ }
if (i != j) { /* yes */ } else { /* no */ }
// Comparison operators (< > <= >=)
if (i > j) { /* no */ } else { /* yes */ }
C# provides you quite a simplified syntax to check null and simultaneously assign another value in case the value of the variable is null. This can be used in Nullable types as well as reference types.
For example, the code below:
int? i = null;
int j;
if (i.HasValue)
j = i.Value;
else
j = 0;
//The above code can also be written using Coalescing operator:
j = i ?? 0;
//Other Examples:
string pageTitle = suppliedTitle ?? "Default Title";
string fileName = GetFileName() ?? string.Empty;
string connectionString = GetConnectionString() ?? defaultConnectionString;
// If the age of employee is returning null
// (Date of Birth might not have been entered), set the value 0.
int age = employee.Age ?? 0;
//The Coalescing operator is also quite useful in aggregate function
//while using linq. For example,
int?[] numbers = { };
int total = numbers.Sum() ?? 0;
// Many times it is required to Assign default, if not found in a list.
Customer customer = db.Customers.Find(customerId) ?? new Customer();
//It is also quite useful while accessing objects like QueryString,
//Session, Application variable or Cache.
string username = Session["Username"] ?? string.Empty;
Employee employee = GetFromCache(employeeId) ?? GetFromDatabase(employeeId);
You can also chain it, which may save a lot of coding for you. See the example below:
// Here is an example where a developer is setting the address of a Customer.
// The business requirement says that:
// (i) Empty address is not allowed to enter
// (Address will be null if not entered). (ii) Order of precedence of
// Address must be Permanent Address which if null, Local Address which if null,
// Office Address.
// The following code does this:
string address = string.Empty;
string permanent = GetPermanentAddress();
if (permanent != null)
address = permanent;
else
{
string local = GetLocalAddress();
if (local != null)
address = local;
else
{
string office = GetOfficeAddress();
if (office != null)
address = office;
}
}
//With Coalescing Operator, the same can be done in a single expression.//
string address = GetPermanentAddress() ?? GetLocalAddress()
?? GetOfficeAddress() ?? string.Empty;
The code above with Coalescing operator is far easier to read and understand than that of a nested if else chain.
if else
Since I have mentioned earlier that the Nullable<T> is still a value type, you must understand performance while boxing and unboxing of Nullable<T> type.
The CLR executes a special rule to box and unbox the Nullable types. When CLR is boxing a Nullable instance, it checks to see if the value is assigned null. In this case, CLR does not do anything and simply assigns null to the object. If the instance is not null, CLR takes the value and boxes it similar to the usual value type.
object
While unboxing to Nullable type, CLR checks If an object having its value assigned to null. If yes, it simply assigns the value of Nullable type to null. Else, it is unboxing as usual.
object
// Boxing Nullable<T> is null or boxed T
int? n = null;
Object o = n; // o is null
Console.WriteLine("o is null={0}", o == null); // results to "True"
n = 5;
o = n; // o refers to a boxed Int32
Console.WriteLine("o's type={0}", o.GetType()); // results to "System.Int32"
// Create a boxed int
Object o = 5;
// Unbox it into a Nullable<int> and into an int
int? a = (Int32?) o; // a = 5
int b = (Int32) o; // b = 5
// Create a reference initialized to null
o = null;
// "Unbox" it into a Nullable<int> and into an int
a = (int?) o; // a = null
b = (int) o; // NullReferenceException
When calling GetType() for Nullable<T> type, CLR actually lies and returns the Type the Nullable type it holds. Because of this, you may not be able to distinguish a boxed Nullable<int> was actually a int or Nullable<int>. See the example below:
GetType()
Nullable<int>
int
int? i = 10;
Console.WriteLine(i.GetType()); // Displays "System.Int32" instead
// of "System.Nullable<Int32>"
Note that I haven't discussed the details of memory allocation and object creation while boxing and unboxing to keep the article focused to Nullable types only. You may Google it for details about boxing and unboxing.
Since Nullable Type is also a value type and fairly lightweight, don't hesitate to use it. It is quite useful in your data driven application.
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
Bob DoleThe internet is a great way to get on the net.
GetType
Object
Int32
Nullable<Int32>
is
int? x = 1;
Console.WriteLine(x is Nullable<int>); // True
Console.WriteLine(x is int); // True
int? y = 5;
object o = y;
Console.WriteLine(o.GetType());
int? x = null;
Console.WriteLine(x is Nullable<int>); // False (if x is null)
Console.WriteLine(x is int); // False (if x is null)
General News Suggestion Question Bug Answer Joke Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | http://www.codeproject.com/Articles/275471/Nullable-Types-in-Csharp-Net?fid=1661576&df=90&mpp=25&noise=3&prof=True&sort=Position&view=Normal&spc=Relaxed&select=4727462&fr=1 | CC-MAIN-2014-41 | refinedweb | 1,503 | 59.19 |
Bug #7037closed
float formatting inconsistently rounds half to even
Description
MRI does not appear to consistently round half to even. I'm not sure what rounding strategy this is, but it rounds xx05 and xx15 to odd for xxx, and other values to even:
irb(main):001:0> "%1.1f" % 1.05
=> "1.1"
irb(main):002:0> "%1.1f" % 1.15
=> "1.1"
irb(main):003:0> "%1.1f" % 1.25
=> "1.2"
irb(main):004:0> "%1.1f" % 1.35
=> "1.4"
irb(main):005:0> "%1.1f" % 1.45
=> "1.4"
irb(main):006:0> "%1.1f" % 1.55
=> "1.6"
None of the tie-breaking strategies I could find () seem to support MRI's model.
If MRI is indeed using "half even", xx05 should round to xx0 and xx15 should round to xx2. An example with Java's BigDecimal appears to support this:
irb(main):029:0> java.math.BigDecimal.new('1.05').round(java.math.MathContext.new(2, java.math.RoundingMode::HALF_EVEN)).to_s
=> "1.0"
irb(main):030:0> java.math.BigDecimal.new('1.15').round(java.math.MathContext.new(2, java.math.RoundingMode::HALF_EVEN)).to_s
=> "1.2"
irb(main):031:0> java.math.BigDecimal.new('1.25').round(java.math.MathContext.new(2, java.math.RoundingMode::HALF_EVEN)).to_s
=> "1.2"
irb(main):032:0> java.math.BigDecimal.new('1.35').round(java.math.MathContext.new(2, java.math.RoundingMode::HALF_EVEN)).to_s
=> "1.4"
We would like clarification about the proper rounding tie-breaker strategy to use so we can fix this JRuby issue properly:
Updated by nobu (Nobuyoshi Nakada) almost 10 years ago
- Status changed from Open to Closed
=begin
Just formats the value with full precision and "rounds half up" the next char.
(gdb) printf "%.17f\n", 1.05
1.05000000000000004
(gdb) printf "%.17f\n", 1.15
1.14999999999999991
(gdb) printf "%.17f\n", 1.25
1.25000000000000000
(gdb) printf "%.17f\n", 1.35
1.35000000000000009
(gdb) printf "%.17f\n", 1.45
1.44999999999999996
(gdb) printf "%.17f\n", 1.55
1.55000000000000004
(gdb) printf "%.17f\n", 1.65
1.64999999999999991
(gdb) printf "%.17f\n", 1.75
1.75000000000000000
(gdb) printf "%.17f\n", 1.85
1.85000000000000009
(gdb) printf "%.17f\n", 1.95
1.94999999999999996
=end
Updated by shyouhei (Shyouhei Urabe) almost 10 years ago
I can't under stand what @nobu (Nobuyoshi Nakada) says so I did this myself.
@headius (Charles Nutter) mixed two points.
- 1.15 for instance is not exactly representable in floating point number. So tie-breaking is not the case.
- 1.25 for instance is exact. This, and only this one in above example, is the tie-breaking, and is rounded to 1.2. Rounding 1.25 to 1.2 is not inconsistent regarding "half to even".
am I correct?
Updated by nobu (Nobuyoshi Nakada) almost 10 years ago
Sorry, it's my bad.
Indeed, "half even" not "half up".
Updated by headius (Charles Nutter) almost 10 years ago
Ok, I can buy the precision argument, and it fits if I expand my example to all values of 1.x5:
irb(main):002:0> "%1.1f" % 1.05
=> "1.1"
irb(main):003:0> "%1.1f" % 1.15
=> "1.1"
irb(main):004:0> "%1.1f" % 1.25
=> "1.2"
irb(main):005:0> "%1.1f" % 1.35
=> "1.4"
irb(main):006:0> "%1.1f" % 1.45
=> "1.4"
irb(main):007:0> "%1.1f" % 1.55
=> "1.6"
irb(main):008:0> "%1.1f" % 1.65
=> "1.6"
irb(main):009:0> "%1.1f" % 1.75
=> "1.8"
irb(main):010:0> "%1.1f" % 1.85
=> "1.9"
irb(main):011:0> "%1.1f" % 1.95
=> "1.9"
My next question is if this is actually desirable or not. In JRuby and Java/JDK, it appears float formatting rounds half up, and treats imprecise halves as precise.
JRuby:
irb(main):010:0> "%1.1f" % 1.05
=> "1.1"
irb(main):011:0> "%1.1f" % 1.15
=> "1.2"
irb(main):012:0> "%1.1f" % 1.25
=> "1.3"
irb(main):013:0> "%1.1f" % 1.35
=> "1.4"
irb(main):014:0> "%1.1f" % 1.45
=> "1.5"
irb(main):015:0> "%1.1f" % 1.55
=> "1.6"
irb(main):016:0> "%1.1f" % 1.65
=> "1.7"
irb(main):017:0> "%1.1f" % 1.75
=> "1.8"
irb(main):018:0> "%1.1f" % 1.85
=> "1.9"
irb(main):019:0> "%1.1f" % 1.95
=> "2.0"
Even given arguments that we should round half to even, we'd likely be consistent with human expectations here.
Here's Java's String.format in action:
public class FormatFloat {
public static void main(String[] args) {
System.out.println(String.format("%1.1f",1.05));
System.out.println(String.format("%1.1f",1.15));
System.out.println(String.format("%1.1f",1.25));
System.out.println(String.format("%1.1f",1.35));
System.out.println(String.format("%1.1f",1.45));
System.out.println(String.format("%1.1f",1.55));
System.out.println(String.format("%1.1f",1.65));
System.out.println(String.format("%1.1f",1.75));
System.out.println(String.format("%1.1f",1.85));
System.out.println(String.format("%1.1f",1.95));
}
}
Output:
system ~/projects/jruby $ java FormatFloat
1.1
1.2
1.3
1.4
1.5
1.6
1.7
1.8
1.9
2.0
These are Java doubles. JRuby uses the same representation internally, so ignoring the half-up-versus-half-even issue, this may be a platform-specific float representation question.
The JVM does not enforce strict IEEE 754 floating point math since Java 1.2 for performance reasons; instead, it may (not required) use the most efficient and accurate representation of floating point values on the given platform. For x86_64, that is not a 64-bit value but is instead 80-bit extended precision. This means (correct me if I'm wrong) that the JVM can accurately represent 1.05 and friends exactly as the equivalent of 105x10^-2 since it can fully and accurately represent a 64-bit signed integer and with an extra 16 bits for exponentiation.
In this light, ignoring the rounding strategy for the moment, the JRuby and JVM results make sense. Halves can all be represented accurately up to 64 bit signed integer values + exponent.
So this leaves me with two questions about how to handle this issue in JRuby:
- Is strict IEEE 754 floating-point precision a specified behavior of Ruby?
If yes, then all builds of MRI on all platforms must exhibit the same behavior regardless of platform representation or compiler optimizations. If no, then JRuby's use of JVM floating-point representation and behavior are ok. We may be able to force IEEE 754 floating point behavior in JRuby, but my quick tests seemed to show it's not as simple as just turning on strictfp. I'd prefer to not have to go that direction.
- Is rounding half even a specified behavior of Ruby?
If yes, I question whether it's a significant distinction to make, since only two "half" values can ever be represented accurately in IEEE 754 anyway: xx25 and xx75. If no, then JRuby's current behavior is acceptable as a platform-specific or implementation-specific detail. In JRuby's case, where we're relying on JVM floating-point representation, we are always using the "round half away from zero" strategy, which has bias, but we're consistent for all input values.
Updated by naruse (Yui NARUSE) almost 10 years ago
headius (Charles Nutter) wrote:
- Is strict IEEE 754 floating-point precision a specified behavior of Ruby?
ISO Ruby says floating point should follow ISO/IEC/IEEE 60559 (IEEE 754).
Anyway, did you see ?
It looks like almost broken.
Updated by headius (Charles Nutter) almost 10 years ago
If we want to go by the letter of the spec (and I may have an older copy...please confirm), it says "if the underlying platform of a conforming processor supports IEC 60559:1989" the representation shall be that representation. I need to check the document for definitions of "platform" and "processor", but an argument can be made that our platform is the JVM...in which case, the JVM's representation of floating-point would be on with the spec.
The spec also says rounding due to arithmentic operations is implementation-defined.
The spec also says nothing about String#% or Kernel#sprintf.
Updated by headius (Charles Nutter) almost 10 years ago
The only definition I could find is a sideways use of "processor" to mean "Ruby processor", as in a Ruby language processor, i.e. a Ruby implementation. So... "the underlying platform of a conforming processor" would be the JVM, so it's probably ok for us to dodge the IEEE 754 issue.
Given the wording of the arithmetic rounding text in Float's description and the fact that formatting-related methods are not in the spec, we can probably dodge that issue too.
Of course, that's only if the ISO spec is all we care about :)
At this point I don't think I'm going to try to explore forcing strict IEEE 754, since even on the JVM it is intended as a workaround for specific, localized cases. So that leaves the rounding issue.
Given that there's no help from the spec on whether we should round half to even or half away from zero...what should we, as gentleman programmers, do?
If we agree to "half to even" then JRuby will round 1.05 to 1.0, 1.15 to 1.2, 1.85 to 1.8, and 1.95 to 2.0, which doesn't match MRI.
If we agree to "half away from zero" then MRI's rounding results of 1.25 to 1.2 would not match (the only other tiebreaking case in MRI's current system is 1.75, which already rounds to 1.8 as it would with "half away from zero").
And yes, I've seen that rounding code. I wish I could unsee it.
Updated by headius (Charles Nutter) almost 10 years ago
Oh, and if we agree that rounding for float formatting is implementation-defined (like rounding for arithmetic, according to the spec), then JRuby and MRI differ on the rounding results of 1.15, 1.25, 1.45, 1.65, and 1.95 would fail to match MRI solely due to precision.
Updated by shyouhei (Shyouhei Urabe) almost 10 years ago
It seems Ruby is just following C here.
zsh % cat tmp.c
#include <stdio.h>
#include <stdlib.h>
int main(void)
{
printf("%1.1f\n", 1.05);
printf("%1.1f\n", 1.15);
printf("%1.1f\n", 1.25);
printf("%1.1f\n", 1.35);
printf("%1.1f\n", 1.45);
printf("%1.1f\n", 1.55);
printf("%1.1f\n", 1.65);
printf("%1.1f\n", 1.75);
printf("%1.1f\n", 1.85);
printf("%1.1f\n", 1.95);
return EXIT_SUCCESS;
}
zsh % gcc tmp.c
zsh % ./a.out
1.1
1.1
1.2
1.4
1.4
1.6
1.6
1.8
1.9
1.9
so I think, no this is not a part of our spec. This is just machine-dependent. Strictly specifying this behaviour would not make both C/JRuby people happy.
Updated by shyouhei (Shyouhei Urabe) almost 10 years ago
- Status changed from Closed to Assigned
- Assignee set to matz (Yukihiro Matsumoto)
Anyway I'm assigning this to matz, as it turned out to be a spec issue. How do you feel matz?
Updated by headius (Charles Nutter) almost 10 years ago
I would agree with leaving this behavior unspecified. Our behavior also matches underlying platform.
Updated by headius (Charles Nutter) over 9 years ago
If there's nothing further to do here and we all agree that the details of rounding logic are implementation-dependent, this can be closed.
Updated by kosaki (Motohiro KOSAKI) over 9 years ago
- Status changed from Assigned to Closed
7037: float formatting inconsistently rounds half to even
-
- I believe we agree that float half-rounding behavior is
impl-specific, so this could be closed.
Also available in: Atom PDF | https://bugs.ruby-lang.org/issues/7037 | CC-MAIN-2022-27 | refinedweb | 2,028 | 68.06 |
"I am delighted once again to pen the welcome note to the Tosh!Yas Technologies ."
Call +91 74 88 34 7779 | Email : anishsingh@live.com
#include <stdio.h> int main() { int number; printf("Enter an integer: "); scanf("%d", &number); // True if the number is perfectly divisible by 2 if(number % 2 == 0) printf("%d is even.", number); else printf("%d is odd.", number); return 0; }
Output
Enter an integer: -7 -7 is odd.
#include <stdio.h> int main() { int number; printf("Enter an integer: "); scanf("%d", &number); (number % 2 == 0) ? printf("%d is even.", number) : printf("%d is odd.", number); return 0; }
Program to display a number if user enters negative number
// If user enters positive number, that number won't be displayed
#include <stdio.h> int main() { int number; printf("Enter an integer: "); scanf("%d", &number); // Test expression is true if number is less than 0 if (number < 0) { printf("You entered %d.\n", number); } printf("The if statement is easy."); return 0; }
Output 1
Enter an integer: -2 You entered -2. The if statement is easy.
When user enters -2, the test expression
(number < 0) becomes true. Hence, You entered -2 is displayed on the screen.
Output 2
Enter an integer: 5 The if statement in C programming is easy.
C programming language assumes any non-zero and non-null values as true, and if it is either zero or null, then it is assumed as false value.
C programming language provides the following types of decision making statements.
Statement & Description
The syntax of an 'if' statement in C programming language is −.
C programming language assumes any non-zero and non-null values as true and if it is either zero or null, then it is assumed as false value.
#include <stdio.h> int main () { /* local variable definition */ int a = 10; /* check the boolean condition using if statement */ if( a < 20 ) { /* if condition is true then print the following */ printf("a is less than 20\n" ); } printf("value of a is : %d\n", a); return 0; }
When the above code is compiled and executed, it produces the following result −
a is less than 20; value of a is : 10
An if statement can be followed by an optional else statement, which executes when the Boolean expression is false.
An if statement can be followed by an optional else statement, which executes when the Boolean expression is false.
The syntax of an if...else statement in C programming language is −
if(boolean_expression) { /* statement(s) will execute if the boolean expression is true */ } else { /* statement(s) will execute if the boolean expression is false */ }
If the Boolean expression evaluates to true, then the if block will be executed, otherwise, the else block will be executed.
C programming language assumes any non-zero and non-null values as true, and if it is either zero or null, then it is assumed as false value.
#include <stdio.h> int main () { /* local variable definition */ int a = 100; /* check the boolean condition */ if( a < 20 ) { /* if condition is true then print the following */ printf("a is less than 20\n" ); } else { /* if condition is false then print the following */ printf("a is not less than 20\n" ); } printf("value of a is : %d\n", a); return 0; }
When the above code is compiled and executed, it produces the following result −
a is not less than 20; value of a is : 100
An if statement can be followed by an optional else if...else statement, which is very useful to test various conditions using single if...else if statement.
When using if...else if..else statements, there are few points to keep in mind −
An if can have zero or one else's and it must come after any else if's.
An if can have zero to many else if's and they must come before the else.
Once an else if succeeds, none of the remaining else if's or else's will be tested.
The syntax of an if...else if...else statement in C programming language is −
if(boolean_expression 1) { /* Executes when the boolean expression 1 is true */ } else if( boolean_expression 2) { /* Executes when the boolean expression 2 is true */ } else if( boolean_expression 3) { /* Executes when the boolean expression 3 is true */ } else { /* executes when the none of the above condition is true */ }
#include <stdio.h> int main () { /* local variable definition */ int a = 100; /* check the boolean condition */ if( a == 10 ) { /* if condition is true then print the following */ printf("Value of a is 10\n" ); } else if( a == 20 ) { /* if else if condition is true */ printf("Value of a is 20\n" ); } else if( a == 30 ) { /* if else if condition is true */ printf("Value of a is 30\n" ); } else { /* if none of the conditions is true */ printf("None of the values is matching\n" ); } printf("Exact value of a is: %d\n", a ); return 0; }
When the above code is compiled and executed, it produces the following result −
None of the values is matching Exact value of a is: 100
It is always legal in C programming to nest if-else statements, which means you can use one if or else if statement inside another if or else if statement(s).statements.
A switch statement allows a variable to be tested for equality against a list of values. Each value is called a case, and the variable being switched on is checked for each switch case.
The syntax for a switch statement in C programming language is as follows −
switch(expression) { case constant-expression : statement(s); break; /* optional */ case constant-expression : statement(s); break; /* optional */ /* you can have any number of case statements */ default : /* Optional */ statement(s); }
The following rules apply to a switch statement −.
#include <stdio.h> int main () { /* local variable definition */ char grade = 'B'; switch(grade) { case 'A' : printf("Excellent!\n" ); break; case 'B' : case 'C' : printf("Well done\n" ); break; case 'D' : printf("You passed\n" ); break; case 'F' : printf("Better try again\n" ); break; default : printf("Invalid grade\n" ); } printf("Your grade is %c\n", grade ); return 0; }
When the above code is compiled and executed, it produces the following result −
Well done Your grade is B
It is possible to have a switch as a part of the statement sequence of an outer switch. Even if the case constants of the inner and outer switch contain common values, no conflicts will arise.
The syntax for a nested switch statement is as follows −
switch(ch1) { case 'A': printf("This A is part of outer switch" ); switch(ch2) { case 'A': printf("This A is part of inner switch" ); break; case 'B': /* case code */ } break; case 'B': /* case code */ }
| http://anishsir.in/itemlist/user/598-anishsir?start=10 | CC-MAIN-2018-05 | refinedweb | 1,112 | 55.88 |
Link: USACO06JAN 牛的舞会The Cow Prom
Description:.
约翰的N (2 <= N <= 10,000)只奶牛非常兴奋,因为这是舞会之夜!她们穿上礼服和新鞋子,别 上鲜花,她们要表演圆舞.
只有奶牛才能表演这种圆舞.圆舞需要一些绳索和一个圆形的水池.奶牛们围在池边站好, 顺时针顺序由1到N编号.每只奶牛都面对水池,这样她就能看到其他的每一只奶牛.
为了跳这种圆舞,她们找了 M(2<M< 50000)条绳索.若干只奶牛的蹄上握着绳索的一端, 绳索沿顺时针方绕过水池,另一端则捆在另一些奶牛身上.这样,一些奶牛就可以牵引另一些奶 牛.有的奶牛可能握有很多绳索,也有的奶牛可能一条绳索都没有.
对于一只奶牛,比如说贝茜,她的圆舞跳得是否成功,可以这样检验:沿着她牵引的绳索, 找到她牵引的奶牛,再沿着这只奶牛牵引的绳索,又找到一只被牵引的奶牛,如此下去,若最终 能回到贝茜,则她的圆舞跳得成功,因为这一个环上的奶牛可以逆时针牵引而跳起旋转的圆舞. 如果这样的检验无法完成,那她的圆舞是不成功的.
如果两只成功跳圆舞的奶牛有绳索相连,那她们可以同属一个组合.
给出每一条绳索的描述,请找出,成功跳了圆舞的奶牛有多少个组合? …
输入格式
Line 1: Two space-separated integers: N and M
Lines 2..M+1: Each line contains two space-separated integers A and B that describe a rope from cow A to cow B in the clockwise direction.
输出格式
Line 1: A single line with a single integer that is the number of groups successfully dancing the Round Dance.
输入输出样例
输入 #1
5 4
2 4
3 5
1 2
4 1
输出 #1
1
说明/提示
Explanation of the sample:
ASCII art for Round Dancing is challenging. Nevertheless, here is a representation of the cows around the stock tank:
_1___ /**** \ 5 /****** 2 / /**TANK**| \ \********/ \ \******/ 3 \ 4____/ / \_______/
Cows 1, 2, and 4 are properly connected and form a complete Round Dance group. Cows 3 and 5 don’t have the second rope they’d need to be able to pull both ways, thus they can not properly perform the Round Dance.
Problem solving:
这道题就是求一下强连通分量(自己一个点形成的强连通分量不可以)的个数。关于这个我也说不清也没啥好说的。单纯记录一下这道题,和tarjan的板子。
Code:
#include <bits/stdc++.h> using namespace std; const int maxn = 5e4 + 10; int dfn[maxn], low[maxn], Time = 0, cnt = 0, Stack[maxn], top = -1; bool vis[maxn]; int n, m; vector<int> V[maxn]; inline int read() { int x = 0, flag = 1; char c = getchar(); while (c < '0' || c > '9') { if (c == '-') flag = -1; c = getchar(); } while (c <= '9' && c >= '0') { x = x * 10 + (int) (c - '0'); c = getchar(); } return x * flag; } inline void tarjan(int x) { dfn[x] = low[x] = Time++; Stack[++top] = x; vis[x] = 1; for (int i = 0; i < V[x].size(); i++) { if (!vis[V[x][i]]) tarjan(V[x][i]), low[x] = min(low[x], low[V[x][i]]); else low[x] = min(low[x], dfn[V[x][i]]); } if (dfn[x] == low[x]) { int j, flag = 0; do { j = Stack[top--]; flag++; vis[j] = 0; } while (j != x); if (flag > 1) cnt++; } } int main() { int n, m; n = read(), m = read(); for (int i = 0; i < m; i++) { int x, y; x = read(), y = read(); V[x].push_back(y); } for (int i = 1; i <= n; i++) if (!dfn[i]) tarjan(i); cout << cnt << endl; return 0; } | https://cndrew.cn/2019/11/04/lg-2683/ | CC-MAIN-2019-47 | refinedweb | 412 | 77.47 |
Details
- About🍕🍕🍕🍕🍕🍕🍕🍕🍕
- Skillsi get overly excited about databases 🤓
Joined devRant on 9/26/2016
Join devRant
Pipeless API
From the creators of devRant, Pipeless lets you power real-time personalized recommendations and activity feeds using a simple APILearn More
- I just had to put a Button + Link on a page.
the btn opens a link
and the link doesnt open a link, but copies to clip board.
And if u dont see whats wrong with this picture, you are part of the problem.4
- Head hunters reaching out with a "position that might intrest you".. with a stack of skills that are not on my CV at aaaaall
Also Headhunters after answering back: oh you have only 1 year experience with that tool? They want at least 3, goodbye.
All just a phishing scam to get u to give them an updated CV version.. no real "relevant position" in sight. Smh.3
- I just noticed that whenever I get a captcha to prove I'm human, it's always images from the street.. cars, crosswalks, traffic lights, trucks, bikes, tractors etc etc...
I just want to know... Who's ML model I've been helping to train for the last i don't know how many years?!? 😤
I should've realized something smelled funny the moment i understood that a bot is asking me to prove my humanity to it, by doing something that a bot should be able to do by now.6
-
-!!
-
- You know what's funny?
When every job post requires you to know atleast 20 things.. but then you find yourself stuck at a job that needs you to do 1 stupid thing. And you need to actually FIGHT to not lose all the skills and knowledge you collected till now :(4
-
- In all the companies i worked at so far, never seen any employee (in any department) above age 50..maybe the ceo or some lawyer, but thats it.
Where do all the 50+ ppl go in tech?
I'm not sure the tech world will be what it is now even in 10 years.. but I'm just wondering do any of u guys think/worry about where we'll work when we stop being young and cool?
Looks like it's either u start your own business, which not everyone can, or..
what is option B exactly?!?20
-
- How much does a syntax of a language impact ur opinion of it and wether you'll want to use/learn it?
Some languages I look at and it just looks like an unreadable mess, i dont even bother with. Avoided nodejs up until i didnt have to deal with 100 nested callbacks...11
- Was learning a bit AI today and then i thought.. who's actually learning more here? Me about the ai or ai about humans (me) 😐4
- I've spent many years in a bubble of 1 backend lang.. but when i started checking out other langs, I got very annoyed that each one has same basic stuff but with different syntax... Can we just agree on something? Ffs!
We really couldnt come up with unified syntax for -
false, False FALSE
OR or ||
def func function
And dont get me started on all the variations of for loops... Its like we are trying make our life hard
Looking at new versions of some langs, looks like they are copying new stuff from one another.. with different syntax.. thanks!
Nodejs trying to look more like she doesnt have callbacks.. while other langs adding callback functionality... Why why why?
- POLL:
What kind of office do u prefer working at?
1. Big open space (with tables and no separations)
2. Open space with cubicles
3. Medium room (3-5 ppl)
And did ur preference change due to covid?8
- Let me tell all of you who don't like big frameworks..
The nice thing about them is that they minimize the amount of SHIT CODE all of you who think you know how to code, but actually don't, write..
And minimizes the amount of headache for the devs who need to then maintain/fix/change your SHIT CODE.
yes...lets put routes 10 dirs deep into the project and let ppl look for it..3
- "If you're going to be living in the office, you can at least be on time for work"
This sums my mornings since i started working from home..
8:55 get out of bed
9:00 open laptop....and I'm at work :|1
-
-
- At this point of my job search, I'm gonna start singing Ariana's "Thank you, next", after each interview... JEEZ, is there no normal work places anymore?!?!?
- Fuck who ever put the `hosts` file in that path WHICH IS IMPOSSIBLE TO REMEMBER!
and then fuck who put the httpd-vhosts.conf in a totally different path that is impossible to remember!8
- Can someone make like a tinder app, only for finding employees/job?
Can we remove HR companies from standing in the middle and making things hard on both sides?!?! Fuuuck
This could be so much easier..swipe..swipe .. There's a match.. scheduled interview.. done14
- Found a Google employee in street view getting lost and using paper map .. There are also seem to be 2 guides with him...
I find it kinda amusing9
- Fullstack developer Job ad:
........ Building the next generation of IOT. Innovation that will change the way internet works.
Requiremenets:
- wordpress
...5 | https://devrant.com/users/vortex | CC-MAIN-2021-25 | refinedweb | 913 | 82.44 |
I have a df with 4 observations per company (4 quarter). However, for several companies I have less than 4 observations. When I don't have the 4 quarters for a firm I would like to delete all observations relative to the firm. Any ideas how to do this ?
I have a df with 4 observations per company (4 quarter). However, for several companies I have less than 4 observations. When I don't have the 4 quarters for a firm I would like to delete all observations relative to the firm. Any ideas how to do this ?
This is how the df looks like:
Quarter Year Company 1 2018 A 2 2018 A 3 2018 A 4 2018 A 1 2018 B 2 2018 B 1 2018 C 2 2018 C 3 2018 C 4 2018 C
In this df I would like to delete rows relative to company B because I only have 2 quarters.
Many thanks!")
The image is now treated as a matrix with rows and columns values stored in img.
Actually, if you check the type of the img, it will give you the following result:
>>>print(type(img)) <class 'numpy.ndarray'>
It’s a NumPy array! That why image processing using OpenCV is so easy. All the time you are working with a NumPy array. everyone :) Today I am beginning a new series of posts specifically aimed at Python beginners. The concept is rather simple: I'll do a fun project, in as few lines of code as possible, and will try out as many new tools as possible.
For example, today we will learn to use the Twilio API, the Twitch API, and we'll see how to deploy the project on Heroku. I'll show you how you can have your own "Twitch Live" SMS notifier, in 30 lines of codes, and for 12 cents a month.
Prerequisite: You only need to know how to run Python on your machine and some basic commands in git (commit & push). If you need help with these, I can recommend these 2 articles to you:
Python 3 Installation & Setup Guide
The Ultimate Git Command Tutorial for Beginners from Adrian Hajdin.
What you'll learn:
What you will build:
The specifications are simple: we want to receive an SMS as soon as a specific Twitcher is live streaming. We want to know when this person is going live and when they leave streaming. We want this whole thing to run by itself, all day long.
We will split the project into 3 parts. First, we will see how to programmatically know if a particular Twitcher is online. Then we will see how to receive an SMS when this happens. We will finish by seeing how to make this piece of code run every X minutes, so we never miss another moment of our favorite streamer's life.
To know if a Twitcher is live, we can do two things: we can go to the Twitcher URL and try to see if the badge "Live" is there.
Screenshot of a Twitcher live streaming.
This process involves scraping and is not easily doable in Python in less than 20 or so lines of code. Twitch runs a lot of JS code and a simple request.get() won't be enough.
For scraping to work, in this case, we would need to scrape this page inside Chrome to get the same content like what you see in the screenshot. This is doable, but it will take much more than 30 lines of code. If you'd like to learn more, don't hesitate to check my recent web scraping guide.
So instead of trying to scrape Twitch, we will use their API. For those unfamiliar with the term, an API is a programmatic interface that allows websites to expose their features and data to anyone, mainly developers. In Twitch's case, their API is exposed through HTTP, witch means that we can have lots of information and do lots of things by just making a simple HTTP request..
With your API key in hand, we can now query the Twitch API to have the information we want, so let's begin to code. The following snippet just consumes the Twitch API with the correct parameters and prints the response.
# requests is the go to package in python to make http request # import requests # This is one of the route where Twich expose data, # They have many more: endpoint = "?" # In order to authenticate we need to pass our api key through header headers = {"Client-ID": "<YOUR-CLIENT-ID>"} # The previously set endpoint needs some parameter, here, the Twitcher we want to follow # Disclaimer, I don't even know who this is, but he was the first one on Twich to have a live stream so I could have nice examples params = {"user_login": "Solary"} # It is now time to make the actual request response = request.get(endpoint, params=params, headers=headers) print(response.json())
The output should look like this:
{ 'data':[ { 'id':'35289543872', 'user_id':'174955366', 'user_name':'Solary', 'game_id':'21779', 'type':'live', 'title':"Wakz duoQ w/ Tioo - GM 400LP - On récupère le chall après les -250LP d'inactivité !", 'viewer_count':4073, 'started_at':'2019-08-14T07:01:59Z', 'language':'fr', 'thumbnail_url':'-{width}x{height}.jpg', 'tag_ids':[ '6f655045-9989-4ef7-8f85-1edcec42d648' ] } ], 'pagination':{ 'cursor':'eyJiIjpudWxsLCJhIjp7Ik9mZnNldCI6MX19' } }
This data format is called JSON and is easily readable. The
data object is an array that contains all the currently active streams. The key
type ensures that the stream is currently
live. This key will be empty otherwise (in case of an error, for example).
So if we want to create a boolean variable in Python that stores whether the current user is streaming, all we have to append to our code is:
json_response = response.json() # We get only streams streams = json_response.get('data', []) # We create a small function, (a lambda), that tests if a stream is live or not is_active = lambda stream: stream.get('type') == 'live' # We filter our array of streams with this function so we only keep streams that are active streams_active = filter(is_active, streams) # any returns True if streams_active has at least one element, else False at_least_one_stream_active = any(streams_active) print(at_least_one_stream_active)
At this point,
at_least_one_stream_active is True when your favourite Twitcher is live.
Let's now see how to get notified by SMS.
So to send a text to ourselves, we will use the Twilio API. Just go over there and create an account. When asked to confirm your phone number, please use the phone number you want to use in this project. This way you'll be able to use the $15 of free credit Twilio offers to new users. At around 1 cent a text, it should be enough for your bot to run for one year.
If you go on the console, you'll see your
Account SID and your
Auth Token , save them for later. Also click on the big red button "Get My Trial Number", follow the step, and save this one for later too.
Sending a text with the Twilio Python API is very easy, as they provide a package that does the annoying stuff for you. Install the package with
pip install Twilio and just do:
from twilio.rest import Client client = Client(<Your Account SID>, <Your Auth Token>) client.messages.create( body='Test MSG',from_=<Your Trial Number>,to=<Your Real Number>)
And that is all you need to send yourself a text, amazing right?>)
This snippet works great, but should that snippet run every minute on a server, as soon as our favorite Twitcher goes live we will receive an SMS every minute.
We need a way to store the fact that we were already notified that our Twitcher is live and that we don't need to be notified anymore.
The good thing with the Twilio API is that it offers a way to retrieve our message history, so we just have to retrieve the last SMS we sent to see if we already sent a text notifying us that the twitcher is live.
Here what we are going do to in pseudocode:
if favorite_twitcher_live and last_sent_sms is not live_notification: send_live_notification() if not favorite_twitcher_live and last_sent_sms is live_notification: send_live_is_over_notification()
This way we will receive a text as soon as the stream starts, as well as when it is over. This way we won't get spammed - perfect right? Let's code it:
# reusing our Twilio client last_messages_sent = client.messages.list(limit=1) last_message_id = last_messages_sent[0].sid last_message_data = client.messages(last_message_id).fetch() last_message_content = last_message_data.body
Let's now put everything together again:
import requests from twilio.rest import Client client = Client(<Your Account SID>, <Your Auth Token>)) last_messages_sent = client.messages.list(limit=1) if last_messages_sent: last_message_id = last_messages_sent[0].sid last_message_data = client.messages(last_message_id).fetch() last_message_content = last_message_data.body online_notified = "LIVE" in last_message_content offline_notified = not online_notified else: online_notified, offline_notified = False, False if at_least_one_stream_active and not online_notified: client.messages.create(body='LIVE !!!',from_=<Your Trial Number>,to=<Your Real Number>) if not at_least_one_stream_active and not offline_notified: client.messages.create(body='OFFLINE !!!',from_=<Your Trial Number>,to=<Your Real Number>)
And voilà!
You now have a snippet of code, in less than 30 lines of Python, that will send you a text a soon as your favourite Twitcher goes Online / Offline and without spamming you.
We just now need a way to host and run this snippet every X minutes.
To host and run this snippet we will use Heroku. Heroku is honestly one of the easiest ways to host an app on the web. The downside is that it is really expensive compared to other solutions out there. Fortunately for us, they have a generous free plan that will allow us to do what we want for almost nothing.
If you don't already, you need to create a Heroku account. You also need to download and install the Heroku client.
You now have to move your Python script to its own folder, don't forget to add a
requirements.txt file in it. The content of the latter begins:
requests twilio
This is to ensure that Heroku downloads the correct dependencies.
cd into this folder and just do a
heroku create --app <app name>.
If you go on your app dashboard you'll see your new app.
We now need to initialize a git repo and push the code on Heroku:
git init heroku git:remote -a <app name> git add . git commit -am 'Deploy breakthrough script' git push heroku master
Your app is now on Heroku, but it is not doing anything. Since this little script can't accept HTTP requests, going to
<app name>.herokuapp.com won't do anything. But that should not be a problem.
To have this script running 24/7 we need to use a simple Heroku add-on call "Heroku Scheduler". To install this add-on, click on the "Configure Add-ons" button on your app dashboard.
Then, on the search bar, look for Heroku Scheduler:
Click on the result, and click on "Provision"
If you go back to your App dashboard, you'll see the add-on:
Click on the "Heroku Scheduler" link to configure a job. Then click on "Create Job". Here select "10 minutes", and for run command select
python <name_of_your_script>.py. Click on "Save job".
While everything we used so far on Heroku is free, the Heroku Scheduler will run the job on the $25/month instance, but prorated to the second. Since this script approximately takes 3 seconds to run, for this script to run every 10 minutes you should just have to spend 12 cents a month.Ideas for improvements
I hope you liked this project and that you had fun putting it into place. In less than 30 lines of code, we did a lot, but this whole thing is far from perfect. Here are a few ideas to improve it:
Do not hesitate to tell me in the comments if you have more ideas.
I hope that you liked this post and that you learned things reading it. I truly believe that this kind of project is one of the best ways to learn new tools and concepts, I recently launched a web scraping API where I learned a lot while making it.
Please tell me in the comments if you liked this format and if you want to do more.
I have many other ideas, and I hope you will like them. Do not hesitate to share what other things you build with this snippet, possibilities are endless.
Happy Coding.
Pierre: | https://morioh.com/p/7b8fd4bd89cb | CC-MAIN-2019-47 | refinedweb | 2,099 | 71.14 |
Closed Bug 391713 (simplearia) Opened 13 years ago Closed 13 years ago
Simplify ARIA roles & attributes in text/html -- deal with the namespace dependency
Categories
(Core :: Disability Access APIs, defect)
Tracking
()
People
(Reporter: aaronlev, Assigned: aaronlev)
References
(Blocks 1 open bug)
Details
(Keywords: access)
HTML 5 is the way forward -- most likely XHTML usage will remain small. Therefore namespaces are a signicant barrier for ARIA adoption. It's too hard to use them in text/html. So at least in text/HTML we should allow ARIA roles and attributes without looking at what the namespace is. That way, whatever solution the HTML 5 community comes up with for ARIA attribute usage, the Firefox 3 implementation will be ready.
It would still be strict in application/xml+xhtml
One problem, I suppose we should allow the author to do both: setAttribute("aaa:required", "true") or setAttributeNS("", "aaa:required", "true") or setAttribute("state:required", "true") etc. There could be multiple attributes on the same node for required. But, I think we could deal with that. And then there are HTML attributes with the same name as the ARIA attribute. On might have disabled="disabled" instead of disabled="true" in HTML. In fact, in HTML disabled="false" means it's disabled -- any value on the disabled attribute.
Another possibility is to hardcode the meaning of "aaa:" into the HTML parser.
Possibilities: 1) Relax namespace checking in text/html, dont' require a namespace. Allow ARIA attributes to use directly in markup. Cons: Possible conflicts with attributes. For example, disabled="false" on an form control in HTML means it is actually disabled, because any value turns it on. 2) Recognize actual attribute name prefix "aaa:foo" as ARIA attribute names in Mozilla's DOM to a11y API mapping code. Pros: authors can just use setAttribute() and it will also set an attribute with the same name in IE and other browsers. The script and DOM will look the same. Consistently having "aaa:" as the prefix might simplify authoring and testing. Cons: HTML community may not like requiring it to have "aaa:" prefix. They may want some other kind of prefix. 3) Hardcode meaning of "aaa:" into parser Cons: Requires change to parser Script author still has to use setAttributeNS() in browsers with namespaces vs. setAttribute() in IE. Script and DOM now look different in IE vs. Firefox. Not sure what the benefit is over #2
Summary: Should we relax namespace requirements for ARIA roles and attributes? → Should we relax namespace requirements for ARIA roles and attributes, when used in text/html?
And for roles, I suggest that wairoles no longer require a namespace prefix at all, and are just recognized. If there is no prefix for a role name it would just default to looking in xhtml (already does) and wairoles.
Ah, I remember the pro of #3 over #2. The author can use setAttribute() but the mozilla/accessible code doesn't need to check for both "aaa:attributename" and the actual namespaced attribute.
To me, #1 seems preferable except for name conflicts. However, to the extent there are name conflicts, is it even necessary (in HTML) to use ARIA attributes instead of the HTML5 disabled and required boolean attributes? #2 scares me, because using the colon in the local name of the attribute would be namespace-ill-formed on the XHTML side and, therefore, would make the HTML and XHTML DOMs diverge for good, which would make the migration barrier between the two even higher. Doing something like #3 keeps popping up in the context of HTML5. I see some kind of inevitability to it unless it is decided specifically to resist the introduction of namespaces to HTML5. However, due to legacy issues with IE and existing content, this is going to be very difficult to spec, which is why putting something like it in Gecko in the Firefox 3 time frame does not seem like a good idea to me. So in summary, I suggest exploring doing #1 in a way that doesn't use ARIA true/false attributes but HTML-style boolean attributes where the presence--not the value matters. CCing Hixie.
> is it even necessary (in HTML) to use ARIA attributes > instead of the HTML5 disabled and required boolean attributes? Some of them are slightly different. For example, checked="false" means something is checked in HTML. In ARIA it means that it is not checked, but potentially is checkable. If the checked attribute is missing in ARIA this indicates that the element is not checkable. For example this is useful on tree/menu/list items that have a checkbox.
Looking at the list of properties, it does seem a bit heavy to put them into HTML wholesale. Hixie?
Summary: Should we relax namespace requirements for ARIA roles and attributes, when used in text/html? → Simplify ARIA roles & attributes in text/html -- deal with the namespace dependency
I spun off the simpler role prefix issue (not requiring a QName for WAI role usage when in text/html) to bug 395909.
Alias: simplearia
Is it worth to keep this bug open in the light of bug 398910? Is this bug still actual?
Dealt with via the dependencies.
Status: NEW → RESOLVED
Closed: 13 years ago
Resolution: --- → FIXED | https://bugzilla.mozilla.org/show_bug.cgi?id=391713 | CC-MAIN-2020-29 | refinedweb | 873 | 71.75 |
Hey guys.
I just had a problem with converting, and you guys helped me fixed it. So, here is my finished code:
Code :
import java.util.Scanner; public class RandomKeyCode { public static void generate() { double c1, c2, c3, c4, c5; String ccc1, ccc2, ccc3, ccc4, ccc5; String[] choice = new String[5]; String keycode; c1 = ((Math.random()*46)+1); ccc1 = find(c1); c2 = ((Math.random()*46)+1); ccc2 = find(c2); c3 = ((Math.random()*46)+1); ccc3 = find(c3); c4 = ((Math.random()*46)+1); ccc4 = find(c4); c5 = ((Math.random()*46)+1); ccc5 = find(c5); choice[0] = ccc1; choice[1] = ccc2; choice[2] = ccc3; choice[3] = ccc4; choice[4] = ccc5; keycode = choice[0] + choice[1] + choice[2] + choice[3] + choice[4]; System.out.print("Your keycode is: " + keycode + "."); System.out.print("Your numbers were " + c1 + " " + c2 + " " + c3 + " " + c4 + " " + c5); } public static void enter() { } public static String find(double c) { String out = null; switch((byte = null; //question variable Scanner scan = new Scanner(System.in); System.out.print("Do you want to generate your key code, or do you want to enter it?"); q = scan.nextLine(); switch(q.toLowerCase()) { case("generate") : generate(); case("enter") : enter(); } } }
So, I try to do it and here is the output:
Do you want to generate your key code, or do you want to enter it?generate
Your keycode is: ))))).Your numbers were 2.2230319103237237 40.409992358385196 46.01823508515554 8.75421673376253 7.440266533810639
As you can see, it gives me random numbers, but since the switch can't handle the output of the decimal numbers, the switch defaults to the last character.
I need to know how to make the switch able to sense the difference or how to switch the type so that it won't give me decimals.
-Silent | http://www.javaprogrammingforums.com/%20whats-wrong-my-code/18879-switch-problem-printingthethread.html | CC-MAIN-2013-48 | refinedweb | 292 | 65.12 |
Password Brute-forcer in Python
Introduction: Password Brute-forcer in Python
Introduction
To be clear, while this is a tutorial for how to create a password brute-forcer, I am not condoning hacking into anyone's systems or accounts. This is a very inefficient method which I decided to upload as I thought that many others may find it to be an interesting task (or just want some nerdy bragging points). If you wish to test it out using pyautogui: I recommend creating a website in html that does not use Capatcha and has a simple password and hosting it locally so that you can attempt to access that. Now before you continue to read on: if you want to create this entirely on your own then I do not recommend continuing to read on past the 1st section (which you will need) as this tutorial will contain many hints as this is relatively advanced programming. Likewise if you just want the code itself do not bother reading the whole (just the 1st part of the 1st section) tutorial as I attach a copy of the code below. There are also certain sections that refer to pyautogui, if you wish to only "print" or match the passwords then ignore these sections but if you want python to use your keyboard to type out the passwords then you will need to follow those instructions.
Step 1: Downloading Modules and Importing Built in Ones.
PyAutoGUI download (ignore this section if you don't want to use the keyboard inputs) you will still need to follow this step if all you want is the code
You will need to import itertools and you may also want to import time but it is not necessary (they are both built in functions)
If you don't want to have any more help than this I would strongly recommend looking into these modules if you are not already familiar with them.
Step 2: Create Your Starting Variables
You will need to create a string called "Alphabet" that contains all of the characters you wish to use. You could also create a list but that would take a lot longer to type out and would be no more effective.
You will also need to create a string under the name "username" that is set to either input or the username you wish to use or if you are not using PyAutoGUI you will want to set a variable called "password" and set it to user input. You do not need have a password function for PyAutoGUI as you would most likely be entering the password into a password input box so instead you have a username for the program to type out.
If you want to time the process (recommended for not using PyAutoGUI) then you will need to create a variable called "start" and assign it the value time.time()
Finally, you will need to create an integer called "CharLength" and assign it a value of 1. This will be used later to tell the in-built function itertools.products() how long the combinations should be. You do not technically need to create this variable but otherwise itertools.products runs through combinations with 0 characters which, when collecting data (e.g. averages) can mess with statistics.
This should look like this (do not read this if you want to do it for yourself):
<p>import pyautogui</p><p>Alphabet = ("abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890-_.")</p><p>CharLength = 1</p><p>username = "pancakehax@gmail.com"</p>
or if you aren't using PyAutoGUI:
<p>Alphabet = ("abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890-_.")</p><p>Password = input("What is your password?\n") start = time.time() counter = 1 </p><p>CharLength = 1</p>
Step 3: Creating the Brute-forcer Part 1
You will need to create a "for" loop that continues to run while your CharLength variables is not larger than the maximum number of characters you want (I suggest 25). This is not necessary but if you are planning on leaving it running for a long time then you would most likely want it to stop at some point as once it gets past a certain number of characters, it is most likely not working correctly.
Within this for loop you want to create a variable (i recommend calling it passwords) and assigning it the value itertools.product(Alphabet, repeat = CharLength) the variable will now be a generator from which you need to yield. Remember not to just print this as that will not work.
The way in which you print the products of a generator is:
for i in [generator name]: <p>print(i)</p>
But this is also not yet perfect as it would return the values "('a',)('b',) ('c',) ('d',)" which would be less than ideal; in order to remove this problem you will need to create a string version of the output and use the ".replace" built in function to remove any parts of the output that are not part of the actual attempt. You should use this format:
i = str(i)<br>i = i.replace(",","")
After this it changes significantly depending on if you are using PyAutoGUI or not; follow the corresponding final part of the tutorial.
Step 4: Creating the Brute Forcer Part 2: With PyAutoGUI
Warning: this step is only for if you are using and have downloaded PyAutoGUI: if you have not then please use the next step instead.
You will now need to use "pyautogui.typewrite()" to type the variable you created under the name username. This is important because most sites have a username box and so you should create one that has the same but if you don't need to use it, just ignore this part. You could do it like this:
pyautogui.typewrite(username)
Afterwards you will need to use the pyautogui functions keyDown and keyUp in order to press the enter key so that the website knows you have finished typing the username.
you will then need to do the same but instead have the program type the password (ensuring it still presses enter).
Step 5: Creating the Brute Forcer Part 2: Without PyAutoGUI
If you are not using pyautogui then you will want to check if the current attempt is equal to the password the user has entered.
If it is then you should create a variable called "end" and assign it the value time.time() then create another variable called "timetaken" and make that end - start; this tells you how long it took to find the password. Then you should tell the user how long it took to find their password as well as how many attempts.
After this you should, using the formula counter/timetaken, tell the user how many attempts were made per second.
Finally you need to print off their password so that they know the program correctly identified their password.
Step 6: How You Code Should Look at the End
If you used PyAutoGUI:
import itertools import time import pyautogui Alphabet = ("abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890-_.") CharLength = 1 username = "pancakehax@gmail.com" for Index in range(25): passwords = (itertools.product(Alphabet, repeat = Index)) for i in passwords: i = str(i) i = i.replace("[", "") i = i.replace("]", "") i = i.replace("'", "") i = i.replace(" ", "") i = i.replace(",", "") i = i.replace("(", "") i = i.replace(")", "") pyautogui.typewrite(username) pyautogui.keyDown("enter") pyautogui.keyUp("enter") pyautogui.typewrite(i) pyautogui.keyDown("enter") pyautogui.keyUp("enter") Index += 1
If you did not:
<p>import itertools import time Alphabet = ("abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890-_.") Password = input("What is your password?\n"). start = time.time() counter = 1</p><p>CharLength = 1 for CharLength in range(25): passwords = (itertools.product(Alphabet, repeat = CharLength)) print("\n \n") print("currently working on passwords with ", CharLength, " chars") print("We are currently at ", (counter / (time.time() - start)), "attempts per seconds") print("It has been ", time.time() - start, " seconds!") print("We have tried ", counter, " possible passwords!") for i in passwords: counter += 1 i = str(i) i = i.replace("[", "") i = i.replace("]", "") i = i.replace("'", "") i = i.replace(" ", "") i = i.replace(",", "") i = i.replace("(", "") i = i.replace(")", "") if i == Password: end = time.time() timetaken = end - start print("Found it in ", timetaken, " seconds and ", counter, "attempts") print("That is ", counter / timetaken, " attempts per second!") print(i) input("Press enter when you have finished") exit()
thx!
Welcome, Hax!
(Check your inbox.)
Oh, and I voted!
Thanks for sharing! | http://www.instructables.com/id/Password-Brute-forcer-in-Python/ | CC-MAIN-2018-05 | refinedweb | 1,388 | 62.88 |
Opened 4 months ago
Closed 2 months ago
#21674 closed Bug (fixed)
django.utils.module_loading.import_by_path considered harmful
Description
The purpose of this function is to import whatever its argument points to.
If it fails, it should raise ImportError, signalling that an import failed.
Unfortunately, it catches exceptions, including ImportErrors, and re-raises ImproperlyConfigured. Such exception masking makes it needlessly hard to diagnose circular import problems, because it makes it look like the problem comes from inside Django. It becomes supremely perverse when some code in Django catches ImproperlyConfigured and things go wrong further down the line.
I understand that the original intent was to provide more frienly error messages, but I believe that ImportError is a perfectly fine and suitable exception and that replacing it with a more generic one is a net loss.
(I know I'm attacking an old dogma, but this is an easy step in the long standing "improved error reporting" project.)
Attachments (0)
Change History (14)
comment:1 Changed 3 months ago by timo
comment:2 Changed 3 months ago by aaugustin
Well, it's a private API...
comment:3 Changed 3 months ago by aaugustin
Hum, no, it's documented. Well we could provide a replacement in Django and then deprecate it. It was only added in 1.6.
comment:4 Changed 3 months ago by timo
- Triage Stage changed from Unreviewed to Accepted
comment:5 Changed 3 months ago by berkerpeksag
- Cc berker.peksag@… added
- Has patch set
- Owner changed from nobody to berkerpeksag
- Status changed from new to assigned
I've opened a pull request on GitHub:
I borrowed the name "import_string" from the Peak project:
comment:6 Changed 3 months ago by claudep
I think that the ImproperlyConfigured exception might make sense in some cases, especially when dotted_path is coming from a setting. What about adding a new keyword argument to import_by_path, something like: def import_by_path(dotted_path, error_prefix='', error_class=ImproperlyConfigured). Then it would be possible to overwrite the exception raised by that function, on a case-by-case basis.
comment:7 Changed 3 months ago by berkerpeksag
Ah, good idea. Thanks claudep.
How about adding a new keyword argument named suppress_import_error? (somewhat similar to)
def import_by_path(dotted_path, error_prefix='', suppress_import_error=True): # or raise_import_error=False # snip except ImportError as e: if not suppress_import_error: raise raise ImproperlyConfigured # snip
comment:8 Changed 3 months ago by aaugustin
Really, I'm not eager to add an error masking API such as error_class. What's wrong with a plain ImportError? Usually it's pretty clear.
If a _caller_ can provide additional information for its specific use-case, it can catch and re-raise an exception. Exception chaining makes this less of a problem on Python 3. As long as we support Python 2, I'm -1 on catching an exception and raising another one.
I would suggest adding a new function that simply perfoms the import and doesn't handle exceptions. We would use it wherever the error_prefix argument of import_by_path isn't needed. We would make this function public and recommend it rather than import_by_path. Regarding its name, import_string sounds reasonable.
comment:9 Changed 3 months ago by timo
- Patch needs improvement set
Patch needs improvement as noted by Aymeric. Usage of the deprecated function also needs to be removed as I noted on the PR.
comment:10 Changed 3 months ago by berkerpeksag
- Patch needs improvement unset
I've updated my pull request to adress Aymeric and Tim's feedbacks:
Thanks!
comment:11 Changed 2 months ago by aaugustin
- Triage Stage changed from Accepted to Ready for checkin
The PR has been heavily reviewed by Timo and improved by Berker. I think it's ready. Thank you very much.
Timo, I'll let you do the merge, in case you want to review Berker's latest changes first.
comment:12 Changed 2 months ago by timo
My only reservation is that I think it's a bit confusing to have to catch AttributeError and ValueError (besides ImportError) for all usage of import_string. Do you think catching these exceptions inside import_string and re-raising ImportError for these cases would be considered harmful?
comment:13 Changed 2 months ago by aaugustin
That would be acceptable.
The biggest problem is that Django raises and catches ImproperlyConfigured too liberally. As long as we avoid that, I'm happy.
comment:14 Changed 2 months ago by Tim Graham <timograham@…>
- Resolution set to fixed
- Status changed from assigned to closed
Do you see any way we can make the change backwards compatible? | https://code.djangoproject.com/ticket/21674 | CC-MAIN-2014-15 | refinedweb | 752 | 54.02 |
Details
- Type:
Bug
- Status:
Closed
- Priority:
Major
- Resolution: Fixed
- Affects Version/s: JRuby 1.6.4
- Fix Version/s: JRuby 1.6.5, JRuby 1.7.0.pre1
- Component/s: Standard Library
-
- Testcase included:
- Patch Submitted:Yes
- Number of attachments :
Description
Each iteration of String#scan should result in a unique MatchData returned by Regexp.last_match, with each MatchData object reflecting the results of its match. Thus if String#scan resulted in two matches, after the first match Regexp.last_match would have different values than Regexp.last_match would have after the second match.
However, in the current implementation each MatchData resulting from an invocation of String#scan has the values of the most recent match. For example, from the attached unit test:
def test_scan firstmatch = nil str = "testing" re = Regexp.new('(t[^t]*)') str.scan(re) do |match| if firstmatch.nil? firstmatch = Regexp.last_match assert_equal "tes", firstmatch[0] else secondmatch = Regexp.last_match assert_equal "ting", secondmatch[0] # not the same object assert firstmatch.object_id != secondmatch.object_id # should still be the value of the first match assert_equal "tes", firstmatch[0] end end end
although they are different objects (per the object_id assertions) firstmatch and secondmatch have the same results, so the assertion:
assert_equal "tes", firstmatch[0]
will fail, with firstmatch[0] equaling "ting", the results for the second match. (Note that this test succeeds with Ruby 1.8 and 1.9.)
The reason for this behavior is that during String#scan, a MatchData is created for each match. The MatchData object has an attribute "regs" (org.joni.Region), which refers to where the pattern matched in the string.
The issue is that when String#scan creates MatchData objects for each pattern match, each of the MatchData objects refer to the same Region instance. Subsequent matches result in the Region object being updated, and each MatchData object sharing a reference to that Region will have the same value, as used in MatchData#to_s and MatchData#captures.
The solution is to clone the Region object for each newly-created MatchData, as in the attached patch.
Activity
I merged and pushed these patches to both the master and 1.6 branches.
Jeff,
I tested the patch, and it looks good. Can you submit a git-formatted patch, though, so that I can sign off and give you credit for it?
Also, it would be nice if you can turn your test into a RubySpec.
Thanks. | http://jira.codehaus.org/browse/JRUBY-6141 | CC-MAIN-2014-35 | refinedweb | 400 | 57.57 |
Guys heres my problem:
I am trying to read an integer from the user(e.g. 12345)
How can i check if the pattern "34" exists in the first integer?
My constraint is that i cannot convert it to string and cannot use arrays.
Here is what i managed to write to print some of the patterns that exists in 12345:
import math
int1 = int(input("input an integer: "))
#I used this to find out how many digits are in the integer
count = math.ceil(math.log10(int1))
for i in range(count):
print (int1 % (10 ** (i+1)))
print (int1 // (10 ** (i+1))) | http://forums.devshed.com/python-programming-11/hopelessly-stuck-please-shed-light-951581.html | CC-MAIN-2015-48 | refinedweb | 103 | 78.18 |
On 11/6/06, Rahul Akolkar <rahul.akolkar@gmail.com> wrote:
> On 11/6/06, Rhys D Ulerich <rhys@us.ibm.com> wrote:
> > Hi all,
> >
> > Just wanted to report that using a custom action <something:send> for an
> > XML namespace different than SCXML's default causes trouble. Specifically,
> > having the backing Action class implement ExternalContent causes problems.
> > The <something:send> element matches just like it is a normal
> > <scxml:send> element and the Digester rules barf--
> >
> <snip/>
>
> Worth adding a test to the nightly suite for this, it should be
> distinguishable by namespace. Curious, which binaries are you using?
> (version 0.5, built from latest source etc.)
>
<snip/>
This was fixed already. I added a test case that confirms correct
behavior to the repository.
-Rahul
---------------------------------------------------------------------
To unsubscribe, e-mail: commons-user-unsubscribe@jakarta.apache.org
For additional commands, e-mail: commons-user-help@jakarta.apache.org | http://mail-archives.apache.org/mod_mbox/commons-user/200611.mbox/%3Cce1f2ea80611061741r2a19c2acg22d08c10d07acfa3@mail.gmail.com%3E | CC-MAIN-2014-23 | refinedweb | 146 | 52.05 |
An avowed goal of the inventors of XML was "XML documents should be human-legible and reasonably clear." While I like to think that "legible" means usable, I'm feeling that legibility is really a minimal standard; I think it's a polite way of saying "viewable with any text editor."
I've got some content (my Building Skills books) that I've edited with a number of tools. As I've changed tools, I've come to really understand what semantic markup means.
Once Upon A Time
When I started -- back in '00 or '01 -- I was taking notes on Python using BBEdit and other text-editor tools. That doesn't really count.
The first drafts of the Python book were written using AppleWorks; the predecessor to Apple's iWork Pages product. Any Mac text editor is a joy to use. Except, of course, that AppleWorks semantic markup wasn't the easiest thing to use. It was little more than the visual styles with meaningful names.
Then I converted the whole thing to XML.
DocBook Semantic Markup
The DocBook XML-based markup seemed to be the best choice for what I was doing. It was reasonably technically focused, and provided a degree of structure and formality.
To convert from AppleWorks, I exported the entire thing as text and then used the LEO Outlining Editor to painstakingly -- manually -- rework it into XML.
At this point, the XML tags were a visible part of the document, and editing the document means touching the tags. Not the easiest thing to do.
I switched to XMLmind's XXE. This was nice -- in a way. I didn't have to see the XML tags, but I was heavily constrained by the clunky way they handle the XML document structure. Double-clicking a word can lead to ambiguity on which level of tag you wanted to talk about.
The XML was "invisble" but the many-layered hierarchical structure was very much in my face.
RST Semantic Markup
After becoming a heavy user of Sphinx, I realized that I might be able to simplify my life by switching from XML to RST.
There are a number of gains when moving to RST.
- The document is simpler. It's approximately plain text, with a number of simple constraints.
- Editing is easier because the markup is both explicit and simple.
- The tooling is simpler. Sphinx pretty much does what I want with respect to publication.
There is just one big loss: semantic markup. DocBook documents are full of <acronym>TLA</acronym> to provide some meaningful classification behind the various words. It's relatively easy to replace these with RST's Interpreted Text Roles. The revised markup is :acronym:`TLA`.
The smaller, less relevant loss, is the inability to nest inline markup. I used nested markup to provide detailed <function><parameter>a</parameter></function> kind of descriptions. I think :code:`function(x)` is just as meaningful when it comes to analyzing and manipulating the XML with automated tools.
The Complete Set of Roles
I haven't finished the XML -> Sphinx transformation. However, I do have a list of roles that I'm working with.
Here's the list of literal conversions. Some of these have obvious Sphinx/RST replacements. Some don't. I haven't defined CSS markup styles for all of these -- but I could. Instead, I used the existing roles for presentation.
.. role:: parameter(literal).. role:: replaceable(literal).. role:: function(literal).. role:: exceptionname(literal).. role:: classname(literal).. role:: methodname(literal).. role:: varname(literal).. role:: envar(literal).. role:: filename(literal).. role:: code(literal).. role:: prompt(literal).. role:: userinput(literal).. role:: computeroutput(literal).. role:: guimenu(strong).. role:: guisubmenu(strong).. role:: guimenuitem(strong).. role:: guibutton(strong).. role:: guilabel(strong).. role:: keycap(strong).. role:: application(strong).. role:: command(strong).. role:: productname(strong).. role:: firstterm(emphasis).. role:: foreignphrase(emphasis).. role:: attribution.. role:: abbrev
The next big step is to handle roles that are more than a simple style difference. My benchmark is the :trademark: role.
Adding A Role
Here's what you do to add semantic markup role to your document processing tool stack.
First, write a small module to define the role.
Second, update Sphinx's conf.py to name your module. It goes in the extensions list.
Here's my module to define the trademark role.
import docutils.nodes
from docutils.parsers.rst import roles
def trademark_role(role, rawtext, text, lineno, inliner,
options={}, content=[]):
"""Build text followed by inline substitution '|trade|'
"""
roles.set_classes(options)
word= docutils.nodes.Text( text, rawtext )
symbol= docutils.nodes.substitution_reference( '|trade|', 'trade', refname='trade' )
return [word,symbol], []
def setup( app ):
app.add_role( "trademark", trademark_role )
Here's the tweak I made to my conf.py
import sys, os
project=os.path.join( "")
sys.path.append("/Users/slott/Documents/Writing/NonProg2.5/source")
extensions = ['sphinx.ext.autodoc', 'sphinx.ext.ifconfig', 'docbook_roles' ]
That's it. Now I have semantic markup that produces additional text (in this case the TM symbol). I don't think there are too many more examples like this. I'm still weeks away from finishing the conversion (and validating all the code samples again.)
But I think I've preserved the semantic content of my document in a simpler, easier to use set of tools. | http://slott-softwarearchitect.blogspot.com/2009/06/semantic-markup-rst-vs-xml.html | CC-MAIN-2017-47 | refinedweb | 870 | 58.79 |
Learn how to improve WebGL performance when creating complex scenes with Three.js library, by moving the render away from the main thread into a Web worker with
OffscreenCanvas. Your 3D will render better on low-end devises and the average performance will go up.
After I added a 3D WebGL model of an earth on my personal website, I found that I immediately lost 5% on Google Lighthouse.
In this article, I will show you how to win back the performance without sacrificing cross-browser compatibility with a tiny library that I wrote for this purpose.
The Problem
With Three.js it is easy to create complex WebGL scenes. Unfortunately, it has a price. Three.js will add around 563 KB to your JS bundle size (and due to its architecture it is not really tree-shakeable).
You may say that the average background image could have the same 500 KB. But every kilobyte of JavaScript costs more to your website’s overall performance than a kilobyte of image data. Latency and bandwidth are not the only things to consider if you aim for a fast website: it is also important to consider how much time will the CPU spend on processing your content. And on lower-end devices, processing resources can take more time than downloading them.
Your webpage will be effectively frozen while the browser processes 500KB of Three.js code, as executing JavaScript takes up the main thread. Your user will bot be able to interact with a page until a scene is fully rendered.
Web Workers and Offscreen Canvas
Web Workers is a solution to avoid page freeze during JS execution. It is a way to move some JavaScript code to a separated thread.
Unfortunately, multi-thread programming is very hard. To make it simpler, Web Workers do not have access to DOM. Only the main JavaScript thread has this access. However, Three.js requires and access to the
<canvas> node located in the DOM.
OffscreenCanvas is a solution to this problem. It allows you to transfer canvas access the to Web Worker. It is still thread safe as the main thread cannot access
<canvas> once you opt for this workaround.
Sounds like we got our bases covered, but here’s the problem: Offscreen Canvas API is supported by Google Chrome only.
However, even in the face of our main enemy, cross-browser issues, we shall not be afraid. Let’s use progressive enhancement: we will improve performance for Chrome and future browsers. Other browsers will run Three.js the old way in the main JavaScript thread.
We need to come up with a way to write a single file for two different environments, keeping in mind that many DOM APIs will not work inside the Web Worker.
The Solution
To hide all the hacks and keep the code readable I created a tiny offscreen-canvas JS library (just 400 bytes). The following examples will rely on it, but I will also explain how it works under the hood.
First, add
offscreen-canvas npm package to your project:
npm install offscreen-canvas
We will need to provide a separated JS file for Web Worker. Let’s create a separate JS bundle in webpack’s or Parcel’s config.
entry: { 'app': './src/app.js', + 'webgl-worker': './src/webgl-worker.js' }
Bundlers will add a cache buster to bundle’s file names in production. To use the name in our main JS file, let’s add a preload tag. The exact code will depend on the way you generate HTML.
<link type="preload" as="script" href="./webgl-worker.js"> </head>
Now we should get the canvas node and a worker URL in the main JS file.
import createWorker from 'offscreen-canvas/create-worker' const workerUrl = document.querySelector('[rel=preload][as=script]').href const canvas = document.querySelector('canvas') const worker = createWorker(canvas, workerUrl)
createWorker looks for
canvas.transferControlToOffscreen to detect
OffscreenCanvas support. If the browser supports it, the library will load JS files as a Web Worker. Otherwise, it will load the JS file as a regular script.
Now, let’s open
webgl-worker.js
import insideWorker from 'offscreen-canvas/inside-worker' const worker = insideWorker(e => { if (e.data.canvas) { // Here we will initialize Three.js } })
insideWorker checks if it was it loaded in Web Worker. Depending on the environment, it will use different ways to communicate with the main thread.
The library will execute the callback on any message from the main thread. The first message from
createWorker for our worker will always be the object with
{ canvas, width, height } to initialize canvas.
+ import { + WebGLRenderer, Scene, PerspectiveCamera, AmbientLight, + Mesh, SphereGeometry, MeshPhongMaterial + } from 'three' import insideWorker from 'offscreen-canvas/inside-worker' + const scene = new Scene() + const camera = new PerspectiveCamera(45, 1, 0.01, 1000) + scene.add(new AmbientLight(0x909090)) + + let sphere = new Mesh( + new SphereGeometry(0.5, 64, 64), + new MeshPhongMaterial() + ) + scene.add(sphere) + + let renderer + function render () { + renderer.render(scene, camera) + } const worker = insideWorker(e => { if (e.data.canvas) { + // canvas in Web Worker will not have size, we will set it manually to avoid errors from Three.js + if (!canvas.style) canvas.style = { width, height } + renderer = new WebGLRenderer({ canvas, antialias: true }) + renderer.setPixelRatio(pixelRatio) + renderer.setSize(width, height) + + render() } })
While creating an initial state of the scene, we can find some error messages from Three.js. Not all the DOM APIs are available in a Web Worker. For instance, there is no
document.createElement to load SVG texture. We will need a different loader for Web Worker and regular script environments. We can detect the environment by
worker.isWorker property:()
We rendered the initial state of the scene. But most of WebGL scenes need to react to user actions. It could be rotating a camera with a mouse. Or updating
canvas on window resize. Unfortunately, Web Worker doesn’t have access to any of the DOM’s events. We need to listen to events in the main thread and send messages to the worker:
import createWorker from 'offscreen-canvas/create-worker' const workerUrl = document.querySelector('[rel=preload][as=script]').href const canvas = document.querySelector('canvas') const worker = createWorker(canvas, workerUrl) + window.addEventListener('resize', () => { + worker.post({ + type: 'resize', width: canvas.clientWidth, height: canvas.clientHeight + }) + })
const worker = insideWorker(e => { if (e.data.canvas) { if (!canvas.style) canvas.style = { width, height } renderer = new WebGLRenderer({ canvas, antialias: true })() - } + } else if (e.data.type === 'resize') { + renderer.setSize(width, height) + render() + } })
The Result
Using
OffscreenCanvas, I fixed UI freezes on my personal site in Chrome and got a full 100 score on Google Lighthouse. And my WebGL scene still works in all other browsers.
You can check the result: demo and source code for main thread and worker.
Discussion (3)
Oh jesus, the ridiculous things that we have to do with JS!
obviamente después de leer tu articulo me puse a trabajar, es interesante el rendimento que se puede usar cuando se tiene acceso a los diferentes hilos que cuenta el celular actualmente uso un procesador con un cluster lte de 8 núcleos crees que se pueda usar lo mismo para hacer render como threejs.org/examples/?q=ray#raytra...
Similar to worker-loader from webpack | https://dev.to/evilmartians/faster-webgl-three-js-3d-graphics-with-offscreencanvas-and-web-workers-43he | CC-MAIN-2021-49 | refinedweb | 1,193 | 58.79 |
Member Since 2 Years Ago
3,140 Modify Or Extends {{ }} Functionalities.
You could create a Blade directive.
In your App service provider register it); ?>"; }); } }
And then in blade use it:
@convert($var)
Check this package which has more examples:
Replied to What Are Some Good Draggable Javscript Libraries?
Check this
For save and restore check this example
Replied to Laravel 8 Forms Build Library
If you want something like laravel 8, with components, check this one
It has support for bootstrap and tailwind, and livewire, binding and other helpers.
But if you are using jetstream then you can use their components see here
The package above offer more features.
Replied to Multi Auth With Roles And Permissions
Each auth has it's own Guard in Laravel when you setup multi auth.
Then in spatie you need to specify which guard name you want to use for that role or permission you create or check. See
It's more simple to have just one auth and multiple roles/permissions that manage, and most probably is enough.
There are some package that can help you build multi auth skeleton in case you want go with it, see:
Replied to How To Build Ajax Registration And Login System In Laravel 8
With this vanilla js and axios you can process your login and register via ajax. Can and should be improved maybe instead of alert use some toast notifications. You could also display each single errors from server and/or add client side validation. In case you can also show validation errors appending after input. You don't need to change server code if you are using Laravel UI or Fortify. The response will be json when you make ajax request.
const formElement = document.querySelector('.js-ajax-form') formElement.addEventListener("submit", function (event) { event.preventDefault() const form = this const data = new FormData(form) const url = form.getAttribute('action') const method = form.getAttribute('method') axios({ method: method, url: url, data: data, headers: {'Content-Type': 'multipart/form-data'} }) .then(function (response) { alert('Successfully logged-in'); // Reload page so you will be redirected to default page defined in Laravel window.location.reload() }) .catch(function (error) { let message = 'An error occured.'; // Default error message if (typeof error.response !== 'undefined' && typeof error.response.data !== 'undefined') { // Error message from server if (typeof error.response.data.message !== 'undefined') message = error.response.data.message // Access input errors using: if (typeof error.response.data.errors !== 'undefined') { let errors = error.response.data.errors for (let input in errors) { if (errors.hasOwnProperty(input)) { //errors[input].join('<br/>') } } } } alert(message); }) })
Replied to Why Actions In Laravel 8?
I use also Spatie permissions package :) But I agree that it has unecessary features for my need (for avoid to use word abstractions :)... However, I refactor the Models of spatie permissions by removing guard and polymorphic relations, and using only constraint relation (users, roles, user_permissions, roles_permissions). I gain a bit in performance. Most of my package that I use are from spatie. And I customize a few to fit my need.
Replied to Why Actions In Laravel 8?
I'm glad we were able to understand each other.
From this I can see that you do not want to depend on packages and want to do as much is possible manually so you can have full control over it.
Yes but I don't want that you maybe misunderstand. I don't want that you think that I want re-invent the wheel. If I need a package, backend or frontend, I am more than happy to use it. But of course if I don't need it, then why I should keep it? Who want?
Now regarding your last questions regarding Livewire and Inertia. First about Livewire. Shortly, it's a good package for those who are not good with JS. It allow to create interactive page writing only PHP. But for a full stack developer who love frontend development like me, then is not solving any problem, rather is adding only new problem to solve. Many love concept of making interactive page without writing JS, and Caleb do good job delivering something that many need. But comparing it to original idea which it come from, which is Phoenix LiveView (), Livewire is just ajax requests vs socket of Phenix LiveView. Totally two different world.
I make the same result of Jetstream with Fortify and now doing the same with Laravel UI, but even better since all page and forms (also auth forms) is loaded via ajax using axios and turbolinks. So the same interactive UI which aim to offer Livewire.
I also think that is still not mature project, but that is minor issue, considering it popularity it will become mature very soon. But at the current stage I would not use it in production.
Alpine has similar issue, I just don't like to put JS inside HTML and trying to get my head around how to solve something that I can quickly and more eleganty do with plain vanilla JS. There is another one which do similar things, and it can be good for developers who can't work with JS.
In my opinion nowday developers must consider to learn also JS, and Livewire has potential to delay this if developer use it.
I could say more things specifically about Livewire that I found, but I think and hope you get my main points.
Inertia on other hand I like a bit more because you still write JS on frontend, and if you change backend, you can still use your frontend, while with Livewire you can't. If I have to choose between the two, then Inertia would be winner. But maybe I am a bit too conservative and still prefer old fashion with API on backend and routes via VueJS (my preferred).
Last thing not less important. I can't work with Tailwind. I really try but I can't. Maybe after so long time working with Bootstrap and other OOCSS I can't change my habit and switch to something so "low-level" to use Adam words. Maybe is only question of habit. I like the Adam TailwindUI, and I consider to use it, but for now I can do just with Bootstrap much faster what I can with tailwind. There are also several things that seem TailwindCSS can't solve. But that is another story.
Replied to Why Actions In Laravel 8?
Nope? In general both are first party packages that provide abstractions for problems they are solving.
We could now speculate with words, but I see Cashier solving 1 problem, extracting logic that can be re-used in several project. While a scaffolding, or boilerplate such as Jetstream doesn't solve 1 problem, has authentication, team, UI for 2 different frontend, etc. But taking into account your answer could be seen as the same.
So you are saying that Jetstream is an unnecessary abstraction and people should use laravel/ui instead if they do not plan to use Livewire/Inertia?
Seem that you finally get my point. Thanks God. I mean exactly that. IF they don't plan to use Livewire/Inertia BUT even if they plan to use Livewire or Inertia there is unecessary abstraction. Because they will use Livewire OR Inertia. Is necessary abstraction only if they use both.
So you are against it cause it makes you use Livewire or Inertia?
This will raise I think another discussion for another week, so I think is better for me to not comment this. It's that fine for you or you would like to hear my opinion regarding this?
Replied to Why Actions In Laravel 8?
@jlrdw I know that "secret". I am not sure to get your point. I am not saying anything against OOP or separation of code. Working with any MVC framework you are already forced to separate your code in very well manner. With Laravel you have also several others way to separate your logic (form requests, policies, middleware, service providers, resources, etc.).
@bugsysha if working with Laravel standard API is "procedural solution" then we have different view what is a procedural solution. Experienced developer make worry about abstraction when there is need to worry about abstractions. Where there is not, then keep it simple. The refactoring and abstraction stuff, ie polishing code, is something that I enjoy to do, is not something to fix, but rather something to improve. And what I see there is not a general abstraction right for any project, rather is something specific to application/project.
DRY in Jetstream is only to handle multiple frontend which I and most probably you and most of us will never need to manage. It's a solution dedicated to Jetstream. Is not a solution for any end application.
You can't compare Laravel Cashier with Jetstream, first is "an expressive, fluent interface to Stripe's subscription billing services." while second is "a beautifully designed application scaffolding for Laravel.".
Can you see difference?
Last, I am not saying Jetstream implementation is bad, but rather is not appropriate for common usage as what aim to be a "scaffolding application".
In my very first reply I didn't say "stay away from Jetstream, that is bad implementation", but I start with this word "I think that it depend on project that you are working.".
Awarded Best Reply on LV8 Jetstream /register: Set Default Value From Get-parameter
Inside component prop
:value don't use {{ }}
:value="old('code', request()->get('code'))"
Replied to LV8 Jetstream /register: Set Default Value From Get-parameter
Inside component prop
:value don't use {{ }}
:value="old('code', request()->get('code'))"
Replied to Why Actions In Laravel 8?
Point is that all apps tend to get bigger. So why postpone necessary?
I think that I implicit answer this questions in my 2 long answers after it. But for the sake of clarity, I will answer explicity.
I don't think that all apps tend to get bigger. I make for many customers web apps and mobile apps that simple remain as-is since first release, for years. Or they don't have cost for further development, or they close business for other reasons. Or they are happy with current stage. I have even my commercial social network application still running with PHP 5.x and PhalconPHP 1, still being used by several customers. Now nearly 2 years not updated apart minor fixes. Technology move fast, and I see that most of your application is more easy to refactor than upgrade to new one. This happen with an Angular 1 app, with several Ionic app, Phalcon PHP, and I think also for switching from Laravel 4 to 5.
I don't start a project thinking on abstractions immediately, I do that when in phase of polishing the code, when I see necessity of extraction of code. Refactoring is part of that process.
I think an inexperienced programmer starts thinking about abstractions at first, I used to do that too. But it is impossible to predict the evolution of the code unless you do a month of analysis with UML digrams and everything else. They raise development costs that customers are not happy to pay. I'm not saying they don't need to be done, on the contrary, in some projects they are essential. But in small to medium projects they are not. I prefer to keep the code as simple as possible at the beginning, even if that means repeating the code a few times. And only towards the end if I see the need I extract the code. But often this is not necessary.
Again, if you are large company working with many developers, then you may think differently. But being a solo developer or small team, you can't afford to work as a company.
Now, Jetstream/Fortify offer abstractions for handle 2 frontend, which I will never use. And this abstraction make code more complicated to manage. Something that I and other developers don't need make your starting point already complicated.
I hope that I did explain well myself.
May Jetstream be with you.
Replied to Why Actions In Laravel 8?
I've asked nicely but I've also tried to cut it short and avoid any conflict since as I said we have different standards.
Actually, you have not asked nothing :)
Quoting your message:
"@thewebartisan7 sorry since English is not my strong point I'm having really hard time understanding you. What you are saying is so confusing to me cause everything sounds inconsistent. You say one thing then the other."
No questions here :)
It's just that the way you write is confusing to me or maybe that is caused by my interpretation of what you are trying to say.
I agree that is a mix of two. English is not my primary language, but I think you are also making your own interpretation. Considering the belief that most of communication is body language, part is the tone of voice, and very small part is the actual words spoken, it's easy to get misunderstand via written communication such as this. It is said that about 10% is verbal, so we missed 90% of the communication. But try reading again, and see if you agree almost with Spatie quote. You can't disagre with that.
All best and may the Schwartz be with you.
It took me a while to understand where this sentence came from. But then I remembered... Spaceballs :) I like more the origin of this phrase, and is not "May the force be with you", but the original is "May the Lord be with you."
Bless
Replied to PSR-4 And Upgrading To Composer 2
For your Transaction package should be:
namespace Vendor\Transaction\Events;
If you are working for a company, then you can use as Vendor CompanyName or your name, or your project name, must be unique so that doesn't get in conflict with other vendor name.
SubNamespaceNames, in your case Transaction, MUST corresponds to at least one “base directory”. So in Transaction package the base directory is transaction, so is correct Transaction.
I suggest to read this page: is not long and has all specifications and examples.
Replied to PSR-4 And Upgrading To Composer 2
PSR-4 specification must have fully qualified class name in form:
<NamespaceName>(<SubNamespaceNames>)*<ClassName>
Your namespace missing a top-level namespace name, the vendor name.
Read full specification here:
Replied to Livewire Components Rendor Method
With trait you have also somethinig like mount, like boot of eloquent model, see WithPagination, initalizeTraitName()
Replied to Livewire Components Rendor Method
I don't think you can return a component. But I would create a trait, something like trait
Livewire\WithPagination where you add shared code between the two component.
So you have still 2 component, ClientsIndex and EmployesIndeex and one trait, example ResourceIndex, which you can include in both component, and both component return the same view.
Make sense?
Replied to Why Actions In Laravel 8?
@martinbean You can blaim Spatie, because this is their solutions, not mine.
Quote spatie:
"You might see a similarity between view models and Laravel resources. Remember that resources map one-to-one on a model, whereas view models may provide whatever data they want.
In our projects, we're actually using resources and view models combined:"
class PostViewModel { // … public function values(): array { return PostResource::make( $this->post ?? new Post() )->resolve(); } }
Replied to Why Actions In Laravel 8?
Replied to Why Actions In Laravel 8?
I just notice that you seem not interested to understand. When I don't understand something, I ask specific things that I don't understand. When you reply "What you are saying is so confusing to me cause everything sounds inconsistent. You say one thing then the other." that doesn't say too much what you don't understand. So is hard for me to try explain everything again.
Let me try explain with an example. Not sure if you know Spatie package which offer the so called view model pattern. Even if this may be good in large project, is unecessary for small project.
From Spatie blog about this view model pattern, quote:
"Now I know that this isn't a problem in small projects. When you're the only developer and have 20 controllers and maybe 20 view composers, it'll all fit in your head.
But what about the kind of projects we're writing about in this series? When you're working with several developers, in a codebase that counts thousands upon thousands lines of code, it won't all fit in your head anymore - certainly not on that scale."
I agree with this, and this resume of my points above, I think very clearly. Even a good pattern is not always required in all project.
Replied to Why Actions In Laravel 8?
I think that I was clear in my arguments. He who has ears to hear, let him hear. Bless
Replied to Why Actions In Laravel 8?
First you say that something that is more complex in your mind is simpler for me, and then you say that CRUD is complicated for me.
You are taking my words out of context and making allusions about what I wanted to say. For you is simpler Fortify/Jetstream vs Laravel UI is one thing. For you CRUD is something not "small", for me is something easy to do with 100 lines of code in controller, removing comments. This are two separated things. I then also point that (maybe) for you CRUD is complicated when working with package like Nova or Backpack. It was an assumption. There is not correlation, apart point out that such abstraction can only make things more complicated. Because Nova and Backpack has their use case in large project, and also Fortify/Jetstream has their use case. I am saying that in most small-medium project, such package like Nova/Backpack/Fortify/Jetstream are unecessary.
To explain a bit more what I am saying regarding CRUD being simple see this gist:
This is a generated controller with CRUD in admin for a dummy model. Removing comments and sorting/filtering/searching of query builder package, each methods has 2-3 lines of code, so maybe there are less than 100 lines of code.
Considering you have also API, then controllers will looks like:
Also this very slim.
For reusable code logic I use packages which I can share even between project, but make reusable a few lines of code make things only complicated when you need to put some conditions.
Everything is mostly a muscle memory at this point.
This happen only when you create a logic like Fortify/Jetstream or Nova/Backpack. You don't need to use memory to understand similar small part of code such as above controllers or such as Laravel UI controllers. I never need to remember how to do something with Laravel UI, because it's there easy to find once you open controllers. While with Fortify, after working a few weeks ago on it, I already forgot how to do even simple task on it. Such abstractions is not bad, I am not saying that. It has their use case. But in some project it's not good at all. I guess if you are using the same logic for all your code like Fortify/Jetstream by adding interfaces, and features enable/disable for each features you add in code. I don't think so.
I was just trying to have a quality discussion, but I guess that is not possible since we have different standards.
There is not standards in code, the right answer is depend on project you are working on.
I would just conclude with one questions: what is more easy exctract logic of Laravel UI in actions if you need them, or make Fortify/Jetstream like Laravel UI? First thing is possible, second is not. So as starter point for an auth, Laravel UI is winner. You can do with it the same things if you need.
No hard feeling for me, I think much more was said already in web, especially in reddit about this debat, and I see two different views on this. I personally agree with above, it depend.
Glad that OP has help understand the direction where he need to go.
Replied to Why Actions In Laravel 8?
Haven't worked on a project that had absolutely no need for reusability.
Of course we re-use many codes across projects and inside projects, from frontend to backend.
But there is not need in all projects to split code in multiple class, interfaces, etc. in all projects that you are working on, that is my point. If for you Fortify/Jetstream is semplification compared to Laravel UI then we have different understanding of semplification.
Projects that have admin CRUD is already not a small project in my mind. And if API is added to the mix then that is truly not a small-mid project.
Admin that do CRUD can be very easy when you do that with standard MVC logic, even if you have dozens of models. For you maybe use a CRUD like Nova or Backpack maybe is a semplification following your logic with Fortify/Jetstream, but for me is complication. You do 80% in 20% and the rest missing 20% in 80%, the famous rule. If you think that CRUD is something complicated, then you can learn from Spatie what means beyond the CRUD:
I have my own CRUD generator with a few click I build Edit, Update, Softdelete, Restore, Mass Actions, and much more with a single line of code or via browser UI, for admin and API, with Searching, Filtering, Autocomplete, etc. It generate files such as controllers, models, views, request, policies, etc. I am freelancer and I work on small-medium project mostly alone or with a small team of other freelancer, and I know very well most of patterns, and I try many of them. In the end you lose time thinking how to create the perfect abstraction, and come to a complexity that it's more hard to manage than re-create something similar.
I always create Request classes no matter what.
You don't use request classes with Foritfy, and I thnk you don't use in login. I also think that is unecessary to use in all forms, there are small case where you don't need them. There is no generic rule for everything.
Replied to Why Actions In Laravel 8?
that kind of structure does not make code reusable.
I point out that there is nothing to be re-used in most small-medium projects where you have admin, frontend and maybe API. Let's say user models, in admin area you do CRUD for all users, in frontend you allow logged-in user to edit profile. What is re-used is validation, that you can extract in Form Request. Something else can be placed in models, and in case you need to share logic you can create a Repository, or for example upload of avatar, you can create a Service class, Support or Helper, call it how you like. Same can be applied to any other models.
Taylor also said in video that this actions was created to handle multiple frontend, and in most project you have only 1 frontend. So is unecessary complexity added if you have just one frontend.
Replied to Why Actions In Laravel 8?
I think that it depend on project that you are working. In case of jetstream that has two frontend, livewire and inertia, actions is being used to share common logic.
In most small-medium project I think that is uncessary abstractions.
MVC is already a solid pattern, where you separated things, also Laravel offer Form Request which allow to share validations.
Most of time I have admin, frontend and maybe API. Validations are shared via Form Request, controllers are slim and most of logic is in Model and sometime in Repository.
For my small-medium project is pretty enough.
For other large project with many developers involved could make sense.
Replied to LinkenIn Login Using Socialite
I suppose you get this error
I get the same initially, then it's fixed after a few days, I think it takes some time for Linkedin to be approved.
Because the issue happen already a year ago when Linkedin changed their API in v2, see this issue:
Replied to LinkenIn Login Using Socialite
You need to be an approved developer to be able to use the r_basicprofile scope. See
Replied to Jquery Tools Min Problems
If you need only for gallery try search on google, there are many.
Example
It depend which feature you need, there are simple or complex.
Replied to Jquery Tools Min Problems
I even forgot that this existed... I use it something like 10 years ago... I would not use it, it's 8 years not updated, see
Google jquery sliders or gallery, or whatever you need, there are many around.
Replied to Toastr Notification Not Working
Your problem was that you calling toastr before you include toastr js plugin, the same you are doing was jquery:
Wrong:
<script> $(document).ready(function() { $('#sidebarCollapse').on('click', function() { $('#sidebar').toggleClass('active'); }); }); </script> @if (session()->has('success')) <script> toastr.success("{!! session()->get('success')!!}"); </script> @endif <script src=""></script> <script src=""></script>
Right:
<script src=""></script> <script src=""></script> // After including script add your script for jquery and toastr. <script> $(document).ready(function() { $('#sidebarCollapse').on('click', function() { $('#sidebar').toggleClass('active'); }); }); </script> @if (session()->has('success')) <script> toastr.success("{!! session()->get('success')!!}"); </script> @endif
Replied to Load Data From Checkboxes Array Ajax
In controller the problem seem to be here:
$services->each(function ($item) { $services = ([ 'price' => $item->price, 'name' => $item->name, ]); });
You are doing strange thing here, can you see?
On JS side your success method should looks something like:
if(data.service) { $.each( data.service, function(item){ $(".display_services").append( `<li class="list-group-item lh-condensed"> <div class="d-flex justify-content-between"> <span>${item.name}</span> <span class="text-muted">${item.price}</span> </div> </li>` ) } }); else{ $(".display_services").html('') }
I can't see the whole picture of your code, but there are several way to do this, and can be improved.
Maybe try to solve firstly how you can and in case you have some question, create new question, making it more specific.
Replied to Using A Modal To Delete A User
Try this:
// HTML button <button class="delete" data- Delete </button> // JS $('.delete').click(function(){ var userId = $(this).data('user'); // Now you have user id in variable userId console.log(userId); });
Replied to Use ENUMS With Select2 And Send To A Database
That package has a method
asSelectArray() which you can use to populate the select2, see
Another feature that you can consider is casting your model mail message with MailMessageType, see
Replied to Laravel 8: Allow Trailing Slashes In URL Generator
Try with this package:
Maybe you get some right direction or use it.
Good luck
Replied to Laravel User Registration And Email Verification With AJAX
You don't need separated controller, just use the same /register endpoint, which will handle both ajax and non ajax request, so also email verification if enabled. This work both with Laravel UI and Fortify.
Awarded Best Reply on Adding Laravel Rule To Livewire $rules[]
There is method:
public function rules() { return [ 'icao' => new AirlineScalesExist, ]; }
Replied to Adding Laravel Rule To Livewire $rules[]
There is method:
public function rules() { return [ 'icao' => new AirlineScalesExist, ]; }
Replied to Best Way To Handle Import: CSV With Related Photos
Not sure if you are already using some library, but check this which may help you write code:
It has nice API, and can make things easier.
Replied to How To Custom Response JSON Object Validator
Because you can have more than one validation messages, for this reasons is an array.
You can on client side loop over each messages and display all or only first.
If you want to handle this on server side, there is several way.
If you know upfront all field name, then you could:
return response()->json([ 'name' => $validator->errors()->first('name'), 'password' => $validator->errors()->first('password'), ], 400);
Something more generic could be to convert first in a collection, and then you can build new errors array, see
For example:
$errors = collect($validator->errors()) ->reverse() ->mapWithKeys(function ($item, $key) { return [$key => $item[0]]; });
But maybe there is better way with collection. I think reverse is required so that first message is not overriden by last.
To get first message on client side would be something like:
// Taking into account that you are using axios which return response in error.response.data error.response.data.errors[Object.keys(error.response.data.errors)[0]]
And for loops all errors:
let errors = error.response.data.errors for (let input in errors) { if (errors.hasOwnProperty(input)) { // First error invalidFeedback.innerHTML = errors[input][0] } }
Replied to A Better Way To Update User Profile When No Passwords Are Updated
A better way first of all is that is not required to make additional query, you can access current user from
$request->user() or
Auth::user()
$user = $request->user(); $data = $request->only(['name', 'email', 'password']); if (empty($data['password'])) unset($data['password']); else $data['password'] = Hash::make($data['password']); $user->fill($data)->save();
Maybe add also validation would be good.
Replied to @props: Variable Value Is Always Equal To Initialized Value Inside Laravel Blade Component
Did you pass testclass prop via component? Can you show how?
If you need just add more classes you can also merge attributes with
$attributes->merge(['class' =>'bg-transparent hover:bg-blue-500' ])
Replied to How To Split Libraries In Multiple Files Across Privates Packages/components
Try add in webpack.mix.js
mix.autoload({ jquery: ['$', 'jQuery'], });
But I don't understand why load in separated file manually, you could have one app.js and one bootstrap.js where all vendor are loaded, then in webpack.mix.js:
// Extract in vendor.js mix.extract([ 'jquery', 'bootstrap', 'popper.js' ]);
This will extract all your vendor in separated file.
Awarded Best Reply Vendor.js For Public And Admin
At the moment there is not a solution to handle this with mix, but there is a workaround, that I am also using.
npm install laravel-mix-merge-manifest --save-dev
webpack.admin.mix.jsand
webpack.app.mix.js
// file webpack.admin.mix.js const mix = require('laravel-mix'); require('laravel-mix-merge-manifest'); // Merge any existing manifest mix.mergeManifest(); mix.js('resources/js/admin/admin.js', 'public/js/admin') .extract(['jquery', 'bootstrap', 'popper.js']);
// file webpack.app.mix.js const mix = require('laravel-mix'); require('laravel-mix-merge-manifest'); // Merge any existing manifest mix.mergeManifest(); mix.js('resources/js/app/app.js', 'public/js/app') .extract(['vue']);
webpack.mix.jswith below code:
if (['admin', 'app'].includes(process.env.npm_config_section)) { require(`${__dirname}/webpack.${process.env.npm_config_section}.mix.js`) } else { console.log( '\x1b[41m%s\x1b[0m', 'Provide correct --section argument to build command: app or admin' ) throw new Error('Provide correct --section argument to build command!') }
// App npm --section=app run dev // Admin npm --section=admin run dev
The manifest will be updated and not overriden, so you will find something like:
{ "/js/admin/admin.js": "/js/admin/admin.js", "/js/admin/vendor.js": "/js/admin/vendor.js", "/js/admin/manifest.js": "/js/admin/manifest.js", "/js/app/vendor.js": "/js/app/vendor.js", "/js/app/app.js": "/js/app/app.js", "/js/app/manifest.js": "/js/app/manifest.js" }
And you can include them:
// App <script src="{{ mix('js/app/manifest.js') }}"></script> <script src="{{ mix('js/app/vendor.js') }}"></script> <script src="{{ mix('js/app/app.js') }}"></script> // Admin <script src="{{ mix('js/admin/manifest.js') }}"></script> <script src="{{ mix('js/admin/vendor.js') }}"></script> <script src="{{ mix('js/admin/admin.js') }}"></script>
Let me know how this work for you.
Additionally you can install and then run both with single command.
Replied to Mac Applications For Managing Icons?
Check this
Replied to @extends Not Working As Expected In 404 Error Page
I am sure that in errors page you can't get user. Even if user is logged-in, if you try to
dd(auth()->user()) it will return null.
But if you are not in errors page, I think that you can't use multiple
@extends like that you are doing.
It should works if you have single
@extends and then add conditional views inside
@extends, something like:
@extends( isset(Auth::user()->user_type1) ? 'layouts.app-user_type1' : (isset(Auth::user()->user_type2 ? 'layouts.app-user_type2' : 'layouts.app-admin' ) )
But if you have single column that define user type, you could something like:
@extends('layouts.app-'.Auth::user()->user_type)
Which is more readable.
Replied to @extends Not Working As Expected In 404 Error Page
You can't access current user in errors page. The errors is triggered before user is retrieved from database.
Replied to Get Title Value Of Role Collection
Or you can get all title of roles using below:
@foreach($users as $user) // You can change separator in implode() {{ $user->roles->pluck('title')->implode(' | ') }} // Separate by comma: {{ $user->roles->pluck('title')->implode(', ') }} @endforeach
Awarded Best Reply on RESTful API Many To Many
See here
$user->roles()->attach($roleId); $user->roles()->detach($roleId); $user->roles()->sync([1, 2, 3]); $user->roles()->syncWithoutDetaching([1, 2, 3]); $user->roles()->toggle([1, 2, 3]);
And there are others method. | https://laracasts.com/@thewebartisan7 | CC-MAIN-2020-45 | refinedweb | 5,599 | 64.1 |
Ske of skewness has applications in data analytics, machine learning and data science in per-processing of data. Moreover, if mean, median and mode of a data distribution coincides i.e mean = median = mode then. Then data set has skewness 0 i.e there is no asymmetry in data set.
Suppose a data set
0,10, 20,20, 30, 40,40,50,50, 50,50,40,30,20,10,0
Mean: 28.75
Median: 30.0
Mode: 50
Standard Deviation: 17.275343701356565
Skewness : -0.24321198774750508
The value of coef. of skewness is negative and this type of skewness in data distribution is called negative skewness.
Formula of skewness
Coef. of Skewness = 3(Mean-Median) /Standard Deviation
The statistics are calculated using the following python code
Python Code for Calculating Coefficient of Skewness
from scipy.stats import skew import numpy as np import statistics import matplotlib.pyplot as plt x = [0,10, 20,20, 30, 40,40,50,50, 50,50,40,30,20,10,0] print(x) mean= np.mean(x) median= np.median(x) mode= statistics.mode(x) std=np.std(x) print("Mean:", mean) print("Median:", median) print("Mode:", mode) print("Standard Deviation:", std) lines = plt.plot(x) plt.setp(lines, color='r', linewidth=2.0) print( "Skewness :" , skew(x)) plt.savefig("skewness.jpg")
Consider another data set
0,10, 20, 30,40,50,50,50,60,60,70,80,90,100,110,120,70,60,60, 50,50,50,40,30,20,10,0
Mean: 51.111111111111114
Median: 50.0
Mode: 50
Standard Deviation: 30.83208205669246
Skewness : 0.32780083058284104
The value of coef. of skewness is positive and this type of skewness in data distribution is called positive skewness.
Kurtosis-
Kurtosis measures “How much heavy tail a data distribution have”. Furthermore, it is used for outlier detection in a data set
that means how many values have different characteristics.
The formula for kurtosis is
Coef. Kurtosis =(X-μ)4/Variance
Python code for Kurtosis
import matplotlib.pyplot as plt
import numpy as np
import scipy.stats as stats
import math
from scipy.stats import kurtosis
mu = 0
variance = 1
sigma = math.sqrt(variance)
x = np.linspace(mu – 5*sigma, mu + 5*sigma, 50)
y1=stats.norm.pdf(x, mu, sigma)
print(“x”,x)
print(“y1”,y1)
plt.plot(x,y1)
print(“Kurtosis\n”, kurtosis(y1))
plt.savefig(“kurtosis.jpg”)
The normally distributed data set generated from the above python code is
1.48671951e-06 4.03963981e-06 1.05285406e-05 2.63211976e-05
6.31182642e-05 1.45183206e-04 3.20324125e-04 6.77914385e-04
1.37616968e-03 2.67966838e-03 5.00497661e-03 8.96674844e-03
1.54091915e-02 2.54001718e-02 4.01610804e-02 6.09096432e-02
8.86091674e-02 1.23646888e-01 1.65500632e-01 2.12484892e-01
2.61678710e-01 3.09115411e-01 3.50255414e-01 3.80680815e-01
3.96870719e-01 3.96870719e-01 3.80680815e-01 3.50255414e-01
3.09115411e-01 2.61678710e-01 2.12484892e-01 1.65500632e-01
1.23646888e-01 8.86091674e-02 6.09096432e-02 4.01610804e-02
2.54001718e-02 1.54091915e-02 8.96674844e-03 5.00497661e-03
2.67966838e-03 1.37616968e-03 6.77914385e-04 3.20324125e-04
1.45183206e-04 6.31182642e-05 2.63211976e-05 1.05285406e-05
4.03963981e-06 1.48671951e-06
Pot of data is
And kurtosis is -0.24249670483561347
Conclusion-
I this post, I have explained about skewness and kurtosis which is very important to understand data distribution. These both data analytics method are very important in machine learning, data science and big data analytics. Hope all these concepts I have explained will help you. | https://www.postnetwork.co/skewness-and-kurtosis/ | CC-MAIN-2020-24 | refinedweb | 609 | 61.93 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.