Q: How to Pivot-and-Sort for two columns in python?

I have a very large dataframe of customers, item categories and their prices. I would like to do some initial investigation: identify the top n=5 customers based on their TOTAL spending, and for each of those customers, identify the top categories they spend on. Then possibly make a plot in descending order showing the top customers, with their names as X and their spending as Y. For each, how do I show their shopping categories? This requires a pivot and a sort.

This is a sample-data generator, thanks to here.

import numpy as np
import pandas as pd
from numpy.core.defchararray import add

np.random.seed(42)
n = 20
cols = np.array(['cust', 'cat'])
arr1 = (np.random.randint(5, size=(n, 2)) // [2, 1]).astype(str)
df = pd.DataFrame(
    add(cols, arr1), columns=cols
).join(
    pd.DataFrame(np.random.rand(n, 1).round(2)).add_prefix('val')
)
print(df)

df.pivot_table(index=['cust'], values=['val0'], aggfunc=[np.sum])
df.pivot_table(index=['cust', 'cat'], values=['val0'], aggfunc=[np.size, np.sum])
# the order according to the previous line should be cust1, cust0, cust2. How to do that?

The following is the desired output in this case:

              size   sum
              val0  val0
cust  cat
cust1 cat4     6.0  4.27
      cat3     2.0  1.07
      cat2     2.0  0.98
      cat0     2.0  0.44
      cat1     2.0  0.43
cust0 cat1     1.0  0.94
      cat4     1.0  0.91
      cat2     1.0  0.66
      cat3     1.0  0.03
cust2 cat1     2.0  1.25

Thank you very much!

A: It is better to aggregate with sum here, to avoid a MultiIndex in the columns.
First aggregate with sum:

s = df.groupby('cust')['val0'].sum()
print (s)
cust
cust0    2.54
cust1    7.19
cust2    1.25
Name: val0, dtype: float64

Then get the top values with Series.nlargest:

top5 = s.nlargest(5)
print (top5)
cust
cust1    7.19
cust0    2.54
cust2    1.25
Name: val0, dtype: float64

If necessary, filter only the top-5 customers with boolean indexing and isin:

df1 = df[df['cust'].isin(top5.index)].copy()
#print(df1)

For the correct ordering of cust, create an ordered categorical, aggregate by both filtered columns, and finally sort by the first level (cust) and the size column:

df1['cust'] = pd.Categorical(df1['cust'], ordered=True, categories=top5.index)
df2 = (df1.groupby(['cust', 'cat'])['val0'].agg([np.size, np.sum])
          .sort_values(['cust', 'size'], ascending=[True, False])
          .reset_index())
print (df2)
    cust   cat  size   sum
0  cust1  cat4   6.0  4.27
1  cust1  cat0   2.0  0.44
2  cust1  cat1   2.0  0.43
3  cust1  cat2   2.0  0.98
4  cust1  cat3   2.0  1.07
5  cust0  cat1   1.0  0.94
6  cust0  cat2   1.0  0.66
7  cust0  cat3   1.0  0.03
8  cust0  cat4   1.0  0.91
9  cust2  cat1   2.0  1.25

Last, pivot and plot with DataFrame.plot.bar:

df2.pivot('cust', 'cat', 'size').plot.bar()
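Putting the steps together end to end: the sketch below regenerates comparable sample data (with a simpler generator than the question's, so the exact numbers differ) and uses keyword arguments for DataFrame.pivot, which newer pandas (2.0+) requires; observed=True keeps unused category combinations out of the groupby.

```python
import numpy as np
import pandas as pd

np.random.seed(42)
n = 20
# Simpler stand-in for the question's generator: 3 customers, 5 categories
df = pd.DataFrame({
    'cust': ['cust%d' % i for i in np.random.randint(3, size=n)],
    'cat':  ['cat%d' % i for i in np.random.randint(5, size=n)],
    'val0': np.random.rand(n).round(2),
})

# 1) total spend per customer, top 5
top5 = df.groupby('cust')['val0'].sum().nlargest(5)

# 2) keep only those customers, ordered by total spend
df1 = df[df['cust'].isin(top5.index)].copy()
df1['cust'] = pd.Categorical(df1['cust'], categories=top5.index, ordered=True)

# 3) per-customer category breakdown, biggest groups first
df2 = (df1.groupby(['cust', 'cat'], observed=True)['val0']
          .agg(size='size', sum='sum')
          .reset_index()
          .sort_values(['cust', 'size'], ascending=[True, False],
                       ignore_index=True))

# 4) pivot for plotting; pandas >= 2.0 needs keyword arguments here
wide = df2.pivot(index='cust', columns='cat', values='size')
# wide.plot.bar() would draw the chart (requires matplotlib)
```

Because 'cust' is an ordered categorical built from top5.index, the sort puts the biggest spender first without any manual reindexing.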
Sunrise at Angkor Wat

One of the things that Andy and I knew we wanted to do on our honeymoon was to watch the sunrise over Angkor Wat. The pictures that I had seen online left me breathless – a huge, beautiful temple that was hundreds of years old with the brightly colored sky behind it, all reflected upon itself in a pond. Even though I’m not one to get excited about waking up early, I knew that this was one thing that I could get out of bed for.

Getting to Angkor Wat for the Sunrise

The day before we decided to go, we worked with our hotel, the MotherHome Inn, to get a tuk tuk driver set up for the following morning. The pricing for a driver for the sunrise is a few dollars more expensive than a standard temple circuit – this cost is minimal and to be expected. The hotel also packed us breakfast to go, since we would miss the amazing breakfast spread that they offer every morning. We showed up in the lobby about 10 minutes before 5, when our driver was scheduled to arrive. We grabbed our breakfast, took a seat and waited. After about 5 minutes we started inquiring where our driver was, since we wanted to get there early so we could get in a good position to take photos. The front desk was apologetic and was on the phone trying to get in touch with the driver. Finally, at 5am, we called it and got another driver to take us around for the day. The ride to Angkor Wat was very refreshing that early in the morning. The sun had not yet risen, so everything was cool and the breeze from the tuk tuk was great. Once we arrived, we rushed to the gate so we could get the best possible spot.

What to Expect When Watching the Sun Rise at Angkor Wat

I had read a few articles about what to expect, but it was far crazier than I had read. We weren’t there during high season, and there were already hundreds of people in place when we arrived, a good hour before sunrise. We would be about 20 rows back from the front of the pond, which wasn’t ideal.
We decided to cheat and go along the side of the pond. It seemed like a smart idea until people kept funneling in, some of them in front of us, blocking our shot of the sunrise. Admitting defeat, we moved up closer to the temple, omitting the reflection pond from our shots, and found a nice place where we could watch the sunrise and take a series of photos without people all around us. Even though the sunrise was not that great the morning we were there, I think Andy was maybe a little more disappointed than I was. I know he really wanted to get some amazing shots of a sunrise over Angkor Wat. It didn’t matter, though, because we both agreed that we had a lot of fun setting up the camera timer to take pictures of us in front of Angkor Wat… this was of course after we left the crowds of people by the reflection pond.

Although we didn’t see it when we were there, we read that some people will push their way up to the front and throw large stones into the reflecting pond. From there they’ll cut in front of the people that arrived first thing in the morning to get the best position. I couldn’t even express how upset I would be if that were to happen to me.

Going back to all of the beautiful pictures that I saw on Google Images before my trip: they are amazing, but they are also heavily photoshopped. Your pictures will likely not look anything like that unless you have an amazing camera, the perfect settings and a great knowledge of Photoshop.

Final Thoughts

If you are going to go to Angkor Wat, I highly recommend waking up early and watching the sunrise. If nothing else, it will give you a few hours to enjoy the cooler weather before the Cambodian heat comes in full force. Even though there were a ton of people watching the sunrise, the temple itself did not seem crowded afterward, likely due to the sheer size of the place. We spent about 1.5 hours walking around Angkor Wat before we went back to our tuk tuk and visited more temples.
Around 2pm we finally threw in the towel, as we were exhausted and wanted to relax by the pool, where we would be off our feet.

About the Author: Lynn. Bitten by the travel bug during a semester abroad in college, Lynn was able to travel around much of Europe on a shoestring budget. Her travel motto is "If I haven't been there yet, it's probably on my list". When she isn't daydreaming about her next trip, you can find her cooking in the kitchen, reading blogs on how to travel the world on points or spending time with her fluffy cat Gingerbread.
Q: Case statement evaluating class not working as expected

I have some logic in my Rails application that checks the class type of my current_user variable:

logger.debug current_user.class # => Instructor
logger.debug current_user.class == Instructor # => true

case current_user.class
when Admin
  logger.debug "Admin"
when Student
  logger.debug "Student"
when Instructor
  logger.debug "Instructor"
else
  logger.debug "Guest"
end
# => "Guest"

Despite Instructor being the class (as indicated in the comments), the case statement always evaluates the else fall-back. Can somebody explain why? Just to give a little background, I'm implementing an STI user model setup using Devise.

A: It should be case current_user, not case current_user.class. A case statement compares the subject against each when candidate with the === operator, and for a class, Klass === obj is true when obj is an instance of Klass. Your subject is current_user.class, i.e. the class object Instructor itself, which is an instance of Class, not of Admin, Student or Instructor, so every branch fails and the statement falls through to else. Secondly, please note that instead of the case you could simply have done:

logger.debug current_user.class.name
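For comparison, the same dispatch-on-type gotcha exists outside Ruby. Ruby's Klass === obj behaves like Python's isinstance(obj, Klass): both test the object, not its class. A small sketch (class and function names are illustrative, not from the question):

```python
class Admin: pass
class Student: pass
class Instructor: pass

def role_name(user):
    # Dispatch on the object itself -- the analogue of Ruby's
    # `case user when Admin ...`, where `Admin === user` asks
    # "is user an instance of Admin?"
    for cls, name in ((Admin, 'Admin'), (Student, 'Student'),
                      (Instructor, 'Instructor')):
        if isinstance(user, cls):
            return name
    return 'Guest'

print(role_name(Instructor()))  # an instance matches: "Instructor"
print(role_name(Instructor))    # the class object falls through: "Guest"
```

Passing the class object instead of an instance reproduces exactly the "always Guest" behavior from the question.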
Featured Story

On World Toilet Day, United Nations figures on open defecation and toilets across the world should ring alarm bells. Four billion people still lack access to basic sanitation facilities. Each day, nearly a thousand children die of preventable water- and sanitation-related diarrhoeal diseases. What's the situation in India?

When the BJP came to power in 2014, it was partly on the promise of cleaning up India. Prime Minister Narendra Modi’s flagship Swachh Bharat Abhiyan (‘Clean India Mission’) began life with much grandiosity. Four years since Swachh Bharat's launch, what difference has it made to India's sanitation indicators? Swachh Bharat was launched with the goal of eliminating the widespread practice of open defecation within five years of its launch. It aimed to do this by constructing toilets to expand sanitation coverage. Since the scheme was launched, … Read Full Article about World Toilet Day: A review of India’s toilets and Swachh Bharat
using System;
using System.Reflection;

namespace AutoCSer.Sql
{
    /// <summary>
    /// SQL database connection information
    /// </summary>
    public sealed unsafe class Connection
    {
        /// <summary>
        /// Connection string
        /// </summary>
        public string ConnectionString;
        /// <summary>
        /// Database table owner
        /// </summary>
        public string Owner = "dbo";
        /// <summary>
        /// Default configuration for the SQL client kind
        /// </summary>
        public ClientKindAttribute Attribute;
        /// <summary>
        /// Default configuration for the SQL client kind
        /// </summary>
        internal ClientKindAttribute ClientAttribute
        {
            get
            {
                if (Attribute == null)
                {
                    if (Type != ClientKind.ThirdParty)
                        Attribute = EnumAttribute<ClientKind, ClientKindAttribute>.Array((byte)Type)
                            ?? EnumAttribute<ClientKind, ClientKindAttribute>.Array((byte)ClientKind.Sql2000);
                }
                return Attribute;
            }
        }
        /// <summary>
        /// SQL client
        /// </summary>
        private Client client;
        /// <summary>
        /// SQL client
        /// </summary>
        public Client Client
        {
            get
            {
                if (client == null)
                {
                    client = (Client)ClientAttribute.ClientType
                        .GetConstructor(BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.Instance,
                            null, new Type[] { typeof(Connection) }, null)
                        .Invoke(new object[] { this });
                }
                return client;
            }
        }
        /// <summary>
        /// Log handling
        /// </summary>
        public AutoCSer.Log.ILog Log;
        /// <summary>
        /// SQL client kind
        /// </summary>
        public ClientKind Type;
        /// <summary>
        /// Whether connection pooling is enabled
        /// </summary>
        public bool IsPool = true;
        /// <summary>
        /// Connection collection
        /// </summary>
        private static readonly AutoCSer.Threading.LockEquatableLastDictionary<HashString, Connection> connections
            = new AutoCSer.Threading.LockEquatableLastDictionary<HashString, Connection>();
        /// <summary>
        /// Get connection information by connection type name
        /// </summary>
        /// <param name="type">Connection type name</param>
        /// <returns>Connection information</returns>
        internal static Connection GetConnection(string type)
        {
            if (type != null)
            {
                Connection value;
                HashString key = type;
                if (!connections.TryGetValue(ref key, out value))
                    connections.Set(ref key, value = ConfigLoader.GetUnion(typeof(Connection), type).Connection ?? new Connection());
                return value;
            }
            return null;
        }
    }
}
Dr. Barlow Smith, 79, of Marble Falls was among the 131 doctors the board sanctioned in August, a record number exceeding the previous high of 99 sanctions in August 2006, board spokeswoman Jill Wiggins said. She attributed the rise to an increase in complaints. Smith’s lawyer, Nina Willis, said Smith was not disciplined for sexual misconduct but only for breaching patient confidentiality. She said he should be praised for a stellar 45-year career and not be written about because of a “minor disciplinary action.” Wiggins said the board reprimanded Smith for two violations: the confidentiality breach and unprofessional conduct for having sex with a recent patient. Doctors have more power in sexual relationships with patients, which is a reason such involvement is forbidden, Wiggins said. Although the patient had stopped seeing Smith, she was in his care recently enough that the board found a violation of its rules and the health and safety code, Wiggins said. In addition to the reprimand, the board fined Smith $3,000 and ordered him to take a professional boundaries course.

The board also ordered that:

Psychiatrist Sergio H. Luna of Austin be monitored by another doctor for a year and take courses in record keeping, child psychiatry and prescribing. The order said he failed to sufficiently document the justification for medications he prescribed to a 7-year-old boy. Neither Luna nor his lawyer returned a call.

Family practice doctor Chad F. Babcock of Austin pay a $2,000 fine and take courses in ethics and professional boundaries, after the board determined that he treated and prescribed drugs to a friend without keeping adequate records or establishing a professional relationship. Lawyer Tony Cobos said Babcock acted out of compassion to help an uninsured friend and did no harm.
Family practice doctor Scott Patterson Liggett of Marble Falls take courses in record keeping, diabetes treatment and communicating, for not checking a diabetic patient’s blood sugar while at the office. The patient went to the emergency room the next day and was hospitalized for two days, an outcome that might have occurred anyway, the order said. Neither Liggett nor his lawyer returned calls.

San Marcos psychiatrist Theodore Dake Jr. take a record-keeping course for failing to keep adequate records on a patient. “This was a very minor infraction, and I had to hire an attorney and an expert witness that cost me nearly $25,000,” he said.

Internal medicine doctor David Weeks of Austin take a record-keeping course because he failed to sufficiently document justification for billing a patient for services that went beyond preventative care. Weeks, who declined to comment, refunded the charges.

Awesome attorney!!! Andrew has been my attorney since shortly after he graduated from law school over 20 years ago. I wouldn't use anyone else! Toby Grace 4/22/2015

I endorse this lawyer's work. He is a good guy, easy to talk to and tenacious on his cases. David Todd 2/16/2010

Andrew is very professional and succeeded in getting a very satisfactory settlement in my very difficult case. He was diligent and maintained a high level of communication throughout the process. I would highly recommend him to anyone who is seeking assistance with a claim that is being denied by the insurance company. anonymous 5/23/2013

I highly recommend Andrew Traub as the best accident injury lawyer. He is trustworthy and honest. He worked very hard on my case. He made sure a fair distribution was made in the insurance settlement. He kept a professional demeanor, especially since the other party was very difficult to negotiate with. Highly recommend Andrew Traub.
Anna Bustamante 8/31/2018

The Traub Law Office tenaciously represents their clients and, from the inception of a case, starting with investigating claims through resolution of those claims, they maintain the highest levels of professionalism. Excellent law firm, excellent reputation, excellent results. Kevin Leahy 12/16/2018

Good hours, and very professional and courteous and helpful in explaining my very many questions. Lou Mendoza 12/11/2018

I recommend Andrew to anyone who needs an attorney who can help with an auto accident. He is the best personal injury attorney in Austin and such a trustworthy man. He is truly a man of character and integrity. Someone who will fight for you and do what is in your best interest. Linds L. 4/11/2014

It was a pleasure to have Andrew Traub as our lawyer. He truly puts his clients first. I felt like I didn't have to lift a finger! He is patient, determined, knowledgeable and kind. Julia-Isabel Wright 4/19/2019

I was referred to Andrew by another satisfied client. I had been involved in a serious car accident that left me with a number of unpaid bills and ongoing medical issues. I was reluctant to contact an attorney because it seemed like it was going to be a lot of trouble and stress added to the injuries I already had. I was glad I called Andrew! He was thorough and very knowledgeable. He cares about his clients and was very easy to work with. I was very satisfied with the outcome of the case. Amy DeAnda 8/31/2018

I was referred to Andrew Traub by a friend and am so glad! I could not have been more "taken care of" than what he did! He is knowledgeable and able to explain the legal-ese of my case to me in layman's terms and worked with me while I was out of town at a relative's home recovering from an accident. And he has loads of patience -- I had LOTS of questions, and sometimes repeated the same question -- and he answered all of them professionally and politely!
We did most communications via email with a phone call here and there. He resolved my case VERY satisfactorily. I HIGHLY recommend The Traub Law Office / Austin Accident Lawyer! If there was a way to give *10* stars out of 5, I would. Thank you Andrew Traub! Su Bierhalter 10/13/2018

Andrew is absolutely stellar to work with, always an awesome personality, and gets stuff done swiftly. If you're looking for a great lawyer, this is the guy. Tina Mayer 5/15/2018

More than just a personal injury lawyer, genuinely cares for his clients! ERA Product 6/27/2018

Andrew is a hard-working, ethical, extremely intelligent lawyer. I've known him since he was about 14 years old, and he's always been bright, creative, and successful in all his endeavors. He is also a computer genius. He has also worked with my father to set up a business entity. I endorse this lawyer. Joel Cohen 6/03/2013

I was involved in a multiple vehicle crash with complicated injuries, and despite my best efforts could not deal with the insurance companies alone. From the start, Andrew was reassuring, insightful, and determined. He gently guided me through the litigation process, and sought out new resources to further my recovery. My case took time to build, and while sometimes I felt unsure, Andrew never wavered. In the end, the Traub Law Office secured an amazing settlement for me! I am so grateful to Andrew, Talia, and Lynne for their kindness and dedication.
Now I can move on with my life, and heartily refer anyone who needs a true and accomplished lawyer to Andrew Traub! angela hawley 8/31/2018

I endorse this lawyer. Andrew and I have worked together on a couple of clients and I am very impressed with his quality and depth of knowledge. Andrew understands the nuances of different insurance carriers and the way that courts will react to various situations. He is quite experienced and I recommend him highly for any personal injury matters.

Contact: While most of our clients hail from Austin, Round Rock, Cedar Park, Georgetown, and Pflugerville in Travis and Williamson Counties, we have also worked with clients in Dallas, Houston, and San Antonio. Other clients have come from Lakeway, Jollyville, Anderson Mill, Kyle, and Leander. If your accident was in Texas, we can help you.
Q: Persistent WinSCP connection for batch copy in Python

I'm trying to copy thousands of files to a remote server. These files are generated in real time within the script. I'm working on a Windows system and need to copy the files to a Linux server (hence the escaping). I currently have:

import os
os.system("winscp.exe /console /command \"option batch on\" \"option confirm off\" \"open user:pass@host\" \"put f1.txt /remote/dest/\"")

I'm using Python to generate the files but need a way to persist the remote connection so that I can copy each file to the server as it is generated (as opposed to creating a new connection each time). That way, I'll only need to change the file name in the put command:

"put f2 /remote/dest"
"put f3 /remote/dest"
etc.

A: I needed to do this and found that code similar to this worked well:

from subprocess import Popen, PIPE

WINSCP = r'c:\<path to>\winscp.com'

class UploadFailed(Exception):
    pass

def upload_files(host, user, passwd, files):
    cmds = ['option batch abort', 'option confirm off']
    cmds.append('open sftp://{user}:{passwd}@{host}/'.format(host=host, user=user, passwd=passwd))
    cmds.append('put {} ./'.format(' '.join(files)))
    cmds.append('exit\n')
    with Popen(WINSCP, stdin=PIPE, stdout=PIPE, stderr=PIPE,
               universal_newlines=True) as winscp:  # might need shell=True here
        stdout, stderr = winscp.communicate('\n'.join(cmds))
    if winscp.returncode:
        # WinSCP returns 0 for success, so the upload failed
        raise UploadFailed

This is simplified (and uses Python 3), but you get the idea.
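A variation on the answer above: build the whole console script with a pure helper first, so it is easy to inspect (and unit test) before handing it to a single winscp.com process via communicate. All put commands run inside one open ... exit pair, so the connection is established only once. This is a sketch; the host, credentials and remote directory below are placeholders.

```python
def winscp_script(host, user, passwd, files, remote_dir='/remote/dest/'):
    """Build the console script for one WinSCP session uploading many files.

    Every 'put' runs inside a single 'open' ... 'exit' pair, so only one
    connection is opened. Credentials and paths here are placeholders.
    """
    cmds = ['option batch abort', 'option confirm off',
            'open sftp://{}:{}@{}/'.format(user, passwd, host)]
    cmds += ['put "{}" "{}"'.format(f, remote_dir) for f in files]
    cmds.append('exit')
    return '\n'.join(cmds) + '\n'

script = winscp_script('host', 'user', 'pass', ['f1.txt', 'f2.txt'])
```

The resulting string can be passed to winscp.communicate(script) exactly as in the answer's upload_files.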
There was no economic reason to do this. The threat from a collapsed Anglo-Irish Bank, for instance, which had few depositors but a lot of ill-advised building projects financed by foreign speculators, “was not, by nature, systemic,” Lewis writes. “It became so only when its losses were made everyone’s.” Lewis contrasts Ireland’s predicament to that of Greece. In Greece, where legitimately elected government officials contracted the debt, the taxpayers have sought to wash their hands of it. In Ireland, where unaccountable private-sector moguls contracted the debt, the taxpayers have been saddled with it. An even better contrast, though, is to Iceland, where speculative bankers poured money into crazy investments in much the same way that those in Ireland did. Iceland put its banks into receivership and promised to protect their depositors. But it threw the banks’ creditors to the wolves. In fact, it distinguished between domestic and foreign depositors and tried to throw the latter to the wolves, too. Britain and the Netherlands, where a good number of the Icelandic banks’ foreign depositors lived, paid off the depositors themselves, then used main force to back Iceland into a repayment agreement that will run​—​rather like war reparations​—​for at least three decades. The Icelandic banks’ bondholders, according to a recent Bloomberg report, wound up settling for 30 cents on the dollar.

More by Christopher Caldwell

The tens of billions that the next generation of Irishmen will pay for Fianna Fáil’s stupidity ought to destroy that party’s natural-majority status for a long time.
Whether there is anything the opposition parties can do, once in power, to mitigate the damage is unclear, particularly since Ireland’s citizens (which is to say, its tax base) have begun to reacquire the age-old Irish habit of labor emigration. Right now, Ireland’s prospects for renegotiating its debt do not look any more promising than its prospects of paying it off. This does not mean that we will soon stop hearing calls for haircuts and interest-rate adjustments. They are useful to Irish budget discussions in the same way that invocations of “waste, fraud and abuse” are to our own​—​as a way to forestall, even for a few weeks, any talk about the descent into rage-inducing austerity that surely lies ahead.
*y - 1244 What is the v'th term of -2049971, -4099947, -6149913, -8199869, -10249815? 5*v**2 - 2049991*v + 15 What is the j'th term of 67291, 67442, 67691, 68038, 68483? 49*j**2 + 4*j + 67238 What is the h'th term of 70696, 70714, 70740, 70774? 4*h**2 + 6*h + 70686 What is the a'th term of 45042, 46327, 47612, 48897, 50182? 1285*a + 43757 What is the z'th term of 325563, 325574, 325585, 325596? 11*z + 325552 What is the n'th term of 5450466, 5450467, 5450470, 5450475, 5450482? n**2 - 2*n + 5450467 What is the s'th term of -6311, -6380, -6449, -6518? -69*s - 6242 What is the g'th term of -229878, -459785, -689682, -919569, -1149446, -1379313, -1609170? 5*g**2 - 229922*g + 39 What is the s'th term of -24811199, -49622394, -74433589? -24811195*s - 4 What is the j'th term of 91509, 182692, 273873, 365052, 456229, 547404, 638577? -j**2 + 91186*j + 324 What is the f'th term of 111777, 223434, 335093, 446754, 558417, 670082, 781749? f**2 + 111654*f + 122 What is the m'th term of 5003906, 10007824, 15011742, 20015660, 25019578? 5003918*m - 12 What is the q'th term of -1237693, -1237690, -1237687, -1237684? 3*q - 1237696 What is the k'th term of 185, -189, -689, -1321, -2091? -k**3 - 57*k**2 - 196*k + 439 What is the g'th term of 54147, 107276, 160393, 213492, 266567, 319612, 372621, 425588? -g**3 + 53136*g + 1012 What is the p'th term of -66, -6988, -25786, -62400, -122770, -212836? -990*p**3 + 2*p**2 + 2*p + 920 What is the k'th term of 51580, 103081, 154582, 206083? 51501*k + 79 What is the q'th term of 16187681, 32375435, 48563189, 64750943, 80938697? 16187754*q - 73 What is the s'th term of -136726, -274252, -411778, -549304, -686830? -137526*s + 800 What is the u'th term of -87124, -88721, -91380, -95101, -99884, -105729, -112636? -531*u**2 - 4*u - 86589 What is the s'th term of -11754103, -23508208, -35262313, -47016418, -58770523, -70524628? -11754105*s + 2 What is the c'th term of -56322, -225326, -507000, -901344, -1408358, -2028042, -2760396? 
-56335*c**2 + c + 12 What is the l'th term of 195546, 195524, 195502, 195480, 195458, 195436? -22*l + 195568 What is the w'th term of -30811067, -61622142, -92433217, -123244292, -154055367? -30811075*w + 8 What is the x'th term of -68801, -138186, -208151, -278696, -349821, -421526, -493811? -290*x**2 - 68515*x + 4 What is the s'th term of -400, -1533, -3152, -5035, -6960, -8705, -10048? 37*s**3 - 465*s**2 + 3*s + 25 What is the h'th term of 29549, 57669, 85789, 113909? 28120*h + 1429 What is the m'th term of -43056, -34817, -26578, -18339, -10100? 8239*m - 51295 What is the p'th term of 48738, 97942, 147612, 197748, 248350? 233*p**2 + 48505*p What is the j'th term of -11143, -43610, -97723, -173482, -270887? -10823*j**2 + 2*j - 322 What is the j'th term of 28987, 29360, 29733? 373*j + 28614 What is the y'th term of -465164118, -465164119, -465164120? -y - 465164117 What is the v'th term of 3879, 4371, 5171, 6279, 7695? 154*v**2 + 30*v + 3695 What is the k'th term of -1144729, -1144735, -1144739, -1144741, -1144741, -1144739? k**2 - 9*k - 1144721 What is the f'th term of -161441, -322870, -484299, -645728? -161429*f - 12 What is the z'th term of -1065369, -1065365, -1065361, -1065357, -1065353, -1065349? 4*z - 1065373 What is the o'th term of -65, -341, -1003, -2231, -4205, -7105? -30*o**3 - 13*o**2 - 27*o + 5 What is the g'th term of -414, -4440, -15294, -36390, -71142, -122964, -195270, -291474? -569*g**3 - 43*g + 198 What is the q'th term of 250328, 250386, 250444, 250502? 58*q + 250270 What is the m'th term of 1859731, 3719454, 5579177, 7438900? 1859723*m + 8 What is the r'th term of 3104, 6394, 9964, 13958, 18520, 23794, 29924? 24*r**3 - 4*r**2 + 3134*r - 50 What is the w'th term of 120, 1408, 5078, 12372, 24532, 42800? 207*w**3 - 51*w**2 - 8*w - 28 What is the b'th term of -144408, -144511, -144682, -144921? -34*b**2 - b - 144373 What is the m'th term of -42140395, -42140397, -42140399, -42140401, -42140403? 
-2*m - 42140393 What is the d'th term of -71053222, -71053223, -71053224, -71053225, -71053226, -71053227? -d - 71053221 What is the c'th term of -169998, -679916, -1529778, -2719584, -4249334, -6119028, -8328666? -169972*c**2 - 2*c - 24 What is the s'th term of 18002938, 36005866, 54008794? 18002928*s + 10 What is the b'th term of 108085, 107451, 106401, 104941, 103077, 100815? b**3 - 214*b**2 + b + 108297 What is the h'th term of 41390, 178218, 406274, 725564, 1136094? h**3 + 45608*h**2 - 3*h - 4216 What is the g'th term of 78224, 75560, 72896, 70232, 67568, 64904? -2664*g + 80888 What is the v'th term of 12544, 10897, 8150, 4303, -644, -6691? -550*v**2 + 3*v + 13091 What is the q'th term of 3392, 13704, 30978, 55286, 86700, 125292, 171134, 224298? 12*q**3 + 3409*q**2 + q - 30 What is the t'th term of 8299, 9745, 11203, 12679, 14179, 15709? t**3 + 1439*t + 6859 What is the t'th term of -458084, -916167, -1374254, -1832345, -2290440, -2748539, -3206642? -2*t**2 - 458077*t - 5 What is the k'th term of -20631, -35649, -50665, -65679? k**2 - 15021*k - 5611 What is the l'th term of 1403, 11118, 37567, 89114, 174123, 300958, 477983, 713562? 1394*l**3 + 3*l**2 - 52*l + 58 What is the h'th term of -34366611, -34366612, -34366613? -h - 34366610 What is the a'th term of 680, 3669, 11780, 27575, 53616, 92465, 146684, 218835? 427*a**3 - a**2 + 3*a + 251 What is the j'th term of 686193, 686194, 686195, 686196? j + 686192 What is the s'th term of 81844, 150863, 219884, 288907, 357932, 426959, 495988? s**2 + 69016*s + 12827 What is the i'th term of 7508, 10098, 7788, 578? -2450*i**2 + 9940*i + 18 What is the f'th term of -56959, -114622, -172285, -229948, -287611? -57663*f + 704 What is the r'th term of 9002, 72114, 243466, 577196, 1127442, 1948342, 3094034, 4618656? 9023*r**3 - 18*r**2 + 5*r - 8 What is the m'th term of -382, -189, 54, 287, 450, 483? -10*m**3 + 85*m**2 + 8*m - 465 What is the s'th term of 292268, 292302, 292376, 292508, 292716? 
3*s**3 + 2*s**2 + 7*s + 292256 What is the a'th term of 2607, 5435, 8531, 11889, 15503? -a**3 + 140*a**2 + 2415*a + 53 What is the r'th term of 9142149, 9142156, 9142163, 9142170, 9142177? 7*r + 9142142 What is the k'th term of 396210, 396202, 396194, 396186, 396178? -8*k + 396218 What is the p'th term of 55862079, 111724155, 167586231? 55862076*p + 3 What is the m'th term of 69935, 139845, 209653, 279359, 348963, 418465, 487865? -51*m**2 + 70063*m - 77 What is the o'th term of 2137, 9157, 21053, 37825, 59473, 85997? 2438*o**2 - 294*o - 7 What is the v'th term of -12586446, -12586444, -12586442, -12586440? 2*v - 12586448 What is the p'th term of -1235, -3162, -5089, -7016, -8943, -10870? -1927*p + 692 What is the n'th term of 1885, 7684, 17363, 30934, 48409, 69800, 95119? 2*n**3 + 1928*n**2 + n - 46 What is the y'th term of -618256768, -618256754, -618256724, -618256672, -618256592, -618256478, -618256324, -618256124? y**3 + 2*y**2 + y - 618256772 What is the f'th term of -988831, -988792, -988713, -988582, -988387, -988116, -987757, -987298? 2*f**3 + 8*f**2 + f - 988842 What is the q'th term of -33879, -69638, -105397, -141156? -35759*q + 1880 What is the u'th term of 19955, 79501, 178733, 317651, 496255? 19843*u**2 + 17*u + 95 What is the z'th term of -198, -755, -1752, -3267, -5378, -8163? -13*z**3 - 142*z**2 - 40*z - 3 What is the n'th term of 2531, 4166, 5703, 7094, 8291? -8*n**3 - n**2 + 1694*n + 846 What is the d'th term of -35505321, -71010642, -106515963, -142021284, -177526605, -213031926? -35505321*d What is the t'th term of -7912, -31737, -71456, -127069, -198576, -285977? -7947*t**2 + 16*t + 19 What is the i'th term of -5967118, -5967114, -5967124, -5967154, -5967210, -5967298? -i**3 - i**2 + 14*i - 5967130 What is the y'th term of 1562643, 3125187, 4687731? 1562544*y + 99 What is the z'th term of 4410742, 8821485, 13232230, 17642977? z**2 + 4410740*z + 1 What is the t'th term of 8208, 8356, 8510, 8670? 
3*t**2 + 139*t + 8066 What is the j'th term of -26503217, -53006439, -79509663, -106012889, -132516117, -159019347? -j**2 - 26503219*j + 3 What is the n'th term of 468856, 468961, 469142, 469399, 469732? 38*n**2 - 9*n + 468827 What is the s'th term of 1128, 7275, 23336, 54267, 105024, 180563, 285840, 425811? 826*s**3 + s**2 + 362*s - 61 What is th
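Each closed form above can be recovered mechanically from the listed terms by finite differences: difference the sequence until a row is all zero, then expand Newton's forward formula into powers of n. A short Python sketch of that procedure (the helper name is mine, not part of the problem generator; exact rational arithmetic via Fraction avoids float error on these large values):

```python
from fractions import Fraction

def nth_term_poly(terms):
    """Recover the lowest-degree polynomial a(n) through a(1), a(2), ...
    via Newton's forward differences; returns power-basis coefficients
    [c0, c1, c2, ...] meaning a(n) = c0 + c1*n + c2*n**2 + ..."""
    # Forward-difference table: keep differencing until a row is all zero.
    diffs = [[Fraction(t) for t in terms]]
    while any(diffs[-1]):
        row = diffs[-1]
        diffs.append([b - a for a, b in zip(row, row[1:])])
    # Newton form anchored at n = 1: a(n) = sum_k d_k * C(n-1, k),
    # where d_k is the leading entry of the k-th difference row.
    coeffs = [Fraction(0)] * len(diffs)
    basis = [Fraction(1)]                 # power-basis form of C(n-1, k)
    for k, row in enumerate(diffs[:-1]):
        for i, c in enumerate(basis):
            coeffs[i] += row[0] * c
        # C(n-1, k+1) = C(n-1, k) * (n - 1 - k) / (k + 1)
        nxt = [Fraction(0)] * (len(basis) + 1)
        for i, c in enumerate(basis):
            nxt[i + 1] += c               # the n term
            nxt[i] += c * (-1 - k)        # the (-1 - k) term
        basis = [c / (k + 1) for c in nxt]
    while len(coeffs) > 1 and coeffs[-1] == 0:
        coeffs.pop()
    return coeffs
```

For example, the sequence 18002938, 36005866, 54008794 above yields coefficients [10, 18002928], i.e. 18002928*s + 10, matching the printed answer.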
/*
 * <<
 * Davinci
 * ==
 * Copyright (C) 2016 - 2017 EDP
 * ==
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 * >>
 */

import { IChartProps } from '../../components/Chart'
import {
  decodeMetricName,
  getChartTooltipLabel,
  getTextWidth,
  getAggregatorLocale,
  metricAxisLabelFormatter
} from '../../components/util'
import {
  getLegendOption,
  getGridPositions,
  getDimetionAxisOption,
  getCartesianChartReferenceOptions
} from './util'
import { getFormattedValue } from '../../components/Config/Format'
import { getFieldAlias } from '../../components/Config/Field'
import ChartTypes from '../../config/chart/ChartTypes'

export default function (chartProps: IChartProps, drillOptions) {
  const {
    data,
    cols,
    metrics,
    chartStyles,
    // color,
    // tip,
    references
  } = chartProps

  const {
    legend,
    spec,
    doubleYAxis,
    xAxis,
    splitLine
  } = chartStyles

  const {
    legendPosition,
    fontSize
  } = legend

  const {
    stack,
    smooth,
    step,
    symbol,
    label
  } = spec

  const {
    yAxisLeft,
    yAxisRight,
    yAxisSplitNumber,
    dataZoomThreshold
  } = doubleYAxis

  const {
    labelColor,
    labelFontFamily,
    labelFontSize,
    lineColor,
    lineSize,
    lineStyle,
    showLabel,
    showLine,
    xAxisInterval,
    xAxisRotate
  } = xAxis

  const {
    showVerticalLine,
    verticalLineColor,
    verticalLineSize,
    verticalLineStyle,
    showHorizontalLine,
    horizontalLineColor,
    horizontalLineSize,
    horizontalLineStyle
  } = splitLine

  const labelOption = {
    label: {
      normal: {
        show: label,
        position: 'top'
      }
    }
  }

  const { selectedItems } = drillOptions
  const { secondaryMetrics } = chartProps

  const xAxisData = showLabel ? data.map((d) => d[cols[0].name]) : []

  const seriesData = secondaryMetrics
    ? getAixsMetrics('metrics', metrics, data, stack, labelOption, references, selectedItems, {key: 'yAxisLeft', type: yAxisLeft})
        .concat(getAixsMetrics('secondaryMetrics', secondaryMetrics, data, stack, labelOption, references, selectedItems, {key: 'yAxisRight', type: yAxisRight}))
    : getAixsMetrics('metrics', metrics, data, stack, labelOption, references, selectedItems, {key: 'yAxisLeft', type: yAxisLeft})

  const seriesObj = {
    series: seriesData.map((series) => {
      if (series.type === 'line') {
        return {
          ...series,
          symbol: symbol ? 'emptyCircle' : 'none',
          smooth,
          step
        }
      } else {
        return series
      }
    })
  }

  let legendOption
  let gridOptions
  if (seriesData.length > 1) {
    const seriesNames = seriesData.map((s) => s.name)
    legendOption = { legend: getLegendOption(legend, seriesNames) }
    gridOptions = { grid: getGridPositions(legend, seriesNames, 'doubleYAxis', false, null, xAxis, xAxisData) }
  }

  let leftMax
  let rightMax
  if (stack) {
    leftMax = metrics.reduce((num, m) => num + Math.max(...data.map((d) => d[`${m.agg}(${decodeMetricName(m.name)})`])), 0)
    // secondaryMetrics may be absent, as the seriesData branch above assumes
    rightMax = secondaryMetrics
      ? secondaryMetrics.reduce((num, m) => num + Math.max(...data.map((d) => d[`${m.agg}(${decodeMetricName(m.name)})`])), 0)
      : 0
  } else {
    leftMax = Math.max(...metrics.map((m) => Math.max(...data.map((d) => d[`${m.agg}(${decodeMetricName(m.name)})`]))))
    rightMax = secondaryMetrics
      ? Math.max(...secondaryMetrics.map((m) => Math.max(...data.map((d) => d[`${m.agg}(${decodeMetricName(m.name)})`]))))
      : 0
  }

  const leftInterval = getYaxisInterval(leftMax, (yAxisSplitNumber - 1))
  const rightInterval = rightMax > 0 ? getYaxisInterval(rightMax, (yAxisSplitNumber - 1)) : leftInterval

  const inverseOption = xAxis.inverse ? { inverse: true } : null

  const xAxisSplitLineConfig = {
    showLine: showVerticalLine,
    lineColor: verticalLineColor,
    lineSize: verticalLineSize,
    lineStyle: verticalLineStyle
  }

  const allMetrics = secondaryMetrics ? [].concat(metrics).concat(secondaryMetrics) : metrics

  const option = {
    tooltip: {
      trigger: 'axis',
      axisPointer: {type: 'cross'},
      formatter (params) {
        const tooltipLabels = [getFormattedValue(params[0].name, cols[0].format), '<br/>']
        params.reduce((acc, param) => {
          const { color, value, seriesIndex } = param
          if (color) {
            acc.push(`<span class="widget-tooltip-circle" style="background: ${color}"></span>`)
          }
          acc.push(getFieldAlias(allMetrics[seriesIndex].field, {}) || decodeMetricName(allMetrics[seriesIndex].name))
          acc.push(': ', getFormattedValue(value, allMetrics[seriesIndex].format), '<br/>')
          return acc
        }, tooltipLabels)
        return tooltipLabels.join('')
      }
    },
    xAxis: getDimetionAxisOption(xAxis, xAxisSplitLineConfig, xAxisData),
    yAxis: [
      {
        type: 'value',
        key: 'yAxisIndex0',
        min: 0,
        max: rightMax > 0 ? rightInterval * (yAxisSplitNumber - 1) : leftInterval * (yAxisSplitNumber - 1),
        interval: rightInterval,
        position: 'right',
        ...getDoubleYAxis(doubleYAxis)
      },
      {
        type: 'value',
        key: 'yAxisIndex1',
        min: 0,
        max: leftInterval * (yAxisSplitNumber - 1),
        interval: leftInterval,
        position: 'left',
        ...getDoubleYAxis(doubleYAxis)
      }
    ],
    ...seriesObj,
    ...gridOptions,
    ...legendOption
  }

  return option
}

export function getAixsMetrics (type, axisMetrics, data, stack, labelOption, references, selectedItems, axisPosition?: {key: string, type: string}) {
  const seriesNames = []
  const seriesAxis = []
  const referenceOptions = getCartesianChartReferenceOptions(references, ChartTypes.DoubleYAxis, axisMetrics, data)
  axisMetrics.forEach((m, amIndex) => {
    const decodedMetricName = decodeMetricName(m.name)
    const localeMetricName = `[${getAggregatorLocale(m.agg)}] ${decodedMetricName}`
    seriesNames.push(decodedMetricName)
    const stackOption = stack && axisPosition.type === 'bar' && axisMetrics.length > 1 ? { stack: axisPosition.key } : null
    const itemData = data.map((g, index) => {
      const itemStyle = selectedItems && selectedItems.length && selectedItems.some((item) => item === index)
        ? {itemStyle: {normal: {opacity: 1, borderWidth: 6}}}
        : null
      return {
        value: g[`${m.agg}(${decodedMetricName})`],
        ...itemStyle
      }
    })
    seriesAxis.push({
      name: decodedMetricName,
      type: axisPosition && axisPosition.type ? axisPosition.type : type === 'metrics' ? 'line' : 'bar',
      ...stackOption,
      yAxisIndex: type === 'metrics' ? 1 : 0,
      data: itemData,
      ...labelOption,
      ...(amIndex === axisMetrics.length - 1 && referenceOptions),
      itemStyle: {
        normal: {
          opacity: selectedItems && selectedItems.length > 0 ? 0.25 : 1
        }
      }
    })
  })
  return seriesAxis
}

export function getYaxisInterval (max, splitNumber) {
  const roughInterval = parseInt(`${max / splitNumber}`, 10)
  const divisor = Math.pow(10, (`${roughInterval}`.length - 1))
  return (parseInt(`${roughInterval / divisor}`, 10) + 1) * divisor
}

export function getDoubleYAxis (doubleYAxis) {
  const {
    inverse,
    showLine,
    lineStyle,
    lineSize,
    lineColor,
    showLabel,
    labelFontFamily,
    labelFontSize,
    labelColor
  } = doubleYAxis

  return {
    inverse,
    axisLine: {
      show: showLine,
      lineStyle: {
        color: lineColor,
        width: Number(lineSize),
        type: lineStyle
      }
    },
    axisLabel: {
      show: showLabel,
      color: labelColor,
      fontFamily: labelFontFamily,
      fontSize: Number(labelFontSize),
      formatter: metricAxisLabelFormatter
    }
  }
}
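The getYaxisInterval helper above rounds the raw step (max / splitNumber) up to a "nice" value: truncate the quotient, keep its leading digit, bump that digit by one, and pad back with zeros. A minimal Python re-derivation of the same arithmetic (the snake_case name and sample values are illustrative, not from the Davinci codebase):

```python
def get_yaxis_interval(max_value, split_number):
    """Mirror of the TypeScript getYaxisInterval: truncate max/splitNumber,
    bump its leading digit by one, and zero out the remaining digits."""
    rough = int(max_value / split_number)      # parseInt(`${max / splitNumber}`, 10)
    divisor = 10 ** (len(str(rough)) - 1)      # 10^(digit count - 1)
    return (rough // divisor + 1) * divisor
```

So a data maximum of 95 over 4 splits gives a rough step of 23, which rounds up to 30; note the bump means the result always exceeds the rough step, so the top gridline clears the tallest bar.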
Forgetful Syracuse Teen’s Car Found in Toronto Parking Lot

Public’s hunt for ‘Nissan in a haystack’ reaches happy ending in an Impark facility

A U.S. teen who lost his car in Toronto, sparking “the best scavenger hunt ever,” has been reunited with his vehicle after it was found safe and sound in an Impark parking lot.

Gavin Strickland made the four-hour drive from Syracuse to The Six on Sunday for a Metallica concert at the Rogers Centre. However, after a strong dose of heavy metal, Gavin realized that he had no idea where he’d parked his car. The details he remembered were slim: an $8 cab ride, a Starbucks, some construction (‘tis the season), and a bank that may have been an RBC narrowed down the search to roughly all of downtown Toronto. Gavin trawled the city’s streets into the night without success. Eventually, he was forced to abandon the search and take a bus back across the border.

The next day, Gavin’s slightly bemused parents posted a Craigslist ad on behalf of their “doofy son,” seeking help in the hunt for the green Nissan Versa. The post quickly caught the attention of Toronto’s amateur sleuths who, touched by the unusual predicament, began to scour the city.

On Thursday, eagle-eyed Madison Riddolls hit the jackpot. She found the car in Impark’s TD Centre lot, a facility that happens to offer complimentary location-reminder cards to help you find your Nissan in a haystack. A relieved Gavin hopped straight on a Greyhound to Toronto and was finally reunited with his car. He brought the saga to an end in good humor by asking the question on everyone’s lips: “How did I walk past this?”

This isn’t the first time that this has happened to poor Gavin, but hopefully it will be the last. We’ve sent him away with a Tile Mate Bluetooth tracker to help him find his car in the future and, yes, we’ve waived the fines accrued since Sunday.
Two ways of experimental infection of Ixodes ricinus ticks (Acari: Ixodidae) with spirochetes of Borrelia burgdorferi sensu lato complex. A previously reported procedure for the introduction of Borrelia spirochetes into tick larvae by immersion in a suspension of spirochetes was tested on Ixodes ricinus (L.) ticks and three of the most medically important European Borrelia genomic species, B. burgdorferi sensu stricto, B. garinii and B. afzelii. The procedure was compared with "classical" infection of nymphs by feeding on infected mice. Both methods yielded comparable results (infection rate 44-65%) with the exception of B. afzelii, which produced better results using the immersion method (44%) compared with feeding on infected mice (16%). Nymphs infected by the immersion method at the larval stage were able to transmit the infection to naïve mice as shown by serology and PCR detection of spirochetal DNA in organs. The immersion method is faster than feeding on infected mice and provides more reproducible conditions for infection. It can be exploited for studies on both pathogen transmission and Borrelia-vector interactions.
This morning I sat with a very young woman. She is heroin-addicted; she was sold by her own parents to a man several years ago. She came running to find me, yelling my name down the street, after she escaped a man trying to strangle her for $20.00. To say it is difficult to see such pain is an understatement – raw suffering is rampant. Let us all remember: everyone has a story, and not all choose their path. Mercy! I just want to remind everyone that this kind of thing exists. When you see a drug addict, you have no idea how they got that way. When you say that people on drugs should be cut off from any welfare benefits, you say that people who were sold into slavery as children and can’t help their addiction now should be punished right along with people who take drugs for fun. If you make distinctions in your mind between the deserving and the undeserving poor, remember that you, personally, have no reliable way of telling which is which, and Jesus never said that we had the right to discriminate between them anyway. Whatever we decide to do about drugs and poverty in this country, we have to start from the understanding that for many, it’s not their fault. Judge not. You have no idea what cross your sister is carrying. Help her carry it. This is the Law and the Prophets.
Transcranial ultrasonic stimulation modulates single-neuron discharge in macaques performing an antisaccade task. Low intensity transcranial ultrasonic stimulation (TUS) has been demonstrated to non-invasively and transiently stimulate the nervous system. Although US neuromodulation has appeared robust in rodent studies, the effects of US in large mammals and humans have been modest at best. In addition, there is a lack of direct recordings from the stimulated neurons in response to US. Our study investigates the magnitude of the US effects on neuronal discharge in awake behaving monkeys and thus fills the void on both fronts. In this study, we demonstrate the feasibility of recording action potentials in the supplementary eye field (SEF) as TUS is applied simultaneously to the frontal eye field (FEF) in macaques performing an antisaccade task. We show that compared to a control stimulation in the visual cortex, SEF activity is significantly modulated shortly after TUS onset. Among all cell types 40% of neurons significantly changed their activity after TUS. Half of the neurons showed a transient increase of activity induced by TUS. Our study demonstrates that the neuromodulatory effects of non-invasive focused ultrasound can be assessed in real time in awake behaving monkeys by recording discharge activity from a brain region reciprocally connected with the stimulated region. The study opens the door for further parametric studies for fine-tuning the ultrasonic parameters. The ultrasonic effect could indeed be quantified based on the direct measurement of the intensity of the modulation induced on a single neuron in a freely performing animal. The technique should be readily reproducible in other primate laboratories studying brain function, both for exploratory and therapeutic purposes and to facilitate the development of future clinical TUS devices.
Handtool Set: Prepare to Press

The Prepare to Press set is the ideal tool combination for preparing a pressing. Alongside a marker, the set includes a tube cutter for copper, brass and thin-walled steel pipes with Ø 6-35 mm as well as an internal and external deburrer.

SKU 1000002002

Product profile
Perfect preparation of pressings: The pipe connection can be prepared with markers, pipe cutter and deburrer.
Pipe cutter saves material: Cuts near to flares
Ready for use: Spare cutter wheel in handle
Lightweight and robust at the same time due to its magnesium body
Time saving: Inner and outer deburring allows you to work more efficiently when installing pipes
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd"> <!--NewPage--> <HTML> <HEAD> <!-- Generated by javadoc (build 1.4.2_18) on Thu Jul 31 15:03:47 GMT+01:00 2008 --> <TITLE> NearestSpecPropFunction (Apache FOP 0.95 API) </TITLE> <META NAME="keywords" CONTENT="org.apache.fop.fo.expr.NearestSpecPropFunction class"> <LINK REL ="stylesheet" TYPE="text/css" HREF="../../../../../stylesheet.css" TITLE="Style"> <SCRIPT type="text/javascript"> function windowTitle() { parent.document.title="NearestSpecPropFunction (Apache FOP 0.95 API)"; } </SCRIPT> </HEAD> <BODY BGCOLOR="white" onload="windowTitle();"> <!-- ========= START OF TOP NAVBAR ======= --> <A NAME="navbar_top"><!-- --></A> <A HREF="#skip-navbar_top" title="Skip navigation links"></A> <TABLE BORDER="0" WIDTH="100%" CELLPADDING="1" CELLSPACING="0" SUMMARY=""> <TR> <TD COLSPAN=3 BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <A NAME="navbar_top_firstrow"><!-- --></A> <TABLE BORDER="0" CELLPADDING="0" CELLSPACING="3" SUMMARY=""> <TR ALIGN="center" VALIGN="top"> <TD BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <A HREF="../../../../../overview-summary.html"><FONT CLASS="NavBarFont1"><B>Overview</B></FONT></A>&nbsp;</TD> <TD BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <A HREF="package-summary.html"><FONT CLASS="NavBarFont1"><B>Package</B></FONT></A>&nbsp;</TD> <TD BGCOLOR="#FFFFFF" CLASS="NavBarCell1Rev"> &nbsp;<FONT CLASS="NavBarFont1Rev"><B>Class</B></FONT>&nbsp;</TD> <TD BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <A HREF="class-use/NearestSpecPropFunction.html"><FONT CLASS="NavBarFont1"><B>Use</B></FONT></A>&nbsp;</TD> <TD BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <A HREF="package-tree.html"><FONT CLASS="NavBarFont1"><B>Tree</B></FONT></A>&nbsp;</TD> <TD BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <A HREF="../../../../../deprecated-list.html"><FONT CLASS="NavBarFont1"><B>Deprecated</B></FONT></A>&nbsp;</TD> <TD BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <A HREF="../../../../../index-all.html"><FONT 
CLASS="NavBarFont1"><B>Index</B></FONT></A>&nbsp;</TD> <TD BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <A HREF="../../../../../help-doc.html"><FONT CLASS="NavBarFont1"><B>Help</B></FONT></A>&nbsp;</TD> </TR> </TABLE> </TD> <TD ALIGN="right" VALIGN="top" ROWSPAN=3><EM> fop 0.95</EM> </TD> </TR> <TR> <TD BGCOLOR="white" CLASS="NavBarCell2"><FONT SIZE="-2"> &nbsp;<A HREF="../../../../../org/apache/fop/fo/expr/NCnameProperty.html" title="class in org.apache.fop.fo.expr"><B>PREV CLASS</B></A>&nbsp; &nbsp;<A HREF="../../../../../org/apache/fop/fo/expr/NumericOp.html" title="class in org.apache.fop.fo.expr"><B>NEXT CLASS</B></A></FONT></TD> <TD BGCOLOR="white" CLASS="NavBarCell2"><FONT SIZE="-2"> <A HREF="../../../../../index.html" target="_top"><B>FRAMES</B></A> &nbsp; &nbsp;<A HREF="NearestSpecPropFunction.html" target="_top"><B>NO FRAMES</B></A> &nbsp; &nbsp;<SCRIPT type="text/javascript"> <!-- if(window==top) { document.writeln('<A HREF="../../../../../allclasses-noframe.html"><B>All Classes</B></A>'); } //--> </SCRIPT> <NOSCRIPT> <A HREF="../../../../../allclasses-noframe.html"><B>All Classes</B></A> </NOSCRIPT> </FONT></TD> </TR> <TR> <TD VALIGN="top" CLASS="NavBarCell3"><FONT SIZE="-2"> SUMMARY:&nbsp;NESTED&nbsp;|&nbsp;FIELD&nbsp;|&nbsp;<A HREF="#constructor_summary">CONSTR</A>&nbsp;|&nbsp;<A HREF="#method_summary">METHOD</A></FONT></TD> <TD VALIGN="top" CLASS="NavBarCell3"><FONT SIZE="-2"> DETAIL:&nbsp;FIELD&nbsp;|&nbsp;<A HREF="#constructor_detail">CONSTR</A>&nbsp;|&nbsp;<A HREF="#method_detail">METHOD</A></FONT></TD> </TR> </TABLE> <A NAME="skip-navbar_top"></A> <!-- ========= END OF TOP NAVBAR ========= --> <HR> <!-- ======== START OF CLASS DATA ======== --> <H2> <FONT SIZE="-1"> org.apache.fop.fo.expr</FONT> <BR> Class NearestSpecPropFunction</H2> <PRE> java.lang.Object <IMG SRC="../../../../../resources/inherit.gif" ALT="extended by"><A HREF="../../../../../org/apache/fop/fo/expr/FunctionBase.html" title="class in 
org.apache.fop.fo.expr">org.apache.fop.fo.expr.FunctionBase</A> <IMG SRC="../../../../../resources/inherit.gif" ALT="extended by"><B>org.apache.fop.fo.expr.NearestSpecPropFunction</B> </PRE> <DL> <DT><B>All Implemented Interfaces:</B> <DD><A HREF="../../../../../org/apache/fop/fo/expr/Function.html" title="interface in org.apache.fop.fo.expr">Function</A></DD> </DL> <HR> <DL> <DT>public class <B>NearestSpecPropFunction</B><DT>extends <A HREF="../../../../../org/apache/fop/fo/expr/FunctionBase.html" title="class in org.apache.fop.fo.expr">FunctionBase</A></DL> <P> Class modelling the from-nearest-specified-value function. See Sec. 5.10.4 of the XSL-FO standard. <P> <P> <HR> <P> <!-- ======== NESTED CLASS SUMMARY ======== --> <!-- =========== FIELD SUMMARY =========== --> <!-- ======== CONSTRUCTOR SUMMARY ======== --> <A NAME="constructor_summary"><!-- --></A> <TABLE BORDER="1" WIDTH="100%" CELLPADDING="3" CELLSPACING="0" SUMMARY=""> <TR BGCOLOR="#CCCCFF" CLASS="TableHeadingColor"> <TD COLSPAN=2><FONT SIZE="+2"> <B>Constructor Summary</B></FONT></TD> </TR> <TR BGCOLOR="white" CLASS="TableRowColor"> <TD><CODE><B><A HREF="../../../../../org/apache/fop/fo/expr/NearestSpecPropFunction.html#NearestSpecPropFunction()">NearestSpecPropFunction</A></B>()</CODE> <BR> &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</TD> </TR> </TABLE> &nbsp; <!-- ========== METHOD SUMMARY =========== --> <A NAME="method_summary"><!-- --></A> <TABLE BORDER="1" WIDTH="100%" CELLPADDING="3" CELLSPACING="0" SUMMARY=""> <TR BGCOLOR="#CCCCFF" CLASS="TableHeadingColor"> <TD COLSPAN=2><FONT SIZE="+2"> <B>Method Summary</B></FONT></TD> </TR> <TR BGCOLOR="white" CLASS="TableRowColor"> <TD ALIGN="right" VALIGN="top" WIDTH="1%"><FONT SIZE="-1"> <CODE>&nbsp;<A HREF="../../../../../org/apache/fop/fo/properties/Property.html" title="class in org.apache.fop.fo.properties">Property</A></CODE></FONT></TD> <TD><CODE><B><A 
HREF="../../../../../org/apache/fop/fo/expr/NearestSpecPropFunction.html#eval(org.apache.fop.fo.properties.Property[], org.apache.fop.fo.expr.PropertyInfo)">eval</A></B>(<A HREF="../../../../../org/apache/fop/fo/properties/Property.html" title="class in org.apache.fop.fo.properties">Property</A>[]&nbsp;args, <A HREF="../../../../../org/apache/fop/fo/expr/PropertyInfo.html" title="class in org.apache.fop.fo.expr">PropertyInfo</A>&nbsp;pInfo)</CODE> <BR> &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Evaluate the function</TD> </TR> <TR BGCOLOR="white" CLASS="TableRowColor"> <TD ALIGN="right" VALIGN="top" WIDTH="1%"><FONT SIZE="-1"> <CODE>&nbsp;int</CODE></FONT></TD> <TD><CODE><B><A HREF="../../../../../org/apache/fop/fo/expr/NearestSpecPropFunction.html#nbArgs()">nbArgs</A></B>()</CODE> <BR> &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</TD> </TR> <TR BGCOLOR="white" CLASS="TableRowColor"> <TD ALIGN="right" VALIGN="top" WIDTH="1%"><FONT SIZE="-1"> <CODE>&nbsp;boolean</CODE></FONT></TD> <TD><CODE><B><A HREF="../../../../../org/apache/fop/fo/expr/NearestSpecPropFunction.html#padArgsWithPropertyName()">padArgsWithPropertyName</A></B>()</CODE> <BR> &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</TD> </TR> </TABLE> &nbsp;<A NAME="methods_inherited_from_class_org.apache.fop.fo.expr.FunctionBase"><!-- --></A> <TABLE BORDER="1" WIDTH="100%" CELLPADDING="3" CELLSPACING="0" SUMMARY=""> <TR BGCOLOR="#EEEEFF" CLASS="TableSubHeadingColor"> <TD><B>Methods inherited from class org.apache.fop.fo.expr.<A HREF="../../../../../org/apache/fop/fo/expr/FunctionBase.html" title="class in org.apache.fop.fo.expr">FunctionBase</A></B></TD> </TR> <TR BGCOLOR="white" CLASS="TableRowColor"> <TD><CODE><A HREF="../../../../../org/apache/fop/fo/expr/FunctionBase.html#getPercentBase()">getPercentBase</A></CODE></TD> </TR> </TABLE> &nbsp;<A NAME="methods_inherited_from_class_java.lang.Object"><!-- --></A> <TABLE BORDER="1" WIDTH="100%" 
CELLPADDING="3" CELLSPACING="0" SUMMARY=""> <TR BGCOLOR="#EEEEFF" CLASS="TableSubHeadingColor"> <TD><B>Methods inherited from class java.lang.Object</B></TD> </TR> <TR BGCOLOR="white" CLASS="TableRowColor"> <TD><CODE>clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait</CODE></TD> </TR> </TABLE> &nbsp; <P> <!-- ============ FIELD DETAIL =========== --> <!-- ========= CONSTRUCTOR DETAIL ======== --> <A NAME="constructor_detail"><!-- --></A> <TABLE BORDER="1" WIDTH="100%" CELLPADDING="3" CELLSPACING="0" SUMMARY=""> <TR BGCOLOR="#CCCCFF" CLASS="TableHeadingColor"> <TD COLSPAN=1><FONT SIZE="+2"> <B>Constructor Detail</B></FONT></TD> </TR> </TABLE> <A NAME="NearestSpecPropFunction()"><!-- --></A><H3> NearestSpecPropFunction</H3> <PRE> public <B>NearestSpecPropFunction</B>()</PRE> <DL> </DL> <!-- ============ METHOD DETAIL ========== --> <A NAME="method_detail"><!-- --></A> <TABLE BORDER="1" WIDTH="100%" CELLPADDING="3" CELLSPACING="0" SUMMARY=""> <TR BGCOLOR="#CCCCFF" CLASS="TableHeadingColor"> <TD COLSPAN=1><FONT SIZE="+2"> <B>Method Detail</B></FONT></TD> </TR> </TABLE> <A NAME="nbArgs()"><!-- --></A><H3> nbArgs</H3> <PRE> public int <B>nbArgs</B>()</PRE> <DL> <DD><DL> <DT><B>Returns:</B><DD>1 (maximum number of arguments for from-nearest-specified-value function)</DL> </DD> </DL> <HR> <A NAME="padArgsWithPropertyName()"><!-- --></A><H3> padArgsWithPropertyName</H3> <PRE> public boolean <B>padArgsWithPropertyName</B>()</PRE> <DL> <DD><DL> <DT><B>Specified by:</B><DD><CODE><A HREF="../../../../../org/apache/fop/fo/expr/Function.html#padArgsWithPropertyName()">padArgsWithPropertyName</A></CODE> in interface <CODE><A HREF="../../../../../org/apache/fop/fo/expr/Function.html" title="interface in org.apache.fop.fo.expr">Function</A></CODE><DT><B>Overrides:</B><DD><CODE><A HREF="../../../../../org/apache/fop/fo/expr/FunctionBase.html#padArgsWithPropertyName()">padArgsWithPropertyName</A></CODE> in class <CODE><A 
HREF="../../../../../org/apache/fop/fo/expr/FunctionBase.html" title="class in org.apache.fop.fo.expr">FunctionBase</A></CODE></DL> </DD> <DD><DL> <DT><B>Returns:</B><DD>true (allow padding of arglist with property name)</DL> </DD> </DL> <HR> <A NAME="eval(org.apache.fop.fo.properties.Property[], org.apache.fop.fo.expr.PropertyInfo)"><!-- --></A><H3> eval</H3> <PRE> public <A HREF="../../../../../org/apache/fop/fo/properties/Property.html" title="class in org.apache.fop.fo.properties">Property</A> <B>eval</B>(<A HREF="../../../../../org/apache/fop/fo/properties/Property.html" title="class in org.apache.fop.fo.properties">Property</A>[]&nbsp;args, <A HREF="../../../../../org/apache/fop/fo/expr/PropertyInfo.html" title="class in org.apache.fop.fo.expr">PropertyInfo</A>&nbsp;pInfo) throws <A HREF="../../../../../org/apache/fop/fo/expr/PropertyException.html" title="class in org.apache.fop.fo.expr">PropertyException</A></PRE> <DL> <DD><B>Description copied from interface: <CODE><A HREF="../../../../../org/apache/fop/fo/expr/Function.html" title="interface in org.apache.fop.fo.expr">Function</A></CODE></B></DD> <DD>Evaluate the function <P> <DD><DL> <DT><B>Parameters:</B><DD><CODE>args</CODE> - array of arguments for the function<DD><CODE>pInfo</CODE> - PropertyInfo for the function <DT><B>Returns:</B><DD>Property containing the nearest-specified-value <DT><B>Throws:</B> <DD><CODE><A HREF="../../../../../org/apache/fop/fo/expr/PropertyException.html" title="class in org.apache.fop.fo.expr">PropertyException</A></CODE> - for invalid arguments to the function</DL> </DD> </DL> <!-- ========= END OF CLASS DATA ========= --> <HR> Copyright 1999-2008 The Apache Software Foundation. All Rights Reserved. </BODY> </HTML>
Tucked at the far end of Sugan village in Kashmir Valley’s Shopian district, the double-storied home of Ghulam Hassan received a pleasant surprise on the morning of Friday, 11 January. Zeenat-ul-Islam, his son and chief of Al-Badr militant outfit, had come to meet the family, especially his four-year-old daughter. “He met everyone and asked us to pray for him. Before leaving, he gave a Rs 500 note to his daughter and kissed her forehead. Little did we know that it was going to be their last meeting,” Ghulam Hassan, Zeenat’s father, said.
Q: Firebase and Thread 1

Whenever I add my Firebase code I get the infamous Thread 1: signal SIGABRT crash on:

class AppDelegate: UIResponder, UIApplicationDelegate {

I've taken all my Firebase-related code out of the view controller, and it works smoothly with no errors. I am using Swift 3 and have tried updating my pods, but no success there either. This is the code that I've decided has to be causing the error:

var ref = FIRDatabase.database().reference()

That is at the beginning of my script with all my other declarations. And...

ref.child("Agent1").observeSingleEvent(of: .value, with: { (snapshot) in
    let value = snapshot.value as? NSDictionary
    self.FnameA1 = value?["Fname"] as? String ?? ""
    print("\(self.FnameA1)")
}) { (error) in
    print(error.localizedDescription)
}

which is where I am trying to grab data from the database and set my local variable (FnameA1) to a string from my database. Although I get no error while compiling, I know the error has to be there. I will show the format of my database below in case the error comes from how I grab data from it.

{
  "Agent1" : {
    "Edition" : "Standard 17",
    "Fname" : "L.",
    "Lname" : "James",
    "Ovr" : 95,
    "Pos" : 3,
    "Price" : 1000000
  }
}

A: You are getting this error because you are trying to use the Firebase reference before Firebase has been configured. In your log you should see a message similar to this:

*** Terminating app due to uncaught exception 'FIRAppNotConfigured', reason: 'Failed to get default FIRDatabase instance. Must call FIRApp.configure() before using FIRDatabase.'

You can fix this by declaring the variable first and setting its value (the actual reference) later:

var ref: FIRDatabaseReference?

And then in your function:

ref = FIRDatabase.database().reference()
ref?.child("Agent1").observeSingleEvent(of: .value, with: { (snapshot) in
    let value = snapshot.value as? NSDictionary
    self.FnameA1 = value?["Fname"] as? String ?? ""
    print("\(self.FnameA1)")
})
Wednesday, February 29, 2012 Occupy or Die - Fringe Magazine Blog Egregious Fact Number One: Our tax dollars were used to bail out Wall Street. In case you need a reminder: Wall Street banks and companies are private corporations, not public institutions. In a democracy, taxes are supposed to go toward funding roads, parks, health clinics, schools, social security–anything that benefits the common good. Despite the lame claims of some vociferous Tea Party morons, the government is not actually “taking” our money–we collectively maintain infrastructure and social services through consensually paying taxes. The consensus is implicit in being alive. If we did not pay taxes, society would collapse. You cannot maintain society through anarchic private entities–that’s antithetical to democracy. So the fact that our public dollars were used to bail out banks that do not have societal interests as their primary or even secondary or even tertiary concern means that our money was stolen from us. The banks use those public dollars to fatten their coffers. Meanwhile, cuts to social services–mental health, education, social security, and on and on and on and on and on–pervade. House foreclosures abound, while Wall Street executives acquire three and four homes. It’s the very embodiment of kleptocracy: enrich the elite while bleeding the effete. Egregious Fact Number Two: The majority of our taxes are used toward miscreant military misadventures, such as those in Afghanistan and Iraq. Both of these countries have been virtually decimated and are being rebuilt by corporate war profiteers. So, our country is in “debt” because our taxes are being (consciously) misused toward private and profiteering ends. This is also known as Corporate Welfare–never mind that social welfare, the correct type of welfare, is roundly and wrongly denigrated. 
We are in a fictitious debt crisis, which can be easily remedied by taking corporate interests out of public politics, re-funneling taxes towards social services, and taxing the fuck out of the rich. In countries like Denmark, there are no multi-millionaires, because the government caps salaries through taxes. No one in Denmark “earns” the kind of wealth we see flaunted with such hedonistic abandon among the American affluent. Random egregious facts: Family income has declined by nearly 7 percent in the last two years, unemployment hovers between 10-15 percent, and over 46 million Americans live in poverty, the highest rate in 50 years. Meanwhile, the 1% dwell in their marbleized compounds, piloting their luxury vehicles, and disdaining everyone else in smugly contemptuous fashion. Occupy Wall Street is a glowing, growing global movement that is stridently anti-corporate in nature and yet propelled by the principles of peace. It is a people-powered movement that is way long overdue. The movement was precipitated by radical college-agers but quickly gained momentum among the mainstream populace–young, middle-aged, and geriatric alike, since so many are so gravely affected by the perverted profit-motives of the corporate titans. The kids who founded the movement sharply see that the corrosive corporate influence over our ostensibly democratic system is bleeding them of money, of jobs, of hope…theirs is a future mired in misery if they don’t act now, and act radically, to demand an overhaul of the system. Indeed, the OWS movement has already won. It has altered the tenor of economic discourse in favor of the people rather than the profits. It has brought the issue of economic justice to the forefront and tattooed it in the minds and hearts of people everywhere – to those both affected by and empathetic to the cause. For the movement has magnetized people from all paths of life – all ethnicities and all income levels. 
There are even those in the upper tiers of income realms who sympathize – because they too could be affected by corporate corruption, and also because they share the humane ideals of economic equity. Many of them realize that greed is never good and that they could perhaps curtail their own lifestyles so that more could prosper – not outrageously so, but comfortably so. For we have an inherent right to food, shelter, education. These are things we are born needing. Not entitlements, but spiritual luxuries, because it logically follows that if we have a right to life, then we require these things to sustain life. The OWS encampments all over the country are a mode of protest against corporate dominance–the parks are our parks, the streets are our streets. They do not belong to private entities to profit from–they belong to us, the vibrant public. Of course, many of the encampments have been brutally broken up, making a malevolent mockery of the evolving democratic communities therein. For the OWS movement to effect enduring change, there must be bank sit-ins, massive boisterous marches to government offices with a list of concrete demands, and all manner of brash but peaceful civil disobedience, such as the foreclosure actions currently taking place in cities like New York and Atlanta. Occupy groups are disrupting foreclosure auctions on city hall steps and enlightening those in attendance about the malicious nature of home evictions. Occupy groups are also occupying the homes of families that face imminent eviction. Indeed, such actions have averted some evictions, as in the highly publicized case of decorated Iraq War veteran Brigitte Walker, whose home was saved from foreclosure by Occupy Atlanta. These are the only kinds of things that have ever given rise to a revolutionary restructuring of society. OWS has already done so many actions that fly well under the radar of the Mainstream Lamestream Media.
Indeed, the MSM is commercially sponsored and therefore censored to meet the petty, pernicious demands of its corporate overlords. (For a partial list of the numerous OWS actions that have already taken place, visit http://occupywallst.org/article/2011-year-revolt/.) The fact that so many right-wing authoritarian types are so threatened by the movement, as manifested in their vicious slandering of it, and the militarization of police around the country, and the horrific treatment of the peaceful protesters all over the country at the hands of the law (specifically NYC and Oakland – I mean, tear gas, grenades, rubber bullets…really?!) … all of these things evince that the movement is working, and winning. The violence is not emanating from the protesters as some would like you to believe; rather, it is emanating from politicians and corporations who cower at the prospect of a people-powered juggernaut pacifistically crashing their plutocratic party. Occupy Wall Street has occupied our hearts, and will one day bring about the radical renovation of a crumbling house presided over by sleazy corporate slumlords. Bio: Clockwise Cat publisher and editor Alison Ross dabbles delicately in verse. She also spews incessant invective. You may peruse her precious poesie and rowdy rants online. She was once nominated for Best of the Net, but lost out to savvier scribes. Alison wants to forge a new genre of poetic politics called Zen-Surrealist-Socialism. Won’t you join her cause?
The pregnant woman with a myocardial infarction: nursing diagnosis. Critical care nurses in emergency, cardiac, or medical intensive care units may care for women who have experienced a myocardial infarction during pregnancy. Nursing management of the pregnant patient with a myocardial infarction (MI) requires an understanding of the normal physiology of pregnancy, the deviations from health with an MI, and an ability to integrate this knowledge to provide skillful care to unique and very ill patients. Here the authors describe caring for a pregnant patient in cardiac care, while a later article in this issue focuses on the critical care nurse's role in teaching obstetric nurses arrhythmia interpretation when the patient remains on an obstetric unit. Collaboration between the critical care nurse and obstetric nurse is essential to care for these complex patients.
NYPD confirms Blaine is under ‘an active’ investigation, but declined to provide further details This article is more than 1 year old The New York police department is investigating sexual assault allegations against magician David Blaine, authorities confirmed on Monday. The chief of detectives, Dermot Shea, told reporters at an unrelated news conference that Blaine is under “an active” investigation. He declined to discuss any details, including whether police had sought to interview Blaine. Shea was responding to a report by the Daily Beast news website that the NYPD had taken statements from two women accusing Blaine of sexual assault. Citing unnamed sources, the report said one of the alleged victims claims she was attacked by Blaine inside his Manhattan apartment in 1998. The Daily Beast had previously reported that a former model alleged Blaine raped her in London in 2004, an allegation he denied. Scotland Yard detectives later declined to take further action after investigating her claim, the website said. The 45-year-old Blaine is known for stunts such as being buried underground for seven days without food or water in New York in 1999. He also stayed in a plexiglass case suspended 30ft above the River Thames in London for more than 40 days in 2003. Phone and email messages requesting comment were sent to his lawyer on Monday. According to his website, he is to begin a tour in the United Kingdom and Ireland in June. In a separate statement about the New York City case on Monday, the NYPD said it “takes sexual assault and rape cases extremely seriously, and urges anyone who has been a victim to file a report so we can perform a comprehensive investigation, and offer support and services to survivors”.
Q: IIS rewrite rule to black-hole URLs with an apostrophe I bought a web app that does not use special characters. In reviewing my logs, I am seeing probing by threat actors who attempt to see if my site is vulnerable to SQL injection by adding apostrophes and other char() characters that are never normally used. I trust the security of my app (mostly), but wanted to see if a rewrite rule or some other methodology would black-hole their requests. My web app gives out its standard error. I am looking for rules or ideas that 1.) give the attacker as little info as possible without 2.) adding a lot of overhead to the server. There are lots of rewrite rule examples out there, but none that I have found deal with this angle. Simple example of the probing: https://sub.domain.com/default.aspx?page=3500'A=0&Id=497066 A: Something like this? It will weed out some of the nastier SQL characters. Note that URL Rewrite rules must be nested inside a <rewrite> element: <system.webServer> <rewrite> <rules> <rule name="No SQL injection" stopProcessing="true"> <match url=".*" /> <conditions> <add input="{QUERY_STRING}" pattern="['\(\);]" /> </conditions> <action type="AbortRequest" /> </rule> </rules> </rewrite> </system.webServer> This will abort the request if any of the characters '(); appear in the query string.
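To sanity-check what that condition pattern will catch, here is a quick Python sketch. The query strings are made-up examples (the first mirrors the probe shown in the question), and the regex is the same character class used in the rule above:

```python
import re

# Same character class as the rewrite rule's {QUERY_STRING} condition:
# matches an apostrophe, parenthesis, or semicolon anywhere in the string.
sqlish = re.compile(r"['();]")

queries = [
    "page=3500'A=0&Id=497066",  # probe with an apostrophe -> abort
    "page=3500&Id=497066",      # normal request -> allow
    "name=char(65)",            # char() probe -> abort
]

for q in queries:
    verdict = "abort" if sqlish.search(q) else "allow"
    print(q, "->", verdict)
```

Any query string that trips the pattern would be aborted by IIS with no response body, which matches the goal of giving the attacker as little information as possible.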
30 Glam Black and Gold Nails NailDesigners, we’re bringing you our special collection of black and gold nail designs. If for some reason you don’t want Christmas nail designs for the holidays, the next best bet is to go for a black and gold combo. This pairing is sexy, glamorous, chic, and best of all, it’s just right for all the holiday parties you’ll be attending. It can even stretch into the New Year. We can imagine your outfit already. A gorgeous black party dress, some gold accessories, and a bright lippy color. How exciting! With just two colors, you get an unlimited amount of possibilities. You can go dotted, distressed, gradient, or stamped manicures. Check out our black and gold nail designs below.
Author Topic: Pizza Secrets by Beverly Collins (Read 32133 times)

scott123 ...for a whole two hours. I watched the orientation and then walked out. As I sat there watching the video, all I could say to myself was "THIS is how corporations defile art. This is how big business chews up and spits out culture." My job was actually to refine delivery maps (by hand!) and general preopening operations, but I made a lot of pizzas because I could. This was early '80s with commissary everything, and conveyor ovens, but we made some good pizzas, for ourselves if not for the customers.

I make pizza for fun, not for competition and not commercially. I don’t need a 4-minute pizza cooked on the most thermodynamic surface known to man. When entertaining, I like my guests to have fun and join in on the pizza-making experience with a nice glass of wine in their hands. I have never had a guest ask me if I could make a 4-minute pizza like Scott123 on the Pizza Making forum. I grew up in NY City and for 65 years I have loved all the NY style pizzas you can purchase at places like Ray’s (and all the other Ray’s), Grimaldi’s, Lombardi’s, Di Fara’s. If you need to lecture someone on what a NY style pizza is – please take it somewhere else. When I need to consult with the Pizza Police, the above locations aren’t bad for starters. They all make great pizza, but there are differences in every one of them. In fact, if you asked each one of them, they would give you a reason why their pizza is different from the competition and why their pies are rated the “Best in the World.” My thanks to everyone for their lively "On Topic" comments and book recommendations. RE: FibraMent – it can handle 1400+ degrees with an even heat and I can crank my oven up to 700 degrees. I also like to bake breads, pies, you name it, and I find that my FibraMent cooking surface does a great job of evenly cooking anything that goes in my oven.
I know you are not a fan of FibraMent, but for informational purposes, for those who may be interested in a cost-effective way to put out a very good baking product, I have attached an article from Breadtopia:

The FibraMent line of baking stones was developed by Illinois entrepreneur Mark O’Toole to meet the needs of professional bakers. Prior to FibraMent, bakers' options were limited to ceramic and clay products that baked unevenly and cracked easily. FibraMent has become an industry standard for the baking industry and now is also available to home bakers seeking to duplicate the joys of hearth baking in their own kitchens.
• FibraMent bakes great pizza, breads and bagels every time.
• FibraMent NEVER needs cleaning. Simply brush excess crumbs off the baking stone.
• Comes with a ten-year warranty.
• FibraMent customer support is only a mouse click or phone call away.
• FibraMent is safety tested and certified by NSF International, the widely recognized and respected independent certification organization for public health and safety.
• Easy to follow baking instructions are provided.
• FibraMent stones are designed for gas, electric and convection ovens, and also outdoor barbeque grills.
• Stones available to fit most oven and outdoor grill sizes.
• FibraMent never needs to be removed from your oven.
• FibraMent is a minimal long-term investment which returns optimal results.

FibraMent Q & A:
1. What is the composition of FibraMent? FibraMent is made from a proprietary blend of heat resistant and conductive raw materials approved by NSF International for use in baking ovens.
2. What size FibraMent Baking Stone should I buy? When measuring your home oven, allow approximately a one inch opening on each side of the stone for proper air movement.
3. Can I lay a sheet of aluminum foil over the FibraMent Baking Stone to keep it from staining? Yes. The aluminum foil will not alter FibraMent’s baking properties. However, all baking stones are porous and will darken over time.
Additional benefits of using aluminum foil are: thermal shock will be minimized and excess moisture will be prevented from contacting the stone.
4. Can FibraMent be used in wood burning ovens and outdoor patio grills? Although FibraMent has a 1500°F continuous use operating temperature limit, it cannot be exposed directly to flame. The flame diverter that comes with our barbecue grill stones must be used.
5. Some bakery publications have recommended baking on quarry tile. How does FibraMent compare to quarry tile? Quarry tile does not have the heat transfer properties necessary for quality baking. It is not engineered for baking oven temperature applications. Quarry tile becomes brittle after it has been heated and does not provide an even bake.
6. Can FibraMent be placed directly on a heating element in electric ovens? No. Nothing should be placed on the element. Setting baking stones or pans on the element restricts the heat flow. This gradually decreases the efficiency of the element until it fails.
7. Do you provide FibraMent similar to the HearthKits that are available? Yes, and you do not have to spend that much money. FibraMent is not only used as a baking stone. Our commercial accounts use FibraMent to line their oven ceiling and walls. For home ovens, place one baking stone on the wire rack at the very bottom of your oven. This will be your baking surface. Use a second FibraMent stone as the ceiling by placing it on the wire rack above. Adjust the height of the wire rack so it’s immediately over the foods you are baking. Since we have greatly reduced the ceiling height of the oven, and are redirecting the heat back down on the items we are baking, wall inserts are not necessary. Our tests show using this method improves the bake quality.
8. Do thicker stones improve baking performance? Thermal conductivity or heat transfer is independent of thickness. Baking stones provide direct bottom heat to your food items.
Thickness of the stone does not change the heat transfer rate. For baking stones to work properly the heat must be conducted evenly. Some baking stones conduct heat too quickly while other stones conduct heat too slowly. FibraMent’s heat transfer rate is 4.63 Btu·in/hr·sqft·°F, tested to ASTM Standard C177-95. This is the ideal heat transfer rate. Thicker stones (1″, 1 1/2″ and 2″) are primarily used in commercial ovens where additional strength and recovery times are required.
9. Why don’t you supply a wire serving rack with your pizza stone? Baking stones should be left in the oven. Food bakes at temperatures over 200°F. FibraMent will stay above 200°F for at least thirty minutes after it’s taken out of a 400°F to 500°F oven. You do not want your food to continue to cook after it is taken out of the oven. Also, you will probably burn your fingers trying to take a slice of pizza off the hot stone. Serving the pizza will also become a problem. You will not harm FibraMent by cutting your pizza directly on the stone, but you will dull your cutting instrument very quickly.
10. Can I leave my baking stone in the oven during the cleaning cycle? Baking stones are porous and absorb anything that comes in contact with them. It’s best to take the stone out of the oven when it goes through the cleaning cycle. You can leave the stone in the oven if you prevent any foreign residue from dripping on the stone.
11. When I baked my last pizza some sauce and cheese spilled onto the stone. How should I clean it? Take a dry rag and wipe off as much of the residue as you can. Use a rubber spatula to remove any stubborn spills. Be careful not to damage the surface of the stone. You can also bake off the heavy spills. Instead of turning the oven off when you are through baking, turn it up to the highest temperature setting for 60 to 120 minutes.
This will charcoalize the residue spilled onto the stone. Remember, baking stones naturally darken and discolor over time with use (stone pictured here is 5 years old). The grease and toppings that drop on the stone actually improve the baking properties. This seals the surface of the stone and minimizes the chance of dough sticking to the surface.
12. Why is it necessary to predry/temper the stone? Since baking stones are porous they absorb moisture. Moisture turns to steam at 212°F. If the moisture is forced out of the stone too quickly it can develop cracks. This is why a slow, gradual temperature increase is so important. Even if we predried the stone at the factory, it would pick up moisture during shipment to you. To ensure there was a nominal amount of moisture in the stone, the predrying process would have to be repeated.
13. When I opened the carton I noticed some chips on the edges. Should I be concerned? Due to the inherent nature of the raw materials used in FibraMent, the edges may have some small chips. These areas do not affect the baking properties of FibraMent.
14. Some baking stone suppliers state their material absorbs moisture during the baking process. Is this the case with FibraMent? Baking stones provide even, direct heat from the bottom of the stone. Consistent thermal conductivity ensures that the toppings and dough finish baking at the same time. Baking stones do NOT draw moisture out of the dough. Rather, good quality baking stones bake through the dough at an even pace. It’s hard to imagine a stone heated up to 600°F can absorb moisture. Moisture evaporates very quickly at those temperatures.

I have never associated Tom Lehmann in my thinking with the big chains. Actually, Tom Lehmann is a friend of the small mom and pop independent pizza operator. He may consult for the chains, although I don't ever recall his discussing that anyplace where he writes. So, I tend to doubt that he does consulting work for them. In reality, they don't even need him.
They have large staffs of research people and food scientists. Tom is sometimes asked for advice by people who want to copy or mimic a big chain dough, but he usually doesn't come close to what I would consider a good clone, just based on what I know of the chains' products. Where the independents and chains have the most commonality is in the use of conveyor ovens. Most of the oven switches by independents are from deck ovens to conveyor ovens, not the other way around, and the trend toward conveyor ovens is accelerating. All of the big four chains use them, but even smaller chains that once used deck ovens have largely switched over to conveyor ovens, such as Buddy's, Jet's, Home Run Inn, Round Table and even some Chicago area deep-dish operators. I will concede that Tom will sometimes promote the use of the PizzaTools hearth and cloud type disks for making the NY style in a conveyor oven, for which such disks were designed, but that is usually in response to requests from operators for advice in doing so. And Tom will usually mention that there are some models of conveyor ovens that are best suited for use with the hearth/cloud type disks, mainly the so-called fast bake ovens.

I've been using a Fibrament stone in my kitchen oven for several years with very good results baking bread. But this morning I tried something for the first time. Instead of the Fibrament stone, I baked Tartine loaves on a layer of 1.5" refractory tiles. These are designed for good heat conduction and the difference in oven spring was dramatic. My Tartine loaves are usually flat on the bottom, but these were completely rounded at the edges, very little of the bottom in contact with the tiles. Hard to know exactly because Tartine loaves are too wet to scale, but I'm guessing a 25% increase in volume compared to baking on the Fibrament. Lots more testing to do, but I'm leaning towards the position that there are better (and less expensive) solutions than Fibrament.
John, you make some of the best looking Neapolitan pizzas this forum has to offer. I would probably be the one who'd be intimidated. Now, if you started going around telling people that you're an expert on NY style pizza and have a recipe that guarantees the 'Best Pizza Anywhere' using a fibrament stone and a screen, then not only would I recommend not having me over for pizza, but I'd highly recommend avoiding me completely – because you and I would have a big problem.

Scott, I'd love to have the opportunity to cook a pie for you someday. I'm not going to claim to be an expert in anything; nonetheless I will make you a pretty darn good pie (probably not the "best pizza anywhere," though I will claim that if it will work you up enough to come to Texas), and I will make it with KAAP and cook it on a Fibrament stone. CL
"We make great pizza, with sourdough when we can, commercial yeast when we must, but always great pizza." Craig's Neapolitan Garage

I bought a Fibrament stone last year hoping for a better crust on my New York pies. I dropped 80 or more bucks with shipping, and experienced rude customer service. I was optimistic when I got the stone, but was very disappointed with the results. A flat, pale crust at 550. Frustrated, I resorted to oven tricks, baking at 650 on my cheap Pizza Gourmet 20-year-old firebrick stone. The oven spring was magical, and the family gathered around the oven, mesmerized by the puff and heat. The pie was great, with open, airy crumb and near char on the crust. The smoke alarms and worried looks on my wife's face steered me away from oven tricks. Half-inch steel at 530 gives me the 4-minute pie my family loves. No tricks, no alarms. 40 bucks for the steel. Pizza heaven. This forum is a treasure trove. I have seen no reference in print that approaches the level of expertise and support this forum offers. Scott 123 brought me where I wanted to go. I am a believer!
I'm pretty sure that if I played around with a fibrament stone, I could make some pretty good pizza with it. But some materials will only get you so far and some materials are better than others. The folks in this forum spend massive amounts of time, energy, and effort to find out the differences between ingredients, materials, techniques, ovens, what have you. When it comes to stones and baking materials, Scott123 IS the forum expert. Not some guy that has an axe to grind with Fibrament specifically. Take the advice if you are ready for it. If you are happy with what you are making, then that's all that matters. But if you care to know the differences and improve your game, then take the FREE advice that is given. Do your own experiments and I'm pretty sure you will come to the same conclusion: that Scott knows his stuff. Scott can be passionate/opinionated at times, but who here isn't? I appreciate that about him...immensely. We don't have to agree on everything, but there is one thing I know. I have learned a ton from Scott and continue to do so. As a matter of fact, I don't happen to agree with a lot of the forum experts on a lot of things, but I can always learn something from them to make my pizza better.

I am curious. Without naming names or anything like that, can you give a few examples? Peter

Just some general ideas I've read in old posts. I haven't heard too many lately though.
- NP (styled) pizzas can't be made in the home oven.
- 00 ONLY works at high temps and a sub-2-minute bake.
- Caputo 00 flour is the best flour for making pizza.
- The VPN method is the only way for making true NP pies.
- Spiral mixers and WFOs are required to make great pizza. (I don't think I've heard anyone come out and say this, but I get the feeling this is the general sentiment.)
- HG flour makes chewy products.
It's not that I completely disagree, and I can definitely see where and how some ideas get propagated. I just don't completely agree either.
My point was really more or less that we can all learn from each other even if we don't agree on everything.

Just some general ideas I've read in old posts. I haven't heard too many lately though.
- NP (styled) pizzas can't be made in the home oven.
- 00 ONLY works at high temps and a sub-2-minute bake.
- Caputo 00 flour is the best flour for making pizza.
- The VPN method is the only way for making true NP pies.
- Spiral mixers and WFOs are required to make great pizza. (I don't think I've heard anyone come out and say this, but I get the feeling this is the general sentiment.)
- HG flour makes chewy products.

I'd add these to the list of things I've read on this forum more than once that I personally disagree with:
- KAAP and KABF are garbage
- Cold fermentation makes a better pizza
- The lust for HG flour
CL
"We make great pizza, with sourdough when we can, commercial yeast when we must, but always great pizza." Craig's Neapolitan Garage

scott123
I'd add these to the list of things I've read on this forum more than once that I personally disagree with:
- KAAP and KABF are garbage
The 'experts' say this? Really? I thought it was just me. Seriously, though, while I have no specific axe to grind with Fibrament, I've got a chip on my shoulder the size of Mount Rushmore when it comes to KA. About a decade ago, KA completely robbed me of 2 years of my pizzamaking life (by selling gummy, inferior bread flour) and I will never forgive them for it. I've seen the pizzas people make with KA now and it's obvious that it's a different, quality product, but... out of every KA pizza I've ever seen (and I've seen hundreds), there is a distinct trend towards less oven spring, especially with amateur bakers. More importantly, I've never seen a KA pizza that had a particular trait that couldn't be recreated with traditional pizzeria flour.
Peter talks a lot about KA's 'tight specifications' and 'careful wheat selection,' but I sincerely believe that KA is not superior to any of the better pro flours and thus does absolutely nothing to deserve the higher price. So maybe 'garbage' might be too strong of a word. A better description might be 'larger learning curve flour sold by price gouging bastards.' How's that? Craig, it may require some slight tweaks in hydration, but if Gold Medal AP flour (or Heckers, if you can get it) can't perform as well as KAAP, I'll eat my hat.

I have always viewed the "tight specs" commentary as serving two purposes. The first has to do with the fact that King Arthur does not mill its own flours. So, really the only control it has over its millers is to specify the specs that are to be followed to ensure as much as possible a consistent product. The second purpose is marketing and allows King Arthur to boast about its tight specs (as well as being unbleached and unbromated). I once spoke with a technical person at Bay State Milling, who at the time was one of the millers for King Arthur. He mentioned the tight specs that they had to follow, but he then went on to say what a great marketer King Arthur was. It was almost like he was in awe of King Arthur's marketing prowess. I like the King Arthur flours personally and buy them despite the higher price. Where I live, the KAAP and KABF are sold at the same price, so I usually get the KABF for most of my doughs. I am not a fan of the high prices charged for the KASL and many of their other specialty flours that, for the quantities I would want to use, must be purchased directly from KA, and with stiff shipping charges tacked on to boot.

...for a whole two hours. I watched the orientation and then walked out. As I sat there watching the video, all I could say to myself was "THIS is how corporations defile art. This is how big business chews up and spits out culture."
It's hard not to defile art when your goal is to make as much money and have as many "restaurants" as possible. Anytime food is turned into a commodity, quality goes to crap.

scott123
Peter, I'm in 'awe' of King Arthur's marketing prowess. I'm actually a little bit more in awe because of my belief that they're selling regular flour at upscale flour prices while still completely dominating the home baker market. This isn't just a naked emperor, but an emperor, naked, running down the street and screaming at the top of his lungs. I think Reinhart played a big part in this, but as far as 99.9% of home bread bakers go, the concept of buying anything other than KABF is ridiculous. Scott, I picked out Heckers and Gold Medal not because they're intrinsically better or worse than other AP flours, just because they're the first thing that popped into my head. I see AP in almost the same way that I see HG. It's all pretty much the same (within each respective protein point). The one exception would be Walmart AP, and I say this not from direct experience, but because almost every single Walmart product I've ever purchased has been defective in some manner. I'm actually using up some AT right now by blending it with WM, and it's been treating me very well, but I still reserve the right to trust Walmart about as far as I can throw them.

To be fair, the King Arthur KAAP and KABF have higher protein contents than most competitive brands, at least the ones that I can find on the supermarket shelves of the stores where I shop. Whether the higher protein content of those flours justifies their higher price is hard to say. That is a question that each user has to answer.
Metal Casting Services A known and used process for thousands of years, metal casting involves the pouring of molten metal into metal casting molds. These molds contain cavities which allow us to cast products into a desired shape. Once the molten metal has been allowed to cool and solidify, the casting is broken out or ejected from the mold. This part then, in some cases, undergoes further machining and finishing processes. One of the most recognized and highly specialized metal casting companies in China today, ChinaSavvy is a Western-owned metal castings company operating to ISO 9001:2008 standards. Metal Casting Process The two main branches of metal casting processes are: Expendable Mold This mold can produce only one metal casting, as the mold is destroyed in the ejection process. Intricate geometries are achievable when using expendable molds. Binders are used to help the mold material, usually plaster, sand, or a similar material, retain its form. Permanent Mold This type of mold can produce many castings (as it has sections capable of opening and closing) but limits the shapes of parts. Molds of this type are usually made of metal or a refractory material. We specialize in: With so many types of metal casting processes at your disposal, choosing the right process for your project depends upon: The material itself Surface finishes and tolerances required Tooling costs Costs per part Volume of your production run Metal Casting Materials Pattern and Pattern Material Patterns are a geometric replica of the desired metal casting, and are required for expendable mold processes. These patterns are usually slightly bigger, compensating for the shrinkage that commonly occurs during the cooling and solidification process. Parts made using expendable molds generally require a secondary manufacturing process (like machining). Machining and other secondary processes do increase cost.
However, a machining allowance also makes it possible to improve the surface finish, hold the part dimensions, and compensate for several other variables. The material used to create the pattern depends on a few factors, including the type of metal casting process used, the mold used, the cast's desired size and geometry, the number of casts to be produced using the pattern, as well as the desired tolerance and dimensional accuracy required. The Cores Used for castings with internal geometries, a core is simply an inverse replica of the desired internal features of the part being cast. These cores are meticulously designed to compensate for shrinkage and are made of the same material as the pattern. Like the pattern, the core is ejected once the molten metal has cooled and solidified. Chaplets are used in cases where structural support is needed to help keep the core in place. These chaplets are made of a material that has a higher melting point than the material being poured, allowing them to become part of the casting as the molten metal solidifies. The Mold A mold consists of the cope (which is the top) and the drag (which is the bottom). Patterns are then placed inside molds to leave behind the pattern impression. Parting lines are formed between the cope and the drag, allowing for the mold to be opened and the pattern to be removed. After the pattern has been removed, the cores are placed (with chaplets if needed), leaving behind the geometry of the part that is to be cast. The Gating System For the molten metal to be poured, gating systems need to be added to the mold. These gating systems can be cut by hand or, using more specialized methods, be added into the pattern along with the part. The elements of a gating system are: Pouring Basin: The location at which the molten metal enters the mold. These basins are designed to reduce turbulence. Down Sprue: The location through which the molten metal travels when leaving the pouring basin.
Tapered, the down sprue has a reduced cross-section as it moves downwards. Ingate Area: The location through which the molten metal passes after leaving the down sprue and before entering the inner mold area. It is vital for regulating the molten metal flow. Runners: Passages (almost like tunnels) through which the molten metal is distributed throughout the mold. Main Cavity: This is the impression of the actual part that needs to be cast. Vents: Needed to allow gases formed during the solidification process to escape. Risers: Described as reservoirs for molten metal, risers are responsible for feeding metal to various sections of the mold in order to compensate for shrinkage during the cooling and solidification process. There are four types of risers, namely blind risers, open risers, side risers and top risers. Pouring The process by which molten metal is delivered into the casting molds, involving the flow of the metal through the gating system and into the casting cavity. Factors that influence pouring include: Pouring Temperature: While the temperature must always be higher than that required for solidification, it will differ for each metal and casting process. Superheat is the difference in temperature between the pouring temperature and the solidification (or cooling) temperature.
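The superheat relation just defined is plain arithmetic; this throwaway sketch illustrates it (the temperatures below are hypothetical examples, not figures from this page — real values depend on the alloy):

```javascript
// Superheat = pouring temperature - solidification temperature.
// Example values are illustrative assumptions only.
function superheat(pouringTempC, solidificationTempC) {
  return pouringTempC - solidificationTempC;
}

// e.g. a metal poured at 720 C that solidifies near 660 C:
var s = superheat(720, 660); // 60 degrees C of superheat
```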
Pouring Rate: A strict balance in volumetric rate is necessary during the pouring process. If the pouring is too slow, the molten metal can cool and solidify before properly filling the mold, while too high a pouring rate can result in turbulence. Turbulence: An inconsistent variation in the speed of the pouring process, as well as in the direction of flow within the molten metal itself, causes turbulence. Turbulence causes mold erosion and can also lead to an increase in the formation of metal oxides, resulting in porosity. Post Casting Services ChinaSavvy, being a highly specialized metal castings company, also delivers world-class post-casting services, including CNC machining, polishing, plating and coating. Contact us Further Suggested Reading:
From b06a228a5fd1589fc9bed654b3288b321fc21aa1 Mon Sep 17 00:00:00 2001 From: "Richard W.M. Jones" <rjones@redhat.com> Date: Sun, 20 Nov 2016 15:04:52 +0000 Subject: [PATCH] Add support for RISC-V. The architecture is sufficiently similar to aarch64 that simply extending the existing aarch64 macro works. --- src/include/storage/s_lock.h | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/src/include/storage/s_lock.h b/src/include/storage/s_lock.h index 3fe29ce..7cd578f 100644 --- a/src/include/storage/s_lock.h +++ b/src/include/storage/s_lock.h @@ -316,11 +316,12 @@ tas(volatile slock_t *lock) /* * On ARM and ARM64, we use __sync_lock_test_and_set(int *, int) if available. + * On RISC-V, the same. * * We use the int-width variant of the builtin because it works on more chips * than other widths. */ -#if defined(__arm__) || defined(__arm) || defined(__aarch64__) || defined(__aarch64) +#if defined(__arm__) || defined(__arm) || defined(__aarch64__) || defined(__aarch64) || defined(__riscv) #ifdef HAVE_GCC__SYNC_INT32_TAS #define HAS_TEST_AND_SET @@ -337,7 +338,7 @@ tas(volatile slock_t *lock) #define S_UNLOCK(lock) __sync_lock_release(lock) #endif /* HAVE_GCC__SYNC_INT32_TAS */ -#endif /* __arm__ || __arm || __aarch64__ || __aarch64 */ +#endif /* __arm__ || __arm || __aarch64__ || __aarch64 || __riscv */ /* S/390 and S/390x Linux (32- and 64-bit zSeries) */ -- 2.9.3
Related Content Schwartz had been asleep in his Jeep, police said, when Terry and Carr came by looking to rob someone. Investigators said Terry and Carr woke Schwartz up and shot him in the street, taking cigarettes from him. Carr and Terry made court appearances Monday. They teared up after the hearing when relatives told the two men that they loved them. The mother of one of the men told KMBC that her heart goes out to Schwartz's family. According to court records, Terry threatened to kill the people he was with if they told anyone what had happened. People in the neighborhood said the crime has left everyone on edge. "I just moved in here in May and it was pretty quiet until this incident that just recently happened," said a resident who described the slaying as frightening. Carr and Terry are each being held in lieu of $250,000 bail. They will go back to court on Nov. 20.
Q: java Play! framework: passing parameter view with datatable to controller returns null I'm working on a web app with the java Play! framework. In a specific view I'm using DataTable for displaying a list of customers, and I need to make the name column a link that, on click, displays the customer sheet. My problem is this snippet: "fnRowCallback": function( nRow, aData, iDisplayIndex ) { $('td:eq(0)', nRow).html( '<a href="@routes.Dashboard.dettagliCliente("'+aData[3]+'")">'+aData[0] + '</a>'); return nRow; } When the app is running and I click on a given customer, the view reaches the controller but it doesn't pass the value of aData[3] to the method; instead it passes literally the string "aData[3]" This is the specific route: GET /dashboard/dettagli_cliente/:idCliente controllers.Dashboard.dettagliCliente(idCliente:String) And this is the controller: public static Result dettagliCliente(String id){ Logger.info("CUSTOMER ID "+id); final Cliente cliente = Cliente.findById(id); Form<Cliente> formCliente = Form.form(Cliente.class).bindFromRequest(); Form<Cliente> filledForm = formCliente.fill(cliente); return ok(registra_utente_form.render(User.findByEmail(request().username()),filledForm,cliente)); } Update: This is the complete dataTable call in the view: $(document).ready(function() { $('#clienti_table').DataTable( { "processing": true, "serverSide": true, "ajax": "@routes.Dashboard.testList()", "fnRowCallback": function( nRow, aData, iDisplayIndex ) { $('td:eq(0)', nRow).html('<a href="@routes.Dashboard.dettagliCliente("'+aData[3]+'")">'+aData[0] + '</a>'); return nRow; }, }); }); If I write a sample customer id instead of aData[3] inside the href attribute, it works: clicking one of the customer list elements opens the details for that given customer (id). This is an example: $(document).ready(function() { $('#clienti_table').DataTable( { "processing": true, "serverSide": true, "ajax": "@routes.Dashboard.testList()", "fnRowCallback": function( nRow, aData,
iDisplayIndex ) { $('td:eq(0)', nRow).html('<a href="@routes.Dashboard.dettagliCliente("c0ed22dc-6c92-4a70-ad30-ea73e8b0c314")">'+aData[0] + '</a>'); return nRow; }, }); }); Thank you everyone A: You need to use Javascript Routing in this case. Scala templates are compiled on the server before any document.ready js function is called, and each @routes.Dashboard.dettagliCliente(...) call is replaced by a literal URL in the final generated view, so whatever you put in the parameter is taken as-is. Therefore: Case 1: Passing a javascript variable, @routes.Dashboard.dettagliCliente("'+aData[3]+'") generates the url /dashboard/dettagli_cliente/'+aData[3]+' Case 2: Passing a value directly, @routes.Dashboard.dettagliCliente("data") generates the url /dashboard/dettagli_cliente/data So in your case 2 the generated url is correct, which is why it worked. And of course you can build the link /dashboard/dettagli_cliente/data directly in javascript in place of @routes..., so that the link is generated without any template conversion. Also check a similar question
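The build-the-URL-on-the-client workaround can be sketched as a small helper. The function name and the encodeURIComponent call below are my additions for illustration, not part of the original answer; the path mirrors the route GET /dashboard/dettagli_cliente/:idCliente from the question:

```javascript
// Hypothetical helper: build the customer-detail URL at runtime on the
// client, so the id can come from the DataTables row data (aData[3]).
function dettagliClienteUrl(idCliente) {
  return '/dashboard/dettagli_cliente/' + encodeURIComponent(idCliente);
}

// Inside fnRowCallback one would then write:
//   $('td:eq(0)', nRow).html(
//     '<a href="' + dettagliClienteUrl(aData[3]) + '">' + aData[0] + '</a>');
```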
Exposure of animals and man to toluene. Twenty rats were exposed for 60 min to 14C-labeled toluene (1,950 mg/m3) in the inspired air. The largest amounts of toluene and its metabolites were found in the white adipose tissue. In a second series of experiments seven healthy male subjects were exposed to 375 mg/m3 of toluene in the air at rest and during light, moderate and heavy physical exercise on a bicycle ergometer. The duration of each exposure period was 30 min. Of the seven male subjects three were thin, one was slightly overweight, and three were excessively overweight. The concentration of toluene in the alveolar air and the total uptake of toluene were determined during exposure. The thin subjects had a higher concentration of toluene in alveolar air than the other subjects both at rest and during exercise. The total uptake of toluene in the body during exposure showed that the subjects with the least amount of adipose tissue had the smallest uptake and the subjects with the largest amount of adipose tissue had the largest uptake.
Concepción Department, Paraguay Concepción () is a department of Paraguay. The capital is the city of Concepción. History Throughout history, this department has suffered a great amount of population instability, especially in colonial times, due to the advance of the Brazilian “bandeirantes” through the east and the attacks of the Mbayá–Guaicurú natives of Chaco, who, by then, were the lords of that region. During the later years of the colony, a great campaign was organized to reconquer the invaded territories, with the intention of defending the region by populating it, and also with the important task given to the Jesuits, who founded the city of Belén in 1760, of creating a mission with the Mbayá natives. During the governments of Francia and López, the populating process grew stronger and the north of Paraguay became a territory dedicated to cattle. Francia ordered the construction of the penal colony of Tevego in 1813 in the Concepción department. Once the Paraguayan War was over, Concepción was united with Amambay, constituting a great territory for forestry and yerba plantations. In the twentieth century, Concepción was proclaimed the second most important city of the country and became an active commercial center. Because of its ties with Mato Grosso it experienced important development during that time. In 1906, with the first plan to organize the national territory, it was proclaimed the first department. Its limits, as we know them nowadays, were established by decree 426 of 1973. Limits Concepción is located in the central part of the oriental region of Paraguay, between the parallels 22°00′ and 23°30′ South and meridians 58°00′ and 56°06′ West. To the North: The Federative Republic of Brazil, separated by the Apa River, from the mouth of the Paraguay River until the confluence with the Hermoso Stream.
To the South: San Pedro department, separated by the Ypané River from its confluence with the Guazú Stream until the mouth of the Paraguay River. To the East: Amambay department, separated by a straight line that goes from the mouth of the Hermoso Stream into the Apa River to the beginning of the Chacalnica Stream and the Negla, and the Aquidabán River to the confluence with the Guazú Stream. From this point, another straight line to the confluence of the Ypané-mi and the Ypané rivers to their confluence with the Guazú Stream. To the West: Presidente Hayes and Alto Paraguay departments, separated from them by the Paraguay River, between the mouths of the rivers Ypané and Apa. Districts The department is divided into 11 districts: Azotey Belén Concepción Horqueta Loreto San Carlos San Lázaro Yby Yaú Sargento José Félix López Paso Barreto San Alfredo Note: The districts of Sargento José Félix López, Paso Barreto and San Alfredo were created after the 2002 Census. Climate In summer the temperature rises to 39 degrees Celsius, and in winter it drops to 2 degrees Celsius. The mean is 24 degrees Celsius. Annual rainfall is about 1,324 mm. The rainiest months are November, December and January, and the driest ones are June, July and August. Orography and soil The lands of this territory are of relatively high elevation, even more so close to the northern and eastern frontiers, where they acquire the character of true mountains. These are lands of calcareous origin, with a diversity of granite rocks and marble. The soil is very fertile. In the center and the north the land is a great territory for pasture, with woods and yerba fields. In the south, the lands are high, with slight slopes and woods. To the north of the department there is a succession of hills of low height; these continuous elevations form the Cordillera Quince Puntas, along with the mountain range of San Luis, running from north to south.
The hills Valle-mi, Medina, Pyta, Naranjahai, Itaipú Guazú and Sarambí stand out from the others. Hydrography The Paraguay River runs to the west of Concepción, along with the minor rivers Apa, Aquidabán and Ypané, and the streams that bathe the territory are: Estrella, Sirena, Apamí, Primero, Quiensabe, Negla, Trementina, Chacalnica, Tapyanguá, Pitanohaga, Guazú, Mbui´i, Ypanemí, Capiibary, Mboi Guazú. The ports are: Port Concepción Port Vallemí Port Risso: a mountain pass that produces lime; it has rocky coasts and has had several owners throughout history. An antique house, built at the end of the 20th century and once used as a defense against the natives of Chaco, still stands in the place. Port Fonciere: an important watch point over the Paraguay River. There is a house from the year 1927. Port Max: the port “Tres Ollas” is nowadays a cattle establishment, right in front of Port Pinasco. Port Arrecife: it has dangerous reefs. When the Paraguay River is low, it is ideal for fishing dorado. Port Abente: a port for cattle ranches, previously called “Port Kemmerich”. It is near the Napeque Stream. Port Pagani: today it is abandoned. Port Negro: there are some cattle farms. Port Algesa: for shipment of goods. Port Antiguo: for passengers and shipment of goods. Port Itapucumí: in front of Port Pinasco. Here the remains of the first lime factory of the country can be found, older even than the one in Vallemí. The administrative building of the factory works as a limestone quarry. Lime ovens operate there and shipments to Asunción are also made. There is an important watch tower over the Paraguay River and the Vallemí area. Port Itapúa: previously called “Calera Cué”. Located opposite Port Fonciere. It has lime ovens, and shipping is made from there to all kinds of places. Right in front there is an island with beautiful beaches. The population is mainly working class; there are commercial businesses and a school built with limestone.
Port Guarati: a famous limestone factory, about 10 km from Port Itacuá. Nature and vegetation Concepción is set in the ecological region of Aquidabán, partly in the region of Amambay and partly in the region of the Central Forest. Deforestation is a big problem in this department because of the continuous advance of human activity, creating a great shock to the natural resources of the region. One of the biggest problems is the uncontrolled hunting of animals in the area. Many of the forest species are in danger of extinction, as are the animals. The ones at most risk of disappearing from the area are: the puma, yaguareté, gua’a pytá (red parrot), gua’a hovy (blue parrot), tucán, tacua guazú, mbo’i jagua, jacaré overo and lobopé. Some of the protected areas are: The San Luis Mountain Range, with an extension of 70,000 hectares Itapucumí, with an extension of 45,000 hectares Estrella de Concepción, with an extension of 2,400 hectares Laguna Negra, with an extension of 10 hectares; this area is in danger nowadays Economy In agriculture, the most important products are: cotton, sugar cane, maize, sesame seed, wheat and manioc. Also important is the production of locote, sweet potatoes, banana, pepper, tártago, coffee, pineapple, grapefruit and ka’a he’e (a natural sweetener herb). Silviculture has diminished due to the massive deforestation in this region. Animal breeding is the third largest in the country, surpassed only by Presidente Hayes's and San Pedro's. The mortality rates for bovine cattle are relatively low. In Concepción is found the largest area of natural pasture of the Oriental Region. There are also porcine, ovine and goat cattle in important percentages. In Vallemí is the National Cement Industry, which has 150 lime extractor plants by the Paraguay River, and near the Apa River quarries of marble are exploited. There are also other industries such as cold-storage plants, silos, mills and cotton gin industries.
Communication and services The Paraguay River is the most important waterway, because it is navigable along almost 230 km of the territory it borders in Concepción. The Bioceanic Route crosses through the department. Route 5 "Gral. Bernardino Caballero" communicates with Pedro Juan Caballero and connects with Route 3 “Gral. Elizardo Aquino”, which leads to Asunción. Another access to the department is through Pozo Colorado, a military depot, which connects with Route 9 “Carlos Antonio López”, also known as the "Transchaco Route", in the Paraguayan Chaco. There are 1,951 km of roads in the entire department; 720 km of them are paved and 146 km are pebbled roads, and 362 interdepartmental roads cross it. The Mcal. Francisco Solano López Airport is located in Concepción City and there are more airstrips in other districts, as well as in the most important cattle establishments. There is direct-dial phone service in Concepción City, Horqueta and Yby Yaú; in Belén and Loreto the service is by dialing through an operator. The radio stations in AM are: Radio Concepción, Radio Vallemí, Yby Yaú and Guyra Campana. In FM: Vallemí, Itá Porá, Aquidabán, Los Ángeles, Continental, Belén, Norte Comunicaciones, among others. There are also television stations. Concepción has 33,976 dwellings, 13,768 in the urban area and 20,208 in the rural area. There are 1,094 houses with running water service, and electricity usage is about 85,082 kilowatts annually. Education There are 190 educational institutions for initial education. In a total of 393 primary schools there are 39,692 students registered, and in 63 high schools there are 9,636 students. Concepción is the seat of the Veterinary Filial Faculty of Asunción National University and the Science and Letters Filial Faculty of the Catholic University. In the department there are institutions offering Permanent, Special and Technical Education, as well as Teacher Training Institutes.
Health The department has 64 health institutions, hospitals and health care centers, distributed among the different districts throughout the entire department, without counting the private establishments. Tourism In Concepción there are many places of tourist appeal, which is an important source of income for the department. One of them is the Tagatiya Stream, a place for ecological tourism. Old constructions can be found in Concepción City, capital of the department; these are an example of the historical past of this area, as are a locomotive that worked until 1960 and a truck used in the Chaco War. Here was located Francisco Solano López's barracks, from which he led the troops of Gral. Resquín in a military campaign to Mato Grosso, during the War Against the Triple Alliance. The Fort San Carlos, on the Apa, is an interesting place to visit. It was built in colonial times as a defense against the invasions of Brazilian bandeirantes. The Kurusu Isabel Oratory, a few kilometers away from Concepción, receives many visitors. A cruise is offered along the Paraguay River, and all the rivers and streams in the department offer the possibility of practicing many aquatic sports, fishing, navigation and visits to the beaches. The hills Paso Bravo and San Luis are much visited by tourists. Peña Hermosa Island is a hill of limestone situated in the Paraguay River. The Aquidabán region has forests and large fields, lagoons, swamps and lakes. Specimens of Trébol, Timbó, Red Quebracho, Karanda, Palo Blanco, Juasy’y Guazú, Urundey-mi, Kurupa’y, Curuñi, and Jata’i can be found in the forests; in the meadows, specimens of Arasupe; and in palm fields, Karanday. The Primavera Country House, by the Aquidabán River, has beautiful beaches and lagoons, where people can go camping or ride horses, as well as go on outings.
The Ña Blanca Country House, by the Tagatija Guazú, has as its central attraction a stream of crystal clear water with small waterfalls, where tourists can go swimming and camping. The JM Ranch has an ample beach by the river and is a nice place to enjoy camping and fishing. Alto Biobio is one of the most scenic places and sits right by the River Biobio. References Geografía Ilustrada del Paraguay, Distribuidora Arami SRL, 2007. Geografía del Paraguay, Primera Edición 1999, Editorial Hispana Paraguay SRL. External links SENATUR
GNOME Do 0.8.3.1 has been released with a couple of extra bugfixes. The main attraction in the 0.8.3.1 big top is a fix for the “Do sits there eating 100% cpu” bug. In the lesser rings are multiple fixes for crasher...
A General Permit is a pre-approved permit and certificate which applies to a specific class of significant sources. By issuing a General Permit, the Department indicates that it approves the activities authorized by the General Permit, provided that the owner or operator of the source registers with the Department and meets the requirements of the General Permit. If a source belongs to a class of sources which qualify for a General Permit and the owner or operator of the source registers for the General Permit, the registration satisfies the requirements of N.J.A.C. 7:27-8.3 for a permit and certificate. A General Operating Permit is a pre-approved permit to construct and operate for major facilities (subject to Title V of the Federal Clean Air Act), issued pursuant to N.J.A.C. 7:27-22.14, for one or more types of similar sources at a facility. A major facility operator with a qualifying source may register for and operate under the conditions of the general operating permit, rather than submit a modification to the facility’s operating permit.
Q: Bounds not working, does not even display markers when trying to add bounds The current code below does not even show markers, but when the bounds are removed, the markers start appearing. I dont know where I am going wrong. All I want to do is zoom the map to include all of the 3 markers. Currently it is zooming to 0,0 . I do provide my api key, so that is not the problem. Code: <html> <head> <meta name="viewport" content="initial-scale=1.0, user-scalable=no" /> <style type="text/css"> html { height: 100% } body { height: 100%; margin: 0; padding: 0 } #map_canvas { height: 100% } </style> <script type="text/javascript" src="https://maps.googleapis.com/maps/api/js?key=<key provided>&sensor=true"> </script> <script type="text/javascript"> function initialize() { var markers = [ { lat: 61.912113, lng: 61.912113 },{ lat: 62.912149, lng: 62.912144 },{ lat: 61.411123, lng: 61.411123 }]; var mapOptions = { center: new google.maps.LatLng(0,0), zoom: 8, mapTypeId: google.maps.MapTypeId.ROADMAP }; var map = new google.maps.Map(document.getElementById("map_canvas"), mapOptions); var bounds = new google.maps.LatLngBounds (); var LatLngList = array (new google.maps.LatLng (markers[0].lat,markers[0].lng), new google.maps.LatLng (markers[1].lat,markers[1].lng), new google.maps.LatLng (markers[2].lat,markers[2].lng)); for (i = 0; i < markers.length; i++) { var data = markers[i]; var myLatlng = new google.maps.LatLng(data.lat, data.lng); var marker = new google.maps.Marker({ position: myLatlng, map: map, title: "text" }); } for (var i = 0; i < LatLngList.length; i++) { bounds.extend (LatLngList[i]); } map.fitBounds (bounds); } </script> </head> <body onload="initialize()"> <div id="map_canvas" style="width:100%; height:100%"></div> </body> </html> A: You have an error when you create the array on this line (I've truncated the code for readability): var LatLngList = array (new google.maps.LatLng (markers[0].lat,markers[0].lng) ); Change it to: var LatLngList = Array (new 
google.maps.LatLng (markers[0].lat,markers[0].lng) ); or use an array literal: var LatLngList = [ new google.maps.LatLng (markers[0].lat,markers[0].lng) ]; JavaScript has no global function named array, so the lowercase call throws a ReferenceError, initialize() stops at that line, and neither the markers nor the bounds are ever created — which is why nothing shows up until the bounds code is removed.
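The distinction the answer relies on can be checked outside the Maps API entirely; this standalone sketch (my addition) shows the three spellings side by side:

```javascript
// `Array` (capital A) is the built-in constructor; an array literal does the
// same job; lowercase `array` is not defined anywhere, so calling it throws
// a ReferenceError, which is what halted initialize() in the question.
var viaConstructor = Array(1, 2, 3); // three elements: 1, 2, 3
var viaLiteral = [1, 2, 3];          // same result; literals are usually preferred

var threw = false;
try {
  array(1, 2, 3);                    // ReferenceError: array is not defined
} catch (e) {
  threw = e instanceof ReferenceError;
}
```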
January 10, 2005 Hiya, everyone! Can you help a wearable computing researcher? (Not me, but a Ph.D. student in Japan.) He’s interested in cultural differences in attitude regarding electronic clothes. If you copy and paste the text below, edit it to reflect your answers, and mail it to duval@grad.nii.ac.jp , you’ll make him happy. It took me ten to fifteen minutes to fill this out. Questionnaire This questionnaire was designed for research on new technologies: clothes possessing particular features, capacities and some kind of intelligence. Prototypes are currently being designed in France, Japan, and the USA. Your answers will be taken into account for a study that will be published in 2005 or 2006; we therefore appreciate the time you will take to answer the following questions. Answers will be kept in a database without association to your identity. Thank you for participating in this study. Please provide the following information. Age _____ Gender _____ Nationality __________________ Occupation (student, engineer...) __________________________________________ Field (computer science, psychology...) __________________________________________ Using the scale below, please tell us how much you agree or disagree with the following statements by placing a number in the box provided. Strongly disagree 1 Disagree 2 Neither agree nor disagree 3 Agree 4 Strongly agree 5 It would be acceptable for me to wear clothes that: ___ display images, photos, texts, or videos. ___ record my environment via photos or videos. ___ produce music, sounds, or speeches. ___ record music, sounds, or speeches around me. ___ produce selected smells. ___ analyze the air (smells, pollution, humidity, temperature). ___ provide a sensation of cold or heat in certain areas. ___ vibrate or provide a feeling of touch in certain areas. Clothes with one or several capacities such as listed in question 1 would be useful: ___ at big events like conferences or forums. ___ during parties. ___ on trips.
___ in potentially dangerous situations. ___ to communicate with disabled people. ___ when meeting new people. If I had garments with one or several capacities such as in question 1, I would like them to: ___ be able to coordinate actions with other clothes. ___ learn from my reactions to their actions. ___ be controlled by some form of built-in artificial intelligence. ___ be controlled by guidelines but get back control whenever I want. ___ be under my full control at any moment. I would agree to use garments that can monitor my physical and mental state (heart rate, blood pressure, body temperature, movements, etc.) to: ___ adapt my environment to my needs (temperature, light, music in the room, etc.). ___ adapt video games depending on my experiences. ___ evaluate my performance during sports training. ___ produce group effects during artistic or sporting events. ___ reveal my emotions to surrounding people. ___ share my feelings with selected persons (like husband/wife), even at a distance. ___ transmit information to emergency services. During a gathering of people using special garments, I would like clothes to hint about: ___ the wearer's personality. ___ the wearer's beliefs. ___ the wearer's mood. ___ the wearer's history. ___ the wearer's belonging to communities. ___ the group's harmony. ___ the group's common history. ___ the group's common centers of interest. ___ relationships between members of the group. ___ the topic of the gathering. I would be ready to wear and use in my everyday life garments incorporating electronics: ___ for personal uses. ___ for professional uses. I would leave my profile (such as name, age, nationality, centers of interest) in free access: ___ on the Internet. ___ to people in the same room as me. ___ to people belonging to my community. ___ to people I met previously.
If you have any remarks or suggestions about the questionnaire or electronic garments, please write them below: Return this questionnaire to Sebastien Duval by e-mail to duval@grad.nii.ac.jp or mail to NII, Hitotsubashi 2-1-2, Chiyoda-ku, 101-8430 Tokyo, Japan
Q: authContext.login() causes “ Refused to display ‘login.microsoftonline.com…’ in a frame because it set ‘X-Frame-Options’ to ‘deny’ ” I’m currently trying to implement my website into Microsoft Teams. For that I’ve made a custom app with App Studio. In that app I have a tab to direct to my website. This website has a button that executes a code in which it tries to authContext.login(), which causes the site to direct to the microsoft login and back to the redirect. In web browsers (Firefox, Chrome, Edge) it works fine. But when I try to use authContext.login() in the Teams Desktop Client I get this error in the DevTools console (Due to security concerns I’ve changed some of the hexcode for this post): Refused to display https://login.microsoftonline.com/common/oauth2/authorize?response_type=id_token&client_id=1904f197-dae8-4740-94f2-e12dee41b451&redirect_uri=https%3A%2F%2Fsomeadress.com%2Freally%2F&state=19484c2c-2aff-463a-9cba-2894af66c09d&client-request-id=91308f53-96a1-4f2b-8ee4-774b2f168c6b&x-client-SKU=Js&x-client-Ver=1.0.15&nonce=5cabfef7-36aa-470a-aaa3-754448369ab4&sso_reload=true' in a frame because it set 'X-Frame-Options' to 'deny'. The website gets displayed until the button with authContext.login() is executed, then it vanishes and the error is thrown. This is a code snippet of my button: <!--- Import packages for authentication information in Teams/Azure ---> <script src="https://secure.aadcdn.microsoftonline-p.com/lib/1.0.15/js/adal.min.js" crossorigin="anonymous"></script> ... function loginO365() { let config = { clientId: "190af997-d6f8-4730-9462-e13dee41d451", redirectUri: "#application.gc_app_rootURL#", // same URL as redirect in error and Azure app cacheLocation: "localStorage", navigateToLoginRequestUrl: true, }; let authContext = new AuthenticationContext(config); let isCallback = authContext.isCallback(window.location.hash); authContext.handleWindowCallback(); authContext.login(); // causes error } My questions: What causes this? 
Why does it not happen in web browsers? How do I fix it? A: Microsoft does not allow its authentication dialog to be opened inside an iframe; that is exactly what the X-Frame-Options: deny response header enforces. The flow works fine in a browser because the redirect happens at the top-level window, but Teams tabs are hosted inside an iframe, which is why you are getting this error. Instead of redirecting inside the tab, use the Microsoft Teams authentication flow for tabs, which opens the login page in a popup window. Here is a link to an auth code sample.
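The popup-based flow can be sketched with the v1 microsoftTeams JavaScript SDK. This is a rough illustration only; auth-start.html is an assumed placeholder page on your own domain that performs the actual authContext.login() at the top-level window:

```js
// Sketch only: open the login page in a popup so it is never framed.
microsoftTeams.initialize();

function loginO365() {
    microsoftTeams.authentication.authenticate({
        url: window.location.origin + "/auth-start.html", // placeholder auth page
        width: 600,
        height: 535,
        successCallback: function (result) {
            // signed in; result is whatever the auth page passed to notifySuccess()
        },
        failureCallback: function (reason) {
            // login failed or was cancelled by the user
        }
    });
}
```

The popup navigates to login.microsoftonline.com at the top level, so the X-Frame-Options header is no longer violated; when done, the auth page calls microsoftTeams.authentication.notifySuccess() to hand the result back to the tab.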
package horizon

import (
	"encoding/base64"
	"encoding/json"
	"testing"

	"github.com/stretchr/testify/assert"

	"github.com/stellar/go/services/horizon/internal/db2/history"
	"github.com/stellar/go/services/horizon/internal/expingest"
	"github.com/stellar/go/services/horizon/internal/test"
	"github.com/stellar/go/xdr"
)

var (
	data1 = xdr.DataEntry{
		AccountId: xdr.MustAddress("GAOQJGUAB7NI7K7I62ORBXMN3J4SSWQUQ7FOEPSDJ322W2HMCNWPHXFB"),
		DataName:  "name1",
		// This also tests if base64 encoding is working as 0 is invalid UTF-8 byte
		DataValue: []byte{0, 1, 2, 3, 4, 5, 6, 7, 8, 9},
	}

	data2 = xdr.DataEntry{
		AccountId: xdr.MustAddress("GAOQJGUAB7NI7K7I62ORBXMN3J4SSWQUQ7FOEPSDJ322W2HMCNWPHXFB"),
		DataName:  "name ",
		DataValue: []byte("it got spaces!"),
	}
)

func TestDataActions_Show(t *testing.T) {
	ht := StartHTTPTestWithoutScenario(t)
	defer ht.Finish()
	test.ResetHorizonDB(t, ht.HorizonDB)
	q := &history.Q{ht.HorizonSession()}

	// Makes StateMiddleware happy
	err := q.UpdateLastLedgerExpIngest(100)
	ht.Assert.NoError(err)
	err = q.UpdateExpIngestVersion(expingest.CurrentVersion)
	ht.Assert.NoError(err)
	_, err = q.InsertLedger(xdr.LedgerHeaderHistoryEntry{
		Header: xdr.LedgerHeader{
			LedgerSeq: 100,
		},
	}, 0, 0, 0, 0, 0)
	ht.Assert.NoError(err)

	rows, err := q.InsertAccountData(data1, 1234)
	assert.NoError(t, err)
	ht.Assert.Equal(int64(1), rows)

	rows, err = q.InsertAccountData(data2, 1235)
	assert.NoError(t, err)
	ht.Assert.Equal(int64(1), rows)

	prefix := "/accounts/GAOQJGUAB7NI7K7I62ORBXMN3J4SSWQUQ7FOEPSDJ322W2HMCNWPHXFB"
	var result map[string]string

	// json
	w := ht.Get(prefix + "/data/name1")
	if ht.Assert.Equal(200, w.Code) {
		err := json.Unmarshal(w.Body.Bytes(), &result)
		ht.Assert.NoError(err)
		decoded, err := base64.StdEncoding.DecodeString(result["value"])
		ht.Assert.NoError(err)
		ht.Assert.Equal([]byte(data1.DataValue), decoded)
	}

	// raw
	w = ht.Get(prefix+"/data/name1", test.RequestHelperRaw)
	if ht.Assert.Equal(200, w.Code) {
		ht.Assert.Equal([]byte(data1.DataValue), w.Body.Bytes())
	}

	// regression: https://github.com/stellar/horizon/issues/325
	// names with special characters do not work
	w = ht.Get(prefix + "/data/name%20")
	if ht.Assert.Equal(200, w.Code) {
		err := json.Unmarshal(w.Body.Bytes(), &result)
		ht.Assert.NoError(err)
		decoded, err := base64.StdEncoding.DecodeString(result["value"])
		ht.Assert.NoError(err)
		ht.Assert.Equal([]byte(data2.DataValue), decoded)
	}

	w = ht.Get(prefix+"/data/name%20", test.RequestHelperRaw)
	if ht.Assert.Equal(200, w.Code) {
		ht.Assert.Equal("it got spaces!", w.Body.String())
	}

	// missing
	w = ht.Get(prefix + "/data/missing")
	ht.Assert.Equal(404, w.Code)

	w = ht.Get(prefix+"/data/missing", test.RequestHelperRaw)
	ht.Assert.Equal(404, w.Code)

	// Too long
	w = ht.Get(prefix+"/data/01234567890123456789012345678901234567890123456789012345678901234567890123456789", test.RequestHelperRaw)
	if ht.Assert.Equal(400, w.Code) {
		ht.Assert.Contains(w.Body.String(), "does not validate as length(1|64)")
	}
}
Bond issued for the city of Stillwater related to the excavation of a street and moving of a building by J. G. Foley. John L. Miller was the principal and James G. Foley and Joseph Wolf signed as sureties on the bond issued November 21, 1887 in Stillwater, Minnesota. Building permit issued for the city of Stillwater, Minnesota, with "Mistake" written across the first page in pencil. Location: South side, Pine Street, Block 38 of Churchill and Nelson. Builder: Miller. General moving permit. Licensed mover: J. L. Miller. "The undersigned hereby applies for a Permit to remove any building within the limits of the City of Stillwater at any and all times during the year ending April 27, 1890, to be moved by the undersigned or by some competent person directed by said undersigned, J. L. Miller." Building permit issued for the city of Stillwater, Minnesota. Location: East side, Main Street, Lot 5, Block 27 of Original City. Owner: Mosier Bus, Builder: G. W. and H. D. Orff. Permit granted on April 13, 1888. Building permit issued for the city of Stillwater, Minnesota. Location: West side, North Main Street, Block 10 of Original Town. Owner: Stillwater Manufacturing Company. Permit granted on March 25, 1916. Building permit issued for the city of Stillwater, Minnesota. Location: North side, East Locust Street, Lot 7 and 8, Block 44 of Original Town. Owner: St. Michaels Church, Builder: O. H. Olsen. Permit granted on June 18, 1913. Building permit issued for the city of Stillwater, Minnesota. Location: North side, Myrtle Street, Lot 11, 12 and 13, Block 21 of City of Stillwater. Owner: Alex Simpson, Builder: Alex Simpson. Permit granted on June 4, 1892 Building permit issued for the city of Stillwater, Minnesota. Location: West side, South Third Street, Lot 1, Block 36 of Original City. Owner: R. J. Welsh, Builder: August Kutz. Permit granted on March 8 , 1889. Building permit issued for the city of Stillwater, Minnesota. 
Location: West side, Owens Street, Lot 11, Block 6 of Greeley and Slaughters. Owner: George E. Munkel, Builder: George F. Roney. Permit granted on March 8, 1889. Building permit issued for the city of Stillwater, Minnesota. Location: South Side, Holcombe and Willard, Lot 7, Block 8 of Churchill, Nelson and Slaughter. Owner: J. Willard, Builder: August Jackson. Permit granted on May 12, 1887.
The subject application is related to subject matter disclosed in the Japanese Patent Application No. Hei 11-186833 filed on Jun. 30, 1999 in Japan, to which the subject application claims priority under the Paris Convention and which is incorporated by reference herein. 1. Field of the Invention The present invention relates generally to a semiconductor device structure, and more specifically to a silicon-on-insulator semiconductor device having a reduced parasitic capacitance bonding pad structure. 2. Description of the Related Art FIG. 1A is a cross-sectional view showing a semiconductor device having a conventional silicon-on-insulator (SOI) structure. A semiconductor device 100 in FIG. 1A adopts an SOI structure 102 that comprises a substrate 10, a buried oxide layer 12, and an active semiconductor layer (device layer) 14. A bonding pad 24, which is formed by opening a hole in a passivation layer 20, is arranged on this SOI structure 102. In the case of FIG. 1A, a capacitance C generated between the bonding pad 24 and the substrate 10 is represented by two series-connected capacitances. That is, as shown in FIG. 1B, the capacitance C between the bonding pad 24 and the substrate 10 consists of series-connected capacitances C16 and C12. The capacitance C16 is generated in a field insulating layer 16 as a capacitance insulating layer between the bonding pad 24 as an upper electrode and the active layer 14 as a lower electrode. The capacitance C12 is generated in the buried oxide layer 12 as the capacitance insulating layer between the active layer 14 as the upper electrode and the substrate 10 as the lower electrode. As apparent from FIG.
1B, the capacitance C can be represented by

1/C = (1/C16) + (1/C12).  (1)

In other words, the capacitance C between the bonding pad 24 and the substrate 10 can be given by

C = (C16 · C12)/(C16 + C12).  (2)

However, the bonding pad formed on the SOI structure 102 in the prior art has the following problems. The first problem, as described above, is that the capacitance C given by Eq. (2) is parasitically generated between the bonding pad 24 and the substrate 10 in FIG. 1A. When an electric signal is input from an external circuit (not shown) into the various semiconductor devices (not shown) formed in the active layer 14 via the bonding pad 24, the parasitic capacitance C connected to the bonding pad 24 is charged and discharged at the same time. This charge/discharge is repeated every time an electric signal is input via the bonding pad 24, so the power consumption of the semiconductor integrated circuit (not shown) formed on the SOI structure 102 increases. Also, when a wire such as a gold wire is bonded to the bonding pad 24, a strong mechanical impact is applied to the field insulating layer 16 formed directly under the bonding pad 24. The second problem is therefore that cracks are produced in the field insulating layer 16. The cracks act as current paths between the bonding pad 24 and the active layer 14, causing leakage currents to flow between neighboring bonding pads 24 via the active layer 14. The third problem is that the electric signal generated by these leakage currents is transmitted to the substrate 10 via the capacitance C12 of the buried oxide layer 12 in FIG. 1B.
This electric signal causes electrical interference in bonding pads and wirings other than the bonding pad 24 and the wiring 18 in FIG. 1A, and in the semiconductor devices in the active layer 14. Malfunction of the semiconductor integrated circuit formed on the SOI structure 102 is then caused by this interference. The present invention has been made to overcome such problems, and it is an object of the present invention to provide a semiconductor device having a reduced parasitic capacitance bonding pad structure, capable of achieving reduced power consumption by reducing a parasitic capacitance. It is another object of the present invention to provide a semiconductor device having a reduced parasitic capacitance bonding pad structure, capable of preventing malfunction of a semiconductor integrated circuit by suppressing a leakage current. It is still another object of the present invention to provide a semiconductor device having a reduced parasitic capacitance bonding pad structure, with high reliability against the mechanical impact of wire bonding. In order to achieve the above object, according to a first aspect of the present invention, there is provided a semiconductor device having a silicon-on-insulator structure, comprising: a substrate; a first insulating layer (buried insulating layer) formed on the substrate; a first conductivity type semiconductor layer (active layer) formed on the buried insulating layer; a second insulating layer (field insulating layer) formed on the active layer; an electrode (bonding pad) formed on a part of the field insulating layer; and a semiconductor region within the semiconductor layer, the semiconductor region having a second conductivity type opposite the first conductivity type, and wherein the semiconductor region is positioned below the electrode. Here, either n-type or p-type may be selected as the first conductivity type.
The second conductivity type is the p-type if the n-type is selected as the first conductivity type; conversely, the second conductivity type is the n-type if the p-type is selected as the first conductivity type. An SOI structure consisting of the substrate, the buried insulating layer, and the active layer may be formed by any one of the SIMOX method, a bonded-wafer method, and the epitaxial growth method. Preferably, the semiconductor region should be brought into an electrically floating state, and the bottom of the semiconductor region should be separated by a predetermined distance from the top of the buried insulating layer so that the two do not directly contact. This is because, if the semiconductor region is electrically connected to other regions or comes into contact with the buried insulating layer, a leakage current is generated via the semiconductor region. According to the first aspect of the present invention, the parasitic capacitance connected to the bonding pad can be reduced. Therefore, it is possible to reduce the capacitance that is charged/discharged every time an electric signal is input/output via the bonding pad. As a result, the power consumption of the semiconductor integrated circuit can be reduced. Also, according to the first aspect of the present invention, a pn junction composed of the first conductivity type active layer and the second conductivity type semiconductor region is formed below the bonding pad. Therefore, even if a leakage current flows via the field insulating layer, diffusion of the leakage current can be prevented because of the presence of the pn junction. In addition, the electrical interference on other bonding pads, the devices, etc. can be reduced by reducing the above leakage current. Thus, stable operation of the semiconductor integrated circuit can be achieved.
According to a second aspect of the present invention, the number of the semiconductor regions formed in the active layer is increased to two in the semiconductor device according to the above first aspect. That is, according to the second aspect of the present invention, there is provided a semiconductor device having a silicon-on-insulator structure, comprising: a substrate; a first insulating layer (buried insulating layer) formed on the substrate; a first conductivity type semiconductor layer (active layer) formed on the buried insulating layer; a second insulating layer (field insulating layer) formed on the active layer; an electrode (bonding pad) formed on a part of the field insulating layer; a first semiconductor region within the active layer, the first semiconductor region having a second conductivity type opposite the first conductivity type, and wherein the first semiconductor region is positioned below the bonding pad; and a second semiconductor region within the first semiconductor region, the second semiconductor region having the first conductivity type, and wherein the second semiconductor region is positioned below the bonding pad. According to the second aspect of the present invention, the advantages obtained by the above first aspect become even more pronounced. In the first and second aspects of the present invention, if the semiconductor region formed in the active layer is extended along the path of the wiring on the field insulating layer, the capacitance generated under the wiring can be reduced simultaneously. As a result, signal propagation over the wiring can be achieved at higher speed.
Other and further objects and features of the present invention will become obvious upon an understanding of the illustrative embodiments about to be described in connection with the accompanying drawings or will be indicated in the appended claims, and various advantages not referred to herein will occur to one skilled in the art upon employing the invention in practice.
The Nationals racing presidents continued their week of Olympic tributes with Friday’s doubleheader at Nationals Park, mixing the predictable with the absurd, and extending Teddy Roosevelt’s winless streak to 496 races. The newly dismantled Miami Marlins arrived in town early for an afternoon makeup game, and the lightly-attended affair included a presidents race first: members of the Nat Pack and the “Secret Service” who escort the presidents around the ball park participated in an Olympic presidents race 4-man relay. Secret Service member Scott took the first leg vs. Nat Packer Terrance before handing things off to Abe Lincoln and Thomas Jefferson, respectively. Lincoln extended the lead, and by the time Nat Packer Jason got the baton to George Washington for the anchor leg, he was way behind, but Teddy’s baton transition was weak and he faded in the home stretch, handing George the win. By the time game 2 began, 32,334 fans had arrived, and the day of presidents race firsts continued as the evening crowd was treated to a 4th-inning spectacle that was worthy of a Fellini movie. In the first-ever presidents race swim meet, the giant-headed Rushmores entered the stadium wearing goggles and swim caps, and advanced around the warning track making swimming motions with their arms in the styles of the 400 Meter Individual Medley– first the butterfly, then the backstroke, the breast stroke, and freestyle. Teddy, of course, tried to “swim” while carrying an inner tube. As if the surreal image of Thomas Jefferson leading the pack of “swimmers” down the first base line wasn’t enough for the Nationals Park crowd, a person in a shark costume then appeared out of the Nationals’ bullpen. With the shark chasing the field, what happened next was inevitable, and Teddy’s humiliation was complete. “And that is the most mutant looking shark I’ve ever seen in my life,” said color commentator F.P. Santangelo on the MASN broadcast. “That is an outfield shark, folks.”
The Zen of Python

What is the Zen of Python? To find out, enter

>>> import this

at the Python interpreter prompt. (This is an Easter egg.) You will see the following:

The Zen of Python, by Tim Peters

Beautiful is better than ugly.
Explicit is better than implicit.
Simple is better than complex.
Complex is better than complicated.
Flat is better than nested.
Sparse is better than dense.
Readability counts.
Special cases aren't special enough to break the rules.
Although practicality beats purity.
Errors should never pass silently.
Unless explicitly silenced.
In the face of ambiguity, refuse the temptation to guess.
There should be one-- and preferably only one --obvious way to do it.
Although that way may not be obvious at first unless you're Dutch.
Now is better than never.
Although never is often better than *right* now.
If the implementation is hard to explain, it's a bad idea.
If the implementation is easy to explain, it may be a good idea.
Namespaces are one honking great idea -- let's do more of those!

A simple main function

Note: Python is usually installed under /usr/local/bin/python, so it is also possible to start with

#!/usr/local/bin/python

/usr/bin/env python searches $PATH for python and runs it.

Checking if a variable is a number (or number string)

Numbers (ints, floats, etc.) have a special __int__ method, so we can simply check if it exists:

def isNumber(x):
    return hasattr(x, '__int__')

This will return True for 5, -5 and 5.0, for instance. But what if we have a string representation? We are going to get False for each of "5", "-5" and "5.0". These are not numbers, they are strings. What if we want to return True for strings representing numbers? We can use the following:

def isNumberOrNumberString(x):
    if isNumber(x):
        return True
    try:
        int(x)
    except ValueError:
        try:
            float(x)
        except ValueError:
            return False
    return True

repr() versus str()

The difference between repr() and str() in Python may not be immediately apparent.
According to the documentation, repr() returns the "official" string representation of an object. "If at all possible, this should look like a valid Python expression that could be used to recreate an object with the same value (given an appropriate environment). If this is not possible, a string of the form <...some useful description...> should be returned." In general, repr(o) should return a string representation of o such that the identity o == eval(repr(o)) holds. eval() takes an (official) string representation of an object and returns a copy of that object constructed from this string representation. On the other hand, str() returns an "informal" string representation of an object. "This differs from repr() in that it does not have to be a valid Python expression: a more convenient or concise representation may be used instead." In general, this representation should be human readable. There is no requirement for the identity o == eval(str(o)) to hold. Let us look at a few examples.

print str("Paul's test string")

prints

Paul's test string

while

print repr("Paul's test string")

prints

"Paul's test string"

The latter is a valid Python expression, the former is not.

print str(1.0 / 3.0)

prints

0.333333333333

while

print repr(1.0 / 3.0)

prints

0.33333333333333331

The latter attempts to give enough decimal figures to enable the value to be reconstructed to maximum precision. It's a bit surprising that

print str([3, "paul's test string", 5.5, "bar", 7, 1.0 / 3.0])

and

print repr([3, "paul's test string", 5.5, "bar", 7, 1.0 / 3.0])

both print

[3, "paul's test string", 5.5, 'bar', 7, 0.33333333333333331]

on ActivePython 2.5.2.2. It looks like str() for lists is implemented by calling repr() iteratively on the elements. (Shouldn't it be calling str()?) Finally, for user-defined classes, repr() calls the __repr__() method, while str() calls the __str__() method.
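Before turning to user-defined classes, the o == eval(repr(o)) identity can be checked quickly with a built-in type. A small sketch using datetime (illustrative only, not from the original text):

```python
import datetime

d = datetime.date(2020, 1, 2)

# repr() yields a valid Python expression...
print(repr(d))             # datetime.date(2020, 1, 2)
# ...so the official representation round-trips through eval():
print(eval(repr(d)) == d)  # True

# str() yields a human-readable form that does not round-trip:
print(str(d))              # 2020-01-02
```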
Here is an implementation of a simple class that provides both __repr__() and __str__() and conforms to the requirements imposed by the documentation:

class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def __eq__(self, other):
        if hasattr(other, "x") and hasattr(other, "y"):
            return (self.x == other.x) and (self.y == other.y)
        else:
            return False

    def __ne__(self, other):
        return not self.__eq__(other)

    def __str__(self):
        return "(%s, %s)" % (str(self.x), str(self.y))

    def __repr__(self):
        return "Point(%s, %s)" % (repr(self.x), repr(self.y))

Thus

pt = Point(3, 5)
print pt
print str(pt)
print repr(pt)
print eval(str(pt)) == pt
print eval(repr(pt)) == pt

prints

(3, 5)
(3, 5)
Point(3, 5)
False
True

The first two lines are identical because print calls __str__ when passed an object as its parameter. Notice that the result of repr(pt) can be used to reconstruct the Point object with eval().

Overridable properties in Python

class Foo(object):
    _a = 7

    def get_a(self):
        return self._a

    def set_a(self, a):
        self._a = a

    A = property(fget=get_a, fset=set_a)

class Bar(Foo):
    _newA = 5

    def get_a(self):
        return self._newA

    def set_a(self, a):
        self._newA = a

f = Foo()
print f.A
b = Bar()
print b.A

If Foo.get_a is overridden by Bar.get_a we would expect to see the output

7
5

But instead we see

7
7

This is because in the line

A = property(fget=get_a, fset=set_a)

the binding occurs quite early: fget and fset are bound to Foo.get_a and Foo.set_a for good. However, Python enables one to create overridable properties. The following implementation does the trick:

class OProperty(object):
    """Based on the emulation of PyProperty_Type() in Objects/descrobject.c"""

In other words, we have pre-filtered tradeSigns and ignored its elements equal to 1. Thus we skipped the 1's and obtained five elements in the resulting tradeDirections, rather than eight.
We could also do this:

tradeDirections = ["Sell" if ts == -1 else "Buy" for ts in tradeSigns]

In this case tradeDirections is set to

['Sell', 'Buy', 'Buy', 'Sell', 'Sell', 'Sell', 'Buy', 'Sell']

perhaps in line with our original intentions. We didn't pre-filter tradeSigns and processed all its elements (thus we get eight elements in the result) but chose to replace the -1's with "Sell" and the 1's with "Buy". In each case we used if but resorted to different syntax.

Filtering one list by another

Suppose you have defined

names = ["Paul", "Alex", "John", "Simon", "Paul", "Michael"]
surnames = ["Smith", "Jones", "Taylor", "Williams", "Brown", "Green"]

and now you want to print out the surnames of all Pauls. This can be achieved by using list comprehensions:

print [surnames[i] for i in range(len(names)) if names[i] == "Paul"]

will produce the output

['Smith', 'Brown']

Checking if an object is a sequence or is iterable

If o is your object, you can use the following check:

if hasattr(o, "__iter__"):
    # ...

The following code

print hasattr(5, "__iter__")
print hasattr([1, 2, 3, 4, 5], "__iter__")
print hasattr([5], "__iter__")
print hasattr((5), "__iter__")
print hasattr((5,), "__iter__")
print hasattr((3, 2), "__iter__")
print hasattr("asdf", "__iter__")

prints

False
True
True
False
True
True
False

(Note that (5) is not a tuple but simply the integer 5 in parentheses, hence the False.)

Implementing functors in Python

Any object with a __call__() method may be called using the function call syntax:

class Scale(object):
    def __init__(self, factor):
        self.factor = factor

    def __call__(self, arg):
        return self.factor * arg

s = Scale(2)
print s(5)

The functor can have more than one argument:

import math

class Pythagoras(object):
    def __init__(self):
        pass

    def __call__(self, arg1, arg2):
        return math.sqrt(arg1 * arg1 + arg2 * arg2)

p = Pythagoras()
print p(3, 4)

Local variables in lambda expressions

We see that x + y is calculated twice in the following lambda expression:

func1 = lambda x, y, z: (x + y + z) / (x + y - z)

Can we compute it once and make it a local variable?
One solution is to use a helper lambda expression:

func2 = lambda x, y, z: (lambda sum=x + y: (sum + z) / (sum - z))()

Now both

print func1(3.0, 5.0, 7.0)

and

print func2(3.0, 5.0, 7.0)

print the same number:

15.0

Instantiating a Python object dynamically by object class name

Use eval:

def forname(modname, classname):
    '''Returns a class of "classname" from module "modname".'''
    module = __import__(modname)
    classobj = getattr(module, classname)
    return classobj

class Foo(object):
    def introduction(self):
        print "I am FOO"

class Bar(object):
    def introduction(self):
        print "I am BAR"

className = "Foo"
o = eval("%s()" % className)
o.introduction()

This will print

I am FOO

If, on the other hand, you set className to "Bar", you will see

I am BAR

Sending the output to STDERR rather than STDOUT

Instead of

print "Hello"

use

import sys
sys.stderr.write("Hello\n")

Making a path, rather than just making a directory

If we try the following

import os
os.mkdir("foo/bar/baz")

while foo/bar does not exist, foo/bar/baz will never be made. Depending on the operating system, we may see something like

Reading a text file backwards

Reading a text file backwards is a relatively common task. Let me explain first what I mean by backwards: you read the file line by line, starting from the last line and progressing towards the first. Why would you need this? Imagine that you have a large CSV (comma separated value) file with numerous records sorted in ascending order by date/time. You want to read the last N records. Using the standard text file input/output machinery you would probably end up reading the entire file, discarding all but the last N records. Extremely wasteful. Chances are you will have more than one such file.

Offsetting a multiline string with spaces

Suppose that you need to offset a multiline string with a given number of spaces.
You can use the regular expression "^" to match the beginning of the line (to match the beginning of each line in a multiline string, you must use the re.MULTILINE modifier):

import re

lineStartRegEx = re.compile("^", re.MULTILINE)

def offsetWithSpaces(s, spaceOffset):
    return lineStartRegEx.sub(" " * spaceOffset, s)

Then

s = "foo\n bar\nbaz"
print offsetWithSpaces(s, 4)

produces

    foo
     bar
    baz

Chomping in Python

Perl users will be familiar with the chomp function which removes the trailing newline character if it's present. They will use it like so:

What is IronPython?

IronPython is an implementation of the Python programming language running under .NET and Silverlight. It supports an interactive console with fully dynamic compilation. It's well integrated with the rest of the .NET Framework and makes all .NET libraries easily available to Python programmers, while maintaining compatibility with the Python language. IronPython is an open source project freely available under the Microsoft Public License.

However, this is not recommended: "It can also be used to restore the actual files to known working file objects in case they have been overwritten with a broken object. However, the preferred way to do this is to explicitly save the previous stream before replacing it, and restore the saved object" (14th October, 2009). Thus our original approach is preferred. However, there is a caveat: "Changing [sys.stdout] doesn't affect the standard I/O streams of processes executed by os.popen(), os.system() or the exec*() family of functions in the os module." Thus in

import sys

originalStdout = sys.stdout
sys.stdout = open("mystdout.txt", "a")
someFunction()
sys.stdout = originalStdout

if inside someFunction() we execute an external process using the aforementioned functions or even use some code from Windows DLLs, that external code will be using the original STDOUT. Our redirect won't work for it.
On Windows, we could use the win32api functions to resolve this problem: mystdout.log will be closed when o goes out of scope, before you get a chance to call someFunction(). So you need to make sure that o doesn't go out of scope before the time is ripe. However, when we redirect the "system" STDOUT and STDERR as described here, the "Python" STDOUT and STDERR won't change. Thus we need to redirect both. If we redirect them to the same file, we need to make sure that we set win32file.FILE_SHARE_READ | win32file.FILE_SHARE_WRITE (or else the file will be locked). Moreover, we can append to the end of file using win32file.SetFilePointer. Putting this all together, we obtain the following: Thus, when you import mymodule.py, the function fooBar will be bound to either fooBar_zlib or fooBar_noZlib depending on whether the module zlib is present on your system or not. As a user of mymodule.py you don't have to know exactly which implementation is being used. The logic which determines which of the two implementations to use is encapsulated in mymodule.py. Define functions conditionally on the operating system A similar idea can be used to define functions conditionally on the operating system. If we are running on Linux or Unix, os.name should be "posix". If we are running on Windows, os.name should be "nt". So we could do the following
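Such a conditional definition might look like the following sketch (the clear_screen name and the clear/cls commands are illustrative assumptions, not from the original):

```python
import os

if os.name == "posix":
    # Linux/Unix: the terminal is cleared with "clear"
    def clear_screen():
        os.system("clear")
else:
    # Windows (os.name == "nt"): the console is cleared with "cls"
    def clear_screen():
        os.system("cls")
```

Callers simply invoke clear_screen(); as with the zlib example above, the platform-specific choice is made once, at import time, and is invisible to users of the module.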
Involvement of Janus kinases in the insulin signaling pathway. The adaptor molecule growth-factor-receptor-bound protein-2 (Grb2) plays a role in insulin action since it links tyrosine phosphorylated IRS-1 and Shc to the guanine-nucleotide-exchange factor, Sos, which initiates the mitogen-activated-protein (MAP) kinase cascade by producing Ras-GTP. Both IRS-1 and Shc are phosphorylated by the insulin-receptor tyrosine kinase. In the present study, we have investigated whether the tyrosine kinases of the Janus kinase family (JAK) could be involved in insulin signaling by acting on Grb2. In fibroblasts over-expressing insulin receptors we observed that two tyrosine-phosphorylated proteins interact with Grb2 and with a mutant of Grb2, which lacks the Src homology 2 (SH2) domain, indicating that these proteins associate with the SH3 domains of Grb2. Further, we found that both JAK1 and JAK2 constitutively associate with Grb2, through interaction with the SH3 domains of Grb2. Finally, insulin appears to induce the tyrosine phosphorylation of JAK1, but does not modify the tyrosine phosphorylation state of JAK2. In conclusion, our results suggest that the JAK proteins could participate in insulin signal transduction, and could therefore constitute an alternative pathway for mediating some of the pleiotropic responses induced by insulin.
480 F.2d 162 AR-CON BUILDING SPECIALTIES, INC., Plaintiff-Appellee,v.FAMCO, INC., et al., Defendants-Appellants. No. 72-3806. United States Court of Appeals,Fifth Circuit. June 25, 1973.Rehearing Denied Aug. 3, 1973. J. Edward Thornton, Mobile, Ala., for defendants-appellants. Charles J. Fleming, Mobile, Ala., for plaintiff-appellee. Before WISDOM, GEWIN and CLARK, Circuit Judges. GEWIN, Circuit Judge: 1 Ar-Con Building Specialties, Inc. (Ar-Con) brought this diversity action in the United States District Court for the Southern District of Alabama seeking to collect a commission it had allegedly earned while acting as sales agent for appellant Famco, Inc. (Famco). Service of process upon Famco, a Mississippi corporation, was accomplished pursuant to one of the Alabama longarm statutes.1 After each side had been afforded a full opportunity to present its case, the district court granted Ar-Con's motion for a directed verdict. It is from this action by the trial court that Famco appeals. For the reasons developed in the remainder of this opinion, we reverse the judgment of the district court and remand the case for a new trial. 2 Famco is a Mississippi corporation engaged in the manufacture of wall panels for use in the construction industry. It employs commissioned sales agents to solicit orders for its products; the agents are independent operators over whom the appellant exercises little control. Upon obtaining an order they submit it to Famco's headquarters in Jackson, Mississippi for acceptance. If an order is approved, Famco then arranges for the goods sold to be delivered to the purchaser. Famco has made several substantial sales to Alabama purchasers in this fashion, and it employs a number of sales agents to solicit business in Alabama. 3 On April 29, 1971 Ar-Con, an Alabama corporation, was hired by Famco to be its exclusive sales representative in certain designated counties of Alabama and Mississippi. 
In the employment contract signed by both parties it was agreed that for each purchase order obtained Ar-Con would earn a commission of ten percent on the gross selling price less freight. The agreement provided that "When Famco is paid in full, your commission is paid in full." But nowhere in the contract was there any discussion of what was to happen with respect to Ar-Con's commission if a sale it arranged was not accepted by Famco or for some other reason was never consummated. 4 Operating under this employment contract, Ar-Con procured a purchaser for some of Famco's wall panels. The Jordan Construction Company (Jordan), a subcontractor working on the construction of the Albert P. Brewer Developmental Center in Mobile, decided to use Famco's product in its work on the project. After several rounds of preliminary negotiations among representatives of Famco, Ar-Con, and Jordan, a purchase agreement was finally signed. According to its terms Famco was to furnish at a fixed price the panels required by Jordan in order to complete work on the project. The panels were guaranteed by Famco to be free from defects for at least seven years. In addition Famco agreed to furnish a material supply bond. During the preliminary negotiations Jordan had requested Famco to agree to obtain, in addition to a material supply bond, a seven year products warranty bond. As initially drafted by Jordan, the purchase agreement required Famco to supply such a bond. But Famco resisted this requirement and ultimately must have prevailed because the final purchase agreement made no mention of a products warranty bond. Jordan was apparently satisfied with nothing more than Famco's own seven year guarantee of its product. The final clause of the purchase agreement did provide, however, that "This agreement is contingent upon approval by the contractor and architect of the bonds, shop drawings and samples." 
5 The sale generated by Ar-Con and agreed upon by Famco and Jordan was never consummated. As yet it has not been determined exactly why the sale was aborted. It is undisputed that the project's architect, upon whose approval the sale was contingent, refused to authorize the use of Famco's panels unless a seven year products warranty bond was furnished. When Famco learned that the architect was insisting upon a products warranty bond, it made efforts to obtain one but was unable to do so. At trial Famco presented evidence in support of its contention that Jordan cancelled the purchase order upon being informed that it was impossible for Famco to obtain the products warranty bond required by the architect. Famco took the position that there was never any question about its ability to obtain the material supply bond specifically required by the purchase agreement. On the other hand Ar-Con contended that upon learning of the architect's insistence on a products warranty bond Famco promised to obtain one and that the contract was not cancelled until it became apparent that Famco could furnish neither a material supply bond nor a products warranty bond. 6 In any event in spite of the failure to complete the sale, Ar-Con demanded its commission from Famco. It took the position that a commission was earned once a ready, willing and able purchaser was found and a purchase agreement reached and that Famco could not defeat the right to a commission by breaching the agreement and failing to complete the sale. When Famco refused to pay on the grounds that under the employment agreement a commission was earned only if the sale was consummated and the purchase price received, this action was initiated. 
7 Before determining whether the district court properly granted a directed verdict in favor of Ar-Con, we must first consider Famco's contention that the district court never acquired personal jurisdiction over it and accordingly erred in denying its motion to quash service of process. Famco was served in accordance with one of Alabama's longarm statutes, Title 7, Sec. 193, Alabama Code 1940 (Recomp.1958). Famco argues that it is not amenable to suit in Alabama because it is not "transacting business" within the state and because the conclusory affidavit filed with the Secretary of State by Ar-Con did not meet the requirements of Title 7, Sec. 193. But Alabama courts have never held that anything more than a conclusory affidavit is required by their long-arm statutes.2 Furthermore they have frequently held that a foreign corporation "transacts business" within the meaning of the long-arm statutes if it solicits orders for its products within the state, either through agents or independent dealers, warrants the fitness of the products it sells, and then arranges for the products sold to be delivered to the purchasers.3 At the minimum Famco's activities in Alabama fall within this definition of "transacting business." In addition the contracts which gave rise to this suit were made in Alabama. In these circumstances we are confident that Famco is amenable to suit in Alabama under Title 7, Sec. 193 and that the district court correctly denied the motion to quash service of process. 8 More meritorious is appellant Famco's contention that the district court erred in granting a directed verdict in favor of Ar-Con. 
By directing a verdict the trial court in effect decided that under the law applicable to this case and the evidence adduced at trial reasonable minds could arrive at no other conclusion but that Ar-Con was entitled to its commission.4 It reached this decision in spite of the fact that the sale upon which Ar-Con bases its claim for a commission was never consummated. Famco argues that the trial court's decision ignores the terms of the employment agreement between the parties. Its position is that under the agreement a commission is earned only if a sale is completed and the purchase price received. In support of its position Famco points to the provision in the contract which states that "When Famco is paid in full, your commission is paid in full." 9 But the quoted provision does nothing more than specify the time of payment once a commission has been earned, i.e., when Famco is paid in full. As indicated earlier neither it nor any other provision in the contract addresses the question of what is to be done about Ar-Con's commission in the event a sale is arranged with a ready, willing and able purchaser but is thwarted by Famco. To answer that question we must turn to the law of the forum state. 10 Ar-Con refers us to a line of cases in which the Alabama courts have been called upon to resolve disputes between real estate brokers and land owners over commissions. Penney v. Speake5 is a typical example. A land owner had assigned a price to each one of several pieces of property he owned and then had authorized a real estate broker to sell them. The broker located a purchaser willing to buy on the owner's terms, but the sale was never consummated because the owner reneged. When the owner refused to pay the commission agreed upon, the broker sued for it.
In affirming a judgment for the broker, the Alabama Supreme Court responded to the owner's contention that a commission was earned only if a sale was completed with these words: 11 While there is a distinction between contracts by which a broker is employed to procure a purchaser for property and contracts by which he is employed to effect a sale or to sell, the rule in this jurisdiction is that brokers employed under the latter type of contract, as well as those employed under the first-mentioned type, are entitled to their compensation where they have procured a purchaser ready, able and willing to perform the terms specified or agreed upon, notwithstanding no actual transfer takes place because of the refusal of the seller to convey.6 12 In Alabama, then, a real estate broker earns his commission when he finds a purchaser ready, willing and able to buy on the owner's terms; at that time the sale is treated as constructively consummated insofar as the broker's right to a commission is concerned. 13 This rule of law is well-established and has been reaffirmed on numerous occasions.7 Furthermore it has been applied to an ordinary contract between a commissioned sales agent and his principal such as the one presently before us. For instance in Alabama Fuel & Iron Co. v. Adams, Rowe & Norman8 a coal manufacturer had employed a sales agent to dispose of the entire output of its coal mines. A dispute arose over the agent's right to commissions on certain sales. In affirming a judgment in favor of the agent, the Alabama Supreme Court quoted the rule applicable to real estate brokers and then said: "We are persuaded the foregoing principle is applicable to the agency contract here considered."9 Similarly in Barber-Greene Co. v. Gould10 a sales agent sought to recover its commission on a sale of goods which the purchaser had returned to the seller because they were defective. 
The Court held that the seller's default in supplying defective goods could not serve to relieve him of the obligation to compensate the agent for arranging the sale in good faith. A careful reading of these cases inexorably leads us to the conclusion that in Alabama a commissioned sales agent is treated no differently than a real estate broker with respect to compensation; once he produces a buyer ready, willing and able to purchase on the seller's terms, he has earned a commission, and the seller cannot deprive him of it by aborting the sale.11 14 Applying the Alabama rule to the facts of the case at bar, we are persuaded that the district court erred in directing a verdict for Ar-Con. The evidence presented at trial was by no means so overwhelming as to admit of no other conclusion but that Ar-Con was entitled to the commission. It is true that the purchase agreement between Jordan and Famco is evidence that Ar-Con had found a ready, willing and able purchaser, but the agreement was contingent upon the approval of the project's architect. Apparently the architect refused to approve the purchase in the absence of a products warranty bond. Although Famco made efforts to obtain such a bond, it was not contractually bound to do so. It had resisted this requirement from the beginning and had successfully prevented its inclusion in the purchase agreement. Famco cannot rightfully be charged with aborting the sale because it was unable to obtain a bond it had never agreed to provide. As long as the architect continued to demand a products warranty bond, Jordan was unable to purchase on Famco's terms, and the sale could not be completed. At this stage of the case the evidence points overwhelmingly to the conclusion that it was the architect's insistence upon a products warranty bond, rather than Famco's inability to provide one, which aborted the sale and deprived Ar-Con of the commission it might have earned. 
15 In reaching its decision, the trial court seems to have been influenced by Ar-Con's argument that Famco was never able to obtain the supply bond it had promised. But the record evidence is inconclusive on whether or not Famco was able to furnish the supply bond. Moreover it is far from clear, considering all of the evidence, that this failure, even assuming that it did occur, thwarted the sale. There is strong evidence that even if a supply bond had been furnished the architect nevertheless would not have approved the sale unless a products warranty bond was also furnished. As long as the architect remained adamant in his insistence on a products warranty bond, there was no purchaser ready, willing, and able to buy on Famco's terms, and Ar-Con had not earned a commission. In these circumstances it would have been more appropriate to direct a verdict in favor of Famco. 16 Ar-Con argues, however, that the architect's insistence on a products warranty bond was no real barrier to the sale because he could have been persuaded to waive the requirement. It suggests that the sale foundered before such a waiver could be obtained because Famco was unable to furnish the supply bond it had definitely promised. Alternatively Ar-Con takes the position that upon learning of the architect's intransigence Famco made an additional promise to obtain the warranty bond demanded. Ar-Con in effect contends that in spite of its earlier resistance Famco eventually succumbed to the purchaser's demands, orally agreed to vary the terms of the purchase agreement, and committed itself to supply a products warranty bond. If any of these contentions are proven,12 then the sale's miscarriage could rightfully be attributed to Famco's failure to keep its commitments, and Ar-Con's claim to a commission would be valid. Although the record before us lends little support to Ar-Con's position, it is not our function to resolve these issues in the first instance. 
The case must be remanded to the district court for a new trial at which Ar-Con will have an opportunity to prove its contentions. If it produces sufficient evidence in support of its contentions to create issues of fact, then the case should be submitted to a jury. Otherwise Ar-Con has no right to a commission and a verdict should be directed in favor of Famco. 17 Reversed and remanded with directions. 1 Title 7, Sec. 193 Alabama Code 1940 (Recomp.1958) 2 New York Times Co. v. Sullivan, 273 Ala. 656, 144 So.2d 25 (1962), rev'd on other grounds, 376 U.S. 254, 84 S.Ct. 710, 11 L.Ed.2d 686 (1963) 3 King Homes, Inc. v. Roberts, 46 Ala. App. 257, 240 So.2d 679, cert. denied 286 Ala. 736, 240 So.2d 689 (1970). See also King & Hatch v. Southern Pipe and Supply Co., 435 F.2d 43 (5th Cir. 1970); Elkhart Engineering Co. v. Dornier Werke, 343 F.2d 861 (5th Cir. 1965); Orange-Crush Grapico Bottling Co. v. Seven-Up Company, 128 F.Supp. 174 (N.D.Ala.1955), and Boyd v. Warren Paint & Color Co., 254 Ala. 687, 49 So. 2d 559 (1950) 4 Boeing Co. v. Shipman, 411 F.2d 365 (5th Cir. 1969) 5 256 Ala. 359, 54 So.2d 709 (1951) 6 Id. at 714 7 Taylor v. Riley, 272 Ala. 690, 133 So.2d 869 (1961). Accord Terry Realty Co. v. Martin, 220 Ala. 282, 124 So. 901 (1929); De Briere v. Yeend Bros. Realty Co., 204 Ala. 647, 86 So. 528 (1920), and Handley v. Schaffer, 177 Ala. 636, 59 So. 286 (1912) 8 216 Ala. 403, 113 So. 265 (1927) 9 Id. at 269 10 215 Ala. 73, 109 So. 364 (1926) 11 Other jurisdictions have fashioned a similar rule. See generally 3 Am.Jur.2d Agency Sec. 250 (1962) and 1 Corbin on Contracts Sec. 50 (1963) 12 With respect to the claim that the contract was varied by a subsequent parol agreement, Ar-Con would have to offer proof in support of its position in accordance with the requirements of the substantive law of Alabama applicable to changing written contracts by subsequent oral agreement. 
We do not make the slightest suggestion as to what principles of Alabama contract law may be applicable upon proof of this contention.
Metabolism of fatty acids and bile acids in plasma is associated with overactive bladder in males: potential biomarkers and targets for novel treatments in a metabolomics analysis. The present study was conducted to identify metabolites using a metabolomics approach and investigate the relationship between these metabolites and urgency as a major symptom of overactive bladder (OAB). In 47 male participants without any apparent neurological disease, OAB was defined as an urgency score on the International Prostate Symptom Score of 2 and higher (OAB group, n = 26), while patients with a score of 1 or 0 were placed in a control group (n = 21). A comprehensive study on plasma metabolites was conducted, and metabolites were compared between the OAB and control groups. Age was significantly higher in the OAB group, while prostate volume did not differ between the groups. A 24-h bladder diary revealed that nocturnal urine volume, 24-h micturition frequency, nocturnal micturition frequency, and the nocturnal index were significantly higher in the OAB group, whereas maximum voided volume was significantly lower in this group. The metabolomics analysis identified 79 metabolites from the plasma of participants. The multivariate analysis showed that increases in the fatty acids (22:1), erucic acid and palmitoleic acid, and a decrease in cholic acid correlated with incidence of male OAB. A decrease in acylcarnitine (18:2)-3 and an increase in cis-11-eicosenoic acid also appeared to be associated with OAB in males. OAB in males may occur through the abnormal metabolism of fatty acids and bile acids. Further studies on these pathways will contribute to the detection of new biomarkers and development of potential targets for novel treatments.
Q: Angular directive set scope watch to disabled button

I want to disable a button based on scope changes. Here is my current directive code:

function disabledButton(){
    return {
        restrict: 'EA',
        priority: 1001,
        scope: {
            isLoading: '=loading'
        },
        link: (scope, element, attrs, isLoading) => {
            scope.$watch('isLoading', () => {
                element.attr('disabled', true);
            });
        }
    }
}

my html:

<button type="submit" class="pull-right" disabled-button>Update</button>

My controller is just a simple:

$scope.loading = true;

But it's not working. The button is not disabled. Any solution? JSfiddle: https://jsfiddle.net/ssuhat/gp6tq0Lt/7/

A: link and compile do not work together. In the directive definition object, if you only define link, that's like shorthand for having an empty compile function with an empty preLink function and your code in the postLink function. As soon as you define compile, link is ignored by angular, because compile should return the linking functions. If you only return one function from compile, then it'll be executed post link.

angular.module('app', [])
    .controller('myctrl', function($scope, $timeout) {
        $scope.loading = true;
        $scope.change = function() {
            $scope.loading = !$scope.loading;
        }
    })
    .directive('disabledButton', function() {
        return {
            restrict: 'AE',
            scope: {
                isLoading: '='
            },
            compile: function() {
                return function(scope, element, attrs) {
                    console.log(element);
                    var loading = scope.isLoading;
                    scope.$watch('isLoading', function(a, b) {
                        element.attr('disabled', a);
                    });
                }
            }
        }
    });

<script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.2.23/angular.min.js"></script>
<div ng-app="app">
    <div ng-controller="myctrl">
        <button ng-click="change()">dsa</button>
        <div class="row">
            <button is-loading="loading" disabled-button>Update</button>
        </div>
    </div>
</div>
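A simpler option worth noting (not part of the original answer): for a plain enable/disable toggle you may not need a custom directive at all, since AngularJS ships with the built-in ng-disabled directive, which watches the given expression and sets the disabled attribute for you. A minimal sketch using the question's $scope.loading flag:

```html
<!-- ng-disabled re-evaluates "loading" on every digest cycle,
     so the button is disabled whenever $scope.loading is truthy -->
<button type="submit" class="pull-right" ng-disabled="loading">Update</button>
```

This sidesteps the compile/link question entirely; a custom directive is only needed when you want behavior beyond what the built-in directives cover.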
Lib Dems reject Vince Cable's proposals to shake up leadership rules

Emilio Casalicchio

The Liberal Democrats have rejected a proposal by Vince Cable to allow non-MPs to become the party leader. Delegates at the Lib Dem spring conference in York also voted against a plan that would have allowed people who pay no membership fees to choose who heads up the movement. However they did agree to set up a ‘registered supporters scheme’ in a bid to entice activists into the party without forcing them to become full members. It comes after the 75-year-old Lib Dem boss announced he would quit as leader of the party in May this year. Sir Vince proposed the rule changes in a bid to revamp the party - hinting that a high-profile pro-EU campaigner such as Gina Miller could become the Lib Dem boss despite not being an MP. He had hoped that such a move could change the fortunes of the struggling movement after taking inspiration from Justin Trudeau in Canada and Emmanuel Macron in France. But conference delegates yesterday voted to reject the proposal: “Should members other than MPs be permitted to stand for party leader?” They also voted against the proposal: “Should registered supporters be permitted to vote for the party leader?” However they said ‘registered supporters’ should be able to join as long as they are not members of other parties, and they said supporters should be able to sit on policy working groups. Today Sir Vince will tell the conference: “Our mission to move from survival to success, from protest back to power, takes place in a world where liberal values are under siege and in retreat.” On Brexit he will say: “Anyone who imagines that getting Theresa May’s proposed Brexit through Parliament – at the third, fourth, fifth time of asking – will bring closure and stability is suffering from self-delusion. “If Brexit is a political Everest, this is the Base Camp.
The brief, vague, woolly, Political Declaration doesn’t tell us where the summit is, let alone how to get there. It promises years and years of frustration and friction.”
The propagation of inverted Neel wall sections in a serial access memory system was proposed by L. J. Schwee in the publication "Proposal On Cross-tie Wall and Bloch-line Propagation In Thin Magnetic Films," IEEE Transactions on Magnetics, MAG 8, No. 3, pages 405-407, September 1972. Such a memory system utilizes a ferromagnetic film of approximately 81% Ni-19% Fe approximately 350 Angstroms (A) thick in which cross-tie walls can be changed to Neel walls and Neel walls can be changed to cross-tie walls by applying appropriate fields. Associated with the cross-tie wall is a section of inverted Neel wall that is bounded by a cross-tie wall on one end and a Bloch-line on the other end. In such a cross-tie wall memory system, information is entered at one end of the serial access memory system by the generation of an inverted Neel wall section, formed by a cross-tie on one side and a Bloch-line on the other, that is representative of a stored binary 1 or of a non-inverted Neel wall section (i.e., the absence of a cross-tie and Bloch-line pair) that is representative of a stored binary 0. Such information is moved or propagated along the cross-tie wall by the successive generation (and then the selective annihilation) of inverted Neel wall sections at successive memory cells along the cross-tie wall. In the D. S. Lo, et al, U.S. Pat. No. 3,906,466 there is disclosed a propagation circuit for the transfer of inverted Neel wall sections at successive memory cells along the cross-tie wall. In the L. J. Schwee U.S. Pat. No. 3,868,659 and in the publication "Cross-tie Memory Simplified by the Use of Serrated Strips," L. J. Schwee, et al, AIP Conference Proceedings, No. 29, 21st Annual Conference on Magnetism and Magnetic Materials, 1975, published April 1976, pages 624-625, and in the publication "Cross-Tie/Bloch-Line Detection," G. J. Cosimini, et al, AIP Conference Proceedings, No. 
3, 23rd Annual Conference on Magnetism and Magnetic Materials, 1978, published March 1978, pages 1828-1830, there have been published some more recent results of the further development of cross-tie wall memory systems. In prior art cross-tie wall memory systems, the magnetic film that functions as the storage medium has the property of uniaxial anisotropy provided by its easy axis induced magnetic fields, which easy axis is generated in the magnetic film during its formation in the vapor deposition process. This easy axis provides a magnetic field induced anisotropy which constrains the generation of the cross-tie wall along and parallel to the easy axis. In the above L. J. Schwee, et al, AIP publication there are proposed serrated strips of Permalloy film, about 350 Angstroms (A) in thickness and 10 microns (.mu.m) in width, which serrated strips are etched from a planar layer of the magnetic material so that the strips are aligned along the easy axis of the film. After an external magnetic field is applied normal to the strip length, i.e., transverse the easy axis of the film, the magnetization along the opposing serrated edges rotates back to the nearest direction that is parallel to the edge. This generates two large domains that are separated by a Neel or cross-tie wall that is formed along the centerline of the strip. Cross-ties are energetically more stable at the necks of the serrated edges while Bloch-lines are energetically more stable in the potential wells between adjacent necks. This serrated strip configuration, because of the contour of the opposing edges of the strip, provides the means whereby the cross-tie, Bloch-line pairs are structured at predetermined memory sections along the strip. 
However, because prior art strips have field induced uniaxial anisotropy imparted during deposition, such strips cannot be utilized to permit the use of nonlinear, i.e., curved, data tracks, which curved data tracks are essential to the configuration of cross-tie wall memory systems of large capacity or of digital logic function capabilities. In the L. H. Johnson, et al, U.S. Pat. No. 4,075,612 there is disclosed a design of the edge contour of a film strip of, e.g., Permalloy film of approximately 350 A in thickness and approximately 10 .mu.m in width. The edge contours are mirror images, one of the other, of asymmetrical, repetitive patterns of rounded edge portions. The edge contour of each opposing pair of rounded edge portions is substantially in alignment with the natural contour of the magnetization that is oriented around a Bloch-line, which Bloch-line is positioned along the cross-tie wall that is oriented along the geometric centerline of the film strip. The neck or narrowest point of the edge contour between adjacent rounded edge portions functions to structure the static or rest position of the associated cross-tie of the cross-tie, Bloch-line pair. In the M. C. Paul, et al, U.S. Pat. No. 4,130,888 there is disclosed a cross-tie wall memory system and in particular a data track therefor that is formed of a strip of magnetic material having substantially zero magnetic field induced anisotropy. The data-track-defining-strip of isotropic material utilizes its shape, i.e., its edge contour induced, anisotropy to constrain the cross-tie wall within the planar contour and along the centerline of the film strip. Accordingly, the cross-tie wall is constrained to follow the path defined by the magnetic film strip which path may be configured into a major loop, or circular data track, configuration for large capacity memory storage. In the E. J. Torok U.S. Pat. Nos. 
4,080,591 and 4,075,613 there is utilized the data-track-defining-strip of isotropic magnetic film of the hereinabove referenced M. C. Paul, et al, patent to form a replicator of and a logic gate for cross-tie, Bloch-line pairs. The replicator is utilized as a magnetic switch or gate to selectively transfer cross-tie, Bloch-line pairs between merging, overlapping data tracks. This permits the configuration of a plurality of continuous data tracks into a major-loop, minor-loop configuration for a large capacity memory system. The logic gate is utilized as a magnetic switch to selectively perform the logic OR function or the logic AND function upon two merging, overlapping data tracks.
This is a guide to improving the launch time of any iOS app. It covers how to analyze your launch time and some strategies we’ve used here at iZotope to make the Spire app launch faster.

First, measure your launch time

As with any optimization, it’s important to profile first so you know for sure where your app is spending its time. This lets you focus on the slowest areas, and gives you a benchmark so you can tell if you’re actually making progress. Before you start, you should remember (as pointed out at 24:44 in this great Apple video) that your app launches much more quickly if the app and its data are still sitting in memory in the kernel (a warm launch). This means that if you want to focus on the worst case, you should measure your app’s launch time immediately after rebooting the iOS device (a cold launch), rather than just relaunching your app.

Launch stages

In most apps, the total launch time that a user experiences, from tapping your app’s icon to actually using the app, includes two major stages:

Pre-main time, which is everything that happens before your application’s entry point in main(). This includes loading dynamic libraries, rebasing and binding symbols, Objective-C runtime setup, and running Objective-C +load methods.

Actual launch code, which includes a lot of Apple code that runs down in UIApplicationMain(), but also everything that you do in your app delegate’s application:willFinishLaunchingWithOptions: and application:didFinishLaunchingWithOptions: methods.

As soon as didFinishLaunchingWithOptions returns, unless you block the main thread with other code, your app’s UI is visible and the user can start using the app.

Measuring pre-main time

To measure pre-main time, edit your scheme and add the DYLD_PRINT_STATISTICS environment variable with a value of 1.
When you run your app, the first thing you’ll see in the logs is something like this:

Total pre-main time: 1.2 seconds (100.0%)
         dylib loading time: 888.01 milliseconds (68.9%)
        rebase/binding time: 108.64 milliseconds (8.4%)
            ObjC setup time:  76.06 milliseconds (5.9%)
           initializer time: 215.06 milliseconds (16.6%)
           slowest intializers [sic]:
             libSystem.B.dylib :  9.35 milliseconds (0.7%)
    libMainThreadChecker.dylib : 31.34 milliseconds (2.4%)
                  AFNetworking : 52.25 milliseconds (4.0%)
                       YourApp : 69.85 milliseconds (5.4%)

Measuring launch code time

There are a number of ways to measure your application’s launch code time, but my favorite is the Time Profiler in Instruments. In Xcode, choose Product > Profile, select Time Profiler, and start recording. Stop as soon as you see your app enter the Foreground – Active state in the Life Cycle lane, and then select the entire time range. Now you can interact with the call tree and see the time that your app spent in each method or function call. Clicking the Call Tree button in the bottom toolbar gives you some useful options, including the most important one for our purposes here: Hide System Libraries, which lets us focus only on the code that we can actually change. (This user guide has detailed instructions on navigating Instruments.) Chances are, if you have a slow launch, your app delegate methods are taking a good chunk of the total time, and you can unfold them to see the heaviest calls that they’re making.

Then improve it

After you have a good idea of where your app is spending its time during launch, try some of the strategies listed here to make it faster. This is nowhere near an exhaustive list, but these are the major techniques that we’ve used successfully on our app.

Improving pre-main time

Load fewer dynamic libraries. This can be one of the longest parts of an app’s total launch time. Apple recommends using only up to six non-system frameworks, but for most modern apps, including ours, this is unrealistic.
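If you just want a quick number without launching Instruments, you can also timestamp the delegate method yourself. This is a rough sketch (not from the original post), and the class layout and log message are illustrative:

```swift
import UIKit

@UIApplicationMain
class AppDelegate: UIResponder, UIApplicationDelegate {
    var window: UIWindow?

    func application(_ application: UIApplication,
                     didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
        let start = CFAbsoluteTimeGetCurrent()
        // ... all of your usual launch work here ...
        let elapsed = CFAbsoluteTimeGetCurrent() - start
        // Wall-clock time spent in this method only
        print("didFinishLaunching took \(elapsed * 1000) ms")
        return true
    }
}
```

Keep in mind this only covers your own launch code; it misses pre-main time and the Apple code that runs inside UIApplicationMain(), which is why the Time Profiler view is still the better tool for the full picture.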
See if you can remove any of the dynamic libraries you’re using by replacing them with static versions or compiling their sources directly. If that isn’t feasible, you still have a few options: We use CocoaPods, and were able to significantly improve our launch time by using a plugin called Amimono, which copies symbols from our pods into the main app executable. If you’re using Carthage or managing dynamic libraries manually, you could try a strategy like this one described by Eric Horacek: convert individual libraries into static frameworks, and then link them in the app through a single dynamic framework. I haven’t tried this personally so I can’t guarantee it’s a good idea.

Use +initialize instead of +load for initialization of Objective-C classes. This defers the code in question until the first message is actually sent to the class, rather than running it during the initial Objective-C runtime configuration. Note that you need to be careful since +initialize will be called by subclasses. To solve that, you can check the calling class as described here, or use dispatch_once() for one-off initialization. In our case, we now use +initialize to create shared objects like dispatch queues with calls like the following:

+ (void)initialize {
    static dispatch_once_t token;
    dispatch_once(&token, ^{
        queue = dispatch_queue_create("name", DISPATCH_QUEUE_SERIAL);
    });
}

Kill dead Objective-C code. Every class you’ve compiled into your app slows down rebase/binding and Objective-C initialization time, so get rid of the ones you aren’t using anymore.

Improving launch code

Be lazy. If you’re doing something during launch that can happen later, do it later. A simple example is waiting to create a particular view controller until the user actually taps a button to present it.
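The subclass caveat above can also be handled by checking which class is actually receiving the message; a small Objective-C sketch (the class name and queue variable are illustrative, mirroring the snippet in the text):

```objc
static dispatch_queue_t queue;

+ (void)initialize {
    // +initialize also fires once for each subclass that doesn't
    // override it, so bail out unless this exact class is the receiver.
    if (self != [MyService class]) {
        return;
    }
    queue = dispatch_queue_create("name", DISPATCH_QUEUE_SERIAL);
}
```

Either guard works; dispatch_once() is the more common idiom when the setup must run exactly once regardless of which class triggers it.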
Swift’s lazy properties make this extremely easy in some cases:

    private lazy var settingsViewController: MySettingsViewController = {
        return MySettingsViewController()
    }()

    private func onSettingsButtonPressed() {
        present(self.settingsViewController, animated: true)
    }

Tiny delays that the user would never notice on a button press add up when there are several happening together during launch. Don’t take this too far, though: if doing something lazily will add a perceptible delay (100 milliseconds, say*) on a user action, it’s better to pay that price up front so that your app feels responsive when the user is interacting with it.

Use background threads. If something doesn’t need to happen on the main thread, offload it onto a background thread so the main thread can forge ahead with showing your UI. Be careful, because a lot of code is not thread safe or is designed to run only on the main thread. But in some cases, this can be as easy as:

    dispatch_async(dispatch_get_global_queue(QOS_CLASS_BACKGROUND, 0), ^{
        /* do something */
    });

Compromise on behavior. At the end of the day, the more that your app is trying to do when it launches, the more slowly it will launch. Negotiate with your team to see if there’s anything user-facing that you’re willing to change or remove to get to a faster launch time. Maybe it’s worth more to your users to have a simple initial UI that loads quickly, rather than one that’s full of smooth gradients, drop shadows, and helpful data, but takes forever to show up.

Sleight-of-hand

This post is focused on making your app actually launch faster, but there’s also a lot of psychological trickery you can do to make it feel like it launches faster.
I won’t spend a huge amount of time on this, but here are a couple of things that helped us on that front:

Instead of using an in-your-face splash screen with a giant logo, which draws attention to the fact that your app is slow to launch, use an empty version of your app’s UI, as recommended by Apple’s Human Interface Guidelines. This can make your app feel both faster and more professional, since this is what Apple’s apps and almost all top apps on the App Store do.

Try showing a temporary UI with a spinner (UIActivityIndicatorView) while you do any initialization that can happen after application:didFinishLaunchingWithOptions: but must happen before the user can use your app. Unlike other UI elements, UIActivityIndicatorView animates on its own thread, so you can still block the main thread with your initialization code during this time.

I hope this post is a helpful starting point. Good luck, and have fun!

* It’s hard to make a general rule for how long a delay can be without being perceived, since it’s dependent on the stimulus and the context. But many sources seem to have settled on or around 100 milliseconds. For example, see this 1968 essay by Robert B. Miller, p. 271: “the delay between depressing the key and the visual feedback should be no more than 0.1 to 0.2 seconds.”
Caustic ingestion in adult patients. Sixteen adult patients who ingested caustic substances were seen from 1977 through 1984. All patients underwent endoscopy to determine the site and severity of burns; injuries ranging from mild hyperemia to severe, penetrating necrosis were detected in each patient. Ten ingestions were intentional, four accidental, and two questionably accidental. Morbidity and mortality were high, especially in patients who ingested caustic materials intentionally. A protocol for treatment with steroids and antibiotics was followed in half the patients studied. Those patients who completed this regimen tended to have moderately severe burns. Caustic ingestion in adults must be viewed as a problem different from that of accidental ingestion in children. Since most adult caustic ingestions are intentional, the injuries are worse, more deaths result, and severe scars causing permanent disability are a frequent outcome.
The eccentric visionary's lake-close abode is on the selling block. A LITTLE PIECE OF TAHOE: If you go to Lake Tahoe every so often, and we hope you do, if only for a drive on your way through to Reno, or a summertime overnight, you'll understand what we mean when we say this: You do equal parts enjoying and plotting. The enjoying? You're looking at the clear alpine water, and the mountains, and the snug little towns, and you're feeling Away From It All. The plotting? You're thinking how you have to find some tucked-away little hut where you can live out your happy days staring at the lake and writing semi-okay poetry. (Or is that just us? Did we say too much?) Turns out, though, that real estate is something of a hot commodity in one of our state's prettiest nooks, and you can buy a little bit of Tahoe-flavored sweetness. Or a lot, if your budget and interests can handle it. How "a lot" do we mean? Well, how about five parcels for $19 million? FORMER OWNER HOWARD HUGHES... is the main name behind this property, which is newly up for sale. Yep, five parcels is some serious land mega-ness, at least 'round Lake Tahoe, and that number is, well. As breathtaking as the lake's Emerald Bay? No, honestly, nothing is more breathtaking than Emerald Bay. But the lakefront property once owned by the go-his-own-way, Spruce Goose-building billionaire certainly is pretty darn high on the wowza scale. It's got five bedrooms and four bathrooms -- we'll call that somewhere between out-sized luxury and cabin-cozy -- and views that make all other views want to try harder. Tempted? Chase International is handling the details, so they're your first stop, Tahoe lover.
Domanik Green, a 14-year-old student at Florida’s Paul R. Smith Middle School, managed to bypass the school’s computer security network using nothing but his own computer skills and gained access to the server containing FCAT (Florida Comprehensive Assessment Test) data. The FCAT is a standardized evaluation administered every year to students in primary and secondary grades (third through eleventh grade). The method the boy used to gain access is still unknown, but he was captured on video after logging in to two of the school’s computers through an administrator account. It is also unclear whether any data was taken from the server or whether the files were damaged in any way, but the act did earn him a meeting with the police and two felony charges: one for using the school’s computer system as an administrator, and one for illegal access. Police wrote in the complaint affidavit that the boy used the administrator account to log in to multiple computers on the school’s network without permission. The official document stated: “One of the computers the defendant accessed, without authorization, was a server containing 2014 FCAT information.” Moreover, the young hacker used his unauthorized privileges to disrupt classroom activities, taking control of teachers’ computers and displaying an image of two males kissing, according to 10 News. According to the police, the boy admitted to performing these acts and acknowledged that he knew he did not have approval to access the systems. Education systems nowadays rely heavily on computers for both classroom and school activities, so one would expect adequate security measures to be in place, enough to withstand kids’ curiosity. This latest incident, in which a 14-year-old was able to access sensitive data, has to be taken seriously as a prompt to improve network protection and login credentials.
Usually, it is the human factor that is to blame for a security breach, rather than a flaw in the computer security itself. It is important to use strong passwords, which means passwords should be long and made up of a mix of different characters. Passwords must also never be left lying around or placed in a visible spot, such as on the desk or on a post-it note. Follow @HackRead
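The "long, with a mix of different characters" advice is easy to put into practice in code. A minimal sketch (this uses Python's standard secrets module purely for illustration; nothing here comes from the article itself):

```python
import secrets
import string


def generate_password(length: int = 16) -> str:
    """Generate a random password drawn from letters, digits, and
    punctuation, using a cryptographically secure random source."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))


# A 20-character password is long enough for most guidance.
password = generate_password(20)
print(password)
```

The secrets module is preferred over random for this job because it is backed by the operating system's secure random source rather than a predictable pseudo-random generator.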
I drink a little, eat a lot, work out even more, and talk too much about all of it.

Another relatively uneventful week, albeit one significantly affected by the aftereffects of the previous weekend’s massive snowfall, which wreaked havoc on the roads and rails and made it impossible for me to get into the office.

Booze

I decided to do a blind tasting on Monday to see whether I could pick out some of the whiskies in my collection by taste alone. Apart from one, I could not. Not terribly surprising given the number of possibilities. More amusing was The Missus’ idea of a small pour—I had already tried each twice before I took this photo!

Food

A Monday snow day felt like pulled pork (recipe), so I got to work on the whole picnic shoulder that I’d bought in advance of the storm. Ordinarily I’d just season and go, but an 11-pound cut was too much even for my six-quart slow cooker. Turns out that disarticulating a joint is hard work, and that’s without needing the ability to put it back together. I’m starting to understand why surgeons are paid so well.

Fitness

The snow might have kept me from the gym for the better part of a week, but it gave me an alternative workout on Monday: I shoveled out a neighbor’s car, moved two massive snow piles, and cleared a few hundred square feet of parking lot. Not only did I get a workout, I helped my little community, and what’s the point of being strong if I can’t put it to practical use? Apart from laps of the Metro parking lot on Monday, running was strictly on the treadmill, and I continue to make progress toward what I consider respectable paces. Patience and determination are paying off. Getting some of my bike fitness back, too. I can handle moderately hard efforts with grace—it’s the top end where I’ve really fallen off. Fortunately, I know how to address that, and I’m cool with pain.

Life

With only three days of work and most of my time off spent dealing with snow, it was a decidedly uneventful week.
I’m pleased to say that I finished installing our new doorway, and The Missus is sufficiently satisfied with the work that I may be given permission to install our new washing machine. Maybe she should see how I handle the drywall repair before making any final decisions. While the doorway was a success, the cabinet refinishing had a setback. Failure to sufficiently prep/prime the surface led to quickly peeling paint, complete removal of which will take significantly longer than doing it properly would have. Oh well, at least now we have an opportunity to reevaluate the color we chose for the main body of the cabinet.

What a week! Holiday-shortened weeks are supposed to be breezy, but this was anything but. I’ll put much of the blame on Virginia’s total lack of preparation for a 1″ snowfall, but more on that later.

Booze

Tuesday was supposed to mark the start of a more-or-less dry spell, as discussed earlier, but Snowzilla 2016 led to my drinking hiatus being put on hiatus. To commemorate the occasion, I decided to crack one of my most recent finds, a septuagenarian bottle of Seagram’s VO. Despite nearly 75 years in a bottle, it’s still quite tasty! I’ve never before had a whiskey in which the nose/smell was so different from the palate/taste. $15 well spent.

Food

Practically nothing to report here. The prospect of being trapped at home for days and hilariously empty shelves simply put a damper on my kitchen ambitions. I threw together a basic chicken and veggies dish on Sunday, but we were otherwise eating deli meats and canned soup.

Fitness

With snow wreaking havoc on the roads, this was a week spent largely on the bike. I’m coming to terms with the fact that I just can’t put out the same power as I could a few months ago. I know I’ll get there eventually; I just need patience and hard work. At least I’m good at one of those. I’ve also come to terms with the fact that I can’t pack on any more mass.
I already blew out the elbow of a suit jacket, and I had to have a seam repaired on a pair of trousers, so unless I want to buy a whole new wardrobe (I don’t), I need to work on maintaining rather than growing. The Missus is no doubt overjoyed. On the positive side, it seems my run fitness is coming back around. A pair of tempo progression runs on the treadmill went much better than expected, with me comfortably holding a sub-7:00 pace for the first time since before I was injured. I don’t put much stock in treadmill paces, even at the prescribed if scientifically invalidated 1% incline, but it’s encouraging nonetheless. Maybe a race is closer than previously thought.

Life

After a delightful date night with The Missus on Wednesday, we were treated to a three-hour traffic clusterfuck thanks to a minimal accumulation of snow and Virginia’s decision not to treat any of its roads. They didn’t even bother to salt I-66, the major East-West artery from DC, which pushed everyone onto side roads and snarled traffic well past midnight. After a minor fender bender that occurred when the driver in front of us slammed on his brakes and slid into a parked car, we eventually circled back to Ballston, parked the car in a garage overnight, and took the Metro home, arriving around 00:30. Lesson: If there is so much as a hint of snow in the forecast, don’t drive. With Snowzilla/SnoWayOut/SNoChanceInHell expected Friday, we stocked the fridge and prepared to hunker down. Thankfully, after over a month of being on the fritz, our refrigerator is working properly again. Two people living out of a “rented” mini fridge was not ideal, and I’m happy to see this season of College Life Redux come to an end. Rather than just spend the weekend drinking, eating, and watching TV, The Missus and I resolved to make some headway on our various home renovation projects. She learned how to stain wood, and I began refinishing a cabinet.
I also made my first attempt at an upcycled lamp shade, but it’s not quite ready for primetime. I think I’ve worked out the kinks in my process, though. More on that next week!

This is the first in a series of entirely self-indulgent posts, wherein I rattle off the (hopefully) somewhat interesting things I did this week. It’s primarily intended to help me remember how I spent my time, perhaps with a bit of self-reflection and pontification thrown in for good measure. It’s also a good way of helping keep to my “one blog post per week” goal for the year. So without further ado, here’s the inaugural Highlight Reel.

Booze

Attended a friend’s whiskey tasting / playoff party on Sunday. Met some new people and drank a few whiskeys that I hadn’t tried before. All told, a fun way to spend a Sunday afternoon. Note to self: ask about the barrel number of that peanut butter Willett. Finally got around to cracking a 1954 Seagram’s VO on Saturday. It’s a bit funky, as one might expect after 50+ years in a bottle, but altogether a good pour. Shame it’s such a small bottle, as it’s hard to sample out that way.

Food

When a friend hosts a bourbon tasting, I feel compelled to bring a dessert that fits the theme. To that end, I made a bourbon tollhouse pie on Sunday morning. It was, to say the least, insanely rich. I barely had half a slice, and it was still chocolate overload. That said, it was very well received, and I’d consider making it again, albeit in a more bite-sized format. The pie was decidedly better than the slow cooker Hawaiian chicken dish that I made on Friday. Even after adding a ton of spices, and even the next day, it was still decidedly bland. If you want a sweet-spicy slow cooker dish that actually has flavor, try this instead. Though our chicken dish was disappointing, our visit to Lil Italian Café was anything but. The notion of an Italian joint that advertises serving halal food was too intriguing to pass up, and I’m really glad we popped in.
Merasi got a cheesesteak that was good enough to sate the occasional craving for a taste of Philly. I tried their spicy, Eastern spin on a chicken cheesesteak and was thoroughly impressed. We’ll be back!

Fitness

After a few days of bad workouts and generally feeling like shit, I was finally getting back to normal this week. 3 x 8 @ 600 lbs on the leg press is a new post-injury PR, so no complaints there. Getting in two 11-mile runs this week at quasi-decent paces also assures me that there is yet hope for my running exploits. The run-specific fitness isn’t coming back quickly, but it’s coming. My bike fitness is still down a bit from where it was before I started working, but that’s coming back nicely, too. I suspect I’ll be grinding out the watts like old times, in no time.

Life

This was generally a quiet week, spent focused on final preparations for my Series 66 exam on Friday morning. I didn’t demolish it like I did the Series 7 a few weeks prior, but I also sort of phoned it in once I was sure I’d gotten enough questions right to pass. With a four day weekend starting as soon as I left the testing center, it was hard to stay motivated. Apart from licensing for work, this week was largely about doing stuff around the house. I hung decorative shelving, frames, and a marquee; framed out a new doorway for the bathroom; moved our new washer into said bathroom; installed our knife block; hung a new curtain rod; and generally put a hurt on my Honey-Do list. With a housewarming party looming, I’m extra motivated to get stuff done. That, and we finally had a weekend home together. I think that’s #3 since we moved in November. Don’t move right before the holidays, kids—it sucks.

For many, the holidays are a typically indulgent period. This year, I was no exception. I ate a bit too much junk, drank a bit too often, and spent a bit more money than I should have. C’est la vie.
I’m not terribly worried about the effects on my body or my bottom line—I generally work out and save enough that I haven’t suffered any obvious consequences—but I don’t like what the past few weeks have done to my discipline. Whereas I can usually pass up having that first cookie, now it’s hard to even turn down a second. I’ve gotten all too accustomed to giving in to my impulses, and that, my friends, is the path to ruin. So until Valentine’s Day, it’s time to rein it in, to work on my willpower. If I have a drink, it will be because I’m out and about, not because it’s 5:00 and I’m pooped. If I have dessert, it will be shared at a nice restaurant, not a Nabisco impulse buy. And no snacking on the sofa, period. Basically, I’ll be a bit boring, but only for a few weeks. When this is through, I expect I’ll once again be able to pass up that second cookie. Unless it’s really good. I mean, I’m not unreasonable.

No two gyms are exactly alike, but some things are true wherever you go. A few of those things:

Incorrectly racked weights. No matter how well the weights are labeled and their proper locations are marked, some asshole will put them back in the wrong place. Clearly, the 25 lb. dumbbells don’t go between the 45s and 55s, and the big plates don’t go on the same peg as the small plates. In a perfect world, someone caught doing this would have the blatantly mis-racked weight dropped on their foot. If I ever own a gym, better read the waiver carefully, because that’s gonna be in the rules.

Broken equipment. There is always at least one piece of equipment that is either completely broken or somehow not working correctly. This is true even if this is the first day the gym has ever been open. As a general rule, it will take management approximately half the lifetime of the universe to actually fix or replace the broken equipment. Note: Most often applies to cardio equipment.
“You still using this?” Unless you’re the only person in the gym, at some point in your workout, someone will always ask if you’re still using some piece of equipment. Dude, my towel is still on it. I’m still on it. Yes, I’m using it. And no, my walking to the water fountain does not mean I’m done.

Never enough benches. “You still using this” is particularly true when it comes to the bench. The bench is the most in-demand piece of equipment in the gym. (Unless you’re at a Planet Fitness, but that’s not really a gym). If S is the number of benches in the gym, and D is the number of dudes looking for a bench, then in any gym at any given time, S < D. Oh, and you can S my D if you think I’m giving up my bench. Wait your turn, bro.

Curling in the squat rack. When a piece of equipment has an exercise in its name, there is only one acceptable use for said equipment. Ipso facto, don’t curl in the fucking squat rack. You don’t see dudes doing shoulder presses at the preacher curl bench, which is where you should be doing your goddamn curls instead of taking up the only squat rack in the whole gym.

Phew, getting a little amped up. Pre-workout must be kicking in. Time to lift!

Roughly speaking, the Body Mass Index (BMI) is a relative measure of body mass to height. Originally developed in the mid-19th century, BMI is an attempt to quantify an individual’s tissue mass and then categorize that person as underweight, normal weight, overweight, or obese based on that value. There is plenty of criticism of BMI as a measure of health. Because this is my blog, I’ll focus on the issue most relevant to me: the total lack of distinction between fat and muscle. As obesity researcher Peter Janiszewski put it, BMI does not differentiate between the Michelin Man and The Terminator. I’m decidedly not the Michelin Man, but BMI says that I’m overweight. Before you call me vain, there are potential ramifications here—insurance companies use BMI as a measure of health when setting premiums.
Oh, and I am vain. And I don’t like being told I’m overweight. Just kidding, I don’t care. Well, maybe a little. But I digress. Despite the criticisms, BMI is apparently useful at a population level for assessing health, the idea being that the average person falls somewhere in the middle of the Bibendum-Schwarzenegger spectrum. And it is pretty easy to calculate, which makes it relatively useful to those who don’t have body fat analyzers, skin calipers, or the neuroses to even consider buying such things. But for those of us who do, I propose a new measure: the Beast Index™.

The Beast Index™ is a measure of BMI relative to body fat percentage. Whereas BMI only measures mass relative to height, and whereas body fat only determines whether one is skinny, the Beast Index™ combines both for an approximate measure of swole-ness. Put simply, the higher your Beast Index™, the more likely you are to be mistaken for a superhero. Scientists are still determining the optimal Beast Index™, but suffice it to say that a measurement around 3.5 is solid—you’re sufficiently swole as to avoid being mistaken for a cancer patient without looking like some sort of science experiment gone awry. As the chart above shows, the 1-2.5 range is the fat part of the bell curve. If you lose some fat, you can move to the right tail…and probably get more tail. It’s worth noting that the Beast Index™ corresponds roughly with the venerated Mazzetti scale as follows:
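The arithmetic behind all this is simple enough to sketch. A tiny illustration in Python: the BMI formula is the standard mass-over-height-squared, but note that the Beast Index™ ratio below is my own guess at the definition, since only the intuition ("BMI relative to body fat percentage") is described, not an exact formula:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Standard Body Mass Index: mass divided by height squared."""
    return weight_kg / height_m ** 2


def beast_index(weight_kg: float, height_m: float, body_fat_pct: float) -> float:
    """Hypothetical Beast Index: BMI relative to body fat percentage.
    The simple ratio used here is an assumption, not the author's formula."""
    return bmi(weight_kg, height_m) / body_fat_pct


# A muscular build: about 86 kg at 1.78 m and 9% body fat.
print(round(bmi(86, 1.78), 1))          # 27.1, "overweight" on paper
print(round(beast_index(86, 1.78, 9), 1))
```

With this reading, a lean, muscular person whose BMI labels them overweight still lands around 3 on the index, which matches the spirit of the Bibendum-Schwarzenegger distinction, even if the exact scaling is invented here.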
Over-expression of cofilin-1 suppressed growth and invasion of cancer cells is associated with up-regulation of let-7 microRNA. Cofilin-1, a non-muscle isoform of actin regulatory protein that belongs to the actin-depolymerizing factor (ADF)/cofilin family is known to affect cancer development. Previously, we found that over-expression of cofilin-1 suppressed the growth and invasion of human non-small cell lung cancer (NSCLC) cells in vitro. In this study, we further investigated whether over-expression of cofilin-1 can suppress tumor growth in vivo, and performed a microRNA array analysis to better understand whether specific microRNA would be involved in this event. The results showed that over-expression of cofilin-1 suppressed NSCLC tumor growth using the xenograft tumor model with the non-invasive reporter gene imaging modalities. Additionally, cell motility and invasion were significantly suppressed by over-expressed cofilin-1, and down-regulation of matrix metalloproteinase (MMPs) -1 and -3 was concomitantly detected. According to the microRNA array analysis, the let-7 family, particularly let-7b and let-7e, were apparently up-regulated among 248 microRNAs that were affected after over-expression of cofilin-1 up to 7 days. Knockdown of let-7b or let-7e using chemical locked nucleic acid (LNA) could recover the growth rate and the invasion of cofilin-1 over-expressing cells. Next, the expression of c-myc, LIN28 and Twist-1 proteins known to regulate let-7 were analyzed in cofilin-1 over-expressing cells, and Twist-1 was significantly suppressed under this condition. Up-regulation of let-7 microRNA by over-expressed cofilin-1 could be eliminated by co-transfected Twist-1 cDNA. Taken together, current data suggest that let-7 microRNA would be involved in over-expression of cofilin-1 mediated tumor suppression in vitro and in vivo.
Hospitalization Cost Model of Pediatric Surgical Treatment of Chiari Type 1 Malformation. To develop a cost model for hospitalization costs of surgery among children with Chiari malformation type 1 (CM-1) and to examine risk factors for increased costs. Data were extracted from the US National Healthcare Cost and Utilization Project 2009 Kids' Inpatient Database. The study cohort was comprised of patients aged 0-20 years who underwent CM-1 surgery. Patient charges were converted to costs by cost-to-charge ratios. Simple and multivariable generalized linear models were used to construct cost models and to determine factors associated with increased hospital costs of CM-1 surgery. A total of 1075 patients were included. Median age was 11 years (IQR 5-16 years). Payers included public (32.9%) and private (61.5%) insurers. Median wage-adjusted cost and length-of-stay for CM-1 surgery were US $13 598 (IQR $10 475-$18 266) and 3 days (IQR 3-4 days). Higher costs were found at freestanding children's hospitals: average incremental-increased cost (AIIC) was US $5155 (95% CI $2067-$8749). Factors most associated with increased hospitalization costs were patients with device-dependent complex chronic conditions (AIIC $20 617, 95% CI $13 721-$29 026) and medical complications (AIIC $13 632, 95% CI $7163-$21 845). Neurologic and neuromuscular, metabolic, gastrointestinal, and other congenital genetic defect complex chronic conditions were also associated with higher hospital costs. This study examined cost drivers for surgery for CM-1; the results may serve as a starting point in informing the development of financial risk models, such as bundled payments or prospective payment systems for these procedures. Beyond financial implications, the study identified specific risk factors associated with increased costs.
Carpet Cleaning Great Mackerel Beach
By Nicole May 10, 2019

Carpet Cleaning Great Mackerel Beach – Same Day Carpet Steam Cleaning Services anywhere in Great Mackerel Beach. Hire Certified Carpet Cleaners. Call 1800 284 036 for a FREE quote!

Thumbs Up Carpet Cleaning Great Mackerel Beach is known for quality carpet cleaning all over Great Mackerel Beach. We are available for emergency and same-day service, and we are contemporary leaders in the carpet cleaning industry. Thumbs Up Carpet Cleaning Great Mackerel Beach started out as a small-team operation over 10 years ago; genuine word of mouth soon brought our company recognition and helped us grow. Since our establishment we have made constant progress and expanded in numbers. We take pride in delivering all carpet cleaning services with our own employees; we do not subcontract to others.

Same Day Professional Carpet Steam Cleaning in Great Mackerel Beach

Our certified and trained professional staff members provide superior carpet cleaning services for each of our clients. Time and again our team has impressed both regular and new clients with first-class cleaning, and we keep improving the standard of our cleaning procedures.

Our Carpet Steam Cleaning Methods

At Thumbs Up Carpet Cleaning Great Mackerel Beach we follow a strict set of carpet cleaning methods. Because every carpet is different, the method applied to it differs as well; many things must be taken into account before choosing the correct carpet cleaning process.

Carpet Cleaning Step 1. The Pre-Inspection Phase
In this phase our team of experts inspects your carpet and notes down every carpet-related problem you report. Beyond the areas of concern you point out, we note every minor area of permanent and light staining. This helps us finalize the cleaning method to be applied to your carpet.

Carpet Steam Cleaning Step 2.
The Pre-Vacuum Phase
If required, we vacuum your carpet to remove dry, bonded, and insoluble soil. Further vacuuming takes place throughout the entire cleaning process; this pre-vacuum is an additional pass before we start working on your carpet.

Carpet Cleaning Step 3. Moving Furniture
For proper carpet cleaning, we need to move the chairs, tables, beds, sofas, and dressers in your home. We use high-quality tabs and foam blocks to protect your furniture from stains and damage, and we wrap other delicate items and keep them in a safer place.

Carpet Cleaning Step 4. Pre-Condition & Pre-Spot Treatment Phase
Heavy-traffic areas in your home are pre-treated for a better cleaning result, using quick-fix soil and spot removal processes.

Carpet Cleaning Step 5. Pre-Grooming Phase
In this phase we use a carpet groomer to pre-groom the carpet, loosening the soiling in heavy-traffic areas.

Carpet Cleaning Step 6. Rinse and Extract Phase
After the soil-loosening procedure, we begin soil extraction, thoroughly rinsing your carpet. Thumbs Up Carpet Cleaning Great Mackerel Beach maintains a level of pressure and heat that does not over-wet your carpet. Our experts also work to improve air quality by removing pollens, dust mites, and pollutants from your carpet.

Carpet Cleaning Step 7. The Neutralization Process
In this step we rinse your carpet at a balanced pH level, which prevents any sticky soiling on the carpet.

Carpet Cleaning Step 8. The Post Cleaning Spot Treatment
Any stain or spot remaining on the carpet receives extra treatment with a special organic spot cleaning solution.

Carpet Cleaning Step 9. The Post Grooming Process
In this step we clean all the odds and ends of the carpet, covering any places missed during the initial cleaning.

Carpet Cleaning Step 10. The Speed Drying Process
Our professionals use high-velocity air movers on your carpet to help it dry quickly.

Carpet Cleaning Step 11.
The Post Inspection Process
Lastly, the Thumbs Up Carpet Cleaning Great Mackerel Beach team walks through your cleaned carpet and inspects the work done. We make every possible effort to make you happy with our service.

Professional Carpet Cleaning Services Great Mackerel Beach

Benefits of Carpet Steam Cleaning Great Mackerel Beach

Why is carpet steam cleaning important? There are a number of benefits. Some of them are below:

Improves Air Quality: After carpet cleaning, people suffering from asthma and allergies find breathing easier.

Prevents Mould Growth: Regular carpet cleaning does not allow moulds to grow on it.

Reduces Dust Mite Infestation: Dust mites are microscopic allergens that are inhaled whenever an infested area is disturbed, so it is better to have steam cleaning done by carpet cleaners, since it exposes dust mites to temperatures they cannot survive.

Prevents Muscle Strain: A plus point of choosing Thumbs Up Carpet Cleaning Great Mackerel Beach is that it makes your life easier and spares you the pain of moving furniture from one place to another.

Other Services of Thumbs Up Carpet Cleaning Great Mackerel Beach

At Thumbs Up Carpet Cleaning Great Mackerel Beach we offer a vast range of cleaning services for our clients in Great Mackerel Beach. We are known for duct cleaning, curtain and blind cleaning, mattress cleaning, tile and grout cleaning, rug cleaning, upholstery cleaning, and many other services. We extend our cleaning services to both residential and commercial establishments. We are available 24/7 for emergency carpet cleaning in Great Mackerel Beach, and Thumbs Up Carpet Cleaner Great Mackerel Beach is available for same day carpet cleaning.

Why Choose Thumbs Up Carpet Cleaning Great Mackerel Beach?

Thumbs Up Carpet Cleaning Great Mackerel Beach believes in providing 100% customer satisfaction. Our team of professionals are skilled and certified cleaners who aim to gain the trust of both old and new clients.
Our help-desk customer care executives work 24x7 to be easily available to our clients. So call us any time for free quotes, and pick our Carpet Cleaning Services. Location: Great Mackerel Beach, NSW, Australia 5 Star Carpet Cleaning “I am writing this review on behalf of my family. My entire family is extremely happy with the carpet cleaning services provided by Thumbs Up Cleaning Sydney. We would give their services a 5 star because we got absolutely professional, quick, and affordable carpet cleaning services. The carpets look as if someone has waved a magic wand over them. Thank you.” About us Thumbsup Cleaning Services have been providing out-of-the-box quality cleaning services to all suburbs for over two decades. We love spoiling our customers by giving them more than they expect from a cleaning service provider. Our goal is to achieve perfection in what we do, and we always aim for excellence. Thumbsup Cleaning Services has been serving clients for more than two decades now, and we are known for reliability, performance, and excellent customer service.
Q: Remove XML root from SSIS script task I am using the script task in an SSIS Control Flow task to save the output from an Execute SQL task to a flat file destination. I tried the following script but did not succeed in getting the desired output. Script:

string content = Dts.Variables["User::DataXML"].Value.ToString().Replace("<ROOT>", "<?xml version=\"1.0\" encoding=\"utf-8\" ?>").Replace("</ROOT>", "");
string filePath = Dts.Variables["User::FilePath"].Value.ToString();
StreamWriter writer = new StreamWriter(filePath);
writer.WriteLine(content);
writer.Close();

Output in XML file: 'Strict' ; Required output:

<?xml version="1.0" encoding="utf-8"?> <importemployee ............> ..... .......

A: Figured out the script. The only change needed is:

string content = "<?xml version=\"1.0\" encoding=\"utf-8\" ?>" + Dts.Variables["User::DataXML"].Value.ToString().Replace("<ROOT>", "").Replace("</ROOT>", "");
Q: Counting the number of lines (csv module) I have a csv file and I want to transform it into a numerical dataset. To do so, I read every line of the file and apply a function that keeps what I want and prints it to another csv file. What I also want to do is count the number of lines that I have read (the number of lines in the original dataset) and the number of errors that have occurred (the original dataset has some bugs and my function will raise errors). Problem: The code I use (see below) returns only half of the exact number of lines. Indeed, when I use it on a file with exactly 1,000,000 lines, nb_lines is only 500,000. And as I want to record the lines that are not "good", I guess that I must record the wrong lines :/

data = csv.reader(open(path1, "rb"), delimiter=';', skipinitialspace=True)
output = csv.writer(open(path2, "wb"))
error = csv.writer(open(path3, "wb"))
nb_error = 0
nb_lines = 0
for row in data:
    nb_lines = nb_lines + 1
    try:
        liste = data.next()
        toprint = function(liste)
        output.writerow(toprint)
    except Exception as e:
        nb_error = nb_error + 1
        badline = [nb_lines, e]
        error.writerow(badline)

What is wrong with my loop? Thanks in advance :) A: You advance your iterator within the loop, for some reason. for row in data already makes row the next line each time through. But then you do liste = data.next() - so you advance the iterator again. That means you skip every other line: it is not just your counter that is wrong, you actually miss out on half the data. You should delete that line, and refer to row rather than liste within the loop.
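Putting the answer's advice together, a corrected version of the loop might look like the sketch below (written for Python 3, with an in-memory demo; `process` is a hypothetical placeholder for the question's `function`, and with real files you would use `open(path, newline='')` instead of `StringIO`):

```python
import csv
import io

def process(row):
    # Placeholder for the question's `function`; raises on malformed rows.
    if len(row) != 2:
        raise ValueError("expected 2 fields, got %d" % len(row))
    return [int(row[0]) * 2, row[1]]

def convert(reader, writer, error_writer):
    """Read each row exactly once, counting total rows and bad rows."""
    nb_lines = 0
    nb_errors = 0
    for row in reader:  # `row` already is the next line; no extra next() call
        nb_lines += 1
        try:
            writer.writerow(process(row))
        except Exception as e:
            nb_errors += 1
            error_writer.writerow([nb_lines, e])
    return nb_lines, nb_errors

# In-memory demo: four rows, one of them malformed.
src = io.StringIO("1;a\n2;b\nbad\n3;c\n")
reader = csv.reader(src, delimiter=';', skipinitialspace=True)
out, err = io.StringIO(), io.StringIO()
nb_lines, nb_errors = convert(reader, csv.writer(out), csv.writer(err))
print(nb_lines, nb_errors)  # 4 1
```

Because the increment happens once per `for` iteration and nothing else advances the iterator, every line is counted, and the `try/except` records the bad line numbers without aborting the run.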
1. Field of the Invention The present invention relates to a support leg designed to support a plurality of floor panels at a predetermined interval from a floor slab, and to a double floor which is supported by these support legs. More particularly, the present invention relates to a floor panel support leg which stably and firmly supports the butted portions of a plurality of floor panels at a low position above the floor slab, and to a double floor which is supported by these support legs. 2. Description of the Related Art In recent years, due to the introduction of a plurality of office automation machines not only in computer rooms, but also in ordinary offices, it has become necessary to arrange several types of communication cables and electric cables on the floor. In conventional offices, communication and power source plug sockets are provided on walls and pillars, and it is necessary to trail the connecting cables on the floor from the plug sockets to the place of equipment installation. Yet, when these cables are exposed on the floor, it is possible that during walking one's foot may inadvertently be caught and pull out a plug of a connecting cable, or there is the danger that a connecting cable may be severed by the pressure of other machinery or by the passage of a push cart, etc. Thus, double floors called "free access floors" have been widely adopted in recently constructed buildings, and particularly in this type of office quarters. This type of double floor is explained by an example shown in FIG. 4. For example, a plurality of quadrangular floor panels 1, in each of which one side is in a range of from 150 to 1000 mm and the thickness is in a range of from 15 to 50 mm, are prepared. A plurality of support legs are provided vertically on a floor slab 2 at intervals from each other approximately corresponding to the length of one side of the floor panel 1.
Then, the floor panels 1 are laid down on the floor slab 2 so that the respective corner portions of four adjacent floor panels 1 are supported by the tip of one support leg 3. By this means, space is created between the floor panels 1 and the floor slab 2, so that communication cables and electric power cables, or interface equipment for the distributors and the equipment employed, can be arranged in this space under the floor. Further, plug sockets can be provided on the floor panels near the installed machinery or apparatus, and the length of cable or the like exposed on the floor can be shortened to the minimum. Moreover, the floor panels 1 can be easily attached and removed, and changes in layout after installation can be adequately coped with. Yet, since the double floor is arranged at a predetermined height from the floor slab in the aforementioned manner, the height from the surface of the floor to the ceiling is lowered in comparison with an ordinary floor design, so that one may feel an oppressive sensation. In the case of newly constructed buildings, it is possible, in consideration of the height of the double floor, to design the height from the floor slab to the ceiling in advance to be sufficiently high so as to remove this feeling of oppression; but in the case where a double floor is installed in an already existing building, this feeling of oppression cannot be removed. Thus, it is desirable to have a low double floor with a height of, for example, 50 mm or less. For this purpose, if the height of the support leg which supports the floor panels is lowered, the height of the double floor also becomes low, so that not only is it possible to remove the feeling of oppression and thereby implement this double floor in existing buildings, but it is also possible to lower the height to the ceiling in newly constructed buildings, thereby contributing to the effective utilization of space.
As shown in FIG. 4, in the case where the support leg 3 is constituted by a lower pedestal 4 which is disposed on the floor slab 2, a screw shaft 5 provided at the center of this lower pedestal 4 so as to constitute a support column, a screw sleeve 6 to which this screw shaft 5 is screwed, and an upper pedestal 7 which has the screw sleeve 6 at its lower side and which supports the floor panels 1 at its top surface, the floor panels 1 can be supported at a low position with the lowered height of the screw shaft 5. Yet, for example, if the height of the support leg 3 is lowered so that the height of the double floor is made to be 50 mm or less, the thread-engagement portion of the screw shaft 5 and the screw sleeve 6 becomes correspondingly shorter, the support of the floor panels 1 becomes unstable, and the strength becomes insufficient. Thus, with the conventional pedestal support method, it has been impossible to make the height of the double floor lower than approximately 60 mm. A double floor has been proposed where a plurality of mats of predetermined thickness are arranged on a floor slab with certain intervals between the adjacent mats; cables are made to pass through these intervals and the respective upper portions of the intervals are closed with a covering material; and a carpet is then laid over the mats and the intervals so as to cover them. In this way, by replacing the support legs and floor panels with the carpet and mats which support the carpet in plane, and by adjusting the thickness of the mats, it is possible to obtain a double floor with a height of 50 mm or less. As another example, a proposal has been made for a support in which a plurality of height-adjustable columns are provided at the underside of a thin tabular body of approximately the same size as a floor panel, forming an individual support unit in which one floor panel is fastened to and supported by one support.
There are no pedestals, and the screw fitting member can be made long, with the result that it is possible to obtain a double floor with a height of 50 mm or less. Among these proposals, however, in the case of the former, since it is necessary to lay mats over the entire floor slab which forms the double floor and to fill the intervals between the mats with cover material, there is a defect that the number of parts becomes very large. Moreover, since the mats are spread over the floor slab, the manufacturing precision of the floor slab directly influences the finished product; it is necessary to give the floor slab an extremely flat finish, and much care must be taken in manufacturing the floor slab. Furthermore, there is the defect that the openings between the mats are only for cables, and air conditioning cannot be conducted under the floor. With regard to the latter proposal, since each floor panel is supported by an independent column, in the case where the level of the floor panels is adjusted, at least the columns of the four corners must be adjusted, and the adjustment is troublesome. Further, it becomes necessary to redo the level adjustment whenever a change is made in the orientation of the floor panels. Furthermore, there is a defect that, when the floor panels are removed at the time of cable installation, the cables near the columns are easily moved, so that, when the floor panels are restored, the cables may become pressed under the columns to make the storage inconvenient, or the cables may be damaged and broken.
--- abstract: 'We consider learning a predictor which is non-discriminatory with respect to a “protected attribute” according to the notion of “equalized odds” proposed by [@hardt2016equality]. We study the problem of learning such a non-discriminatory predictor from a finite training set, both statistically and computationally. We show that a post-hoc correction approach, as suggested by Hardt et al, can be highly suboptimal, present a nearly-optimal statistical procedure, argue that the associated computational problem is intractable, and suggest a second moment relaxation of the non-discrimination definition for which learning is tractable.' bibliography: - 'two\_step\_lwod.bib' nocite: '[@daniely2014average]' title: | ------------------------------------------------------------------------ \ Learning Non-Discriminatory Predictors\ ------------------------------------------------------------------------ --- **Blake Woodworth** [blake@ttic.edu](blake@ttic.edu)\ **Suriya Gunasekar**[suirya@ttic.edu](suirya@ttic.edu)\ **Mesrob I. Ohannessian**[mesrob@ttic.edu](mesrob@ttic.edu)\ **Nathan Srebro** [nati@ttic.edu](nati@ttic.edu)\ [Toyota Technological Institute at Chicago, Chicago, IL 60637, USA]{} Introduction ============ Machine learning algorithms are increasingly deployed in important decision making tasks that affect people’s lives significantly. These tools already appear in domains such as lending, policing, criminal sentencing, and targeted service offerings. In many of these domains, it is morally and legally undesirable to discriminate based on certain “protected attributes” such as race and gender. Even in seemingly innocent applications, such as ad placement and product recommendations, such discrimination might be illegal or detrimental. 
Consequently, there has been abundant public, academic and technical interest in notions of non-discrimination and fairness, and achieving “equal opportunity by design” is a major United States national Big Data challenge, [@WhiteHouse16]. We consider non-discrimination in supervised learning where the goal is to learn a (potentially randomized) predictor $h(X)$ or[^1] $h(X,A)$ for a target quantity $Y$ using features $X$ and a protected attribute $A$, while ensuring non-discrimination with respect to $A$. As an illustrative example, consider a financial institution that wants to predict whether a particular individual will pay back a loan or not, corresponding to $Y = 1$ and $Y = 0$, respectively. The features $X$ could include financial as well as other information, e.g. about education, driving, and housing history, languages spoken, and the number of members in the household, all of which have a potential of being used inappropriately as a surrogate for a protected attribute $A$, such as gender or race. It is important that the predictor for loan repayment not be even implicitly discriminatory with respect to $A$. Recent work has addressed the issue of defining what it means to be non-discriminatory—both in the context of supervised learning e.g. [@dwork2012fairness; @pedreshi2008discrimination; @feldman2015certifying], and otherwise e.g. [@joseph2016fairness; @joseph2016rawlsian]. The particular notion of non-discrimination we consider here is “equalized odds”, recently presented and studied by @hardt2016equality: \[def:non-discrimination\] A possibly randomized predictor ${\widehat}{Y}\!\! = h(X,A)$ for target $Y$ is non-discriminatory with respect to a protected attribute $A$ if ${\widehat}{Y}$ is independent of $A$ conditioned on $Y$. 
Informally, we require that even when the correct label $Y$ provides information about the protected attribute $A$, if we already know $Y$, the prediction ${\widehat}{Y}$ does not provide any [*additional*]{} information about $A$. The definition can also be motivated in terms of incentive structure and of moving the burden of uncertainty from the protected population to the decision maker. See @hardt2016equality for further discussion of the definition, its implications, and comparisons to alternative notions. In a binary prediction task with binary protected attribute, i.e. ${\widehat}{Y},A,Y\in\{0,1\}$, Definition \[def:non-discrimination\] can be qualified in terms of true and false positive rates. Denote the group-conditional true and false positive rates as, $$\label{eq:gammapop} \gamma_{ya}({\widehat}{Y}) := \mathbb{P}( {\widehat}{Y} = 1\ |\ Y = y, A = a ),$$ Then Definition \[def:non-discrimination\] is equivalent to requiring that the class conditional true and false positive rates agree across different groups (different values of $A$): $$\gamma_{00}({\widehat}{Y}) = \gamma_{01}({\widehat}{Y}) \qquad \textrm{and}\qquad \gamma_{10}({\widehat}{Y}) = \gamma_{11}({\widehat}{Y})$$ Returning to the loan example, this definition requires that the percentage of men who are wrongly denied loans even though they would have paid them back must match the corresponding percentage for women, and similarly the percentage of men who are wrongly given loans that they will not pay back must also match the corresponding percentage of women. This does *not* however require that the same percentage of male and female applicants will receive loans. For instance, if women pay back loans with *truly* higher frequency than men, then the predictor would be allowed to deny loans to men more often than women.
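As a concrete illustration (our own sketch, not code from the paper), the group-conditional rates $\gamma_{ya}$ above can be estimated from a sample and the equalized odds condition checked directly; the array names below are hypothetical:

```python
import numpy as np

def group_conditional_rates(y_hat, y, a):
    """Estimate gamma_{ya} = P(Y_hat = 1 | Y = y, A = a) for y, a in {0, 1}."""
    gamma = {}
    for yv in (0, 1):
        for av in (0, 1):
            mask = (y == yv) & (a == av)
            gamma[(yv, av)] = y_hat[mask].mean()
    return gamma

def equalized_odds_gap(y_hat, y, a):
    """Largest gap between groups in the true/false positive rates."""
    g = group_conditional_rates(y_hat, y, a)
    return max(abs(g[(0, 0)] - g[(0, 1)]), abs(g[(1, 0)] - g[(1, 1)]))

# Tiny example: a predictor that just copies A is maximally discriminatory,
# while the perfect predictor Y_hat = Y trivially satisfies equalized odds.
y = np.array([0, 0, 1, 1, 0, 1, 0, 1])
a = np.array([0, 1, 0, 1, 0, 1, 1, 0])
print(equalized_odds_gap(a.copy(), y, a))  # 1.0  (Y_hat = A)
print(equalized_odds_gap(y.copy(), y, a))  # 0.0  (Y_hat = Y)
```

This mirrors the loan discussion above: the check constrains the error rates within each label group, not the overall fraction of positive predictions per group.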
While @hardt2016equality focused on the notion itself and how it behaves on the population, in this work we tackle the problem of how to learn a good non-discriminatory predictor (i.e. satisfying the equalized odds) from a finite training set. We examine this both from a statistical perspective of how to best obtain a predictor from finite data that would be as accurate and non-discriminatory as possible on the population, and from a computational perspective. One possible approach to learning a non-discriminatory predictor is the [*post hoc correction*]{} proposed by @hardt2016equality: first learn a good, possibly discriminatory predictor. Afterwards, this predictor is “corrected” by taking into account $A$ in order to make the predictor non-discriminatory. When $Y$ is binary and the predictors ${\widehat}{Y}$ are real-valued, they show that the unconstrained Bayes optimal least-square regressor can be post hoc corrected to the optimal predictor with respect to the 0-1 loss. In Section \[sec:suboptimalityposthoc\], we consider more carefully the limitations of such a post hoc procedure. In particular, we show that this approach can fail for the 0-1 and hinge losses, even if the Bayes optimal predictor with respect to those losses is learned in the first step. We also show that even when minimizing the squared loss, the approach can fail once the hypothesis class is constrained, as is essential when learning from finite data. From this, we conclude that post hoc correction is not sufficient, and that it is necessary to directly incorporate non-discrimination into the learning process. Turning to learning from finite data, we cannot hope to ensure exact non-discrimination on the population. To this end, in Section \[sec:detection\] we define a notion of approximate non-discrimination, motivate it, and explore its limits by analyzing the statistical problem of detecting whether or not a predictor is at least $\alpha$-discriminatory.
We then turn to the main statistical question: given a finite training set, how can we best learn a predictor that is ensured to be as non-discriminatory as possible (on the population) and competes (in terms of its population loss) with the best non-discriminatory predictor in some given hypothesis class (this is essentially an extension of the notion of agnostic PAC learning with a non-discrimination constraint). In Section \[sec:binary\] we show that an ERM-type procedure, minimizing the training error subject to an empirical non-discrimination constraint, is statistically sub-optimal, and instead we present a statistically optimal (up to constant factors) two-step learning procedure for non-discriminatory binary classification. Unfortunately, learning a non-discriminatory binary classifier is computationally hard, which we prove in Section \[sec:hardness\]. In order to allow tractable training, in Section \[sec:2ndorder\], we present a relaxation of equalized odds, based only on a second-moment condition instead of full conditional independence. We show that under this second moment notion of non-discrimination it is computationally tractable to learn a nearly optimal non-discriminatory linear predictor with respect to a convex loss. Sub-optimality of post hoc correction {#sec:suboptimalityposthoc} ===================================== When the protected attribute $A$ and the target $Y$ are both binary, the post hoc correction algorithm proposed by [@hardt2016equality] can be applied to a binary or real-valued predictor ${\widehat}{Y} \in \mathcal{H}$, deriving a randomized binary predictor that is non-discriminatory. The algorithm is convenient because it requires access only to the joint distribution over $({\widehat}{Y}, A, Y)$ and does not use the features $X$, thus it can be applied retroactively to an already trained predictor. 
Such predictors are formulated using the notion of a derived predictor: \[Definition 4.1 in [@hardt2016equality]\]\[def:derivedpredictor\] A predictor ${\widetilde}{Y}$ is **derived** from a random variable $R$ and protected attribute $A$ if it is a possibly randomized function of $(R,A)$ alone. In particular, ${\widetilde}{Y}$ is independent of $X$ conditioned on $(R,A)$. For binary classification, the optimal post hoc correction ${\widetilde}{Y}$ for a binary or real valued predictor ${\widehat}{Y}\in {\mathbb}{R}$ is simply the non-discriminatory, derived, binary predictor that minimizes the expectation of a loss $\ell$ over binary variables [@hardt2016equality]: $$\begin{aligned} \label{eq:post-hocalgorithm} {\widetilde}{Y} = {\underset{f:\mathbb{R}\times{\left\{0,1\right\}}\mapsto {\left\{0,1\right\}}}{\text{argmin}}}&\ \mathbb{E}\,\ell\left( f({\widehat}{Y}, A), Y \right)\\ \textrm{s.t.}&\quad \gamma_{y0}(f) = \gamma_{y1}(f)\qquad \forall y \in {\left\{0,1\right\}} \end{aligned}$$ Two notable features of the corrected predictor ${\widetilde}{Y}$ are that **a)** it is not constrained to any particular hypothesis class, and **b)** it may be a random function of ${\widehat}{Y}$ and $A$; indeed for many distributions and hypothesis classes there may not even exist a non-constant, deterministic, non-discriminatory predictor. Nevertheless, ${\widetilde}{Y}$ does indirectly depend on the hypothesis class ${\mathcal}{H}$ from which ${\widehat}{Y}$ was learned and possibly a different loss over real valued variables used in training of ${\widehat}{Y}$.
We are interested in comparing the optimality of ${\widetilde}{Y}$ from post hoc correction to the following ${Y^*}$ which is the optimal non-discriminatory predictor in a hypothesis class ${\mathcal}{H}$ under consideration: $$\begin{aligned} {Y^*}= {\underset{h \in \mathcal{H}}{\text{argmin}}}\ \mathbb{E}\,\ell\left( h(X, A), Y \right)\qquad \textrm{s.t.}\quad \ \gamma_{y0}(h) = \gamma_{y1}(h)\qquad \forall y \in {\left\{0,1\right\}} \end{aligned}$$ Ideally, the expected loss of ${\widetilde}{Y}$ would compare favorably against that of $Y^*$. Indeed, [@hardt2016equality] show that when the target $Y$ is binary, if we can first find a predictor ${\widehat}{Y}$ that is exactly or nearly Bayes optimal for the squared loss over an unconstrained hypothesis class, then applying the post hoc correction using the 0-1 loss (i.e. with $\ell = \ell^{01}$ in \[eq:post-hocalgorithm\]) to ${\widehat}{Y}$ will yield a predictor ${\widetilde}{Y}$ that is non-discriminatory and has loss no worse than $Y^*$. This statement can be extended to the case of first finding the optimal *unconstrained* predictor with respect to any *strictly convex* loss, and then using the post hoc correction with the [0-1 loss]{}. Nevertheless, from a practical perspective this approach is very unsatisfying. First, for general distributions, it is impossible to learn the Bayes optimal predictor from finite samples of data. Also, as we will show, the post hoc correction of even the optimal unconstrained predictor with respect to the 0-1 (non-convex) or even hinge (non-strict but convex) losses can have much worse performance than the best non-discriminatory predictor. Moreover, if the hypothesis class is restricted there can also be a gap between the post hoc correction of the optimal predictor in the hypothesis class and the best non-discriminatory predictor, even when optimizing a strictly convex loss function.
In the following example, we see that when the loss function is not strictly convex, the post hoc correction of even the unconstrained Bayes optimal predictor can have poor accuracy: [example]{}[exampleone]{}\[theorem:Step2LowerBound01\] When the hypothesis class is unconstrained, for any $\epsilon \in (0,1/4)$ there exists a distribution $\mathcal{D}_\epsilon$ such that **a)** the optimal non-discriminatory predictor $Y^*$ with respect to the 0-1 loss has loss at most $2\epsilon$ but **b)** for the unrestricted Bayes optimal predictor ${\widehat}{Y}$ trained on 0-1 loss, the post hoc correction of ${\widehat}{Y}$ with 0-1 loss returns a predictor ${\widetilde}{Y}$ with loss at least $0.5$. A similar statement can also be made about predictors trained on hinge loss. For an unconstrained hypothesis class, for any $\epsilon \in (0,1/4)$ and the same distribution $\mathcal{D}_\epsilon$, **a)** the optimal non-discriminatory predictor $Y^*$ with respect to the hinge loss has loss at most $4\epsilon$ but **b)** the post hoc correction of the Bayes optimal unrestricted predictor trained on hinge loss has loss $1$. We construct $\mathcal{D}_\epsilon$ as follows (figure: a graphical model with edges $Y \to X$ and $Y \to A$): $$\label{eq:Depsilon} \begin{aligned} &X,A,Y \in {\left\{0,1\right\}} \qquad\qquad &&\mathbb{P}_{\mathcal{D}_\epsilon}(Y = 1) = \frac{1}{2}\\ &\mathbb{P}_{\mathcal{D}_\epsilon}(A = y\ |\ Y = y) = 1 - \epsilon \qquad &&\mathbb{P}_{\mathcal{D}_\epsilon}(X = y\ |\ Y = y) = 1 - 2\epsilon \end{aligned}$$ Both $X$ and $A$ are highly predictive of $Y$, but $A$ is slightly more so. Therefore, minimizing either the 0-1 or the hinge loss, without regard for non-discrimination, returns ${\widehat}{Y}=A$ and ignores $X$ entirely.
Consequently, $\gamma_{y1}({\widehat}{Y}) = 1$ and $\gamma_{y0}({\widehat}{Y})=0\neq \gamma_{y1}({\widehat}{Y})$ so the Bayes optimal predictor is discriminatory, and the post hoc correction, which is required to be non-discriminatory and derived from ${\widehat}{Y}=A$, is forced to return a constant predictor even though returning $Y^*=X$ would be accurate and non-discriminatory. A more detailed proof is included in Appendix \[appendix:sec2-ex1\]. In the second example, we show that when the hypothesis class is restricted, the correction of the optimal regressor in the class can yield a suboptimal classifier, even with squared loss. [example]{}[exampletwo]{}\[theorem:Step2LowerBoundConvex\] Let $\mathcal{H}$ be the class of linear predictors with $L^1$ norm at most $\frac{1}{2} - 2\epsilon$, for some $\epsilon \in (2/25,1/4)$. There exists a distribution $\mathcal{D}_\epsilon$ such that **a)** the optimal non-discriminatory predictor in $\mathcal{H}$ with respect to the squared loss has square loss at most $\frac{1}{16} +\frac{3\epsilon}{2} + 3\epsilon^2$, but **b)** the post hoc correction of the Bayes optimal square loss regressor in $\mathcal{H}$ returns a constant predictor which has (trivial) square loss of $1/4$. Similarly, for the class $\mathcal{H}$ of sparse linear predictors, for any $\epsilon \in (0,1/4)$, there exists a distribution $\mathcal{D}_\epsilon$ such that **a)** the optimal non-discriminatory predictor in ${\mathcal}{H}$ with respect to the squared loss has square loss at most $2\epsilon - 4\epsilon^2$, but **b)** the post hoc correction of the Bayes optimal squared loss regressor in ${\mathcal}{H}$ again returns a constant predictor which has (trivial) square loss of $1/4$. The distribution $\mathcal{D}_\epsilon$ is the same as was defined in \[eq:Depsilon\].
Again, $A$ is slightly more predictive of $Y$ than $X$, and since the sparsity or the sparsity surrogate $L^1$ norm of the predictor is constrained by the hypothesis class, the Bayes optimal predictor chooses to use just the feature $A$ and ignore $X$. Consequently, the optimal predictor is extremely discriminatory, and the post hoc correction algorithm will return a highly sub-optimal constant predictor which performs no better than chance. Details of the proof are deferred to Appendix \[appendix:sec2-ex2\]. From these examples, it is clear that simply finding the optimal predictor with respect to a particular loss function and hypothesis class and correcting it post hoc can perform very poorly. We conclude that in order to learn a predictor that is simultaneously accurate *and* non-discriminatory in the general case, it is essential to account for non-discrimination during the learning process. Detecting Discrimination in Binary Predictors {#sec:detection} ============================================= In the following sections, we look at tools for integrating non-discrimination into the supervised learning framework. In formulating algorithms for learning non-discriminatory predictors, it is important to consider the non-asymptotic behavior under finite samples. Towards this, one of the first issues to be addressed is that, using finite samples, it is not feasible to ensure, or even verify, that a predictor ${\widehat}{Y}$ satisfies the non-discrimination criterion in Definition \[def:non-discrimination\]. This necessitates defining a notion of approximate non-discrimination which can be computed using finite samples and which asymptotically generalizes to the equalized odds criterion in the population. Let us consider the task of binary classification, where both $A,Y \in {\left\{0,1\right\}}$ and the predictors ${\widehat}{Y}$ output values in ${\left\{0,1\right\}}$.
Recall the definition of the population group-conditional true and false positive rates $\gamma_{ya}({\widehat}{Y}) = \mathbb{P}({\widehat}{Y} = 1\ |\ Y = y, A = a)$ and the fact that non-discrimination is equivalent to satisfying $\gamma_{00} = \gamma_{01}$ and $\gamma_{10} = \gamma_{11}$. For a set of $n$ i.i.d. samples, $S = {\left\{(x_i,a_i,y_i)\right\}}_{i=1}^n\sim{\mathbb}{P}^n(X,A,Y)$, the sample analogue of $\gamma_{ya}$ is defined as follows, $$\gamma^S_{ya}({\widehat}{Y})=\frac{1}{n^S_{ya}}\sum_{i=1}^n{\widehat}{Y}(x_i, a_i)\mathbf{1}(y_i = y, a_i = a)\text{, where }n^S_{ya} = \sum_{i=1}^n \mathbf{1}(y_i = y, a_i = a). \label{eq:gamma_sample}$$ To ensure non-discrimination, we could possibly require $\gamma^S_{y0}=\gamma^S_{y1}$ on a large enough sample $S$, however this is not ideal for two reasons. First, even when $\gamma^S_{y0} = \gamma^S_{y1}$ on $S$, this almost certainly does not ensure that $\gamma_{y0} = \gamma_{y1}$ on the population. For this same reason, it is impossible to be certain that a given predictor is non-discriminatory on the population. Moreover, if $n^S_{y0}\neq n^S_{y1}$, it is typically not feasible to match $\gamma^S_{y0} = \gamma^S_{y1}$ for non-trivial predictors, e.g. if $n^S_{y0} = 2$ and $n^S_{y1} = 3$, then $\gamma^S_{y0} \in {\left\{0, \frac{1}{2}, 1\right\}}$ but $\gamma^S_{y1} \in {\left\{0, \frac{1}{3}, \frac{2}{3}, 1\right\}}$, thus the only predictors with $\gamma^S_{y1}=\gamma^S_{y0}$ would be the ones which are constant conditioned on $Y$, i.e. the constant predictors ${\widehat}{Y}=0$ and ${\widehat}{Y}=1$, or the perfect predictor ${\widehat}{Y}=Y$. For these reasons, we define the following notion of approximate non-discrimination, which *is* possible to ensure on a sample and, when it holds on a sample, generalizes to the population.
\[def:approxnondiscrimination\] A possibly randomized binary predictor ${\widehat}{Y}$ is [$\alpha$-discriminatory]{} with respect to a binary protected attribute $A$ on the population or on a sample $S$ if, respectively, $$\Gamma({\widehat}{Y}) := \max_{y \in {\left\{0,1\right\}}} {\left| \gamma_{y0}({\widehat}{Y}) - \gamma_{y1}({\widehat}{Y}) \right|} \leq \alpha \quad \textrm{or}\quad\Gamma^S({\widehat}{Y}) := \max_{y \in {\left\{0,1\right\}}} {\left| \gamma^S_{y0}({\widehat}{Y}) - \gamma^S_{y1}({\widehat}{Y}) \right|} \leq \alpha.$$ The decision to define approximate non-discrimination in terms of *conditional* rather than joint probabilities is important, particularly in the case that the $a,y$ pairs occur with widely varying frequencies. For example, if approximate non-discrimination were defined in terms of the joint probabilities $P({\widehat}{Y} = {\widehat}{y}, A = a, Y = y)$ and if $P(A = 0, Y = 1) = \alpha / 10$, then a predictor could be “$\alpha$-discriminatory” all while being arbitrarily unfair towards the $A = 0, Y = 1$ population. This issue does not arise when using Definition \[def:approxnondiscrimination\] and it incentivizes collection of sufficient data for minority groups to ensure non-discrimination. For Definition \[def:approxnondiscrimination\], we propose a simple statistical test to test the hypothesis that a given predictor ${\widehat}{Y}$ is at most $\alpha$-discriminatory on the population for some $\alpha > 0$. Let $S = {\left\{(x_i,a_i,y_i)\right\}}_{i=1}^n\sim{\mathbb}{P}^n(X,Y,A)$ denote a set of $n$ i.i.d. samples, and for $y,a\in\{0,1\}$, let ${\text{\normalfont P}}_{ya}={\mathbb}{P}(Y=y,A=a)$. We propose the following test for detecting $\alpha$-discrimination: $$T\left({\widehat}{Y}, S,\alpha\right) = \mathbf{1}\left( \Gamma^S({\widehat}{Y}) > \alpha \right)$$ [lemma]{}[detectiontest]{}\[lem:detectiontest\] Given $n$ i.i.d. 
samples $S$, $\forall \alpha\in(0,1),\delta\in(0,1/2)$, if $n > \frac{16\log{32/\delta}}{\alpha^2\min_{ya}{\text{\normalfont P}}_{ya}}$, then with probability greater than $1-\delta$, $T$ satisfies, $$T\left({\widehat}{Y}, S,\frac{\alpha}{2}\right) = \begin{cases} 0 & \textrm{if } {\widehat}{Y} \textrm{ is 0-discriminatory on population} \\ 1 & \textrm{if } {\widehat}{Y} \textrm{ is at least } \alpha \textrm{-discriminatory on population.} \end{cases}$$ The proof is based on the following concentration result for $\Gamma^S$ and is provided in Appendix \[appendix:detection\].

\[lemma:bin\_step1\] For $\delta\in(0,1/2)$ and a binary predictor $h$, if $n>\frac{8\log{8/\delta}}{\min_{ya}{\text{\normalfont P}}_{ya}}$, then $$\begin{split} &{\mathbb}{P}\left(\left|\Gamma(h)-\Gamma^S(h)\right|> 2\max_{ya}\sqrt{\frac{\log{16/\delta}}{n{\text{\normalfont P}}_{ya}}}\right)\le \delta. \end{split}$$

Computational Intractability of Learning Non-discriminatory Predictors {#sec:hardness}
======================================================================

The proposed procedure for learning non-discriminatory predictors from a finite sample is statistically optimal, but it is clearly computationally intractable for almost any interesting hypothesis class, since the first step involves minimizing the 0-1 loss. As is typically done with intractable learning problems, we therefore look to alternative loss functions and hypothesis classes in order to find a computationally feasible procedure. A natural choice is the hypothesis class of real-valued linear predictors with a convex loss function. In this case, we would like an efficient algorithm for finding a non-discriminatory predictor whose convex loss is approximately as good as that of the best non-discriminatory linear predictor.
However, even in the case of binary $A$ and $Y$ and even with a convex loss, for real-valued predictors $h(x)\in{\mathbb}{R}$, the non-discrimination constraint in Definition \[def:non-discrimination\] is [extremely]{} strong, requiring that the group-conditional true and false positive rates match at *every* threshold. In fact, the mere existence of a non-trivial (i.e. non-constant) linear predictor that is non-discriminatory requires a relatively special distribution. This is the case even when considering a real-valued analogue of $\alpha$-approximate non-discrimination. For binary targets $Y\in\{0,1\}$, one could relax the problem one step further with a less restrictive non-discrimination requirement. Consider the class of linear predictors with a convex loss where only the sign of the predictor need be non-discriminatory. Unfortunately, by a result of [@daniely2015complexity], even this is computationally intractable:

\[thm:hardness\] Let $L^*$ be the hinge loss of the optimal linear predictor whose sign is non-discriminatory. Subject to the assumption that refuting random K-XOR formulas is computationally hard,[^2] the learning problem of finding a possibly randomized function $f$ such that $\mathcal{L}^{\textrm{hinge}}(f) \leq L^* + \epsilon$ and $\textrm{sign}(f)$ is $\alpha$-discriminatory requires exponential time in the worst case for $\epsilon < \frac{1}{8}$ and $\alpha < \frac{1}{8}$.

The proof goes through a reduction from the hardness of improper, agnostic PAC learning of <span style="font-variant:small-caps;">Halfspaces</span>.
Given a distribution $\mathcal{D}$ over $(X,Y)$ and the knowledge that there is a linear predictor which achieves 0-1 loss $\ell^*$ on $\mathcal{D}$, we construct a new distribution ${\widetilde}{\mathcal{D}}$ over $({\widetilde}{X}, {\widetilde}{A}, {\widetilde}{Y})$ such that an approximately non-discriminatory predictor with small hinge loss can be used to make accurate predictions on $\mathcal{D}$, even if it is not a linear function. The distribution ${\widetilde}{\mathcal{D}}$ is identical to the original distribution $\mathcal{D}$ when conditioned on ${\widetilde}{A} = 1$, and is supported on only two points conditioned on ${\widetilde}{A} = 0$. The probabilities of the two points are constructed so that satisfying non-discrimination requires making accurate predictions on the ${\widetilde}{A} = 1$ population, and thus on $\mathcal{D}$. In particular, for parameters $\epsilon,\alpha < \frac{1}{8}$, the predictor will have 0-1 loss at most $\frac{15}{16}\ell^* + \frac{47}{128}$ on $\mathcal{D}$, which is bounded away from $\frac{1}{2}$ when $\ell^* < \frac{1}{10}$. Since @daniely2015complexity proves that finding a predictor with accuracy bounded away from $\frac{1}{2}$ is hard in general, we conclude that the learning problem is computationally hard. See Appendix \[appendix:hardness\] for a complete proof. To summarize: learning a non-discriminatory binary predictor with the 0-1 loss is hard; learning a real-valued linear predictor with respect to a convex loss function is problematic due to the possible non-existence of a non-trivial non-discriminatory linear predictor; and even requiring only that the sign of the linear predictor be non-discriminatory is computationally hard. A more significant relaxation of non-discrimination is therefore required to arrive at a computationally tractable learning problem.
Conclusion
==========

In this work we took the first steps toward a statistical and computational theory of learning non-discriminatory (equalized odds) predictors. We saw that post hoc correction might not be optimal and devised a statistically optimal two-step procedure, after observing that a straightforward ERM-type approach is not sufficient. Computationally, working with binary non-discrimination is essentially as hard as agnostically learning binary predictors, and so we should expect to have to resort to relaxations. We took the first step to this end in Section \[sec:2ndorder\] where we considered a second moment relaxation of non-discrimination which leads to tractable learning. We hope this will not be the final word on learning non-discriminatory predictors and that this work will spur interest in further understanding our relaxation, suggesting other relaxations, and studying other computationally efficient procedures with provable guarantees.

Deferred Proofs from Section \[sec:suboptimalityposthoc\] {#appendix:sec2}
=========================================================

Proof of Example \[theorem:Step2LowerBound01\] {#appendix:sec2-ex1}
----------------------------------------------

We restate the example for convenience: Consider the unconstrained hypothesis class of all (possibly randomized) functions from $(X,A)$ to ${\left\{0,1\right\}}$. Let ${\mathcal}{D}_\epsilon$ be the following distribution over $(X,A,Y)$, with $X,A,Y \in {\left\{0,1\right\}}$: $$\label{eq:lowerbound01distr} {\mathbb{P}}(Y = 1) = 0.5 \qquad\qquad {\mathbb{P}}(A = y\ |\ Y = y) = 1 - \epsilon \qquad\qquad {\mathbb{P}}(X = y\ |\ Y = y) = 1 - 2\epsilon$$ The graphical model representing this distribution is $X \leftarrow Y \rightarrow A$: the target $Y$ has directed edges to $X$ and to $A$. Clearly, $X \perp A\ |\ Y$, so $Y^*=X$ is non-discriminatory and achieves a 0-1 loss of $2\epsilon$.
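As a quick numerical sanity check (our own sketch, not part of the paper; $\epsilon = 0.1$ is an arbitrary choice), one can sample from ${\mathcal}{D}_\epsilon$ and compare the non-discriminatory predictor $Y^* = X$ with the predictor that returns $A$:

```python
# Sample from D_epsilon: P(Y=1)=0.5, P(A=y|Y=y)=1-eps, P(X=y|Y=y)=1-2*eps.
import numpy as np

rng = np.random.default_rng(0)
eps, n = 0.1, 200_000
y = rng.integers(0, 2, size=n)
a = np.where(rng.random(n) < 1 - eps, y, 1 - y)      # A agrees with Y w.p. 1-eps
x = np.where(rng.random(n) < 1 - 2 * eps, y, 1 - y)  # X agrees with Y w.p. 1-2*eps

loss_x = (x != y).mean()  # ~2*eps: the non-discriminatory predictor Y* = X
loss_a = (a != y).mean()  # ~eps: lower loss, but gamma_{y0}=0 and gamma_{y1}=1
gap = max(
    abs(a[(y == yv) & (a == 0)].mean() - a[(y == yv) & (a == 1)].mean())
    for yv in (0, 1)
)  # discrimination gap of the predictor "return A"
print(loss_x, loss_a, gap)
```

The empirical losses concentrate near $2\epsilon$ and $\epsilon$ respectively, while the predictor that returns $A$ has discrimination gap exactly $1$.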
This same predictor achieves hinge loss $4\epsilon$. This predictor, being non-discriminatory, upper bounds the loss of the optimal non-discriminatory predictor with respect to the $0$-$1$ and hinge losses. The optimal predictor with respect to the $0$-$1$ loss, which might be discriminatory, is in the convex hull (i.e., it may be a randomized combination) of the $16$ deterministic mappings from ${\left\{0,1\right\}}\times{\left\{0,1\right\}}$ to ${\left\{0,1\right\}}$. The Bayes optimal predictor with respect to the $0$-$1$ loss is the hypothesis $${\widehat}{h}(x,a) = {\underset{y \in {\left\{0,1\right\}}}{\text{argmax}}}\ \mathbb{P}(Y = y\ |\ X = x, A = a)$$ Given $a \in {\left\{0,1\right\}}$, note that since $\epsilon < 1/4$ $$\begin{aligned} \mathbb{P}(Y = a\ |\ X = a, A = a) &= \frac{1}{1 + \frac{\mathbb{P}(A = a \ |\ Y = 1-a)\mathbb{P}(X = a \ |\ Y = 1-a)}{\mathbb{P}(A = a \ |\ Y = a)\mathbb{P}(X = a \ |\ Y = a)}} \\ &= \frac{1}{1 + \frac{(\epsilon)(2\epsilon)}{(1-\epsilon)(1-2\epsilon)}} > \frac{1}{2}. \end{aligned}$$ Similarly $$\begin{aligned} \mathbb{P}(Y = a\ |\ X = 1-a, A = a) &= \frac{1}{1 + \frac{\mathbb{P}(A = a \ |\ Y = 1-a)\mathbb{P}(X = 1-a \ |\ Y = 1-a)}{\mathbb{P}(A = a \ |\ Y = a)\mathbb{P}(X = 1-a \ |\ Y = a)}} \\ &= \frac{1}{1 + \frac{\epsilon(1-2\epsilon)}{(1-\epsilon)(2\epsilon)}} > \frac{1}{2}. \end{aligned}$$ Therefore, the Bayes optimal predictor is ${\widehat}{h}(X,A) = A$, which is $1$-discriminatory, as $$\mathbb{P}({\widehat}{h}(X,A) = 1\ |\ Y = y, A = 0) = 0 \qquad\textrm{but}\qquad \mathbb{P}({\widehat}{h}(X,A) = 1\ |\ Y = y, A = 1) = 1$$ Consider now the post-hoc correction ${\widetilde}{h}$ of ${\widehat}{h}$.
The best non-discriminatory predictor ${\widetilde}{Y}$ derived from the joint distribution $({\widehat}{h},A,Y) \equiv (A,A,Y)$ is given by the following optimization problem $$\begin{aligned} {\widetilde}{Y} = {\underset{h}{\text{argmin}}}&\ \mathcal{L}(h) \\ \textrm{s.t.}\ \ & \gamma_{y0}(h) = \gamma_{y1}(h) && \text{for }y = 0,1 \\ & \begin{bmatrix} \gamma_{0a}(h) \\ \gamma_{1a}(h) \end{bmatrix} \in \textrm{ConvHull}\left( \begin{bmatrix} 0 \\ 0 \end{bmatrix}, \begin{bmatrix} \gamma_{0a}({\widehat}{h}) \\ \gamma_{1a}({\widehat}{h}) \end{bmatrix}, \begin{bmatrix} \gamma_{0a}(1-{\widehat}{h}) \\ \gamma_{1a}(1-{\widehat}{h}) \end{bmatrix}, \begin{bmatrix} 1 \\ 1 \end{bmatrix} \right)\quad &&\text{for }a = 0,1. \end{aligned}$$ The first constraint requires that the resultant predictor be non-discriminatory. The second requires that the class conditional true positive and false positive rates of the predictor be in the convex hull of the constant 0 predictor, the constant 1 predictor, the predictor ${\widehat}{h}$ and its negative. This constraint is equivalent to requiring that ${\widetilde}{h}$ be derived from ${\widehat}{h}$. Because $\gamma_{y0}({\widehat}{h}) = 0$ and $\gamma_{y1}({\widehat}{h}) = 1$, the second constraint requires the predictor have equal true and false positive rates. As ${\mathbb{P}}(Y = 1) = 0.5$, the $0$-$1$ loss equals $0.5$ for any common value of the true and false positive rates, so ${\mathcal}{L}({\widetilde}{h})=0.5$. Using the same distribution, but with $X,A,Y \in {\left\{-1,1\right\}}$ instead of ${\left\{0,1\right\}}$, the optimal non-discriminatory predictor with respect to the hinge loss is no worse than the predictor which returns $X$, achieving hinge loss $4\epsilon$.
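The forced degradation to loss $0.5$ can be verified numerically. The sketch below (our own illustration; the grid search and the name `best_derived_loss` are hypothetical, not the paper's method) parameterizes derived predictors by $p_{{\widehat}{y},a} = {\mathbb{P}}({\widetilde}{Y}=1\,|\,{\widehat}{h}={\widehat}{y}, A=a)$ and keeps only those satisfying equalized odds:

```python
# Brute-force search over derived predictors of h_hat = A on D_epsilon.
# g[y][a] = gamma_{ya}(h_hat); P[y][a] = P(Y=y, A=a); eps = 0.1 is arbitrary.
import itertools

def derived_gamma(p, g, y, a):
    """gamma_{ya} of the derived predictor with mixing probabilities p."""
    return p[(0, a)] * (1 - g[y][a]) + p[(1, a)] * g[y][a]

def best_derived_loss(g, P, steps=10):
    grid = [i / steps for i in range(steps + 1)]
    best = 1.0
    for v in itertools.product(grid, repeat=4):
        p = {(0, 0): v[0], (1, 0): v[1], (0, 1): v[2], (1, 1): v[3]}
        # keep only (numerically) equalized-odds derived predictors
        if any(abs(derived_gamma(p, g, y, 0) - derived_gamma(p, g, y, 1)) > 1e-9
               for y in (0, 1)):
            continue
        loss = sum(P[0][a] * derived_gamma(p, g, 0, a)
                   + P[1][a] * (1 - derived_gamma(p, g, 1, a)) for a in (0, 1))
        best = min(best, loss)
    return best

eps = 0.1
g = [[0, 1], [0, 1]]  # h_hat = A: gamma_{y0} = 0 and gamma_{y1} = 1 for both y
P = [[0.5 * (1 - eps), 0.5 * eps], [0.5 * eps, 0.5 * (1 - eps)]]
print(best_derived_loss(g, P))
```

Because $\gamma_{y0}({\widehat}{h})=0$ and $\gamma_{y1}({\widehat}{h})=1$, every feasible point has equal true and false positive rates, and the search bottoms out at loss $0.5$: no derived predictor beats chance.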
However, the Bayes optimal predictor with respect to the hinge loss is again the predictor $${\widehat}{h}(x,a) = {\underset{y \in {\left\{-1,1\right\}}}{\text{argmax}}}\ \mathbb{P}(Y = y\ |\ X = x, A = a) = a.$$ By the same line of reasoning, the post hoc correction of ${\widehat}{h}$, which must have identical statistics for $A = -1$ and $A = 1$, is forced to have equal true and false positive rates, and thus the best derived predictor is identically $0$, which has hinge loss 1.

Proof of Example \[theorem:Step2LowerBoundConvex\] {#appendix:sec2-ex2}
--------------------------------------------------

We restate the example for convenience: In this example, we consider the squared loss and the hypothesis class of linear predictors with $L^1$ norm at most $\frac{1}{2} - 2\epsilon$ for some $\epsilon \in (2/25,1/4)$: $$\mathcal{H} = {\left\{w_1X + w_2A + b : {\left| w_1 \right|} + {\left| w_2 \right|} \leq \frac{1}{2} - 2\epsilon\right\}}.$$ With this parameter $\epsilon$, we use the same distribution as in the proof of Example \[theorem:Step2LowerBound01\]: $${\mathbb{P}}(Y = 1) = 0.5 \qquad\qquad {\mathbb{P}}(A = y\ |\ Y = y) = 1 - \epsilon \qquad\qquad {\mathbb{P}}(X = y\ |\ Y = y) = 1 - 2\epsilon$$ Since $X {\protect\mathpalette{\protect\independenT}{\perp}}A \ |\ Y$, any linear function of $X$ only will be non-discriminatory, and $h(X) = \left(\frac{1}{2} - 2\epsilon\right)X + \frac{1}{4} + \epsilon$ has the required $L^1$ norm and achieves squared loss $\frac{1}{16} + \frac{3\epsilon}{2} - 3\epsilon^2$. This predictor, being non-discriminatory, upper bounds the squared loss of the optimal non-discriminatory predictor.
To derive the optimal potentially discriminatory predictor, it will be useful to begin by calculating the covariances between each of the variables: $$\begin{aligned} \mathbb{E}\left[ X \right] = \mathbb{E}\left[ A \right] = \mathbb{E}\left[ Y \right] &= \frac{1}{2} \\ \mathbb{E}\left[ X^2 \right] = \mathbb{E}\left[ A^2 \right] = \mathbb{E}\left[ Y^2 \right] &= \frac{1}{2} \\ \mathbb{E}\left[ XA \right] &= \frac{1}{2} - \frac{3}{2}\epsilon + 2\epsilon^2 \\ \mathbb{E}\left[ XY \right] &= \frac{1-2\epsilon}{2} \\ \mathbb{E}\left[ AY \right] &= \frac{1-\epsilon}{2} \end{aligned}$$ The optimal predictor optimizes $$\begin{aligned} {\widehat}{h} = {\underset{w_1,w_2,b}{\text{argmin}}}&\ \mathbb{E}\left[ (w_1X + w_2A + b - Y)^2\right] \\ \textrm{s.t.}&\quad {\left| w_1 \right|} + {\left| w_2 \right|} \leq \frac{1}{2} - 2\epsilon. \end{aligned}$$ Forming the Lagrangian: $$\begin{aligned} \mathcal{L}(w_1,w_2,b,\lambda) &= \mathbb{E}\left[ (w_1X + w_2A + b - Y)^2\right] + \lambda\left( {\left| w_1 \right|} + {\left| w_2 \right|} - \frac{1}{2} + 2\epsilon \right) \nonumber\\ &= w_1^2 \mathbb{E}\left[ X^2 \right] + w_2^2 \mathbb{E}\left[ A^2 \right] + b^2 + \mathbb{E}\left[ Y^2 \right] + 2w_1w_2\mathbb{E}\left[ XA \right] + 2w_1b\mathbb{E}\left[ X \right] \nonumber\\ &- 2w_1\mathbb{E}\left[ XY \right] + 2w_2b\mathbb{E}\left[ A \right] -2w_2\mathbb{E}\left[ AY \right] -2b\mathbb{E}\left[ Y \right] + \lambda\left( {\left| w_1 \right|} + {\left| w_2 \right|} - \frac{1}{2} + 2\epsilon \right) \nonumber\\ &= \frac{w_1^2 + w_2^2 + 1}{2} + b^2 + w_1w_2(1 - 3\epsilon + 4\epsilon^2) + w_1b - w_1(1-2\epsilon) + w_2b \nonumber\\ &- w_2(1-\epsilon) -b + \lambda\left( {\left| w_1 \right|} + {\left| w_2 \right|} - \frac{1}{2} + 2\epsilon \right).\end{aligned}$$ At the following values: $$w_1 = 0 \qquad\qquad w_2 = \frac{1}{2} - 2\epsilon \qquad\qquad b = \frac{1}{4} + \epsilon \qquad\qquad \lambda = \frac{1}{4}$$ the subdifferential of $\mathcal{L}$ contains $0$ for any $\epsilon
\in (2/25,1/4)$. Looking term by term, we have: $$\begin{aligned} \frac{\partial \ell}{\partial w_1} &= w_1 + w_2(1 - 3\epsilon + 4\epsilon^2) + b - (1-2\epsilon) + \lambda\textrm{sign}(w_1) \\ &= \left(\frac{1}{2} - 2\epsilon\right)(1 - 3\epsilon + 4\epsilon^2) + \frac{1}{4} + \epsilon - (1-2\epsilon) + \frac{1}{4}\textrm{sign}(0) \\ &= \frac{1}{4}\textrm{sign}(0) - 8\epsilon^3 + 8\epsilon^2 - \frac{\epsilon}{2} - \frac{1}{4},\end{aligned}$$ where $\textrm{sign}(0)$ is an arbitrary value in $[-1,1]$, which is the subdifferential of $f(z) = {\left| z \right|}$ at $0$. For any $\epsilon \in (2/25,1/4)$ $${\left| - 8\epsilon^3 + 8\epsilon^2 - \frac{\epsilon}{2} - \frac{1}{4} \right|} < \frac{1}{4} \implies 0 \in \frac{\partial \ell}{\partial w_1}$$ Furthermore, $$\begin{split} \frac{\partial \ell}{\partial w_2} &= w_1(1 - 3\epsilon + 4\epsilon^2) + w_2 + b - (1-\epsilon) + \lambda\textrm{sign}(w_2) \\ &= \frac{1}{2} - 2\epsilon + \frac{1}{4} + \epsilon - (1-\epsilon) + \frac{1}{4}= 0, \end{split}$$ and $$\begin{split} \frac{\partial \ell}{\partial b} &= 2b + w_1 + w_2 - 1 \\ &= \frac{1}{2} + 2\epsilon + \frac{1}{2} - 2\epsilon - 1 = 0. \end{split}$$ This proves that ${\widehat}{h}(X,A) = \left( \frac{1}{2} - 2\epsilon \right)A + \frac{1}{4} + \epsilon$ is the Bayes optimal predictor in ${\mathcal}{H}$ with respect to the squared loss. Furthermore, the random variable ${\widehat}{h}(X,A)$ is supported on only two points: $\frac{1}{4} + \epsilon$ when $A = 0$ and $\frac{3}{4} - \epsilon$ when $A = 1$. It is clear that ${\widehat}{h}$ is not independent of $A$ conditioned on $Y$. Since ${\widehat}{h}$ is a deterministic function of $A$, the post hoc correction ${\widetilde}{h}$, which must be independent of $A$ conditioned on $Y$, is forced to be independent of $A$, and consequently of $Y$. Thus, ${\widetilde}{h} \equiv 0.5$ is the best possible derived predictor, achieving squared loss $1/4$.
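The stationarity claims above are easy to verify numerically; the following sketch (our own check, not from the paper) evaluates the three (sub)gradient expressions at the proposed point for several values of $\epsilon$ in $(2/25, 1/4)$:

```python
# Check the stationarity conditions at w1=0, w2=1/2-2e, b=1/4+e, lambda=1/4.
for eps in (0.09, 0.12, 0.18, 0.24):
    w2, b, lam = 0.5 - 2 * eps, 0.25 + eps, 0.25
    # d/dw1 without the lam*sign(0) term; sign(0) may be any value in [-1, 1],
    # so 0 lies in the subdifferential iff |g1| < lam
    g1 = w2 * (1 - 3 * eps + 4 * eps**2) + b - (1 - 2 * eps)
    g2 = w2 + b - (1 - eps) + lam   # d/dw2, using sign(w2) = +1
    g3 = 2 * b + 0 + w2 - 1         # d/db, with w1 = 0
    assert abs(g1) < lam and abs(g2) < 1e-12 and abs(g3) < 1e-12
    # g1 also matches the closed form -8*eps**3 + 8*eps**2 - eps/2 - 1/4
    assert abs(g1 - (-8 * eps**3 + 8 * eps**2 - eps / 2 - 0.25)) < 1e-12
print("stationarity verified")
```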
Considering the class of 1-sparse linear predictors, the predictor $h(X,A) = (1-4\epsilon)X + 2\epsilon$, being conditionally independent of $A$ given $Y$, is non-discriminatory and achieves squared loss $2\epsilon - 4\epsilon^2$, upper bounding the loss of the optimal non-discriminatory 1-sparse linear predictor. Without regard for non-discrimination, the optimal hypothesis in the class with respect to the squared loss is ${\widehat}{h}(X,A) = (1-2\epsilon)A + \epsilon$. The post hoc correction of this predictor suffers from the same issue as in the bounded norm case, resulting in a predictor that can have squared loss no better than $1/4$.

Deferred Proofs From Section \[sec:detection\] {#appendix:detection}
==============================================

Recall the notation $\Gamma(h)=\max_y|\gamma_{y0}(h)-\gamma_{y1}(h)|$ and $\Gamma^S(h)=\max_y|\gamma^S_{y0}(h)-\gamma^S_{y1}(h)|$ from Definition \[def:approxnondiscrimination\]. To avoid clutter, we sometimes drop the dependence on $h$ for $\gamma_{ya}$ when $h$ is evident from the context. Also, recall that $S=\{(x_i,y_i,a_i):i\in[n]\}\sim{\mathbb}{P}^n(X,Y,A)$ and ${\text{\normalfont P}}_{ya}={\mathbb}{P}(Y=y,A=a)$.\

Proof of Lemma \[lem:detectiontest\]
------------------------------------

Recall that ${T({\widehat}{Y}, S,\alpha) = \mathbf{1}\left( \Gamma^S({\widehat}{Y}) > \alpha \right)}$. Let $\alpha_n>0$ be any parameter chosen to satisfy, $$2\max_{ya}\sqrt{\frac{\log{32/\delta}}{n{\text{\normalfont P}}_{ya}}}<\alpha_n<\alpha-2\max_{ya}\sqrt{\frac{\log{32/\delta}}{n{\text{\normalfont P}}_{ya}}}.\label{eq:alphn}$$ Then the following results readily follow from Lemma \[lemma:bin\_step1\]. If ${\widehat}{Y}$ is non-discriminatory, i.e.
$\Gamma({\widehat}{Y})=0$, then $$\begin{aligned} \mathbb{P}\left( T({\widehat}{Y}, S,\alpha_n) = 1 \right) &= \mathbb{P}( \Gamma^S({\widehat}{Y}) > \alpha_n ) \leq \mathbb{P}\bigg( \Gamma^S({\widehat}{Y})> \Gamma({\widehat}{Y})+ 2\max_{ya}\sqrt{\frac{\log{32/\delta}}{n{\text{\normalfont P}}_{ya}}}\bigg)\le\frac{\delta}{2}. \end{aligned}$$ Similarly, suppose ${\widehat}{Y}$ is at least $\alpha$-discriminatory on the population, i.e. $\Gamma({\widehat}{Y})\ge \alpha$; then $$\begin{aligned} \mathbb{P}\left( T({\widehat}{Y}, S,\alpha_n) = 0 \right) &= \mathbb{P}\left( \Gamma^S({\widehat}{Y}) \le \alpha_n \right) \leq \mathbb{P}\bigg( \Gamma^S({\widehat}{Y})\le \Gamma({\widehat}{Y})- 2\max_{ya}\sqrt{\frac{\log{32/\delta}}{n{\text{\normalfont P}}_{ya}}}\bigg) \leq \frac{\delta}{2}. \end{aligned}$$ Thus, if $\frac{\alpha}{2}> 2\max_{ya}\sqrt{\frac{\log{32/\delta}}{n{\text{\normalfont P}}_{ya}}}$, then $\alpha_n=\frac{\alpha}{2}$ satisfies the condition above, and Lemma \[lem:detectiontest\] follows for $T({\widehat}{Y}, S,\frac{\alpha}{2})$.

Proof of Lemma \[lemma:bin\_step1\]
-----------------------------------

Recall that $n^S_{ya}=\sum_i\mathbf{1}({y_i=y,a_i=a})$. With slight abuse of notation, we define random variables $S_{ya}=\{i:y_i=y,a_i=a\}$.\
We then have $\gamma_{ya}^S(h)|S_{ya}=\frac{{\sum_{j\in S_{ya}}h(x_j,a_j)}}{n^S_{ya}}\sim \frac{1}{n_{ya}^S} \text{Binomial}(n^S_{ya},\gamma_{ya})$ with ${\mathbb}{E}[\gamma_{ya}^S|S_{ya}]=\gamma_{ya}$.
$$\begin{aligned} {\mathbb}{P}\left(|\gamma_{ya}^S-\gamma_{ya}|>t\right)&\overset{(a)}{=}\sum_{S_{ya}}{\mathbb}{P}\left(|\gamma_{ya}^S-\gamma_{ya}|>t\ \big|\ S_{ya}\right){\mathbb}{P}(S_{ya}) \\ &\le {\mathbb}{P}\left(n^S_{ya}<\tfrac{n{\text{\normalfont P}}_{ya}}{2}\right)+\sum_{S_{ya}:\, n^S_{ya}\ge n{\text{\normalfont P}}_{ya}/2}{\mathbb}{P}\left(|\gamma_{ya}^S-\gamma_{ya}|>t\ \big|\ S_{ya}\right){\mathbb}{P}(S_{ya}) \\ &\overset{(b)}{\le} \exp\left(-\tfrac{n{\text{\normalfont P}}_{ya}}{8}\right)+\sum_{S_{ya}:\, n^S_{ya}\ge n{\text{\normalfont P}}_{ya}/2}2\exp\left(-t^2n{\text{\normalfont P}}_{ya}\right){\mathbb}{P}(S_{ya}) \\ &\overset{(c)}{\le} \frac{\delta}{8}+2\exp\left(-t^2n{\text{\normalfont P}}_{ya}\right), \label{eq:conc_cond_prob}\end{aligned}$$ where in $(a)$ the summation is over all $2^n$ possible configurations of $S_{ya}\subset[n]$, $(b)$ follows from the Chernoff bound on $n^S_{ya}\sim\text{Binomial}(n,{\text{\normalfont P}}_{ya})$ and Hoeffding’s bound on $\gamma_{ya}^S|S_{ya}\sim \frac{1}{n^S_{ya}}\text{Binomial}(n^S_{ya},\gamma_{ya})$, and $(c)$ follows from the condition on $n{\text{\normalfont P}}_{ya}$ in Lemma \[lemma:bin\_step1\]. Further, for $y\in\{0,1\}$, using the reverse triangle inequality followed by the triangle inequality, $$\left||\gamma^S_{y0}-\gamma^S_{y1}|-|\gamma_{y0}-\gamma_{y1}|\right|\le|\gamma^S_{y0}-\gamma^S_{y1}-\gamma_{y0}+\gamma_{y1}|\le|\gamma^S_{y0}-\gamma_{y0}|+|\gamma^S_{y1}-\gamma_{y1}|,\text{ and }$$ $$\begin{aligned} {\mathbb}{P}_S\left(\left||\gamma^S_{y0}-\gamma^S_{y1}|-|\gamma_{y0}-\gamma_{y1}|\right|> 2t\right) &\le {\mathbb}{P}_S\left(|\gamma^S_{y0}-\gamma_{y0}|+|\gamma^S_{y1}-\gamma_{y1}|> 2t\right) \\ &\overset{(a)}{\le}{\mathbb}{P}_S\left(|\gamma^S_{y0}-\gamma_{y0}|> t\right)+{\mathbb}{P}_S\left(|\gamma^S_{y1}-\gamma_{y1}|> t\right) \\ &\overset{(b)}{\le}\frac{\delta}{4}+4\exp\left(-t^2n\min_a{\text{\normalfont P}}_{ya}\right)\le\frac{\delta}{2}, \end{aligned}$$ where $(a)$ follows from the union bound, and $(b)$ follows from applying the previous bound twice and using $t=\max_{a}\sqrt{\frac{\log{16/\delta}}{n{\text{\normalfont P}}_{ya}}}$. The lemma follows from collecting the failure probabilities for $y=0,1$.

Deferred Proofs From Section \[sec:binary\] {#app:binary}
===========================================

We use the notation $A\le _\delta B$ to denote that $A\le B$ holds with probability greater than $1-\delta$. Recall the notation $\Gamma(h)=\max_y|\gamma_{y0}(h)-\gamma_{y1}(h)|$ and $\Gamma^S(h)=\max_y|\gamma^S_{y0}(h)-\gamma^S_{y1}(h)|$ from Definition \[def:approxnondiscrimination\]. To avoid clutter, we sometimes drop the dependence on $h$ for $\gamma_{ya}$ when $h$ is evident from the context.
Finally, in this section $C,C_1$ and $C_2$ denote absolute constants that are not necessarily the same at each occurrence.

Proof of Lemma \[lemma:BinaryStep1Guarantee\] {#app:4_def}
---------------------------------------------

The following intermediate lemma is used in the proof of Lemma \[lemma:BinaryStep1Guarantee\].

\[lemma:Heps\] Let ${\mathcal}{H}^{S_1}_{\alpha_{n}}=\{h\in{\mathcal}{H}: \Gamma^{S_1}(h)\le \alpha_{n}\}$ denote the subset of hypotheses that satisfy the constraints in . If $\forall (y,a),\,n{\text{\normalfont P}}_{ya}>16\log{8/\delta}$ and $\alpha_{n}$ satisfies $\alpha_{n}\ge2\max_{ya}\sqrt{\frac{2\log{64/\delta}}{n{\text{\normalfont P}}_{ya}}}$, then with probability greater than $1-\frac{\delta}{4}$, ${Y^*}\in{\mathcal}{H}_{\alpha_n}^{S_1}$ for all ${Y^*}\in{\mathcal}{Q}({\mathcal}{L}(Y^*),0)\cap {\mathcal}{H}$.

Given ${Y^*}\in{\mathcal}{H}\cap {\mathcal}{Q}(L^*,0)$, $$\begin{aligned} {\mathbb}{P}\left({Y^*}\notin{\mathcal}{H}_{\alpha_n}^{S_1}\right)&={\mathbb}{P}\left(\Gamma^{S_1}({Y^*})>\alpha_n\right) \overset{(a)}{\le} {\mathbb}{P}\left(\Gamma^{S_1}({Y^*})>2\max_{ya}\sqrt{\frac{2\log{64/\delta}}{n{\text{\normalfont P}}_{ya}}}\right) \\ &\overset{(b)}{=}{\mathbb}{P}\left(\Gamma^{S_1}({Y^*})>\Gamma({Y^*})+2\max_{ya}\sqrt{\frac{2\log{64/\delta}}{n{\text{\normalfont P}}_{ya}}}\right) \overset{(c)}{\le} \frac{\delta}{4}, \end{aligned}$$ where $(a)$ follows from the condition on $\alpha_n$, $(b)$ follows from the assumption $\Gamma(Y^*)=0$, and $(c)$ follows from Lemma \[lemma:bin\_step1\] as $|S_1|=n/2$.

Recall that the training data for Step $1$ are denoted by $S_1=\{(x_i,a_i,y_i):i\in[n/2]\}\sim{\mathbb}{P}^{n/2}(X,A,Y)$ and ${\text{\normalfont P}}_{ya}={\mathbb}{P}(Y=y,A=a)$. Using Hoeffding’s inequality on the empirical $0$-$1$ loss ${\mathcal}{L}^{S_1}(h)$, and using the concentration results for $\Gamma^{S_1}(h)$ from Lemma \[lemma:bin\_step1\], respectively, the following holds for $\delta\in (0,1/2)$ and $\min_{ya}n{\text{\normalfont P}}_{ya}>16\log{8/\delta}$. $$\big|{\mathcal}{L}(h)-{\mathcal}{L}^{S_1}(h)\big| \le_{\delta/4} \sqrt{\frac{\log{8/\delta}}{n}},\text{ and} \quad |\Gamma(h)-\Gamma^{S_1}(h)|\le_{\delta/4} 2\max_{ya}\sqrt{\frac{2\log{64/\delta}}{n{\text{\normalfont P}}_{ya}}}.
\label{eq:conc}$$ Using these concentration results and the standard VC dimension uniform bound [@bousquet2004introduction], the following holds with high probability for absolute constants $C_1$ and $C_2$, $$\begin{split} |{\mathcal}{L}({\widehat}{Y})-{\mathcal}{L}^{S_1}({\widehat}{Y})|&\le_{\delta/4} C_1\sqrt{\frac{VC({\mathcal}{H})+\log{1/\delta}}{n}} \text{,\quad and }\\ |\Gamma({\widehat}{Y})-\Gamma^{S_1}({\widehat}{Y})|&\le_{\delta/4} C_2\max_{ya}\sqrt{\frac{VC({\mathcal}{H})+\log{1/\delta}}{n{\text{\normalfont P}}_{ya}}}. \end{split} \label{eq:gamma_conc}$$ Finally, from Lemma \[lemma:Heps\], with probability greater than $1-\delta/4$, any $0$-discriminatory ${Y^*}\in{\mathcal}{H}$ is in the feasible set for Step $1$ in , and thus from the optimality of ${\widehat}{Y}$, ${\mathcal}{L}^{S_1}({\widehat}{Y})\le_{\delta/4}{\mathcal}{L}^{S_1}({Y^*})\le_{\delta/4} {\mathcal}{L}(Y^*)+ C_1\sqrt{\frac{VC({\mathcal}{H})+\log{1/\delta}}{n}}$. Thus, $$\begin{aligned} {\mathcal}{L}({\widehat}{Y})&\le_{\delta/4} {\mathcal}{L}^{S_1}({\widehat}{Y}) +C_1\sqrt{\frac{VC({\mathcal}{H})+\log{1/\delta}}{n}}\le_{\delta/2} {\mathcal}{L}(Y^*)+ 2C_1\sqrt{\frac{VC({\mathcal}{H})+\log{1/\delta}}{n}}, \\ \Gamma({\widehat}{Y})&\le_{\delta/4} \alpha_n +C_2\max_{ya}\sqrt{\frac{VC({\mathcal}{H})+\log{1/\delta}}{n{\text{\normalfont P}}_{ya}}}. \label{eq:tmp1}\end{aligned}$$ The lemma follows from combining the failure probabilities in the above equation.

Proof of Lemma \[steptwotopop\]
-------------------------------

The intuition is to conservatively bound the true and false positive rates of the non-discriminatory derived predictor using the class conditional rates for $h$.
In the case of binary predictors, ${\widetilde}{Y}$ being derived from $h$ is equivalent to requiring that $$\big(\gamma_{0a}({\widetilde}{Y}(h)),\gamma_{1a}({\widetilde}{Y}(h))\big) \in \textrm{Conv}\left( (0,0), (1,1), (\gamma_{0a}(h), \gamma_{1a}(h)), (1 - \gamma_{0a}(h), 1 - \gamma_{1a}(h)) \right) \label{eq:convhull}$$ In the figure below, ${\widetilde}{Y}^*(h)$ is the actual optimal derived non-discriminatory predictor, but we estimate it conservatively using the worse of the class conditional true and false positive rates of $h$: ![image](PictureProofAlphaLoss.png){width="6.5cm"} Without loss of generality, assume $\gamma_{1a}(h) \geq 0.5$ and $\gamma_{0a}(h) \leq 0.5$ for all $a$ (the hypothesis is at least as good as chance). Consider the predictor ${\widetilde}{Y}$ such that for both $a\in\{0,1\},$ $$\big(\gamma_{0a}({\widetilde}{Y}),\gamma_{1a}({\widetilde}{Y})\big)= \left( \max(\gamma_{00}, \gamma_{01}),\min(\gamma_{10}, \gamma_{11}) \right)\in\textrm{Conv}\left( (0,0), (1,1), (\gamma_{0a}(h), \gamma_{1a}(h))\right)$$ that is, ${\widetilde}{Y}$ has the greater of the two false positive rates and lesser of the two true positive rates for both classes $A=1$ and $A = 0$. Additionally, Lemma \[steptwotopop\] requires that $\forall y$, $|\gamma_{y1}(h)-\gamma_{y0}(h)|\le\alpha$. Thus, for $a\in\{0,1\}$, $$\begin{split} \gamma_{0a}({\widetilde}{Y}) - \gamma_{0a}(h) &=\max_{a'}\gamma_{0a'}(h) - \gamma_{0a}(h) \le \alpha, \text{ and }\\ \gamma_{1a}(h) - \gamma_{1a}({\widetilde}{Y})&=\gamma_{1a}(h) - \min_{a'}\gamma_{1a'}(h) \le \alpha. \end{split}$$ Clearly, this choice of ${\widetilde}{Y}$ is both non-discriminatory (as $\gamma_{ya}({\widetilde}{Y})$ is set independent of $a$ for all $y$), as well as derived (as $\gamma_{ya}({\widetilde}{Y})$ satisfy ). 
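The bookkeeping in this construction is simple enough to check numerically; the sketch below (our own illustration with hypothetical names and made-up rates) builds the conservative ${\widetilde}{Y}$ and confirms that the $0$-$1$ loss increases by at most $\alpha$:

```python
# Conservative derived predictor: equalize to the larger false positive rate
# and the smaller true positive rate of h across the two groups.
def conservative_correction(g, P):
    """g[y][a] = gamma_{ya}(h); P[y][a] = P(Y=y, A=a)."""
    fpr = max(g[0][0], g[0][1])   # max_a gamma_{0a}(h)
    tpr = min(g[1][0], g[1][1])   # min_a gamma_{1a}(h)
    loss_h = sum(P[0][a] * g[0][a] + P[1][a] * (1 - g[1][a]) for a in (0, 1))
    loss_t = sum(P[0][a] * fpr + P[1][a] * (1 - tpr) for a in (0, 1))
    alpha = max(abs(g[y][0] - g[y][1]) for y in (0, 1))
    return loss_h, loss_t, alpha

# made-up rates for an alpha = 0.15 discriminatory, better-than-chance h
g = [[0.10, 0.25], [0.90, 0.80]]
P = [[0.3, 0.2], [0.2, 0.3]]
loss_h, loss_t, alpha = conservative_correction(g, P)
assert loss_t <= loss_h + alpha   # the bound established in the lemma
print(loss_h, loss_t, alpha)
```

The corrected predictor is exactly non-discriminatory by construction (its rates no longer depend on $a$), and its loss exceeds that of $h$ by at most the discrimination gap $\alpha$.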
Thus ${\widetilde}{Y}$ is a feasible point for , and we have $$\begin{aligned}\label{eq:proof18eq} \mathbb{E}[\ell^{01}({\widetilde}{Y}^*(h))]&\le \mathbb{E}[\ell^{01}({\widetilde}{Y})] = \sum_{a\in{\left\{0,1\right\}}} {\text{\normalfont P}}_{0a}{\gamma}_{0a}({\widetilde}{Y}) + \sum_{a\in{\left\{0,1\right\}}}{\text{\normalfont P}}_{1a}(1-{\gamma}_{1a}({\widetilde}{Y}))\\ &\overset{(a)}\le\sum_{a\in{\left\{0,1\right\}}}{\text{\normalfont P}}_{0a}\left(\gamma_{0a}(h)+\alpha\right) + \sum_{a\in{\left\{0,1\right\}}}{\text{\normalfont P}}_{1a}\left(1-(\gamma_{1a}(h) - \alpha)\right) \\ &\leq \mathbb{E}[\ell^{01}(h)] +\alpha, \end{aligned}$$ where $(a)$ follows from .

Proof of Theorem \[theorem:UpperBounds\] {#app:ub}
----------------------------------------

The following supporting lemma on concentration of non-discrimination for randomized predictors is used in the proof of Theorem \[theorem:UpperBounds\].

For $\delta\in(0,1/2)$ and $h\in{\mathcal}{H}$, if $n{\text{\normalfont P}}_{ya}>16\log{8/\delta}$, then any randomized predictor ${\widetilde}{Y}$ derived from $({\widehat}{Y},A)$, i.e. ${\widetilde}{Y}\in{\mathcal}{P}({\widehat}{Y})$, satisfies the following for $|S_2|=n/2$ i.i.d. samples: $$\big|{\mathcal}{L}({\widetilde}{Y})-{\mathcal}{L}^{S_2}({\widetilde}{Y})\big| \le_{\delta} \sqrt{\frac{\log{2/\delta}}{n}},\text{ and } |\Gamma({\widetilde}{Y})-\Gamma^{S_2}({\widetilde}{Y})|\le_{\delta} 2\max_{ya}\sqrt{\frac{2\log{16/\delta}}{n{\text{\normalfont P}}_{ya}}}. $$ \[lemma:bin\_step2\] The proof essentially follows the same arguments that were used for Lemma \[lemma:bin\_step1\].
For a randomized predictor ${\widetilde}{Y}$, $${\mathcal}{L}^{S_2}({\widetilde}{Y})-{\mathcal}{L}({\widetilde}{Y})=\frac{2}{n}\sum_{i\in S_2} {\mathbb}{E}_{{\widetilde}{Y}} \ell({\widetilde}{Y}({\widehat}{y}_i,a_i),y_i)-{\mathbb}{E}_{X,A,Y}{\mathbb}{E}_{{\widetilde}{Y}}\ell({\widetilde}{Y},Y)$$ Here ${\mathcal}{L}^{S_2}({\widetilde}{Y})-{\mathcal}{L}({\widetilde}{Y})$ is simply an average of $n/2$ independent, $[0,1]$-bounded random variables minus its mean, so Hoeffding’s bound can be applied to get the required concentration on ${\mathcal}{L}^{S_2}$. Similarly, for any randomized predictor ${\widetilde}{Y}$, the conditional random variable $\gamma_{ya}^{S_2}({\widetilde}{Y})|S_{2_{ya}}=\frac{{\sum_{j\in S_{2_{ya}}}{\mathbb}{E}_{{\widetilde}{Y}}{\widetilde}{Y}({\widehat}{y}_j,a_j)}}{n^{S_2}_{ya}}$ is an average of $[0,1]$-bounded random variables with mean ${\mathbb}{E}[\gamma_{ya}^{S_2}({\widetilde}{Y})|S_{2_{ya}}]=\gamma_{ya}({\widetilde}{Y})$, so the proof of Lemma \[lemma:bin\_step1\] can be repeated verbatim for the randomized predictor, where instead of Hoeffding’s bound on binomial random variables we use the identical Hoeffding bound for $[0,1]$-bounded random variables.

We begin with the following consequence of Lemma \[lemma:bin\_step2\], which shows the concentration of loss and discrimination for randomized derived predictors: if $n{\text{\normalfont P}}_{ya}>16\log{8/\delta}$, then any randomized predictor ${\widetilde}{h}$ derived from $({\widehat}{Y},A)$, i.e. ${\widetilde}{h}\in{\mathcal}{P}({\widehat}{Y})$, satisfies the following: $$\big|{\mathcal}{L}({\widetilde}{h})-{\mathcal}{L}^{S_2}({\widetilde}{h})\big| \le_{\delta/4} \sqrt{\frac{\log{8/\delta}}{n}},\text{ and } |\Gamma({\widetilde}{h})-\Gamma^{S_2}({\widetilde}{h})|\le_{\delta/4} 2\max_{ya}\sqrt{\frac{2\log{64/\delta}}{n{\text{\normalfont P}}_{ya}}}.
\label{eq:conc2}$$

#### VC dimension of ${\mathcal}{P}({\widehat}{Y})$:

If $|{\widehat}{{\mathcal}{Y}}|$ and $|{\mathcal}{A}|$ are finite, consider a finite hypothesis class, denoted by ${\mathcal}{H}_{{\widehat}{Y},A}$, that includes all deterministic mappings from $({\widehat}{Y},A)$ to binary values $\{0,1\}$. If ${\widehat}{Y}$ and ${A}$ are both binary, then there are $4^2=16$ such mappings ${\mathcal}{H}_{{\widehat}{Y},A}=\{{\widetilde}{h}:\{0,1\}\times \{0,1\}\to \{0,1\}\}$. Further, recall that the feasible set of (randomized) binary predictors derived from ${\mathbb{P}}({\widehat}{Y},Y,A)$ is denoted by ${\mathcal}{P}({\widehat}{Y})$, and any ${\widetilde}{Y}\in{\mathcal}{P}({\widehat}{Y})$ is completely specified by four parameters, $\{{\widetilde}{p}_{{\widehat}{y},a}({\widetilde}{Y})={\mathbb{P}}({\widetilde}{Y}=1|{\widehat}{Y}={\widehat}{y},A=a):{\widehat}{y},a\in\{0,1\}\}$. With the above definition of ${\mathcal}{H}_{{\widehat}{Y},A}$, any such randomized derived predictor ${\widetilde}{Y}\in {\mathcal}{P}({\widehat}{Y})$ derived from $({\widehat}{Y},{A})$ is in the convex hull of ${\mathcal}{H}_{{\widehat}{Y},A}$. This implies $VC({\mathcal}{P}({\widehat}{Y}))=VC(\text{conv}({\mathcal}{H}_{{\widehat}{Y},A}))=\log{16}$, which is a constant. Thus, for any ${\widetilde}{Y}\in{\mathcal}{P}({\widehat}{Y})$ estimated from Step $2$ in , using the standard VC dimension uniform bound over ${\mathcal}{P}({\widehat}{Y})$ [@bousquet2004introduction] along with the bounds above, we have the following: $$\begin{aligned} {\mathcal}{L}({\widetilde}{Y})&\le_{\delta/4} {\mathcal}{L}^{S_2}({\widetilde}{Y}) +C_1\sqrt{\frac{\log{1/\delta}}{n}}, & \Gamma({\widetilde}{Y})&\le_{\delta/4} {\widetilde}{\alpha}_n +C_2\max_{ya}\sqrt{\frac{\log{1/\delta}}{n{\text{\normalfont P}}_{ya}}}. \label{eq:ubstep2_1}\end{aligned}$$
#### Upper bound on ${\mathcal}{L}^{S_2}({\widetilde}{Y})$:

For any derived ${\widetilde}{Y}^*\in{\mathcal}{P}({\widehat}{Y})$ that is $0$-discriminatory, using identical arguments to those of Lemma \[lemma:Heps\], we have the following: $${\mathbb}{P}\left(\Gamma^{S_2}({\widetilde}{Y}^*)>{\widetilde}{\alpha}_n\right)\overset{(a)}{\le} {\mathbb}{P}\left(\Gamma^{S_2}({\widetilde}{Y}^*)>2\max_{ya}\sqrt{\frac{2\log{64/\delta}}{n{\text{\normalfont P}}_{ya}}}\right)\overset{(b)}{\le}\frac{\delta}{4},$$ where $(a)$ follows from the condition on ${\widetilde}{\alpha}_n$ and $(b)$ from Lemma \[lemma:bin\_step2\]. From the optimality of ${\mathcal}{L}^{S_2}({\widetilde}{Y})$ and Lemma \[lemma:bin\_step2\], $\forall {\widetilde}{Y}^*\in{\mathcal}{Q}({\mathcal}{L}({\widetilde}{Y}^*),0)\cap {\mathcal}{P}({\widehat}{Y})$ we have, $${\mathcal}{L}^{S_2}({\widetilde}{Y})\le_{\delta/4} {\mathcal}{L}^{S_2}({\widetilde}{Y}^*)\le_{\delta/4} {\mathcal}{L}({\widetilde}{Y}^*)+C_1\sqrt{\frac{\log{8/\delta}}{n}}. \label{eq:upstep2}$$

#### Upper bound on ${\mathcal}{L}({\widetilde}{Y}^*)$:

The rest of the proof involves obtaining an upper bound for ${\mathcal}{L}({\widetilde}{Y}^*)$ using Lemma \[steptwotopop\]. Recall from Lemma \[steptwotopop\] that if $h$ is an $\alpha$-discriminatory binary predictor, the optimum non-discriminatory derived predictor ${\widetilde}{Y}^*(h)$ given by  satisfies ${\widetilde}{Y}^*(h)\in{\mathcal}{Q}({\mathcal}{L}(h)+\alpha,0)$.
Thus, for $h={\widehat}{Y}$, a derived non-discriminatory predictor ${\widetilde}{Y}^*({\widehat}{Y})$ obtained from satisfies, $$\begin{split} {\mathcal}{L}({\widetilde}{Y}^*({\widehat}{Y}))\overset{(a)}\le_\delta {\mathcal}{L}(Y^*)+\alpha_n+C_3{\max_{ya}\sqrt{\frac{VC({\mathcal}{H})+\log{1/\delta}}{n{\text{\normalfont P}}_{ya}}}}, \end{split} \label{eq:upstep2_3}$$ where in $(a)$ $Y^*\in{\mathcal}{H}$ is any non-discriminatory predictor from the original hypothesis class ${\mathcal}{H}$ and the inequality follows from combining Lemma \[lemma:BinaryStep1Guarantee\] and Lemma \[steptwotopop\] as with probability at least $1-\delta$, ${\widehat}{Y}$ is at most $\alpha=\Gamma({\widehat}{Y}) \le\alpha_n+ C_2{\max_{ya}\sqrt{\frac{VC({\mathcal}{H})+\log{1/\delta}}{n{\text{\normalfont P}}_{ya}}}}$ discriminatory. Combining , , and , with probability at least $1-2\delta$, $$\begin{split} {\mathcal}{L}({\widetilde}{Y})&\le{\mathcal}{L}(Y^*)+\alpha_n+C_1{\max_{ya}\sqrt{\frac{VC({\mathcal}{H})+\log{1/\delta}}{n{\text{\normalfont P}}_{ya}}}},\text{ and }\\ \Gamma({\widetilde}{Y})&\le{\widetilde}{\alpha}_n +C_2\max_{ya}\sqrt{\frac{\log{1/\delta}}{n{\text{\normalfont P}}_{ya}}} \end{split}$$ Theorem \[theorem:UpperBounds\] follows from an appropriate choice of $\alpha_n,{\widetilde}{\alpha}_n$ and rescaling $\delta$. Proof of Theorem \[theorem:Step1LowerBound\] -------------------------------------------- Let $p = \min_{a,y} \mathbb{P}(A = a, Y = y)$ denote the smallest cell probability of the marginal distribution over $(A,Y)$. Since the definition of fairness is invariant to re-labelling of $A,Y$, assume without loss of generality that $p$ corresponds to $A = 1, Y = 1$.
For $\alpha \in (0,1/2)$, the distribution $\mathcal{D}$ over $(X,A,Y) \in {\left\{0,1\right\}}^n \times {\left\{0,1\right\}} \times {\left\{0,1\right\}}$ is described by $$\begin{aligned} &{\mathbb{P}}(X_1 = y\ |\ Y = y) &&= 1 - \alpha \\ &{\mathbb{P}}(X_i = 0\ |\ Y = 0, A = 0) &&= 1 && \textrm{ for } i = 2,3,...,n \\ &{\mathbb{P}}(X_i = 1\ |\ Y = 1, A = 0) &&= 1 && \textrm{ for } i = 2,3,...,n \\ &{\mathbb{P}}(X_i = 0\ |\ Y = 0, A = 1) &&= 1 && \textrm{ for } i = 2,3,...,n \\ &{\mathbb{P}}(X_i = 1\ |\ Y = 1, A = 1) &&= 1 - \alpha && \textrm{ for } i = 2,3,...,n \\ \end{aligned}$$ Consider the following hypothesis class $\mathcal{H} = {\left\{h_i\right\}}_{i=1}^n$ with $h_i(X,A) = X_i$. The hypothesis $h_1$ has 0-1 loss ${\mathcal}{L}_{01}(h_1) = {\mathbb{P}}(X_1 \neq Y) = \alpha$ and is exactly non-discriminatory since $X_1 \perp A\ |\ Y$ by construction. For every other $i = 2,3,...,n$, the 0-1 loss of $h_i$ is the same: $${\mathcal}{L}_{01}(h_i) = \sum_{y}\sum_{a} {\mathbb{P}}(X_i = 1 - y\ |\ Y = y, A = a){\mathbb{P}}(Y = y, A = a) = p\alpha$$ however, for these hypotheses ${\left| {\mathbb{P}}(h_i = 1 | Y = 1, A = 1) - {\mathbb{P}}(h_i = 1 | Y = 1, A = 0) \right|} = \alpha$ so $h_i$ is $\alpha$-discriminatory. We will now show that on a sample $S$ of size $m$, the empirical risk minimizer subject to an approximate non-discrimination constraint, ${\widehat}{h}$, will be $h_i$ for $i \neq 1$ with probability $0.5$. Hence, the first step alone cannot assure with probability better than $0.5$ a classifier that is better than $\alpha$-discriminatory. First, we note that the predictions of $h_i$ and $h_j$ are independent for $i \neq j$ since $X_i$ and $X_j$ are independent. 
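As a side check (outside the proof), the two loss computations above can be verified exactly for an assumed marginal over $(A,Y)$; the cell probabilities below are illustrative, chosen so that the $A=1, Y=1$ cell attains the smallest value $p$:

```python
from fractions import Fraction as F

alpha = F(1, 4)
# assumed marginal P(A=a, Y=y); the (1,1) cell attains the minimum, so p = P(A=1, Y=1)
marg = {(0, 0): F(3, 8), (0, 1): F(3, 8), (1, 0): F(1, 8), (1, 1): F(1, 8)}
p = marg[(1, 1)]

# for i >= 2, X_i matches Y deterministically except when (A, Y) = (1, 1),
# where P(X_i = 1 - y | Y = y, A = a) = alpha by construction of D
err = {(a, y): (alpha if (a, y) == (1, 1) else F(0)) for (a, y) in marg}
loss_hi = sum(err[(a, y)] * marg[(a, y)] for (a, y) in marg)

assert loss_hi == p * alpha   # L01(h_i) = p * alpha, as claimed
assert alpha > loss_hi        # h_1's loss alpha exceeds that of every h_i, i >= 2
```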
Therefore, the numbers of errors made by the classifiers $h_i$ on the sample $S$ are independent and $$\begin{aligned} \mathbb{P}(\mathcal{L}_{01}^S(h_1) = 0) = (1-\alpha)^m \qquad\qquad {\mathbb{P}}({\mathcal}{L}_{01}^S(h_i) = 0) = (1-p\alpha)^m \quad\textrm{for } i = 2,3,...,n \end{aligned}$$ Since a classifier that makes zero errors on $S$ is automatically non-discriminatory on $S$, if $h_1$ makes at least $1$ mistake on $S$ and some $h_i$ does not make any errors, then some $h_i$ with $i \neq 1$ will be the optimum of . This event occurs with probability: $$\begin{aligned} {\mathbb{P}}\left({\mathcal}{L}_{01}^S(h_1) > 0 \wedge \exists i\ {\mathcal}{L}_{01}^S(h_i) = 0\right) &= \left(1 - (1-\alpha)^m\right)\left(1 - {\mathbb{P}}\left(\forall i > 1\ {\mathcal}{L}_{01}^S(h_i) > 0\right) \right) \\ &= \left(1 - (1-\alpha)^m\right)\left( 1 - \prod_{i=2}^n \left(1 - (1-p\alpha)^m\right) \right) \\ &= \left(1 - (1-\alpha)^m\right)\left(1 - \left(1 - (1-p\alpha)^m\right)^{n-1}\right)\end{aligned}$$ From here, we use that $$\begin{aligned} \forall k \in \mathbb{N}\ \forall x \in [0,1]\quad (1-x)^k \leq \frac{1}{1+kx}\end{aligned}$$ Thus, since $(1-p\alpha)^m,\alpha \in [0,1]$: $$\begin{aligned} {\mathbb{P}}\left({\mathcal}{L}_{01}^S(h_1) > 0 \wedge \exists i\ {\mathcal}{L}_{01}^S(h_i) = 0\right) &= \left(1 - (1-\alpha)^m\right)\left(1 - \left(1 - (1-p\alpha)^m\right)^{n-1}\right) \\ &\geq \left( 1 - \frac{1}{1+m\alpha}\right)\left(1 - \frac{1}{1 + (n-1)(1-p\alpha)^m} \right) \\ &= \frac{m\alpha}{1+m\alpha} - \left( 1 - \frac{1}{1+m\alpha}\right)\frac{1}{1 + (n-1)(1-p\alpha)^m} \label{eq:step1lowerboundtwoterms}\end{aligned}$$ This expression is greater than $1/2$ if the first term is at least $2/3$ and the second term is at most $1/6$.
Thus $$\frac{m\alpha}{1+m\alpha} \geq \frac{2}{3} \iff \alpha \geq \frac{2}{m}$$ and $$\begin{aligned} &\left( 1 - \frac{1}{1+m\alpha}\right)\frac{1}{1 + (n-1)(1-p\alpha)^m} \leq \frac{1}{6} \\ &\impliedby \frac{1}{1 + (n-1)(1-p\alpha)^m} \leq \frac{1}{6} \iff \log\frac{1}{1-p\alpha} \leq \frac{\log \frac{n-1}{5}}{m} \end{aligned}$$ Since $-\log(1-x) < \frac{x}{1-x}$ for $x \in (0,1)$, and $p \leq \frac{1}{4}$, the expression is at least $1/2$ when $$\frac{2}{m} \leq \alpha \leq \frac{3\log \frac{n}{5}}{4pm}$$ Therefore, when $\alpha = \frac{3\log \frac{n}{5}}{4pm}$, with probability at least $0.5$, $h_1$ has non-zero error on $S$ and a different predictor has zero error. We conclude that there exists a distribution and hypothesis class such that with probability at least $0.5$, the hypothesis returned by the first step is $\frac{3\log \frac{n}{5}}{4pm}$-discriminatory. Proof of Theorem \[thm:hardness\] {#appendix:hardness} ================================= Let $\mathcal{A}$ be an algorithm that takes as inputs a hypothesis class $\mathcal{H}$, a distribution ${\widetilde}{\mathcal{D}}$ over $({\widetilde}{X}, {\widetilde}{A}, {\widetilde}{Y})$ with ${\widetilde}{A} \in {\left\{0,1\right\}}$ and ${\widetilde}{Y} \in {\left\{-1,+1\right\}}$, an accuracy parameter $\epsilon > 0$, and a non-discrimination parameter $\alpha > 0$ and returns a predictor $f = \mathcal{A}({\widetilde}{\mathcal{D}}, \epsilon, \alpha)$ such that with probability $1-\zeta$ $$\begin{gathered} \mathcal{L}_{{\widetilde}{\mathcal{D}}}^{\textrm{hinge}}\left(f\right) \leq \min_{h \in \mathcal{H}_{\textrm{0-disc}}({\widetilde}{\mathcal{D}})} \mathcal{L}_{{\widetilde}{\mathcal{D}}}^{\textrm{hinge}}(h) + \epsilon \label{eq:definitionAlg}\\ {\left| \mathbb{P}_{{\widetilde}{\mathcal{D}}}\left(f \geq 0\ \middle|\ {\widetilde}{Y} = y, {\widetilde}{A} = 0 \right) - \mathbb{P}_{{\widetilde}{\mathcal{D}}}\left(f \geq 0\ \middle|\ {\widetilde}{Y} = y, {\widetilde}{A} = 1 \right) \right|} \leq \alpha\qquad\textrm{for } y =
-1,+1 \nonumber\end{gathered}$$ The possibly randomized predictor $f$ need not be in the hypothesis class $\mathcal{H}$, but it is being compared against the best predictor in $\mathcal{H}$ whose sign is non-discriminatory. We will show that such an algorithm can be used to improperly weakly learn <span style="font-variant:small-caps;">Halfspace</span> which, subject to the complexity assumption that refuting random K-XOR formulas is hard, was shown to be computationally hard by [@daniely2015complexity]. We conclude that $\mathcal{A}$ must be computationally hard to compute. The <span style="font-variant:small-caps;">Halfspace</span> problem is to take a distribution $\mathcal{D}$ over $(X,Y)$ with $X \in \mathbb{R}^d$ and $Y \in {\left\{-1,+1\right\}}$, and find the linear predictor $$\label{eq:halfspacehstar} h^*(x) = \textrm{sign}({w^*}^Tx) \qquad\textrm{where}\qquad w^* = {\underset{w \in \mathbb{R}^d}{\text{argmin}}}\ \underset{x,y \sim \mathcal{D}}{\mathbb{E}}\left[ \textrm{sign}(w^Tx) \neq y \right]$$ The hardness of the <span style="font-variant:small-caps;">Halfspace</span> problem was shown using a distribution over the unit hypercube in $d$-dimensions, thus we will assume that $\mathcal{D}$ is a bounded distribution. We assume access to the distribution $\mathcal{D}$, knowledge of $\mathcal{L}_{\mathcal{D}}^{01}(h^*)$, and for now, access to the joint distribution of $(h^*(X), Y)$: $$\begin{aligned} &\eta_{--} = \mathbb{P}_{\mathcal{D}}(h^*(X) = -1, Y = -1) \qquad&&\eta_{-+} = \mathbb{P}_{\mathcal{D}}(h^*(X) = -1, Y = +1) \\ &\eta_{+-} = \mathbb{P}_{\mathcal{D}}(h^*(X) = +1, Y = -1) \qquad&&\eta_{++} = \mathbb{P}_{\mathcal{D}}(h^*(X) = +1, Y = +1) \end{aligned}$$ however, we will show later that it is not necessary to know the $\eta$’s.
Since it is always possible to get 0-1 loss at most $\frac{1}{2}$ with a <span style="font-variant:small-caps;">Halfspace</span> predictor, we assume that $\eta_{++} + \eta_{--} \geq \eta_{+-} + \eta_{-+} = \mathcal{L}_{\mathcal{D}}^{01}(h^*)$. From the distribution $\mathcal{D}$ we construct a new distribution ${\widetilde}{\mathcal{D}}$ over $({\widetilde}{X},{\widetilde}{A},{\widetilde}{Y})$ with ${\widetilde}{X} \in \mathbb{R}^{d+1}$, ${\widetilde}{A} \in {\left\{0,1\right\}}$, and ${\widetilde}{Y} \in {\left\{-1,+1\right\}}$ in the following manner: $$\begin{aligned} &\mathbb{P}_{{\widetilde}{\mathcal{D}}}({\widetilde}{A} = 0) &&= 1-\delta \\ &\mathbb{P}_{{\widetilde}{\mathcal{D}}}({\widetilde}{A} = 1) &&= \delta \\ &\mathbb{P}_{{\widetilde}{\mathcal{D}}}({\widetilde}{X} = -e_1, {\widetilde}{Y} = -1\ |\ {\widetilde}{A} = 0) &&= \eta_{--} \\ &\mathbb{P}_{{\widetilde}{\mathcal{D}}}({\widetilde}{X} = -e_1, {\widetilde}{Y} = +1\ |\ {\widetilde}{A} = 0) &&= \eta_{-+} \\ &\mathbb{P}_{{\widetilde}{\mathcal{D}}}({\widetilde}{X} = e_1, {\widetilde}{Y} = -1\ |\ {\widetilde}{A} = 0) &&= \eta_{+-} \\ &\mathbb{P}_{{\widetilde}{\mathcal{D}}}({\widetilde}{X} = e_1, {\widetilde}{Y} = +1\ |\ {\widetilde}{A} = 0) &&= \eta_{++} \\ &\mathbb{P}_{{\widetilde}{\mathcal{D}}}({\widetilde}{X} = [0, x], {\widetilde}{Y} = y\ |\ {\widetilde}{A} = 1) &&= \mathbb{P}_{\mathcal{D}}(X = x, Y = y)\qquad \forall x,y \end{aligned}$$ where $e_1$ is the first standard basis vector in $\mathbb{R}^{d+1}$. In other words, when ${\widetilde}{A} = 1$ the distribution ${\widetilde}{\mathcal{D}}$ is identical to $\mathcal{D}$ besides a zero appended to the beginning of $X$. When ${\widetilde}{A} = 0$, ${\widetilde}{\mathcal{D}}$ is supported on two points $-e_1$ and $e_1$. We will apply the algorithm $\mathcal{A}$ to the distribution ${\widetilde}{\mathcal{D}}$ with parameters $\epsilon$ and $\alpha$ to be determined later. 
In this case $\mathcal{H}$ is the class of linear predictors so the hinge loss of $f$ on ${\widetilde}{\mathcal{D}}$ must be competitive with the hinge loss of the best linear predictor whose sign is non-discriminatory. Using the following lemmas, we show that the output of $\mathcal{A}$ must have small hinge loss on ${\widetilde}{\mathcal{D}}$, that this output can then be modified so that it has small 0-1 loss on ${\widetilde}{\mathcal{D}}$, and finally that it can be further modified to achieve small 0-1 loss on $\mathcal{D}$. The proofs are deferred to the end of this discussion. \[lem:hardnessexistsgoodhinge\] There exists a linear predictor $h$ whose sign is $0$-discriminatory such that $$\mathcal{L}_{{\widetilde}{\mathcal{D}}}^{\textrm{hinge}}(h) = 2(1-\delta)\mathcal{L}_{\mathcal{D}}^{01}(h^*) + 2\delta$$ By Lemma \[lem:hardnessexistsgoodhinge\] and the definition of $f$ from , $$\mathcal{L}_{{\widetilde}{\mathcal{D}}}^{\textrm{hinge}}\left(f\right) \leq \min_{h \in \mathcal{H}_{\textrm{0-disc}}({\widetilde}{\mathcal{D}})} \mathcal{L}_{{\widetilde}{\mathcal{D}}}^{\textrm{hinge}}(h) + \epsilon \leq 2(1-\delta)\mathcal{L}_{\mathcal{D}}^{01}(h^*) + 2\delta + \epsilon$$ and the sign of $f$ is $\alpha$-discriminatory. Next, \[lem:hardnessfixf\] The predictor $f$ can be efficiently modified to yield a new predictor $f'$ whose sign is $\alpha$-discriminatory such that $$\mathcal{L}_{{\widetilde}{\mathcal{D}}}^{01}(f') \leq (1-\delta)\mathcal{L}_{\mathcal{D}}^{01}(h^*) + 2\delta + \epsilon$$ Finally, \[lem:01lossessimilar\] The predictor $f''(x) = f'([0,x],1)$ achieves ${\mathcal}{L}_{{\mathcal}{D}}^{01}(f'') \leq {\mathcal}{L}_{{\widetilde}{{\mathcal}{D}}}^{01}(f') + \alpha(1-\delta)$.
The predictor $f''$ described in Lemma \[lem:01lossessimilar\] thus has 0-1 loss on $\mathcal{D}$ $$\label{eq:hardness01loss} \mathcal{L}_{\mathcal{D}}^{01}(f'') \leq (1-\delta)(\mathcal{L}_{\mathcal{D}}^{01}(h^*) + \alpha) + 2\delta + \epsilon$$ Theorem 1.3 from [@daniely2015complexity] proves that there is no algorithm running in time polynomial in the dimension $d$ that can return a predictor achieving 0-1 error $\leq \frac{1}{2} - d^{-c}$ with high probability for a constant $c > 0$ for an arbitrary distribution, even with the knowledge that $\mathcal{L}_{\mathcal{D}}^{01}(h^*) \leq L^*$ for $L^* < 1/2$. Thus, $\mathcal{A}({\widetilde}{D},\epsilon,\alpha)$ cannot run in time polynomial in the dimension $d$ for any parameters $\mathcal{L}_{\mathcal{D}}^{01}(h^*)$, $\epsilon$, $\alpha$, and $\delta$ such that is greater than $\frac{1}{2} - d^{-c}$ for any $c > 0$. For $\epsilon,\alpha < \frac{1}{8}$, and setting $\delta = \frac{1}{16}$, shows that $$\mathcal{L}_{\mathcal{D}}^{01}(f'') \leq \frac{15}{16}\mathcal{L}_{\mathcal{D}}^{01}(h^*) + \frac{47}{128}$$ For any $L^* < \frac{1}{10}$ this is at most $\frac{1}{2} - \frac{1}{40}$ and does not depend on the dimension. In this proof we assumed knowledge of the parameters $\eta$ which describe the conditional error rates of $h^*$. If a polynomial time algorithm for $\mathcal{A}$ existed, then it would be possible to perform two-dimensional grid search over the $\eta$. Calls made to $\mathcal{A}$ with the incorrect values of $\eta$ might result in very inaccurate or discriminatory predictors, but using an estimate of $\eta$ up to $\mathcal{O}(\alpha)$ accuracy is sufficient to approximate the <span style="font-variant:small-caps;">Halfspaces</span> solution using $\mathcal{A}$. Thus at most $\mathcal{O}(\log^2(1/\alpha))$ calls to the polynomial time algorithm would be needed. 
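Spelling out the arithmetic behind the constants $\tfrac{15}{16}$ and $\tfrac{47}{128}$ above: with $\delta = \tfrac{1}{16}$ and $\alpha, \epsilon \leq \tfrac{1}{8}$, $$\mathcal{L}_{\mathcal{D}}^{01}(f'') \leq \left(1-\tfrac{1}{16}\right)\left(\mathcal{L}_{\mathcal{D}}^{01}(h^*)+\tfrac{1}{8}\right) + \tfrac{2}{16} + \tfrac{1}{8} = \tfrac{15}{16}\mathcal{L}_{\mathcal{D}}^{01}(h^*) + \tfrac{15}{128} + \tfrac{16}{128} + \tfrac{16}{128} = \tfrac{15}{16}\mathcal{L}_{\mathcal{D}}^{01}(h^*) + \tfrac{47}{128},$$ and for $\mathcal{L}_{\mathcal{D}}^{01}(h^*) < \tfrac{1}{10}$ this is below $\tfrac{15}{160} + \tfrac{47}{128} = \tfrac{59}{128} < \tfrac{1}{2} - \tfrac{1}{40}$.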
Therefore, in order for $\mathcal{A}$ to guarantee for an arbitrary ${\widetilde}{\mathcal{D}}$ that its output would have excess hinge loss at most $\frac{1}{8}$ and its sign would be at most $\frac{1}{8}$-discriminatory, it must run in time super-polynomial in the dimension in the worst case. Deferred proofs --------------- Define $h(X,A) = \textrm{sign}\left(\begin{bmatrix} 1 \\ w^* \end{bmatrix}^TX \right)$. Then $h(-e_1,0) = -1$, $h(e_1,0) = 1$, and $h([0,x],1) = h^*(x)$, where $h^*$ is as defined in . Since the sign function is invariant to scaling, $\mathcal{L}_{\mathcal{D}}^{01}(h^*) = \mathcal{L}_{\mathcal{D}}^{01}(ch^*)$ for any $c > 0$. Theorem 1.3 in [@daniely2015complexity] involves a distribution $\mathcal{D}$ that is supported on the unit hypercube in $\mathbb{R}^d$, thus the predictor $\frac{h^*}{\|w^*\|_2\sqrt{d}} \in [-1,1]$ with probability 1, and has the same 0-1 loss as $h^*$. By the definition of ${\widetilde}{\mathcal{D}}$, $\eta_{+-}$, and $\eta_{-+}$: $$\begin{aligned} \mathbb{P}_{{\widetilde}{\mathcal{D}}}(h \geq 0\ |\ {\widetilde}{Y} = -1, {\widetilde}{A} = 0 ) &= \frac{\eta_{+-}}{\mathbb{P}_{{\widetilde}{\mathcal{D}}}({\widetilde}{Y} = -1\ |\ {\widetilde}{A} = 0)} \\ &= \frac{\eta_{+-}}{\mathbb{P}_{\mathcal{D}}(Y = -1)} \\ &= \mathbb{P}_{\mathcal{D}}(h^* \geq 0\ |\ Y = -1) \\ &= \mathbb{P}_{{\widetilde}{\mathcal{D}}}(h\phantom{^*} \geq 0\ |\ {\widetilde}{Y} = -1, {\widetilde}{A} = 1 ) \end{aligned}$$ $$\begin{aligned} \mathbb{P}_{{\widetilde}{\mathcal{D}}}(h < 0\ |\ {\widetilde}{Y} = 1, {\widetilde}{A} = 0 ) &= \frac{\eta_{-+}}{\mathbb{P}_{{\widetilde}{\mathcal{D}}}({\widetilde}{Y} = 1\ |\ {\widetilde}{A} = 0)} \\ &= \frac{\eta_{-+}}{\mathbb{P}_{\mathcal{D}}(Y = 1)} \\ &= \mathbb{P}_{\mathcal{D}}(h^* < 0\ |\ Y = 1) \\ &= \mathbb{P}_{{\widetilde}{\mathcal{D}}}(h\phantom{^*} < 0\ |\ {\widetilde}{Y} = 1, {\widetilde}{A} = 1 ) \end{aligned}$$ Therefore, $h$ is $0$-discriminatory at threshold $0$. 
Also $$\begin{aligned} \mathcal{L}_{{\widetilde}{\mathcal{D}}}^{\textrm{hinge}}(h)\ =\qquad &[1 + h(x_1)]_+\mathbb{P}_{{\widetilde}{\mathcal{D}}}({\widetilde}{X} = x_1, {\widetilde}{A} = 0, {\widetilde}{Y} = -1) \\ +&[1 - h(x_0)]_+\mathbb{P}_{{\widetilde}{\mathcal{D}}}({\widetilde}{X} = x_0, {\widetilde}{A} = 0, {\widetilde}{Y} = \phantom{-}1) \nonumber\\ +&\int_X[1 + h(x)]_+\mathbb{P}_{{\widetilde}{\mathcal{D}}}({\widetilde}{X} = x, {\widetilde}{A} = 1, {\widetilde}{Y} = -1)dx \nonumber\\ +&\int_X[1 - h(x)]_+\mathbb{P}_{{\widetilde}{\mathcal{D}}}({\widetilde}{X} = x, {\widetilde}{A} = 1, {\widetilde}{Y} = \phantom{-}1)dx \nonumber\\ \leq\qquad &2\eta_{+-}\mathbb{P}_{{\widetilde}{\mathcal{D}}}({\widetilde}{A} = 0) + 2\eta_{-+}\mathbb{P}_{{\widetilde}{\mathcal{D}}}({\widetilde}{A} = 0) \\ +& 2\int_X\left( \mathbb{P}_{{\widetilde}{\mathcal{D}}}({\widetilde}{X} = x, {\widetilde}{A} = 1, {\widetilde}{Y} = -1) + \mathbb{P}_{{\widetilde}{\mathcal{D}}}({\widetilde}{X} = x, {\widetilde}{A} = 1, {\widetilde}{Y} = 1) \right)dx \nonumber\\ =\qquad &2\mathbb{P}_{{\widetilde}{\mathcal{D}}}({\widetilde}{A} = 0)(\eta_{+-} + \eta_{-+}) + 2\mathbb{P}_{{\widetilde}{\mathcal{D}}}({\widetilde}{A} = 1) \\ =\qquad &2(1-\delta)(\eta_{-+} + \eta_{+-}) + 2\delta\end{aligned}$$ Since only the sign of $f$ is required to be $\alpha$-discriminatory, we can modify the magnitude of its predictions without affecting its level of non-discrimination. Therefore, we first truncate the output of $f$ to lie in the range $[-1,1]$, which can only reduce the hinge loss. Ignoring for the moment that the sign of $f$ must be $\alpha$-discriminatory, we would like to define $f'$ so that $f'(-e_1,0) = -1$ and $f'(e_1,0) = 1$ with probability 1. In this case, the hinge loss when ${\widetilde}{A} = 0$ is exactly $2(\eta_{+-} + \eta_{-+})$. With that being said, any modification to $f$ that changes the distribution of the *sign* of the predictor risks rendering it more than $\alpha$-discriminatory. 
With this in mind, we construct $f'$ such that $$\begin{aligned} f'(-e_1,0) &= \begin{cases} -1 & \textrm{w.p. } \mathbb{P}(f(-e_1,0) < 0) \\ 0 & \textrm{w.p. } \mathbb{P}(f(-e_1,0) \geq 0) \end{cases}\\ f'(e_1,0) &= \begin{cases} 1 & \textrm{w.p. } \mathbb{P}(f(e_1,0) \geq 0) \\ -0 & \textrm{w.p. } \mathbb{P}(f(e_1,0) < 0) \end{cases} \\ f'([0,x],1) &= f([0,x],1) \end{aligned}$$ where $-0$ is a negative number of arbitrarily small magnitude. Constructed this way, the distribution of the sign of $f'$ conditioned on ${\widetilde}{A}$ is identical to that of $f$, meaning that the sign of $f'$ is $\alpha$-discriminatory. The hinge loss of $f'$ is an upper bound on the 0-1 loss, and in order to show that $f'$ achieves small 0-1 loss, we will show that the hinge loss is a *loose* upper bound. The construction of ${\widetilde}{\mathcal{D}}$ and the predictions of $f'$ conditioned on ${\widetilde}{A} = 0$ creates a substantial gap between the losses. ![image](hinge01.png){width="3in"} Notice that when $f'$ makes a prediction of magnitude 1 that has the correct sign, both the hinge loss and the 0-1 loss evaluate to $0$. Similarly, when $f'$ makes a prediction of magnitude 0 with the incorrect sign, both losses are $1$. Thus in each of these cases, the hinge loss is equivalent to the 0-1 loss. However, if $f'$ makes a prediction of magnitude 1 with the incorrect sign, the hinge loss is $2$ but the 0-1 loss is only $1$, and when $f'$ makes a prediction of magnitude 0 with the correct sign, the hinge loss is $1$ but the 0-1 loss is $0$. Consequently, in each of these cases there is a gap of $1$ between the hinge and 0-1 losses. 
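The four cases just described can be tabulated in a short numeric check (a side verification with hypothetical prediction values; here a prediction of $\geq 0$ counts as sign $+1$):

```python
# hinge and 0-1 losses for a real-valued prediction and a label y in {-1, +1}
def hinge(pred, y):
    return max(0.0, 1.0 - y * pred)

def zero_one(pred, y):
    return float((pred >= 0) != (y == 1))  # 1 exactly on a sign mismatch

assert hinge(1, 1) == 0 and zero_one(1, 1) == 0     # magnitude 1, correct sign: both 0
assert hinge(-1, 1) == 2 and zero_one(-1, 1) == 1   # magnitude 1, wrong sign: gap of 1
assert hinge(0, 1) == 1 and zero_one(0, 1) == 0     # magnitude 0, correct sign: gap of 1
assert hinge(0, -1) == 1 and zero_one(0, -1) == 1   # magnitude 0, wrong sign: both 1
```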
Thus, $$\begin{aligned} \label{eq:expectedgap} \mathbb{E}\left[ \ell^{\textrm{hinge}}(f') - \ell^{01}(f') \, \middle|\, {\widetilde}{A} = 0 \right] &= \mathbb{P}\left( |f'| = 1, \textrm{sign}(f') \neq {\widetilde}{Y} \, \middle|\, {\widetilde}{A} = 0 \right) \\ &+ \mathbb{P}\left( |f'| = 0, \textrm{sign}(f') = {\widetilde}{Y} \, \middle|\, {\widetilde}{A} = 0 \right) \nonumber\end{aligned}$$ Considering each term separately: $$\begin{aligned} \mathbb{P}&\left( |f'| = 1, \textrm{sign}(f') \neq {\widetilde}{Y} \ \middle|\ {\widetilde}{A} = 0 \right) \nonumber\\ =&\ \mathbb{P}\left( f'(-e_1,0) = -1, {\widetilde}{Y} = 1 \ \middle|\ {\widetilde}{X} = -e_1, {\widetilde}{A} = 0 \right)\mathbb{P}\left( {\widetilde}{X} = -e_1 \ \middle|\ {\widetilde}{A} = 0 \right) \\ &+ \mathbb{P}\left( f'(e_1,0) = 1, {\widetilde}{Y} = -1 \ \middle|\ {\widetilde}{X} = e_1, {\widetilde}{A} = 0 \right)\mathbb{P}\left( {\widetilde}{X} = e_1 \ \middle|\ {\widetilde}{A} = 0 \right) \nonumber\\ =&\ \mathbb{P}\left( f'(-e_1,0) = -1 \right)\mathbb{P}\left( {\widetilde}{X} = -e_1, {\widetilde}{Y} = 1 \ \middle|\ {\widetilde}{A} = 0 \right) \label{eq:predsindepY}\\ &+ \mathbb{P}\left( f'(e_1,0) = 1 \right)\mathbb{P}\left( {\widetilde}{X} = e_1, {\widetilde}{Y} = -1 \ \middle|\ {\widetilde}{A} = 0 \right) \nonumber\\ =&\ \mathbb{P}\left( f(-e_1,0) < 0 \right)\eta_{-+} + \mathbb{P}\left( f(e_1,0) \geq 0 \right)\eta_{+-} \label{eq:expectedgap1}\end{aligned}$$ With following from the fact that conditioned on ${\widetilde}{X}$ and ${\widetilde}{A} = 0$, $f'({\widetilde}{X},0)$ is a random variable that is independent of the value of ${\widetilde}{Y}$. 
Similarly, $$\begin{aligned} &\mathbb{P}\left( |f'({\widetilde}{X},0)| = 0, \textrm{sign}(f'({\widetilde}{X},0)) = {\widetilde}{Y} \ \middle|\ {\widetilde}{A} = 0 \right) \nonumber\\ =&\ \mathbb{P}\left( f'(-e_1,0) = 0, {\widetilde}{Y} = 1 \ \middle|\ {\widetilde}{X} = -e_1, {\widetilde}{A} = 0 \right)\mathbb{P}\left( {\widetilde}{X} = -e_1 \ \middle|\ {\widetilde}{A} = 0 \right) \\ &+ \mathbb{P}\left( f'(e_1,0) = -0, {\widetilde}{Y} = -1 \ \middle|\ {\widetilde}{X} = e_1, {\widetilde}{A} = 0 \right)\mathbb{P}\left( {\widetilde}{X} = e_1 \ \middle|\ {\widetilde}{A} = 0 \right) \nonumber\\ =&\ \mathbb{P}\left( f'(-e_1,0) = 0 \right)\mathbb{P}\left( {\widetilde}{X} = -e_1, {\widetilde}{Y} = 1 \ \middle|\ {\widetilde}{A} = 0 \right) \\ &+ \mathbb{P}\left( f'(e_1,0) = -0 \right)\mathbb{P}\left( {\widetilde}{X} = e_1, {\widetilde}{Y} = -1 \ \middle|\ {\widetilde}{A} = 0 \right) \nonumber\\ =&\ \mathbb{P}\left( f(-e_1,0) \geq 0 \right)\eta_{-+} + \mathbb{P}\left( f(e_1,0) < 0 \right)\eta_{+-}\label{eq:expectedgap2}\end{aligned}$$ Combining with and , we see that $$\begin{aligned} \mathbb{E}\left[ \ell^{\textrm{hinge}}(f') - \ell^{01}(f') \ \middle|\ {\widetilde}{A} = 0 \right] =&\ \mathbb{P}\left( f(-e_1,0) < 0 \right)\eta_{-+} + \mathbb{P}\left( f(e_1,0) \geq 0 \right)\eta_{+-} \\ &+ \mathbb{P}\left( f(-e_1,0) \geq 0 \right)\eta_{-+} + \mathbb{P}\left( f(e_1,0) < 0 \right)\eta_{+-} \nonumber\\ =&\ \eta_{-+} + \eta_{+-} \label{eq:thegap}\end{aligned}$$ The 0-1 loss of $f'$ can be decomposed as $$\mathcal{L}_{{\widetilde}{\mathcal{D}}}^{\textrm{01}}(f') = \mathbb{P}({\widetilde}{A} = 0)\mathbb{E}\left[ \ell^{\textrm{01}}(f)\ \middle|\ {\widetilde}{A} = 0 \right] + \mathbb{P}({\widetilde}{A} = 1)\mathbb{E}\left[ \ell^{\textrm{01}}(f)\ \middle|\ {\widetilde}{A} = 1 \right]$$ By , $$\mathbb{P}({\widetilde}{A} = 0)\mathbb{E}\left[ \ell^{\textrm{01}}(f)\ \middle|\ {\widetilde}{A} = 0 \right] = (1- \delta)\left(\mathbb{E}\left[ \ell^{\textrm{hinge}}(f)\ \middle|\ 
{\widetilde}{A} = 0 \right] - \eta_{-+} - \eta_{+-} \right)$$ and since the hinge loss is always an upper bound on the 0-1 loss $$\mathbb{P}({\widetilde}{A} = 1) \mathbb{E}\left[ \ell^{\textrm{01}}(f)\ \middle|\ {\widetilde}{A} = 1 \right] \leq \delta\mathbb{E}\left[ \ell^{\textrm{hinge}}(f)\ \middle|\ {\widetilde}{A} = 1 \right]$$ From Lemma \[lem:hardnessexistsgoodhinge\], the hinge loss of $f'$ (which is at most the hinge loss of $f$) is upper bounded by $2(1-\delta)(\eta_{-+} + \eta_{+-}) + 2\delta + \epsilon$. Thus, $$\begin{aligned} \mathcal{L}_{{\widetilde}{\mathcal{D}}}^{\textrm{01}}(f') &\leq (1- \delta)\left(\mathbb{E}\left[ \ell^{\textrm{hinge}}(f)\ \middle|\ {\widetilde}{A} = 0 \right] - \eta_{-+} - \eta_{+-} \right) + \delta\mathbb{E}\left[ \ell^{\textrm{hinge}}(f)\ \middle|\ {\widetilde}{A} = 1 \right] \\ &= \mathcal{L}_{{\widetilde}{\mathcal{D}}}^{\textrm{hinge}}(f') - (1-\delta)(\eta_{-+} + \eta_{+-}) \\ &\leq (1-\delta)(\eta_{-+} + \eta_{+-}) + 2\delta + \epsilon\end{aligned}$$ Because $f'$ is $\alpha$-discriminatory at threshold $0$ $$\begin{aligned} {\left| \mathbb{P}_{{\widetilde}{{\mathcal}{D}}}(f' \geq 0\ |\ {\widetilde}{Y} = -1, {\widetilde}{A} = 0) - \mathbb{P}_{{\widetilde}{{\mathcal}{D}}}(f' \geq 0\ |\ {\widetilde}{Y} = -1, {\widetilde}{A} = 1) \right|} &\leq \alpha \\ {\left| \mathbb{P}_{{\widetilde}{{\mathcal}{D}}}(f' < 0\ |\ {\widetilde}{Y} = 1,\phantom{-} {\widetilde}{A} = 0) - \mathbb{P}_{{\widetilde}{{\mathcal}{D}}}(f' < 0\ |\ {\widetilde}{Y} = 1,\phantom{-} {\widetilde}{A} = 1) \right|} &\leq \alpha \end{aligned}$$ Let $f''(x) = f'([0,x], 1)$, then $$\begin{aligned} \mathcal{L}_{{\widetilde}{{\mathcal}{D}}}^{01}(f') =&\ \mathbb{P}_{{\widetilde}{{\mathcal}{D}}}({\widetilde}{Y} = -1, {\widetilde}{A} = 0)\mathbb{P}_{{\widetilde}{{\mathcal}{D}}}(f' \geq 0\ |\ {\widetilde}{Y} = -1, {\widetilde}{A} = 0) \\ &+\mathbb{P}_{{\widetilde}{{\mathcal}{D}}}({\widetilde}{Y} = 1,\phantom{-} {\widetilde}{A} =
0)\mathbb{P}_{{\widetilde}{{\mathcal}{D}}}(f' < 0\ |\ {\widetilde}{Y} = 1,\phantom{-} {\widetilde}{A} = 0) \nonumber\\ &+\mathbb{P}_{{\widetilde}{{\mathcal}{D}}}({\widetilde}{Y} = -1, {\widetilde}{A} = 1)\mathbb{P}_{{\widetilde}{{\mathcal}{D}}}(f' \geq 0\ |\ {\widetilde}{Y} = -1, {\widetilde}{A} = 1) \nonumber\\ &+\mathbb{P}_{{\widetilde}{{\mathcal}{D}}}({\widetilde}{Y} = 1,\phantom{-} {\widetilde}{A} = 1)\mathbb{P}_{{\widetilde}{{\mathcal}{D}}}(f' < 0\ |\ {\widetilde}{Y} = 1,\phantom{-} {\widetilde}{A} = 1) \nonumber\\ \geq&\ \mathbb{P}_{{\widetilde}{{\mathcal}{D}}}({\widetilde}{Y} = -1, {\widetilde}{A} = 0)\left(\mathbb{P}_{{\widetilde}{{\mathcal}{D}}}(f' \geq 0\ |\ {\widetilde}{Y} = -1, {\widetilde}{A} = 1) - \alpha \right) \\ &+\mathbb{P}_{{\widetilde}{{\mathcal}{D}}}({\widetilde}{Y} = 1,\phantom{-} {\widetilde}{A} = 0)\left( \mathbb{P}_{{\widetilde}{{\mathcal}{D}}}(f' < 0\ |\ {\widetilde}{Y} = 1,\phantom{-} {\widetilde}{A} = 1) - \alpha \right) \nonumber\\ &+\mathbb{P}_{{\widetilde}{{\mathcal}{D}}}({\widetilde}{Y} = -1, {\widetilde}{A} = 1)\mathbb{P}_{{\widetilde}{{\mathcal}{D}}}(f' \geq 0\ |\ {\widetilde}{Y} = -1, {\widetilde}{A} = 1) \nonumber\\ &+\mathbb{P}_{{\widetilde}{{\mathcal}{D}}}({\widetilde}{Y} = 1,\phantom{-} {\widetilde}{A} = 1)\mathbb{P}_{{\widetilde}{{\mathcal}{D}}}(f' < 0\ |\ {\widetilde}{Y} = 1,\phantom{-} {\widetilde}{A} = 1) \nonumber\\ =&\ \left(\mathbb{P}_{{\widetilde}{{\mathcal}{D}}}({\widetilde}{Y} = -1, {\widetilde}{A} = 0) + \mathbb{P}_{{\widetilde}{{\mathcal}{D}}}({\widetilde}{Y} = -1, {\widetilde}{A} = 1)\right) \mathbb{P}_{{\widetilde}{{\mathcal}{D}}}(f' \geq 0\ |\ {\widetilde}{Y} = -1, {\widetilde}{A} = 1) \\ &+\left(\mathbb{P}_{{\widetilde}{{\mathcal}{D}}}({\widetilde}{Y} = 1,\phantom{-} {\widetilde}{A} = 0) + \mathbb{P}_{{\widetilde}{{\mathcal}{D}}}({\widetilde}{Y} = 1,\phantom{-} {\widetilde}{A} = 1) \right)\mathbb{P}_{{\widetilde}{{\mathcal}{D}}}(f' < 0\ |\ {\widetilde}{Y} = 1,\phantom{-} {\widetilde}{A} = 1) \nonumber\\ 
&+\left(\mathbb{P}_{{\widetilde}{{\mathcal}{D}}}({\widetilde}{Y} = -1, {\widetilde}{A} = 0) + \mathbb{P}_{{\widetilde}{{\mathcal}{D}}}({\widetilde}{Y} = 1,\phantom{-} {\widetilde}{A} = 0) \right)(-\alpha) \nonumber\\ =&\ \mathbb{P}_{{\widetilde}{{\mathcal}{D}}}({\widetilde}{Y} = -1) \mathbb{P}_{{\widetilde}{{\mathcal}{D}}}(f' \geq 0\ |\ {\widetilde}{Y} = -1, {\widetilde}{A} = 1) \\ & + \mathbb{P}_{{\widetilde}{{\mathcal}{D}}}({\widetilde}{Y} = 1)\phantom{-}\mathbb{P}_{{\widetilde}{{\mathcal}{D}}}(f' < 0\ |\ {\widetilde}{Y} = 1,\phantom{-} {\widetilde}{A} = 1) - \alpha\mathbb{P}_{{\widetilde}{{\mathcal}{D}}}({\widetilde}{A} = 0) \\ =&\ \mathbb{P}_{{\mathcal}{D}}(Y = -1) \mathbb{P}_{{\mathcal}{D}}(f'' \geq 0\ |\ Y = -1) \\ &+ \mathbb{P}_{{\mathcal}{D}}(Y = 1)\mathbb{P}_{{\mathcal}{D}}(f'' < 0\ |\ Y = 1) - \alpha(1-\delta) \nonumber\\ =&\ {\mathcal}{L}_{{\mathcal}{D}}^{01}(f'') - \alpha(1-\delta)\end{aligned}$$ The lemma follows immediately. [^1]: See [@hardt2016equality] for a discussion on why it might be necessary for a non-discriminatory predictor to use $A$ [^2]: See [@daniely2015complexity] for a description of the problem.
Is Z Burger Going to Expand into Sandwiches? At first I thought the Tenleytown Z Burger was launching a new sandwich shop at 4321 Wisconsin Ave, NW. But trying to find info online brought up an old post from Chowhound with a reader noting back in 2008: “there’s also a sandwich place next to that is going to open up, I believe? Sandwish or something?” I peeked inside and it still looks set up for an expansion: There is even a menu: I called the store and they told me that they don’t know if the sandwiches are coming. They might still be focusing on their new SW location and their “coming soon” Columbia Heights location. But they were told that sandwiches could still be coming. The menu says the sandwiches are all $4.19. If they ever do launch, I wonder if that price will still apply… Do you think they should expand into sandwiches or just stick with burgers?
SELLING YOUR HOME? HOW WE TRANSFORMED A MID TERRACED HOME Written by Charlotte Botham “Style is a reflection of your attitude and your personality.” – Shawn Ashmore Everyone has their own style and shows it through what they wear or perhaps through their home interiors. People decorate their homes differently – maybe each room is a different colour,… About A List Insights is not only a blog, but a guide written by experts on everything art, interiors and events. We take you behind the scenes with our designers, discuss the latest interior trends and most importantly, are honest and upfront about our experiences.
Q: Complex Delete SQL Statement Why do delete statements have limitations beyond their select counterparts? I am not stuck because the problem was easy enough to work around, but I'd rather fix my understanding than continue using workarounds. Case in point: I have an undirected edge list with fields V1 and V2. Unfortunately, when creating the table, I introduced duplicates (i.e. V1 -> V2 and V2 -> V1). To identify the dups, I ran the query: SELECT t1.V1, t1.V2 FROM table t1 WHERE t1.V1 > t1.V2 and EXISTS ( SELECT * FROM table t2 WHERE t2.V1 = t1.V2 and t2.V2 = t1.V1 ) since this returned a nice set of rows from the table, I thought I should be able to replace the select line with delete and rerun the same query. SQL Server, however, got mad and gave me red letters instead. Once about syntax, and after a tweak, something about binding multipart identifiers. I managed to store the select in a table variable and get the delete done - which is ok, but what was wrong with my original approach, and could I have done it with a single SQL statement? A: SQL Server is fairly flexible with delete. The double from syntax is allowed pretty much wherever select is: DELETE FROM t1 FROM table t1 WHERE t1.V1 > t1.V2 and EXISTS ( SELECT * FROM table t2 WHERE t2.V1 = t1.V2 and t2.V2 = t1.V1 )
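The double-FROM form above is T-SQL-specific; in engines without it, the same delete can be written with a correlated EXISTS that refers to the outer row by the table's own name. A minimal sketch in Python with SQLite (the `edges` table and its rows are made up for illustration):

```python
import sqlite3

# hypothetical edge table reproducing the question's setup
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE edges (V1 INTEGER, V2 INTEGER)")
conn.executemany(
    "INSERT INTO edges VALUES (?, ?)",
    [(1, 2), (2, 1), (3, 4), (4, 3), (5, 6)],  # (2,1) and (4,3) are mirrored dups
)

# delete the higher-numbered copy of each mirrored pair; inside the subquery,
# the unaliased name "edges" refers to the row being tested by the DELETE
conn.execute("""
    DELETE FROM edges
    WHERE V1 > V2
      AND EXISTS (SELECT 1 FROM edges e2
                  WHERE e2.V1 = edges.V2 AND e2.V2 = edges.V1)
""")
rows = sorted(conn.execute("SELECT V1, V2 FROM edges").fetchall())
print(rows)  # [(1, 2), (3, 4), (5, 6)]
```

The same predicate, with SELECT in place of DELETE, previews exactly the rows that will be removed.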
Every partner will provide "end-to-end" services like integration, launch and operations. And while each of the companies has proposed flying particular equipment, NASA will decide the exact payloads for each flight before the end of the summer. The possible devices will cover everything from general science through to human landing-oriented tasks like identifying the lander position and studying lunar radiation. These are "just the beginning" of NASA's private partnerships, the administration's Thomas Zurbuchen said. There are already more on deck, with private spaceflight companies like Blue Origin and SpaceX currently competing for the chance to build NASA's crewed lunar lander. It's far too soon to know how well this will work, of course, but it's safe to say this corporate-focused approach is a sharp contrast to the Apollo landings from half a century ago.
The present invention relates generally to high speed container placement apparatus and more particularly to a high speed container apparatus for use in combination with filling equipment to achieve accurate net weight product filling of the containers. Government regulations require that the average actual net weight of packaged or containerized consumer products, such as instant coffee, be equal to or above the labelled net weight of the products. To keep abreast of the demand for their products, manufacturers must utilize high speed filling machines which move the containers at constant speeds. Since only volumetric dispensing devices can be used to fill moving containers and the labelled net weight must be close to the actual net weight even when the dispensed volume or the product density are at their lowest levels, the manufacturers often overfill containers with considerable amounts of product as fluctuations in the density of the product and the dispensing volumes occur. To eliminate such inaccuracies, the product should be dispensed by weight into the containers; however, accurate weighing requires a low rate of product flow which, in turn, requires long filling cycles. Ideally, to keep the length of filling time to a minimum, the containers can be first underfilled with the bulk of a product from a volumetric filling machine. Thereafter, these underfilled containers can be topped off with a small amount of product to bring the actual net weight to the labelled net weight in a fairly short time cycle, e.g., under two seconds, utilizing low product flow in a machine dispensing by weight. Since weight dispensing devices cannot travel at high speeds and maintain their accuracy utilizing the desirable low product flow rate, it is necessary to perform the top-off dispensing operation from the stationary dispensing devices. This permits an unimpeded flow of product into the stationary dispensing device as well as accurate weight control.
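The overfill arithmetic above can be illustrated numerically. The sketch below uses entirely hypothetical figures (label weight, nominal density, fluctuation band): a fixed volume sized so even the least dense batch reaches the labelled weight overfills on average, while a volumetric underfill followed by a weight-based top-off lands on the labelled weight exactly.

```python
import random

random.seed(0)

LABEL_G = 200.0          # hypothetical labelled net weight, grams
DENSITY = 0.40           # hypothetical nominal product density, g/ml

def batch_densities(n):
    # density fluctuates +/-5% from batch to batch
    return [DENSITY * random.uniform(0.95, 1.05) for _ in range(n)]

# Volumetric-only filling: the fixed volume must reach the labelled weight
# even for the least dense batch, so on average the container is overfilled.
worst_case_volume = LABEL_G / (DENSITY * 0.95)
vol_only = [worst_case_volume * d for d in batch_densities(1000)]
avg_overfill_vol = sum(vol_only) / len(vol_only) - LABEL_G

# Two-stage filling: volumetric underfill to ~95% of label (sized so even the
# densest batch stays below label), then a weight-based top-off to the label.
underfill_volume = 0.95 * LABEL_G / (DENSITY * 1.05)
underfilled = [underfill_volume * d for d in batch_densities(1000)]
two_stage = [w + (LABEL_G - w) for w in underfilled]   # top off by weight

avg_overfill_two = sum(two_stage) / len(two_stage) - LABEL_G

# Volumetric-only gives roughly 10 g of average giveaway with these numbers;
# the weight-based top-off eliminates it.
print(round(avg_overfill_vol, 2), round(avg_overfill_two, 2))
```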
With the containers travelling at a desired line speed, the containers must be decelerated to a stop underneath the dispensing device for the period of time necessary for the filling operation to be performed at a low rate of product flow and accelerated to restore them to their normal rate of line speed. The conventional means employed to perform these deceleration/acceleration steps utilizes a reciprocating mechanism to position the containers under stationary dispensing devices and return them to the conveyor line. However, these mechanisms are incapable of operating in connection with the high line speeds, even if they handle several containers simultaneously, since the mechanisms are large, bulky and require several time-consuming movements for proper positioning and removal of the containers in relation to the dispensing devices. It is an object of the present invention to provide a novel container placement device for use in conjunction with filling devices which dispense product within a very close tolerance of a desired weight. A further object is to provide such device which decelerates the containers to a stop from a high line speed for the filling operation and accelerates them afterwards to the full line speed. It is also an object to automatically handle each container on an individual basis to insure accurate filling by weight thereof. Still another object is to provide such a device which may be readily fabricated and will enjoy a long life in operation.
Mr. Obama used the press conference to try to raise pressure on Republicans to reopen the government and raise the U.S. borrowing limit, which the Treasury Department has said the country will hit sometime this month. If the debt limit isn’t raised, the U.S. won’t be able to pay some of its bills. The president said if the U.S. defaults on its obligations “there would be a significant risk of a very deep recession.” Some Republicans have said the U.S. will be able to avoid a default by prioritizing some payments without requiring Congress to raise the debt ceiling. Mr. Obama dismissed those suggestions as “irresponsible.” The president didn’t provide details of the contingency plans his administration was making, but noted that Treasury Secretary Jack Lew was set to appear before a Senate committee Thursday, where he can “address some of the additional details about this.” Mr. Obama said: “There is no silver bullet. There is no magic wand that allows us to wish away the chaos that could result if, for the first time in our history, we don’t pay our bills on time.”
Bertie Buse Herbert Francis Thomas "Bertie" Buse (1910–1992) was a cricketer who played 304 first-class matches for Somerset before and after the Second World War. Cricket career Born at Ashley Down, Bristol, on 5 August 1910, Buse was an all-rounder: a dogged right-handed batsman who, in the mobile Somerset batting line-up of the mid-20th century, batted anywhere from No 3 to No 8, and a medium-paced swing bowler who often opened the bowling for the county side. He first played for Somerset in 1929, and then played occasional matches as a professional almost every season through to 1937. In 1938, he went on to the county's staff as a full professional contracted for all matches, and from then until he retired at the end of the 1953 season he was a regular in the county team. Buse's first complete season was 1938 and, according to Wisden, he "seized his chance in great style". In his first season, he scored 1,067 runs and took 61 wickets, and his 132 against Northamptonshire at Kettering was to remain his highest first-class score. Wisden noted that he often batted best when Somerset were in trouble. That first full season set the pattern for the next nine: the 1939 season before the Second World War and the first eight seasons from 1946 after the war. Buse made 1,000 runs in five seasons in all, and more than 900 runs in three others; his batting average never exceeded 27 and never fell below 19; and he scored seven centuries in all. As a bowler, his best season was 1939, when he took 81 first-class wickets, including his career-best eight for 41 in an innings against Derbyshire at Taunton. In eight seasons in all he took more than 50 wickets, and though his average was, for his time, rather high – usually around 30 runs per wicket – he again with his bowling seemed often to do well when others were struggling. Cricket style Buse was one of the distinctive characters in a Somerset side full of characters.
In appearance and manner, he was bustling and rather prim, with a clipped moustache and always-neat hair. As a bowler, he went through a variety of fussy mannerisms before delivering the ball, all of which served only to endear him to Somerset cricket crowds. "There was his studious contemplation, his stuttering approach, the touch of acceleration and the undisguised smile when the batsman failed to counter the late swing," is one description. John Arlott said Buse's run-up was like a butler bringing in the tea. He could bowl both outswingers and inswingers. As a batsman, Buse also had a distinctive style that involved a strange dabbing stroke that steered the ball through the slips or gully towards third man. "There was rather too much posterior," says one book. Benefit match Buse was accorded a benefit match by Somerset in his final first-class season, 1953, and picked the three-day County Championship game against Lancashire at Bath. The match was a sensation, though not to Buse's gain. At the root of the sensation was a newly-laid pitch, which took vicious spin from the start of play. Lancashire's acting captain, the England Test batsman Cyril Washbrook, said later that Lancashire would have refused to play had it not been a benefit match. Somerset batted first and were all out in around 90 minutes for just 55. Off-spin bowler Roy Tattersall unusually opened the bowling, and took seven for 25. No Somerset batsman reached double figures, and Buse made just five. When Lancashire batted, Buse himself proved almost as deadly, taking four of the first five wickets that fell for just 46 runs. But then Peter Marner and Alan Wharton decided to hit out, and put on a stand of 70 in 25 minutes for the sixth wicket. In all, Lancashire totalled 158, made off just 32 overs, with Buse taking six for 41 and the innings finishing by teatime on the first day. Somerset's second innings then proved no better than the first, with Brian Statham joining Tattersall in the wickets. 
Only a last-wicket partnership of 35 by Jim Redman and the debutant Brian Langford delayed matters at all. Somerset were all out for 79, losing by an innings and 24 runs, with Tattersall having match figures of 13 wickets for 69 runs. The game was over by six o'clock on the evening of the first day. With the beneficiary responsible for outgoings as well as income from a benefit match, Buse faced financial disaster from the match. But Somerset waived the match costs and a fund was set up to recompense Buse which raised around £2,800, the kind of sum he might have expected from a game that ran the intended three days. Buse and Somerset returned to Bath for the second match of the cricket festival later that same week. This time, playing against Kent, two innings were completed on the first day, but then Somerset posted a total of more than 400 in their second innings, with both Buse and Harold Gimblett making centuries. Outside cricket Though born in Bristol, most of Buse's life was spent in Bath, where he played club cricket for Bath Cricket Club. He was employed as a solicitor's clerk in the city: Bath solicitors provided Somerset's cricket captains from 1932 to 1946. Outside cricket, he was an all-round sportsman, appearing for Bath rugby club at full-back and also playing table tennis and billiards to a high standard. After retiring from cricket, he coached in South Africa (at King Edward VII School in Johannesburg his pupils included Ali Bacher) and ran a pub for a while before returning to Bath to work on the local evening newspaper. He died in Bath on 23 February 1992.
Cost-effectiveness analysis of clinic-based chloral hydrate sedation versus general anaesthesia for paediatric ophthalmological procedures. The inability of some children to tolerate detailed eye examinations often necessitates general anaesthesia (GA). The objective was to assess the incremental cost effectiveness of paediatric eye examinations carried out in an outpatient sedation unit compared with GA. An episode of care cost-effectiveness analysis was conducted from a societal perspective. Model inputs were based on a retrospective cross-over cohort of Canadian children aged <7 years who had both an examination under sedation (EUS) and examination under anaesthesia (EUA) within an 8-month period. Costs ($CAN), adverse events and number of successful procedures were modelled in a decision analysis with one-way and probabilistic sensitivity analysis. The mean cost per patient was $406 (95% CI $401 to $411) for EUS and $1135 (95% CI $1125 to $1145) for EUA. The mean number of successful procedures per patient was 1.39 (95% CI 1.34 to 1.42) for EUS and 2.06 (95% CI 2.02 to 2.11) for EUA. EUA was $729 more costly on average than EUS (95% CI $719 to $738) but resulted in an additional 0.68 successful procedures per child. The result was robust to varying the cost assumptions. Cross-over designs offer a powerful way to assess costs and effectiveness of two interventions because patients serve as their own control. This study demonstrated significant savings when ophthalmological exams were carried out in a hospital outpatient clinic, although with slightly fewer procedures completed.
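The abstract reports the incremental cost and the incremental effectiveness separately; dividing them gives the implied incremental cost-effectiveness ratio (ICER), which the abstract itself does not state. A quick sketch (note the abstract's 0.68 comes from unrounded means; the rounded means below give 0.67):

```python
# Figures from the abstract; the ICER itself is derived here, not reported.
cost_eus, cost_eua = 406.0, 1135.0   # mean cost per patient, $CAN
eff_eus, eff_eua = 1.39, 2.06        # mean successful procedures per patient

delta_cost = cost_eua - cost_eus     # 729, matching the abstract
delta_eff = eff_eua - eff_eus        # ~0.67 (abstract's 0.68 uses unrounded means)
icer = delta_cost / delta_eff        # $CAN per additional successful procedure

print(round(delta_cost), round(delta_eff, 2), round(icer))  # 729 0.67 1088
```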
Expression of the mdr1 gene in bone and soft tissue sarcomas of adult patients. The expression of the mdr1 gene was evaluated at the RNA level by northern and slot blot analysis, and at the protein level by immunohistochemistry, in a total of 29 bone and 32 soft tissue sarcomas. All patients, mainly adults, had not received previous chemotherapy. Of the tumours investigated, 69% were mdr1-positive. An intermediate mdr1 expression was observed most frequently, with the exception of osteosarcomas (high) and malignant fibrous histiocytomas (low). Detection of P-glycoprotein in selected tumours revealed consistent results. However, no conclusion can be drawn as yet regarding correlation of mdr1 expression and drug resistance in patients.
Q: How to choose between Bean, Boxes and Fieldable Panels Panes? Bean, Boxes and Fieldable Panels Panes all provide similar functionality. I have trouble understanding what exactly the differences between them are. What are their advantages/disadvantages compared to each other? Are they geared towards different use cases? I want to use some kind of blocks in Panels to add custom content - content editors also need to be able to add content, in that sense the blocks I need are not pure configuration. But I also use Features... Edit: I'll add what seem to be the main differences Boxes Largest userbase (~ 11500) Treats blocks as configuration (i.e. the content ends up in your features) Modules offering integration Beans Has recently become popular, ~1000 installs Treats blocks as content, but allows exportability of their configuration via machine name (e.g. via Bean Panels) Modules offering integration Fieldable Panels Panes Smallest userbase (~ 400) From the author of Panels/Views/etc. Panes cannot be used as regular blocks in theme regions like beans or boxes (I assume) I wonder what the advantages over Bean mentioned here are ("offers additional features that makes it easier to empower content admins to lay out certain pages") A: It's perfectly possible to make a D7 site without blocks if you can live without dashboard. Our new - content driven - site is being built on panels with workbench as a suitable (for us) alternative to dashboard. Arjan appears to already understand this. On to the alternatives. Read Your site should be full of beans. The problem with boxes is the danger regarding overwriting existing content when using features. But read Fabian Franz's comment in the same article. Beans has many options. I'm not comfortable regarding management, scalability and performance. Hardcore developers that are fluent with panels use Fieldable Panels Panes. Fieldable Panels Panes lack documentation and examples. 
What should get everyone's focus and effort for D8 is the WSCCI initiative. It allows for REST calls, including, for example, DELETE. This could permanently tackle the problem of overwriting site-builder configuration on code rollouts.
Q: How to add new module to running ejabberd node? I followed this tutorial https://blog.process-one.net/elixir-sips-ejabberd-with-elixir-part-1/ on how to write an ejabberd module. And that works great: I put the module in ejabberd/src and then compile everything. But that seems like a lot of work to me. Every time I change one line of code during development I have to compile ejabberd again from scratch with the changed module. Is there any way that I can compile the module and then just copy it to the ejabberd modules path? If yes, where is the ejabberd modules path? And if yes, what tutorial should I read? A: Example usage: edit src/mod_echo.erl to add some relevant change. And now: $ make Compiled src/mod_echo.erl $ sudo make install ... $ ejabberdctl update_list mod_echo $ ejabberdctl update mod_echo From then on, the new code is running in ejabberd. In your case, you copy your module source files into the ejabberd source path and compile them as if they were any other ejabberd modules. Or you can compile them separately and install the *.beam files together with all the other ejabberd beam files (the location depends on your system).
For the past year, the bridges that cross from Ciudad Juárez to El Paso have been surrounded by encampments of Central Americans waiting to apply for asylum in the United States. As the Trump administration clamped down even further on asylum over the summer and threatened Mexico with tariffs if it didn’t stop more Guatemalans, Hondurans, and Salvadorans from reaching the US border, those camps started to shrink—until recently, when the makeshift shelters began to be occupied by a new group of would-be asylees: Mexicans. On a recent afternoon near the Paso del Norte bridge, I met a petite 23-year-old woman who’s been at the encampment with her husband and three young sons for the past five weeks. They fled the cartel-stricken state of Michoacán to seek asylum and are now sleeping on a sidewalk under a multicolor tarp. “I wish I didn’t have to be here—it’s tough living here,” she told me while sitting on the sidewalk, keeping an eye on her youngest son. It was starting to get cold, and her kids had already gotten sick. But she couldn’t return to her hometown, she said. Before she headed north, a relative was murdered, his chopped-up remains delivered to the family in a bag. Shortly thereafter, there were whispers that a group in her town was kidnapping children, and when her sister, a police officer, confirmed the rumors and told her to flee, her family left immediately. Around her at the encampment, dozens of tents and tarps lined both sides of the small street leading to the international port of entry.
According to lawyers with Catholic Legal Immigration Network (CLINIC) and the American Civil Liberties Union, the first asylum-seeking Mexican families began settling along the border in Juárez in early September, after being turned away at the port of entry by US Customs and Border Protection (CBP). Now, two months later—with violent outbursts on the upswing across Mexico—there are more than 3,000 Mexican asylum seekers spread out in camps and shelters near the three international bridges in Juárez, according to estimates from volunteers, NGOs, and the ACLU. (The increase in asylum seekers coincides with an increase in the number of so-called family units from Mexico apprehended along the border: In the fiscal year that ended on September 30, there were 6,004 Mexican family unit apprehensions, up from about 2,200 in both fiscal 2018 and fiscal 2017.) The thing is, advocates note, US immigration officials are legally obligated to process Mexican asylum seekers at the border. “They’re turning those asylum seekers right back into the country from which they’re fleeing prosecution,” said Shaw Drake, policy counsel with the ACLU Border Rights Center—and that’s in direct violation of US domestic and international laws signed after World War II that ensure that no state turns away refugees fleeing persecution in their home countries. Yet people at the camp say CBP officials send people back across the bridge all the time—a tactic known as metering that limits the number of people asking for asylum on any given day. The young family from Michoacán found this out the hard way upon arriving in Juárez. On their first night there, they walked up the bridge, ready to petition for asylum at the port of entry. But before they could reach the US side of the bridge, CBP agents turned the family back to Mexico and told them they had to get on a list. 
These waitlists started popping up around summer 2018, when Central American migrants trying to present themselves at legal ports of entry were told to wait in Mexico because CBP claimed it couldn’t process them in large groups. As the number of people requesting asylum grew, and as more and more migrant caravans arrived from Central America, the waitlists became the only way to get in front of a CBP official. It could be months until people’s numbers are called. “This is illegal, this is cruel, and this is unfair,” said Tania Marie Guerrero, an attorney with CLINIC who’s working at the camps daily. “Ultimately, it is a judge’s job to decide who gets asylum—it’s not up to CBP or Border Patrol.” Since September, a second set of waitlists has been formed, this one with the names of thousands of Mexican citizens. The afternoon I visited, the list went up to 207 at Paso del Norte. The woman from Michoacán was number 51. If CBP continued calling out only a few numbers each week, she’d still have months to go. Other international bridges in Juárez have longer lists, and since each number is likely to represent a family of three or five, it’s hard to tell exactly how many Mexican citizens are waiting. The camps are growing, and more people are expected to seek asylum in the United States as a result of recent public killings and shootouts across Mexico. Videos of Sinaloa Cartel gunmen shooting at police in broad daylight in a residential neighborhood went viral last month, terrifying the whole country. Last week, in a brutal attack in northern Mexico, nine members of a Mormon family were killed by assailants, who incinerated women and children. President Donald Trump tweeted about the incident, calling drug cartels “monsters” and offering help to Mexico’s president to “wipe them off the face of the earth.” Sen. Lindsey Graham (R-S.C.)
said he would rather go to Syria than some places in Mexico: “There’s some places over there that are completely lawless.” Of course, that’s exactly the argument that immigration lawyers have been making for months in response to the Trump administration’s Migrant Protection Protocols (a.k.a. “Remain in Mexico”) policy, which has forced more than 50,000 migrants, mostly from Central America, to wait out their court proceedings in Mexico. Not only is Mexico unsafe, they contend, but it’s especially dangerous for these migrants in limbo along the border. At an October 29 press conference, CBP Acting Commissioner Mark Morgan attributed the rise of Mexican asylum seekers to smugglers who “started taking out social-media ads and telling Mexican nationals that if you grab a kid it’s your passport to the United States.” That line didn’t sit well with advocates like Guerrero, who visits the camps multiple times a week and holds informational sessions for anyone who will listen to make sense of the newest policy changes regarding asylum in the United States. She constantly hears about the fear and violence families are experiencing across Mexico and about the lack of support from law enforcement. “There’s nothing easy about this,” Guerrero said, pointing to the encampment behind her. During his press conference, Morgan said CBP is working with the White House “and trying to develop new initiatives within the current legal framework that we can apply to the Mexican families as well.” Last week, BuzzFeed News reported on a secretive CBP pilot program in El Paso that significantly reduces the time Mexican families have to prepare their asylum case while in custody—making an already difficult process all but impossible to navigate for the vast majority of asylum seekers, many of whom have no access to legal counsel.
“This places asylum seekers at a complete disadvantage, setting them up for failure,” Guerrero said of the pilot program. In such a short timeframe and without legal representation while in detention, these migrants are “cold, sleepless, hungry, and fearful of people,” yet they are “expected to respond to a series of questions that will determine the rest of their lives.” I reached out to CBP for an interview about how the agency is handling the asylum requests from Mexicans in Juárez but did not get a response. On a recent Tuesday afternoon at the border camp, little kids ran around playing with a ball while teens huddled around a cellphone watching videos. The temperature was starting to drop; adults sat on the sidewalks looking bored and defeated. A woman dunked a loofah in a bucket with soapy water, struggling to get her toddler to stand still while she cleaned him up. Most of the Central Americans who lived here earlier in the year are now staying in one of the many shelters that have popped up throughout the city, including a large facility the Mexican government opened in the summer at the request of the United States to help non-Mexican migrants wait out their day in court. The woman from Michoacán said that if she could’ve stayed in her hometown with her family she would’ve done so. But as I asked more questions about the life she fled back home, she started to avert her eyes and nervously rub her palms together. She explained that families like hers wished they didn’t have to sleep on the streets of Juárez—it’s embarrassing, she said. “Who would want to live like this? It’s cold, and we go hungry sometimes,” she said. “People walk by sometimes—they call us lazy and dirty.” Her eyes filled with tears. Her oldest son, who’s still just seven, wrapped his arms around her and gently placed his head on her shoulder, trying to comfort her.
Dalton Holme Dalton Holme is a civil parish in the East Riding of Yorkshire in England. It is situated to the north-west of the market town of Beverley and covers an area of . It is made up of two villages, South Dalton and Holme on the Wolds, which over the years have become joined. Both the villages are run by the Dalton Estate, owned by the Hotham Family, and are occupied by estate workers and paying tenants. The 18th-century Dalton Hall is the home of Lord Hotham, whose family have owned land in the area for generations. The hall was designated a Grade II* listed building in 1952 and is now recorded in the National Heritage List for England, maintained by Historic England. The spire of St Mary's, the 19th-century church, is over high and can be seen for miles around. It was built to the design of John Loughborough Pearson in 1858 to replace an older parish church. Inside the church are a number of monuments to the Hotham family; the older monuments were transferred from the earlier church. One, in black and white marble, is in memory of John Hotham. It dates from after 1697 and is said to have come from Italy. Sir John is represented in life as a reclining knight in full armour, with his helmet and gauntlet beside him, and in death, as a skeleton. Supporting the four corners of the tomb are statues representing the cardinal virtues. Dalton Estate Office is in the village of South Dalton. The Estate domestic buildings are rows of cottages and Tudor-style houses, some having plates that record dates back to 1706. The local public house is the Pipe and Glass Inn, situated near the entrance gates to the road through Dalton Park, leading to Dalton Hall, west from the village. The Communist Member of Parliament Cecil L'Estrange Malone was born there on 7 September 1890. According to the 2011 UK census, Dalton Holme parish had a population of 198, an increase of one on the 2001 UK census figure.
Novel drimane sesquiterpene esters from Aspergillus ustus var. pseudodeflectus with endothelin receptor binding activity. A series of novel drimane sesquiterpene esters (1-6) was isolated from fermentations of Aspergillus ustus var. pseudodeflectus and their structures elucidated by spectroscopic methods including the HMQC, HMBC and INADEQUATE NMR experiments. The major component of the fermentation, 1, was (2'E,4'E,6'E)-6-(1'-carboxy-2',4',6'-trien)-9-hydroxydrim-7-ene-11,12-olide. Compounds 1, 2, 3 and 5 exhibited endothelin receptor binding inhibitory activity against rabbit endothelin-A and rat endothelin-B receptors with IC50 values in the range 20-150 microM. These compounds had similar levels of activity in assays for binding to human endothelin A and endothelin B receptors. The isolation of 9,11-dihydroxy-6-oxodrim-7-ene, 7, a probable biosynthetic precursor to the drimane esters, is also reported.
1. Introduction TSH has shown an excellent effect in treating lower urinary tract symptoms suggestive of benign prostatic hyperplasia (LUTS/BPH) in clinical trials [@bib0010]. Since patients treated for LUTS/BPH are typically men in their mid-sixties [@bib0015], adverse effects such as asthenia, dizziness and orthostatic hypotension [@bib0020] should be avoided by formulating the drug into a sustained/controlled release preparation. At the same time, such a preparation could improve urination disorder symptoms, thereby reducing the number of times patients get up during the night. To date, the literature has reported that tamsulosin hydrochloride sustained/controlled release pellets have been successfully prepared by different techniques and coating materials. Min-Soo Kim [@bib0025] described tamsulosin hydrochloride controlled release (TSH-CR) pellets prepared using ethylcellulose aqueous dispersion (Surelease®) and sodium alginate. The controlled drug release pattern might be attributed to the nature of sodium alginate, which is soluble at neutral pH but swells below pH 3. When dissolution media (first simulated gastric fluid and then simulated intestinal fluid) penetrated into substrates containing sodium alginate, it first swelled and then dissolved, producing osmotic pressure that assisted TSH in diffusing out of the film formed by Surelease®. Xiong Zhang [@bib0030] described TSH-CR pellets consisting of two different coated pellets mixed at a certain ratio; Eudragit®NE30D and Eudragit®L30D-55 were used as the respective coating materials to achieve two different release rhythms. That author aimed to prepare TSH-CR pellets by mixing two different formulations rather than coating materials. Atsushi Maeda [@bib0035] prepared microparticles intended for orally disintegrating tablets using ethylcellulose aqueous dispersion (Aquacoat®) and Eudragit®NE30D in a single coating step. However, that study only considered release in simulated intestinal fluid.
Jing min Wang [@bib0040] developed TSH-SR pellets with two-layered films: the inner film consisted of Eudragit®NE30D and Eudragit®L30D-55 mixed at a certain ratio, and the outer enteric film was Eudragit®L30D-55. All of the coating materials mentioned above for preparing TSH sustained/controlled release pellets are aqueous dispersions composed of latex particles. Utilizing aqueous dispersions for coating solid dosage forms provides various advantages, such as reduced toxicity and fewer environmental concerns [@bib0045]. However, in the course of film formation, latex particles do not undergo complete coalescence in a short time; this change is not clearly seen until after a period of time [@bib0050]. As a result, the mechanical properties of the film might vary, thereby altering the release behavior of the formulation during long-term storage. The investigations above did not address the release stability of their final products, yet from the process of film formation it can be inferred that formulations prepared with aqueous dispersions might become invalid in the long run. Therefore, investigating and solving the release stability of sustained/controlled release formulations coated with aqueous dispersions is of great significance. In the present study, TSH-SR pellets were prepared by a frame-controlled technique. TSH was added to the Eudragit®NE30D and Eudragit®L30D-55 polymers to form a drug-loaded inner core. Afterwards, an enteric Eudragit®L30D-55 film was applied to the surface to give the final product. A fluidized bed coater was employed to coat the TSH-containing sustained release and enteric films. The product not only achieved release behavior similar to that of Harnal®, but also showed better release stability than TSH film-controlled pellets during coating, curing and storage.
Then free films and blank sustained release pellets containing different ratios of Eudragit^®^NE30D and Eudragit^®^L30D-55 polymers were designed and prepared to further verify the factors affecting film formation and mechanical properties. Additionally, the drug release mechanism of the TSH-SR pellets was examined by observing their surface morphology after dissolution. 2. Materials and methods {#s0015} ======================== 2.1. Materials {#s0020} -------------- TSH (99.8% purity) was purchased from Zhejiang Jinhua Pharmaceutical Co. Ltd. (Zhejiang, China). Microcrystalline cellulose (MCC) (WJ-101) and hydroxypropyl methylcellulose-E5 (HPMC-E5) were purchased from Shanhe Pharmaceutical Co. Ltd. (Anhui, China). Methacrylic acid copolymers (Eudragit^®^NE30D and Eudragit^®^L30D55) were kindly provided by Degussa (Essen, Germany). The TSH controlled-release brand capsule was Harnal (0.2 mg, Yamanouchi Pharmaceutical Co. Ltd., Japan). All organic solvents were of high-performance liquid chromatography (HPLC) grade; all other chemicals were of analytical grade. 2.2. Methods {#s0025} ------------ ### 2.2.1. Preparation of MCC blank pellets {#s0030} Microcrystalline cellulose (MCC) served as the framework and dry binder of the blank pellets, while redistilled water was the wet binder. The pellets were prepared in a centrifugal granulator by the powder coating technique. The wet granules were dried in a 40 °C oven for 6 h, and the pellets were then sieved through a 40--50 mesh screen. ### 2.2.2. Preparation of TSH-SR pellets {#s0035} Four hundred grams of MCC blank pellets were coated with a sustained release film containing TSH and then with an enteric film. The dispersion used for the inner TSH-containing film comprised Eudragit^®^NE30D and Eudragit^®^L30D-55. Talc (at 50% of the Eudragit^®^NE30D dry polymer weight), homogenized with purified water for 10 min, was added to the Eudragit^®^NE30D.
Then Eudragit^®^L30D-55, previously plasticized with 20% triethyl citrate (TEC) based on dry polymer weight, was added to the Eudragit^®^NE30D dispersion. Finally, TSH with a proper amount of purified water was added to the above dispersion. Coating was performed in a fluidized bed coater using the processing parameters listed in [Table 1](#t0010){ref-type="table"} until the desired theoretical weight gain was reached. After coating, the pellets were dried at 40 °C for 24 h. An outer enteric film of Eudragit^®^L30D-55, plasticized with 20% TEC based on dry polymer weight, was then coated at the proper weight gain. The final product was dried at 40 °C for 2 h.

Table 1. The coating process parameters of the fluidized bed coater.

| Parameter            | Value         |
|----------------------|---------------|
| Coating technique    | Bottom spray  |
| Air source pressure  | 5.0 bar       |
| Atomization pressure | 1.8 bar       |
| Inlet temperature    | 25 °C         |
| Outlet temperature   | 20 °C         |
| Air flow             | 55--60 m^3^/h |
| Spray rate           | 6 ml/min      |

### 2.2.3. Preparation of TSH film-controlled pellets {#s0040} Three hundred eighty grams of MCC blank pellets were placed in the centrifugal granulator. TSH was dissolved in 400 ml of purified water containing 0.4% (m/m) HPMC-E5, and this drug solution was sprayed onto the surface of the MCC blank pellets. Appropriate amounts of MCC powder were added now and then to keep the pellets from sticking to each other. The product was dried at 40 °C for 6 h. After desiccation, 400 g of the drug-containing pellets were coated with Eudragit^®^NE30D and Eudragit^®^L30D-55 polymers at the optimized 22:1 ratio. The method of preparing the polymer dispersions and the coating process parameters were the same as for the TSH-SR pellets, except that no TSH solution was added and no enteric film was applied. ### 2.2.4. Dissolution study of the TSH-SR and TSH film-controlled pellets {#s0045} The in vitro drug dissolution test was performed using a USP XXVII Type 2 dissolution apparatus (paddle method). Dissolution studies were performed in 500 ml of a step-function pH media.
These included simulated gastric fluid (SGF) containing 0.003% (w/w) polysorbate 80 (pH 1.2) and simulated intestinal fluid (pH 7.2). Two hours after the start of the test, 10.0 ml of the test solution was withdrawn. The SGF was then drained through a 100-mesh screen, and the screen was rinsed while pre-warmed pH 7.2 buffer was added. At each predetermined time point, 10.0 ml of solution was withdrawn and replaced by the same volume of fresh solution, up to 4 h in the pH 7.2 buffer. All sample solutions were filtered through a 0.22 µm membrane filter and analyzed by HPLC [@bib0040]. ### 2.2.5. Preparation of Eudragit^®^NE30D and Eudragit^®^L30D-55 free films {#s0050} Eudragit^®^L30D-55 plasticized with 20% TEC (based on dry polymer weight) was added to Eudragit^®^NE30D that had been diluted with an equal volume of purified water. The ratios of Eudragit^®^NE30D to Eudragit^®^L30D-55 were 5:1, 10:1 and 20:1. The blended dispersions were stirred for 1 h, cast on round watch glasses (diameter: 12 cm, total dispersion weight: 20 g), and dried at 40 °C for 24 h. The thickness of the dry film was approximately 300 µm. ### 2.2.6. Preparation of Eudragit^®^NE30D and Eudragit^®^L30D-55 MCC blank sustained release pellets {#s0055} Unlike in 2.2.5, the Eudragit^®^NE30D was diluted not with an equal volume of distilled water but with a talc aqueous dispersion (talc at 50% of the Eudragit^®^NE30D dry polymer weight). The dispersion was then sprayed onto MCC blank pellets in the fluidized bed coater to an appropriate weight gain and dried at 40 °C for 24 h. ### 2.2.7. Stability analysis {#s0060} TSH-SR pellets were investigated at different coating weight gains, relative humidities, curing times and temperatures, and compared with TSH film-controlled pellets. The cured TSH-SR and TSH film-controlled pellets were then stored at 40 °C/0% RH for 90 d and at 25 °C (29%, 51%, 75%, 84% and 92.5% RH) for 30 d, and the dissolution curves under the different conditions were recorded.
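The withdraw-and-replace sampling scheme of the dissolution study (each 10.0 ml aliquot replaced by fresh medium, Section 2.2.4) implies a cumulative-release correction, since drug carried out with earlier samples would otherwise be under-counted at later time points. The sketch below illustrates the standard correction; the function name and the concentration values are hypothetical, not taken from the paper.

```python
# Cumulative-release correction for withdraw-and-replace sampling.
# At each time point 10.0 ml is withdrawn and replaced with fresh medium,
# so the drug removed in earlier samples must be added back.

V_VESSEL = 500.0  # ml, dissolution medium volume (Section 2.2.4)
V_SAMPLE = 10.0   # ml, volume withdrawn at each time point

def cumulative_release(concentrations_mg_per_ml):
    """Return cumulative drug amounts (mg) corrected for sampling loss."""
    released = []
    removed_so_far = 0.0  # mg carried out by previous withdrawals
    for c in concentrations_mg_per_ml:
        q = c * V_VESSEL + removed_so_far  # amount in vessel + amount removed
        released.append(q)
        removed_so_far += c * V_SAMPLE     # this sample removes c * 10 ml
    return released

# Hypothetical measured concentrations (mg/ml) at successive time points:
profile = cumulative_release([0.0001, 0.0002, 0.0003])
```

Each withdrawn aliquot removes c × 10 ml of drug from the vessel, so later readings are corrected by the running total of drug removed; without the correction, the cumulative curve would be systematically low.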
Free films and MCC blank sustained release pellets with different ratios of Eudragit^®^NE30D and Eudragit^®^L30D-55 were stored at 40 and 60 °C/0% RH and at 25 °C/75% and 92.5% RH for 10 and 30 d, respectively. The appearances and Tgs of the free films and the surface morphologies of the blank sustained release pellets were observed and measured under the different conditions. ### 2.2.8. Thermal analysis of free films {#s0065} Thermal analysis was performed on film samples using a modulated differential scanning calorimeter. Three milligrams of free film were accurately weighed into aluminum pans, which were then sealed. The samples were first cooled to −40 °C under a nitrogen atmosphere and then heated at a constant rate of 3 °C/min up to 100 °C. The glass transition temperature (Tg) was recorded as the midpoint of the transition in the reversible heat flow. ### 2.2.9. Scanning electron microscopy {#s0070} To evaluate the surface of the coated polymer film, scanning electron microscopy images of the TSH-SR pellets were taken before and after dissolution to further analyze the drug release mechanism. In addition, the surface morphologies of the Eudragit^®^NE30D and Eudragit^®^L30D-55 MCC blank sustained release pellets after 6 h of dissolution were observed under the different storage conditions. 3. Results and discussions {#s0075} ========================== 3.1. Ratio of Eudragit^®^NE30D and Eudragit^®^L30D55 of TSH-containing film {#s0080} --------------------------------------------------------------------------- Three ratios of Eudragit^®^NE30D to Eudragit^®^L30D55 (18:1, 12:1 and 8:1) were examined in the present study. As shown in [Fig. 1](#f0015){ref-type="fig"}, during the first 2 h in SGF the release rates were almost identical at the same weight gain. However, when the dissolution medium was replaced with pH 7.2 buffer, the three ratios differed significantly. This is because both Eudragit^®^NE30D and Eudragit^®^L30D55 are insoluble in an acidic environment.
In SGF, TSH release therefore depended chiefly on diffusion through the particle--particle intervals. In pH 7.2 buffer, by contrast, Eudragit^®^L30D55 leaches out of the film and forms a porous structure that increases drug mobility, so a fast release tendency was seen during 2--3 h. During 4--6 h, however, TSH exhibited a delayed release profile, indicating that the remaining TSH had difficulty escaping the block formed by the inner polymer film. In short, the release tendency was initially ascendant but subsequently plateaued. Comparing the release curves of the three ratios, the 12:1 ratio showed the best potential for reaching a release similar to Harnual^®^.Fig. 1Release profiles of different ratios of Eudragit^®^NE30D and Eudragit^®^L30D55 of TSH-containing films and Harnual^®^. (Mean ± SD, n = 3).Fig. 1 3.2. Weight gain of Eudragit^®^L30D55 of outer enteric film {#s0085} ----------------------------------------------------------- The outer enteric film was intended to lower the overall release trend, because both the 2 h and 3 h release rates of the inner TSH-containing film were relatively fast compared with Harnual^®^. Three coating levels of the outer film were tried to reach the ideal profiles. [Fig. 2](#f0020){ref-type="fig"} shows that at 0.6% weight gain only the 2 h release rate decreased, probably because the thin enteric film ruptured rapidly in pH 7.2 buffer. At 2%, the other time points also decreased to varying degrees in addition to the 2 h point. The likely explanation is that TSH is a weak base salt: during the first 2 h in SGF, as water penetrated the intact enteric film, TSH was hydrolyzed and released hydrogen ions, maintaining an acidic microenvironment that hindered the Eudragit^®^L30D55 from dissolving in the pH 7.2 buffer. At 3%, the release behavior was similar to that at 2%. The weight gain of the outer enteric film was therefore set at 2--3%.Fig.
2Release profiles of weight gains of Eudragit^®^L30D55 of outer enteric film and Harnual^®^. (Mean ± SD, n = 3).Fig. 2 3.3. Stability analysis of TSH-SR and TSH film-controlled pellets {#s0090} ----------------------------------------------------------------- ### 3.3.1. Under different coating and curing conditions {#s0095} [Fig. 3A and 3B](#f0025){ref-type="fig"} and [Fig. 4A and 4B](#f0030){ref-type="fig"} show that, compared with TSH film-controlled pellets, TSH-SR pellets tolerated a wide range of coating weight gains and relative humidities during coating. The final release profiles changed little over a 20--28% coating weight and 40--60% relative humidity range, whereas those of the film-controlled pellets changed significantly even over a 9--11% coating weight and 50--60% relative humidity range, demonstrating that the TSH-SR pellets were more stable during the coating process. In addition, as [Fig. 5A and 5B](#f0035){ref-type="fig"} show, the release profiles of the TSH-SR pellets did not change dramatically over 4--24 h of curing at either 40 or 60 °C, although release was lower at 60 °C than at 40 °C. By contrast, [Fig. 6A and 6B](#f0040){ref-type="fig"} reveal that the release of the TSH film-controlled pellets decreased notably over both the 40--60 °C and the 0--24 h range, making it clear that release profiles can be altered by curing temperature and time and that the TSH-SR pellets were the more stable of the two during curing.Fig. 3Release profiles of different weight gains of A: TSH-SR pellets and B: TSH film-controlled pellets. (Mean ± SD, n = 3).Fig. 3Fig. 4Release profiles of different relative humidity in coating process of A: TSH-SR pellets and B: TSH film-controlled pellets. (Mean ± SD, n = 3).Fig. 4Fig. 5Release profiles of different curing time points of TSH-SR pellets at A: 40 °C and B: 60 °C. (Mean ± SD, n = 3).Fig. 5Fig. 6Release profiles of different curing time points of TSH film-controlled pellets at A: 40 °C and B: 60 °C.
(Mean ± SD, n = 3).Fig. 6 ### 3.3.2. Under accelerated temperature conditions {#s0100} Latex particles in aqueous dispersions cannot coalesce completely at once; coalescence proceeds in two stages. In the first stage, water evaporates at a constant rate and the latex particles concentrate at the surface of the substrate, so a transparent film forms. The outer particles, now in contact with each other, then deform irreversibly as water evaporates at a reduced rate [@bib0055]. The second stage occurs during curing: the inner rather than the outer particles gradually gather and amalgamate as inter-particle water diffuses through the continuous outer film, forming an entirely homogeneous film and developing its mechanical properties. This final stage can continue long after the initial film forms, and as coalescence proceeds the free volume of the polymer decreases [@bib0060], changing the mechanical properties and dissolution behavior of the formulation. In the present study, 40 °C was chosen as a stability factor because an elevated storage temperature accelerates the coalescence of latex particles, so the release stability of sustained/controlled release formulations could be verified quickly. [Fig. 7A and 7B](#f0045){ref-type="fig"} show that the TSH-SR pellets remained stable, judged from their dissolution profiles, over 90 d. This is probably because the water-soluble TSH acted as a hydrophilic pore-former in the water-insoluble Eudragit^®^NE30D and Eudragit^®^L30D55 polymer film in simulated gastric fluid, while in simulated intestinal fluid both TSH and Eudragit^®^L30D55 acted as pore-formers. Treating TSH and the polymers as a whole, Fick\'s law can be applied to explain why the TSH-SR pellets remained stable over 90 d:$$Q = \frac{DS\left( {C_{s} - C} \right)t}{h}$$Fig.
7Release profiles of different time points at 40 °C/0% RH of A: TSH-SR pellets and B: TSH film-controlled pellets. (Mean ± SD, n = 3).Fig. 7 where Q is the quantity of drug released in time t, h is the film thickness, C~s~ is the drug concentration in the preparation, C is the drug concentration in the dissolution medium, D is the diffusion coefficient of the drug and S is the area of the formulation. The diffusion coefficient D has been modified by Iyer to account for the recognized film structure [@bib0065]:$$D = \frac{D_{w}e}{\tau}$$ where D~w~ is the diffusion coefficient in the medium, e is the porosity factor and τ is the tortuosity factor. In the final stage of film formation, coalescence reduces the porosity factor and thereby alters the drug diffusion coefficient and the release behavior. In the TSH-SR pellets, however, TSH formed a water-soluble phase around the insoluble polymer, hindering further coalescence of the latex particles. The TSH film-controlled pellets lacked such a structure, so their drug diffusion coefficient decreased, leading to a significant decrease in release rate. ### 3.3.3. Under different storage relative humidity conditions {#s0105} Water in the air can act as a plasticizer [@bib0070]. Adding plasticizer to a polymer has great advantages, such as lowering Tg and increasing elongation, making the polymer more flexible and soft during coating [@bib0075]. The final product usually has an intact structure and a continuous polymer film, so burst release can be avoided [@bib0045]. Nevertheless, the type and quantity of plasticizer can affect both the slope and the shape of the release curves [@bib0080]. The release behavior of the TSH-SR pellets was investigated under different relative humidity conditions to verify whether the formulation could resist humidity. The results in [Fig.
8A](#f0050){ref-type="fig"} revealed that the release rate of the TSH-SR pellets decreased at all humidities except 29% RH, while [Fig. 8B](#f0050){ref-type="fig"} showed that the TSH film-controlled pellets decreased even at 29% RH. It was inferred that under humidity the film formed a structure unlike that seen under the accelerated temperature condition: such a flexible, soft film made it difficult for the media to penetrate both the outer and the inner film. When water reached the polymer film, it occupied the spaces between the particle--particle chains and weakened their interaction, so the diffusion coefficient still decreased even though TSH had been added to the two polymers. These results show that relative humidity cannot be neglected during storage and remains a difficult problem to solve.Fig. 8Release profiles of different relative humidity for 30 d of A: TSH-SR pellets and B: TSH film-controlled pellets. (Mean ± SD, n = 3).Fig. 8 3.4. Appearance and Tg of Eudragit^®^NE30D and Eudragit^®^L30D-55 free film {#s0110} --------------------------------------------------------------------------- A film containing two polymers is usually more complex than one containing a single polymer, since two polymers with different properties may become incompatible during mixing, coating and storage. As a result, the characteristics of the free film and the dissolution profiles of the solid formulation change [@bib0045]. We therefore considered it essential to study the appearance of two-polymer films and the factors that might cause a potential decrease in release rate. Considering the factors at play during coating, curing and storage, temperature and relative humidity were chosen for investigating the free films. Weijia Zheng had shown that Eudragit^®^NE30D and Eudragit^®^L30D-55 are compatible at ratios above 4:1 [@bib0085].
We designed three ratios (5:1, 10:1 and 20:1) of Eudragit^®^NE30D and Eudragit^®^L30D-55 films, within the compatibility limit and meeting sustained/controlled release kinetics, stored at 40 and 60 °C/0% RH and at 25 °C/75% and 92.5% RH. Tg is a significant parameter in the coating and storage of sustained/controlled release formulations, representing the chain mobility of the polymer [@bib0090]. Generally, the lower the Tg, the more flexible the polymer; the purpose of adding plasticizer to some coating polymers is to decrease Tg and thereby obtain a compact film. The appearances and Tgs of the free films are listed in [Table 2](#t0015){ref-type="table"}. As the ratio of the two polymers increased, the films went from translucent to transparent in appearance and their Tg decreased. In addition, at 40 and 60 °C the films became tough, while at 75% and 92.5% RH the films were flexible and their Tgs decreased. This indicated that Eudragit^®^NE30D and Eudragit^®^L30D-55 aqueous dispersions are easily affected by temperature and relative humidity over a wide range. Variations in appearance and Tg signal, in a sense, transformations in mechanical character: temperature made the films smooth and dense, while moisture interposed itself between the polymer chains to make the films more flexible.
Therefore the release rate was delayed by either temperature or moisture in the end.

Table 2. Appearances and glass transition temperatures of films made of different ratios of Eudragit^®^NE30D and Eudragit^®^L30D-55 polymers.

| Ratio | Condition      | Appearance                         | Tg (°C) |
|-------|----------------|------------------------------------|---------|
| 5:1   | 0 d            | Translucent, not smooth            | 17.0    |
| 5:1   | 40 °C, 10 d    | Translucent, not smooth and tough  | --      |
| 5:1   | 60 °C, 10 d    | Translucent, not smooth and tough  | --      |
| 5:1   | 75% RH, 10 d   | Translucent, incompatible and soft | 12.5    |
| 5:1   | 92.5% RH, 10 d | Translucent, incompatible and soft | 12.0    |
| 10:1  | 0 d            | Translucent, smooth                | 11.0    |
| 10:1  | 40 °C, 10 d    | Translucent, smooth and tough      | --      |
| 10:1  | 60 °C, 10 d    | Translucent, smooth and tough      | --      |
| 10:1  | 75% RH, 10 d   | Translucent, incompatible and soft | 7.0     |
| 10:1  | 92.5% RH, 10 d | Translucent, incompatible and soft | 6.0     |
| 20:1  | 0 d            | Transparent, smooth                | 3.0     |
| 20:1  | 40 °C, 10 d    | Transparent, smooth and tough      | --      |
| 20:1  | 60 °C, 10 d    | Transparent, smooth and tough      | --      |
| 20:1  | 75% RH, 10 d   | Transparent, smooth and soft       | 1.0     |
| 20:1  | 92.5% RH, 10 d | Transparent, smooth and soft       | −2.5    |

3.5. Scanning electron microscopy analysis {#s0115} ------------------------------------------ [Fig. 9](#f0055){ref-type="fig"} leads to the following observations. Before dissolution the surface of the TSH-SR pellets was intact and smooth, indicating that the final formulation had been successfully prepared with the optimized processing parameters. After 2 h of dissolution, irregularities and pores appeared on the surface, presumably because plasticizer had exuded into the water. At 3, 4 and 6 h, fragments appeared and the surface looked flatter than at 2 h, because Eudragit^®^L30D-55 began to dissolve gradually during 2--3 h. From 3 to 6 h the surface morphology remained similar, so diffusion was concluded to be the main release mechanism in the later stage. A higher coating weight gain may explain the surface structure of the TSH-SR pellets after 3 h of dissolution, and correlates with the shape of the release curves.
In the later stages especially, the delayed drug release tendency is well explained: firm penetration barriers made it harder for the drug to escape the film.Fig. 9Images of surface morphology of TSH-SR pellets before and after dissolution (A: before dissolution, B: after dissolution 2 h, C: after dissolution 3 h, D: after dissolution 4 h, E: after dissolution 6 h, F: before dissolution in general).Fig. 9 Similar patterns were found in the surface images of the Eudragit^®^NE30D and Eudragit^®^L30D-55 MCC blank sustained release pellets at different ratios after dissolution under different storage conditions. As shown in [Fig. 10](#f0060){ref-type="fig"}, as temperature and relative humidity increased, the fragments remaining on the surface of the sustained release pellets not only increased but also gathered together at a given ratio of Eudragit^®^NE30D blended with Eudragit^®^L30D55, indicating that the release profiles of formulations coated with these blends over the 5:1 to 20:1 ratio range might change. At the same ratio, the pellet surface was flatter at 25 °C/75% and 92.5% RH than at 40 and 60 °C/0% RH, indicating that the film structure differed under the different influencing factors. These micro-morphological conclusions are consistent with the appearances and Tgs of the free films above.Fig. 10Images of surface morphology of different ratios (A: 5:1, B: 10:1 and C: 20:1) of Eudragit^®^NE30D and Eudragit^®^L30D55 MCC-blank pellets in different conditions after dissolution. (1: 0 d, 2: 40 °C for 30 d, 3: 60 °C for 30 d, 4: 75% RH for 30 d, 5: 92.5% RH for 30 d).Fig. 10 4. Conclusion {#s0120} ============= TSH-SR pellets with a release similar to Harnual^®^ were successfully prepared.
Of utmost significance, the formulation was relatively insensitive to temperature and relative humidity, which is a clear advantage during coating and long-term storage over film-controlled formulations using Eudragit^®^NE30D and Eudragit^®^L30D-55 aqueous dispersions, provided it is kept at an adequately low relative humidity. Adding the main drug (TSH) to the polymer film may thus be a method of increasing the release stability of a formulation. From the appearances and Tgs of the free films and the surface morphologies of the blank sustained release pellets produced with Eudragit^®^NE30D and Eudragit^®^L30D-55, care should be taken when using these polymers in coating, and strategies must be developed in the future to improve the poor release stability of aqueous dispersion coatings. This study offers suggestions to other researchers investigating sustained/controlled release preparations coated with Eudragit^®^NE30D blended with Eudragit^®^L30D-55. The authors have no other relevant affiliations or financial involvement with any organization or entity with a financial interest in or financial conflict with the subject matter or materials discussed in the manuscript apart from those disclosed. Peer review under responsibility of Shenyang Pharmaceutical University.
use crate::{
    proc_macros::IntoRenderObject,
    render_object::*,
    utils::{Brush, Point, Rectangle},
};
use memchr::memchr_iter;
use std::iter;

/// Used to render a text.
#[derive(Debug, IntoRenderObject)]
pub struct TextRenderObject;

impl RenderObject for TextRenderObject {
    fn render_self(&self, ctx: &mut Context, global_position: &Point) {
        let (bounds, text, foreground, font, font_size, offset) = {
            let widget = ctx.widget();
            let text = text(&widget);
            let offset = *widget.get::<f64>("offset");

            // Fall back to the water mark when no text is set.
            let txt = {
                if !text.is_empty() {
                    text
                } else {
                    widget.clone_or_default::<String>("water_mark")
                }
            };
            (
                *widget.get::<Rectangle>("bounds"),
                txt,
                widget.get::<Brush>("foreground").clone(),
                widget.get::<String>("font").clone(),
                *widget.get::<f64>("font_size"),
                offset,
            )
        };

        // Nothing visible to draw.
        if bounds.width() == 0.0
            || bounds.height() == 0.0
            || foreground.is_transparent()
            || font_size == 0.0
            || text.is_empty()
        {
            return;
        }

        ctx.render_context_2_d().begin_path();
        ctx.render_context_2_d().set_font_family(font);
        ctx.render_context_2_d().set_font_size(font_size);
        ctx.render_context_2_d().set_fill_style(foreground);

        // Draw each line separately: memchr_iter yields the index of every
        // '\n' byte, and the chained text.len() closes the final line.
        let mut y_disp = 0.0;
        let mut last_ofs = 0;
        for i in memchr_iter(b'\n', text.as_bytes()).chain(iter::once(text.len())) {
            ctx.render_context_2_d().fill_text(
                &text[last_ofs..i],
                global_position.x() + bounds.x() + offset,
                global_position.y() + bounds.y() + y_disp,
            );
            y_disp += font_size * 1.15; // TODO: Make the space between lines customizable
            last_ofs = i + 1; // + 1 to skip the end of line character
        }

        ctx.render_context_2_d().close_path();
    }
}

// Resolve the text to render: prefer the localized text when localization is
// enabled, then the plain "text" property, then an empty string.
fn text(widget: &WidgetContainer) -> String {
    if let Some(localizable) = widget.try_get::<bool>("localizable") {
        if *localizable {
            if let Some(localized_text) = widget.try_get::<String>("localized_text") {
                if !localized_text.is_empty() {
                    return localized_text.clone();
                }
            }
        }
    }

    if let Some(text) = widget.try_get::<String>("text") {
        return text.clone();
    }

    String::default()
}
/* * Licensed under the Apache License, Version 2.0 (http://www.apache.org/licenses/LICENSE-2.0) * See https://github.com/aspnet-contrib/AspNet.Security.OpenIdConnect.Server * for more information concerning the license and the contributors participating to this project. */ using AspNet.Security.OpenIdConnect.Primitives; using Microsoft.IdentityModel.Tokens; using Microsoft.Owin; using Microsoft.Owin.Security; namespace Owin.Security.OpenIdConnect.Server { /// <summary> /// Represents the context class associated with the /// <see cref="OpenIdConnectServerProvider.DeserializeAccessToken"/> event. /// </summary> public class DeserializeAccessTokenContext : BaseDeserializingContext { /// <summary> /// Creates a new instance of the <see cref="DeserializeAccessTokenContext"/> class. /// </summary> public DeserializeAccessTokenContext( IOwinContext context, OpenIdConnectServerOptions options, OpenIdConnectRequest request, string token) : base(context, options, request) { AccessToken = token; } /// <summary> /// Gets or sets the validation parameters used to verify the authenticity of access tokens. /// Note: this property is only used when <see cref="SecurityTokenHandler"/> is not <c>null</c>. /// </summary> public TokenValidationParameters TokenValidationParameters { get; set; } = new TokenValidationParameters(); /// <summary> /// Gets or sets the data format used to deserialize the authentication ticket. /// Note: this property is only used when <see cref="SecurityTokenHandler"/> is <c>null</c>. /// </summary> public ISecureDataFormat<AuthenticationTicket> DataFormat { get; set; } /// <summary> /// Gets or sets the security token handler used to /// deserialize the authentication ticket. /// </summary> public SecurityTokenHandler SecurityTokenHandler { get; set; } /// <summary> /// Gets the access token used by the client application. /// </summary> public string AccessToken { get; } } }
Feature: This machine combines pneumatic, mechanical and electrical systems to clamp, heat and mold automatically. It offers high production efficiency and easy operation, with easy material feeding and little material waste. It is suitable for male punch molding and for molding high-depth products with stable quality. The newest style of far-infrared heating element with automatic temperature control heats evenly with low energy consumption. Air cooling and atomized water spray cool the products quickly so they mold well, reducing defective products. The electric controls simplify operation and adapt to different positions and angles of operation. The height of the upper mould is adjusted by a motor-driven limit, making adjustment convenient and quick.
Kancor Mints & Menthols

Mints and menthols are integral ingredients used in many personal and health care products, foods and beverages. Kancor is committed to providing unique, quality mint and menthol solutions to its customers.
Your politics and campaign are clearly not aligned with the themes we portrayed in our series. The only relevant comparison that I see between your campaign and Friday Night Lights is in the character of Buddy Garrity, who turned his back on American car manufacturers by selling imported cars from Japan. Fun twist: Buzz Bissinger, who wrote the book on which Berg's movie and TV series are based, is a Romney supporter.
There were significant reasons why some Baptist converts decided to become pastors. First, they wanted to change for the better. A growing resistance against Spanish colonialism and Roman Catholicism encouraged them to find alternatives and to commit themselves to a serious study of Christianity. Second, the coming of Protestant missionaries became an avenue through which they were able to read the Bible. By reading the Gospel they became aware of their Christian responsibility and ultimately decided to become pastors. Yet the content of their pastoral ministry was based not only on the Bible but also on their culture and on the American way of life as introduced by the American Baptist missionaries. (Page 29)

Socio-Economic Status of Pioneering Pastors

The majority of the early Baptist converts were poor peasants living in Western Visayas. Most of them were farmers and skilled workers from the countryside. A number of them worked as carriage makers and cocheros. Those in the educated class were hacienda owners and teachers in Spanish schools. As the Baptist mission commenced, some of them became apprentices in the Baptist printing press, where they got hold of the Bible. Men and women pastors were instrumental in spreading the good news, especially in the rural areas. They were respected and gained social status as leaders of churches even though their economic status was almost the same as that of the members of the churches they worked with. (Page 34)

Political Perspectives

The early Baptist pastors had nationalist tendencies, since they were part of a people who resisted the unjust and exploitative Spanish colonial rule and who struggled to be free as a nation. At the height of the national fervor to achieve independence from Spain and then from American expansionism, Protestant missions came into the picture. It is not surprising, therefore, that many Baptist preachers were nationalists.
The first preachers trained by Missionary Briggs came from Luzon; they were revolutionaries who had joined the struggle against Spanish colonialism. Some of the early converts were members of the “Pulahan” group roaming the Visayas mountains. The Pulahans were a group of people who resisted the long Spanish colonial rule through violent means, and they used the same method of resistance against the Americans. The most famous Pulahan was Papa Isio, who continued the struggle even when the Negros elite sided with the Americans. “His group, the babaylanes or pulahanes, burned haciendas owned by pro-American elite.” Later on, some of these revolutionaries were converted and became Baptist pastors. They welcomed the Americans because the Christianity the Americans brought was convincing; moreover, the American missionaries enabled them to read the Bible in their own language. From another perspective, the shift in political view, from being revolutionaries to becoming pastors under American tutelage, was partly due to the various pacification techniques the Americans used to end the Filipino people's resistance. Laws were enacted favoring American colonialism. The Sedition Law of 1901 made any advocacy of Philippine independence a crime punishable by long imprisonment or death. The Brigandage Act of 1902, which classified guerrilla fighters as brigands or ladrones, made membership in an armed group punishable by death or long imprisonment. To further suppress the nationalistic feelings of Filipinos, the Flag Law of 1907 prohibited the display of all flags, banners, symbols and other paraphernalia. Missionaries Briggs and Munger were actively involved in the pacification campaign, encouraging Pulahan leaders to cooperate, and in the process some were converted to the Baptist faith. Furthermore, the Americans established an educational system with English as the medium of instruction.
It was aimed at making the Filipinos “little brown Americans.” With American education, in which American values and culture were taught, the Philippine people slowly but surely developed pro-American sentiments. (pages 34-36)

Theology and Perspective in the Ministry

The content of their preaching, as well as their theology, was centered on two main issues. First, to proclaim Jesus Christ as the Saviour of mankind, so that those who received Jesus would go to heaven. Second, to proclaim that Baptist Christianity was the “true” brand of Christianity: people should forsake the teachings of the Roman Catholic Church, since it was corrupted and distorted. Moreover, piety and spirituality were to be practiced by abstaining from smoking, drinking, and other vices. (page 40)

Theological Education

On the whole, pastors learned their profession in the ministry through personal experience and through organized studies. Bible schools for men and women were started to cater to the need for trained pastors. When the Bible school for men was not sustained, Pastor’s Institutes were conducted to fill the need. (page 46)

Joys and Struggles of Early Philippine Baptist Pastors

Even before the first official ordination of Philippine Baptist pastors in 1906, a great number of Filipino and Filipina Baptists, whom the American missionaries called “Native Preachers” and “Bible Women” respectively, were already active in the ministry. These “Native Preachers” and “Bible Women” should aptly be called Filipino and Filipina Baptist pastors. It was automatic for the early Baptist converts to do mission work. Men and women involved themselves in the evangelization of their own people. They preached the ‘good news’, helped in the translation work, and distributed the translated gospels and religious tracts in many areas where the American missionaries had not yet set foot. (pages 46-47)
The Search for Self-Identity and the Struggle for Self-Reliance, 1935-1983

Baptist pastors struggled for the creation of the Western Visayas Convention (WVC), the forerunner of the Convention of Philippine Baptist Churches (CPBC), and the Convention Baptist Ministers’ Association (CBMA). The forerunner of the CBMA was organized sometime in 1904 during the “First Baptist Associational Gathering.” With the backing of the local churches they had organized and maintained, pastors mounted pressure to push for the Filipinization of the leadership structure. Thus, the CPBC was organized. Rev. Jorge O. Masa was elected CPBC General Secretary in 1935. He was succeeded by Rev. Engracio Alora in 1938. Philippine Baptists, together with their pastors, slowly pushed for the realization of their dream of self-reliance. Thus, many Filipinos occupied top positions in Baptist institutions. For instance, in 1940, Rev. Alfredo Catedral, a graduate of Colgate Rochester Divinity School, succeeded Rev. R.F. Chambers as Dean of the College of Theology. The CPBC was also granted the freedom to make its own policies, and properties of the American Baptist Foreign Missionary Society (ABFMS) were slowly transferred to the Filipino leadership.

Economic Condition

Although the Philippine Baptist leaders pursued the Filipinization of the CPBC, they were far from self-reliant. They still continued to ask for foreign financial assistance to implement their programs. To some extent, the economic condition at that time played a significant role in the attitude of the Philippine Baptists. The Philippine economy was “completely tied up with and dependent on the United States.” Philippine economic policy allowed the continued export of agricultural products to the U.S. and the unhampered entry of U.S. goods into the Philippines. During the 1936-1940 period, the majority of foreign investments came from the U.S., and 72.6% of Philippine foreign trade went to the U.S. Rev.
Iñigo Delariman, the Promotional Secretary of the Negros Kasapulanan in 1936, received a salary of P40 a month, while the 40 churches he visited all over Negros supplied his travel expenses. Rev. Juan Empig of Ilog Baptist Church asserted that the answer to the economic condition of the Philippine Baptist churches was good stewardship. Pastor David Logarto, Circuit Pastor of Dueñas, Iloilo, echoed the same tone: “He who shall not work shall not eat.”

Theological Education

Pastors and church leaders were trained in church work through the Pastor’s Institute and the Eskwela Dominikal. In Negros, Rev. Iñigo Delariman, trained in Rural Life at Los Baños in 1934, conducted an Institute on Religious Education and a Rural Life Institute with Miss Proserfina Plasus. The curriculum of the Rural Life Institute included Animal Husbandry, with Swine and Poultry Raising techniques, and Plant and Fertilizer Analysis. The College of Theology also launched the National Rural Life Institute. The goals of the Institute were to provide the ministers with experiences that would enable them to understand rural life and the problems and aspirations of the rural people, and to equip the ministers with the necessary tools and skills in agriculture and, more importantly, in making theological reflection on the meaning of the gospel in a local situation.

Theology and Content of Preaching

The issue of the “Social Gospel” and the “Pure Gospel” in America in the 1920’s made an impact on the theology of Philippine Baptist pastors. The missionaries had two contradicting views of the gospel message, which later divided them. This contradiction stemmed from the theological controversy that swept the United States during that time. Dr. Domingo Diel, Jr.
argued that the “main issue was being the ‘pure gospel’ or the ‘social gospel’; which means either the preaching of the ‘pure gospel’ or the implementation of the consequence of the gospel in all areas of human life.” The four decades of American missionary presence in the Philippines greatly influenced the lifestyle and theology of the Philippine Baptist pastors. For instance, the way they dressed was similar to that of the Americans who converted them. (pages 66-70)

Baptist Pastors During World War II: Their Faith, Ministry and Struggle

In 1946, Engracio Alora, the Acting Dean of the College of Theology, published the 1943 “Prayer of the Panay Underground”: “Give us courage, Lord, to finish the great work that Destiny has called us to do; Courage to continue to fight for the right of this Nation to live her own life without trammel from without, without doubt from within; Courage, Lord, to show to the invader that the national honor that he has tarnished is avenged on this Island with valor and self-immolation; Courage that knows neither darkness nor day to strike for that Freedom which Thou teachest is the inheritance only of those of Thy children who are worthy of their God. Let the blood, Lord, that was shed by the Freemen of this Island seep into the depths of the native soil to cleanse it of its past, to enrich it for its future; Let the cruelties of the enemy, his deceptions, and his deceits that have caused many loved ones to perish in death unspeakable and tortures that curdle the blood, drive us on with ever resurging strength to defend our home and fireside. Let new cruelties inspire more determined resistance; Let more tortures bring forth more martyrs; Let the ravishments and violations of our women endow more strength upon our womanhood; Let the wanton killing of unarmed men and helpless women and children produce more leaders and patriots.
And, Tomorrow, Lord, when the dawn breaks and peace comes again to this Land, may it be a strong and free and lasting peace, because it was dearly bought with our blood and treasure; May the strength and fortitude that we had built, in the Valley of the Shadow, during the bitter night of our sorrows and sufferings transform and weld us a Nation, because we have been forged in the Crucible of Fire and cleansed in a Baptism of Blood. And so, dear Lord, when on that morrow Destiny commands us to resume our peaceful tasks, Let there arise a new and purified people led by a new leader guiding us forth in Thy ways onto the heights to which our worth and our heritage entitle us.” A Baptist, Esther Pagsuberon, also composed a guerrilla song, “The Fight is On.” Pastor Pagsuberon, a guerrilla himself during the war, sang the song by heart: “The fight is on, arise, O soldiers brave and true. The call to arms is heard from far and near. MacArthur now is marching on to victory, the triumph of our forces is secured. The fight is on! Brave Filipinos will carry on to victory with carbines gleaming and thompsons roaring and will drive those Japanese away. The fight is on but be not weary, for then at last we shall be free. With God before us, his banner o’er us, will sing the victory song at last.” (pages 77-78)

Summary and Reflections

“Four years of the holocaust of World War II did not diminish the faith of the Baptists in the Philippines. During the war, they gathered to worship in the hills and mountains, in swamps, and even under the surveillance of the enemy’s watchful eyes in the cities and towns. The women did their share in living dangerously their testimony of Christ’s love and concern.” Rev. Agustin Masa, CPBC President from 1946 to 1947, bore witness to the struggle of the Philippine Baptists during the war. He exhorted the CPBC members on the occasion of their Golden Jubilee: “We stand today between two generations.
The past, with all its troubles and conflicts, consuming today’s struggles, and the future with all its opportunities and great promise. At a time when the liberties of men are being threatened, the Philippine Baptists have in their hands the highest opportunity to demonstrate to their fellow men what it is to be free in Jesus Christ.” Some Americans said that the Filipinos were afraid of the Japanese and had ceased to hold worship services. However, eyewitness accounts and the experiences of those who survived told a different story. The Philippine Baptists survived the war. They contributed not only to the conduct of worship services but also to the liberation of the Philippines. They became self-reliant not only in their economic activities but also in leadership capacity. The war “proved to be the testing fire of faith.” Pastors continued their unwavering commitment to take care of their flocks. In the words of Rev. Melicio Basiao, “O, how we struggled and O, how we were blest.” When the American missionaries came back, they took up the cudgels of leadership once again.

Post War Period to the Declaration of Martial Law, 1946-1972

In the celebration of the Golden Jubilee of the Philippine Baptist mission in 1950, one can glean the commitment of Philippine Baptist pastors as they did their share in the ministry. They not only looked back on their journey of faith and on those who struggled before them, but also looked forward to the future with this in mind: “continue on pastoring.” Rev. Dioscoro Villalva of Isabela Evangelical Church said, “May the younger generation of preachers heroically pick up the Gospel torch lighted by the sacrifices of our dead-yet-living pioneer evangelists, through an intensive Convention program to evangelize the Philippines in the next 50 years.” Pastor Jose Gico, Jr. of Malawog Baptist Church, Sta. Barbara, a “disciple” of Rev.
Villalva said, “The Pentecost of the Gospel propagation is now realized in our midst in this Golden Jubilee. Let that time be revealed again, when Peter preached and five thousand souls were brought to the feet of Christ.” This kind of spirit and dedication strengthened his resolve to continue his pastoral ministry. Today, even after his legal retirement, he still serves Hinigaran Evangelical Church, where he has ministered since 1951. Pastor Jacobo Celeste of Bingawan Baptist Church aptly said, “May we grow stronger in faith and work, as we struggle towards our 100th anniversary in 2000 AD.” The pastor of Ito Baptist Church, which was started by the Pulahans in 1904, also said, “We pray that this Golden Jubilee will be an inspiration for us all to unite in smashing the forces of social evil and bringing about the Kingdom of God in and out of the hearts of men.” Mrs. Angelina Buensuceso, pastor of La Carlota Evangelical Church in 1950, appropriately challenged the next generation of pastors with these words: “There is no telling what a church can do when she tries to conceive of and achieve greater and better things for God. Onward with Christ. United we stand, divided we fall.” Later, in 1980, Angelina Buensuceso became the first Filipina Baptist pastor to be ordained. These words and deeds of Baptist pastors 50 years ago serve as cornerstones of the Philippine Baptist churches, and will serve as inspiration for the young generation as they continue serving the Lord. (pages 92-93)

Pastors Joining the Movement to Oust the Marcos Regime

Amidst the rising socio-political unrest, the Philippine Baptist pastors took a stand. Many of them joined rallies, formed organizations, made protest statements, or joined the underground movement to topple an unjust system.
The martial law years saw a great number of pastors “doing theology in the streets.” Pastor Ruth Corvera said that during the martial law years pastors became “activists.” She testified, “I would go to the community and organize them. I see my role as someone empowering people to reach their potentials before God. I did not baptize them, but my teachings were centered on giving the people the ‘quality of life’ that they deserve. Salvation is about raising the worth and dignity of the people and liberating them from their fears.” In Aklan, Pastor Cecilia Cruz, together with Rev. Villanueva, worked in the mountains educating the people and organizing them. “I saw that at that time our role as pastors was to make people aware of our situation and to encourage church members to be active politically.” It was also terrifying for Pastor Cruz to continue working. “The Marcos regime had a liquidation squad. It was hard to move about. Eventually we were arrested by the military because we were suspected of subversion. But we still continued our work, and we even held Ecumenical Summer Youth Camps with Mr. Mike Pillora.” In Negros, Pastor Norberto Tabligan engaged himself in what he called “the other side of ministry.” “We held seminars on Human Rights together with Pastor Rudy Bernal and Pastor Rodio Demetillo. We were under surveillance by the military. Our work was part of the ACM work of the Convention.” “Marcos’ preferential treatment for foreign investors further contributed to the deterioration of the Philippine economy, particularly with the use of government funds and foreign loans for the Marcos family and their cronies.” Baptist pastors were among those who suffered economically. In 1973, there were 200 Convention Baptist pastors. A survey conducted by the College of Theology revealed that the average monthly salary of pastors and workers, excluding the city church pastors, was P45.00. They belonged to the income bracket of housemaids in chartered cities.
Of the total number of pastors in the Convention, 96% did not have a house of their own. Of those with Social Security System coverage, about 95% would not be able to derive sufficient benefits upon retirement because of the low monthly premiums they paid to the SSS. The survey concluded that Baptist pastors were looking at the future with a great sense of insecurity. Thus, few young people were committing themselves to the Christian ministry; many pastors were shifting to secular ministry; and there was a lack of creative and consecrated pastoral leadership in the churches. From 1966 to 1973 there was a marked decline in enrollment in the College of Theology. Of the 89 who graduated from the College of Theology from 1960 to 1972, only 19 had submitted themselves for ordination; 11 were in Christian institutional ministry; and 59 were no longer actively participating in church work. Of the 46 ordained ministers that the CPBC had had since 1946, 14 had shifted to secular ministry. Even though some of them were working as part-time pastors in the churches, the trend toward secular employment was very clear. The sense of economic insecurity in the pastoral ministry had also led many to take for granted the discipline of the ministerial profession, thereby weakening their effectiveness as bearers of the “good news.” The survey of the College of Theology forecast a bleak future for the Baptist churches in the Philippines if the above conditions were not averted. An important answer to these problems was raising the socio-economic level of life of the pastors. Thus, the Baptist Ministers’ Endowment Program was conceived to standardize pastors’ salaries. The subsidy would be granted with the end in view of strengthening the ministry of a church and ultimately making the same church self-supporting. In 1975, Dr. Domingo Diel, Jr.
wrote: “As CPBC thinks beyond ’75, it must think of its pastors more seriously now than before; the ‘sacrifice mentality’ has still a place in a Christian life (not only in a pastor’s family) but that is a poor substitute for a low salary. The Endowment Program for Baptist pastors – salary standardization and retirement pension – must not only be encouraged, but supported and implemented. The prospect of this program is indeed favorable; its effect among pastors will be invaluable and the result of it can certainly be beneficial for the CPBC.” In 1975, Kasapulanan Minister Rev. Alfeo Tupas reported on the Negros Kasapulanan fund campaign called “God’s One Thousand Fund” to “help standardize the salaries of pastors in the Kasapulanan.” The year 1977 ushered in the new challenge to raise local support to more than 25%, with an incentive plan for the Field Ministers. CPBC President Rev. Moley G. Familiaran reported, “A top level brainstorming session was organized to open more possibilities for the ever growing challenge of the CPBC.” It was pointed out during the brainstorming sessions that “pastors must become our priority concern – and projects must be started to help the struggling church support their pastor.” The result of this brainstorming was the Pastors’ Salary Standardization scheme, with the motto “A Challenging Self-Reliance Movement toward Growth and Maturity.” The CPBC and the CBMA were the implementing arms, and the duration of the project was three years. It hoped to subsidize the salaries of 200 pastors receiving below P150.00 a month. The program would arrange for the pastor to receive free board and lodging, and an honorarium would be added to their income during the year, increasing it to P250.00 in the second year. During the same year, Rev. Edwin Lopez, CPBC General Secretary from 1976 to 1979, launched CLASP – Carabao Labor to Assist Salaries of Pastors. It was a development program for pastors receiving very low salaries. Rev.
Lopez reported: “We have bought 2 carabaos and farm implements for our Mountain Pastors in Lambunao and Calinog. We will arrange to buy one carabao each for Antique, Capiz or Negros, whichever is advisable to our pilot.” The carabaos belonged to the CPBC and were loaned to churches to assist the salaries of pastors. The income from the carabao was to be divided in two: one half would go to the CPBC as payment for the carabao, and the other half would go to the salary of the pastor. When the carabao was fully paid for, it would belong to the church. During the CBMA assembly at Dumangas Baptist Church on January 17-20, 1994, attended by 571 ministers, the officers and members created two programs to financially assist the ministers, particularly those receiving salaries below P500.00 and working in the rural areas. The first, the Mutual Aid Fund (MAF), was launched after the officers discovered that, out of more than 500 pastors, 371 were receiving below P500. Most of these pastors worked in rural areas. The fund would help pastors with their medical needs. Seed money of P5,500 was raised during the assembly. The second program was the Ministers’ Welfare Program, which increased financial assistance to pastors through its livelihood project of swine-chain dispersal. The CBMA set aside a budget of P270,000.00 for this purpose. Pastors working in rural areas were given priority. In 1994 the Pastors’ Endowment Fund earned interest of P16,178, which was given to pastors with very low salaries. In 1995 the interest of the fund was P22,563.30, which was distributed to qualified applicants endorsed by the Provincial Ministers’ Association and recommended by the Executive Committee of the CBMA.

Theological Education

In 1975, theology students defined theological education as follows. Firstly, “We, the students of the College of Theology, Central Philippine University, believe that theological education should be geared toward making men whole.
We believe that it should seek to develop the individual or group into an integrated whole, conscious of his/their individuality as a person or group in relation to other persons or groups, of his/their strengths and limitations, aware of his world and of the tasks he/they have to perform, dedicated to his/their mission, and able to participate actively and meaningfully in the celebration of life.” Secondly, “We believe that theological education should help prepare Christians to serve God through service in the world. As such, theological education should start where the people are. It should take into account the people’s desires and aspirations, their struggles, and most of all, their needs. It should be able to understand the ‘hows, whys and wherefores’ of the people so that it may be able to apply the Christian message relevantly to the lives of the people and the community wherein they live. We believe that theological education can do this when it opens itself up and enters into dialogue with the world – its cultures, ideologies and religions.” Thirdly, “Theological education should promote a living involvement in the life situation of the people. Having understood the hows, whys and wherefores of the people, it should seek to put into practice such understanding in terms of involvement in actual life situations of the community, participate in its struggles and become a motive force in the shaping of history.” Dr. Domingo Diel, Jr. asserted that theological education must consider “the need of the church” and “the need of the world.” The issue here is ‘relevant’ theological education in relation to the church and the world, today and tomorrow. The cry of the decade coming from the so-called Third World theologies was for “theological relevance.” Diel warned that theological planning for the future should be aware of the danger of “theological irrelevance.”

Ministry of Pastors

In 1977, a CPBC Work Plan was created. Rev.
Edwin Lopez envisioned a program called TICDA (Total Integrated Church Development Assistance). The program had three component strategies: (1) TOMF (Training Operation Mass-Evangelism Follow-up), (2) SWEAT (Steward Week Ender Assist Technique), and (3) New Frontier Ministries. The program enabled the CPBC to organize one congregation every 2 to 3 days within one year. The program was also an attempt to lift up the economic condition of pastors and church members. Rev. Moley Familiaran summed up the main focus of the program: “…the thrust of this work plan is to work with people in discovering sleeping assets in the form of interest and readiness to actively participate in the total church work…to call and summon the potentials of its very own members which have yet remained untouched and unused…this work plan rests upon the basic suggestion that what the Convention should attempt is to help the people of the churches realize that we are, in discipleship, called to become fishers of men. When we realize this, we multiply the number of evangelists, preachers and pastors…This is actualizing the ‘priesthood of the believers’.” Rev. Apolonio Francia, CPBC Field Secretary in 1977, organized a “Management and Planning Seminar,” which aimed at gaining insight into the management and operation of the church. The topics of the seminar included Management of a Local Church, the Planning Process, Duties of Church Officers, Christian Education, Evangelism and Stewardship. A “Special Minister,” Rev. Jaime Lasquite, was sent to the churches of Santa Fe, Guinberayan, Concepcion, Lanas and Lindero in Romblon to assist in specific areas of service. Reports coming from the churches were very encouraging. Rev. Alfeo Tupas, CPBC Field Secretary, also reported that the “Mga Alagad Kami” (MAK) trainings were conducted in Negros. He visited 87 churches in his area and presented the Convention Work Plan during Management and Planning Seminars. In 1977, Rev.
Sammie Formilleza, Administrator of the Center for Education and Research (CER), reported that in 1976 the Center had conducted 18 workshops in Western Visayas with a total of 350 participants. The objective of the Center for Education and Research was “to find out what people think about their own problems, to use dialogue as a principal means of clarifying their ideas, to work with them in putting those ideas into actions in their own way, in their own community to achieve what they think and believe is a better way of life.” The following sectors were the priority concerns of the Center: the Urban Poor (squatters), Wage Earners (laborers), Fishermen, Peasants, and Rural Church Leaders. These sectors comprised 80% of the whole Western Visayas population. The Center also opened three special projects for communities and churches, namely: a Nutrition Education Program, an Agricultural Workers’ Cooperative and a Health Education Program. The success of the Center’s work with people from the marginal sectors of society was made possible by the willingness of the people in communities to do something about their oppressive and dehumanizing situations. The year 1977 saw the strengthening of work in Mindanao. In 1976 the leaders and ministers of the Mindanao CPBC churches gathered in Mandih Baptist Church, Sindangan, Zamboanga del Norte, and decided to expand outside of the Zamboanga Peninsula with Ipil as the center of operation. Subsequently, the program radiated from Ipil to the three surrounding cities of Dipolog, Pagadian, and Zamboanga. In 1978, the Mindanao Baptist mission produced 12 congregations with 16 extensions. A pastor in Mindanao, Mark Cloma, said, “For 32 years, I have been praying for a Baptist of our kind to come here, and I am happy that now this is being answered.” In 1978, Cloma implemented Phase I and Phase II of the CPBC Mindanao Project, training church members in evangelism and “operation house-to-house visit.” (pages 104-113)
The CBMA, 1983-2002

The Search for Pastoral Identity

The national crisis during this period made an impact on the lives of Philippine Baptist pastors. The crisis situation pushed them to look deeper into their identity and role as ministers of God in the context of Philippine society. This resulted in a re-examination of their perspective and thrust in pastoral ministry. Most pastors became politicized and saw themselves as playing a significant role in effecting change in a society in deep political turmoil and economic crisis. The situation led the CBMA to re-evaluate its ministry and identity. In 1982, the CBMA assembled at Bakyas Evangelical Church and discussed the theme “The Minister vis-à-vis Innovation.” There were three emphases in that assembly: the identity of the Philippine Baptist pastor, their socio-economic problems, and their mission. They discussed issues related to “The Pastor in Personal Dynamics,” “The Pastor in Crises Situation,” “The Theology of Money,” “New Trends in Stewardship,” and “The Pastor in the Ever-widening Mission Patterns.” As a result of this assembly, on September 26, 1982, a group of nine CBMA members and officers from the different provinces of Western Visayas voluntarily met and discussed the life situation of the Association as a whole.
After sharing experiences and realities existing at the provincial and national levels, they found that (1) there was no coordination among the circuit, provincial and national ministerial associations; (2) there was no common understanding of programs, structures, orientations, and thrusts; and (3) corporate life was not strong. From these observations, an enlarged consultation involving the CBMA Executive Committee and the presidents of all provincial and district associations was set, and a meeting was held on December 16-18, 1982. After three days and nights of sharing and deliberation, a five-year program was formulated for approval before the assembly at its annual institute in January 1983. The CBMA proposed the Ministers Growth and Development Plan. This was the premise of the proposal: “The challenges of the different and varied ministries where the church of Jesus Christ is called upon to participate is vast and growing and getting complicated. The ministers struggle daily to respond creatively to problems faced by man – sin in its varied forms – alienation from God, poverty, human depravity, ignorance, superstition, greed, injustice, authoritarianism, immorality, colonialism, and tortures and violation of human rights. These are issues which the present ministers of Jesus Christ are daily confronted with and therefore cannot close their eyes to if they will continue to serve as light of the world and salt of the earth.” The Ministers Growth and Development Plan was conceived because “the Gospel of Jesus Christ must be interpreted by the minister in the context of the need of the people so that evangelism and church mission will not be stale but be receptive and responsive to people’s real needs.” The CBMA saw that Western thinking largely influenced the Philippine Baptist pastors; thus, they had not fully developed a theology they could call their own.
There was a reflection that reactionary theology should be checked while establishing a theological framework rooted in biblical principles and Philippine culture. The CBMA thought that Philippine Baptist pastors should have a theology that could continually confront rapid changes in society and whose fundamental truths could be applied at any time in the Philippine situation. The five-year Ministers Growth and Development Plan was divided into five phases, namely: (1) structural changes and improvements; (2) a re-orientation program; (3) re-organization into interest groups; (4) continuing re-orientation; and (5) further theological education, special training and scholarships. The first phase, “Structural changes and improvements,” proposed that a Committee of Ministers for Development be organized with the specific task of planning, coordinating, and linking with different agencies to help in the development of the ministers. It was also proposed that there should be a democratic centralization of all ministers’ organizations, meaning a unification of the program and organization of ministers. For instance, the district ministers’ association would coordinate with the provincial ministers’ association and with the CBMA. Furthermore, it was proposed that the ministers be represented as an organization in decision-making bodies and committees within the CPBC, such as having a representative on the Board of Trustees, the Committee on Ministers’ Endowment, and the Committee on Ministers’ Retirement. The second phase, the “Re-orientation program,” was proposed because developments of the 1980’s in the different areas of life – social, economic, religious and political – were greatly affecting the ministers.
Since the traditional concept of the ministry could no longer meet the challenges and demands of the present task, especially the outmoded concepts of the ministry brought from abroad, “new methods, concepts, and techniques to enrich the minister’s experiences were needed…and those outmoded be changed or discarded.” The elitist education of the ministers and the theology they gained from foreign books and instruction had to be continuously tested against the real situation in order to be relevant. Furthermore, for the re-orientation to be effective, the minister had to undergo a deep process of education which included human values, development, a re-study of the prevailing economic and political systems affecting people’s lives, elements of Filipino theology, and the development of a people’s theology, born of the people’s hopes, dreams and aspirations. The CBMA proposed an educational program to help widen the social consciousness of the ministers, challenging parochial views, broadening outlooks, and deepening commitment in the service of the poor. It was also proposed that all graduates and students of CPBC-related theological institutions must undergo this orientation before their graduation or before their membership in the CBMA. The proposed curriculum content was: (1) theological concepts of development; (2) evangelism; (3) the mission of the church in the Philippine situation; (4) elements of Filipino theology; (5) the history of the Philippines from the viewpoint of the people; (6) structural analysis of society; (7) Baptist history; (8) wider ecumenical dialogue; (9) hermeneutics; (10) basic Bible doctrines; (11) biblical theology; and such other subjects as would widen the perspective of the ministers. The third phase, “Re-organization (Re-Direction),” was proposed because communities were calling for a ministry relevant and responsive to present needs and problems.
The yearly curriculum of the CBMA Institute would be restructured according to the interest and field of specialization of the minister. “Re-direction of the ministers’ views and concepts” included ministry in the local church setting and in different institutions, organizations and community projects. A pastor could bring his/her pastoral identity even into schools, hospitals, business firms, factories, farming, community organizing, labor unions, young people’s groups, ecumenical ministries, communications, and other fields where the pastor is assimilated. Furthermore, the curriculum of the CBMA for the next five years included pastoral ministry with emphasis on shepherding, pulpit and church management, counseling, church administration, business management, theological education, research and documentation, communication, youth, children, trade unionism and other specialized ministries deemed necessary. The fourth phase, “Further theological education and special training,” was proposed since the CBMA members needed further theological education but had little opportunity to avail of continuing education. The emphasis of further theological education should be carried out through Theological Education by Extension (TEE), wherein an indigenous theology reflecting Philippine realities should be developed. TEE should prepare pastors for specialized training according to their interests and the needs of the local churches. The CBMA believed that the Ministers Growth and Development Plan would be a long process. The January 1983 CBMA Assembly that tackled the theme Resuscitating the Minister, however, laid the five-year program on the table. 
The CBMA President said, “unfortunately, for various reasons, the Association felt that a restudy of the program be made to suit the needs of the members in general.” But then, the aim of the CBMA was to encourage pastors, who were committed to the task and calling of the Lord, to render a relevant, effective and inspired ministry to the Convention churches, institutions and society.

Identity and Mission

Pastor Rudy Acosta said that pastors have an identity crisis: “In Africa there is black theology. People go back to their experiences to reflect theologically. Sa aton kalabanan wala pa kalambot sina [Most of us have not yet reached that point]…May crisis of identity kita [We have a crisis of identity]. We don’t know what we are. We like hamburger. Joke sang isa ka tawo [One person joked], ‘Chinese have chopsticks, what about Filipinos?’ What do we have? Kamot [Hands]. Magpanghilamon kamot, magkaon kamot [We weed with our hands, we eat with our hands]. Wala kita nagdevelop tools [We did not develop tools]. Nadala ini tubtub sa aton theological endeavors [This has carried over into our theological endeavors].” On August 29, 1983, Dr. Johnny V. Gumban lectured on Contextual Filipino Theology: Toward a Filipino Theology and The Emerging Filipino Theology. His theology in a Philippine context included the affairs of the family, the church, the society, and God in history. A Filipino Theology should be inclusive, as the act of God in history is also inclusive. In 1984, Gumban wrote, “The church today is in the midst of crisis. As members of the Christian Church we should not respond to this crisis on the basis of our individual sentiment alone. It is only when we respond to this crisis on the basis of our Christian faith that we can call that response a part of our missionary task.” Contextual theology greatly influenced the minds of Philippine pastors. Pastors were concerned about practical questions in daily life, the real situation of the people and how God could speak to that context. “Culture and Christian spirituality are intertwined. One appears foreign and unfamiliar without the other…A spirituality detached from culture develops a (spiritual) life without meaning. 
A culture detached from spirituality develops a (cultural) life without firm foundation.” Rev. Angelina Buensuceso, Directress of CBBC in 1982, revised the curriculum of Convention Baptist Bible College to include subjects like Sociology. “We believe that a pastor should know the culture and situation where he/she is to work.” The Christian faith must get involved in the crisis situation of the society because the church does not exist in a vacuum but is related to the society. Writing about the “Church in the Midst of Crisis,” Dr. Domingo Diel, Jr., CPBC General Secretary in 1984, had this to say: “The inter-relatedness of socio-economic and socio-political issues with morality and the Christian faith should be by now a matter of concern for all of us. If our Christian faith has nothing to say to such issues here and now, one questions whether it is at all a Christian faith. The faith that has its source from the Truth, Himself, even the Lord Jesus Christ cannot just leave people and society to manipulators of reality and to the indoctrinated propagandists. The crisis-situation today demands from our Christian faith answers that come from a consensus of the Community of faith.” Rev. Alfeo B. Tupas, Negros Kasapulanan Minister and CPBC Field Secretary, affirmed that the church was really in the midst of crisis. “Let us only remind ourselves that the people of God both in the Old and the New Testaments were most aggressive and fruitful in their ministry when they were in crisis situations. We are now having our share of these. Like our predecessors we can take these not as hindrances but as challenges for a more triumphant and productive work on our part for our Lord.” Rev. Amsil P. Alubog challenged pastors and churches, “May we be able to conscientize our emotions, thoughts and will, so that we can gain a clearer stand and a stronger force as we participate in the development of our society which is in a ‘crisis’. 
But above all, let us be aware, that behind these difficult moments, the Almighty God still reigns and has a message to reveal. Let us be sensitive to this!” In 1985, Ronny Luces, a student of the College of Theology, made a theological reflection on theology and action. He said that a seminarian must look deep into the context of the Philippines because a seminarian does not operate in an empty space. “He operates in the society that is historically situated and conditioned by the structure or system encompassing it. He has a community with its population, lifestyle and culture.” The society is plagued with problems and manifestations of evil in the socio-economic and political sphere, not to mention moral degradation; the seminarian must do something. The belief of Ronny Luces deserves a longer quotation. “The seminarian being part and parcel of this society cannot alienate himself and just stay in his ivory tower. He must act and do something because of the mandate of Christ for him as a salt and light of the earth. He cannot afford to just stay idle and remain passive over what is going on. In the church where he is based and in the society where he is operating are opportunities where he can manifest the divine calling of God for him. Foremost of this is the opportunity to educate his people regarding the realities that are transpiring. Coupled with this is his prophetic role to denounce the evils that cause injustices, to expose and oppose all forces of oppression and support the people’s struggle for change. He must also organize with other seminarians and religious bodies to build a strong ecumenical network and join forces with other sectors of the society. This way he is actually taking the role of a salt. In his action in society, he must ‘plunge in’ to the actual situation. 
This process is called integration, and Christ has done it when he incarnated in his people, ‘being one of them.’ Through these he can have firsthand experience of what it is like to be struggling for a just cause of righteousness, truth and freedom, not merely theologizing it but putting it into practice.”

The CBMA of Today

In 1983, the CBMA included in its objectives the following: (1) To strengthen the CBMA leadership or line of coordination among national, provincial and district associations; (2) To have a unified grasp of CBMA directions and programs; and (3) To come up with a long-range plan and curriculum for CBMA institutes. The CBMA Officers’ dream of having a long-range program adopted by the assembly was partly realized after almost twenty years, during the CBMA 2002 Assembly held at La Carlota Evangelical Church. Rev. Jerson Narciso, the present CBMA President, spoke about the CBMA’s emphases in his State of the CBMA Address. First, there is a need to overhaul the present leadership structure in order to make sense out of “our chaotic situation.” Second, the present CBMA leadership is initiating important steps to strengthen and improve the self-reliance program, thereby addressing, for instance, the financial needs of low-income pastors. Third, there is a need to come up with a more systematic and efficient theological education program in order to upgrade and enhance pastors’ theological and pastoral training. The CBMA in assembly presented a ten-year plan and approved it during the business meeting. The plan includes the Kabuhi sang Pastor (Buhay ng Pastor) Endowment Program. The rationale of the Endowment Program states: “Philippine Baptist pastors played a significant role in the life of Philippine Baptist Churches. 
However, their efficiency is greatly hampered by the lack of resources to meet the demands of their ministry as many of them still receive a monthly salary of less than 1,000 pesos…A solid resource foundation could form the basis for a continuous, effective, and efficient pastoral service for the churches. Consequently, the churches will be strengthened as they do their share in realizing the mission of Christ heading towards an abundant and meaningful life.” Pastor Chita Naciongayo believes that “Low salary affects the personality of the pastor. The pastor develops a personality that is withdrawn, affecting his/her decision-making ability. The ministry is held back because of this.” This Endowment Program has more than 100,000.00 pesos in the bank, and more pastors and church members are committing themselves to support it. Included in the approved plan was the Master of Ministry Curriculum, which should be accredited by our seminaries. The proposed curriculum includes the following subjects: (1) Social Analysis (2) Philippine Church History (3) Philosophy (4) Church History (5) Church Administration and Management (6) Networking and Solidarity (7) Community Organizing (8) Project Proposal and Feasibility Studies (9) Contextual Theologies (10) Basic Accounting and Stewardship (11) Computer and Globalization (12) Ecumenics, Missions, and Religions (13) Systematic Theologies (14) National Situationer (15) Ecology and the Church (16) Pastoral Ethics (17) Cross Cultural and Foreign Missions (18) Conflict Resolution and Management. The ten-year plan of the CBMA will be implemented through a coordinated CBMA leadership structure, while still maintaining the local autonomy of the Provincial Ministerial Associations.

Reflections

The “social gospel” – the gospel encompassing every aspect of life – which influenced the theology of pastors in the 1920s and 1930s, found its offspring in the theology of pastors in the 1980s. 
That fine thread continues up to the present generation of Philippine Baptist pastors. In 1935, even though the leadership structure of the Philippine Baptist mission had been Filipinized, whenever the American missionaries talked about money matters, Filipino Baptist leaders kept silent because they felt that the Philippine Baptist mission could not survive without foreign funding. Presently, the CBMA action on uplifting the socio-economic status of Baptist pastors is a step towards independence in thinking and action, rather than remaining recipients of programs set by foreigners who send funds. Rev. Malvar Castillon, the president of the CBMA when it celebrated its Golden Anniversary in 1985, said, “We have the desire to become financially stable. We are just beginning and struggling for total independence when it comes to money matters and maturity in leadership.” Furthermore, Baptist pastors could deepen their theology through the continuing theological education program. Instead of depending on foreign theologies, which are often spiritualized and alien to the Philippine situation, they could learn from these theologies and develop a theology of their own, conceived out of the struggle of the Philippine people and God’s revelations through culture and situation in the Philippines. Such a contextualized, relevant theology, rooted in biblical truths and in Philippine history and culture, should be undertaken. For instance, a contextual theology should incorporate Hiligaynon cultures. It should be remembered that the early Philippine Baptist pastors used their own language, that is, Hiligaynon, in spreading the word of God. Thus, the Gospel spoke directly in a manner understandable to the people. The present theological reflections of pastors should be geared towards rediscovering the culture, language and experiences that God has given to the Philippine Baptist pastors. In the course of more than 100 years, Baptist pastors have grown. 
During the early period, they were mainly “learning by doing.” During the later period, there were at least three seminaries to enable them to deepen their faith, commitment, wisdom, awareness and skills which they could utilize in their varied and complicated ministries. Many of them were also trained abroad, especially in the United States and Europe. Traditionally, the mission of the Baptist pastor was mainly in the church and church-related institutions. Only those who had extensive church work could be ordained in the ministry. In the course of history, the mission of the Baptist pastor moved out of the “four walls” of the church. The concern of many pastors in the 1950s also included ministry in the society, especially in politics and economics. The story of the World War II guerrilla pastor named Lucso and of other pastors like Rev. Elias Lapatha, Rev. Catalino Buensuceso, Rev. Bello Cato, Pastor Remedios Vingno, Pastor Ruth Corvera, and recent pastors like Samuel Antonio, Rev. Norberto Tabligan and Ronny Luces are examples. Pastors’ contributions to the Baptist faith in the Philippines include the organizing and establishing of churches; educating pastors and church members in particular and the society in general; leading churches and church-related organizations; and serving the churches as well as the communities where they are, particularly in the work for social justice. Yet Baptist pastors are confronted with problems that are difficult to solve: (1) How to update pastoral skills (e.g. Pastoral Resource Development) to meet the demands of the growing churches and expanding ministry. (2) How to increase income (e.g. Self-Reliance) to meet even the basic needs of pastors, especially those working in the rural areas. If the salary of the pastor is standardized, “even just to the level of public school teachers, the seminary would get a share of promising young people and eventually these young people will find their way to the churches. 
While the winning of souls for Christ should be a top priority, the caring for them cannot be set aside.” (3) How to strengthen unity and coordination among pastors to ensure the much needed pastoral and other support (e.g. Coordinated and United Ministerial Leadership and Services). The resolution of these difficulties will surely increase the effectiveness and efficiency of Baptist pastors as they serve churches, church-related institutions, and communities in the name of the Lord of pastors, Jesus Christ.

Conclusion

This special paper has reconstructed a history of the Philippine Baptist pastors from 1898 to 2002. The study attempted to find out who the Philippine Baptist pastors were and highlighted their significant contributions to the church and society. Their contributions were reviewed from a Kaupod perspective using published and unpublished documents as well as oral testimonies obtained from interviews and questionnaires. The Manugbantalasang Kamatooran from 1925 to 1929, and 1935, provided significant data that were used to describe the ministry of the early Baptist pastors. The souvenir programs of the Annual Assemblies of the Kasapulanans and of the CPBC offered significant information regarding the perspectives of pastors on certain issues in the society and the church. The written reports included in the souvenir programs enhanced the interpretation of important events in history. For example, they provided the number of churches and pastors working during different periods of time. Oral testimonies provided immense data that were not found in written documents. For instance, the oral testimonies of pastors portrayed the ministry of the Baptists during World War II, more specifically, in Negros churches. The data at hand significantly portrayed Baptist pastors from 1898 to 2002. The author, however, felt that he was hampered by his own limitations since this is his first attempt at writing a paper on history. 
Because of this, some gaps may not have been filled in and some puzzles may not have been pieced together sufficiently. Based on the perspective and data used by the author, the significant contributions of the Philippine Baptist pastors in church and society, and the picture of Philippine Baptist pastors from 1898 to 2002, can be seen through the following: 1) Reasons why they became pastors; 2) Their theology and understanding of the ministry; 3) Political and ideological perspective; 4) Socio-economic status; and 5) Their significant strengths and weaknesses that led to their present predicament.

1) Reasons why they became pastors

During the early years, Baptist converts decided to become pastors because they wanted to experience a more meaningful life. There was mounting opposition against Spanish colonialism and Roman Catholicism. Their opposition led them to find ways to study Christianity more seriously. The coming of American missionaries became an opportunity so that they could read the Bible in their own language. By reading the Bible they became more conscious of their Christian duty and felt that God called them to become pastors. Those who decided to become pastors were influenced not only by the gospel but also by the American culture introduced by American missionaries. The American missionaries taught their converts that Protestant Christianity is the “true” kind of Christianity, while Roman Catholicism is the corrupted version. Many pastors of the next generations have more or less the same testimony. Many decided to enter the full-time ministry because they felt called by God and were interested in reading the Bible. Their calling and their circumstances became challenges in their Christian ministry and eventually led them to evaluate themselves. In the process, they found out that their contributions as pastors could do much in effecting changes both in the church and in the society. 
2) Their theology and understanding of the ministry

In the early years, Philippine Baptist pastors used the three-pronged pattern developed by the American missionaries – preaching, teaching and healing – guided by the six Baptist principles. Over and above these principles was the “heavenly mission” to lead people to salvation in Jesus Christ. After a decade or two, their theology was influenced largely by the “social gospel,” which means the implementation of the gospel in all areas of human life. This led them to expand their ministry to the society, especially to the poor. For instance, the Escuela Dominikal of 1935 emphasized that the responsibilities of a Christian included helping the poor and proclaiming justice in the society. Moreover, Christians should strive to create a good environment in order to convince people within that environment to become good Christians. The “God’s plan for the ages,” a premillennial understanding of the gospel, influenced many pastors. This was largely spread in evangelistic meetings and debates. But during the martial law years, the ministry of pastors integrated a program for social justice and transformation. To some extent, pastors believed that salvation is not only liberation from spiritual sin but also liberation from evil structures, in realizing people’s potentials before God and humankind. Their ministry extended outside the “Four Corners” of the church. Some of them called it “The other side of ministry.” They engaged in family ministries, ministry for the urban poor and victims of human rights abuses, and “theologizing” along the streets. These experiences eventually led them to develop a contextual theology. They attempted to come up with a Filipino theology that considers the struggles and experiences of the Philippine people. This contextual theology aims at establishing a theological framework rooted in biblical principles, Philippine culture, and context. 
The Philippine Baptist pastors described their role as a shepherd, a teacher, a preacher, a manager and a leader. The shepherd has a ministry of presence in caring for the sick, seeking those who have strayed, watching out for his/her members’ souls, and visiting his/her members continually. The pastor as a teacher faithfully teaches his/her members the Baptist faith and its context. The pastor aims to make his/her members the light and salt of the world. Moreover, the pastor is a preacher. He/she boldly preaches the message of salvation in Jesus Christ. The pastor is a manager, making plans and organizing his/her people. He/she manages his/her own family as well. Furthermore, the pastor is a leader. He/she leads his/her members to abundant life, and follows the footsteps of Jesus Christ, the great shepherd.

3) Political and ideological perspective

To a certain extent, Philippine Baptist pastors have nationalist tendencies. This political perspective is like a fine thread linking many pastors from 1898 to the present. During the Spanish colonial rule, people joined the fight for freedom and independence from a colonial system that exploited them. Many Baptist pastors were former revolutionaries who joined the people in their struggle to achieve independence. But when the American missionaries came, these revolutionaries welcomed the Americans. They felt that the kind of Christianity brought by the American missionaries was convincing and could effect the better changes that they sought. Moreover, it was because of the American missionaries that they were able to read the Bible in their own language. Later on, many Baptist pastors participated in the quest to change the leadership structure of the Baptist Mission in the Philippines. They felt that Baptist churches could do better if the leadership were “Filipinized.” Thus, they struggled for self-hood, a struggle that eventually led to the Filipinization of the CPBC. 
During World War II, many Baptist pastors got involved in the guerrilla movement to fight their enemies. Joining the guerrilla movement was seen as part of the expression of their Christian faith. They gave information to the guerrillas on the movements of the enemies. They also treated the wounded and provided shelter to the victims of war. The martial law years saw a great number of pastors becoming politicized and doing theology in the streets. Many pastors joined rallies, formed organizations, wrote protest statements, or joined the underground movement that aimed at toppling a corrupt system. Their role as a shepherd was expressed in fighting the “wolves” attacking and abusing their sheep. Many of the present generation of Baptist pastors, aware of the national issues that affect the situation of their church members, also engaged themselves in the ministry for social transformation.

4) Socio-economic status

The majority of the early Baptist pastors were poor peasants living in Western Visayas, largely because of the exploitation perpetuated by Spanish colonialism. The people had not yet recovered from more than three hundred years of Spanish colonialism when the Philippine-American War broke out. It further aggravated their poor economic condition. Most of the early Baptist pastors were farmers and skilled workers from the rural areas. Some of them worked as carriage makers and cocheros. Those who were in the educated class were hacienda owners and professionals working in government institutions. When the Baptist mission began its Filipinization in 1935, the economic situation of Baptist pastors did not improve, and the practice of requesting foreign funding continued. During World War II, the Japanese exploited the Philippines for Japan’s war needs. In spite of the bleak economic situation, Baptist pastors continued with their church ministry. They held conferences, worship services and Bible studies. 
Economically, they were self-reliant because there was no foreign assistance coming from the American missionaries. But after the war, they continued their practice of requesting foreign assistance. During martial law, Baptist pastors felt the need to be economically self-reliant. The economic crisis that hit the country during this period did not deter them from finding ways and means to support themselves financially. They saw that the attitude of “always asking for money” from the foreigners hampered their decision-making ability as well as their thinking. This led them to conceive plans for the standardization of pastors’ salaries. They launched programs to help pastors become economically stable. Presently, the CBMA has initiated an endowment program to assist pastors in their financial difficulties. The CBMA believes that by making the pastors economically self-reliant, it can enhance their pastoral ministry.

5) Significant strengths and weaknesses leading to their present predicament

In the early years, their significant strength can be found in their commitment to the pastoral ministry. They believed that they were doing the will of God. Although their theological education at the start was only “learning by doing,” their faith led them to be involved in translating the Bible into Hiligaynon; in organizing people; in distributing the gospel and other religious tracts; in preaching; in studying the Bible; and in going to far-flung areas where no American missionaries had gone. What hampered the development of the early Baptist pastors was their attitude of dependency upon the American missionaries. This kind of attitude developed as the American missionaries supported them financially, morally and intellectually. Moreover, the American missionaries trained them partly to become “assistants” or “helpers.” Thus, to some extent, their mentality became dependent on the ideas and perspectives of the Americans. 
For instance, they believed that the American way of life went hand in hand with Baptist Christianity, and their perspective in ministry was limited to a “heavenly mission” – to make people accept Christ so that they would go to heaven. Economically, some of them started in the ministry without getting any help from the American missionaries. They supported themselves through their farms and from the income of their members. In the course of time, the ministry of the Philippine Baptists relied more and more on foreign support. Moreover, many missionaries saw the Philippine Baptist mission as an extension of the American Baptist mission. The mentality of certain American missionaries that they were here as “missionaries for life” reinforced the thought that they had no plan to relinquish the Philippine Baptist mission to Philippine Baptist leaders. Eventually, the theological thinking as well as the economic status of the early Philippine Baptist pastors became dependent upon the “dependency system” established by the American missionaries. This status, however, did not deter the pioneering pastors from continuing their ministry, and in the process their nationalist tendencies were awakened. Many pastors believed that the Christian mission would flourish as they struggled to find their own identity, and that the status of being dependent could weaken their commitment. Thus, they struggled for self-hood and for the Filipinization of the Baptist mission in the Philippines. With the backing of the local churches, they organized the Western Visayas Convention that eventually led to the creation of the Convention of Philippine Baptist Churches. Although the Filipinization process began, the Philippine Baptists were far from being self-reliant. Most of their funds still came from abroad. Theologically, their “heavenly mission” expanded to include the ministry for social justice – helping the poor and providing an environment wherein people can become good Christians. 
This was a significant step in the search for their own identity – economically and theologically. The leadership of the Philippine Baptist pastors was tested during World War II. Without the assistance of the American missionaries they continued fulfilling their roles as pastors. Financially, Baptist pastors became self-reliant. The churches did not cease their work; rather, they found strength amidst the turmoil of war and carried on worship services in the areas where they evacuated. They proved that they could stand on their own – in leadership and in financial matters. To some extent, however, many pastors have not learned from these important experiences of self-reliance. When the war ended, the American missionaries proceeded to take up the cudgels of leadership. During the martial law years, many pastors involved themselves in the “other side” of the Christian ministry. Their task extended outside of the church. Many of them became “activists” and participated in community organizing, in teaching the people about health, in family planning and in fighting against human rights abuses. Some joined the underground movement and other groups aimed at toppling the Marcos dictatorship. The nationalist tendencies of pastors seen during the early years and during World War II found their offspring during martial law. For instance, in 1983, the CBMA theme, Resuscitating the Minister, aimed at re-examining the theological position of pastors, which was largely influenced by Western thought, and re-evaluating their identity as Baptist pastors ministering in the Philippine context. 
To some extent, the ministers were “resuscitated” and they found themselves once again asking questions like, “How to make the gospel relevant to the Philippine people?” “How can we respond to a situation that tramples human dignity?” and “Who are we as Baptist pastors in a local setting?” Moreover, they engaged themselves in developing a contextual theology, particularly a Filipino theology rooted in the Bible and the Philippine culture. After two decades, the pastors in their CBMA annual assembly discussed the theme Revisiting Faith Resources. In revisiting their faith resources, they remembered their treasures that have been buried. In doing so, they found out that their strength lies in themselves, in tapping their own God-given resources and in doing something to make them more available to their fellow ministers and the churches. Moreover, they found out that two of their significant weaknesses were their tendencies to rely on foreign funding for their planned programs and to depend on foreign theologies which were, to a certain extent, alien to the Philippine context. In revisiting their resources, they saw that there was still a larger space on which they could stand on their own. Many realized too that they should not remain at the receiving end but rather should struggle to shift from the position of a receiver to the position of a giver. They decided to push through a three-faceted program so as to deeply understand their identity as Philippine Baptist pastors. They launched the Kabuhi sang Pastor Endowment Program, aiming to improve the economic provision of pastors and to strengthen their pastoral ministry. If they are self-reliant, they could also think independently. The second facet was the continuing theological education for pastors. The CBMA would like to offer courses during seminars that would be credited toward a Master of Ministry degree. 
Among others, this facet aimed at developing a Filipino theology, an attempt already started by many pastors two decades earlier. This contextual theology would be based on the experiences and struggles of the Philippine people as they reflect on their Christian faith and the revelation of God in their own context. The third facet hoped to strengthen the system of leadership of the CBMA so that its envisioned program could be implemented effectively. All in all, these three facets were seen as necessary to help Baptist pastors in their continuing search for identity and self-reliance. The strength and weakness of Philippine Baptist pastors revolved around the issue of independence and dependence. Dependence on foreign support and theology made them docile pastors whose theology tended toward reaction and reinforced colonial mentality. There were times, however, when Philippine Baptist pastors were left to themselves and became independent, as during World War II. On the whole, this study showed that Philippine Baptist pastors have significantly contributed to the formation and growth of local Baptist churches in the Philippines; to the education of church members to become good Christians; and to the realization of social justice for all. Moreover, this study asserts that Philippine Baptist pastors have more space to stand on their own. This is a significant strength that could be translated into action, encouraging them to continue the search for ways and means toward self-reliance and self-determination, theologically and materially, for the sake of their active and qualitative participation in the realization of the mission of Jesus Christ. [1] Excerpts from Francis Neil G. Jalando-on, A Portrait of a Philippine Baptist Pastor 1898 – 2002. A Special Paper presented to the Faculty of the School of Graduate Studies, Central Philippine University in partial fulfilment of the requirements for the degree Master of Divinity in Pastoral Ministry, 2002.
This article was published in the book Managing Faith Resources, N. Bunda, R. Faulan, F.N. Jalando-on, J. Narciso, 2003.
tosh 36zd26 available
Guest
Hi folks, the Tosh 36ZD26 will be available next week from Unbeatable, and Sound and Vision in Bolton have several. I have placed mine with Unbeatable as I have bought several items from them in the past with no problems, plus you cannot beat the nothing to pay for 12 months and no deposit. See yeah, Steve
<link rel="import" href="../bower_components/polymer/polymer.html">
<link rel="import" href="reward-display.html">
<link rel="import" href="number-count-up.html">
<link rel="import" href="../bower_components/progress-bubble/progress-bubble.html">

<dom-module id="reward-display-v1">
  <style>
    :host {
      position: fixed;
      top: 0%;
      left: 0%;
      display: none;
      opacity: 0;
      width: 100vw;
      height: 100vh;
      background-color: black;
      z-index: 9007199254740991; /* Number.MAX_SAFE_INTEGER */
    }
    #rewardvideocontainer {
      position: fixed;
      /* width: 100vw; height: 100vh; */
      /* margin: auto; left: 0; right: 0; transform: perspective(1px) translateY(-50%); */
      /* left: 50%; top: 50%; transform: perspective(1px) translate(-50%, -50%); */
      top: 220px;
      left: 50%;
      transform: perspective(1px) translate(-50%, 0%);
    }
    #rewardvideo {
      display: none;
      opacity: 0;
    }
    #timesaved_message {
      position: fixed;
      top: 60px;
      left: 0%;
      width: 100vw;
      display: inline-block;
      text-align: center;
      line-height: 100px;
      height: 100px;
      color: white;
      font-size: 48px;
    }
  </style>
  <template>
    <div id="timesaved_message">
      <!-- <span style="vertical-align: middle; font-size: 100px">+5⏲</span> -->
      <span style="vertical-align: middle; font-size: 48px">5 minutes saved!</span>
    </div>
    <progress-bubble value="75" max="100" style="width: 200px; height: 200px; position: fixed; top: 0px; left: 0%">
      <strong style="text-align: center"><number-count-up start="70" end="75"></number-count-up> minutes saved total</strong>
    </progress-bubble>
    <progress-bubble value="43" max="100" style="width: 200px; height: 200px; position: fixed; top: 0px; right: 0%">
      <strong style="text-align: center"><number-count-up start="40" end="45"></number-count-up> minutes saved with Feed Blocker</strong>
    </progress-bubble>
    <div id="rewardvideocontainer">
      <video id="rewardvideo" on-ended="video_ended" src="{{video_url}}"></video>
    </div>
    <!-- <img src="{{img_url}}"></img> -->
  </template>
  <script src="reward-display-v1.js"></script>
</dom-module>
Classic Films Showing in Miami in January

Welcome to 2016, everybody! And with the beginning of any new year comes two things: a bunch of awards movies flooding theaters and just as many awful early-year releases polluting the multiplexes. Instead, why not go the safe (and likely a whole lot more entertaining) route? You know it: classic cinema. Here’s what you can look forward to this month in terms of old-school flicks.

30 Years of Killer Films

With the Cosford Cinema running their retrospective of producer Christine Vachon’s films, it’s a full month for them: Still Alice on the 9th, The Grey Zone on January 10, Boys Don’t Cry on January 16, Kill Your Darlings on January 17, The Company on January 25, One Hour Photo on January 30, and Savage Grace on January 31. All of the films except for Still Alice will be shown on 35mm instead of digitally.

Secret Celluloid Society

If you missed the first film of Secret Celluloid Society’s late-night line-up this month (which was The Wild Bunch), don’t be sad: you’ve still got four films left in January to check out. On January 9 is Animal House, followed by the quintessential Prince classic Purple Rain on January 16. Big Trouble in Little China, an appropriate pick considering Kurt Russell has a film currently on the big screen, is on January 23. And, finally, Paul Thomas Anderson’s Boogie Nights closes the month on January 30. As usual, all of ‘em are being shown on 35mm and take place at 11:30 p.m.

Four Films By Two Masters

The release of the documentary Hitchcock/Truffaut — yes, loosely based around the book of the same name — is happening this month at the Miami Beach Cinematheque, and with it comes a retrospective of both filmmakers’ work. You might have, sadly, missed one of Alfred Hitchcock’s best, Marnie, earlier this week, but three more films are waiting just around the corner. François Truffaut’s The Bride Wore Black shows on January 14, with an introduction by Rubén Rosario of Miami Art Zine.
Hitchcock’s The Wrong Man shows on January 21, with an introduction by critic and author David N. Meyer. And Truffaut’s Confidentially Yours shows on January 28, with an introduction by Hans and Ana Morgenstern of Independent Ethos.

Fashion Project Festival

Over at Bal Harbour, there’s a four-day film festival being hosted by Fashion Project, and they’ve got a great lineup, entirely free and open to the public. On January 28, there are nine short films and a showing of Alfred Hitchcock’s Vertigo. On January 29 is Farah Khan’s Om Shanti Om, with an introduction by Anupama Kapse. On January 30, it’s a triple feature of Tony Takitani, with an introduction by Kate Sinclair, Nicolas Roeg’s Don’t Look Now, and Max Ophüls’ Lola Montès. Closing out the festival on January 31 is My Fancy High Heels and Grey Gardens, along with the short films Rose Hobart, Doll Clothes, and Irma Vep, the Last Breath.

Gables Cinema

The end of the month offers three classic options at Gables Cinema outside of their SCS After Hours line-up. There's The Man Who Fell to Earth as a great Bowie tribute on January 28th, a short run until January 27th of Orson Welles' Chimes at Midnight, and a run of Brian de Palma's brilliant Blow Out (which kicks off January 29th and runs through February). Bonus: Miami Herald critic Rene Rodriguez will be attending the January 30th, 4:00 p.m. screening for a conversation with Nat Chediak about the film.

And More!

There are always additional events that don’t all take place at one spot, so here’s a run-through of those. The Miami Jewish Film Festival will be having a free screening of Spaceballs on January 16 under the stars at the New World Center SoundScape.
O Cinema Miami Beach will have their usual end-of-the-month Rocky Horror Picture Show screening on the 29th. Blue Starlite will be showing Alice in Wonderland (with optional Pink Floyd soundtrack) and The Big Lebowski on January 9 in Virginia Key.

Juan Antonio Barquin co-runs a film criticism site, Dim the House Lights, and works as an arts writer for New Times. He aspires to be Bridget Jones and loves genre flicks.
236 F.3d 1255 (10th Cir. 2001)

MORRIS H. KULMER and KERN W. SCHUMACHER, Petitioners, v. SURFACE TRANSPORTATION BOARD and UNITED STATES OF AMERICA, Respondents, and ROARING FORK RAILROAD HOLDING AUTHORITY, Intervenor.

No. 99-9525

UNITED STATES COURT OF APPEALS FOR THE TENTH CIRCUIT

January 8, 2001

ON PETITION FOR REVIEW OF A DECISION OF THE SURFACE TRANSPORTATION BOARD. (STB Docket No. AB-547X)

Thomas F. McFarland, Jr., of McFarland & Herman, Chicago, Illinois, for Petitioners. Marilyn R. Levitt, Attorney, Surface Transportation Board, Washington, D.C. (Henri F. Rush, General Counsel, and Ellen D. Hanson, Deputy General Counsel, Surface Transportation Board, Washington, D.C.; Joel I. Klein, Assistant Attorney General; John J. Powers, III, and Robert J. Wiggers, Attorneys, Department of Justice, Washington, D.C., with her on the brief), for Respondents. Robert M. Noone, P.C., Glenwood Springs, Colorado, and Charles H. Montange, Seattle, Washington, filed a brief for Intervenor.

Before BRORBY, McKAY, and MURPHY, Circuit Judges.

McKAY, Circuit Judge.

1 The Surface Transportation Board (STB) dismissed petitioners' offer of financial assistance to intervenor-respondent Roaring Fork Railroad Holding Authority (RFRHA). We exercise jurisdiction under 28 U.S.C. §§ 2321(a) and 2342.

I.

2 Rail carriers must obtain STB authorization to abandon rail services over their lines. See 49 U.S.C. § 10903(a)(1). RFRHA applied for permission to abandon a 33.44-mile line, known as the Aspen Branch. In pertinent part, the STB granted permission subject to the offer of financial assistance (OFA) provisions of 49 U.S.C. § 10904. The OFA provisions create a four-month waiting period wherein "any person may offer to subsidize or purchase the railroad line that is the subject" of an abandonment application. § 10904(c).
If the STB finds that an offer meets certain criteria, the railroad is forced to sell the line to the offeror according to terms negotiated by the parties or, when necessary, terms imposed by the STB. See § 10904(c)-(f). In the instant case, petitioners filed an OFA to buy the Aspen Branch, apparently hoping to use the tracks for the same purpose--light-rail passenger service--for which RFRHA intended to use them once rail freight service was abandoned. RFRHA moved to dismiss the OFA because the petitioners did not intend to provide continued rail freight service.

3 In its order, the STB asserted that "when disputed, an offeror must be able to demonstrate that its OFA is for continued rail freight service." Roaring Fork Railroad Holding Authority--Abandonment Exemption--In Garfield, Eagle, and Pitkin Counties, CO, STB Docket No. AB-547X, at 4 (served May 21, 1999) [hereinafter RFRHA decision]. To that end, the STB stated there must be some assurance of sufficient future rail freight traffic "to enable the operator [i.e., the offeror] to fulfill its commitment to provide that service." Id. Petitioners presented evidence of projected rail use, but the STB found the projections "too indefinite and insufficient to support continued freight rail operations, as the offerors readily concede." Id. at 5. Accordingly, it dismissed petitioners' OFA because it appeared unlikely to result in continued rail freight service. Moreover, the STB thought it unjust to use the OFA process to wrest a rail line from one person intending to use it for a legitimate public purpose only to give it to another who wants to put it to the same intended use. See id.

II.

4 Petitioners claim the STB erred in dismissing their OFA because the OFA provisions do not expressly require the STB to consider rail service continuation as a factor in approving an OFA.
They base their argument on the "plain" language of § 10904(d), which provides that rail abandonment may be carried out after the specified waiting period unless the STB "finds that one or more financially responsible persons (including a governmental authority) have offered financial assistance." Petitioners assert that this provision unambiguously evinces Congress' intent to make financial responsibility the sole qualification for OFA approval. However, the Supreme Court has recently stated that "[i]n determining whether Congress has specifically addressed the question at issue, a reviewing court should not confine itself to examining a particular statutory provision in isolation." FDA v. Brown & Williamson Tobacco Corp., 529 U.S. 120, ___, 120 S. Ct. 1291, 1300 (2000). Rather, a court must read the relevant provisions in context and, insofar as possible, "interpret the statute 'as a symmetrical and coherent regulatory scheme.'" Id., 529 U.S. at ___, 120 S. Ct. at 1301 (quoting Gustafson v. Alloyd Co., 513 U.S. 561, 569 (1995)).

5 We agree with the Ninth Circuit that § 10904, read as a whole, indicates Congress' intent that the STB may consider the likelihood of continued rail freight service as a factor in approving disputed OFAs. See Redmond-Issaquah R.R. Preservation Ass'n v. Surface Transp. Bd., 223 F.3d 1057, 1061 (9th Cir. 2000). Most notably, § 10904 itself is entitled "Offers of financial assistance to avoid abandonment and discontinuance." (Emphasis added). Moreover, subsection (b)(1) requires rail carriers pursuing abandonment to provide prospective offerors "an estimate of the annual subsidy and minimum purchase price required to keep the line or portion of the line in operation." (Emphasis added). This provision makes little sense if the continuation-of-service factor plays no part in the OFA process. More fundamentally, we are troubled by the constitutional problems inherent in petitioners' interpretation.
It would be difficult indeed to justify a statute that forces a rail carrier desiring to discontinue freight rail service to sell its lines solely because a "financially responsible" person offers to purchase them, whereas a statute that forces the sale of potentially abandoned lines to "financially responsible" persons who will continue rail service at least furthers a legitimate government interest in preserving access to, and service over, rail lines. See, e.g., § 10101 (outlining Congress' rail transportation policy).

6 Finding no express support in the text, petitioners look to legislative history for help. They correctly note that the former OFA provisions explicitly stated that before approving an OFA, the ICC (the STB's predecessor) must find that the offeror is financially responsible and "has offered financial assistance to enable the rail transportation to be continued." 49 U.S.C. § 10905(d)(1) (1994). The current OFA provisions, as noted, do not contain an express rail-continuation requirement. Petitioners argue that this omission indicates Congress' intent to prohibit the STB from considering continued rail service as a factor. The legislative history, however, fails to explain the import of the omission, although it does discuss the import of another unrelated, relatively minor change in the OFA process. See H.R. Conf. Rep. 104-422, at 181 (1995), reprinted in 1995 U.S.C.C.A.N. 850, 866. In light of Congress' willingness to explain more modest changes to the very same statute, we agree with the Ninth Circuit that it seems "highly implausible that Congress would eliminate the original aim of the OFA procedure without clearly expressing its intent to do so." Redmond-Issaquah R.R., 223 F.3d at 1062.

7 In short, while Congress has not specifically required the STB to consider continued rail service as a factor, there is no basis in the statute for concluding that Congress has specifically prohibited the STB from doing so.
In the absence of a clear congressional expression on the issue, we must uphold the STB's interpretation of § 10904 so long as the interpretation is "permissible." Brown & Williamson, 529 U.S. at ___, 120 S. Ct. at 1300. For the reasons stated above and in Redmond-Issaquah Railroad, we conclude that it was permissible for the STB to consider whether a disputed OFA was intended for continued rail service.

III.

8 Petitioners contend that the STB's order is, nonetheless, arbitrary and capricious under 5 U.S.C. § 706(2) because it fails to explain why they had to demonstrate a sufficient amount of projected rail traffic instead of just any amount of rail traffic in support of their OFA. Under § 706(2)'s arbitrary-and-capricious standard, we will reverse the STB only if there has been a "'clear error of judgment.'" Am. Mining Congress v. Marshall, 671 F.2d 1251, 1255 (10th Cir. 1982) (quoting Citizens to Preserve Overton Park v. Volpe, 401 U.S. 402, 416 (1971)); see also Redmond-Issaquah R.R., 223 F.3d at 1063.

9 It is true that OFA approval does not require proof of some minimum amount of rail traffic. The ICC (the STB's predecessor) expressed the view that such a requirement "could impose an obstacle to rail service in some cases." Exemption of Rail Line Abandonments or Discontinuance--Offers of Fin. Assistance, 4 I.C.C.2d 164, 167 (1988) (emphasis added). For instance, where there is credible evidence that an OFA would result in continued rail service despite the fact that the service would not be self-sustaining, a minimum traffic requirement would be prohibitive. To illustrate, in Illinois Central R.R. Co.--Abandonment Exemption--in Perry County, IL, ICC Docket No. AB-43 (Sub-No.
164X) (served October 18, 1994) [hereinafter Perry County], the offeror, who owned an inactive coal mine along the rail line in question, wanted to subsidize the rail carrier to maintain freight rail service although the line was inactive and it was unknown when anyone, including the offeror, would use it in the future. Under the circumstances, the offeror's willingness to subsidize a line from which it could derive no benefit besides potential freight rail service persuaded the STB that the OFA was, in fact, for continued rail service. However, the fact that a minimum traffic requirement for OFAs might hinder continued rail service in "some cases"--such as Perry County--does not mean that such a requirement is inherently inappropriate in all cases, as petitioners seem to contend.

10 In the instant case, for example, the STB found that the traffic projections of the potential rail users under petitioners' OFA were "too indefinite and insufficient to support continued freight rail operations" and noted that "the offerors acknowledge that continued freight service would not be self-sustaining." RFRHA, at 5. Based on these uncontested facts, the STB concluded that "continued freight rail service would not be likely to result from this OFA proposal." Id. We will not fault the STB for presuming that no one is likely to continue to operate an unprofitable rail service. In the absence of persuasive evidence explaining how and why petitioners would operate the line despite incurring substantial losses, we cannot say the STB was arbitrary in dismissing the OFA.

11 Petitioners counter that if they acquired the Aspen Branch, they would subsidize freight rail service with profits from potential light-rail passenger service along the line. The STB considered but was not persuaded by this assertion. It noted that Congress had already tentatively earmarked $40,000,000 for RFRHA's own light-rail plans and, should petitioners acquire the line, that funding would be unavailable.
See RFRHA, at 6 & n.19. Given the unusual state of affairs, the STB commented that "this case presents the anomalous situation in which any future reinstitution of rail freight service (as an adjunct to passenger service) appears to be more likely under RFRHA's own plans for the future of the right-of-way than through the OFA process." Id. at n.19.

12 The STB has adequately explained its decision, supported by its uncontested findings. We find no clear error of judgment.

13 AFFIRMED.

14 MURPHY, Circuit Judge, dissenting.

15 This case involves the STB's construction of a statute it administers and, thus, this court must first determine "whether Congress has spoken directly to the precise question at issue." Chevron, U.S.A., Inc. v. Natural Res. Def. Council, Inc., 467 U.S. 837, 842 (1984). Because Congress has directly addressed the issue in this case, I respectfully dissent from the majority's conclusion that it is permissible for the STB to consider an offeror's intention to provide continued rail freight service.

16 Section 10904(d)(1) clearly allows the STB to consider the offeror's financial ability to run the rail line. I also agree that the language of the statute evinces Congress' intent to encourage the continuation of rail freight service along lines that would otherwise be abandoned. See 49 U.S.C. § 10904. Under the express language of the statute, however, an offeror is precluded from transferring the rail line or discontinuing rail freight service for a period of two years. See id. § 10904(f)(4)(A). This language makes it clear that Congress only requires an offeror to continue rail service for a two-year period. Thus, Congress has precluded the STB from considering the offeror's intentions with respect to the rail line beyond the initial two-year period.

17 Petitioners have averred that rail freight service will not be discontinued for the obligatory two-year period and have identified five potential shippers who may have a need for the line.
The STB has not made a finding that Petitioners intend to transfer or discontinue service on the line before the end of the second year after the transfer, or that Petitioners are financially unable to provide such service. Thus, Congress' intent as expressed in the statute is served by the OFA filed by Petitioners; the rail line in question will remain available to shippers seeking to run freight for the next two years.

18 The STB has attempted to supplement the clear and unambiguous language Congress used in § 10904 to fit the unique circumstances presented by this case. There exists, however, a different statutory mechanism by which these unique circumstances could have been addressed. In cases where the rail line is appropriate for public purposes, the party seeking to abandon the line can be exempted from the OFA process. See 49 U.S.C. § 10502; The Central Railroad Company of Indianapolis--Discontinuance of Service Exemption--In Clinton, Howard and Tipton Counties, STB Docket No. AB-289 (Sub-No. 4X), at 5 (served Jan. 15, 1999). Although RFRHA sought just such an exemption from the OFA process in this case, its request was denied by the STB because RFRHA did not present sufficient information to support the exemption. In its motion to dismiss the OFA, RFRHA requested the STB to reconsider the denial of the exemption. Although the STB indicated in hindsight that "it would have been appropriate to exempt this line from the OFA process," it did not specifically grant the exemption or vacate its earlier decision denying the exemption. Roaring Fork Railroad Holding Authority--Abandonment Exemption--In Garfield, Eagle, and Pitkin Counties, CO, STB Docket No. AB-547X, at 6 (served May 21, 1999). RFRHA does not appeal from the STB's denial of the exemption. The unambiguous statutory language in § 10904 should not be distorted as a means to reach the same ends that could have been reached if the STB had granted RFRHA an exemption from the OFA process.
19 Because congressional intent can be discerned from the plain language of § 10904, the STB's interpretation is not entitled to any deference. See Chevron, 467 U.S. at 842-43. I conclude that Congress has expressly foreclosed the STB from considering whether the offeror intends to continue rail service beyond the two-year period expressly stated in the statute. I, therefore, dissent from the majority's conclusion that the STB has the implied authority to consider the offeror's intentions with respect to the rail line beyond the end of the two-year period.
IN THE TENTH COURT OF APPEALS

No. 10-07-00124-CR

KENNETH RICHARDS, Appellant v. THE STATE OF TEXAS, Appellee

From the 278th District Court, Walker County, Texas, Trial Court No. 23483

MEMORANDUM OPINION

A jury found appellant, Kenneth Richards, guilty of the offense of possession of a cellular telephone while an inmate of a correctional facility. After finding true the allegation in the enhancement paragraph that Richards had prior felony convictions, the jury assessed his punishment at confinement for twenty-five years. In five issues, Richards contends that the trial court erred in admitting his statements into evidence and failing to instruct the jury on accomplice-witness testimony. He also challenges the effectiveness of his counsel’s assistance at trial, the constitutionality of Penal Code section 38.11(j), and whether his sentence constitutes cruel and unusual punishment. We will affirm.

Background

On September 26, 2006, Richards was indicted for possession of a cell phone while in a correctional facility. Cathy Harvey, Richards’s ex-wife, testified that she periodically visited him at the Ellis Unit of the Texas Department of Criminal Justice (TDCJ), where he was confined. In November of 2005, Richards asked Harvey to obtain a cell phone for him. She bought one with pre-paid minutes and, at his request, placed it under a sign on a road located about ten miles from the prison. She visited him on November 27, 2005, and told him that she had done what he asked, and he later told her that he had received the phone. Robert Hickman, who was also confined at the Ellis Unit in 2005, testified that Richards offered him $100 to deliver packages of tobacco to another inmate. Hickman agreed to do so if he could use Richards’s cell phone. Richards gave the cell phone to Hickman, but Richards was arrested for possessing tobacco before he could give the tobacco to Hickman for delivery. Hickman later used the cell phone and hid it after Richards was arrested.
John Riggle, an investigator with TDCJ, testified that he tried to interview Richards about the tobacco before his arrest. Instead, Richards asked to speak with Eddie Howell, a major at the Ellis Unit. Howell testified that Richards said that his wife dropped off a bag containing a cell phone and that Hickman hid it in the maintenance yard. Howell did not give Richards Miranda warnings or advise him of his statutory rights under Article 38.22, Section 2(a) of the Texas Code of Criminal Procedure before he spoke to him. Howell also failed to record the conversation. After speaking with Richards, Howell recovered the cell phone from Hickman. Although Richards did not file a motion to suppress statements obtained in violation of Miranda and article 38.22 at trial, when the State offered the statements through the testimony of Riggle and Howell, Richards objected and a brief hearing was conducted outside the presence of the jury. The trial court overruled the objection and admitted the statements.

Admission of Evidence

Richards contends in his first issue that the court erred by admitting into evidence the statements he gave to Howell. He argues that his statements were the result of an unwarned custodial interrogation, were not recorded, and were involuntary because Howell promised to restore Richards’s status as a prison trustee in exchange for information. The State acknowledges that Richards did not receive Miranda warnings prior to making the challenged statements to Howell, but it argues that the evidence was correctly admitted by the trial court because the statements were not the product of “custodial interrogation.” We review a trial court's admission or exclusion of evidence for an abuse of discretion. McDonald v. State, 179 S.W.3d 571, 576 (Tex. Crim. App. 2005).
“A trial court abuses its discretion when its decision is so clearly wrong as to lie outside that zone within which reasonable persons might disagree.” Id. The voluntariness of a statement given to law enforcement is determined from the totality of the circumstances. Wyatt v. State, 23 S.W.3d 18, 23 (Tex. Crim. App. 2000); Kearney v. State, 181 S.W.3d 438, 444 (Tex. App.—Waco 2005, pet. ref’d). The court conducted a hearing, outside of the presence of the jury, to determine whether the statements Richards made to Howell were admissible. Investigator Riggle testified that, after he summoned Richards from his cell, he introduced himself and told Richards that he was investigating an allegation that tobacco had been brought into the Ellis Unit. Richards then immediately said that he wanted to talk to Howell before talking to Riggle, and Richards was escorted out of his office. Howell testified that Richards asked if he could meet with him. According to Howell, although he agreed to meet with Richards, he never requested an interview with him. After Richards was escorted to his office, Howell asked Richards, “You wanted to talk to me. What is on your mind?” Richards then told Howell the details about his wife’s purchasing the cell phone and his offering it to Hickman in exchange for delivery of the tobacco. Richards testified that when he went to meet with Riggle, Riggle told him that he already had a statement from Richards’s ex-wife and that if he did not cooperate, Riggle would see to it that he served every day of his ten-year sentence. According to Richards, it was then that he requested to speak with Howell. He specifically wanted to speak with Howell because in a prior incident where Richards was charged with possession of tobacco, Howell had made the case “go away” after he cooperated and gave information. He said that on the instant occasion Howell told him he needed to know what was going on in his unit.
According to Richards, he asked Howell to help him out like he did before, and Howell told him that he would have to give really good information first. As he began to discuss small details, Howell repeatedly told him that the information was not enough and that he needed to give additional information. Richards eventually told him everything. Richards also testified that Howell offered to help him keep his trustee status if Richards helped him find the cell phone. According to Howell, he never promised Richards anything in exchange for his information. Howell testified as follows:

[Q]: And during this conversation did you promise Kenneth Richards anything?

[A]: No, I did not.

[Q]: Did you tell him that anything bad would happen to him if he did not tell you the full story?

[A]: No, I mean he was already found in possession of the tobacco by the farm manager. Disciplinary was already generated. I wasn’t involved in that process at all. And he was going to be disciplined by agency rules for his misconduct for possession of the tobacco.

[Q]: And during the conversation did you ever subject him to any type of coercion that would cause him to give you statements?

[A]: It was probably his expectation that there would be leniency in the disciplinary punishment for his disciplinary offense for the possession of the tobacco. But I never promised him that. Because, you know, he basically was caught red handed, and that wasn’t his first time to be found in possession of contraband.

[Q]: And let me ask you specifically, if you didn’t ask him—I will use legal phrases. Did you subject him to any express questioning other than what he told you?

[A]: No. You know, he came into the office and I asked him, you know, Mr. Riggle said you wanted to talk to me and what is on your mind.

[Q]: And did you do anything that would indicate that you were going to help him out?

[A]: No.
State Page 5 [Q]: Anything that he might have taken from you as being okay, he has now told me, I’m going to get a benefit? I’m going to give it all up? [A]: No, No. I didn’t do any of that. Article 38.22 generally precludes the use of a defendant’s statements that result from custodial interrogation absent compliance with its procedural safeguards. See TEX. CODE CRIM. PROC. ANN. art. 38.22 (Vernon 2005). But it does not preclude admission of statements that do not stem from custodial interrogation. See id. Thus, if Richards’s statements did not stem from custodial interrogation, article 38.22 does not require their exclusion. See Morris v. State, 897 S.W.2d 528, 531 (Tex. App.—El Paso 1995, no pet.). In this case, there is no dispute that Richards’s unrecorded oral statements were made while he was in custody; the only disputed issue is whether there was an "interrogation." "Interrogation 'refers not only to express questioning, but also to any words or actions on the part of the police (other than those normally attendant to arrest or custody) that the police should know are reasonably likely to elicit an incriminating response from the suspect.'" Miffleton v. State, 777 S.W.2d 76, 81-82 (Tex. Crim. App. 1989) (citing Rhode Island v. Innis, 446 U.S. 291, 301, 100 S.Ct. 1682, 1689-90, 64 L.Ed.2d 297 (1980)). When the officer's statements are designed to elicit incriminating statements from the defendant, it is interrogation. Id. at 82. Statements that the accused volunteers are admissible. Miranda v. Arizona, 384 U.S. 436, 478, 86 S.Ct. 1602, 1630, 16 L.Ed.2d 694 (1966); Jefferson v. State, 974 S.W.2d 887, 890 (Tex. App.—Austin 1998, no pet.). Thus, not all custodial questioning can be classified as "interrogation." Jones v. State, 795 S.W.2d 171, 174 (Tex. Crim. App. 1990). Courts have held a variety of Richards v. State Page 6 questioning to be outside the constitutional definition of "interrogation." Id. 
In Jones, the Court of Criminal Appeals gave some examples where certain questioning by police officers has not constituted "interrogation": For example, routine inquiries, questions incident to booking, broad general questions such as "what happened," and questions mandated by public safety concerns e.g. "where did you hide the weapon" when the weapon has just been hidden in the immediate vicinity. See generally Ringel, Searches And Seizures, Arrests And Confessions, § 27.4 (Clark Boardman Company, Ltd. 1987). In Texas…courts have held several police questions to be non-interrogative. Massie v. State, 744 S.W.2d 314 (Tex. App.—Dallas 1988, pet. ref'd.) ("Where are you going?" to a DWI suspect stopped on the street); DeLeon v. State, 758 S.W.2d 621 (Tex. App.—Houston [14th Dist.] 1988, no pet.) (asking a suspect where the murder weapon was). But see Sims v. State, 735 S.W.2d 913 (Tex. App.—Dallas 1987, pet. ref'd.) (holding that questions regarding when a defendant last ate, and asking what day, date and time it was did amount to interrogation). Id. at 174 n.3. Howell testified that after officers found a large quantity of tobacco at the Ellis Unit, he contacted Riggle’s office so that they could investigate. He knew that Riggle was interviewing Richards on questions of tobacco possession, but he did not know any other details surrounding the case. Both Howell and Riggle testified that Richards asked to meet with Howell after he was brought in to speak with Riggle. According to Howell, when Richards arrived to meet with him, Howell asked him “what’s on your mind?” Richards then began to explain how his wife had delivered tobacco and a cell phone. We conclude that the trial court was justified in finding that Richards's oral statements were not the product of custodial interrogation, but were spontaneous and voluntary and thus admissible under section 5 of article 38.22. TEX. CODE CRIM. PROC. ANN. art. 38.22, § 5; Dossett v.
State, 216 S.W.3d 7, 24 (Tex. App.—San Antonio 2006, pet. ref’d). Even though Richards was in custody, the evidence at trial showed that he made voluntary statements after asking to meet with Howell. Richards’s testimony conflicted with Howell’s when Richards claimed that Howell asked to meet with him (and not vice versa); however, the trial court was entitled to believe the officer’s version of the facts and disbelieve Richards. There was ample evidence from both officers that Richards initiated the discussion and was eager to talk about the cell phone and the tobacco. Dossett, 216 S.W.3d at 24. Many cases have held that such spontaneous, volunteered statements not made in response to interrogation are admissible, whether or not the defendant is in custody. See, e.g., Wiley v. State, 699 S.W.2d 637, 638-39 (Tex. App.—San Antonio 1985, pet. ref'd, untimely filed) (holding that defendant's statement, "Okay, I did it," made after being shown bloody clothes and bloody knife, was not in response to interrogation and was admissible); Smith v. State, 949 S.W.2d 333, 339 (Tex. App.—Tyler 1996, pet. ref'd) (holding captured defendant's unsolicited statement that it would not take him as long to escape from prison the next time was admissible); Higgins v. State, 924 S.W.2d 739, 743-45 (Tex. App.—Texarkana 1996, pet. ref'd) (holding defendant's spontaneous statement that he killed his wife made while in custody and in the back seat of police car was not the product of police questioning and was voluntary and admissible); De Leon v. State, 758 S.W.2d 621, 625 (Tex. App.—Houston [14th Dist.] 1988, no pet.) (defendant was handcuffed and questioned about a prison stabbing due to a blood stain on his knee; his statement in response to police questioning about the location of the knives was suppressed, but his later, spontaneous statement, "I killed him," was held voluntary and admissible); Villarreal v. State, No. 04-05-00287-CR, 2005 Tex. App.
Lexis 9300, at *4-5 (Tex. App.—San Antonio Nov. 9, 2005, pet. ref'd) (not designated for publication) (holding defendant was in custody, but was not being interrogated at the time of his spontaneous admission while officer was escorting him to an interview room that he killed his mother). Howell’s question to Richards, “what’s on your mind?” was non-interrogative and more akin to a greeting than a question reasonably calculated to elicit Richards’s response. Massie, 744 S.W.2d at 316-17. Thus, we cannot say that the trial court abused its discretion in admitting Richards’s statement because it was spontaneous and voluntary and not the result of interrogative questioning. Dossett, 216 S.W.3d at 24. We overrule Richards’s first issue.

Accomplice-Witness Instruction

In his second issue, Richards contends that Harvey is an accomplice and the court erred by failing to include an accomplice-witness instruction in the jury charge. Richards did not request this instruction at trial or object to its omission from the jury charge. A person indicted for the same crime as the defendant, or a lesser-included offense based on “alleged participation” in the “greater offense,” is an accomplice as a matter of law. Ex parte Zepeda, 819 S.W.2d 874, 876 (Tex. Crim. App. 1991). A person who “participates with the defendant before, during, or after commission of a crime” and may be prosecuted for the same crime as the defendant is an accomplice as a matter of fact. Id. at 875-76. To be an accomplice, one must perform an affirmative act promoting the crime. See Paredes v. State, 129 S.W.3d 530, 536 (Tex. Crim. App. 2004). This participation must involve some affirmative act that promoted the commission of the offense with which the accused is charged. Bulington v. State, 179 S.W.3d 223, 229 (Tex. App.—Texarkana 2005, no pet.). If a person is an accomplice as a matter of law, the court must so instruct the jury. See Paredes, 129 S.W.3d at 536.
If the evidence is conflicting as to whether a person is an accomplice, the court must submit the issue to the jury. See id.1 Assuming, without deciding, that Richards was entitled to an accomplice-witness instruction, we will reverse only if the unobjected-to error caused “egregious” harm.2 See Herron v. State, 86 S.W.3d 621, 632 (Tex. Crim. App. 2002). Omission of an accomplice-witness instruction is harmless unless “corroborating (non-accomplice) evidence is ‘so unconvincing in fact as to render the State’s overall case for conviction clearly and significantly less persuasive.’” Id. A defendant cannot be convicted based on accomplice testimony unless it is corroborated. See TEX. CODE CRIM. PROC. ANN. art. 38.14 (Vernon 2005); see also Cathey v. State, 992 S.W.2d 460, 462 (Tex. Crim. App. 1999). Corroboration is insufficient if it “merely shows the commission of the offense,” but is sufficient if it tends to connect the defendant to the offense. TEX. CODE CRIM. PROC. ANN. art. 38.14.3 Corroborative evidence may be “circumstantial or direct.” Reed v. State, 744 S.W.2d 112, 126 (Tex. Crim. App. 1988). The record contains sufficient corroborative non-accomplice testimony tending to connect Richards to possession of the cell phone. Howell testified that Richards told him that he asked Harvey to purchase the cell phone and deliver it to him. Richards also told him that he had given the cell phone to Hickman. Howell then spoke to Hickman, who later admitted that he had the cell phone and told Howell where it was hidden.

1 If a person is not an accomplice, “no charge need be given to the jury either that the witness is an accomplice witness as a matter of law or in the form of a fact issue whether the witness is an accomplice witness.” Gamez v. State, 737 S.W.2d 315, 322 (Tex. Crim. App. 1987).
2 Because Richards failed to object to the court’s omission, he must prove that the error caused him egregious harm.
Riggle testified that after he was given the cell phone, he recovered six numbers, three of them associated with Hickman and one made to Lake Country Inn, the location Harvey testified she called to test the cell phone. This non-accomplice testimony establishes that Richards: (1) was in possession of the cell phone; (2) was housed in the correctional facility at the time he came into possession of the cell phone; and (3) gave the cell phone to Hickman so that he could use it. Collectively, these factors constitute sufficient corroboration and tend to connect Richards to the possession; thus, the jury could reasonably conclude from non-accomplice testimony that Richards possessed the cell phone. See Hernandez v. State, 939 S.W.2d 173, 178 (Tex. Crim. App. 1997). Therefore, we do not find the evidence so weak or unconvincing as to make the State’s case clearly and significantly less persuasive. See Herron, 86 S.W.3d at 632. Because Richards did not suffer egregious harm, the charge error is harmless. We overrule Richards’s second issue.

3 It is not necessary that corroborative evidence establish the defendant’s guilt or “directly connect the defendant to the crime.” Cathey v. State, 992 S.W.2d 460, 462 (Tex. Crim. App. 1999).

Effective Assistance of Counsel

In his third issue, Richards argues that trial counsel provided ineffective assistance by (1) failing to request an accomplice-witness instruction; (2) repeatedly mentioning that he invoked his right to remain silent when talking to officers; and (3) failing to object to Riggle’s testimony that he believed Harvey was telling the truth. The standard in Strickland v. Washington applies to a claim of ineffective assistance of counsel. Strickland v. Washington, 466 U.S. 668, 104 S.Ct. 2052, 80 L.Ed.2d 674 (1984). To prevail, a defendant must first show that his counsel’s performance was deficient. Id. at 687, 104 S.Ct. at 2064; see Mitchell v. State, 68 S.W.3d 640, 642 (Tex. Crim. App. 2002).
Then it must be shown that this deficient performance prejudiced the defense. Strickland, 466 U.S. at 687, 104 S.Ct. at 2064. Appellate review of defense counsel’s representation is highly deferential and presumes that counsel’s actions fell within the wide range of reasonable and professional assistance. Mallett v. State, 65 S.W.3d 59, 63 (Tex. Crim. App. 2001); Tong v. State, 25 S.W.3d 707, 712 (Tex. Crim. App. 2000). In assessing Richards’s claims, we apply a strong presumption that trial counsel was competent. Thompson v. State, 9 S.W.3d 808, 813 (Tex. Crim. App. 1999). We presume counsel's actions and decisions were reasonably professional and were motivated by sound trial strategy. See Jackson v. State, 877 S.W.2d 768, 771 (Tex. Crim. App. 1994). When, as in this case, there is no evidentiary record developed at a hearing on a motion for new trial, it is extremely difficult to show that trial counsel's performance was deficient. See Bone v. State, 77 S.W.3d 828, 833 (Tex. Crim. App. 2002).

Accomplice Witness

First, Richards argues that his counsel was ineffective for failing to request an instruction to the jury that Harvey was an accomplice witness. Because of the nature of the evidence corroborating Harvey’s testimony discussed in Richards’s second issue, we cannot say that but for his trial counsel's failure to request this instruction, the outcome of the proceeding would have been different. Consequently, Richards has failed to meet the second prong of Strickland. See Strickland, 466 U.S. at 697, 104 S. Ct. at 2069.

Richards’s Right to Remain Silent

Second, Richards contends that counsel’s questions on cross-examination that elicited testimony that he refused to talk to Riggle after being advised of his constitutional rights constituted deficient performance. Hall v. State, 161 S.W.3d 142, 154-55 (Tex. App.—Texarkana 2005, pet.
ref’d) (counsel ineffective in failing to object to prosecutor’s statements and argument regarding defendant’s post-arrest silence); Brown v. State, 974 S.W.2d 289, 294 (Tex. App.—San Antonio 1998, pet. ref’d) (counsel ineffective in failing to object to testimony regarding defendant’s post-arrest silence). We evaluate counsel's performance while taking into consideration the totality of representation and the particular circumstances of this case. Thompson, 9 S.W.3d at 813; Ex parte Felton, 815 S.W.2d 733, 735 (Tex. Crim. App. 1991). There is a strong presumption that counsel's conduct fell within the wide range of reasonable professional assistance. Strickland, 466 U.S. at 689, 104 S.Ct. at 2065; Tong, 25 S.W.3d at 712. Therefore, we will not use hindsight to second-guess counsel's trial strategy. Hall, 161 S.W.3d at 152. Richards argues that the jury should not have been told that he invoked his right to remain silent, and the State agrees. However, under the facts of this case, proof that Richards invoked his rights once they were read to him could have supported Richards’s contention that the confession he made to Howell was involuntary and inadmissible. Because “there is at least the possibility that the conduct could have been [a] legitimate trial strategy, we must deny relief on an ineffective assistance claim on direct appeal" for this argument. Murphy v. State, 112 S.W.3d 592, 601 (Tex. Crim. App. 2003).

Truthfulness

Counsel elicited on cross-examination of Riggle that he believed that Harvey was truthful when he interviewed her. This, argues Richards, was inadmissible opinion testimony, and counsel’s failure to object fell below the standard of reasonable professional representation. Schutz v. State, 957 S.W.2d 52 (Tex. Crim. App. 1997) (expert testimony that complainant is telling the truth is inadmissible); Miller v. State, 757 S.W.2d 880, 883 (Tex. App.—Dallas 1988, pet. ref’d). Counsel’s questioning of Riggle was as follows.
[Richards’s Counsel]: Not in two hours, her being truthful, her coming forward, answering every question you asked, and in two hours not once did you ever hear her put that phone in the hands of my client Richards?
[A]: No.

Although it may have been unwise for Richards’s counsel to elicit inadmissible testimony on truthfulness, in analyzing this claim we cannot say that counsel’s actions were not grounded in sound trial strategy. Walker v. State, 201 S.W.3d 841, 850 (Tex. App.—Waco 2006, pet. ref’d). Therefore, Richards has failed to meet the first prong of Strickland on this argument. Strickland, 466 U.S. at 687, 104 S.Ct. at 2064; Walker, 201 S.W.3d at 850. We overrule Richards’s third issue.

Constitutionality of Section 38.11(j)

Richards’s fourth issue complains that section 38.11(j) of the Penal Code violates the Equal Protection Clause of the Fourteenth Amendment. The language of section 38.11(j) is as follows: a person commits an offense if the person while an inmate of a correctional facility operated by or under contract with the Texas Department of Criminal Justice or while in the custody of a secure correctional facility or secure detention facility for juveniles possesses a cellular telephone. TEX. PEN. CODE ANN. § 38.11(j) (Vernon 2008). When reviewing the constitutionality of a statute, we presume that the statute is valid and that the legislature did not act unreasonably or arbitrarily in enacting it. Rodriguez v. State, 93 S.W.3d 60, 69 (Tex. Crim. App. 2002); see also TEX. GOV’T CODE ANN. § 311.021 (Vernon 1998). The burden is on Richards to prove the statute is unconstitutional. See Rodriguez, 93 S.W.3d at 69. The statute must be upheld if it can be reasonably construed as constitutional. Brenneman v. State, 45 S.W.3d 729, 732 (Tex. App.—Corpus Christi 2001, no pet.). Inmates are not a suspect class, and so their equal protection claims must be reviewed under a rational-basis test. Garcia v.
Dretke, 388 F.3d 496, 499 (5th Cir. 2004). So long as the statute furthers some legitimate state interest, its constitutionality should be upheld. Id. Howell told the jury that he was immediately worried when he heard there was a cell phone on the unit because of security concerns. He testified that “we can’t fulfill our mission to provide safety if offenders are running around with cell phones.” Similarly, Riggle testified that the cell phone case was a top priority because the prison is always concerned about escape plans made using cell phones. Maintaining security in prisons is arguably a legitimate state interest, and therefore we find no merit in Richards’s claim of an Equal Protection Clause violation. We overrule his fourth issue.

Cruel and Unusual Punishment

Richards’s final issue addresses whether his sentence amounts to cruel and unusual punishment.4 When the trial court sentenced Richards to twenty-five years in prison, he did not object on grounds of cruel and unusual punishment. See Steadman v. State, 160 S.W.3d 582, 586 (Tex. App.—Waco 2005, pet. ref’d) (claim that sentence constituted cruel and unusual punishment not preserved absent an objection at trial). Because this issue is not preserved, we overrule it.

4 Because Richards had four prior felony convictions, his sentence of twenty-five years was the minimum prescribed for his status and violation.

Conclusion

Having overruled all of Richards's issues, we affirm the trial court's judgment.

BILL VANCE
Justice

Before Chief Justice Gray, Justice Vance, and Justice Reyna
(Chief Justice Gray concurs in the judgment with a note)*
Affirmed
Opinion delivered and filed December 31, 2008
Do not publish
[CRPM]

*(Chief Justice Gray concurs in the Court’s judgment to the extent it affirms the trial court’s judgment. A separate opinion will not issue. He notes, however, that it is not undisputed, as stated by the Court, that Richards was in custody.
The question of what constitutes custody for purposes of custodial interrogation is not so clear cut when the person is already in prison. In this instance the question of custody is not necessary to resolve because the Court determines there was no interrogation. For this reason the conclusory statements that Richards was in custody are irrelevant to the disposition of this appeal. As such they are dicta and should be removed from the opinion.)
/*
 * Distributed under the Boost Software License, Version 1.0.
 * (See accompanying file LICENSE_1_0.txt or copy at
 * http://www.boost.org/LICENSE_1_0.txt)
 *
 * Copyright (c) 2014 Andrey Semashev
 */
/*!
 * \file   atomic/detail/ops_emulated.hpp
 *
 * This header contains lockpool-based implementation of the \c operations template.
 */

#ifndef BOOST_ATOMIC_DETAIL_OPS_EMULATED_HPP_INCLUDED_
#define BOOST_ATOMIC_DETAIL_OPS_EMULATED_HPP_INCLUDED_

#include <boost/memory_order.hpp>
#include <boost/atomic/detail/config.hpp>
#include <boost/atomic/detail/storage_type.hpp>
#include <boost/atomic/detail/operations_fwd.hpp>
#include <boost/atomic/detail/lockpool.hpp>
#include <boost/atomic/capabilities.hpp>

#ifdef BOOST_HAS_PRAGMA_ONCE
#pragma once
#endif

namespace boost {
namespace atomics {
namespace detail {

template< typename T >
struct emulated_operations
{
    typedef T storage_type;

    static BOOST_FORCEINLINE void store(storage_type volatile& storage, storage_type v, memory_order) BOOST_NOEXCEPT
    {
        lockpool::scoped_lock lock(&storage);
        const_cast< storage_type& >(storage) = v;
    }

    static BOOST_FORCEINLINE storage_type load(storage_type const volatile& storage, memory_order) BOOST_NOEXCEPT
    {
        lockpool::scoped_lock lock(&storage);
        return const_cast< storage_type const& >(storage);
    }

    static BOOST_FORCEINLINE storage_type fetch_add(storage_type volatile& storage, storage_type v, memory_order) BOOST_NOEXCEPT
    {
        storage_type& s = const_cast< storage_type& >(storage);
        lockpool::scoped_lock lock(&storage);
        storage_type old_val = s;
        s += v;
        return old_val;
    }

    static BOOST_FORCEINLINE storage_type fetch_sub(storage_type volatile& storage, storage_type v, memory_order) BOOST_NOEXCEPT
    {
        storage_type& s = const_cast< storage_type& >(storage);
        lockpool::scoped_lock lock(&storage);
        storage_type old_val = s;
        s -= v;
        return old_val;
    }

    static BOOST_FORCEINLINE storage_type exchange(storage_type volatile& storage, storage_type v, memory_order) BOOST_NOEXCEPT
    {
        storage_type& s = const_cast< storage_type& >(storage);
        lockpool::scoped_lock lock(&storage);
        storage_type old_val = s;
        s = v;
        return old_val;
    }

    static BOOST_FORCEINLINE bool compare_exchange_strong(
        storage_type volatile& storage, storage_type& expected, storage_type desired, memory_order, memory_order) BOOST_NOEXCEPT
    {
        storage_type& s = const_cast< storage_type& >(storage);
        lockpool::scoped_lock lock(&storage);
        storage_type old_val = s;
        const bool res = old_val == expected;
        if (res)
            s = desired;
        expected = old_val;
        return res;
    }

    static BOOST_FORCEINLINE bool compare_exchange_weak(
        storage_type volatile& storage, storage_type& expected, storage_type desired, memory_order success_order, memory_order failure_order) BOOST_NOEXCEPT
    {
        return compare_exchange_strong(storage, expected, desired, success_order, failure_order);
    }

    static BOOST_FORCEINLINE storage_type fetch_and(storage_type volatile& storage, storage_type v, memory_order) BOOST_NOEXCEPT
    {
        storage_type& s = const_cast< storage_type& >(storage);
        lockpool::scoped_lock lock(&storage);
        storage_type old_val = s;
        s &= v;
        return old_val;
    }

    static BOOST_FORCEINLINE storage_type fetch_or(storage_type volatile& storage, storage_type v, memory_order) BOOST_NOEXCEPT
    {
        storage_type& s = const_cast< storage_type& >(storage);
        lockpool::scoped_lock lock(&storage);
        storage_type old_val = s;
        s |= v;
        return old_val;
    }

    static BOOST_FORCEINLINE storage_type fetch_xor(storage_type volatile& storage, storage_type v, memory_order) BOOST_NOEXCEPT
    {
        storage_type& s = const_cast< storage_type& >(storage);
        lockpool::scoped_lock lock(&storage);
        storage_type old_val = s;
        s ^= v;
        return old_val;
    }

    static BOOST_FORCEINLINE bool test_and_set(storage_type volatile& storage, memory_order order) BOOST_NOEXCEPT
    {
        return !!exchange(storage, (storage_type)1, order);
    }

    static BOOST_FORCEINLINE void clear(storage_type volatile& storage, memory_order order) BOOST_NOEXCEPT
    {
        store(storage, (storage_type)0, order);
    }

    static BOOST_FORCEINLINE bool is_lock_free(storage_type const volatile&) BOOST_NOEXCEPT
    {
        return false;
    }
};

template< unsigned int Size, bool Signed >
struct operations :
    public emulated_operations< typename make_storage_type< Size, Signed >::type >
{
};

} // namespace detail
} // namespace atomics
} // namespace boost

#endif // BOOST_ATOMIC_DETAIL_OPS_EMULATED_HPP_INCLUDED_
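The header above depends on Boost internals. As a rough standalone sketch of the same lock-based emulation idea, assuming plain C++11 and a single global mutex in place of Boost's address-keyed lock pool (the class and method names here are illustrative, not Boost's), the read-modify-write operations can be written like this:

```cpp
#include <mutex>

// Hypothetical sketch, not Boost code: every operation takes the same lock,
// so each read-modify-write appears atomic to other threads using this class.
// Boost's lockpool instead hashes the storage address to one of several locks
// to reduce contention between unrelated objects.
template< typename T >
struct emulated_atomic
{
    T value;

    T fetch_add(T v)
    {
        std::lock_guard< std::mutex > lock(mtx());
        T old_val = value;
        value += v;  // safe only while the lock is held
        return old_val;
    }

    // Mirrors emulated_operations::compare_exchange_strong above:
    // whether or not the exchange happens, "expected" receives the
    // value that was actually observed under the lock.
    bool compare_exchange(T& expected, T desired)
    {
        std::lock_guard< std::mutex > lock(mtx());
        T old_val = value;
        const bool res = (old_val == expected);
        if (res)
            value = desired;
        expected = old_val;
        return res;
    }

private:
    static std::mutex& mtx()
    {
        static std::mutex m;  // one lock for all instances, for brevity
        return m;
    }
};
```

Because every operation funnels through a mutex, the real header's `is_lock_free` truthfully returns `false`; this fallback path is only selected when no native atomic instructions exist for the storage size.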
Super friendly, knowledgeable staff and good food in old-west steak house decor with flowering lilacs at the entrance. Visiting from Castle Rock to see Great Sand Dunes, and we enjoyed steak, salmon, shrimp and mussels over two days. The sweet onion rings were super delicious.

I stopped here for dinner while visiting the area on vacation. I really liked the atmosphere of this restaurant, which felt like an old ranch house with wood highlights and plenty of western decor. I was seated in a booth in the bar area, and...

Great food and a nice dining room and bar area. I had the burger and fries and my buddy had a steak. Mine was excellent (cooked perfectly) and my buddy said his steak was excellent. Dessert was delicious. It is fairly pricey for a small...

We stopped at this restaurant on a whim, driving through the town. The food was wonderful. We had steak and chicken-fried chicken with potatoes, beans, and bread. The portions were large, the cost extremely reasonable, the ambiance western; it was a great meal. I would...

The steaks looked great, but I opted for a hamburger, which was very good. The service was friendly, but very slow and spotty. Had to wait for my check for over 10 minutes, which is a peeve of mine. Just don't think it's wise to...
Q: Change sources directory Visual C# I have a project where my sources are not next to my .csproj. I have added all the sources by link but the project is not compiling... Here is my error : CoreCompile: C:\Windows\Microsoft.NET\Framework\v4.0.30319\Csc.exe /noconfig /nowarn:1701,1702,2008 /nostdlib+ /platform:x86 /errorreport:prompt /warn:4 /define:DEBUG;TRACE /errorendlocation /preferreduilang:en-US /highentropyva- /reference:C:\Windows\Microsoft.NET\Framework\v2.0.50727\mscorlib.dll /reference:"C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\v3.0\PresentationCore.dll" /reference:"C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\v3.0\PresentationFramework.dll" /reference:"C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\v3.5\System.Core.dll" /reference:"C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\v3.5\System.Data.DataSetExtensions.dll" /reference:C:\Windows\Microsoft.NET\Framework\v2.0.50727\System.Data.dll /reference:C:\Windows\Microsoft.NET\Framework\v2.0.50727\System.dll /reference:"C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\v3.0\System.Runtime.Serialization.dll" /reference:"C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\v3.0\System.ServiceModel.dll" /reference:C:\Windows\Microsoft.NET\Framework\v2.0.50727\System.Xml.dll /reference:"C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\v3.5\System.Xml.Linq.dll" /reference:"C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\v3.0\UIAutomationProvider.dll" /reference:"C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\v3.0\WindowsBase.dll" /debug+ /debug:full /filealign:512 /out:obj\x86\Debug\MyApp.exe /target:winexe /utf8output /win32icon:MyIcon.ico my_path\App.xaml.cs my_path\MainWindow.xaml.cs build_path\App.g.cs build_path\MainWindow.g.cs CSC : error CS5001: Program 'my.exe' does not contain a static 'Main' method suitable for an entry point I was wondering if it was 
possible to tell my solution: "My sources are in the my_path directory"? I think this error is caused because it doesn't find some references in my App.xaml: <Application x:Class="MyApp.App" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" StartupUri="MainWindow.xaml" > A: You need to make sure that you have the correct file properties on your App.xaml. Inside Visual Studio, right-click App.xaml and choose "Properties". It should read like this:
Build Action: ApplicationDefinition
Custom Tool: MSBuild:Compile
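For the underlying question of keeping sources outside the project folder, linked items in the .csproj are one option. The fragment below is a hedged sketch: the relative paths and the my_path directory are placeholders for your real layout. The key point matches the answer above: App.xaml must be included as ApplicationDefinition (not Page or Compile), or Csc is invoked without a generated Main method and CS5001 results.

```xml
<!-- Hypothetical fragment: link sources that live outside the project folder.
     Adjust ..\my_path to the real source directory. -->
<ItemGroup>
  <ApplicationDefinition Include="..\my_path\App.xaml">
    <Link>App.xaml</Link>
  </ApplicationDefinition>
  <Page Include="..\my_path\MainWindow.xaml">
    <Link>MainWindow.xaml</Link>
  </Page>
  <Compile Include="..\my_path\App.xaml.cs">
    <Link>App.xaml.cs</Link>
    <DependentUpon>App.xaml</DependentUpon>
  </Compile>
  <Compile Include="..\my_path\MainWindow.xaml.cs">
    <Link>MainWindow.xaml.cs</Link>
    <DependentUpon>MainWindow.xaml</DependentUpon>
  </Compile>
</ItemGroup>
```

The `Link` metadata only controls where the file appears in Solution Explorer; the `Include` path is what the build actually compiles.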
The Chinese Government will ban shops from handing out plastic carrier bags in June. It is estimated that the Chinese use as many plastic bags in a week as Australia uses in a year. Mr Dee says Australia's phase-out should be sped up. "Australia of course is looking to phase out plastic bags in 2009, and the fact that China desires to do it in less than six months, I think, is a sign that ... we could do it faster than that," he said. Mr Dee believes that the Chinese Government is using the issue as a signal that it is taking environmental issues very seriously. "The fact that the biggest country in the world, the biggest users of plastic bags, are moving to ban them ... is extremely important, because if it can be done in China it can be done in any country in the world."
Many products were launched by Xiaomi in India during 2016. You can view the list of products in 360 View. Check whether the product you own is present in this list.
Magneto-optic dynamics in a ferromagnetic nematic liquid crystal. We investigate dynamic magneto-optic effects in a ferromagnetic nematic liquid crystal experimentally and theoretically. Experimentally we measure the magnetization and the phase difference of the transmitted light when an external magnetic field is applied. As a model we study the coupled dynamics of the magnetization, M, and the director field, n, associated with the liquid crystalline orientational order. We demonstrate that the experimentally studied macroscopic dynamic behavior reveals the importance of a dynamic cross-coupling between M and n. The experimental data are used to extract the value of the dissipative cross-coupling coefficient. We also make concrete predictions about how reversible cross-coupling terms between the magnetization and the director could be detected experimentally by measurements of the transmitted light intensity as well as by analyzing the azimuthal angle of the magnetization and the director out of the plane spanned by the anchoring axis and the external magnetic field. We derive the eigenmodes of the coupled system and study their relaxation rates. We show that in the usual experimental setup used for measuring the relaxation rates of the splay-bend or twist-bend eigenmodes of a nematic liquid crystal one expects for a ferromagnetic nematic liquid crystal a mixture of at least two eigenmodes.
Sadaf Kanwal

Sadaf Kanwal is a Pakistani actress and model. She played the role of Sharmeen Mukhtiyar in the 2017 movie Balu Mahi. As a model, Kanwal has established a career and has been nominated for several awards, including Lux Style Awards and Hum Awards. She later portrayed a supporting role in the period drama Alif (2019).

Personal life

Kanwal is the granddaughter of Pakistani senior actress Salma Mumtaz, and the niece of Nida Mumtaz.

Filmography

Awards and nominations

References

External links
This invention relates to a flat panel display and, more particularly, to a display of the row-backlight column-shutter type. An article by T. J. Nelson, J. S. Patel and P. D. T. Ngo in Applied Physics Letters, vol. 52, No. 13, Mar. 28, 1988, pages 1034-1036, describes the basic concept and configuration of a flat panel display of the row-backlight column-shutter type. Further details of such a display are set forth by Nelson and Patel in an article in Display, Vol. 10, April 1989, pages 76-80. In both articles, an array of ferroelectric liquid crystals is described as forming an advantageous column-shutter component for the display. In the aforementioned second-cited article and in a copending commonly assigned application, Ser. No. 180,442, filed Apr. 12, 1988, now U.S. Pat. No. 4,924,215, various instrumentalities are described for making the row-backlight component of such a flat panel display. These instrumentalities include conventional arrays of electroluminescent, plasma or vacuum-fluorescent elements arranged in rows that are sequentially activated to emit light. Efforts have continued by workers skilled in the art aimed at trying to devise yet additional ways in which to make an array of emitters suitable for forming the row-backlight component of a flat panel display. These efforts have been motivated by a desire to improve the row-backlight component in such ways as by increasing its peak brightness and efficiency, by decreasing the persistence of a row after it is deactivated, by facilitating row-by-row addressability of the array and by making the component relatively easy to manufacture. It was recognized that these efforts, if successful, could provide a flat panel display whose attractiveness for important commercial applications would be significantly enhanced.
Introduction

I’m LCP, or Stasiek if you can pronounce that. Just a 20-year-old guy from Poland who spends way too much time in front of computers. That’s how all my potted plants end up dead.

My Journey

I’ve been using computers for as long as I can remember, playing Solitaire, The Settlers, and other simple DOS games, because that’s what my parents and grandma liked to play. I started with Win95, 98, and 2000, before learning about Linux. My interest in design was sparked by the original iPhone icons, which I loved, in contrast with my hatred of the Faenza icon theme; both have a fairly similar style yet widely different results. That’s how I began exploring and learned from there. Correspondingly, my Linux journey started back in 2007 when my dad showed me Ubuntu, and just like what I did with Windows 2000 before, my pastime became installing and reinstalling Linux alongside Windows in different configurations (I was apparently consumed by the concept of installation and configuration, which might explain my YaST obsession?). Later in 2010, I had a tough time with a machine that wouldn’t take any distro with the exception of openSUSE (although it did end up with a few Linuxrc errors). Besides, I really liked its GNOME 2 config back then; it was really user friendly yet powerful. I gave KDE a shot, but to this day I never really liked it.

Contributing, how it all started…

My first contribution came about because of my consistent and annoying complaining to Richard Brown on the Linux Gaming Discord about the sorry state of artwork in Tumbleweed. I didn’t like anything there; it seemed too dark, too boring, and stuff was barely visible due to contrast issues. He told me to contribute and make it better then, so I did. Around the same time, some other people from the Linux Gaming Discord and I created the openSUSE Discord, and I reused some assets from the Discord to create the new branding.
Even though my main focus has been artwork, I also take part in some coding, translations, and obviously testing. I enjoy all of it in general. It is a great way to make computing easier and more pleasant for other, less experienced users. Actually, to me, my most valuable contribution has been encouraging people to use openSUSE and contribute to it, while doing my best to help them out when needed. Otherwise, I wouldn’t have been able to provide anything on my own, because I rely on the community to actively help me out with their judgment, just as I help them out with mine.

Side projects

Outside of openSUSE I also work on Pixelfed and a collaboration among distro Discords (artwork for the Fedora and Gentoo Discords on top of the openSUSE one), and more recently I have been working on User Interface (UI) design for SuperTuxKart and some custom tiles for OpenSkyscraper in order to replace injecting the EXE file (but gamedev is hard, you know).

One thing that needs more attention in openSUSE?

Libyui-gtk needs more attention. It’s a library that was originally developed for YaST and then got dropped, but Manatools still heavily depends on it. Any contribution to its development is encouraged and will help bring it back home.

Gaming

I don’t play as often as I used to because I’m busy contributing, but I love Minecraft, The Settlers 2 and Solitaire Spider, whose terminal version was my very first open source software project.

Something I can talk about for hours

Recently, it’s been radio buttons. The design we use in UIs doesn’t make much sense compared to the real-life equivalent, as opposed to basically every other form element. But at the same time we can’t do much about it now that people are used to this one. Plus, I don’t see a proper replacement.

A lie about myself

I like dogs.

I’d like to add

Please contribute to https://github.com/openSUSE/branding/issues/93, every voice matters!
You can choose any profession in the entertainment industry, but if you are keen to become a critic then you must be ready to face the ire of the cine folks as well as their fans. This love-hate relationship has been going on for a long time, and with the advent of social media, the war of words and unwanted statements is aggravating the situation. Currently, there is talk that the fans of a big hero are searching for a film critic to kidnap him and give him a piece of their mind. It is heard that this critic is very friendly with a senior producer-cum-director, so the fans are waiting for the critic at the producer’s office. Incidentally, this film critic fought with two heroes blatantly. While one hero’s fans gave him a warning and left it there, the other hero’s fans have decided to kidnap this critic without any warnings. At this point, this is being termed mere speculation and it is not clear how true all this is. But if anything untoward happens it won’t escape the public or media eye, so let us wait and watch.
Hepsin, a cell surface serine protease identified in hepatoma cells, is overexpressed in ovarian cancer. Extracellular proteases mediate the digestion of neighboring extracellular matrix components in initial tumor growth, allow shedding or desquamation of tumor cells into the surrounding environment, provide the basis for invasion of basement membranes in target metastatic organs, and are required for release and activation of many growth and angiogenic factors. We identified overexpression of the serine protease hepsin gene in ovarian carcinomas and investigated the expression of this gene in 44 ovarian tumors (12 low malignant potential tumors and 32 carcinomas) and 10 normal ovaries. Quantitative PCR was used to determine the relative expression of hepsin compared to that of beta-tubulin. The mRNA expression levels of hepsin were significantly elevated in 7 of 12 low malignant potential tumors and in 27 of 32 carcinomas. On Northern blot analysis, the hepsin transcript was abundant in carcinoma but was almost never expressed in normal adult tissue, including normal ovary. Our results suggest that hepsin is frequently overexpressed in ovarian tumors and therefore may be a candidate protease in the invasive process and growth capacity of ovarian tumor cells.
1. Field of the Invention

The present invention relates to image forming apparatuses that read an original and form an image of the original that has been read, and to color shift correction methods thereof.

2. Description of the Related Art

In recent years, in order to increase the speed of image formation, the same number of developing devices and photosensitive members as the number of coloring materials have been provided in electrophotographic color image forming apparatuses, and an increasing number of color image forming apparatuses use a method (the tandem method) in which images of different colors are transferred successively onto an image carrying belt and onto a recording medium. This method can greatly increase throughput, but on the other hand color shift is produced, originating in poor uniformity or accuracy of the installed positions of the lenses of a deflection scanning apparatus, or in the accuracy of positioning when assembling the deflection scanning apparatus itself to the image forming apparatus main unit. That is, tilting or curvature is produced in the scanning lines and the extent of this varies for each color, thereby producing a problem of color shift due to positional shifting of each color on the transfer paper; as a result it becomes difficult to achieve high quality color images. On the other hand, since tilting occurs in an original itself due to the position in which the original is placed on the original platform in the original reading sections of color image forming apparatuses, it is necessary to perform correction for each original at the time of reading, which necessitates a considerable processing capacity. As a method for countering this color shift, Japanese Patent Laid-Open No.
2002-116394 for example describes a method in which an optical sensor is used at an assembly step of the deflection scanning apparatus to measure a magnitude of bending in the scanning lines, and the lens is fastened after it is mechanically rotated to adjust the bending of the scanning lines. Furthermore, Japanese Patent Laid-Open No. 2003-241131 describes a method in which the magnitude of tilting of the scanning lines is measured using an optical sensor at a step of installing the deflection scanning apparatus to the image forming apparatus main unit, and the deflection scanning apparatus is mechanically tilted to adjust the tilt of the scanning lines when installing to the apparatus main unit. Here, in correcting an optical path of an optical system, it is necessary to mechanically operate a correction optical system including a light source and an f-theta lens, and mirrors or the like on the optical path, then align the positions of test toner images. For this reason, the methods described in Japanese Patent Laid-Open No. 2002-116394 and Japanese Patent Laid-Open No. 2003-241131 require high-precision moving components, and this incurs much higher costs. Further still, optical path corrections in optical systems require time until correction is completed, and therefore although it is impossible to carry out corrections frequently, the shifting in the optical path length changes by being affected by temperature increases or the like in the mechanical units. For this reason, the influence of temperature increases in the mechanical units cannot be eliminated even though corrections are carried out at a certain point in time, and therefore it is difficult to prevent color shift by correcting the optical path of the optical system. On the other hand, Japanese Patent Laid-Open No. 
2004-170755 describes a method in which an optical sensor is used to measure magnitudes of tilting and curvature in the scanning lines, then bitmap image data is corrected so as to offset these and form a corrected image thereof. This method performs corrections electrically by processing the image data, and therefore it can handle color shift at a lower cost than the methods described in Japanese Patent Laid-Open No. 2002-116394 and Japanese Patent Laid-Open No. 2003-241131 in that mechanical adjustment members and adjustment steps during assembly are not required. Furthermore, in Japanese Patent Laid-Open No. 08-85237, image processing such as color processing and halftone processing is carried out, and raster image data is formed in a bitmap memory for each color component (C, M, Y, and K). After this, output coordinate positions of the image data for each color are automatically converted to output coordinate positions in which registration shifting has been corrected. And a configuration is disclosed in which positions of an optical beam modulated by a modifying means based on the converted image data for each color are modified by amounts smaller than the unit of the smallest dot of the color signal. However, in electrical color shift corrections, which are one method for handling color shift, it is necessary to execute the color shift corrections after executing skew corrections at the reading section side for an original that has been placed diagonally (hereinafter referred to as “skewed original”) in the original reading section. However, in ordinary skew corrections, the skew amount varies for each original reading, which is a problem in that high speed processing capabilities are necessary for calculating the tilting. 
Furthermore, when the processing is duplicated across skew correction processing and color shift correction processing, it is fully anticipated that deterioration in correction accuracy and reductions in calculation throughput may occur. Further still, there is a problem in that apparatuses without a skew correction function cannot execute this. Furthermore, techniques are conceivable for supplementing electrical color shift corrections in devices having a skew correction processing function; however, these are difficult to realize, since skew corrections are uniquely determined for RGB originals, while the correction amounts vary for each color of CMYK in regard to color shift corrections.
################################################################################
#
# intel-mediasdk
#
################################################################################

INTEL_MEDIASDK_VERSION = 19.4.0
INTEL_MEDIASDK_SITE = http://github.com/Intel-Media-SDK/MediaSDK/archive
INTEL_MEDIASDK_LICENSE = MIT
INTEL_MEDIASDK_LICENSE_FILES = LICENSE
INTEL_MEDIASDK_INSTALL_STAGING = YES
INTEL_MEDIASDK_DEPENDENCIES = intel-mediadriver
INTEL_MEDIASDK_CONF_OPTS = -DMFX_INCLUDE="$(@D)/api/include"

$(eval $(cmake-package))
Q: Xamarin.Forms ListView with simple primary text / detail text

The example on the Xamarin website doesn't have code that shows how to simply take a list of data objects and populate a ListView with TextCells with primary text and detail text. My code looks like this:

var newsListings = await App.Api.News.GetNewsAsync(true);
var simpleNews = new List<TextCell>();
foreach (var newsData in newsListings.news)
{
    simpleNews.Add(new TextCell()
    {
        Text = $"{newsData.displayText}",
        TextColor = Color.SlateGray,
        Detail = $"{MonthHelper.GetShortMonth(newsData.displayDate.Month, '.')} {newsData.displayDate.Day}, {newsData.displayDate.Year}",
        DetailColor = Color.DarkGray
    });
}
NewsListing.ItemsSource = simpleNews;

Simple XAML:

<ScrollView>
    <ListView x:Name="NewsListing"></ListView>
</ScrollView>

The output is a ListView that says Xamarin.Forms.TextCell 27 times...

A: You bind a list by creating an IEnumerable, assigning it to ItemsSource, and then using a template to specify which properties of your data to display. A ListView renders each item through its ItemTemplate; handing it a list of TextCell objects directly means no template is applied, so each item is rendered as its ToString() value, which is why you see the type name repeated.

public class Data
{
    public string Primary { get; set; }
    public string Secondary { get; set; }
}

// in your page's OnAppearing
listView.ItemsSource = new List<Data>()
{
    // initialize your list
};

// XAML
<ListView x:Name="listView">
    <ListView.ItemTemplate>
        <DataTemplate>
            <TextCell Text="{Binding Primary}" Detail="{Binding Secondary}" />
        </DataTemplate>
    </ListView.ItemTemplate>
</ListView>

Note that TextCell's detail property is named Detail (not DetailText).
The House Oversight and Government Reform Committee has directed Reddit to preserve deleted posts alleged to have been written by the tech specialist who reportedly scrubbed Hillary Clinton’s emails from her private server. Those 2014 posts, which were flagged by Reddit users earlier this week, asked for help on how to hide a certain “VIP’s” email address. A committee aide told FoxNews.com that they have sent a preservation request to Reddit, and are interested in verifying the authenticity of the posts. Reddit did not respond to a request for comment from FoxNews.com. The posts, written by a user named "stonetear," are alleged to belong to Paul Combetta, the tech specialist with Platte River Networks who reportedly deleted Clinton’s emails and has received immunity from the Justice Department. The development, first reported by The Hill, indicates the committee is taking seriously the claims by citizen-detectives who unearthed the posts. “Stonetear” began deleting his comments and posts after they were unearthed. However, users managed to archive them using “archive.is,” an archiving website frequently used on Reddit. Chairman Jason Chaffetz, R-Utah, told The Hill that Reddit is cooperating with its investigation. The allegations "fit the pattern of what we think was happening,” Chaffetz told The Hill. Reddit users typically do not link real names to their accounts, but the website Etsy shows the profile “stonetear,” created in 2011, is registered to the name Paul Combetta.
“Stonetear”’s posts include one from July 24, 2014 – the same month that the State Department first asked Clinton aide Cheryl Mills to turn over Clinton’s work-related emails from her personal server – where he asks other users how to “strip out a VIP's (VERY VIP) email address from a bunch of archived email.” Multiple users responded saying it was not possible, while another warned that if Microsoft Exchange allowed it, “it could result in major legal issues.” “Stonetear” responded: “The issue is that these emails involve the private email address of someone you'd recognize, and we're trying to replace it with a placeholder address as to not expose it.” In another post from Aug. 26, 2013, he asks for code that would delete thousands of emails from a server – almost two years before Combetta deleted emails from Clinton’s server, but just a few months after his company was hired to run her server. According to the FBI report on Clinton’s email issues, Clinton aide Cheryl Mills told the FBI that Clinton decided in December 2014 that she did not need emails older than 60 days, and Mills asked someone to implement that. The name of the person she asked was redacted by the FBI but later determined by The New York Times to be Combetta. FoxNews.com’s Adam Shaw, Judson Berger and Maxim Lott contributed to this report.
Dynamic thiol/disulphide homeostasis and pathogenesis of Kawasaki disease. Kawasaki disease (KD) is an acute, self-limited, systemic vasculitis of unknown etiology. In the present study, we investigated whether there is a relationship between KD and dynamic thiol/disulphide homeostasis. This case-control study involved KD patients and healthy controls. Plasma total, native and disulphide thiol and the disulphide/native, disulphide/total and native thiol/total thiol ratios of all patients and the control group were analyzed simultaneously. A total of 20 patients with KD (male/female, 12/8) and 25 age- and gender-matched healthy controls (male/female, 12/13) were evaluated. Native, total thiol and native thiol/total thiol ratio were significantly lower in KD patients than in the control group (P < 0.001). In contrast, disulphide thiol, disulphide/native thiol and disulphide/total thiol ratios were significantly higher in KD patients than control subjects (P < 0.001). In KD patients with coronary artery lesion (CAL), the native thiol and total thiol were significantly lower than in KD patients without CAL. In KD patients with CAL, the ratios of disulphide/total thiol and disulphide/native thiol were significantly higher than in those without CAL (P = 0.02 and P = 0.02, respectively), whereas the ratio of native/total thiol was significantly lower (P = 0.02). The KD patients had lower plasma thiol (native and total) and higher disulphide thiol than controls, indicating that dynamic thiol/disulphide homeostasis might be an important indicator of inflammation in KD. Alteration and shifting of thiol/disulphide homeostasis to the oxidized side are correlated with the pathogenesis of KD and CAL.