Grégory Tarlé
Grégory Tarlé (born 11 April 1983) is a retired French ice hockey player and coach, who coached the French national team at the 2019 IIHF Women's World Championship.
References
External links
Category:1983 births
Category:Living people
Category:French ice hockey coaches
Category:French ice hockey forwards
Category:Sportspeople from Rouen
Tag Archives: Life
the music is reverberating around the house. and i sit here alone with a pint of rocky road ice-cream, drenched in my own tears.
i thought this crying was because of “pms”. but that period has passed. and i still weep puddles of tears every day, or night, as i have for the past two weeks. when i shut the door of today behind me – be it laundry-hanging, errands-running, friends hanging-out, movie-watching, cooking or translating – when i am left alone with nothing and no one but me, it is hard to suppress the feeling of sadness.
i cry because i am here in Chiang Mai, and my family is there in Bangkok. i cry because i have wonderful memories with this guy, whom i want to continue building more memories with. but the timing is not right. and we just have to press “pause” on this beautiful friendship. i cry because God tells me to wait, when i would rather race and reach the finish line. i cry because i feel so helpless, and all i can do is accept God’s will. i cry because i am paying a high price for full obedience, and even though total surrender to Jesus is sweet and i would not trade anything in the world for this peace, this submission to Christ is. still. hard.
this year, God has called me to do unimaginable, out-of-comfort-zone things. leaving my job. being rid of financial security. telling me to wait. pulling the people i hold dear away. baring me naked, physically, emotionally, socially and spiritually. all of these, so i may be completely, totally, wholeheartedly dependent on Him.
the only thing i am holding on to is the vision i received from Him while i spent time up in the mountain. God, enthroned, and me at His feet with Jesus and the Spirit wrapping their presence around me. the sense of safety. and the whispers of Him, “while you wait, worship Me. serve Me.”
so i am here, sitting on the floor, leaning against the wall, with an empty pint of rocky road ice-cream (yeah, i finished all of it). my face is still streaked with trails of tears, and all i have in my chest is tired sobs.
but because He is God, and i am me, i can say, even with gritted teeth, that “Lord, You can have your way…in me.”
because Jesus Christ has already paid the price, i am willing to be broken and molded new…so i can strive to be perfect, just like Him, by His power and mercy.
because although there are a million ways we can choose to live, doing what God thinks is right is the best way to live.
and even though i will cry a thousand tears for the next 143 days, because i am human and can’t get over my obnoxious pride, thinking that i deserve better, i am still willing…to be at His disposal.
i should be in bed by now. instead, i am wide awake, listening to the clicking sound of my own keyboard typing. it is in the quiet of night i get to explore the inner part of me, the part that was hidden and cast aside during the rushed hours of the day.
have you ever woken up and started doing things instinctively? we jump out of bed, brush our teeth, take a bath, pull our outfit from the closet, get dressed, eat breakfast, drive to work, go about our day, drive home, eat dinner, read briefly, yawn, slip under the blanket and go to sleep. is it the same with you? most days, i do exactly that.
one of my dad’s favorite childhood teachings was – be present. be there with yourself when your brain thinks, your mind plans, your hands touch, your heart feels and your lips speak. know what is being done at the moment and aim at the results. picture what you want the final outcome to be.
dad and i loved playing badminton together. he taught me how to hold a racket, where to position myself on the court and how to serve birdies. he was an athlete. there was no doubt he was far more advanced than i was. but i dared myself to beat him. we were both competitive when it came to badminton. one evening, in the heat of my losing battle, when sweat and tears blurred my eyes and my hands were shaking, dad shouted from the other side of the court,
“mink! don’t just hit birdies aimlessly! look at me, be present and serve!”
though i was young and didn’t even consider badminton as a career, i took the advice. i watched dad intently, then looked at the birdie in my hand, threw it up and hit it with all my might…
if it had been tennis, i would have scored an ace.
from that day on, i have tried to live by dad’s teaching – to be present in whatever circumstances, wherever i find myself and whoever i am with. the moment is there for us to catch but, if we blink, if we don’t pay attention, it will slip away. and we might never get it back.
i am practicing being present every day. it is never easy, especially for us, women, who are multi-taskers. but it’s worthwhile. i literally instruct myself to brush my teeth, comb my hair, wash my hands, plan my work and do one thing at a time, as almost impossible as it might be. i also try to drop everything i am doing at the moment to be there for my family members or friends. sometimes the voices you listen to aren’t heard until you turn and look into their eyes.
working on being present is, to me, another form of meditation. it slows down my living pace, raises a sense of awareness of those around me and also directs my path. when your heart and head are clear, you hear the voice of the Lord even clearer.
the Bible scriptures show us how God is always present with us:
God is our refuge and strength, an ever-present help in trouble. (psalm 46:1)
“Am I only a God nearby,” declares the LORD, “and not a God far away? Who can hide in secret places so that I cannot see them?” declares the LORD.
“Do not I fill heaven and earth?” declares the LORD. (jeremiah 23:23-24)
yes, i know that, though we were made in His image, we are not God. and we can’t fill the whole of heaven and earth the way He does. BUT we can make an impact in someone’s world by being present in their lives. your mom and dad. your siblings. your spouses. your children. your friends. your lovers. your colleagues. your neighbors. how can we influence these people if we are distracted and drawn away all the time?
what is getting your attention now? your facebook friends or the person sitting next to you asking when you can go out for lunch? your unfinished marketing plan or your children’s plea for you to just take a look at their drawing? the corrupting influence over 100 people at work or the honest life-changing impact on a sunday-school child’s life?
living is making choices. even though i am still young and foolish, believe me, i have been through moments of complete absence. i never realized how much i had missed until everything passed away.
the rabbit leans back against the moon with all its grace and peace. and i sit on my small balcony staring at it with jealousy…trapped in fear and disappointment in this world of imperfection.
i take comfort in the ordinary things of life: the swaying branches of the tamarind trees outside my balcony, birds with their chirping and swoopy flights in search of food, the bicycle ride in the evening, the laundry hanging in my room and grey’s anatomy at 7:00 pm. although more things go wrong than right, there are things that still make sense.
my life is like a room after a party, littered with cups, bottles, plastic bags and garbage. my heart is a mess. and i wonder when that good day’s gonna be mine. i just exist. i don’t live. again, i question myself how on earth i ever got here. one thing i have learned from this is – when you try to control life, it bounces away from you. the tighter you grip, the quicker it gets loose.
i am tired of living. i am tired of having to run this race. i am burnt out, exhausted and empty. but God’s mercy is like a sweet drop of honey. romans 8:1 says,
“Therefore, there is now no condemnation for those who are in Christ Jesus, because through Christ Jesus the law of the Spirit who gives life has set you free from the law of sin and death.”
and romans 8:37-39,
No, in all these things we are more than conquerors through him who loved us. For I am convinced that neither death nor life, neither angels nor demons, neither the present nor the future, nor any powers, neither height nor depth, nor anything else in all creation, will be able to separate us from the love of God that is in Christ Jesus our Lord.
i know i don’t deserve any good things. i have sinned against the Lord and i should be condemned. but this is the promise i need to hold on to because it is a glimpse of life. surely, the miracle will happen. when it comes, i will make a dash for it and live with hope again.
humbleness is shown not only in our actions but also in our attitude. we could serve a mug of hot water to a coughing person, stand alongside and offer comfort to those who weep, or quietly listen to the teaching, suggestion or correction of our friends. but if, deep down in our hearts, we don’t repent, then the act of humbleness is nothing but disguised pride.
God’s love covers a multitude of sins. last night, while i was looking through my friend’s photo albums, i came across this picture.
i sat pondering the love and protection bestowed upon the little girl and how content she was in the Father’s arms…and tears took me by surprise. the song “all my desire” by ray watson was playing and i could hear God speak to me, “all my desire, all i require, all that i need is in You”.
it was a simple song with only 3 sentences. but its message was essential to my ears and touching to my heart. why is christian life so complicated? sometimes all one can do is sit in her quiet corner with a repentant and broken heart…and simply worship. let the Spirit soak Himself into her barren life. let Him heal her wounded heart with salve. and let herself be loved. because we fail so much. we crash into immovable rocks and stumble hard on the ground. and our first instinct is to blame others, or to be silent and play the martyr…not out of humility but of pride…just to get attention. and then we run through a blaze of fire. we are burnt by our sinfulness. we can’t escape it. and at the end of the day, we crumple onto the floor, face streaked with unstoppable tears…and lose all the pride we held so tightly during the day.
and we realize that all we ever need is in Jesus.
there is nothing worse or more frightening than to be caged in fear. God is challenging me to accept myself the way i am and also to take His love as it is. i am afraid of discovering who i am turning out to be. but i have to believe that the Lord is creative and His hand will never fail me.
“Lord, this change freaks me out. i don’t know what to do with myself. i seem to fail at everything. i am haunted by guilt and fear. it has been difficult, Father. but now i come to You…asking You to heal me, cure me and make me whole again. i give this self to You. please do unto it as You will. i thank You for You. when all else falls apart, i can count on You to be a good listener, a loving Father and a righteous Judge.”
it was a full weekend. i half hoped that i would be able to get lazy, lie around in my room with both TV and computer on, eat junk food and not have to do laundry. well, i did get to do all those things…for three hours. the rest of my saturday and sunday i spent at church.
having pulled myself out of comfy cushions, i quickly threw a t-shirt and a pair of shorts into my tote bag along with toothbrush, toothpaste, yancey’s “soul survivor” and the bible. i was running late so i shuffled my legs as fast as i could. i hopped on a red truck, hopped off and hopped on a yellow truck (yes, chiang-mai has colorful transportation. there are red, yellow, green, blue and white. i wish they’d add pink and purple to the transportation system). as i strode toward the church, something amazing happened.
i was feeling grumpy and poopy because i felt that coming to church on a weekend wasn’t the kind of fun i was looking for at the moment. i also had that nagging sense of guilt and wanted to hide from Him rather than go into His presence. as i got closer to the church, i became more aware of my surroundings. the green rice plants were swaying in the evening breeze. the clouds were easing slowly as the sun started to set. and a half rainbow arched on the eastern sky over the lahu church’s green roof. God was inviting me into His gate…His kingdom. what a contrast! the loving God was welcoming a bad-tempered woman like me! grace…so sweet.
anyway, the youth group had a fast-and-pray night on saturday. it was the first time for all of us to fast and pray together at church. we were psyched up and energetic for the first two hours. for the next two hours, i saw sleepy eyes, yawning and even some nodding heads. 🙂 we had a great time of worship and prayer. one of our prayer points was the flooding in thailand, which gets more severe day by day. it is one issue i want you to join in interceding for, for our brothers and sisters who have lost their homes, farms and loved ones. there will be a lot of homeless people and hungry stomachs once rainy season is over.
today, we girls, still groggy, slipped out of our blankets and sleeping bags (someone set the alarm clock for the middle of the night. and that someone slept so deeply that she didn’t even hear the alarm go off. everyone except her and her friend was woken out of sleep by the constant ringing). at 7:00am, we had morning service. then at 10am, we had the usual service. i sang 2 special items with the youth group and the church’s choir band. it was neat. then in the afternoon, we had fellowship with the youth group from our neighboring church. we rode in two trucks (my skin was toasted within 15 minutes. the sun was scorching. there was no way to hide from it). when we got there, about ten people were already waiting. we danced, sang and played games together. then we worshipped the Lord. i got goosebumps. that gathering was genuine. although we barely knew each other, when God’s spirit was upon us, we were a band of brothers and sisters in Christ. we shouted praises to His name. we sang at the top of our lungs (i hadn’t done that in a very long time. almost 3 years). we prayed together. it was neat…just being in that place and worshipping God with the people i love. this kind of love didn’t come about through time and endurance. it was spontaneous, coming from the fact that we were God’s chosen people and we were there because God wanted us to be there. and we were siblings.
after youth group, we came back to our church and hung out. i learned two new chords from my karen-burmese friend. he is a great musician. i enjoyed getting better acquainted with him. he’s been coming to the church for a month or even more…but i’ve just started to feel comfortable enough being around him and goofing around with him. yeah, i take my time with friendships and relationships. then we had the evening service and had dinner. then here i am…home at last.
it was a long weekend un-separated from the church. but i’m grateful to be back and active at the church again. i love spending time there…and i love the people there…adults, youth or even children.
i was supposed to go to the youth group meeting this afternoon. instead, i sat down and reached for my notebook computer. my 1st excuse for not wanting to go is that it is too far. 2nd, it’s saturday and i need rest. 3rd, i’m too lazy to take the bus rides. it all summed up to one decided answer – i am staying home. what a lazy bum!
i recall typing this sentence in past posts but i will do it again – i really miss home, especially on weekends! it’s not that my activities would be much different from what i do here. saturday at home means sleeping in, eating, chatting with mom and my sisters, maybe going out to Big C supermarket for a walk, or if i’m up to it, i will even take the long bus ride to downtown bangkok. but the fun side of being home is that after i wake up, i get to bug my sisters about being sleepy-heads and drag them out of bed. i get to eat mom’s breakfast and go out with her for lunch. when i walk to Big C, my sisters will accompany me. and when i take a trip downtown, there are many people to watch along the ride and, at the final destination, my friend will be waiting.
chiangmai, as much of a home as it is to me now, still doesn’t bring the comfort i need when i need it. i LOVE chiangmai and i’m not planning to leave soon. but my problem is i’m lonely, even among people…even in the sea of faces. i do have life and i have got friends here….but i’m a bangkoker at heart (okay, annie, my friend who is actually from bangkok, is going to pick at me again because she says that i’m from nonthaburi, a metropolitan area. not bangkok. but i’ll save the story for another time. ^^) and, y’know, no place is like home.
meet annie. she’s a tour and visit specialist at compassion now. 🙂
this is a polite face. when you're close enough, you get many different kinds of expressions. hehe.
yet i need to be open-minded. being in chiang-mai, i take joy in my own comfortable space with cable TV, internet, books and a clean fridge; the view of mountains even when i’m right in the heart of the city; a few good companions from work and church with whom i have shared meaningful memories; easy trips to the woods and a decent job. God has blessed me abundantly.
there is no solid point i’m trying to make this afternoon. i just have to write and to get my thoughts and feelings out. but i do hope that someone hears me. i need some meaning in my life, someone to live for and some clear directions for day-to-day life. and, yes, i remember writing about “passion” 2 days ago. i’m not losing my grasp of God. i just need some tangible moments right now.
family's meal never gets boring. added a friend, it's even more superb! from left: mo, mai, mom and my friend, manna. i miss you guys.
i have used too much of my head. i have been trained to plan ahead, to get my thoughts organized and to make things productive. but now…i just want to use my heart to write. i am now closing my eyes and typing from the deepest part of my heart.
i am struggling. i feel depressed. i am deep in questions, “what am i doing?”…”am i still fit for what i do?”…”would i be better off somewhere else?”. being in a corporate organization has pros and cons.
the good parts are training, empowerment, discipline, new experiences and many more. having worked here, God has opened my world to another side of the country and opened my heart for the karen people. i get to travel. my 1st official trip out of the country was to china, when i had training with colleagues from america, india, indonesia and the philippines. i get to use my language skills to serve God and His people. i am living and working closely with christians. He answered the prayer i asked Him before i graduated, “three things, God. travel. use my linguistic talent to serve You. and a christian organization.” what more do i need to ask?
but you can’t have the good without seeing any faults. i haven’t had much time for myself. i am being trained to be someone who is not me. i miss my old self…the girl who cared for others when they were in need….the person who was compassionate and could understand what others went through…the servant of God who was not bitter or resentful towards the world. i might not be as perfect as i should be but i was myself…and i loved that self.
now…i am weary. i feel like i’ve come to another step of life, a step higher. it feels so cold. i appreciate and cherish the work i do now because i know how much impact my work can make on children’s and other people’s lives all around the world. i am grateful for the investment and trust people have showered on me. photography. writing. trainings. but there are so many battles going on. and i’m losing the true person that i am. the more i try to be better, the more i sense failure. my thoughts are consumed with how imperfect i am, how much more i have to live up to and how i will never be able to do it well.
living is not out of passion but of obligation and duty. i think of the biblical patriarchs, how many of them served the Lord but never got to see the promises made to them. what did they hold on to? the only answer i can think of is the faith in the Lord that got them that far.
i don’t have any answers to the struggles i am battling with. i don’t know whether i’m in the right place or not. i used to know…i was quick to hear His voice…but now…my heart is hardened. the constant injustice that happens to children and women. the bad guys who still reign and rule. the wretchedness Satan brought upon this world and the selfishness that came with human nature. i must be on the wrong path. the more i serve, the more hopeless i become. it isn’t supposed to be this way. if i truly serve the Lord, i should be joyful and hopeful, shouldn’t i?
the world is so vast and i am so small. what right did i have to think i could change it?
but my eyes have seen too many witnesses…how God CAN turn the impossible into the possible. you all know so well from the bible. the wall of jericho. gideon’s incredible victory. God’s protection over david’s life. five loaves and two fish. broken prisons and the shouts of glory.
then there are many real-life stories. i cannot deny that God is here with me. but i am so lost. i don’t want to do anything but find somewhere quiet…apart from people and work…a place where i don’t have to think about earning money or where to find food. the place where God and i meet alone.
a prodigal is still wandering out in the desert and trying to find her way home.
apart from God there is no lasting quenching of our spiritual hunger and thirst.
each of us was created in the image and likeness of God. we were made for God’s fellowship, and our hearts can never be satisfied without His communion. just as iron is attracted to a magnet, the soul in its state of hunger is drawn to God. – billy graham
…the huay bong village. that deep yearning…that longing for something unreachable is there. although i struggled with the language and with my sense of self-worth, the place drew me nearer to God. every night, i would look forward to shutting the wooden door behind me, crawling into the mosquito net and lying down on the hard mattress. i was eager for that moment because i knew that i would meet God there. in my sorrow and fear; in my disappointment with people and pain, the Spirit revealed Himself so tangibly to me.
in the dark room, under my flashlight, i pored over His comfort and promises. they were alive and spoke to me directly. i treasure such moments. this desire is indescribable. but imagine…when you have met with God, when everything else fades, and you know that this is ultimately “the” moment…and then you’re back to normal, you just constantly want to be back there again with the one you love. i guess that’s a rough version of what i feel.
yes, the conditions were tough. i was bitten by lice and insects. the enjoyment in leisure time wasn’t air-conditioned malls or internet but the interaction with neighbors, the run in open fields in the morning and the sit by the creek with the breeze kissing my cheeks.
but, surprisingly, i was okay with it.
i found Him not only in my dark bedroom or on my wet pillow but also in the cracks and wrinkles of the elderly’s ancient faces, the infant’s cackle, the moo of cows and water buffalos, the 6 smelly fish and 3 lemongrass, the sincere apology of the one i’ve come to appreciate and love and the foreign conversation i’m still getting used to. through the people and adventures, i experienced God’s love.
the knitted connection between people and nature is what the Lord intends for me. it helps me to understand the triune God better. we can’t be without one another.
…never measures success by how well things are going. instead, it measures success by a life centered in God’s will.
…never puts its own needs first. instead, it always thinks of others first.
…never looks to its own capabilities to solve a problem. instead, it relies fully on God’s power for guidance and success.
this small excerpt resonates with my current experience. as i mentioned yesterday, the sense of failure continues to nag at my heart. but how sweet God’s grace is! i don’t need to be afraid that i’m not living up to the bar. there are standards in life, at work and even at church. however, the only standard we have to live up to is the Lord’s.
God’s standard is not a long list of rules and to-dos. there is only one thing He requires from us – obedience.
a flip to the next page in “faith that breathes” brings the “real journey” section by toby mac. he speaks of success this way:
“a lot of times they see dc Talk and me as just ‘big business and lights.’ but we know that’s not real life. the real life is who we are in Jesus and how we’re living that day to day.”
if success means doing everything one can, even neglecting his family or stabbing his friend in the back, so that he can be at the top of everything, it’s a failure.
if success means putting on a fake smile when his life is shattering to pieces, it’s a lie.
if success means being responsible and strict but having no time to relax, it’s not living.
success means living a Christ-centered life and committing to obeying His command and will even to the point of ridicule. that’s faith.
“brothers, think of what you were when you were called. not many of you were wise by human standards; not many were influential; not many were of noble birth. but God chose the foolish things of the world to shame the wise; God chose the weak things of the world to shame the strong. He chose the lowly things of this world and the despised things – and the things that are not – to nullify the things that are, so that no one may boast before Him.” 1 corinthians 1:26-29
every human is flawed from birth, in spite of our intelligence or social status. that is why grace is so sweet. sadly, many people are either unaware of or simply forget this fact as they grow up. so they go off in separate directions and boast their way to the end of their lives…to a doomed eternity.
but the good news is God’s love and mercy endure forever. His kindness has brought many people from all over the world to repentance. and i strongly believe that one day the prophecy in the Bible will be fulfilled…”that at the name of Jesus every knee should bow, in heaven and on earth and under the earth, and every tongue confess that Jesus Christ is Lord, to the glory of God the Father.” whether or not everyone on this earth will be Christ followers, everyone will be obliged to kneel before Him. not only because of His fearful supremacy and incomparable power but also of His everlasting goodness, immeasurable grace and enduring love.
to end today’s post, i am putting up some silly pictures of my time in bangkok with my family and my friend, manna, during songkran festival last week.
i just finished writing a well-thought-out post of 1,000-plus words about survival and fear. there was even some scientific thought in it, which is very unusual for a sentimental person like me. then i clicked “publish”…only to have it ALL lost because of an internet connection error. now i feel more than a failure. i spent 2 hours writing that piece…hoping to at least accomplish something…to have this sense of triumph.
alas.
i’m battling with contentment and self-worth. i keep wondering if i am doing the right thing and am in the right place. i’m reminded of the birthday card my friend gave me this march. she said, “most important thing of all, keep being on your knees.”
though i feel like a failure, lost and frustrated, God promises peace when we seek His face. in the lead-up to Easter this year, i want nothing but to press into His presence. fear and worries may grip tightly at my heart…giving me no space to breathe. but here’s the devotional passage i read this morning:
“when you live to please God and to keep the inner person healthy, you discover that life gradually becomes unified. instead of running here and there, trying to do everything and please everybody, you calmly face the challenges of each day without feeling pulled apart. you find it’s much easier to make decisions because life is centered on one thing: seeking first “the kingdom of God and His righteousness.” (warren w. wiersbe, from “the twenty essential qualities”)
let’s kneel down and be with Him.
“even now,” declares the Lord, “return to Me with all your heart, with fasting and weeping and mourning.” rend your heart and not your garments. return to the Lord your God, for He is gracious and compassionate, slow to anger and abounding in love, and He relents from sending calamity. (joel 2:12-13)
Patient safety challenges in low-income and middle-income countries.
The global burden of surgical disease is significant and growing. As a result, the vital role of essential surgical care and safe anesthesia in low-income and middle-income countries is gaining increasing attention. Importantly, vast disparities in access to essential surgery and safe anesthesia exist. In this review, we summarize the current knowledge surrounding the global crisis of inadequate anesthesia capacity and barriers to patient safety in low-income and middle-income countries. The major patient safety challenges in low-income and middle-income countries include a lack of well trained anesthesia providers, inadequate infrastructure, equipment, monitors, medicines, oxygen, and blood products, and an absence of meaningful data to guide policies and programs. Explicit mention of essential surgery and safe anesthesia in the Post-2015 Development Agenda is a critical step forward in advancing the cause of global perioperative care. Tracking surgical and anesthesia outcomes with a metric, such as the perioperative mortality rate, must be required at the hospital, country, and global level to guide improvement of surgical and anesthetic interventions aimed at the burden of surgical disease.
---
abstract: |
    Inference problems with conjectured statistical-computational gaps are ubiquitous throughout modern statistics, computer science, statistical physics and discrete probability. While there has been success evidencing these gaps from the failure of restricted classes of algorithms, progress towards a more traditional reduction-based approach to computational complexity in statistical inference has been limited. These average-case problems are each tied to a different natural distribution, high-dimensional structure and conjecturally hard parameter regime, leaving reductions among them technically challenging. Despite a flurry of recent success in developing such techniques, existing reductions have largely been limited to inference problems with similar structure – primarily mapping among problems representable as a sparse submatrix signal plus a noise matrix, which is similar to the common starting hardness assumption of planted clique ($\pr{pc}$).
The insight in this work is that a slight generalization of the planted clique conjecture – secret leakage planted clique ($\pr{pc}_\rho$), wherein a small amount of information about the hidden clique is revealed – gives rise to a variety of new average-case reduction techniques, yielding a web of reductions relating statistical problems with very different structure. Based on generalizations of the planted clique conjecture to specific forms of $\pr{pc}_\rho$, we deduce tight statistical-computational tradeoffs for a diverse range of problems including robust sparse mean estimation, mixtures of sparse linear regressions, robust sparse linear regression, tensor PCA, variants of dense $k$-block stochastic block models, negatively correlated sparse PCA, semirandom planted dense subgraph, detection in hidden partition models and a universality principle for learning sparse mixtures. This gives the first reduction-based evidence supporting a number of statistical-computational gaps observed in the literature [@li2017robust; @balakrishnan2017computationally; @diakonikolas2017statistical; @chen2016statistical; @hajek2015computational; @brennan2018reducibility; @fan2018curse; @liu2018high; @richard2014statistical; @hopkins2015tensor; @wein2019kikuchi; @azizyan2013minimax; @verzelen2017detection].
We introduce a number of new average-case reduction techniques that also reveal novel connections to combinatorial designs based on the incidence geometry of $\mathbb{F}_r^t$ and to random matrix theory. In particular, we show a convergence result between Wishart and inverse Wishart matrices that may be of independent interest. The specific hardness conjectures for $\pr{pc}_\rho$ implying our statistical-computational gaps all are in correspondence with natural graph problems such as $k$-partite, bipartite and hypergraph variants of $\pr{pc}$. Hardness in a $k$-partite hypergraph variant of $\pr{pc}$ is the strongest of these conjectures and sufficient to establish all of our computational lower bounds. We also give evidence for our $\pr{pc}_\rho$ hardness conjectures from the failure of low-degree polynomials and statistical query algorithms. Our work raises a number of open problems and suggests that previous technical obstacles to average-case reductions may have arisen because planted clique is not the right starting point. An expanded set of hardness assumptions, such as $\pr{pc}_\rho$, may be a key first step towards a more complete theory of reductions among statistical problems.
author:
- 'Matthew Brennan [^1]'
- 'Guy Bresler [^2]'
bibliography:
- 'GB\_BIB.bib'
title: |
Reducibility and Statistical-Computational Gaps\
from Secret Leakage
---
\[part:intro\]
Introduction {#sec:1-intro}
============
Computational complexity has become a central consideration in statistical inference as focus has shifted to high-dimensional structured problems. A primary aim of the field of mathematical statistics is to determine how much data is needed for various estimation tasks, and to analyze the performance of practical algorithms. For a century, the focus has been on *information-theoretic* limits. However, the study of high-dimensional structured estimation problems over the last two decades has revealed that the much more relevant quantity – the amount of data needed by *computationally efficient* algorithms – may be significantly higher than what is achievable without computational constraints. These *statistical-computational gaps* were first observed to exist more than two decades ago [@valiant1984theory; @servedio1999computational; @decatur2000computational] but only recently have emerged as a trend ubiquitous in problems throughout modern statistics, computer science, statistical physics and discrete probability [@bottou2008tradeoffs; @chandrasekaran2013computational; @jordan2015machine]. Prominent examples arise in estimating sparse vectors from linear observations, estimating low-rank tensors, community detection, subgraph and matrix recovery problems, random constraint satisfiability, sparse principal component analysis and robust estimation.
Because statistical inference problems are formulated with probabilistic models on the observed data, there are natural barriers to basing their computational complexity as average-case problems on worst-case complexity assumptions such as $\text{P}\neq \text{NP}$ [@feigenbaum1993random; @bogdanov2006worst; @applebaum2008basing]. To cope with this complication, a number of different approaches have emerged to provide evidence for conjectured statistical-computational gaps. These can be roughly classified into two categories:
1. **Failure of Classes of Algorithms:** Showing that powerful classes of efficient algorithms, such as statistical query algorithms, the sum of squares (SOS) hierarchy and low-degree polynomials, fail up to the conjectured computational limit of the problem.
2. **Average-Case Reductions:** The traditional complexity-theoretic approach showing the existence of polynomial-time reductions relating statistical-computational gaps in problems to one another.
The line of research providing evidence for statistical-computational gaps through the failure of powerful classes of algorithms has seen a lot of progress in the past few years. A breakthrough work of [@barak2016nearly] developed the general technique of pseudocalibration for showing SOS lower bounds, and used this method to prove tight lower bounds for planted clique ($\pr{pc}$). In [@hopkinsThesis], pseudocalibration motivated a general conjecture on the optimality of low-degree polynomials for hypothesis testing that has been used to provide evidence for a number of additional gaps [@hopkins2017efficient; @kunisky2019notes; @bandeira2019computational]. There have also been many other recent SOS lower bounds [@grigoriev2001linear; @deshpande2015improved; @ma2015sum; @meka2015sum; @kothari2017sum; @hopkins2018integrality; @raghavendra2018high; @hopkins2017power; @mohanty2019lifting]. Other classes of algorithms for which there has been progress in a similar vein include statistical query algorithms [@feldman2013statistical; @feldman2015complexity; @diakonikolas2017statistical; @diakonikolas2019efficient], classes of circuits [@razborov1997natural; @rossman2008constant; @rossman2014monotone], local algorithms [@gamarnik2017limits; @linial1992locality] and message-passing algorithms [@zdeborova2016statistical; @lesieur2015mmse; @lesieur2016phase; @krzakala2007gibbs; @ricci2018typology; @bandeira2018notes]. Another line of work has aimed to provide evidence for computational limits by establishing properties of the energy landscape of solutions that are barriers to natural optimization-based approaches [@achlioptas2008algorithmic; @gamarnik2017high; @arous2017landscape; @arous2018algorithmic; @ros2019complex; @chen2019suboptimality; @gamarnik2019landscape].
While there has been success evidencing statistical-computational gaps from the failure of these classes of algorithms, progress towards a traditional reduction-based approach to computational complexity in statistical inference has been more limited. This is because reductions between average-case problems are more constrained and overall very different from reductions between worst-case problems. Average-case combinatorial problems have been studied in computer science since the 1970’s [@karp1977probabilistic; @kuvcera1977expected]. In the 1980’s, Levin introduced his theory of average-case complexity [@levin1986average], formalizing the notion of an average-case reduction and obtaining abstract completeness results. Since then, average-case complexity has been studied extensively in cryptography and complexity theory. A survey of this literature can be found in [@bogdanov2006average] and [@goldreich2011notes]. As discussed in [@Barak2017] and [@goldreich2011notes], average-case reductions are notoriously delicate and there is a lack of available techniques. Although technically difficult to obtain, average-case reductions have a number of advantages over other approaches. Aside from the advantage of being future-proof against new classes of algorithms, showing that a problem of interest is hard by reducing from $\pr{pc}$ effectively *subsumes* hardness for classes of algorithms known to fail on $\pr{pc}$ and thus gives stronger evidence for hardness. Reductions preserving gaps also directly relate phenomena across problems and reveal insights into how parameters, hidden structures and noise models correspond to one another.
Worst-case reductions are only concerned with transforming the *hidden structure* in one problem to another. For example, a worst-case reduction from $\pr{3-sat}$ to $k\pr{-independent-set}$ needs to ensure that the hidden structure of a satisfiable $\pr{3-sat}$ formula is mapped to a graph with an independent set of size $k$, and that an unsatisfiable formula is not. Average-case reductions need to not only transform the structure in one problem to that of another, but also precisely map between the *natural distributions* associated with problems. In the case of the example above, all classical worst-case reductions use gadgets that map random $\pr{3-sat}$ formulas to a very unnatural distribution on graphs. Average-case problems in statistical inference are also fundamentally *parameterized*, with parameter regimes in which the problem is information-theoretically impossible, possible but conjecturally computationally hard and computationally easy. To establish the strongest possible lower bounds, reductions need to exactly fill out one of these three parameter regimes – the one in which the problem is conjectured to be computationally hard. These subtleties that arise in devising average-case reductions will be discussed further in Section \[subsec:1-desiderata\].
Despite these challenges, there has been a flurry of recent success in developing techniques for average-case reductions among statistical problems. Since the seminal paper of [@berthet2013complexity] showing that a statistical-computational gap for a distributionally-robust formulation of sparse PCA follows from the $\pr{pc}$ conjecture, there have been a number of average-case reductions among statistical problems. Reductions from $\pr{pc}$ have been used to show lower bounds for RIP certification [@wang2016average; @koiran2014hidden], biclustering detection and recovery [@ma2015computational; @cai2015computational; @caiwu2018; @brennan2019universality], planted dense subgraph [@hajek2015computational; @brennan2019universality], testing $k$-wise independence [@alon2007testing], matrix completion [@chen2015incoherence] and sparse PCA [@berthet2013optimal; @berthet2013complexity; @wang2016statistical; @gao2017sparse; @brennan2019optimal]. Several reduction techniques were introduced in [@brennan2018reducibility], providing the first web of average-case reductions among a number of problems involving sparsity. More detailed surveys of these prior average-case reductions from $\pr{pc}$ can be found in the introduction section of [@brennan2018reducibility] and in [@wu2018statistical]. There have also been a number of average-case reductions in the literature starting with different assumptions than the $\pr{pc}$ conjecture. Hardness conjectures for random CSPs have been used to show hardness in improper learning complexity [@daniely2014average], learning DNFs [@daniely2016complexityDNF] and hardness of approximation [@feige2002relations]. Recent reductions also map from a 3-uniform hypergraph variant of the $\pr{pc}$ conjecture to SVD for random 3-tensors [@zhang2017tensor] and between learning two-layer neural networks and tensor decomposition [@mondelli2018connection].
A common criticism of the reduction-based approach to computational complexity in statistical inference is that, while existing reductions have introduced nontrivial techniques for mapping precisely between different natural distributions, they are not yet capable of transforming between problems with dissimilar *high-dimensional structures*. In particular, the vast majority of the reductions referenced above map among problems representable as a *sparse submatrix signal plus a noise matrix*, which is similar to the common starting hardness assumption $\pr{pc}$. Such a barrier would be fatal to a satisfying reduction-based theory of statistical-computational gaps, as the zoo of statistical problems with gaps contains a broad range of very different high-dimensional structures. This leads directly to the following central question that we aim to address in this work.
Can statistical-computational gaps in problems with different high-dimensional structures be related to one another through average-case reductions?
Overview {#subsec:1-overview}
--------
![The web of reductions carried out in this paper. An edge indicates existence of a reduction transferring computational hardness from the tail to the head. Edges are labeled with associated reduction techniques and unlabelled edges correspond to simple reductions or specializing a problem to a particular case.[]{data-label="fig:web"}](web4.pdf){width="\textwidth"}
The main objective of this paper is to provide the first evidence that relating differently structured statistical problems through reductions is possible. We show that mild generalizations of the $\pr{pc}$ conjecture to $k$-partite and bipartite variants of $\pr{pc}$ are naturally suited to a number of new average-case reduction techniques. These techniques map to problems breaking out of the sparse submatrix plus noise structure that seemed to constrain prior reductions. They thus show that revealing a tiny amount of information about the hidden clique vertices substantially increases the reach of the reductions approach, providing the first web of reductions among statistical problems with significantly different structure. Our techniques also yield reductions beginning from hypergraph variants of $\pr{pc}$ which, along with the $k$-partite and bipartite variants mentioned above, can be unified under a single assumption that we introduce – the secret leakage planted clique ($\pr{pc}_\rho$) conjecture. This conjecture makes a precise prediction of what information about the hidden clique can be revealed while $\pr{pc}$ remains hard.
A summary of our web of average-case reductions is shown in Figure \[fig:web\]. Our reductions yield tight statistical-computational gaps for a range of differently structured problems, including robust sparse mean estimation, variants of dense stochastic block models, detection in hidden partition models, semirandom planted dense subgraph, negatively correlated sparse PCA, mixtures of sparse linear regressions, robust sparse linear regression, tensor PCA and a universality principle for learning sparse mixtures. This gives the first reduction-based evidence supporting a number of gaps observed in the literature [@li2017robust; @balakrishnan2017computationally; @diakonikolas2017statistical; @chen2016statistical; @hajek2015computational; @brennan2018reducibility; @fan2018curse; @liu2018high; @richard2014statistical; @hopkins2015tensor; @wein2019kikuchi; @azizyan2013minimax; @verzelen2017detection]. In particular, there are no known reductions deducing these gaps from the ordinary $\pr{pc}$ conjecture. Similar to [@brennan2018reducibility], several average-case problems emerge as natural intermediates in our reductions, such as negative sparse PCA and imbalanced sparse Gaussian mixtures. The specific instantiations of the $\pr{pc}_\rho$ conjecture needed to obtain these lower bounds correspond to natural $k$-partite, bipartite and hypergraph variants of $\pr{pc}$. Among these hardness assumptions, we show that hardness in a $k$-partite hypergraph variant of $\pr{pc}$ ($k\pr{-hpc}^s$) is the strongest and sufficient to establish all of our computational lower bounds. We also give evidence for our hardness conjectures from the failure of low-degree polynomials and statistical query algorithms.
Our results suggest that $\pr{pc}$ may not be the right starting point for average-case reductions among statistical problems. However, surprisingly mild generalizations of $\pr{pc}$ are all that are needed to break beyond the structural constraints of previous reductions. Generalizing to either $\pr{pc}_\rho$ or $k\pr{-hpc}^s$ unifies all of our reductions under a single hardness assumption, now capturing reductions to a range of dissimilarly structured problems including supervised learning tasks and problems over tensors. This suggests $\pr{pc}_\rho$ and $k\pr{-hpc}^s$ are both much more powerful candidate starting points than $\pr{pc}$ and, more generally, that these may be a key first step towards a more complete theory of reductions among statistical problems. Although we will often focus on providing evidence for statistical-computational gaps, we emphasize that our main contribution is more general – our reductions give a new set of techniques for relating differently structured statistical problems that seem likely to have applications beyond the problems we consider here.
The rest of the paper is structured as follows. The next section gives general background on average-case reductions and several criteria that they must meet in order to show strong computational lower bounds for statistical problems. In Section \[sec:1-PC\], we introduce the $\pr{pc}_\rho$ conjecture and the specific instantiations of this conjecture that imply our computational lower bounds, such as $k\pr{-hpc}^s$. In Section \[sec:1-problems\] we formally introduce the problems in Figure \[fig:web\] and state our main theorems. In Section \[sec:1-techniques\], we describe the key ideas underlying our techniques and we conclude Part \[part:intro\] by discussing a number of questions arising from these techniques in Section \[sec:1-open-problems\]. Parts \[part:reductions\] and \[part:lower-bounds\] are devoted to formally introducing our reduction techniques and applying them, respectively. Part \[part:reductions\] begins with Section \[sec:2-preliminaries\], which introduces reductions in total variation and the corresponding hypothesis testing formulation for each problem we consider that it will suffice to reduce to. In the rest of Part \[part:reductions\], we introduce our main reduction techniques and give several initial applications of these techniques to reduce to a subset of the problems that we consider. Part \[part:lower-bounds\] begins with a further discussion of the $\pr{pc}_\rho$ conjecture, where we show that $k\pr{-hpc}^s$ is our strongest assumption and provide evidence for the $\pr{pc}_\rho$ conjecture from the failure of low-degree tests and the statistical query model. The remainder of Part \[part:lower-bounds\] is devoted to our other reductions and deducing the computational lower bounds in our main theorems from Section \[sec:1-problems\]. At the end of Part \[part:lower-bounds\], we discuss the implications of our reductions to estimation and recovery formulations of the problems that we consider. 
Reading Part \[part:intro\], Section \[sec:2-secret-leakage\] and the pseudocode for our reductions gives an accurate summary of the theorems and ideas in this work. We note that a preliminary draft of this work containing a small subset of our results appeared in [@brennan2019average].
Desiderata for Average-Case Reductions {#subsec:1-desiderata}
--------------------------------------
As discussed in the previous section, average-case reductions are delicate and more constrained than their worst-case counterparts. In designing average-case reductions between problems in statistical inference, the essential challenge is to reduce to instances that are *hard up to the conjectured computational barrier*, without destroying the *naturalness* of the distribution over instances. Dissecting this objective further yields four general criteria for a reduction between the problems $\mP$ and $\mP'$ to be deemed to show strong computational lower bounds for $\mP'$. These objectives are to varying degrees at odds with one another, which is what makes devising reductions a challenging task. To illustrate these concepts, our running example will be our reduction from $\pr{pc}_\rho$ to robust sparse linear regression (SLR). Some parts of this discussion are slightly simplified for clarity. The following are our four criteria.
1. **Aesthetics:** If $\mP$ and $\mP'$ each have a specific canonical distribution then a reduction must faithfully map these distributions to one another. In our example, this corresponds to mapping the independent $0$-$1$ edge indicators in a random graph to noisy Gaussian samples of the form $y = \langle \beta, X \rangle + \mN(0, 1)$ with $X \sim \mN(0, I_d)$ and where an $\epsilon$-fraction are corrupted.
2. **Mapping Between Different Structures:** A reduction must simultaneously map all possible latent signals of $\mP$ to that of $\mP'$. In our example, this corresponds to mapping each possible clique position in $\pr{pc}_\rho$ to a specific mixture over the hidden vector $\beta$. A reduction in this case would also need to map between possibly very differently structured data, e.g., in robust SLR the dependence of $(X, y)$ on $\beta$ is intricate and the $\epsilon$-fraction of corrupted samples also produces latent structure across samples. These are both very different than the planted signal plus noise form of the clique in $\pr{pc}_\rho$.
3. **Tightness to Algorithms:** A reduction showing computational lower bounds that are tight against what efficient algorithms can achieve needs to map the conjectured computational limits of $\mP$ to those of $\mP'$. In our example, $\pr{pc}_\rho$ in general has a conjectured limit depending on $\rho$, which may for instance be at $K = o(\sqrt{N})$ when the clique is of size $K$ in a graph with $N$ vertices. In contrast, robust SLR has the conjectured limit at $n = \tilde{o}(k^2 \epsilon^2/\tau^4)$, where $\tau$ is the $\ell_2$ error to which we wish to estimate $\beta$, $k$ is the sparsity of $\beta$ and $n$ is the number of samples.
4. **Strong Lower Bounds for Parameterized Problems:** In order to show that a certain constraint $\mathcal{C}$ *defines* the computational limit of $\mP'$ through this reduction, we need the reduction to fill out the possible parameter sequences within $\mathcal{C}$. For example, to show that $n = \tilde{o}(k^2 \epsilon^2/\tau^4)$ truly captures the correct dependence in our computational lower bound for robust SLR, it does not suffice to produce a single sequence of points $(n, k, d, \tau, \epsilon)$ for which this is true, or even a one parameter curve. There are four parameters in the conjectured limit and a reduction showing that this is the correct dependence needs to fill out any possible combination of growth rates in these parameters permitted by $n = \tilde{o}(k^2 \epsilon^2/\tau^4)$. The fact that the initial problem $\mP$ has a conjectured limit depending on only two parameters can make achieving this criterion challenging.
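To make the canonical distribution in the first criterion concrete, the following sketch samples a robust SLR instance of the form described above. All parameter values, the spike magnitude $1/\sqrt{k}$ on the support and the outlier model are illustrative choices for this sketch, not ones prescribed in this paper; a genuine adversary may corrupt samples arbitrarily.

```python
import numpy as np

def robust_slr_instance(n, d, k, eps, rng):
    """Draw n samples y = <beta, X> + N(0, 1) with a k-sparse beta, then
    corrupt an eps-fraction of the responses (placeholder outliers)."""
    beta = np.zeros(d)
    support = rng.choice(d, size=k, replace=False)
    beta[support] = 1.0 / np.sqrt(k)           # illustrative signal strength
    X = rng.standard_normal((n, d))
    y = X @ beta + rng.standard_normal(n)
    corrupted = rng.random(n) < eps            # which samples are touched
    y[corrupted] = 10.0 * rng.standard_normal(int(corrupted.sum()))
    return X, y, beta, corrupted

X, y, beta, corrupted = robust_slr_instance(n=500, d=100, k=10, eps=0.1,
                                            rng=np.random.default_rng(0))
```

As discussed later in this section, a reduction cannot itself generate the $\epsilon$-fraction of corruptions in this way without losing tightness; this sampler only illustrates the target distribution's aesthetics.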
We remark that the third criterion requires that reductions are *information preserving* in the sense that they do not degrade the underlying signal level used by optimal efficient algorithms. This necessitates that the amount of additional randomness introduced in reductions to achieve aesthetic requirements is negligible. The fourth criterion arises from the fact that statistical problems are generally described by a tuple of parameters and are therefore actually an entire family of problems. A full characterization of the computational feasibility of a problem therefore requires addressing all possible scalings of the parameters.
All of the reductions carried out in this paper satisfy all four desiderata. Several of the initial reductions from $\pr{pc}$ in the literature met most but not all of these criteria. For example, the reductions in [@berthet2013complexity; @wang2016statistical] to sparse PCA map to a distribution in a distributionally robust formulation of the problem as opposed to the canonical Gaussian formulation in the spiked covariance model. Similarly [@cai2015computational] reduces to a distributionally robust formulation of submatrix localization. The reduction in [@gao2017sparse] only shows tight computational lower bounds for sparse PCA at a particular point in the parameter space when $\theta = \tilde{\Theta}(1)$ and $n = \tilde{\Theta}(k^2)$. However, a number of reductions in the literature have successfully met all of these four criteria [@ma2015computational; @hajek2015computational; @zhang2017tensor; @brennan2018reducibility; @brennan2019optimal; @brennan2019universality].
We remark that it can be much easier to only satisfy some of these desiderata – in particular, many natural reduction ideas meet a subset of these four criteria but fail to show nontrivial computational lower bounds. For instance, it is often straightforward to construct a reduction that degrades the level of signal. The simple reduction that begins with $\pr{pc}$ and randomly subsamples edges with probability $n^{-\alpha}$ yields an instance of planted dense subgraph with the correct distributional aesthetics. However, this reduction fails to be tight to algorithms and furthermore fails to show any meaningful tradeoff between the size of the planted dense subgraph and the sparsity of the graph.
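The flawed subsampling reduction just described is simple to state explicitly; here is a sketch (the clique size and the exponent $\alpha$ are arbitrary illustrative choices):

```python
import numpy as np

def pc_instance(n, k, rng):
    """G(n, 1/2) adjacency matrix with a planted k-clique on a random set S."""
    A = np.triu(rng.integers(0, 2, size=(n, n)), 1)
    A = A + A.T
    S = rng.choice(n, size=k, replace=False)
    A[np.ix_(S, S)] = 1
    np.fill_diagonal(A, 0)
    return A, S

def subsample_edges(A, alpha, rng):
    """Retain each edge independently with probability n^{-alpha}: the ambient
    edge density drops to n^{-alpha}/2 while the clique becomes a planted
    dense subgraph of density about n^{-alpha}."""
    n = A.shape[0]
    keep = np.triu(rng.random((n, n)) < n ** (-alpha), 1)
    keep = keep | keep.T
    return A * keep

rng = np.random.default_rng(0)
A, S = pc_instance(n=400, k=80, rng=rng)
B = subsample_edges(A, alpha=0.5, rng=rng)
```

The planted part of `B` is visibly denser than the ambient graph, so the output has the right distributional aesthetics, but as noted above the resulting lower bound is not tight to algorithms.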
Another natural reduction to robust sparse mean estimation first maps from $\pr{pc}$ to Gaussian biclustering using one of the reductions in [@ma2015computational; @brennan2018reducibility; @brennan2019universality], computes the sum $v$ of all of the rows of this matrix, then uses Gaussian cloning as in [@brennan2018reducibility] to produce $n$ weak copies of $v$ and finally outputs these copies with an $\epsilon$-fraction corrupted. This reduction can be verified to produce a valid instance of robust sparse mean estimation in its canonical Gaussian formulation, but fails to show any nontrivial hardness above its information-theoretic limit. Conceptually, this is because the reduction is generating the $\epsilon$-fraction of the corruptions itself. On applying a robust sparse mean estimation blackbox to solve $\pr{pc}$, the reduction could just as easily have revealed which samples it corrupted. This would allow the blackbox to only have to solve sparse mean estimation, which has no statistical-computational gap. In general, a reduction showing tight computational lower bounds cannot generate a non-negligible amount of randomness that produces the hardness of the target problem. Instead, this $\epsilon$-fraction must come from the hidden clique in the input $\pr{pc}$ instance. In Section \[subsec:1-tech-encoding\], we discuss how our reductions obliviously encode cliques into the hidden structures in the problems we consider.
We also remark that many problems that appear to be similar from the perspective of designing efficient algorithms can be quite different to reduce to. This arises from differences in their underlying stochastic models that efficient algorithms do not have to make use of. For example, although ordinary sparse PCA and sparse PCA with a negative spike can be solved by the same efficient algorithms, the former has a signal plus noise decomposition while the latter does not and has negatively correlated as opposed to positively correlated planted entries. We will see that these subtle differences are significant in designing reductions.
Planted Clique and Secret Leakage {#sec:1-PC}
=================================
In this section, we introduce planted clique and our generalization of the planted clique conjecture. In the *planted clique problem* ($\pr{pc}$), the task is to find the vertex set of a $k$-clique planted uniformly at random in an $n$-vertex Erdős-Rényi graph $G$. Planted clique can equivalently be formulated as a testing problem $\pr{pc}(n, k, 1/2)$ [@alon2007testing] between the two hypotheses $$H_0: G \sim \mG(n, 1/2) \quad \text{and} \quad H_1: G \sim \mG(n, k, 1/2)$$ where $\mG(n, 1/2)$ denotes the $n$-vertex Erdős-Rényi graph with edge density $1/2$ and $\mG(n, k, 1/2)$ the distribution resulting from planting a $k$-clique uniformly at random in $\mG(n, 1/2)$. This problem can be solved in quasipolynomial time by searching through all vertex subsets of size $(2 + \epsilon) \log_2 n$ if $k > (2 + \epsilon) \log_2 n$. The *Planted Clique Conjecture* is that there is no polynomial time algorithm solving $\pr{pc}(n, k, 1/2)$ if $k = o(\sqrt{n})$.
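As a quick illustration of this testing formulation, the simulation below compares the naive total-edge-count statistic under $H_0$ and $H_1$. Since planting adds roughly $\binom{k}{2}/2$ edges in expectation against fluctuations of order $n$, this trivial statistic separates the hypotheses only when $k^2 \gg n$, consistent with $\sqrt{n}$ being the interesting regime. The parameters here are illustrative.

```python
import numpy as np

def pc_graph(n, k, rng, planted):
    """Adjacency matrix of G(n, 1/2), with a k-clique planted uniformly at
    random when `planted` is True."""
    A = np.triu(rng.integers(0, 2, size=(n, n)), 1)
    A = A + A.T
    if planted:
        S = rng.choice(n, size=k, replace=False)
        A[np.ix_(S, S)] = 1
        np.fill_diagonal(A, 0)
    return A

def edge_count(A):
    return int(A.sum()) // 2

rng = np.random.default_rng(0)
n, k, trials = 200, 60, 40   # k well above sqrt(n), so edge count suffices
h0 = [edge_count(pc_graph(n, k, rng, False)) for _ in range(trials)]
h1 = [edge_count(pc_graph(n, k, rng, True)) for _ in range(trials)]
```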
There is a plethora of evidence in the literature for the $\pr{pc}$ conjecture. Spectral algorithms, approximate message passing, semidefinite programming, nuclear norm minimization and several other polynomial-time combinatorial approaches all appear to fail to solve $\pr{pc}$ exactly when $k = o(\sqrt{n})$ [@alon1998finding; @feige2000finding; @mcsherry2001spectral; @feige2010finding; @ames2011nuclear; @dekel2014finding; @deshpande2015finding; @chen2016statistical]. Lower bounds against low-degree sum of squares relaxations [@barak2016nearly] and statistical query algorithms [@feldman2013statistical] have also been shown up to $k = o(\sqrt{n})$.
#### Secret Leakage $\pr{pc}$.
We consider a slight generalization of the planted clique problem, where the input graph $G$ comes with some information about the vertex set of the planted clique. This corresponds to the vertices in the $k$-clique being chosen from some distribution $\rho$ other than the uniform distribution on $k$-subsets of $[n]$, as formalized in the following definition.
Given a distribution $\rho$ on $k$-subsets of $[n]$, let $\mG_\rho(n, k, 1/2)$ be the distribution on $n$-vertex graphs sampled by first sampling $G \sim \mG(n, 1/2)$ and $S \sim \rho$ independently and then planting a $k$-clique on the vertex set $S$ in $G$. Let $\pr{pc}_\rho(n, k, 1/2)$ denote the resulting hypothesis testing problem between $H_0: G \sim \mG(n, 1/2)$ and $H_1: G \sim \mG_\rho(n, k, 1/2)$.
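This definition translates directly into a sampler that treats $\rho$ as a black-box set sampler. The $k$-partite $\rho$ shown below is the leakage pattern used in the $k\pr{-pc}$ assumption later in this section; the contiguous-block structure and all parameters are illustrative choices for this sketch.

```python
import numpy as np

def sample_g_rho(n, k, rho, rng):
    """Sample G ~ G_rho(n, k, 1/2): draw G ~ G(n, 1/2) and S ~ rho
    independently, then plant a k-clique on S."""
    A = np.triu(rng.integers(0, 2, size=(n, n)), 1)
    A = A + A.T
    S = np.asarray(rho(rng))
    A[np.ix_(S, S)] = 1
    np.fill_diagonal(A, 0)
    return A, S

def uniform_rho(n, k):
    """The ordinary PC prior: rho uniform over all k-subsets of [n]."""
    return lambda rng: rng.choice(n, size=k, replace=False)

def k_partite_rho(n, k):
    """rho uniform over k-subsets meeting each of k contiguous blocks of
    size n/k exactly once (the k-partite leakage pattern)."""
    m = n // k
    return lambda rng: np.array([i * m + rng.integers(m) for i in range(k)])

rng = np.random.default_rng(0)
A, S = sample_g_rho(120, 6, k_partite_rho(120, 6), rng)
```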
All of the $\rho$ that we will consider will be uniform over the $k$-subsets that satisfy some constraint. In the cryptography literature, modifying a problem such as $\pr{pc}$ with a promise of this form is referred to as information leakage about the secret. There is a large body of work on leakage-resilient cryptography recently surveyed in [@kalai2019survey]. The hardness of the Learning with Errors (LWE) problem has been shown to be unconditionally robust to leakage [@dodis2010public; @goldwasser10], and it is left as an interesting open problem to show that a similar statement holds true for $\pr{pc}$.
Both $\pr{pc}$ and $\pr{pc}_\rho$ fall under the class of general parameter recovery problems where the task is to find $P_S$ generating the observed graph from a family of distributions $\{ P_S \}$. In the case of $\pr{pc}$, $P_S$ denotes the distribution $\mG(n, k, 1/2)$ conditioned on the $k$-clique being planted on $S$. Observe that the conditional distributions $\{ P_S \}$ are the same in $\pr{pc}$ and $\pr{pc}_\rho$. Secret leakage can be viewed as placing a prior on the parameter $S$ of interest, rather than changing the main average-case part of the problem – the family $\{ P_S \}$. When $\rho$ is uniform over a family of $k$-subsets, secret leakage corresponds to imposing a worst-case constraint on $S$. In particular, consider the maximum likelihood estimator (MLE) for a general parameter recovery problem given by $$\hat{S} = \arg \max_{S \in \text{supp}(\rho)} P_S(G)$$ As $\rho$ varies, only the search space of the MLE changes while the objective remains the same. We make the following precise conjecture of the hardness of $\pr{pc}_\rho(n, k, 1/2)$ for the distributions $\rho$ we consider. Given a distribution $\rho$, let $p_{\rho}(s) = \bP_{S, S' \sim \rho^{\otimes 2}}[|S \cap S'| = s]$ be the probability mass function of the size of the intersection of two independent random sets $S$ and $S'$ drawn from $\rho$.
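A toy illustration of this viewpoint: for planted clique, $P_S(G)$ is positive (and constant) exactly when $S$ forms a clique in $G$, so the MLE amounts to searching $\text{supp}(\rho)$ for a fully connected $k$-set, and secret leakage only shrinks this search space. The sizes below are far too small for the MLE to be unique in general; this sketch is purely illustrative.

```python
import numpy as np
from itertools import product

def plant_clique(n, S, rng):
    """G(n, 1/2) with a clique planted on the given vertex set S."""
    A = np.triu(rng.integers(0, 2, size=(n, n)), 1)
    A = A + A.T
    A[np.ix_(S, S)] = 1
    np.fill_diagonal(A, 0)
    return A

def mle_clique(A, support):
    """Brute-force MLE: P_S(G) > 0 iff S is a clique in G, so return the
    first candidate in supp(rho) that is fully connected."""
    for S in support:
        if int(A[np.ix_(S, S)].sum()) == len(S) * (len(S) - 1):
            return S
    return None

rng = np.random.default_rng(0)
n, k = 30, 6
S_true = [4, 9, 14, 19, 24, 29]        # one vertex in each block of size 5
A = plant_clique(n, S_true, rng)
# supp(rho) for the k-partite prior: one vertex per contiguous block of 5
candidates = ([5 * i + c[i] for i in range(k)] for c in product(range(5), repeat=k))
S_hat = mle_clique(A, candidates)
```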
\[conj:sl-conj\] Let $\rho$ be one of the distributions on $k$-subsets of $[n]$ given below in Conjecture \[conj:hard-conj\]. Suppose that there is some $p_0 = o_n(1)$ and constant $\delta > 0$ such that $p_{\rho}(s)$ satisfies the tail bounds $$p_{\rho}(s) \le p_0 \cdot \left\{ \begin{array}{ll} 2^{-s^2} &\textnormal{if } 1 \le s^2 < d \\ s^{-2d-4} &\textnormal{if } s^2 \ge d \end{array} \right.$$ for any parameter $d = O_n((\log n)^{1 + \delta})$. Then there is no polynomial time algorithm solving $\pr{pc}_\rho(n, k, 1/2)$.
While this conjecture is only stated for the specific $\rho$ corresponding to the hardness assumptions used in our reductions, we believe it should hold for a wide class of $\rho$ with sufficient symmetry. The motivation for the decay condition on $p_\rho$ in the $\pr{pc}_\rho$ conjecture is from low-degree polynomials, which we show in Section \[subsec:2-low-degree\] fail to solve $\pr{pc}_\rho$ subject to this condition. The *low-degree conjecture* – that low-degree polynomials predict the computational barriers for a broad class of inference problems – has been shown to match conjectured statistical-computational gaps in a number of problems [@hopkins2017efficient; @hopkinsThesis; @kunisky2019notes; @bandeira2019computational]. We discuss this conjecture, the technical conditions arising in its formalizations and how these relate to $\pr{pc}_\rho$ in Section \[subsec:2-low-degree\]. Specifically, we discuss the importance of symmetry and the requirement on $d$ in generalizing Conjecture \[conj:sl-conj\] to further $\rho$. In contrast to low-degree polynomials, because the SQ model only concerns problems with a notion of samples, it seems ill-suited to accurately predict the computational barriers in $\pr{pc}_\rho$ for every $\rho$. However, in Section \[subsec:2-sq\], we show SQ lower bounds supporting the $\pr{pc}_\rho$ conjecture for specific $\rho$ related to our hardness assumptions. We also remark that the distribution $p_{\rho}$ is an overlap distribution, which has been linked to conjectured statistical-computational gaps using techniques from statistical physics [@zdeborova2016statistical].
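For intuition about the overlap distribution $p_\rho$, consider the ordinary uniform prior: there $|S \cap S'|$ is hypergeometric, and when $k = o(\sqrt{n})$ the mean overlap $k^2/n$ vanishes, so $p_\rho(s)$ decays rapidly in $s$. The snippet below computes $p_\rho(s)$ exactly; the conjecture's precise tail condition is an asymptotic statement, so this is only a numeric illustration at one parameter choice.

```python
from math import comb

def p_rho_uniform(n, k, s):
    """P[|S ∩ S'| = s] for independent uniform k-subsets S, S' of [n]
    (the hypergeometric overlap distribution)."""
    return comb(k, s) * comb(n - k, k - s) / comb(n, k)

# e.g. n = 10^6 and k = 100 = o(sqrt(n)): mean overlap k^2/n = 0.01
n, k = 10**6, 100
p = [p_rho_uniform(n, k, s) for s in range(6)]
```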
#### Hardness Conjectures for Specific $\rho$.
In our reductions, we will only need the $\pr{pc}_\rho$ conjecture for specific $\rho$, all of which are simple and correspond to their own hardness conjectures in natural mild variants of $\pr{pc}$. Secret leakage can be viewed as a way to conceptually unify these different assumptions. These $\rho$ all seem to avoid revealing enough information about $S$ to give rise to new polynomial time algorithms to solve $\pr{pc}_{\rho}$. In particular, spectral algorithms consistently seem to match our conjectured computational limits for $\pr{pc}_\rho$ for the different $\rho$ we consider.
We now introduce these specific hardness assumptions and briefly outline how each can be produced from an instance of $\pr{pc}_\rho$. This is more formally discussed in Section \[subsec:2-sl-verifying\]. Let $\mG_{B}(m, n, 1/2)$ denote the distribution on bipartite graphs $G$ with parts of size $m$ and $n$ wherein each edge between the two parts is included independently with probability $1/2$.
- **$k$-partite planted clique:** Suppose that $k$ divides $n$ and let $E$ be a partition of $[n]$ into $k$ parts of size $n/k$. Let $k\pr{-pc}_E(n, k, 1/2)$ be $\pr{pc}_\rho(n, k, 1/2)$ where $\rho$ is uniformly distributed over all $k$-sets intersecting each part of $E$ in exactly one element.
- **bipartite planted clique:** Let $\pr{bpc}(m, n, k_m, k_n, 1/2)$ be the problem of testing between $H_0 : G \sim \mG_B(m, n, 1/2)$ and $H_1$ under which $G$ is formed by planting a complete bipartite graph with $k_m$ and $k_n$ vertices in the two parts, respectively, in a graph sampled from $\mG_B(m, n, 1/2)$. This problem can be realized as a bipartite subgraph of an instance of $\pr{pc}_\rho$.
- **$k$-part bipartite planted clique:** Suppose that $k_n$ divides $n$ and let $E$ be a partition of $[n]$ into $k_n$ parts of size $n/k_n$. Let $k\pr{-bpc}_E(m, n, k_m, k_n, 1/2)$ be $\pr{bpc}$ where the $k_n$ vertices in the part of size $n$ are uniform over all $k_n$-sets intersecting each part of $E$ in exactly one element, as in the definition of $k\pr{-pc}_E$. As with $\pr{bpc}$, this problem can be realized as a bipartite subgraph of an instance of $\pr{pc}_\rho$, now with additional constraints on $\rho$ to enforce the $k$-part restriction.
- **$k$-partite hypergraph planted clique:** Let $k, n$ and $E$ be as in the definition of $k\pr{-pc}$. Let $k\pr{-hpc}^s_E(n, k, 1/2)$ where $s \ge 3$ be the problem of testing between $H_0$, under which $G$ is an $s$-uniform Erdős-Rényi hypergraph where each hyperedge is included independently with probability $1/2$, and $H_1$, under which $G$ is first sampled from $H_0$ and then a $k$-clique with one vertex chosen uniformly at random from each part of $E$ is planted in $G$. This problem has a simple correspondence with $\pr{pc}_\rho$: there is a specific $\rho$ that corresponds to unfolding the adjacency tensor of this hypergraph problem into a matrix. We will show more formally how to produce $k\pr{-hpc}^s_E(n, k, 1/2)$ from $\pr{pc}_\rho$ in Section \[subsec:2-sl-verifying\].
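To make the first of these variants concrete, the following minimal sketch (ours; Python, with illustrative helper names) samples a $k\pr{-pc}_E(n, k, 1/2)$ instance using the canonical partition of $[n]$ into $k$ consecutive blocks:

```python
import random

def sample_k_pc(n, k, planted=True, seed=0):
    """Sample an instance of k-pc(n, k, 1/2) with the canonical partition
    of [n] into k consecutive blocks of size n/k (an illustrative sketch)."""
    assert n % k == 0
    rng = random.Random(seed)
    # background graph G(n, 1/2)
    adj = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            adj[i][j] = adj[j][i] = rng.randint(0, 1)
    clique = []
    if planted:
        size = n // k
        # under H1, one clique vertex is uniform in each part of E
        clique = [p * size + rng.randrange(size) for p in range(k)]
        for i in clique:
            for j in clique:
                if i != j:
                    adj[i][j] = 1
    return adj, clique

adj, clique = sample_k_pc(n=20, k=4)
```

The other variants are sampled analogously, with the clique restricted to a bipartite block or the edges replaced by hyperedges.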
Since $E$ is always revealed in these problems, it can without loss of generality be taken to be any partition of $[n]$ into $k$ equally-sized parts. Consequently, we will often simplify notation by dropping the subscript $E$. We conjecture the following computational barriers for these graph problems, each of which matches the decay rate condition on $p_{\rho}(s)$ in the $\pr{pc}_\rho$ conjecture, as we will show in Section \[subsec:2-sl-verifying\].
\[conj:hard-conj\] Suppose that $m$ and $n$ are polynomial in one another. Then there is no $\textnormal{poly}(n)$ time algorithm solving the following problems:
1. $k\pr{-pc}(n, k, 1/2)$ when $k = o(\sqrt{n})$;
2. $\pr{bpc}(m, n, k_m, k_n, 1/2)$ when $k_n = o(\sqrt{n})$ and $k_m = o(\sqrt{m})$;
3. $k\pr{-bpc}(m, n, k_m, k_n, 1/2)$ when $k_n = o(\sqrt{n})$ and $k_m = o(\sqrt{m})$; and
4. $k\pr{-hpc}^s(n, k, 1/2)$ for $s \ge 3$ when $k = o(\sqrt{n})$.
From an entropy viewpoint, the $k$-partite assumption common to these variants of $\pr{pc}_\rho$ only reveals a very small amount of information about the location of the clique. In particular, both the uniform distribution over $k$-subsets and the uniform distribution over $k$-subsets respecting a given partition $E$ have $(1 + o(1))k \log_2 n$ bits of entropy. We also remark that the $\pr{pc}_\rho$ conjecture, as stated, implies the thresholds in the conjecture above up to arbitrarily small polynomial factors, i.e. where the thresholds are $k = O(n^{1/2 - \epsilon})$, $k_n = O(n^{1/2 - \epsilon})$ and $k_m = O(m^{1/2 - \epsilon})$ for arbitrarily small $\epsilon > 0$. As we will discuss in Section \[subsec:2-low-degree\], the low-degree conjecture also supports the stronger thresholds in Conjecture \[conj:hard-conj\]. We also note that our reductions continue to show tight hardness up to arbitrarily small polynomial factors even under these weaker assumptions. As mentioned in Section \[subsec:1-overview\], our hardness assumption for $k\pr{-hpc}^s$ is the strongest of those in Conjecture \[conj:hard-conj\]. Specifically, in Section \[subsec:2-sl-verifying\] we give simple reductions showing that (4) in Conjecture \[conj:hard-conj\] implies (1), (2) and (3).
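The entropy claim can be checked numerically (a quick sketch of ours; the values of $n$ and $k$ are illustrative): a uniform $k$-subset carries $\log_2 \binom{n}{k}$ bits, a partition-respecting one carries $k \log_2(n/k)$ bits, and both are $(1 + o(1)) k \log_2 n$.

```python
from math import comb, log2

n, k = 10**8, 10
uniform_bits = log2(comb(n, k))    # entropy of a uniform k-subset of [n]
partition_bits = k * log2(n / k)   # entropy of one uniform element per part of E
baseline = k * log2(n)             # the claimed (1 + o(1)) k log2(n) bits

ratio_uniform = uniform_bits / baseline
ratio_partition = partition_bits / baseline
```

Both ratios approach $1$ as $n$ grows with $k = n^{o(1)}$, since $\binom{n}{k} \ge (n/k)^k$ and $\binom{n}{k} \le n^k$.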
We remark that the discussion in this section also applies to planted dense subgraph ($\pr{pds}$) problems. In the $\pr{pds}$ variant of a $\pr{pc}$ problem, instead of planting a $k$-clique in $\mG(n, 1/2)$, a dense subgraph $\mG(k, p)$ is planted in $\mG(n, q)$ where $p > q$. We conjecture that all of the hardness assumptions remain true for $\pr{pds}$ with constant edge densities $0 < q < p \le 1$. Note that $\pr{pc}$ is an instance of $\pr{pds}$ with $p = 1$ and $q = 1/2$. All of the reductions beginning with $\pr{pc}_\rho$ in this work will also yield reductions beginning from secret leakage planted dense subgraph problems $\pr{pds}_\rho$. In particular, they will continue to apply with a small loss in the amount of signal when $q = 1/2$ and $p = 1/2 + n^{-\epsilon}$ for a small constant $\epsilon > 0$. As discussed in [@brennan2019optimal], $\pr{pds}$ conjecturally has no quasipolynomial time algorithms in this regime and thus our reductions would transfer lower bounds above polynomial time. In this parameter regime, the barriers of $\pr{pds}$ also appear to be similar to those of detection in the sparsely spiked Wigner model, which also conjecturally has no quasipolynomial time algorithms [@hopkins2017power]. Throughout this work, we will denote the $\pr{pds}$ variants of the problems introduced above by $k\pr{-pds}(n, k, p, q)$, $\pr{bpds}(m, n, k_m, k_n, p, q)$, $k\pr{-bpds}(m, n, k_m, k_n, p, q)$ and $k\pr{-hpds}^s(n, k, p, q)$.
Problems and Statistical-Computational Gaps {#sec:1-problems}
===========================================
In this section, we introduce the problems we consider and give informal statements of our main theorems, each of which is a tight computational lower bound implied by a conjecture in the previous section. These statistical-computational gaps follow from a variety of different average-case reduction techniques that are outlined in the next section and will be the focus in the rest of this work. Before stating our main results, we clarify precisely what we mean by *solving* and showing a *computational lower bound* for a problem. All of the computational lower bounds in this section are implied by one of the assumptions in Conjecture \[conj:hard-conj\]. As mentioned previously, they also follow from $\pr{pds}$ variants of these assumptions or only from the hardness of $k\pr{-hpc}^s$, which is the strongest assumption.
#### Statistical Problems and Algorithms.
Every problem $\mP(n, a_1, a_2, \dots, a_t)$ we consider is parameterized by a natural parameter $n$ and has several other parameters $a_1(n), a_2(n), \dots, a_t(n)$, which will typically be implicit functions of $n$. If $\mP$ is a hypothesis testing problem with observation $X$ and hypotheses $H_0$ and $H_1$, an algorithm $\mathcal{A}$ is deemed to solve $\mP$ subject to the constraints $\mathcal{C}$ if it has asymptotic Type I$+$II error bounded away from $1$ when $(n, a_1, a_2, \dots, a_t) \in \mathcal{C}$ i.e. if $\bP_{H_0}\left[ \mathcal{A}(X) = H_1 \right] + \bP_{H_1}\left[ \mathcal{A}(X) = H_0 \right] = 1 - \Omega_n(1)$. Furthermore, we say that there is no algorithm solving $\mP$ in polynomial time under the constraints $\mathcal{C}$ if for any sequence of parameters $\{(n, a_1, a_2, \dots, a_t)\}_{n = 1}^\infty \subseteq \mathcal{C}$, there is no polynomial time algorithm solving $\mP(n, a_1, a_2, \dots, a_t)$ with Type I$+$II error bounded away from $1$ as $n \to \infty$. If $\mP$ is an estimation problem with a parameter $\theta$ of interest and loss $\ell$, then $\mathcal{A}$ solves $\mP$ subject to the constraints $\mathcal{C}$ if $\ell(\mathcal{A}(X), \theta) \le \epsilon$ is true with probability $1 - o_n(1)$ when $(n, a_1, a_2, \dots, a_t, \epsilon) \in \mathcal{C}$, where $\epsilon = \epsilon(n)$ is a function of $n$.
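The Type I$+$II error criterion above can be estimated by Monte Carlo; the following toy sketch (ours, using a hypothetical Gaussian threshold test rather than any problem from this paper) illustrates what it means for a test to have error bounded away from $1$.

```python
import random

def type_I_plus_II(test, sample_H0, sample_H1, trials=2000, seed=0):
    """Monte Carlo estimate of the Type I + Type II error of a test that
    maps an observation to 0 (report H0) or 1 (report H1)."""
    rng = random.Random(seed)
    type1 = sum(test(sample_H0(rng)) == 1 for _ in range(trials)) / trials
    type2 = sum(test(sample_H1(rng)) == 0 for _ in range(trials)) / trials
    return type1 + type2

# toy instance: H0 = N(0, 1) vs H1 = N(1, 1) with a threshold test;
# the error is bounded away from 1, so the test "solves" this toy problem
err = type_I_plus_II(
    test=lambda x: 1 if x > 0.5 else 0,
    sample_H0=lambda rng: rng.gauss(0, 1),
    sample_H1=lambda rng: rng.gauss(1, 1),
)
```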
#### Computational Lower Bounds.
We say there is a computational lower bound for $\mathcal{P}$ subject to the constraint $\mathcal{C}$ if for any sequence of parameters $\{(n, a_1(n), a_2(n), \dots, a_t(n))\}_{n = 1}^\infty \subseteq \mathcal{C}$ there is another sequence given by $\{(n_i, a'_1(n_i), a'_2(n_i), \dots, a'_t(n_i))\}_{i = 1}^\infty \subseteq \mathcal{C}$ such that $\mP(n_i, a'_1(n_i), a'_2(n_i), \dots, a'_t(n_i))$ cannot be solved in $\text{poly}(n_i)$ time and $\lim_{i \to \infty} \log a_k'(n_i)/\log a_k(n_i) = 1$. In other words, there is a lower bound at $\mathcal{C}$ if, for any sequence $s$ in $\mathcal{C}$, there is another sequence of parameters that cannot be solved in polynomial time and whose growth matches the growth of a subsequence of $s$. Thus all of our computational lower bounds are *strong lower bounds* in the sense that rather than show that a single sequence of parameters is hard, we show that parameter sequences filling out *all possible growth rates* in $\mathcal{C}$ are hard.
The constraints $\mathcal{C}$ will typically take the form of a system of asymptotic inequalities. Furthermore, each of our computational lower bounds for estimation problems will be established through a reduction to a hypothesis testing problem which then implies the desired lower bound. The exact formulations for these intermediate hypothesis testing problems can be found in Section \[subsec:2-formulations\] and how they also imply lower bounds for estimation and recovery variants of our problems is discussed in Section \[subsec:2-estimation\]. Throughout this work, we will use the terms detection and hypothesis testing interchangeably. We say that two parameters $a$ and $b$ are polynomial in one another if there is a constant $C > 0$ such that $a^{1/C} \le b \le a^C$ as $a \to \infty$. Throughout the paper, we adopt the standard asymptotic notation $O(\cdot), \Omega(\cdot), o(\cdot), \omega(\cdot)$ and $\Theta(\cdot)$. We let $\tilde{O}(\cdot)$ and analogous variants denote these relations up to $\text{polylog}(n)$ factors. Here, $n$ is the natural parameter of the problem under consideration and will typically be clear from context. We remark that the argument of $\tilde{O}(\cdot)$ will often be polynomially large or small in $n$, in which case our notation recovers the typical definition of $\tilde{O}(\cdot)$. Furthermore, all of these definitions also apply to the discussion in the previous section.
#### Canonical Simplest Average-Case Formulations.
All of our reductions are to the canonical simplest average-case formulations of the problems we consider. For example, all $k$-sparse unit vectors in our lower bounds are binary and in $\{0, 1/\sqrt{k} \}^d$, and the rank-1 component in our lower bound for tensor PCA is sampled from a Rademacher prior. Our reductions are all also to the canonical simple vs. simple hypothesis testing formulation for each of our problems and, as discussed in [@brennan2018reducibility], this yields strong computational lower bounds, is often technically more difficult and crucially allows reductions to naturally be composed with one another.
Robust Sparse Mean Estimation {#subsec:1-problems-rsme}
-----------------------------
The study of robust estimation began with Huber’s contamination model [@huber1992robust; @huber1965robust] and observations of Tukey [@tukey1975mathematics]. Classical robust estimators have typically either been computationally intractable or heuristic [@huber2011robust; @tukey1975mathematics; @yatracos1985rates]. Recent breakthrough works [@diakonikolas2016robust; @lai2016agnostic] gave the first efficient algorithms for high-dimensional robust estimation, which sparked an active line of research into robust algorithms for other high-dimensional problems [@awasthi2014power; @li2017robust; @balakrishnan2017computationally; @charikar2017learning; @diakonikolas2018robustly; @klivans2018efficient; @diakonikolas2019efficient; @hopkins2019hard; @dong2019quantum]. The most canonical high-dimensional robust estimation problem is robust sparse mean estimation, which has an intriguing statistical-computational gap induced by robustness.
In sparse mean estimation, the observations $X_1, X_2, \dots, X_n$ are $n$ independent samples from $\mN(\mu, I_d)$ where $\mu$ is an unknown $k$-sparse vector in $\mathbb{R}^d$ of bounded $\ell_2$ norm and the task is to estimate $\mu$ within an $\ell_2$ error of $\tau$. This is a gapless problem, as taking the largest $k$ coordinates of the empirical mean runs in $\text{poly}(d)$ time and achieves the information-theoretically optimal sample complexity of $n = \Theta(k \log d/\tau^2)$.
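This gapless estimator can be sketched as follows (ours; parameter values are illustrative): truncate the empirical mean to its $k$ largest-magnitude coordinates.

```python
import math
import random

def truncated_empirical_mean(samples, k):
    """Zero out all but the k largest-magnitude coordinates of the
    empirical mean (the simple estimator described above)."""
    d, n = len(samples[0]), len(samples)
    mean = [sum(x[i] for x in samples) / n for i in range(d)]
    top = set(sorted(range(d), key=lambda i: abs(mean[i]), reverse=True)[:k])
    return [mean[i] if i in top else 0.0 for i in range(d)]

rng = random.Random(0)
d, k, n = 100, 5, 2000
mu = [1.0 / math.sqrt(k)] * k + [0.0] * (d - k)          # a k-sparse unit mean
samples = [[m + rng.gauss(0, 1) for m in mu] for _ in range(n)]
mu_hat = truncated_empirical_mean(samples, k)
err = math.sqrt(sum((a - b) ** 2 for a, b in zip(mu, mu_hat)))
```

With $n \gg k \log d$, the selected support is correct with high probability and the $\ell_2$ error is of order $\sqrt{k/n}$.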
If an $\epsilon$-fraction of these samples are corrupted arbitrarily by an adversary, this yields the robust sparse mean estimation problem $\pr{rsme}(n, k, d, \tau, \epsilon)$. As discussed in [@li2017robust; @balakrishnan2017computationally], for $\| \mu - \mu' \|_2$ sufficiently small, it holds that $\TV\left( \mN(\mu, I_d), \mN(\mu', I_d) \right) = \Theta(\| \mu - \mu' \|_2)$. Furthermore, an $\epsilon$-corrupted set of samples can simulate distributions within $O(\epsilon)$ total variation from $\mN(\mu, I_d)$. Therefore $\epsilon$-corruption can simulate $\mN(\mu', I_d)$ if $\|\mu' - \mu\|_2 = O(\epsilon)$ and it is impossible to estimate $\mu$ with $\ell_2$ distance less than this $O(\epsilon)$. This implies that the minimax rate of estimation for $\mu$ is $O(\epsilon)$, even for very large $n$. As shown in [@li2017robust; @balakrishnan2017computationally], the information-theoretic threshold for estimating at this rate in the $\epsilon$-corrupted model remains at $n = \Theta(k \log d/\epsilon^2)$ samples. However, the best known polynomial-time algorithms from [@li2017robust; @balakrishnan2017computationally] require $n = \tilde{\Theta}(k^2 \log d/\epsilon^2)$ samples to estimate $\mu$ within $\tau = \Theta(\epsilon \sqrt{\log \epsilon^{-1}})$ in $\ell_2$. In Sections \[subsec:3-rsme-reduction\] and \[subsec:3-rsme\], we give a reduction showing that these polynomial time algorithms are optimal, yielding the first average-case evidence for the $k$-to-$k^2$ statistical-computational gap conjectured in [@li2017robust; @balakrishnan2017computationally]. Our reduction applies to more general rates $\tau$ and obtains the following tradeoff.
\[thm:rsme-lb\] If $k, d$ and $n$ are polynomial in each other, $k = o(\sqrt{d})$ and $\epsilon < 1/2$ is such that $(n, \epsilon^{-1})$ satisfies condition , then the $k\pr{-bpc}$ conjecture implies that there is a computational lower bound for $\pr{rsme}(n, k, d, \tau, \epsilon)$ at all sample complexities $n = \tilde{o}(k^2 \epsilon^2/\tau^4)$.
For example, taking $\epsilon = 1/3$ and $\tau = \tilde{O}(1)$ shows that there is a $k$-to-$k^2$ gap between the information-theoretically optimal sample complexity of $n = \tilde{\Theta}(k)$ and the computational lower bound of $n = \tilde{o}(k^2)$. Note that taking $\tau = O(\epsilon)$ in Theorem \[thm:rsme-lb\] recovers exactly the tradeoff in [@li2017robust; @balakrishnan2017computationally], with the dependence on $\epsilon$. Our reduction to $\pr{rsme}$ is based on dense Bernoulli rotations and constructions of combinatorial design matrices based on incidence geometry in $\mathbb{F}_r^t$, as is further discussed in Sections \[sec:1-techniques\] and \[sec:2-bernoulli-rotations\].
The condition referenced in Theorem \[thm:rsme-lb\] is a technical condition arising from number-theoretic constraints in our reduction, which require that $\epsilon^{-1} = n^{o(1)}$ or $\epsilon = \tilde{\Theta}(n^{-1/2t})$ for some positive integer $t$. As $\epsilon^{-1} = n^{o(1)}$ is the primary regime of interest in the $\pr{rsme}$ literature, this condition is typically trivial. We discuss the condition in more detail in Section \[sec:3-robust-and-supervised\] and give an alternate reduction removing it from Theorem \[thm:rsme-lb\] in the case where $\epsilon = \tilde{\Theta}(n^{-c})$ for some constant $c \in [0, 1/2]$.
Our result also holds in the stronger Huber’s contamination model where an $\epsilon$-fraction of the $n$ samples are chosen at random and replaced with i.i.d. samples from another distribution $\mathcal{D}$. The prior work of [@diakonikolas2017statistical] showed that SQ algorithms require $n = \tilde{\Omega}(k^2)$ samples to solve $\pr{rsme}$, establishing the conjectured $k$-to-$k^2$ gap in the SQ model. However, our work is the first to make a precise prediction of the computational barrier in $\pr{rsme}$ as a function of both $\epsilon$ and $\tau$. As will be discussed in Section \[subsec:3-rsme-reduction\], our reduction from $k\pr{-pc}$ maps to the instance of $\pr{rsme}$ under the adversary introduced in [@diakonikolas2017statistical].
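Sampling from Huber's contamination model is straightforward to sketch (ours; the single-point adversary below is one hypothetical choice of the contaminating distribution $\mathcal{D}$, not the adversary of [@diakonikolas2017statistical]):

```python
import random

def huber_samples(n, d, mu, eps, adversary, seed=0):
    """n samples from Huber's model: each point is N(mu, I_d) with
    probability 1 - eps and drawn i.i.d. from `adversary` otherwise."""
    rng = random.Random(seed)
    out, n_bad = [], 0
    for _ in range(n):
        if rng.random() < eps:
            out.append(adversary(rng))
            n_bad += 1
        else:
            out.append([m + rng.gauss(0, 1) for m in mu])
    return out, n_bad

# hypothetical adversary placing all corruptions at a single far point
mu = [0.5] * 3 + [0.0] * 7
X, n_bad = huber_samples(1000, 10, mu, eps=0.1,
                         adversary=lambda rng: [10.0] * 10)
```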
Dense Stochastic Block Models {#subsec:1-problems-sbm}
-----------------------------
The stochastic block model (SBM) is the canonical model for community detection, having independently emerged in the machine learning and statistics [@holland1983stochastic], computer science [@bui1987graph; @dyer1989solution; @boppana1987eigenvalues], statistical physics [@decelle2011asymptotic] and mathematics communities [@bollobas2007phase]. It has been the subject of a long line of research, which has recently been surveyed in [@abbe2017community; @moore2017computer]. In the $k$-block SBM, a vertex set of size $n$ is uniformly at random partitioned into $k$ latent communities $C_1, C_2, \dots, C_k$ each of size $n/k$ and edges are then included in the graph $G$ independently such that intra-community edges appear with probability $p$ while inter-community edges appear with probability $q < p$. The exact recovery problem entails finding $C_1, C_2, \dots, C_k$ and the weak recovery problem, also known as community detection, entails outputting nontrivial estimates $\hat{C}_1, \hat{C}_2, \dots, \hat{C}_k$ with $|C_i \cap \hat{C}_i| \ge (1 + \Omega(1))n/k$.
Community detection in the SBM is often considered in the sparse regime, where $p = a/n$ and $q = b/n$. In [@decelle2011asymptotic], non-rigorous arguments from statistical physics were used to form the precise conjecture that weak recovery begins to be possible in $\text{poly}(n)$ time exactly at the *Kesten-Stigum* threshold $\pr{snr} = (a - b)^2/k(a + (k - 1)b) > 1$. When $k = 2$, the algorithmic side of this conjecture was confirmed with methods based on belief propagation [@mossel2018proof], spectral methods and non-backtracking walks [@massoulie2014community; @bordenave2015non], and it was shown to be information-theoretically impossible to solve weak recovery below the Kesten-Stigum threshold in [@mossel2015reconstruction; @deshpande2015asymptotic]. The algorithmic side of this conjecture for general $k$ was subsequently resolved with approximate acyclic belief propagation in [@abbe2015detection; @abbe2016achieving; @abbe2018proof] and has also been shown using low-degree polynomials, tensor decomposition and color coding [@hopkins2017efficient]. A statistical-computational gap is conjectured to already arise at $k = 4$ [@abbe2018proof] and the information-theoretic limit for community detection has been shown to occur for large $k$ at $\pr{snr} = \Theta(\log k/k)$, which is much lower than the Kesten-Stigum threshold [@banks2016information]. Rigorous evidence for this statistical-computational gap has been much more elusive and has only been shown for low-degree polynomials [@hopkins2017efficient] and variants of belief propagation. Another related line of work has exactly characterized the thresholds for exact recovery in the regime $p, q = \Theta(\log n/n)$ when $k = 2$ [@abbe2015exact; @hajek2016achieving; @hajek2016achievingb].
The $k$-block SBM for general edge densities $p$ and $q$ has also been studied extensively under the names graph clustering and graph partitioning in the statistics and computer science communities. A long line of work has developed algorithms recovering the latent communities in this regime, including a wide range of spectral and convex programming techniques [@boppana1987eigenvalues; @dyer1989solution; @condon2001algorithms; @mcsherry2001spectral; @bollobas2004max; @coja2010graph; @rohe2011spectral; @chaudhuri2012spectral; @nadakuditi2012graph; @chen2012clustering; @ames2014guaranteed; @anandkumar2014tensor; @chen2014improved; @chen2016statistical]. A comparison and survey of these results can be found in [@chen2014improved]. As discussed in [@chen2016statistical], for growing $k$ satisfying $k = O(\sqrt{n})$ and $p$ and $q$ with $p = \Theta(q)$ and $1 - p = \Theta(1 - q)$, the best known $\text{poly}(n)$ time algorithms all only work above $$\frac{(p - q)^2}{q(1 - q)} \gtrsim \frac{k^2}{n}$$ which is an asymptotic extension of the Kesten-Stigum threshold to general $p$ and $q$. In contrast, the statistically optimal rate of recovery is again roughly a factor of $k$ lower at $\tilde{\Omega}(k/n)$. Furthermore, up to $\log n$ factors, the Kesten-Stigum threshold is both when efficient exact recovery algorithms begin to work and where the best efficient weak recovery algorithms are conjectured to fail [@chen2016statistical].
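As a sanity check (ours, with illustrative parameters), the general-density form above reduces to the sparse-regime Kesten-Stigum criterion $\pr{snr} = (a - b)^2/k(a + (k - 1)b) > 1$, up to constant factors, when $p = a/n$ and $q = b/n$:

```python
def ks_sparse(a, b, k):
    """Kesten-Stigum SNR in the sparse regime p = a/n, q = b/n."""
    return (a - b) ** 2 / (k * (a + (k - 1) * b))

def ks_general(p, q, k, n):
    """Signal (p - q)^2 / q(1 - q) measured in units of the
    general-density threshold k^2 / n."""
    return ((p - q) ** 2 / (q * (1 - q))) * (n / k ** 2)

# with p = a/n and q = b/n, the two criteria agree up to constant factors
n, k, a, b = 10**7, 4, 50.0, 10.0
r1 = ks_sparse(a, b, k)
r2 = ks_general(a / n, b / n, k, n)
```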
In this work, we show computational lower bounds matching the Kesten-Stigum threshold up to a constant factor in a mean-field analogue of recovering a first community $C_1$ in the $k$-SBM, where $p$ and $q$ are bounded away from zero and one. Consider a sample $G$ from the $k$-SBM restricted to the union of the other communities $C_2, \dots, C_k$. This subgraph has average edge density approximately given by $\hat{q} = (p - q) \cdot (k - 1) \cdot (n/k)^2 \cdot (n - n/k)^{-2} + q = (k - 1)^{-1} \cdot p + (1 - (k - 1)^{-1}) \cdot q$. Now consider the task of recovering the community $C_1$ in the graph $G'$ in which the subgraph on $C_2, \dots, C_k$ is replaced by the corresponding mean-field Erdős-Rényi graph $\mG(n - n/k, \hat{q})$. Formally, let $G'$ be the graph formed by first choosing $C_1$ at random and sampling edges as follows:
- include edges within $C_1$ with probability $P_{11} = p$;
- include edges between $C_1$ and $[n]\backslash C_1$ with probability $P_{12} = q$; and
- include edges within $[n]\backslash C_1$ with probability $P_{22}$ where $P_{22} = (k - 1)^{-1} \cdot p + (1 - (k - 1)^{-1}) \cdot q$.
We refer to this model as the imbalanced SBM and let $\pr{isbm}(n, k, P_{11}, P_{12}, P_{22})$ denote the problem of testing between this model and Erdős-Rényi graphs of the form $\mG(n, P_0)$. As we will discuss in Section \[subsec:2-formulations\], lower bounds for this formulation also imply lower bounds for weakly and exactly recovering $C_1$. We remark that under our notation for $\pr{isbm}$, the hidden community $C_1$ has size $n/k$ and $k$ is the number of communities in the analogous $k$-block SBM described above.
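The $H_1$ sampling procedure above can be sketched as follows (ours; the parameter values are illustrative, with $P_{22}$ computed from the calibration identity):

```python
import random

def sample_isbm_h1(n, k, P11, P12, P22, seed=0):
    """Sample the H1 graph of isbm(n, k, P11, P12, P22): a hidden
    community C1 of size n/k with the three edge probabilities above."""
    rng = random.Random(seed)
    C1 = set(rng.sample(range(n), n // k))
    adj = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if i in C1 and j in C1:
                prob = P11          # within C1
            elif i in C1 or j in C1:
                prob = P12          # between C1 and the rest
            else:
                prob = P22          # mean-field density outside C1
            adj[i][j] = adj[j][i] = int(rng.random() < prob)
    return adj, C1

p, q, k = 0.9, 0.1, 4
P22 = p / (k - 1) + (1 - 1 / (k - 1)) * q   # calibrated outside density
adj, C1 = sample_isbm_h1(40, k, p, q, P22)
```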
As we will discuss in Section \[sec:3-community\], $\pr{isbm}$ can also be viewed as a model of single community detection with uniformly calibrated expected degrees. Note that the expected degree of a vertex in $C_1$ is $nP_{22} - p$ and the expected degree of a vertex in $[n] \backslash C_1$ is $(n - 1)P_{22}$, which differ by at most $1$. Similar models with two imbalanced communities and calibrated expected degrees have appeared previously in [@neeman2014non; @verzelen2015community; @perry2017semidefinite; @caltagirone2018recovering]. As will be discussed in Section \[subsec:1-problems-semicr\], the simpler planted dense subgraph model of single community recovery has a detection threshold that differs from the Kesten-Stigum threshold, even though the Kesten-Stigum threshold is conjectured to be the barrier for recovering the planted dense subgraph. This is because non-uniformity in expected degrees gives rise to simple edge-counting tests that do not lead to algorithms for recovering the planted subgraph. Our main result for $\pr{isbm}$ is the following lower bound up to the asymptotic Kesten-Stigum threshold.
\[thm:isbm-lb\] Suppose that $(n, k)$ satisfy condition , that $k$ is prime or $k = \omega_n(1)$ and $k = o(n^{1/3})$, and suppose that $q \in (0, 1)$ satisfies $\min\{q, 1 - q \} = \Omega_n(1)$. If $P_{22} = (k - 1)^{-1} \cdot p + (1 - (k - 1)^{-1}) \cdot q$, then the $k\pr{-pc}$ conjecture implies that there is a computational lower bound for $\pr{isbm}(n, k, p, q, P_{22})$ at all levels of signal below the Kesten-Stigum threshold of $\frac{(p - q)^2}{q(1 - q)} = \tilde{o}(k^2/n)$.
This directly provides evidence for the conjecture that $(p - q)^2/q(1 - q) = \tilde{\Theta}(k^2/n)$ defines the computational barrier for community recovery in general $k$-SBMs made in [@chen2016statistical]. While the statistical-computational gaps in $\pr{pc}$ and $k$-SBM are the two most prominent conjectured gaps in average-case problems over graphs, they are very different from an algorithmic perspective and evidence for computational lower bounds up to the Kesten-Stigum threshold has remained elusive. Our reduction yields a first step towards understanding the relationship between these gaps.
Testing Hidden Partition Models {#subsec:1-problems-hidden-partition}
-------------------------------
We also introduce two testing problems we refer to as the Gaussian and bipartite hidden partition models. We give a reduction and algorithms that show these problems have a statistical-computational gap, and we tightly characterize their computational barriers based on the $k\pr{-pc}$ conjecture. The main motivation for introducing these problems is to demonstrate the versatility of our reduction technique dense Bernoulli rotations in transforming hidden structure. A description of dense Bernoulli rotations and the construction of a key design tensor used in our reduction can be found in Section \[sec:2-bernoulli-rotations\].
The task in the bipartite hidden partition model problem is to test for the presence of a planted $rK$-vertex subgraph, sampled from an $r$-block stochastic block model, in an $n$-vertex random bipartite graph. The Gaussian hidden partition model problem is a corresponding Gaussian analogue. These are both multi-community variants of the subgraph stochastic block model considered in [@brennan2018reducibility], which corresponds to the setting in which $r = 2$. The multi-community nature of the planted subgraph yields a more intricate hidden structure, and the additional free parameter $r$ yields a more complicated computational barrier. The work of [@chen2016statistical] considered the related task of recovering the communities in the Gaussian and bipartite hidden partition models. We remark that conjectured computational limits for this recovery task differ from the detection limits we consider.
Formally, our hidden partition problems are defined as follows. Let $C = (C_1, C_2, \dots, C_r)$ and $D = (D_1, D_2, \dots, D_r)$ be chosen independently and uniformly at random from the set of all sequences of length $r$ consisting of disjoint $K$-subsets of $[n]$. Consider the random matrix $M$ sampled by first sampling $C$ and $D$ and then sampling $$M_{ij} \sim \left\{ \begin{array}{ll} \mN(\gamma, 1) &\textnormal{if } i \in C_h \textnormal{ and } j \in D_h \textnormal{ for some } h \in [r] \\ \mN\left(-\frac{\gamma}{r - 1}, 1 \right) &\textnormal{if } i \in C_{h_1} \textnormal{ and } j \in D_{h_2} \textnormal{ where } h_1 \neq h_2 \\ \mN(0, 1) &\textnormal{otherwise} \end{array} \right.$$ independently for each $1 \le i, j \le n$. The problem $\pr{ghpm}(n, r, K, \gamma)$ is to test between $H_0 : M \sim \mN(0, 1)^{\otimes n \times n}$ and an alternative hypothesis $H_1$ under which $M$ is sampled as outlined above. The problem $\pr{bhpm}(n, r, K, P_0, \gamma)$ is a bipartite graph analogue of this problem with ambient edge density $P_0$, edge density $P_0 + \gamma$ within the communities in the subgraph and $P_0 - \frac{\gamma}{r - 1}$ on the rest of the subgraph.
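A minimal sketch of ours for sampling $M$ under $H_1$ of $\pr{ghpm}(n, r, K, \gamma)$ as defined above:

```python
import random

def sample_ghpm_h1(n, r, K, gamma, seed=0):
    """Sample M under H1 of ghpm(n, r, K, gamma): r disjoint K-subsets of
    rows C and columns D, with the three mean levels defined above."""
    rng = random.Random(seed)
    rows = rng.sample(range(n), r * K)
    cols = rng.sample(range(n), r * K)
    C = [rows[h * K:(h + 1) * K] for h in range(r)]   # disjoint K-subsets
    D = [cols[h * K:(h + 1) * K] for h in range(r)]
    row_of = {i: h for h in range(r) for i in C[h]}   # community of row i
    col_of = {j: h for h in range(r) for j in D[h]}   # community of column j
    M = [[rng.gauss(0, 1) for _ in range(n)] for _ in range(n)]
    for i, hi in row_of.items():
        for j, hj in col_of.items():
            M[i][j] += gamma if hi == hj else -gamma / (r - 1)
    return M, C, D

M, C, D = sample_ghpm_h1(n=30, r=3, K=5, gamma=2.0)
```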
As we will show in Section \[sec:3-hidden-partition\], an empirical variance test succeeds above the threshold $\gamma_{\text{comp}}^2 = \tilde{\Theta}(n/rK^2)$ and an exhaustive search succeeds above $\gamma_{\text{IT}}^2 = \tilde{\Theta}(1/K)$ in $\pr{ghpm}$ and $\pr{bhpm}$ where $P_0$ is bounded away from $0$ and $1$. Thus our main lower bounds for these two problems confirm that this empirical variance test is approximately optimal among efficient algorithms and that both problems have a statistical-computational gap assuming the $k\pr{-pc}$ conjecture.
\[thm:ghpm-lb\] Suppose that $r^2 K^2 = \tilde{\omega}(n)$ and $(\lceil r^2 K^2/n \rceil, r)$ satisfies condition , suppose $r$ is prime or $r = \omega_n(1)$ and suppose that $P_0 \in (0, 1)$ satisfies $\min\{P_0, 1 - P_0 \} = \Omega_n(1)$. Then the $k\pr{-pc}$ conjecture implies that there is a computational lower bound for $\pr{ghpm}(n, r, K, \gamma)$ at all levels of signal $\gamma^2 = \tilde{o}(n/rK^2)$. This same lower bound also holds for $\pr{bhpm}(n, r, K, P_0, \gamma)$ given the additional condition $n = o(rK^{4/3})$.
We also remark that the empirical variance and exhaustive search tests, along with our lower bound, do not support the existence of a statistical-computational gap in the case when the subgraph is the entire graph with $n = rK$, which is our main motivation for considering this subgraph variant. Note that a number of the technical conditions in the theorem, such as condition and $n = o(rK^{4/3})$, are trivial in the parameter regime where the number of communities is not very large, with $r = n^{o(1)}$, and the total size of the hidden communities is large, with $rK = \tilde{\Theta}(n^{c})$ where $c > 3/4$. In this regime, these problems have a nontrivial statistical-computational gap that our result tightly characterizes.
Semirandom Planted Dense Subgraph and the Recovery Conjecture {#subsec:1-problems-semicr}
-------------------------------------------------------------
\[Figure \[fig:pdsdetrecgap\]: phase diagrams for *Community Detection* (left) and *Community Recovery* (right) in planted dense subgraph, plotted with $\beta$ on the horizontal axis and $\alpha$ on the vertical axis. In the detection panel, the plane is divided into an IT impossible region, a PC-hard region between the information-theoretic boundary $\pr{snr} \asymp \frac{1}{k}$ and the computational boundary $\pr{snr} \asymp \frac{n^2}{k^4}$, and a poly-time region above the latter. In the recovery panel, the computational boundary is instead $\pr{snr} \asymp \frac{n}{k^2}$, with an open region between the detection and recovery thresholds.\]
In the planted dense subgraph model of single community recovery, the observation is a sample from $\mG(n, k, P_1, P_0)$ which is formed by planting a random subgraph on $k$ vertices from $\mG(k, P_1)$ inside a copy of $\mG(n, P_0)$, where $P_1 > P_0$ are allowed to vary with $n$ and satisfy that $P_1 = O(P_0)$. Detection and recovery of the hidden community in this model have been studied extensively [@arias2014community; @butucea2013detection; @verzelen2015community; @hajek2015computational; @chen2016statistical; @hajek2016information; @montanari2015finding; @candogan2018finding] and this model has emerged as a canonical example of a problem with a detection-recovery computational gap. While it is possible to efficiently detect the presence of a hidden subgraph of size $k=\tilde \Omega(\sqrt{n})$ if $(P_1 - P_0)^2/P_0(1 - P_0) = \tilde{\Omega}(n^2/k^4)$, the best known polynomial time algorithms to *recover* the subgraph require a higher signal at the Kesten-Stigum threshold of $(P_1 - P_0)^2/P_0(1 - P_0) = \tilde{\Omega}(n/k^2)$.
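To make the model concrete, the following minimal sketch (with illustrative, above-threshold parameters; the helper name `sample_pds` is ours) samples an adjacency matrix from $\mG(n, k, P_1, P_0)$ and compares total edge counts under $H_1$ and $H_0$, the simplest detection statistic:

```python
import numpy as np

def sample_pds(n, k, p1, p0, rng):
    """Sample the adjacency matrix of G(n, k, P1, P0): edges inside a
    random k-subset S appear with probability p1, all others with p0."""
    S = rng.choice(n, size=k, replace=False)
    A = rng.random((n, n)) < p0
    A[np.ix_(S, S)] = rng.random((k, k)) < p1
    A = np.triu(A, 1)                # keep each potential edge once
    return (A | A.T), S              # symmetrize; zero diagonal

rng = np.random.default_rng(0)
n, k, p1, p0 = 400, 80, 0.9, 0.5
A1, S = sample_pds(n, k, p1, p0, rng)    # H1: planted dense subgraph
A0, _ = sample_pds(n, k, p0, p0, rng)    # H0: Erdos-Renyi G(n, p0)
edges_h1 = int(A1.sum()) // 2
edges_h0 = int(A0.sum()) // 2
# Roughly C(k,2)*(p1 - p0) ~ 1264 expected excess edges under H1, against
# a null standard deviation of ~141: the edge count separates the hypotheses.
```

Above the detection threshold, statistics of this flavor succeed in polynomial time; it is the *recovery* problem that is conjectured to be harder.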
In each of [@hajek2015computational; @brennan2018reducibility] and [@brennan2019universality], it has been conjectured that the recovery problem is hard below this threshold of $\tilde{\Theta}(n/k^2)$. This Recovery Conjecture was even used in [@brennan2018reducibility] as a hardness assumption to show detection-recovery gaps in other problems including biased sparse PCA and Gaussian biclustering. A line of work has tightly established the conjectured detection threshold through reductions from the $\pr{pc}$ conjecture [@hajek2015computational; @brennan2018reducibility; @brennan2019universality], while the recovery threshold has remained elusive. Planted clique maps naturally to the detection threshold in this model, so it seems unlikely that the $\pr{pc}$ conjecture could also yield lower bounds at the tighter recovery threshold, given that recovery and detection are known to be equivalent for planted clique [@alon2007testing]. These prior lower bounds and the conjectured detection-recovery gap in $\pr{pds}$ are depicted in Figure \[fig:pdsdetrecgap\].
We show that the $k\pr{-pc}$ conjecture implies the Recovery Conjecture for *semirandom* community recovery in the regime where $P_0 = \Theta(1)$. Semirandom adversaries provide an alternate notion of robustness against constrained modifications that heuristically appear to increase the signal strength [@blum1995coloring]. Algorithms and lower bounds in semirandom problems have been studied for a number of problems, including the stochastic block model [@feige2001heuristics; @moitra2016robust], planted clique [@feige2000finding], unique games [@kolla2011play], correlation clustering [@mathieu2010correlation; @makarychev2015correlation], graph partitioning [@makarychev2012approximation], 3-coloring [@david2016effect] and clustering mixtures of Gaussians [@vijayaraghavan2018clustering]. Formally we consider the problem $\pr{semi-cr}(n, k, P_1, P_0)$ where a semirandom adversary is allowed to remove edges outside of the planted subgraph from a graph sampled from $\mG(n, k, P_1, P_0)$. The task is to test between this model and an Erdős-Rényi graph $\mG(n, P_0)$ similarly perturbed by a semirandom adversary. As we will discuss in Section \[subsec:2-formulations\], lower bounds for this formulation extend to approximately recovering the hidden community under a semirandom adversary. In Section \[sec:semirandom\], we prove the following theorem – that the computational barrier in the detection problem shifts to the recovery threshold in $\pr{semi-cr}$.
\[thm:semi-cr-lb\] If $k$ and $n$ are polynomial in each other with $k = \Omega(\sqrt{n})$ and $0 < P_0 < P_1 \le 1$ where $\min\{P_0, 1 - P_0 \} = \Omega(1)$, then the $k\pr{-pc}$ conjecture implies that there is a computational lower bound for $\pr{semi-cr}(n, k, P_1, P_0)$ at $\frac{(P_1 - P_0)^2}{P_0(1 - P_0)} = \tilde{o}(n/k^2)$.
A related reference is the reduction in [@cai2015computational], which proves a detection-recovery gap in the context of sub-Gaussian submatrix localization based on the hardness of finding a planted $k$-clique in a random $n/2$-regular graph. The relationship between our lower bound and that of [@cai2015computational] is discussed in more detail in Section \[sec:semirandom\]. From an algorithmic perspective, the convexified maximum likelihood algorithm from [@chen2016statistical] complements our lower bound – a simple monotonicity argument shows that it continues to solve the community recovery problem above the Kesten-Stigum threshold under a semirandom adversary.
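To fix ideas, here is a minimal sketch of the semirandom perturbation. The model permits *any* deletion of edges with an endpoint outside the planted subgraph; the independent-deletion adversary and the helper name `semirandom_remove` below are just one illustrative choice:

```python
import numpy as np

def semirandom_remove(A, S, frac, rng):
    """Semirandom adversary for semi-cr: deletes edges that have at least
    one endpoint outside the planted set S (edges inside S are protected).
    This illustrative adversary deletes each eligible edge independently
    with probability frac; the model allows arbitrary such deletions."""
    n = A.shape[0]
    inside = np.zeros(n, dtype=bool)
    inside[S] = True
    protected = np.outer(inside, inside)       # vertex pairs within S
    kill = (rng.random((n, n)) < frac) & ~protected
    kill = np.triu(kill, 1)
    kill = kill | kill.T                       # delete symmetrically
    return A & ~kill

rng = np.random.default_rng(1)
n, S = 50, np.arange(5)
A = np.ones((n, n), dtype=bool)                # dense graph, for illustration
A_adv = semirandom_remove(A, S, frac=0.5, rng=rng)
```

Note that the planted edges inside $S$ survive any such adversary, which is exactly why monotone algorithms such as the convexified maximum likelihood approach remain valid above the Kesten-Stigum threshold.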
Negatively Correlated Sparse Principal Component Analysis {#subsec:1-problems-negspca}
---------------------------------------------------------
In sparse principal component analysis (PCA), the observations $X_1, X_2, \dots, X_n$ are $n$ independent samples from $\mN(0, \Sigma)$ where the eigenvector $v$ corresponding to the largest eigenvalue of $\Sigma$ is $k$-sparse, and the task is to estimate $v$ in $\ell_2$ norm or find its support. Sparse PCA has many applications ranging from online visual tracking [@wang2013online] and image compression [@majumdar2009image] to gene expression analysis [@zou2006sparse; @chun2009expression; @parkhomenko2009sparse; @chan2010using]. Showing lower bounds for sparse PCA can be reduced to analyzing detection in the spiked covariance model [@johnstoneSparse04], which has hypotheses $$H_0:X \sim \mN(0, I_d)^{\otimes n} \quad\text{ and }\quad H_1:X \sim \mN(0, I_d + \theta vv^\top)^{\otimes n}$$ Here, $H_1$ is the composite hypothesis where $v \in \mathbb{R}^d$ is unknown and allowed to vary over all $k$-sparse unit vectors. The information-theoretically optimal rate of detection is at the level of signal $\theta = \Theta(\sqrt{k \log d/n})$ [@berthet2013optimal; @cai2015optimal; @wang2016statistical]. However, when $k = o(\sqrt{d})$, the best known polynomial time algorithms for sparse PCA require that $\theta = \Omega(\sqrt{k^2/n})$. Since the seminal paper of [@berthet2013complexity] initiated the study of statistical-computational gaps through the $\pr{pc}$ conjecture, this $k$-to-$k^2$ gap for sparse PCA has been shown to follow from the $\pr{pc}$ conjecture in a sequence of papers [@berthet2013optimal; @berthet2013complexity; @wang2016statistical; @gao2017sparse; @brennan2018reducibility; @brennan2019optimal].
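As a concrete illustration of the spiked covariance model and of the simplest polynomial-time approach in the $\theta = \Omega(\sqrt{k^2/n})$ regime, the sketch below samples from $H_1$ via the signal-plus-noise form and recovers the support of $v$ by diagonal thresholding; the parameters are illustrative and chosen well above the computational threshold:

```python
import numpy as np

# Sample from the spiked covariance model N(0, I_d + theta * v v^T) using
# X_i = sqrt(theta) * g_i * v + N(0, I_d) with independent g_i ~ N(0, 1),
# then recover the support of v by taking the k coordinates of largest
# empirical variance (diagonal thresholding).
rng = np.random.default_rng(1)
n, d, k, theta = 500, 200, 10, 5.0
S = rng.choice(d, size=k, replace=False)
v = np.zeros(d)
v[S] = 1.0 / np.sqrt(k)                      # k-sparse unit vector
X = rng.standard_normal((n, d)) + np.sqrt(theta) * np.outer(rng.standard_normal(n), v)
support_hat = np.argsort(X.var(axis=0))[-k:]
overlap = len(set(support_hat) & set(S))     # coordinates correctly recovered
```

Support coordinates have population variance $1 + \theta/k$ versus $1$ off the support, so with these parameters the empirical variances separate cleanly.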
In negatively correlated sparse PCA, the eigenvector $v$ of interest instead corresponds to the *smallest eigenvalue* of $\Sigma$. Negative sparse PCA can similarly be formulated as a hypothesis testing problem $\pr{neg-spca}(n, k, d, \theta)$, where the alternative hypothesis is instead given by $H_1: X \sim \mN(0, I_d - \theta vv^\top)^{\otimes n}$. Similar algorithms as in ordinary sparse PCA continue to work in the negative setting – the information-theoretic limit of the problem remains at $\theta = \Theta(\sqrt{k \log d/n})$ and the best known efficient algorithms still require $\theta = \Omega(\sqrt{k^2/n})$. However, negative sparse PCA is stochastically a *very differently structured* problem than ordinary sparse PCA. A sample from the ordinary spiked covariance model can be expressed as $$X_i = \sqrt{\theta} \cdot gv + \mN(0, I_d)$$ where $g \sim \mN(0, 1)$ is independent of the $\mN(0, I_d)$ term. This signal plus noise representation is a common feature in many high-dimensional statistical models and is crucially used in the reductions showing hardness for sparse PCA in [@berthet2013optimal; @berthet2013complexity; @wang2016statistical; @gao2017sparse; @brennan2018reducibility; @brennan2019optimal]. Negative sparse PCA does not admit a representation of this form, making it an atypical planted problem and different from ordinary sparse PCA, despite the deceptive similarity between their optimal algorithms. The lack of this representation makes reducing to negative sparse PCA technically challenging. Negatively spiked PCA was also recently related to the hardness of finding approximate ground states in the Sherrington-Kirkpatrick model [@bandeira2019computational]. However, ordinary PCA does not seem to share this connection. In Section \[sec:2-neg-spca\], we give a reduction obtaining the following computational lower bound for $\pr{neg-spca}$ from the $\pr{bpc}$ conjecture.
\[thm:neg-spca-lb\] If $k, d$ and $n$ are polynomial in each other, $k = o(\sqrt{d})$ and $k = o(n^{1/6})$, then the $\pr{bpc}$ conjecture implies a computational lower bound for $\pr{neg-spca}(n, k, d, \theta)$ at all levels of signal $\theta = \tilde{o}(\sqrt{k^2/n})$.
We deduce this theorem and discuss its conditions in detail in Section \[subsec:3-neg-spca\]. A key step in our reduction to $\pr{neg-spca}$ involves randomly rotating the positive semidefinite square root of the inverse of an empirical covariance matrix. In analyzing this step, we prove a novel convergence result in random matrix theory, which may be of independent interest. Specifically, we characterize when a Wishart matrix and its inverse converge in KL divergence. This is where the parameter constraint $k = o(n^{1/6})$ in the theorem above arises. We believe that this is an artefact of our techniques and extending the theorem to hold without this condition is an interesting open problem. A similar condition arose in the strong lower bounds of [@brennan2019optimal]. We remark that conditions of this form *do not affect the tightness* of our lower bounds, but rather only impose a constraint on the level of sparsity $k$. More precisely, for each fixed level of sparsity $k = \tilde{\Theta}(n^{\alpha})$, there is a conjectured statistical-computational gap in $\theta$ between the information-theoretic barrier of $\theta = \Theta(\sqrt{k \log d/n})$ and computational barrier of $\theta = \tilde{o}(\sqrt{k^2/n})$. Our reduction tightly establishes this gap for all $\alpha \in (0, 1/6]$. Our main motivation for considering $\pr{neg-spca}$ is that it seems to have a fundamental connection to the structure of *supervised problems* where ordinary sparse PCA does not. In particular, our reduction to $\pr{neg-spca}$ is a crucial subroutine in reducing to mixtures of sparse linear regressions and robust sparse linear regression. This is discussed further in Sections \[sec:1-techniques\], \[sec:2-neg-spca\] and \[sec:2-supervised\].
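To illustrate the structural difference described above: a sample from the negative model must be generated through the covariance's PSD square root rather than by adding an independent rank-one signal to white noise. A minimal sketch, using the closed form $(I_d - \theta vv^\top)^{1/2} = I_d + (\sqrt{1 - \theta} - 1)\, vv^\top$ for a unit vector $v$:

```python
import numpy as np

# Sampling from the negative spiked model N(0, I_d - theta * v v^T).
# Unlike the positive spike, there is no representation as an independent
# rank-one signal plus N(0, I_d) noise, so we multiply white noise by the
# PSD square root of the covariance instead.
rng = np.random.default_rng(2)
n, d, k, theta = 2000, 50, 5, 0.8
v = np.zeros(d)
v[:k] = 1.0 / np.sqrt(k)               # k-sparse unit vector
c = np.sqrt(1.0 - theta) - 1.0         # (I + c v v^T)^2 = I - theta v v^T
Z = rng.standard_normal((n, d))
X = Z + c * np.outer(Z @ v, v)         # X_i = (I + c v v^T) Z_i
# The variance along v shrinks to 1 - theta, the defining feature of the model.
var_along_v = float(np.var(X @ v))
```

One can check the closed form directly: $(I + c\,vv^\top)^2 = I + (2c + c^2)\,vv^\top$ and $2c + c^2 = -\theta$ for this choice of $c$.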
Unsigned and Mixtures of Sparse Linear Regressions {#subsec:1-problems-mslr}
--------------------------------------------------
In learning mixtures of sparse linear regressions (SLR), the task is to learn $L$ sparse linear functions capturing the relationship between features and response variables in heterogeneous samples from $L$ different sparse regression problems. Formally, the observations $(X_1, y_1), (X_2, y_2), \dots, (X_n, y_n)$ are $n$ independent sample-label pairs given by $y_i = \langle \beta, X_i \rangle + \eta_i$ where $X_i \sim \mN(0, I_d)$, $\eta_i \sim \mN(0, 1)$ and $\beta$ is chosen from a mixture distribution $\nu$ over a finite set of $k$-sparse vectors $\{\beta_1, \beta_2, \dots, \beta_L\}$ of bounded $\ell_2$ norm. The task is to estimate the components $\beta_j$ that are sufficiently likely under $\nu$ in $\ell_2$ norm, i.e. to within an $\ell_2$ distance of $\tau$.
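Concretely, the data-generation process in the two-component case, with the symmetric choice $\beta_2 = -\beta_1$ that turns out to be the hard instance, can be sketched as follows (illustrative parameters):

```python
import numpy as np

# Two-component symmetric mixture of sparse linear regressions:
# each label uses beta or -beta with probability 1/2.
rng = np.random.default_rng(3)
n, d, k = 1000, 100, 10
beta = np.zeros(d)
beta[:k] = 1.0 / np.sqrt(k)                  # k-sparse, unit ell_2 norm
signs = rng.choice([-1.0, 1.0], size=n)      # latent mixture component
X = rng.standard_normal((n, d))
y = signs * (X @ beta) + rng.standard_normal(n)
# Marginally, y is uncorrelated with every column of X: first-moment
# statistics carry no signal, a hallmark of the symmetric mixture.
corr = X.T @ y / n
```

The vanishing correlations are exactly why ordinary regression estimators fail here and second-moment information must be used instead.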
Mixtures of linear regressions, also known as the hierarchical mixtures of experts model in the machine learning community [@jordan1994hierarchical], was first introduced in [@quandt1978estimating] and has been studied extensively in the past few decades [@de1989mixtures; @wedel1995mixture; @mclachlan2004finite; @zhu2004hypothesis; @faria2010fitting]. Recent work on mixtures of linear regressions has focussed on efficient algorithms with finite-sample guarantees [@chaganty2013spectral; @chen2014convex; @yi2014alternating; @balakrishnan2017statistical; @chen2017convex; @li2018learning]. The high-dimensional setting of mixtures of SLRs was first considered in [@stadler2010l], which proved an oracle inequality for an $\ell_1$-regularization approach, and variants of the EM algorithm for mixtures of SLRs were analyzed in [@wang2014high; @yi2015regularized]. Recent work has also studied a different setting for mixtures of SLRs where the covariates $X_i$ can be designed by the learner [@yin2018learning; @krishnamurthy2019sample].
We show that a statistical-computational gap emerges for mixtures of SLRs even in the simplest case where there are $L = 2$ components, the mixture distribution $\nu$ is known to sample each component with probability $1/2$ and the task is to estimate even just one of the components $\{ \beta_1, \beta_2\}$ to within $\ell_2$ norm $\tau$. We refer to this simplest setup for learning mixtures of SLRs as $\pr{mslr}(n, k, d, \tau)$. The following computational lower bound is deduced in Section \[subsec:3-slr\] and is a consequence of the reduction in Section \[sec:2-supervised\].
\[thm:mslr-lb\] If $k, d$ and $n$ are polynomial in each other, $k = o(\sqrt{d})$ and $k = o(n^{1/6})$, then the $k\pr{-bpc}$ conjecture implies that there is a computational lower bound for $\pr{mslr}(n, k, d, \tau)$ at all sample complexities $n = \tilde{o}(k^2/\tau^4)$.
As we will discuss in Section \[subsec:2-formulations\], we will prove this theorem by reducing to the problem of testing between the mixtures of SLRs model when $\beta_1 = - \beta_2$ and a null hypothesis under which $y$ and $X$ are independent. A closely related work [@fan2018curse] studies a nearly identical testing problem in the statistical query model. They tightly characterize the information-theoretic limit of this problem, showing that it occurs at the sample complexity $n = \tilde{\Theta}(k \log d /\tau^4)$. Therefore our reduction establishes a $k$-to-$k^2$ statistical-computational gap in this model of learning mixtures of SLRs. In [@fan2018curse], it is also shown that efficient algorithms in the statistical query model suffer from this same $k$-to-$k^2$ gap.
Our reduction to the hypothesis testing formulation of $\pr{mslr}$ above is easily seen to imply that the same computational lower bound holds for an unsigned variant $\pr{uslr}(n, k, d, \tau)$ of SLR, where the $n$ observations $(X_1, y_1), (X_2, y_2), \dots, (X_n, y_n)$ are now of the form $y_i = |\langle \beta, X_i \rangle + \eta_i|$ for a fixed unknown $\beta$. Note that by the symmetry of $\mN(0, 1)$, $y_i$ is equidistributed to $||\langle \beta, X_i \rangle | + \eta_i|$ and thus is a noisy observation of $|\langle \beta, X_i \rangle |$. In general, noisy observations of the phaseless modulus $|\langle \beta, X_i \rangle |$ from some conditional link distribution $\bP( \cdot \, | \, |\langle \beta, X_i \rangle | )$ yield a general instance of phase retrieval [@mondelli2018fundamental; @celentano2020estimation]. As observed in [@fan2018curse], the problem $\pr{uslr}$ is close to the canonical formulation of sparse phase retrieval (SPR) where $\bP( \cdot \, | \, |\langle \beta, X_i \rangle | )$ is $\mN(|\langle \beta, X_i \rangle |^2, \sigma^2)$, which has been studied extensively and has a conjectured $k$-to-$k^2$ statistical-computational gap [@li2013sparse; @schniter2014compressive; @candes2015phase; @cai2016optimal; @wang2017sparse; @hand2018phase; @barbier2019optimal; @celentano2020estimation]. Our lower bounds provide partial evidence for this conjecture and it is an interesting open problem to give a reduction to the canonical formulation of SPR and other sparse GLMs through average-case reductions.
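The symmetry fact used above is easy to check numerically: for a fixed value $a$ and $\eta \sim \mN(0, 1)$, the variables $|a + \eta|$ and $||a| + \eta|$ agree in distribution, so $y_i$ is indeed a noisy observation of $|\langle \beta, X_i \rangle|$. A quick sketch:

```python
import numpy as np

# Empirical check that |a + eta| and ||a| + eta| are equidistributed for
# symmetric eta, comparing means and a few quantiles on a large sample.
rng = np.random.default_rng(4)
a, m = -1.3, 500_000
eta = rng.standard_normal(m)
y1 = np.abs(a + eta)            # |a + eta|
y2 = np.abs(abs(a) + eta)       # ||a| + eta|
qs = [0.1, 0.25, 0.5, 0.75, 0.9]
q1 = np.quantile(y1, qs)
q2 = np.quantile(y2, qs)
```

The agreement follows from $\eta \overset{d}{=} -\eta$: replacing $\eta$ by $-\eta$ maps $|a + \eta|$ to $|a - \eta| = ||a| + \eta|$ up to the sign of $a$.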
The reduction to $\pr{mslr}$ showing Theorem \[thm:mslr-lb\] in Section \[sec:2-supervised\] is our capstone reduction. It showcases a wide range of our techniques including dense Bernoulli rotations, constructions of combinatorial design matrices from $\mathbb{F}_r^t$, our reduction to $\pr{neg-spca}$ and its connection to random matrix theory, and an additional technique of combining instances of different unsupervised problems into a supervised problem. We give an overview of these techniques in Section \[sec:1-techniques\]. Furthermore, $\pr{mslr}$ is a very differently structured problem from any of our variants of $\pr{pc}$ and it is surprising that the tight statistical-computational gap for $\pr{mslr}$ can be derived from their hardness. We remark that our lower bounds for $\pr{mslr}$ inherit the technical condition that $k = o(n^{1/6})$ from our reduction to $\pr{neg-spca}$. As before, this does not affect the fact that we show tight hardness and it is an interesting open problem to remove this condition.
Robust Sparse Linear Regression {#subsec:1-problems-robust-slr}
-------------------------------
In ordinary SLR, the observations $(X_1, y_1), (X_2, y_2), \dots, (X_n, y_n)$ are independent sample-label pairs given by $y_i = \langle \beta, X_i \rangle + \eta_i$ where $X_i \sim \mN(0, \Sigma)$, $\eta_i \sim \mN(0, 1)$ and $\beta$ is an unknown $k$-sparse vector with bounded $\ell_2$ norm. The task is to estimate $\beta$ to within $\ell_2$ norm $\tau$. When $\Sigma$ is well-conditioned, SLR is a gapless problem with the computationally efficient LASSO attaining the information-theoretically optimal sample complexity of $n = \Theta(k \log d/\tau^2)$ [@tibshirani1996regression; @bickel2009simultaneous; @raskutti2010restricted]. When $\Sigma$ is not well-conditioned, SLR has a statistical-computational gap based on its restricted eigenvalue constant [@zhang2014lower]. As with robust sparse mean estimation, the robust SLR problem $\pr{rslr}(n, k, d, \tau, \epsilon)$ is obtained when a computationally-unbounded adversary corrupts an arbitrary $\epsilon$-fraction of the observed sample-label pairs. In this work, we consider the simplest case of $\Sigma = I_d$ where SLR is gapless but, as we discuss next, robustness seems to induce a statistical-computational gap.
Robust regression is a well-studied classical problem in statistics [@rousseeuw2005robust]. Efficient algorithms remained elusive for decades, but recent breakthroughs in sum of squares algorithms [@klivans2018efficient; @karmalkar2019list; @raghavendra2020list], filtering approaches [@diakonikolas2019efficient] and robust gradient descent [@chen2017distributed; @prasad2018robust; @diakonikolas2019sever] have led to the first efficient algorithms with provable guarantees. A recent line of work has also studied efficient algorithms and barriers in the high-dimensional setting of robust SLR [@chen2013robust; @balakrishnan2017computationally; @liu2018high; @liu2019high]. Even in the simplest case of $\Sigma = I_d$ where the covariates $X_i$ have independent entries, the best known polynomial time algorithms suggest robust SLR has a $k$-to-$k^2$ statistical-computational gap. As shown in [@gao2020robust], similar to $\pr{rsme}$, robust SLR is only information-theoretically possible if $\tau = \Omega(\epsilon)$. In [@balakrishnan2017computationally; @liu2018high], it is shown that polynomial-time ellipsoid-based algorithms solve robust SLR with $n = \tilde{\Theta}(k^2 \log d/\epsilon^2)$ samples when $\tau = \tilde{\Theta}(\epsilon)$. Furthermore, [@liu2018high] shows that an $\pr{rsme}$ oracle can be used to solve robust SLR with only a $\tilde{\Theta}(1)$ factor loss in $\tau$ and the required number of samples $n$. As noted in [@li2017robust], $n = \Omega(k \log d/\epsilon^2)$ samples suffice to solve $\pr{rsme}$ inefficiently when $\tau = \Theta(\epsilon)$. Combining these observations yields an inefficient algorithm for robust SLR with sample complexity $n = \tilde{\Theta}(k \log d/\epsilon^2)$ samples when $\tau = \tilde{\Theta}(\epsilon)$, confirming that the best known efficient algorithms suggest a $k$-to-$k^2$ statistical-computational gap. 
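To fix the corruption model, here is a purely illustrative sketch: the adversary realizing our computational lower bound, constructed in Section \[sec:2-supervised\], is far more subtle, and a sparse estimator such as LASSO rather than a dense least-squares solve would be used when $d \gg n$. Even a crude label-overwriting adversary already breaks naive least squares while the clean subsample remains informative:

```python
import numpy as np

# eps-corruption in robust SLR: draw n clean pairs with
# y_i = <beta, X_i> + eta_i, then overwrite the labels of an
# eps-fraction of samples with adversarial values.
rng = np.random.default_rng(5)
n, d, k, eps = 1000, 50, 5, 0.1
beta = np.zeros(d)
beta[:k] = 1.0 / np.sqrt(k)
X = rng.standard_normal((n, d))
y = X @ beta + rng.standard_normal(n)
m = int(eps * n)
y_corrupt = y.copy()
y_corrupt[:m] = 50.0                          # adversarial labels
beta_naive = np.linalg.lstsq(X, y_corrupt, rcond=None)[0]
beta_oracle = np.linalg.lstsq(X[m:], y[m:], rcond=None)[0]  # knows the clean set
err_naive = float(np.linalg.norm(beta_naive - beta))
err_oracle = float(np.linalg.norm(beta_oracle - beta))
```

The naive estimate absorbs a bias of order $\epsilon$ times the corruption magnitude, while the oracle that discards corrupted pairs attains the usual $\sqrt{d/n}$ error.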
In [@chen2013robust; @liu2019high], efficient algorithms are shown to succeed in an alternative regime where $n = \tilde{\Theta}(k \log d)$, $\epsilon = \tilde{O}(1/\sqrt{k})$ and $\tau = \tilde{O}(\epsilon \sqrt{k})$.
All of these algorithms suggest that the correct computational sample complexity for robust SLR is $n = \tilde{\Omega}(k^2 \epsilon^2/\tau^4)$. In Section \[subsec:3-slr\], we deduce the following tight computational lower bound for $\pr{rslr}$ providing evidence for this conjecture.
\[thm:rslr-lb\] If $k, d$ and $n$ are polynomial in each other, $k = o(n^{1/6})$, $k = o(\sqrt{d})$ and $\epsilon < 1/2$ is such that $\epsilon = \tilde{\Omega}(n^{-1/2})$, then the $k\pr{-bpc}$ conjecture implies that there is a computational lower bound for $\pr{rslr}(n, k, d, \tau, \epsilon)$ at all sample complexities $n = \tilde{o}(k^2 \epsilon^2/\tau^4)$.
We present the reductions to $\pr{mslr}$ and $\pr{rslr}$ together as a single unified reduction $k\pr{-pds-to-mslr}$ in Section \[sec:2-supervised\]. As is discussed in Section \[subsec:3-slr\], $\pr{mslr}$ and $\pr{rslr}$ are obtained by setting $r = \epsilon^{-1} = 2$ and $\epsilon < 1/2$, respectively. The theorem above follows from a slightly modified version of this reduction, $k\pr{-pds-to-mslr}_R$, that removes the technical condition that otherwise arises in applying $k\pr{-pds-to-mslr}$ with $r = n^{\Omega(1)}$. This turns out to be more important here than in the context of $\pr{rsme}$ because, as in the reduction to $\pr{mslr}$, this reduction to $\pr{rslr}$ inherits the technical condition that $k = o(n^{1/6})$ from our reduction to $\pr{neg-spca}$. This condition implicitly imposes a restriction on $\epsilon$ to satisfy that $\epsilon = \tilde{O}(n^{-1/3})$, since $\tau = \Omega(\epsilon)$ must be true for the problem to not be information-theoretically impossible. Thus our regime of interest for $\pr{rslr}$ is a regime where the technical condition is nontrivial.
As in the case of $\pr{mslr}$ and $\pr{neg-spca}$, we emphasize that the condition $k = o(n^{1/6})$ does not affect the tightness of our lower bounds, merely restricting their regime of application. In particular, the theorem above yields a tight nontrivial statistical-computational gap in the entire parameter regime when $k = o(n^{1/6})$, $\tau = \Omega(\epsilon)$ and $\epsilon = \tilde{\Theta}(n^{-c})$ where $c$ is any constant in the interval $[1/3, 1/2]$. We remark that the condition $k = o(n^{1/6})$ seems to be an artefact of our techniques rather than necessary.
In the context of $\pr{rslr}$, we view our main contribution as a set of reduction techniques relating $\pr{pc}_\rho$ to the very differently structured problem $\pr{rslr}$, rather than the resulting computational lower bound itself. A byproduct of our reduction is the explicit construction of an adversary modifying an $\epsilon$-fraction of the samples in robust SLR that produces the $k$-to-$k^2$ statistical-computational gap in the theorem above. This adversary turns out to be surprisingly nontrivial on its own, but is a direct consequence of the structure of the reduction. This is discussed in more detail in Sections \[subsec:2-mixtures-slr\] and \[subsec:3-slr\].
Tensor Principal Component Analysis {#subsec:1-problems-tpca}
-----------------------------------
In tensor PCA, the observation is a single order $s$ tensor $T$ with dimensions $n^{\otimes s} = n \times n \times \cdots \times n$ given by $T \sim \theta v^{\otimes s} + \mN(0, 1)^{\otimes n^{\otimes s}}$, where $v$ has a Rademacher prior and is distributed uniformly over $\{-1, 1\}^n$ [@richard2014statistical]. The task is to recover $v$ to within nontrivial $\ell_2$ error $o(\sqrt{n})$, which is information-theoretically possible only if $\theta = \tilde{\omega}\left(n^{(1 - s)/2}\right)$ [@richard2014statistical; @lesieur2017statistical; @chen2018phase; @jagannath2018statistical; @chen2019phase; @perry2020statistical], in which case $v$ can be recovered through exhaustive search. The best known polynomial-time algorithms all require the higher signal strength $\theta = \tilde{\Omega}(n^{-s/4})$, at which point $v$ can be recovered through spectral algorithms [@richard2014statistical], the sum of squares hierarchy [@hopkins2015tensor; @hopkins2016fast] and spectral algorithms based on the Kikuchi hierarchy [@wein2019kikuchi]. Lower bounds up to this conjectured computational barrier have been shown in the sum of squares hierarchy [@hopkins2015tensor; @hopkins2017power] and for low-degree polynomials [@kunisky2019notes]. A number of natural “local” algorithms have also been shown to fail given much stronger levels of signal up to $\theta = \tilde{o}(n^{-1/2})$, including approximate message passing, the tensor power method, Langevin dynamics and gradient descent [@richard2014statistical; @anandkumar2014tensor; @arous2018algorithmic].
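A minimal sketch of the $s = 3$ model and of the matrix-unfolding spectral method of [@richard2014statistical], with illustrative $n$ and a value of $\theta$ chosen comfortably above the computational barrier:

```python
import numpy as np

# Order-3 tensor PCA: T = theta * v^{(x3)} + N(0,1) noise with v uniform on
# {-1,1}^n.  Recovery via unfolding: reshape T to an n x n^2 matrix and
# take its top left singular vector.
rng = np.random.default_rng(6)
n, theta = 40, 0.6
v = rng.choice([-1.0, 1.0], size=n)
signal = theta * np.einsum('i,j,k->ijk', v, v, v)
T = signal + rng.standard_normal((n, n, n))
M = T.reshape(n, n * n)                       # unfold along the first mode
u = np.linalg.svd(M, full_matrices=False)[0][:, 0]
corr = abs(float(u @ v)) / np.sqrt(n)         # in [0, 1]; near 1 means recovery
```

Here the planted component contributes a singular value of order $\theta n^{3/2}$ against a noise spectral norm of order $n$, so the top singular vector aligns with $v/\sqrt{n}$.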
We give a reduction showing that the $\pr{pc}_\rho$ conjecture implies an optimal computational lower bound at $\theta = \tilde{\Omega}(n^{-s/4})$ for tensor PCA. We show this lower bound against efficient algorithms with a low false positive probability of error in the hypothesis testing formulation of tensor PCA where $T \sim \mN(0, 1)^{\otimes n^{\otimes s}}$ under $H_0$ and $T$ is sampled from the tensor PCA distribution described above under $H_1$. More precisely, we prove the following theorem in Sections \[sec:2-hypergraph-planting\] and \[sec:3-tensor\].
\[thm:tpca-lb\] Let $n$ be a parameter and $s \ge 3$ be a constant, then the $k\pr{-hpc}^s$ conjecture implies a computational lower bound for $\pr{tpca}^s(n, \theta)$ when $\theta = \tilde{o}(n^{-s/4})$ against $\textnormal{poly}(n)$ time algorithms $\mathcal{A}$ solving $\pr{tpca}^s(n, \theta)$ with a low false positive probability of $\bP_{H_0}[\mathcal{A}(T) = H_1] = O(n^{-s})$.
Lemma \[lem:one-side-estimation\] in Section \[sec:3-tensor\] shows that any $\text{poly}(n)$ time algorithm solving the recovery formulation of tensor PCA yields such an algorithm $\mathcal{A}$, and thus this theorem implies our desired computational lower bound. This low false positive probability of error condition on $\mathcal{A}$ arises from the fact that our reduction to $\pr{tpca}$ is a *multi-query* average-case reduction, requiring multiple calls to a tensor PCA blackbox to solve $k\pr{-hpc}^s$. This feature is a departure from the rest of our reductions and the other average-case reductions to statistical problems in the literature, all of which are reductions in total variation, as will be described in Section \[subsec:2-tvreductions\], and thus only require a single query. This feature is a requirement of our technique for completing hypergraphs that will be described further in Sections \[subsec:1-tech-completing\] and \[sec:2-hypergraph-planting\].
We note that most formulations of tensor PCA in the literature also assume that the noise tensor of standard Gaussians is symmetric [@richard2014statistical; @wein2019kikuchi]. However, given that the planted rank-1 component $v^{\otimes s}$ is symmetric as it is in our formulation, the symmetric and asymmetric noise models have a simple equivalence up to a constant factor loss in $\theta$. Averaging the entries of the asymmetric model over all permutations of its $s$ coordinates shows one direction of this equivalence, and the other is achieved by reversing this averaging procedure through Gaussian cloning as in Section 10 of [@brennan2018reducibility]. A closely related work is that of [@zhang2017tensor], which gives a reduction from $\pr{hpc}^3$ to the problem of detecting a planted rank-1 component in a 3-tensor of Gaussian noise. Aside from being obtained through different techniques, their result differs from ours in two ways: (1) the rank-1 components they considered were sparse, rather than sampled from a Rademacher prior; and (2) their reduction necessarily produces asymmetric rank-1 components. Although the limits of tensor PCA when $s \ge 3$ with sparse and Rademacher priors are similar, they can be very different in other problems. For example, in the matrix case when $s = 2$, a sparse prior yields a problem with a statistical-computational gap while a Rademacher prior does not. We also remark that ensuring the symmetry of the planted rank-1 component is a technically difficult step and part of the motivation for our completing hypergraphs technique in Section \[sec:2-hypergraph-planting\].
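The averaging direction of this equivalence is immediate to implement: symmetrizing over all $s!$ axis permutations leaves the planted $v^{\otimes s}$ unchanged and keeps the noise Gaussian with reduced variance, which accounts for the constant factor loss in $\theta$. A sketch for $s = 3$:

```python
import numpy as np
from itertools import permutations

def symmetrize(T):
    """Average a tensor over all permutations of its axes.  A symmetric
    planted component is invariant; i.i.d. Gaussian noise stays Gaussian
    with variance reduced by a constant factor."""
    perms = list(permutations(range(T.ndim)))
    return sum(T.transpose(p) for p in perms) / len(perms)

rng = np.random.default_rng(7)
n = 15
v = rng.choice([-1.0, 1.0], size=n)
T = 0.5 * np.einsum('i,j,k->ijk', v, v, v) + rng.standard_normal((n, n, n))
T_sym = symmetrize(T)
```

The reverse direction (recovering independent asymmetric noise from symmetric noise) is the nontrivial one, achieved through Gaussian cloning as cited above.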
Universality for Learning Sparse Mixtures {#subsec:1-problems-universality}
-----------------------------------------
When $\epsilon = 1/2$, our reduction to robust sparse mean estimation also implicitly shows tight computational lower bounds at $n = \tilde{o}(k^2/\tau^4)$ for learning sparse Gaussian mixtures. In this problem the task is to estimate two vectors $\mu_1, \mu_2$ up to $\ell_2$ error $\tau$, where the $\mu_i$ have bounded $\ell_2$ norms and a $k$-sparse difference $\mu_1 - \mu_2$, given samples from an even mixture of $\mN(\mu_1, I_d)$ and $\mN(\mu_2, I_d)$. In general, learning in Gaussian mixture models with sparsity has been studied extensively over the past two decades [@raftery2006variable; @pan2007penalized; @maugis2009variable; @maugis2011non; @azizyan2013minimax; @azizyan2015efficient; @malsiner2016model; @verzelen2017detection; @fan2018curse]. Recent work has established finite-sample guarantees for efficient and inefficient algorithms and proven information-theoretic lower bounds for the two-component case [@azizyan2013minimax; @verzelen2017detection; @fan2018curse]. These works conjectured that this problem has the $k$-to-$k^2$ statistical-computational gap shown by our reduction. In [@fan2018curse], a tight computational lower bound matching ours was established in the statistical query model.
So far, despite having a variety of different hidden structures, the problems we have considered have all had either Gaussian or Bernoulli noise distributions. As we will describe in Section \[sec:1-techniques\], our techniques also crucially use a number of properties of the Gaussian distribution. This naturally raises the question: do our techniques have implications beyond simple noise distributions? Our final reduction answers this affirmatively, showing that our lower bound for learning sparse Gaussian mixtures implies computational lower bounds for a wide universality class of noise distributions. This lower bound includes the optimal gap in learning sparse Gaussian mixtures and the optimal gaps in [@berthet2013optimal; @berthet2013complexity; @wang2016statistical; @gao2017sparse; @brennan2018reducibility] for sparse PCA as special cases. This reduction requires introducing a new type of rejection kernel, which we refer to as a symmetric 3-ary rejection kernel, and is described in Sections \[subsec:1-tech-universality\] and \[subsec:srk\].
In Section \[sec:universality\], we show computational lower bounds for the *generalized learning sparse mixtures* problem $\pr{glsm}$. In $\pr{glsm}(n, k, d, \mU)$ where $\mathcal{U} = (\mD, \mQ, \{ \mP_{\nu} \}_{\nu \in \mathbb{R}})$, the elements of the family $\{\mP_{\nu}\}_{\nu \in \mathbb{R}}$ and $\mQ$ are distributions on a measurable space, such that the pairs $(\mP_{\nu}, \mQ)$ all satisfy mild conditions permitting efficient computation outlined in Section \[subsec:srk\], and $\mD$ is a mixture distribution on $\mathbb{R}$. The observations in $\pr{glsm}$ are $n$ independent samples $X_1, X_2, \dots, X_n$ formed as follows:
- for each sample $X_i$, draw some latent variable $\nu_i \sim \mD$ and
- sample $(X_i)_j \sim \mP_{\nu_i}$ if $j \in S$ and $(X_i)_j \sim \mQ$ otherwise, independently
where $S$ is some unknown subset containing $k$ of the $d$ coordinates. The task is to recover $S$ or distinguish from a null hypothesis $H_0$ in which all of the data is drawn i.i.d. from $\mQ$. Given a collection of distributions $\mU$, we define $\mU$ to be in our universality class $\pr{uc}(N)$ with level of signal $\tau_{\mU}$ if it satisfies the following conditions.
\[defn:univ-signal\] Given a parameter $N$, define the collection of distributions $\mathcal{U} = (\mD, \mQ, \{ \mP_{\nu} \}_{\nu \in \mathbb{R}})$ implicitly parameterized by $N$ to be in the universality class $\pr{uc}(N)$ if
- the pairs $(\mP_{\nu}, \mQ)$ are all computable pairs, as in Definition \[def:computable\], for all $\nu \in \mathbb{R}$;
- $\mD$ is a symmetric distribution about zero and $\bP_{\nu \sim \mD}[\nu \in [-1, 1]] = 1 - o(N^{-1})$; and
- there is a level of signal $\tau_{\mathcal{U}} \in \mathbb{R}$ such that for all $\nu \in [-1, 1]$ and any fixed constant $K > 0$, it holds that $$\left| \frac{d\mP_{\nu}}{d\mQ} (x) - \frac{d\mP_{-\nu}}{d\mQ} (x) \right| = O_N\left(\tau_{\mathcal{U}} \right) \quad \textnormal{and} \quad \left|\frac{d\mP_{\nu}}{d\mQ} (x) + \frac{d\mP_{-\nu}}{d\mQ} (x) - 2 \right| = O_N\left( \tau_{\mathcal{U}}^2 \right)$$ with probability at least $1 - O\left(N^{-K}\right)$ over each of $\mP_{\nu}, \mP_{-\nu}$ and $\mQ$.
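For intuition, consider the Gaussian special case $\mP_{\nu} = \mN(\nu \tau, 1)$ and $\mQ = \mN(0, 1)$ arising in learning sparse mixtures, where $\frac{d\mP_{\nu}}{d\mQ}(x) = e^{\nu \tau x - \nu^2 \tau^2/2}$. The odd part of the likelihood ratio is then $O(\tau)$ and the even part deviates from $2$ by $O(\tau^2)$ for bounded $x$ and $\nu$. The following illustrative numerical check sketches this scaling; the constants $3$ in the bounds are arbitrary, not tight:

```python
import math

def lr(nu, tau, x):
    # likelihood ratio dP_nu/dQ for P_nu = N(nu * tau, 1) against Q = N(0, 1)
    return math.exp(nu * tau * x - (nu * tau) ** 2 / 2)

nu, x = 0.7, 1.5                     # representative |nu| <= 1 and x in the bulk
for tau in [0.1, 0.01, 0.001]:
    diff = abs(lr(nu, tau, x) - lr(-nu, tau, x))
    total = abs(lr(nu, tau, x) + lr(-nu, tau, x) - 2)
    assert diff <= 3 * nu * tau * abs(x)                 # O(tau) odd part
    assert total <= 3 * (nu * tau) ** 2 * (x * x + 1)    # O(tau^2) even part
```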
Our main result establishes a computational lower bound for $\pr{glsm}$ instances with $\mU \in \pr{uc}(n)$ in terms of the level of signal $\tau_{\mU}$. As mentioned above, this theorem implies optimal lower bounds for learning sparse mixtures of Gaussians, sparse PCA and many more natural problem formulations described in Section \[subsec:universalitydiscussion\].
\[thm:glsm-lb\] Let $n, k$ and $d$ be polynomial in each other and such that $k = o(\sqrt{d})$. Suppose that the collection of distributions $\mU = (\mD, \mQ, \{ \mP_{\nu} \}_{\nu \in \mathbb{R}})$ is in $\pr{uc}(n)$. Then the $k\pr{-bpc}$ conjecture implies a computational lower bound for $\pr{glsm}\left(n, k, d, \mU \right)$ at all sample complexities $n = \tilde{o}\left(\tau_{\mU}^{-4}\right)$.
Technical Overview {#sec:1-techniques}
==================
We now outline our main technical contributions and the central ideas behind our reductions. These techniques are formally introduced in Part \[part:reductions\] and applied in Part \[part:lower-bounds\] in our problem-specific reductions to deduce the main theorems stated in the previous section.
Rejection Kernels {#subsec:1-tech-rk}
-----------------
Rejection kernels are a reduction primitive introduced in [@brennan2018reducibility; @brennan2019universality] for *algorithmic changes of measure*. Related reduction primitives for changes of measure to Gaussians and binomial random variables appeared earlier in [@ma2015computational; @hajek2015computational]. Given two input Bernoulli probabilities $0 < q < p \le 1$, a rejection kernel simultaneously maps $\text{Bern}(p)$ and $\text{Bern}(q)$ approximately in total variation to samples from two arbitrary distributions $\mP$ and $\mQ$. Note that in this setup, the rejection kernel primitive is oblivious to whether the true distribution of its input is $\text{Bern}(p)$ or $\text{Bern}(q)$. The main idea behind rejection kernels is that, under suitable conditions on $\mP$ and $\mQ$, this can be achieved through a rejection sampling scheme that samples $x \sim \mQ$ and rejects with a probability that depends on $x$ and on whether the input was $0$ or $1$. Rejection kernels are discussed in more depth in Section \[sec:2-rejection-kernels\]. In this work, we will need the following two instantiations of the framework developed in [@brennan2018reducibility; @brennan2019universality]:
- *Gaussian Rejection Kernels:* Rejection kernels mapping $\text{Bern}(p)$ and $\text{Bern}(q)$ to within $O(R_{\pr{rk}})$ total variation of $\mN(\mu, 1)$ and $\mN(0, 1)$ where $\mu = \Theta\left(1/\sqrt{\log R_{\pr{rk}}^{-1}}\right)$ and $p, q$ are fixed constants.
- *Bernoulli Cloning:* A rejection kernel mapping $\text{Bern}(p)$ and $\text{Bern}(q)$ exactly to $\text{Bern}(P)^{\otimes t}$ and $\text{Bern}(Q)^{\otimes t}$ where $$\frac{1 - p}{1 - q} \le \left( \frac{1 - P}{1 - Q} \right)^t \quad \text{and} \quad \left( \frac{P}{Q} \right)^t \le \frac{p}{q}$$
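As an illustrative sketch of the change-of-measure idea, Bernoulli cloning can be implemented exactly for small $t$ by enumerating $\{0, 1\}^t$ and solving for the conditional output distributions $f_0$ and $f_1$ satisfying $p f_1 + (1 - p) f_0 = \text{Bern}(P)^{\otimes t}$ and $q f_1 + (1 - q) f_0 = \text{Bern}(Q)^{\otimes t}$; the displayed inequalities are exactly the conditions under which $f_0$ and $f_1$ are nonnegative. The parameter values below are illustrative:

```python
import itertools

def bernoulli_clone(p, q, P, Q, t):
    """Conditional output distributions (f0, f1) on {0,1}^t such that
    p*f1 + (1-p)*f0 = Bern(P)^t and q*f1 + (1-q)*f0 = Bern(Q)^t."""
    outs = list(itertools.product((0, 1), repeat=t))
    def mass(theta, x):               # product Bernoulli mass function
        m = 1.0
        for b in x:
            m *= theta if b else 1 - theta
        return m
    f0 = {x: (p * mass(Q, x) - q * mass(P, x)) / (p - q) for x in outs}
    f1 = {x: ((1 - q) * mass(P, x) - (1 - p) * mass(Q, x)) / (p - q) for x in outs}
    return f0, f1

# illustrative parameters: (P/Q)^t <= p/q and ((1-P)/(1-Q))^t >= (1-p)/(1-q)
p, q, P, Q, t = 0.9, 0.5, 0.6, 0.5, 2
f0, f1 = bernoulli_clone(p, q, P, Q, t)
assert all(v >= 0 for f in (f0, f1) for v in f.values())   # valid distributions
```

Sampling the output given an input bit $b$ then amounts to drawing from $f_b$, which yields $\text{Bern}(P)^{\otimes t}$ exactly when $b \sim \text{Bern}(p)$ and $\text{Bern}(Q)^{\otimes t}$ exactly when $b \sim \text{Bern}(q)$.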
By performing computational changes of measure, these primitives are crucial in mapping to desired distributional aesthetics. However, they also play an important role in transforming hidden structure. Gaussian rejection kernels grant access to an arsenal of measure-preserving transformations of high-dimensional Gaussian vectors for mapping between different hidden structures while preserving independence in the noise distribution. Bernoulli cloning is crucial in removing the symmetry in adjacency matrices of $\pr{pc}$ instances and adjacency tensors of $\pr{hpc}$ instances, as in the $\pr{To-Submatrix}$ procedure in [@brennan2019universality]. We introduce a $k$-partite variant of this procedure that maps the adjacency matrix of $k\pr{-pds}$ to a matrix of independent Bernoulli random variables while respecting the constraint that there is one planted entry per block of the $k$-partition. This procedure is discussed in more detail in Section \[subsec:1-tech-completing\] and will serve as a crucial preprocessing step for dense Bernoulli rotations, which involves taking linear combinations of functions of entries of this matrix that crucially must be independent.
Dense Bernoulli Rotations {#subsec:1-tech-dbr}
-------------------------
This technique is introduced in Section \[sec:2-bernoulli-rotations\] and is one of our main primitives for *transforming hidden structure* that will be applied repeatedly throughout our reductions. Let $\pr{pb}(n, i, p, q)$ denote the planted bit distribution over $V \in \{0, 1\}^n$ with independent entries satisfying that $V_j \sim \text{Bern}(q)$ unless $j = i$, in which case $V_i \sim \text{Bern}(p)$. Given an input vector $V \in \{0, 1\}^n$, the goal of dense Bernoulli rotations is to output a vector $V' \in \mathbb{R}^m$ such that, for each $i \in [n]$, $V'$ is close in total variation to $\mN(c \cdot A_i, I_m)$ if $V \sim \pr{pb}(n, i, p, q)$. Here, $A_1, A_2, \dots, A_n \in \mathbb{R}^m$ are a given sequence of target mean vectors, $p$ and $q$ are fixed constants and $c$ is a scaling factor with $c = \tilde{\Theta}(1)$. The reduction must satisfy these approximate Markov transition conditions oblivious to the planted bit $i$ and also preserve independent noise, by mapping $\text{Bern}(q)^{\otimes n}$ to $\mN(0, I_m)$ approximately in total variation.
Let $A \in \mathbb{R}^{m \times n}$ denote the matrix with columns $A_1, A_2, \dots, A_n$. If the rows of $A$ are orthogonal unit vectors, then the goal outlined above can be achieved using the isotropy of the distribution $\mN(0, I_n)$. More precisely, consider the reduction that forms $V_1 \in \mathbb{R}^n$ by applying Gaussian rejection kernels entrywise to $V$ and then outputs $AV_1$. If $V \sim \pr{pb}(n, i, p, q)$, then the rejection kernels ensure that $V_1$ is close in total variation to $\mN(\mu \cdot \mathbf{1}_i, I_n)$ and thus $V' = AV_1$ is close to $\mN(\mu \cdot A_i, I_m)$. However, if the rows of $A$ are not orthogonal, then the entries of the output are potentially very dependent and have covariance matrix $AA^\top$ instead of $I_m$. This can be remedied by adding a *noise-correction term* to the output: generate $U \sim \mN(0, I_m)$ and instead output $$V' = \lambda^{-1} \cdot AV_1 + \left( I_m - \lambda^{-2} \cdot AA^\top \right)^{1/2} \cdot U$$ where $\lambda$ is an upper bound on the largest singular value of $A$ and $\left( I_m - \lambda^{-2} AA^\top \right)^{1/2}$ is the positive semidefinite square root of $I_m - \lambda^{-2} \cdot AA^\top$. If $V \sim \pr{pb}(n, i, p, q)$, it now follows that $V'$ is close in total variation to $\mN(\mu \lambda^{-1} \cdot A_i, I_m)$ where $\mu$ can be taken to be $\mu = \Theta(1/\sqrt{\log n})$. This reduction also preserves independent noise, mapping $\text{Bern}(q)^{\otimes n}$ approximately to $\mN(0, I_m)$.
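The covariance computation behind the noise-correction term can be verified directly, since $\lambda^{-2} AA^\top + (I_m - \lambda^{-2} AA^\top) = I_m$ for any $A$ whose largest singular value is at most $\lambda$. A minimal numerical sketch, with a placeholder matrix $A$ in place of the design matrices introduced later:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 5
A = rng.standard_normal((m, n))        # placeholder design matrix (non-orthogonal rows)
lam = np.linalg.norm(A, 2)             # upper bound on the largest singular value

# positive semidefinite square root of I_m - lam^{-2} A A^T
M = np.eye(m) - A @ A.T / lam**2
w, Q = np.linalg.eigh(M)
S = Q @ np.diag(np.sqrt(np.clip(w, 0, None))) @ Q.T

def rotate(V1):
    """Map V1 ~ N(mu * 1_i, I_n) to a vector close to N(mu / lam * A_i, I_m)."""
    U = rng.standard_normal(m)         # independent noise-correction term
    return A @ V1 / lam + S @ U

# the output covariance lam^{-2} A A^T + (I_m - lam^{-2} A A^T) is exactly I_m
assert np.allclose(A @ A.T / lam**2 + S @ S, np.eye(m), atol=1e-8)
```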
Dense Bernoulli rotations thus begin with a random vector of independent entries and one unknown elevated bit and produce a vector with independent entries and an unknown elevated *pattern* from among an arbitrary prescribed set $A_1, A_2, \dots, A_n$. Furthermore, the dependence of the signal strength $\mu \lambda^{-1}$ in the output instance $V'$ on these $A_1, A_2, \dots, A_n$ is entirely through the singular values of $A$. This yields a general structure-transforming primitive that will be used throughout our reductions. Each such use will consist of many local applications of dense Bernoulli rotations that will be stitched together to produce a target distribution. These local applications will take three forms:
- *To Rows Restricted to Column Parts:* The adjacency matrix of $k\pr{-bpc}$ consists of $k_n k_m$ blocks each consisting of the edge indicators in $E_i \times F_j$ for each pair of the parts $E_i, F_j$ from the given partitions of $[n]$ and $[m]$. In our reductions to robust sparse mean estimation, mixtures of SLRs, robust SLR and universality for learning sparse mixtures, we apply dense Bernoulli rotations separately to each row in each of these blocks.
- *To Vectorized Adjacency Matrix Blocks:* In our reductions to dense stochastic block models, testing hidden partition models and semirandom single community detection, we first pre-process the adjacency matrix of $k\pr{-pc}$ with $\pr{To-}k\pr{-Partite-Submatrix}$. We then apply dense Bernoulli rotations to $\mathbb{R}^{h^2}$ vectorizations of each $h \times h$ block in this matrix corresponding to a pair of parts in the given partition, i.e. of the form $E_i \times E_j$.
- *To Vectorized Adjacency Tensor Blocks:* In our reduction to tensor PCA with order $s$, after completing the adjacency tensor of the input $k\pr{-hpc}$ instance, we apply dense Bernoulli rotations to $\mathbb{R}^{h^s}$ vectorizations of each $h \times h \times \cdots \times h$ block corresponding to an $s$-tuple of parts.
We remark that while dense Bernoulli rotations heavily rely on distributional properties of isotropic Gaussian vectors, their implications extend far beyond statistical problems with Gaussian noise. Entrywise thresholding produces planted graph problems, and we will show that applying multiple thresholds followed by symmetric 3-ary rejection kernels maps to a large universality class of noise distributions. These applications of dense Bernoulli rotations generally reduce the problem of transforming hidden structure to a constrained combinatorial construction problem – the task of designing a set of output mean vectors $A_1, A_2, \dots, A_n$ whose matrix $A$ has nearly orthogonal rows and matches the combinatorial structure of the target statistical problem.
Design Matrices and Tensors {#subsec:1-tech-design-matrices}
---------------------------
#### Design Matrices.
To construct these vectors $A_1, A_2, \dots, A_n$ for our applications of dense Bernoulli rotations, we introduce several families of matrices based on the incidence geometry of finite fields. In our reduction to robust sparse mean estimation, we will show that the adversary that corrupts an $\epsilon$-fraction of the samples by resampling them from $\mN(-c \cdot \mu, I_d)$ produces the desired $k$-to-$k^2$ statistical-computational gap. This same adversarial construction was used in [@diakonikolas2017statistical]. Here, $\mu \in \mathbb{R}^d$ denotes the $k$-sparse mean of interest. As will be further discussed at the beginning of Section \[sec:2-bernoulli-rotations\], on applying dense Bernoulli rotations to rows restricted to parts of the column partition, our desiderata for the mean vectors $A_1, A_2, \dots, A_n$ reduce to the following:
- $A$ contains two distinct values $\{x, y\}$, and an $\epsilon'$-fraction of each column is $y$ where $\epsilon \ge \epsilon' = \Theta(\epsilon)$;
- the rows of $A$ are unit vectors and nearly orthogonal with $\lambda = O(1)$; and
- $A$ is nearly an isometry as a linear transformation from $\mathbb{R}^n$ to $\mathbb{R}^m$.
The first criterion above is enough to ensure the correct distributional aesthetics and hidden structure in the output of our reduction. The second and third criteria turn out to be necessary and sufficient for the reduction to show tight computational lower bounds up to the conjectured barrier of $n = \tilde{o}(k^2 \epsilon^2/\tau^4)$. We remark that the third criterion is also equivalent to $m = \tilde{\Theta}(n)$ given the second. Thus our task is to design nearly square, nearly orthogonal matrices containing two distinct entries with an $\epsilon'$-fraction of one present in each column. Note that if $\epsilon = 1/2$, this is exactly achieved by Hadamard matrices. For $\epsilon < 1/2$, our desiderata are nearly met by the following natural generalization of Hadamard matrices that we introduce. Note that the rows of a Hadamard matrix can be generated as a reweighted incidence matrix between the hyperplanes and points of $\mathbb{F}_2^t$. Let $r$ be a prime number with $\epsilon^{-1} \le r = O(\epsilon^{-1})$ and consider the $\ell \times r^t$ matrix $A$ where $\ell = \frac{r^t - 1}{r - 1}$ with entries given by $$A_{ij} = \frac{1}{\sqrt{r^t(r - 1)}} \cdot \left\{ \begin{matrix} 1 & \textnormal{if } P_j \not \in V_i \\ 1 - r & \textnormal{if } P_j \in V_i \end{matrix} \right.$$ where $V_1, V_2, \dots, V_{\ell}$ is an enumeration of the $(t - 1)$-dimensional subspaces of $\mathbb{F}_r^t$ and $P_1, P_2, \dots, P_{r^t}$ is an enumeration of the points in $\mathbb{F}_r^t$. This construction nearly meets our three criteria, with one minor issue: since $0$ lies in every subspace, the column corresponding to $0 \in \mathbb{F}_r^t$ contains only one of the two values. A more serious issue is that $\ell = \Theta(r^{t - 1})$ and $A$ is far from an isometry if $r \gg 1$, which leads to a suboptimal computational lower bound for $\pr{rsme}$.
These issues are both remedied by adding in additional rows for all affine shifts of the hyperplanes $V_1, V_2, \dots, V_{\ell}$. The resulting matrix has dimensions $r\ell \times r^t$ and, although its rows are no longer orthogonal, its largest singular value is $\sqrt{1 + (r - 1)^{-1}}$. The resulting matrix $K_{r, t}$ is used in our applications of dense Bernoulli rotations to reduce to robust sparse mean estimation, mixtures of SLRs, robust SLR and to show universality for learning sparse mixtures. Note that for any two rows $r_i$ and $r_j$ of $K_{r, t}$, the outer product $r_i r_j^\top$ is a zero-centered mean adjacency matrix of an imbalanced 2-block stochastic block model. This observation suggests that the Kronecker product $K_{r, t} \otimes K_{r, t}$ can be used in dense Bernoulli rotations to map to these SBMs. Surprisingly, this overall reduction yields tight computational lower bounds up to the Kesten-Stigum threshold for dense SBMs, and using the matrix $(K_{3, t} \otimes I_s) \otimes (K_{3, t} \otimes I_s)$ yields tight computational lower bounds for semirandom single community detection. We remark that, in this case, it is again crucial that $K_{r, t}$ is approximately square – if the matrix $A$ defined above were used in place of $K_{r, t}$, our reduction would show a lower bound suboptimal to the Kesten-Stigum threshold by a factor of $r$. Our reduction to order $s$ tensor PCA applies dense Bernoulli rotations to vectorizations of each tensor block with the $s$th order Kronecker product $K_{2, t} \otimes K_{2, t} \otimes \cdots \otimes K_{2, t}$. We remark that these instances of $K_{2, t}$ in this Kronecker product could be replaced by Hadamard matrices in dimension $2^t$.
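The matrix $K_{r, t}$ is straightforward to construct explicitly: enumerate one normal vector per hyperplane direction and take all $r$ affine shifts of each hyperplane. The following sketch builds $K_{3, 2}$, a $12 \times 9$ matrix, and verifies the unit rows and the stated singular value bound $\sqrt{1 + (r - 1)^{-1}}$:

```python
import itertools
import numpy as np

def K(r, t):
    points = list(itertools.product(range(r), repeat=t))
    # one normal vector per hyperplane direction: first nonzero coordinate equal to 1
    normals = [w for w in points
               if any(w) and w[next(i for i, x in enumerate(w) if x)] == 1]
    scale = 1.0 / np.sqrt(r**t * (r - 1))
    rows = []
    for w in normals:                  # all r affine shifts of each hyperplane
        for c in range(r):             # hyperplane {p : <w, p> = c mod r}
            rows.append([scale * ((1 - r) if sum(a * b for a, b in zip(w, p)) % r == c
                                  else 1)
                         for p in points])
    return np.array(rows)

A = K(3, 2)                            # 12 affine lines of F_3^2 by 9 points
assert A.shape == (12, 9)
assert np.allclose((A * A).sum(axis=1), 1.0)                  # unit rows
assert np.isclose(np.linalg.norm(A, 2), np.sqrt(1 + 1 / 2))   # sqrt(1 + (r-1)^{-1})
```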
In Section \[subsec:2-Rne\], we introduce a natural alternative to $K_{r, t}$ – a random matrix $R_{n, \epsilon}$ that *approximately* satisfies the three desiderata above. In our reductions to $\pr{rsme}$ and $\pr{rslr}$, this random matrix has the advantage of eliminating the number-theoretic condition arising from applying dense Bernoulli rotations with $K_{r, t}$, which has nontrivial restrictions in the very small $\epsilon$ regime when $\epsilon = n^{-\Omega(1)}$. However, the approximate properties of $R_{n, \epsilon}$ are insufficient to map exactly to our formulations of $\pr{isbm}, \pr{semi-cr}, \pr{ghpm}$ and $\pr{bhpm}$, where the sizes of the hidden communities are known. A more detailed comparison of $K_{r, t}$ and $R_{n, \epsilon}$ can be found in Section \[subsec:2-Rne\]. The random matrix $R_{n, \epsilon}$ is closely related to the adjacency matrices of sparse random graphs, and establishing $\lambda = O(1)$ requires results on their spectral concentration from the literature. For a consistent and self-contained exposition, we present our reductions with $K_{r, t}$, which has a comparatively simple analysis, and only outline extensions of our reductions using $R_{n, \epsilon}$.
#### Design Tensors.
Our final reduction using dense Bernoulli rotations is to testing hidden partition models. This reduction requires a more involved construction for $A$ that we only sketch here and defer a detailed discussion to Section \[subsec:2-design-tensors\]. Again applying dense Bernoulli rotations to vectorizations of each block of the input $k\pr{-pc}$ instance, our goal is to construct a tensor $T_{r, t}$ such that each slice has the same block structure as an $r$-block SBM and the slices are approximately orthogonal under the matrix inner product. A natural construction is as follows: index each slice by a pair of hyperplanes $(V_i, V_j)$, label the rows and columns of each slice by $\mathbb{F}_r^t$ and plant $r$ communities on the entries with indices in $(V_i + au_i) \times (V_j + au_j)$ for each $a \in \mathbb{F}_r$. Here $u_i$ and $u_j$ are arbitrary vectors not in $V_i$ and $V_j$, respectively, and thus $V_i + au_i$ ranges over all affine shifts of $V_i$ for $a \in \mathbb{F}_r$. An appropriate choice of weights $x$ and $y$ on and off of these communities yields slices that are exactly orthogonal.
However, this construction suffers from the same issue as the construction of $A$ above – there are $O(r^{2t - 2})$ slices each of which has $r^{2t}$ entries, making the matrix formed by vectorizing the slices of this tensor far from square. This can be remedied by creating additional slices further indexed by a nonconstant affine function $L : \mathbb{F}_r \to \mathbb{F}_r$ such that communities are now planted on $(V_i + au_i) \times (V_j + L(a) \cdot u_j)$ for each $a \in \mathbb{F}_r$. There are $r(r - 1)$ such affine functions $L$, making the vectorization of this tensor nearly square. Furthermore, it is shown in Section \[subsec:2-design-tensors\] that this matrix has largest singular value $\sqrt{1 + (r - 1)^{-1}}$. We remark that this property is quite brittle, as substituting other families of bijections for $L$ can cause this largest singular value to increase dramatically. Taking the Kronecker product of each slice of this tensor $T_{r, t}$ with $I_s$ now yields the family of matrices used in our reduction to testing hidden partition models.
We remark that in all of these reductions with both design matrices and design tensors, dense Bernoulli rotations are applied locally within the blocks induced by the partition accompanying the $\pr{pc}_\rho$ instance. In all cases, our constructions ensure that the fact that the planted bits within these blocks take the form of a submatrix is sufficient to stitch together the outputs of these local applications of dense Bernoulli rotations into a single instance with the desired hidden structure. While we did not discuss this constraint in choosing the design matrices $A$ for each of our reductions, it will be a key consideration in the proofs throughout this work. Surprisingly, the functions $L$ in the construction of $T_{r, t}$ directly lead to a community alignment property proven in Section \[subsec:2-design-tensors\] that allows slices of this tensor to be consistently stitched together. Furthermore, we note that unlike $K_{r, t}$, the tensor $T_{r, t}$ does not seem to have a random matrix analogue that is tractable to bound in spectral norm.
#### Parameter Correspondence with Dense Bernoulli Rotations.
In several of our reductions using dense Bernoulli rotations, a simple heuristic predicts our computational lower bound in the target problem. Let $X$ be a data tensor, normalized and centered so that each entry has mean zero and variance $1$, and then consider the $\ell_2$ norm of the expected tensor $\bE[X]$. Our applications of rejection kernels typically preserve this $\ell_2$ norm up to $\text{polylog}(n)$ factors. Since our design matrices are approximate isometries, most of our applications of dense Bernoulli rotations also approximately preserve this $\ell_2$ norm. Thus comparing the $\ell_2$ norms of the input $\pr{pc}_\rho$ instance and output instance in our reductions yields a heuristic for predicting the resulting computational lower bound. For example, our adversary in $\pr{rsme}$ produces a matrix $\bE[X] \in \mathbb{R}^{d \times n}$ consisting of columns of the form $\tau \cdot k^{-1/2} \cdot \mathbf{1}_S$ and $\epsilon^{-1} (1 - \epsilon)\tau \cdot k^{-1/2} \cdot \mathbf{1}_S$, up to constant factors where $S$ is the hidden support of $\mu$. The $\ell_2$ norm of this matrix is $\Theta(\tau \sqrt{n/\epsilon})$. The $\ell_2$ norm of the matrix $\bE[X]$ corresponding to the starting $k\pr{-bpc}$ instance can be verified to be just below $o(k^{1/2} n^{1/4})$, when the $k\pr{-bpc}$ instance is nearly at its computational barrier. Equating these two $\ell_2$ norms yields the relation $n = \Theta(k^2 \epsilon^2/\tau^4)$, which is exactly our computational barrier for $\pr{rsme}$. Similar heuristic derivations of our computational barriers are produced for $\pr{isbm}$, $\pr{ghpm}$, $\pr{bhpm}$, $\pr{semi-cr}$ and $\pr{tpca}$ at the beginnings of Sections \[sec:3-all-community\] and \[sec:3-tensor\]. We remark that for some of our problems with central steps other than dense Bernoulli rotations, such as $\pr{mslr}$, $\pr{rslr}$ and $\pr{glsm}$, this heuristic does not apply.
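For $\pr{rsme}$, this heuristic calculation can be checked mechanically: substituting $n = k^2\epsilon^2/\tau^4$ makes the two $\ell_2$ norms $\tau\sqrt{n/\epsilon}$ and $k^{1/2}n^{1/4}$ coincide exactly, ignoring constants and polylogarithmic factors. A quick numerical confirmation:

```python
import math

# substituting n = k^2 eps^2 / tau^4 equates the two l2 norms exactly
for k, eps, tau in [(10, 0.1, 0.05), (100, 0.01, 0.2), (50, 0.5, 0.1)]:
    n = k**2 * eps**2 / tau**4
    rsme_norm = tau * math.sqrt(n / eps)     # l2 norm of the RSME mean matrix
    kbpc_norm = math.sqrt(k) * n**0.25       # l2 norm at the k-BPC barrier
    assert math.isclose(rsme_norm, kbpc_norm, rel_tol=1e-9)
```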
Decomposing Linear Regression and Label Generation {#subsec:1-tech-decomposing}
--------------------------------------------------
Our reductions to mixtures of SLRs and robust SLR in Section \[sec:2-supervised\] are motivated by the following simple initial observation. Suppose $(X, y)$ is a single sample from unsigned SLR with $y = \gamma R \cdot \langle v, X \rangle + \mN(0, 1)$ where $R \in \{-1, 1\}$ is a Rademacher random variable, $v \in \mathbb{R}^d$ is a $k$-sparse unit vector, $X \sim \mN(0, I_d)$ and $\gamma \in (0, 1)$. A standard conditioning property of Gaussian vectors yields that the conditional distribution of $X$ given $R$ and $y$ is another jointly Gaussian vector, as shown below. Our observation is that this conditional distribution can be decomposed into a sum of our adversarial construction for robust sparse mean estimation and an independent instance of negative sparse PCA. More formally, we have that $$\begin{aligned}
X | R, y &\sim \mN\left( \frac{R\gamma \cdot y}{1 + \gamma^2} \cdot v, \, I_d - \frac{\gamma^2}{1 + \gamma^2} \cdot vv^\top \right) \\
&\sim \underbrace{\frac{1}{\sqrt{2}} \cdot \mN\left( R\tau \cdot v, \, I_d \right)}_{\text{Our } \pr{rsme} \text{ adversary with } \epsilon \, = \, 1/2} + \, \, \, \, \underbrace{\frac{1}{\sqrt{2}} \cdot \mN\left( 0, \, I_d - \theta vv^\top \right)}_{\text{Negative Sparse PCA}}\end{aligned}$$ where $\tau = \tau(y) = \frac{\gamma \sqrt{2}}{1 + \gamma^2} \cdot y$ and $\theta = \frac{2\gamma^2}{1 + \gamma^2}$. Note that the marginal distribution of $y$ is $\mN(0, 1 + \gamma^2)$ and thus it typically holds that $|y| = \Theta(1)$. When this unsigned SLR instance is at its computational barrier of $n = \tilde{\Theta}(k^2/\gamma^4)$ and $|y| = \Theta(1)$, then $n = \tilde{\Theta}(k^2/\tau^4)$ and $\theta = \tilde{\Theta}(\sqrt{k^2/n})$. Therefore, surprisingly, both the $\pr{rsme}$ and $\pr{neg-spca}$ instances in the decomposition above are also at their computational barriers.
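The mean and covariance matching in this decomposition can be verified numerically: the sum of the two independent scaled Gaussians has mean $\frac{R\gamma y}{1 + \gamma^2} \cdot v$ and covariance $I_d - \frac{\gamma^2}{1 + \gamma^2} \cdot vv^\top$, agreeing with the conditional law of $X$ given $(R, y)$. A short sketch with arbitrary illustrative parameter values:

```python
import numpy as np

rng = np.random.default_rng(0)
d, gamma, R, y = 6, 0.4, -1, 1.3
v = rng.standard_normal(d)
v /= np.linalg.norm(v)               # unit vector (sparsity is irrelevant here)

tau = gamma * np.sqrt(2) / (1 + gamma**2) * y
theta = 2 * gamma**2 / (1 + gamma**2)

# mean and covariance of (1/sqrt 2) N(R tau v, I_d) + (1/sqrt 2) N(0, I_d - theta vv^T)
mean = (R * tau / np.sqrt(2)) * v
cov = 0.5 * np.eye(d) + 0.5 * (np.eye(d) - theta * np.outer(v, v))

# conditional law of X given (R, y)
mean_target = (R * gamma * y / (1 + gamma**2)) * v
cov_target = np.eye(d) - gamma**2 / (1 + gamma**2) * np.outer(v, v)

assert np.allclose(mean, mean_target) and np.allclose(cov, cov_target)
```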
Now consider the task of instead reducing from $k\pr{-bpc}$ to the problem of estimating $v$ from $n$ independent samples from the conditional distribution $\mL(X | \, |y| = 1)$. In light of the observations above, it suffices to first use Bernoulli cloning to produce two independent copies of $k\pr{-bpc}$, reduce these two copies as outlined below and then take the sum of the two outputs of these reductions.
- *Producing Our $\pr{rsme}$ Adversary*: One of the two copies of $k\pr{-bpc}$ should be mapped to a tight instance of our adversarial construction for $\pr{rsme}$ with $\epsilon = 1/2$ through local applications of dense Bernoulli rotations with design matrix $K_{r, t}$ or $R_{n, \epsilon}$, as described previously.
- *Producing $\pr{neg-spca}$*: The other copy should be mapped to a tight instance of negative sparse PCA. This requires producing negatively correlated data from positively correlated data, and will need new techniques that we discuss next.
We remark that while these two output instances must be independent, it is important that they share the same latent vector $v$. Bernoulli cloning ensures that the two independent copies of $k\pr{-bpc}$ have the same clique vertices and thus the output instances have this desired property.
This reduction can be extended to reduce to the true joint distribution of $(X, y)$ as follows. Consider replacing each sample $X_1$ of the output $\pr{rsme}$ instance by $$X_2 = cy \cdot X_1 + \sqrt{1 - c^2y^2} \cdot \mN(0, I_d)$$ where $c$ is some scaling factor and $y$ is independently sampled from $\mN(0, 1 + \gamma^2)$, truncated to lie in the interval $[-T, T]$ where $cT \le 1$. Observe that if $X_1 \sim \mN(R\tau \cdot v, I_d)$, then $X_2 \sim \mN(cR\tau y \cdot v, I_d)$ conditioned on $y$. In Section \[subsec:2-mixtures-slr\], we show that a suitable choice of $c$ and $T$, together with tweaking $\tau$ in the reduction above, tightly maps to the desired distribution of mixtures of SLRs. Analogous observations and performing the $\pr{rsme}$ sub-reduction with $\epsilon < 1/2$ can be used to show tight computational lower bounds for robust SLR. We remark that this produces a more complicated adversarial construction for robust SLR that may be of independent interest. The details of this adversary can be found in Section \[subsec:2-mixtures-slr\].
Producing Negative Correlations and Inverse Wishart Matrices {#subsec:1-tech-inverse-wishart}
------------------------------------------------------------
To complete our reductions to mixtures of SLRs and robust SLR, it suffices to give a tight reduction from $k\pr{-bpc}$ to $\pr{neg-spca}$. Although $\pr{neg-spca}$ and ordinary $\pr{spca}$ share the same conjectured computational barrier at $\theta = \Theta(\sqrt{k^2/n})$ and can be solved by similar efficient algorithms above this barrier, as stochastic models, the two are very different. As discussed in Section \[subsec:1-problems-negspca\], ordinary $\pr{spca}$ admits a signal plus noise representation while $\pr{neg-spca}$ does not. This representation was crucially used in prior reductions showing optimal computational lower bounds for $\pr{spca}$ in [@berthet2013optimal; @berthet2013complexity; @wang2016statistical; @gao2017sparse; @brennan2018reducibility; @brennan2019optimal]. Furthermore, the planted entries in a $\pr{neg-spca}$ sample are *negatively correlated*. In contrast, the edge indicators of $\pr{pc}_\rho$ are positively correlated and all prior reductions from $\pr{pc}$ have only produced hidden structure that is also positively correlated.
We first simplify the task of reducing to $\pr{neg-spca}$ with an observation used in the reduction to $\pr{spca}$ in [@brennan2019optimal]. Suppose that $n \ge m + 1$ and let $m$ be such that $m/k^2$ tends slowly to infinity. If $X$ is an $m \times n$ matrix with columns $X_1, X_2, \dots, X_n \sim_{\text{i.i.d.}} \mN(0, \Sigma)$ where $\Sigma \in \mathbb{R}^{m \times m}$ is positive semidefinite, then the conditional distribution of $X$ given its rescaled empirical covariance matrix $\hat{\Sigma} = \sum_{i = 1}^n X_i X_i^\top$ is $\hat{\Sigma}^{1/2} R$ where $R$ is an independent $m \times n$ matrix sampled from Haar measure on the Stiefel manifold. This implies that it suffices to reduce to $\hat{\Sigma}$ in the case where $\Sigma = I_d - \theta vv^\top$ in order to map to $\pr{neg-spca}$, as $X$ can be generated from $\hat{\Sigma}$ by randomly sampling this Haar measure. This measure can then be sampled efficiently by applying Gram-Schmidt to the rows of an $m \times n$ matrix of independent standard Gaussians.
Let $\mW_m(n, \Sigma)$ be the law of $\hat{\Sigma}$, or in other words the Wishart distribution with covariance matrix $\Sigma$, and let $\mW_m^{-1}(n, \Sigma)$ denote the distribution of its inverse. Random matrices drawn from $\mW_m(n, \Sigma)$ and $\mW_m^{-1}(n, \beta \cdot \Sigma^{-1})$, where $\beta^{-1} = n(n - m - 1)$, share a number of properties, including close low-order moments. Furthermore, if $\Sigma = I_d - \theta vv^\top$ then $\Sigma^{-1} = I_d + \theta' vv^\top$ where $\theta' = \frac{\theta}{1 - \theta}$, which implies that $\mW_m^{-1}(n, \beta \cdot \Sigma^{-1})$ is a rescaling of the inverse of the empirical covariance matrix of a set of samples from ordinary $\pr{spca}$. This motivates our main reduction to $\pr{neg-spca}$ in Section \[subsec:2-neg-spca-reduction\], which roughly proceeds in the following two steps.
1. Begin with a small instance of $\pr{bpc}$ with $m = \omega(k^2)$ vertices on the left and $n$ on the right. Apply either the reduction of [@brennan2018reducibility] or [@brennan2019optimal] to reduce to an ordinary $\pr{spca}$ instance $(X_1, X_2, \dots, X_n)$ in dimension $m$ with $n$ samples and signal strength $\theta'$.
2. Form the rescaled empirical covariance matrix $\hat{\Sigma} = \sum_{i = 1}^n X_i X_i^\top$ and set $$Y = \sqrt{n(n - m - 1)} \cdot \hat{\Sigma}^{-1/2} R$$ where $R$ is an independent $m \times n$ matrix sampled from Haar measure on the Stiefel manifold, as above. Output the columns of $Y$ after padding them to be $d$-dimensional with i.i.d. $\mN(0, 1)$ random variables.
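Step 2 can be sketched schematically as follows, with placeholder Gaussian data standing in for the $\pr{spca}$ instance produced by step 1, and with Haar measure on the Stiefel manifold sampled via QR factorization (Gram-Schmidt) of a Gaussian matrix, as described above:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, d = 5, 40, 12                  # requires n >= m + 1

# placeholder for the SPCA samples produced in step 1
X = rng.standard_normal((m, n))

Sigma_hat = X @ X.T                  # rescaled empirical covariance matrix

# Haar-ish sample from the Stiefel manifold: Gram-Schmidt on a Gaussian matrix
G = rng.standard_normal((n, m))
Q, _ = np.linalg.qr(G)
R = Q.T                              # m x n with orthonormal rows

# positive semidefinite square root of the inverse of Sigma_hat
w, U = np.linalg.eigh(Sigma_hat)
Sigma_inv_half = U @ np.diag(w ** -0.5) @ U.T

Y = np.sqrt(n * (n - m - 1)) * Sigma_inv_half @ R
# pad each column to dimension d with independent N(0, 1) entries
Y_pad = np.vstack([Y, rng.standard_normal((d - m, n))])

assert np.allclose(R @ R.T, np.eye(m), atol=1e-8)
assert Y_pad.shape == (d, n)
```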
The key detail in this reduction is that $\hat{\Sigma}^{1/2}$ in the process of regenerating $X$ from $\hat{\Sigma}$ described above has been replaced by the positive semidefinite square root $\hat{\Sigma}^{-1/2}$ of a rescaling of the empirical precision matrix. As we will show in Section \[subsec:2-neg-spca-reduction\], establishing total variation guarantees for this reduction amounts to answering the following nonasymptotic question from random matrix theory that may be of independent interest: when do $\mW_m(n, \Sigma)$ and $\mW_m^{-1}(n, \beta \cdot \Sigma^{-1})$ converge in total variation for all positive semidefinite matrices $\Sigma$? A simple reduction shows that the general case is equivalent to the isotropic case when $\Sigma = I_m$. In Section \[subsec:2-inverse-wishart\], we answer this question, showing that these two distributions converge in KL divergence if and only if $n \gg m^3$. Our result is of the same flavor as a number of recent results in random matrix theory showing convergence in total variation between Wishart and $\pr{goe}$ matrices [@jiang2015approximation; @bubeck2016testing; @bubeck2016entropic; @racz2019smooth]. This condition amounts to constraining our reduction to the low-sparsity regime $k \ll n^{1/6}$. As discussed in Section \[subsec:1-problems-negspca\], this condition does not affect the tightness of our lower bounds and seems to be an artefact of our techniques that possibly can be removed.
Completing Tensors from Hypergraphs and Tensor PCA {#subsec:1-tech-completing}
--------------------------------------------------
As alluded to in the above discussion of rejection kernels, it is important that the entries in the vectors to which we apply dense Bernoulli rotations are independent and that none of these entries is missing. In the context of reductions beginning with $k\pr{-pc}$, $k\pr{-hpc}$, $\pr{pc}$ and $\pr{hpc}$, establishing this entails pre-processing steps to remove the symmetry of the input adjacency matrix and add in missing entries. As discussed in Section 1.1 of [@brennan2019universality], these missing entries in the matrix case have led to technical complications in the prior reductions in [@hajek2015computational; @brennan2018reducibility; @brennan2019universality; @brennan2019optimal]. In reductions to tensor PCA, completing these pre-processing steps in the tensor case seems unavoidable in order to produce the canonical formulation of tensor PCA with a symmetric rank-1 spike $v^{\otimes s}$ as discussed in Section \[subsec:1-problems-tpca\].
In order to motivate our discussion of the tensor case, we first consider the matrix case. Asymmetrizing the adjacency matrix of an input $\pr{pc}$ instance can be achieved through a simple application of Bernoulli cloning, but adding in the missing diagonal entries is more subtle. Note that the desired diagonal entries contain nontrivial information about the vertices in the planted clique – they are constrained to be $1$ along the vertices of the clique and independent $\text{Bern}(1/2)$ random variables elsewhere. This is roughly the information gained on revealing a single vertex from the planted clique. In the matrix case, the following trick effectively produces an instance of $\pr{pc}$ with the diagonal entries present. Add in $1$’s along the entire diagonal and randomly embed the resulting matrix as a principal minor in a larger matrix with off-diagonal entries sampled from $\text{Bern}(1/2)$ and on-diagonal entries sampled so that the total number of $1$’s on the diagonal has the correct binomial distribution. This trick appeared in the $\pr{To-Submatrix}$ procedure in [@brennan2019universality] for general $\pr{pds}$ instances, and is adapted in this work for $k\pr{-pds}$ as the reduction $\pr{To-}k\pr{-Partite-Submatrix}$ in Section \[sec:2-rejection-kernels\]. This reduction is an important pre-processing step in mapping to dense stochastic block models, testing hidden partition models and semirandom planted dense subgraph.
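A minimal sketch of this embedding trick follows (the helper name is hypothetical; for simplicity, the diagonal outside the embedded block is sampled i.i.d. $\text{Bern}(1/2)$ here, whereas the actual procedure samples it so that the total number of diagonal ones has exactly the right binomial distribution):

```python
import numpy as np

def complete_and_embed(A, N_out, rng):
    """Sketch of the diagonal-completion trick: set the diagonal of a
    planted clique adjacency matrix to all ones, then embed it as a
    random principal minor of a larger symmetric matrix with Bern(1/2)
    off-diagonal entries. (The outside diagonal is sampled i.i.d. here;
    the actual reduction matches the exact binomial law of the total
    number of diagonal ones.)"""
    N = A.shape[0]
    B = A.copy()
    np.fill_diagonal(B, 1)  # complete the missing diagonal with ones

    # Ambient symmetric Bern(1/2) matrix.
    U = np.triu(rng.integers(0, 2, size=(N_out, N_out)), 1)
    M = U + U.T
    np.fill_diagonal(M, rng.integers(0, 2, size=N_out))

    # Random embedding as a principal minor.
    idx = np.sort(rng.choice(N_out, size=N, replace=False))
    M[np.ix_(idx, idx)] = B
    return M, idx
```

The output is a symmetric 0/1 matrix in which the completed input instance sits as a principal minor on a uniformly random index set.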
The tensor case is not as simple as the matrix case. While asymmetrizing can be handled similarly with Bernoulli cloning, the missing entries of the adjacency tensor of $\pr{hpc}$ are now more numerous and correspond to any entry with two equal indices. Unlike in the matrix case, the information content in these entries alone is enough to solve $\pr{hpc}$. For example, in 3-uniform $\pr{hpc}$, the missing set of entries $(i, i, j)$ should have the same distribution as the completed adjacency matrix of an entire instance of planted clique with the same hidden clique vertices. Thus a reduction that randomly generates these missing entries as in the matrix case is no longer possible without knowing the solution to the input $\pr{hpc}$ instance. However, if an oracle were to have revealed a single vertex of the hidden clique, we would be able to use the hyperedges containing this vertex to complete the missing entries of the adjacency tensor. In general, given an $\pr{hpc}$ instance of arbitrary order $s$, a more involved cloning and embedding procedure detailed in Section \[sec:2-hypergraph-planting\] completes the missing entries of the adjacency tensor given oracle access to $s - 1$ vertices of the hidden clique. Our reduction to tensor PCA in Sections \[sec:2-hypergraph-planting\] and \[sec:3-tensor\] iterates over all $(s - 1)$-tuples of vertices in the input $\pr{hpc}$ instance, uses this procedure to complete the missing entries of the adjacency tensor, applies dense Bernoulli rotations as described previously and then feeds the output instance to a blackbox solving tensor PCA. The reduction only succeeds in mapping to the correct distribution on tensor PCA in iterations that successfully guess $s - 1$ vertices of the planted clique. However, we show that this is sufficient to deduce tight computational lower bounds for tensor PCA. 
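As an illustration only (not the full cloning procedure of Section \[sec:2-hypergraph-planting\]), the completion step in the 3-uniform case given a single revealed clique vertex can be sketched as follows; the independence issues between completed entries sharing a hyperedge are ignored here and are handled by Bernoulli cloning in the actual reduction:

```python
import numpy as np

def complete_with_revealed_vertex(A, v0):
    """Toy completion for a 3-uniform hypergraph adjacency tensor: fill
    the missing entries with two equal indices, (i, i, j) and its
    permutations, using the hyperedge {v0, i, j} through a revealed
    clique vertex v0. A[i, j, k] holds 0/1 for distinct i, j, k and
    np.nan where an entry is missing. Entries involving v0 itself and
    fully repeated entries (i, i, i) are left untouched in this sketch."""
    n = A.shape[0]
    T = A.copy()
    for i in range(n):
        for j in range(n):
            if i != j and v0 not in (i, j):
                e = A[v0, i, j]  # hyperedge {v0, i, j}
                T[i, i, j] = T[i, j, i] = T[j, i, i] = e
    return T
```

When $v0$, $i$ and $j$ all lie in the hidden clique, the filled entry is $1$, matching the marginal the completed entry should have.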
We remark that this reduction is the first reduction in total variation from $\pr{pc}_{\rho}$ that seems to require multiple calls to a blackbox solving the target problem.
Symmetric 3-ary Rejection Kernels and Universality {#subsec:1-tech-universality}
--------------------------------------------------
So far, all of our reductions have been to problems with Gaussian or Bernoulli data and our techniques have often relied heavily on the properties of jointly Gaussian vectors. Our last reduction technique shows that the consequences of these reductions extend far beyond Gaussian and Bernoulli problems. We introduce a new rejection kernel in Section \[subsec:srk\] and show in Section \[sec:universality\] that, when applied entrywise to the output of our reduction to $\pr{rsme}$ when $\epsilon = 1/2$, this rejection kernel yields a universal computational lower bound for a general variant of learning sparse mixtures with nearly arbitrary marginals.
Because sparse mixture models necessarily involve at least three distinct marginal distributions, a deficit in degrees of freedom implies that the existing framework for rejection kernels with binary entries cannot yield nontrivial hardness. We resolve this issue by considering rejection kernels with a slightly larger input space, and introduce a general framework for 3-ary rejection kernels with entries in $\{-1, 0, 1\}$ in Section \[subsec:srk\]. We show in Section \[sec:universality\] that first mapping each entry of our $\pr{rsme}$ instance with $\epsilon = 1/2$ into $\{-1, 0, 1\}$ by thresholding at intervals of the form $(-\infty, -T], (-T, T)$ and $[T, \infty)$ with $T = \Theta(1)$ and then applying 3-ary rejection kernels entrywise is a nearly lossless reduction. In particular, it yields new computational lower bounds for a wide universality class that tightly recover optimal computational lower bounds for sparse PCA, learning mixtures of exponentially distributed data, the original $\pr{rsme}$ instance with $\epsilon = 1/2$ and many other sparse mixture formulations. The implications of this reduction are discussed in detail in Section \[subsec:universalitydiscussion\].
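The entrywise thresholding step described above can be sketched as follows ($T$ is a constant threshold; the 3-ary rejection kernel subsequently applied to these values is developed in Section \[subsec:srk\]):

```python
import numpy as np

def ternarize(X, T):
    """Map each real entry into {-1, 0, 1} by thresholding at the
    intervals (-inf, -T], (-T, T) and [T, inf)."""
    return np.where(X >= T, 1, np.where(X <= -T, -1, 0))
```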
Encoding Cliques as Structural Priors {#subsec:1-tech-encoding}
-------------------------------------
As discussed in Section \[subsec:1-desiderata\], reductions from $\pr{pc}_\rho$ showing tight computational lower bounds cannot generate a non-negligible part of the hidden structure in the target problem themselves, but instead must encode the hidden clique of the input instance into this structure. In this section, we outline how our reductions implicitly encode hidden cliques. Note that the hidden subset of vertices corresponding to a clique in $\pr{pc}_{\rho}$ has $\Theta(k \log n)$ bits of entropy while the distribution over the hidden structure in the target problems that we consider can have much higher entropy. For example, the Rademacher prior on the planted vector $v$ in Tensor PCA has $n$ bits of entropy and the distribution over hidden partitions in testing partition models has entropy $\Theta(r^2 K^2 \log n \log r)$.
Although our reductions inject randomness to produce the desired noise distributions of target problems, the induced maps encoding the clique in $\pr{pc}_\rho$ as a new hidden structure typically do not inject randomness. Consequently, our reductions generally show hardness for priors over the hidden structure in our target problems with entropy $\Theta(k \log n)$. This then implies a lower bound for our target problems, because the canonical uniform priors with which they are defined are the *hardest priors*. For example, every instance of $\pr{pc}_\rho$ reduces to the uniform prior over cliques as in $\pr{pc}$ by randomly relabelling nodes. Similarly, a tensor PCA instance with a fixed planted vector $v$ reduces to the formulation in which $v$ is uniformly distributed on $\{-1, 1\}^n$ by taking the entrywise product of the tensor PCA instance with $u^{\otimes s}$ where $u$ is chosen u.a.r. from $\{-1, 1\}^n$. Thus our reductions actually show slightly stronger computational lower bounds than those stated in our main theorems – they show lower bounds for our target problems with *nonuniform* priors on their hidden structures. These nonuniform priors arise from the encodings of planted cliques into the target hidden structure implicit in our reductions, several of which we summarize below. Our reductions often involve aesthetic pre-processing and post-processing steps to reduce to canonical uniform priors and often subsample the output instance. To simplify our discussion, we omit these steps in describing the clique encodings induced by our reductions.
- **Robust Sparse Mean Estimation and SLR:** Let $S_L$ and $S_R$ be the sets of left and right clique vertices of the input $k\pr{-bpc}$ instance and let $[N] = E_1 \cup E_2 \cup \cdots \cup E_{k_N}$ be the given partition of the right vertices. The support of the $k$-sparse vector in our output $\pr{rsme}$ and $\pr{rslr}$ instances is simply $S_L$. Let $r$ be a prime and let $E_1' \cup E_2' \cup \cdots \cup E_{k_N}'$ be a partition of the output $n$ samples into parts of size $r\ell$ where $\ell = \frac{r^t - 1}{r - 1}$. Label each element of $E_i'$ with an affine shift of a hyperplane in $\mathbb{F}_r^t$ and each element of $E_i$ with a point of $\mathbb{F}_r^t$. For each $i$, our adversary corrupts each sample in $E_i'$ corresponding to an affine shift of a hyperplane containing the point corresponding to the unique element in $S_R \cap E_i$.
- **Dense Stochastic Block Models:** Let $S$ be the set of clique vertices of the input $k\pr{-pc}$ instance and let $E$ be the given partition of its vertices $[N]$. Let $E'$ be a partition of the output $n$ vertices again into parts of size $r\ell$. Label elements in each part as above. Our output $\pr{isbm}$ instance has its smaller community supported on the union of the vertices across all $E_i'$ corresponding to affine shifts containing the points in $\mathbb{F}_r^t$ corresponding to the vertices $S$.
- **Mixtures of SLRs and Generalized Learning Sparse Mixtures:** Let $S_L, S_R, k, k_N, N, n$ and $E$ be as above. The support of the $k$-sparse vector in our output $\pr{mslr}$ and $\pr{glsm}$ instances is again simply $S_L$. Let $H_1, H_2, \dots, H_{2^t - 1} \in \{-1, 1\}^{2^t}$ be the zero-sum rows of a Hadamard matrix and let $E'$ be a partition of the output $n$ samples into $k_N$ blocks of size $2^t$. The output instance sets the $j$th sample in $E_i'$ to be from the first part of the mixture if and only if the $j$th entry of $H_{s}$ is $1$ where $s$ is the unique element in $S_R \cap E_i$. In other words, the mixture pattern along $E_i'$ is given by the $(S_R \cap E_i)$th row of a Hadamard matrix.
- **Tensor PCA:** Let $S$ be the set of clique vertices of the input $k\pr{-hpc}$ instance and let $E$ and $N$ be as above. Similarly to $\pr{mslr}$ and $\pr{glsm}$, the planted vector $v$ of our output $\pr{tpca}$ instance is the concatenation of the $(S \cap E_i)$th rows of a Hadamard matrix.
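The zero-sum Hadamard rows used in the last two encodings can be obtained from the Sylvester construction; a minimal sketch:

```python
import numpy as np

def sylvester_hadamard(t):
    """Sylvester construction of a 2^t x 2^t Hadamard matrix. All rows
    except the first are orthogonal to the all-ones row and hence
    zero-sum, giving the rows H_1, ..., H_{2^t - 1} above."""
    H = np.array([[1]])
    for _ in range(t):
        H = np.block([[H, H], [H, -H]])
    return H

H = sylvester_hadamard(3)
zero_sum_rows = H[1:]  # the 2^t - 1 zero-sum rows
```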
Our reduction to testing hidden partition models induces a more intricate encoding of cliques similar to that of dense stochastic block models described above. We remark that each of these encodings arises directly from design matrices and tensors based on $K_{r, t}$ used in the dense Bernoulli rotation step of our reductions.
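The "hardest prior" re-randomization for tensor PCA described earlier in this section (multiplying entrywise by $u^{\otimes s}$) can be illustrated for $s = 3$ as follows; on a noiseless spike $c \cdot v^{\otimes 3}$ the output is exactly $c \cdot (u \circ v)^{\otimes 3}$:

```python
import numpy as np

def rerandomize_spike(T, rng):
    """Reduce tensor PCA with a fixed planted vector v to the uniform
    prior: multiply entrywise by u (x) u (x) u for u uniform on {-1,1}^n.
    The spike c * v^{(x)3} becomes c * (u o v)^{(x)3}, and i.i.d.
    Gaussian noise is invariant in distribution under entrywise sign
    flips."""
    n = T.shape[0]
    u = rng.choice([-1, 1], size=n)
    return T * np.einsum('i,j,k->ijk', u, u, u), u
```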
Further Directions and Open Problems {#sec:1-open-problems}
====================================
In this section, we describe several further directions and problems left open in this work. These directions mainly concern the $\pr{pc}_\rho$ conjecture and our reduction techniques.
#### Further Evidence for $\pr{pc}_\rho$ Conjectures.
In this work, we give evidence for the $\pr{pc}_\rho$ conjecture from the failure of low-degree polynomials and for specific instantiations of the $\pr{pc}_\rho$ conjecture from the failure of SQ algorithms. An interesting direction for future work is to show sum of squares lower bounds for $\pr{pc}_\rho$ and $k\pr{-hpc}^s$ supporting this conjecture. A priori, this seems to be a technically difficult task as the sum of squares lower bounds in [@barak2016nearly] only apply to the prior in planted clique where every vertex is included in the clique independently with probability $k/n$. Thus it even remains open to extend these lower bounds to the uniform prior over $k$-subsets of $[n]$.
#### How do Priors on Hidden Structure Affect Hardness?
In this work, we showed that slightly altering the prior over the hidden structure of $\pr{pc}$ gave rise to a problem much more amenable to average-case reductions. This raises a broad question: for general problems $\mP$ with hidden structure, how does changing the prior over this hidden structure affect its hardness? In other words, for natural problems other than $\pr{pc}$, how does the conjectured computational barrier change with $\rho$? Another related direction for future work is whether other choices of $\rho$ in the $\pr{pc}_\rho$ conjecture give meaningful assumptions that can be mapped to more natural problems than the ones we consider here. Furthermore, it would be interesting to study how reductions carry ensembles of problems with a general prior $\rho$ to one another. For instance, is there a reduction between $\pr{pc}$ and another problem, such as $\pr{spca}$, such that every hard prior in $\pr{pc}_\rho$ is mapped to a corresponding hard prior in $\pr{spca}$?
#### Generalizations of Dense Bernoulli Rotations.
In this work, dense Bernoulli rotations were an extremely important subroutine, serving as our simplest primitive for transforming hidden structure. An interesting technical direction for future work is to find similar transformations mapping to other distributions. More concretely, dense Bernoulli rotations approximately mapped from $\pr{pb}(n, i, 1, 1/2)$ to the $n$ distributions $\mD_i = \mN(c \cdot A_i, I_m)$, respectively, and mapped from $\text{Bern}(1/2)^{\otimes m}$ to $\mD = \mN(0, I_m)$. Are there other similar reductions mapping from these planted bit distributions to different ensembles of $\mD, \mD_1, \mD_2, \dots, \mD_n$? Furthermore, can these maps be used to show tight computational lower bounds for natural problems? For example, two possibly interesting ensembles of $\mD, \mD_1, \mD_2, \dots, \mD_n$ are:
1. $\mD_i = \otimes_{j = 1}^m \text{Bern}(P_{ij} n^{-\alpha})$ and some $\mD$ where $P \in [0, 1]^{n \times m}$ is a fixed matrix of constants and $\alpha > 0$.
2. $\mD_i = \mN(c \cdot A_i, I_m - c^2 A_i A_i^\top)$ and $\mD = \mN(0, I_m)$.
The first example above corresponds to whether or not there is a *sparse* analogue of Bernoulli rotations that can be used to show tight computational lower bounds. A natural approach to (1) is to apply dense Bernoulli rotations and map each entry into $\{0, 1\}$ by thresholding at some large real number $T = \Theta(\sqrt{\log n})$. While this maps to an ensemble of the form in (1), this reduction seems *lossy*, in the sense that it discards signal in the input instance, and it does not appear to show tight computational lower bounds for any natural problem. The second example above presents a set of $\mD_i$ with the same expected covariance matrices as $\mD$. Note that in ordinary dense Bernoulli rotations the expected covariance matrices for each $i$ are $I_m + c^2 \cdot A_i A_i^\top$ and often a degree-2 polynomial suffices to distinguish them from $\mD$. More generally, a natural question is: are there analogues of dense Bernoulli rotations that are tight to algorithms given by polynomials of degree higher than 2?
#### General Reductions to Supervised Problems.
Our last open problem is more concrete than the previous two. In our reductions to $\pr{mslr}$ and $\pr{rslr}$, we crucially use a subroutine mapping to $\pr{neg-spca}$. This subroutine requires that $k = \tilde{o}(n^{1/6})$ in order to show convergence in KL divergence between the Wishart and inverse Wishart distributions. Is there a reduction that relaxes this requirement to $k = \tilde{o}(n^{\alpha})$ where $1/6 < \alpha < 1/2$? Providing a reduction for $\alpha$ arbitrarily close to $1/2$ would essentially fill out all parameter regimes of interest in our computational lower bounds for $\pr{mslr}$ and $\pr{rslr}$. Any reduction relaxing this constraint to some $\alpha$ with $\alpha > 1/6$ seems as though it would require new techniques and be technically interesting. Another question related to our reductions to $\pr{mslr}$ and $\pr{rslr}$ is: can our label generation technique be generalized to handle more general link functions $\sigma$ i.e. generalized linear models where each sample-label pair $(X, y)$ satisfies $y = \sigma(\langle \beta, X \rangle) + \mN(0, 1)$? In particular, is there a reduction mapping to the canonical formulation of sparse phase retrieval with $\sigma(t) = t^2$? Although the statistical-computational gap for this formulation of sparse phase retrieval seems closely related to our computational lower bound for $\pr{mslr}$, any such reduction seems as though it would be interesting from a technical viewpoint.
\[part:reductions\]
Preliminaries and Problem Formulations {#sec:2-preliminaries}
======================================
In this section, we establish notation and some preliminary observations for proving our main theorems from Section \[sec:1-problems\]. We already defined our notion of computational lower bounds and solving detection and recovery problems in Section \[sec:1-problems\]. In this section, we begin by stating our conventions for detection problems and adversaries. In Section \[subsec:2-tvreductions\], we introduce the framework for reductions in total variation to show computational lower bounds for detection problems. In Section \[subsec:2-formulations\], we then state detection formulations for each of our problems of interest that it will suffice to exhibit reductions to. Finally, in Section \[subsec:2-notation\], we introduce the key notation that will be used throughout the paper. Later in Section \[subsec:2-estimation\], we discuss how our reductions and lower bounds for the detection formulations in Section \[subsec:2-formulations\] imply lower bounds for natural estimation and recovery variants of our problems.
Conventions for Detection Problems and Adversaries {#subsec:2-definitions}
--------------------------------------------------
We begin by describing our general setup for detection problems and the notions of robustness and types of adversaries that we consider.
#### Detection Problems.
In a detection task $\mP$, the algorithm is given a set of observations and tasked with distinguishing between two hypotheses:
- a *uniform* hypothesis $H_0$ corresponding to the natural noise distribution for the problem; and
- a *planted* hypothesis $H_1$, under which observations are generated from this distribution but with a latent planted structure.
Both $H_0$ and $H_1$ can either be simple hypotheses consisting of a single distribution or a composite hypothesis consisting of multiple distributions. Our problems typically are such that either: (1) both $H_0$ and $H_1$ are simple hypotheses; or (2) both $H_0$ and $H_1$ are composite hypotheses consisting of the set of distributions that can be induced by some constrained adversary.
As discussed in [@brennan2018reducibility] and [@hajek2015computational], when detection problems need not be composite by definition, average-case reductions to natural simple vs. simple hypothesis testing formulations are stronger and technically more difficult. In these cases, composite hypotheses typically arise because a reduction gadget precludes mapping to the natural simple vs. simple hypothesis testing formulation. We remark that simple vs. simple formulations are the hypothesis testing problems that correspond to average-case decision problems $(L, \mathcal{D})$ as in Levin’s theory of average-case complexity. A survey of average-case complexity can be found in [@bogdanov2006average].
#### Adversaries.
The robust estimation literature contains a number of adversaries capturing different notions of model misspecification. We consider the following three central classes of adversaries:
1. **$\epsilon$-corruption**: A set of samples $(X_1, X_2, \dots, X_n)$ is an $\epsilon$-corrupted sample from a distribution $\mD$ if they can be generated by giving a set of $n$ samples drawn i.i.d. from $\mD$ to an adversary who then changes at most $\epsilon n$ of them arbitrarily.
2. **Huber’s contamination model**: A set of samples $(X_1, X_2, \dots, X_n)$ is an $\epsilon$-contamination of $\mD$ in Huber’s model if $$X_1, X_2, \dots, X_n \sim_{\text{i.i.d.}} \pr{mix}_{\epsilon}(\mD, \mD_O)$$ where $\mD_O$ is an unknown outlier distribution chosen by an adversary. Here, $\pr{mix}_{\epsilon}(\mD, \mD_O)$ denotes the $\epsilon$-mixture distribution formed by sampling $\mD$ with probability $(1 - \epsilon)$ and $\mD_O$ with probability $\epsilon$.
3. **Semirandom adversaries**: Suppose that $\mD$ is a distribution over collections of observations $\{ X_i \}_{i \in I}$ such that an unknown subset $P \subseteq I$ of indices correspond to a planted structure. A sample $\{ X_i \}_{i \in I}$ is semirandom if it can be generated by giving a sample from $\mD$ to an adversary who is allowed to decrease $X_i$ for any $i \in I \backslash P$. Some formulations of semirandom adversaries in the literature also permit increases in $X_i$ for any $i \in P$. Our lower bounds apply to both adversarial setups.
All adversaries in these models of robustness are computationally unbounded and have access to randomness – meaning that they also have access to any hidden structure in a problem that can be recovered information theoretically. Given a single distribution $\mD$ over a set $X$, any one of these three adversaries produces a set of distributions $\pr{adv}(\mD)$ that can be obtained after corruption. When formulated as detection problems, the hypotheses $H_0$ and $H_1$ are of the form $\pr{adv}(\mD)$ for some $\mD$. We remark that $\epsilon$-corruption can simulate contamination in Huber’s model at a slightly smaller $\epsilon'$ within $o(1)$ total variation. This is because a sample from Huber’s model has $\text{Bin}(n, \epsilon')$ samples from $\mD_O$. An adversary resampling $\min\{\text{Bin}(n, \epsilon'), \epsilon n\}$ samples from $\mD_O$ therefore simulates Huber’s model within a total variation distance bounded by standard concentration for the Binomial distribution.
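A minimal sketch of this simulation (function and parameter names are hypothetical; `sample_outlier` stands in for a sampler of the adversary's outlier distribution $\mD_O$):

```python
import numpy as np

def huber_via_corruption(samples, sample_outlier, eps_prime, eps, rng):
    """Simulate eps_prime-contamination in Huber's model using an
    eps-corruption adversary (eps_prime < eps): the number of outliers
    in Huber's model is Bin(n, eps_prime), so resample
    min(Bin(n, eps_prime), floor(eps * n)) positions from the outlier
    distribution. By binomial concentration the truncation is a rare
    event, so the two laws are within o(1) in total variation."""
    n = len(samples)
    b = min(rng.binomial(n, eps_prime), int(eps * n))
    idx = rng.choice(n, size=b, replace=False)
    out = list(samples)
    for i in idx:
        out[i] = sample_outlier(rng)
    return out
```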
Reductions in Total Variation and Computational Lower Bounds {#subsec:2-tvreductions}
------------------------------------------------------------
In this section, we introduce our framework for reductions in total variation, state a general condition for deducing computational lower bounds from reductions in total variation and state a number of properties of total variation that we will use in analyzing our reductions.
#### Average-Case Reductions in Total Variation.
We give approximate reductions in total variation to show that lower bounds for one hypothesis testing problem imply lower bounds for another. These reductions yield an exact correspondence between the asymptotic Type I$+$II errors of the two problems. This is formalized in the following lemma, which is Lemma 3.1 from [@brennan2018reducibility] stated in terms of composite hypotheses $H_0$ and $H_1$. The main quantity in the statement of the lemma can be interpreted as the smallest total variation distance between the reduced object $\mathcal{A}(X)$ and the closest mixture of distributions from either $H_0'$ or $H_1'$. The proof of this lemma is short and follows from the definition of total variation. Given a hypothesis $H_i$, we let $\Delta(H_i)$ denote the set of all priors over the set of distributions valid under $H_i$.
\[lem:3a\] Let $\mP$ and $\mP'$ be detection problems with hypotheses $H_0, H_1$ and $H_0', H_1'$, respectively. Let $X$ be an instance of $\mathcal{P}$ and let $Y$ be an instance of $\mP'$. Suppose there is a polynomial time computable map $\mathcal{A}$ satisfying $$\sup_{P \in H_0} \inf_{\pi \in \Delta(H_0')} \TV\left( \mL_{P}(\mathcal{A}(X)), \bE_{P' \sim \pi} \, \mL_{P'}(Y) \right) + \sup_{P \in H_1} \inf_{\pi \in \Delta(H_1')} \TV\left( \mL_{P}(\mathcal{A}(X)), \bE_{P' \sim \pi} \, \mL_{P'}(Y) \right) \le \delta$$ If there is a randomized polynomial time algorithm solving $\mP'$ with Type I$+$II error at most $\epsilon$, then there is a randomized polynomial time algorithm solving $\mP$ with Type I$+$II error at most $\epsilon + \delta$.
If $\delta = o(1)$, then given a blackbox solver $\mathcal{B}$ for $\mathcal{P}'_D$, the algorithm that applies $\mathcal{A}$ and then $\mathcal{B}$ solves $\mathcal{P}_D$ and requires only a single query to the blackbox. We now outline the computational model and conventions we adopt throughout this paper. An algorithm that runs in randomized polynomial time refers to one that has access to $\text{poly}(n)$ independent random bits and must run in $\text{poly}(n)$ time where $n$ is the size of the instance of the problem. For clarity of exposition, in our reductions we assume that explicit real-valued expressions can be exactly computed and that we can sample a biased random bit $\text{Bern}(p)$ in polynomial time. We also assume that the sampling and density oracles described in Definition \[def:computable\] can be computed in $\text{poly}(n)$ time. For simplicity of exposition, we assume that we can sample $\mN(0, 1)$ in $\text{poly}(n)$ time.
#### Deducing Strong Computational Lower Bounds for Detection from Reductions.
Throughout Part \[part:lower-bounds\], we will use the guarantees for our reductions to show computational lower bounds. For clarity and to avoid redundancy, we will outline a general recipe for showing these hardness results. All lower bounds that will be shown in Part \[part:lower-bounds\] are *computational lower bounds* in the sense introduced in the beginning of Section \[subsec:2-definitions\]. Consider a problem $\mP$ with parameters $(n, a_1, a_2, \dots, a_t)$ and hypotheses $H_0$ and $H_1$ with a conjectured computationally hard regime captured by the constraint set $\mathcal{C}$. In order to show a computational lower bound at $\mathcal{C}$ based on one of our hardness assumptions, it suffices to show that the following is true:
\[cond:lb\]
This can be seen to suffice as follows. Suppose that $\mathcal{A}$ solves $\mP$ for some possible growth rate in $\mathcal{C}$ i.e. there is a sequence $\{(n_i, a'_1(n_i), a'_2(n_i), \dots, a'_t(n_i))\}_{i = 1}^\infty \subseteq \mathcal{C}$ with this growth rate such that $\mathcal{A}$ has Type I$+$II error $1 - \Omega_{n_i}(1)$ on $\mP(n_i, a'_1(n_i), a'_2(n_i), \dots, a'_t(n_i))$. By Lemma \[lem:3a\], it follows that $\mathcal{A} \circ \mathcal{R}$ also has Type I$+$II error $1 - \Omega_{n_i}(1)$ on the sequence of inputs $\{G_i\}_{i = 1}^\infty$, which contradicts the conjecture that they are hard instances. The three conditions above will be verified in a number of theorems in Part \[part:lower-bounds\].
#### Remarks on Deducing Computational Lower Bounds.
We make several important remarks on the recipe outlined above. In all of our applications of Condition \[cond:lb\], the second sequence of parameters $(n_i, a'_1(n_i), a'_2(n_i), \dots, a'_t(n_i))$ will either be exactly a subsequence of the original parameter sequence $(n, a_1(n), a_2(n), \dots, a_t(n))$ or will have one parameter $a_i' \neq a_i$ different from the original. However, the ability to pass to a subsequence will be crucial in a number of cases where number-theoretic constraints on parameters impact the tightness of our computational lower bounds. These constraints will arise in our reductions to robust sparse mean estimation, robust SLR and dense stochastic block models. They are discussed more in Section \[sec:3-robust-and-supervised\].
#### Properties of Total Variation.
The analysis of our reductions will make use of the following well-known facts and inequalities concerning total variation distance.
\[tvfacts\] The distance $\TV$ satisfies the following properties:
1. (Tensorization) Let $P_1, P_2, \dots, P_n$ and $Q_1, Q_2, \dots, Q_n$ be distributions on a measurable space $(\mathcal{X}, \mathcal{B})$. Then $$\TV\left( \prod_{i = 1}^n P_i, \prod_{i = 1}^n Q_i \right) \le \sum_{i = 1}^n \TV\left( P_i, Q_i \right)$$
2. (Conditioning on an Event) For any distribution $P$ on a measurable space $(\mathcal{X}, \mathcal{B})$ and event $A \in \mathcal{B}$, it holds that $$\TV\left( P(\cdot | A), P \right) = 1 - P(A)$$
3. (Conditioning on a Random Variable) For any two pairs of random variables $(X, Y)$ and $(X', Y')$ each taking values in a measurable space $(\mathcal{X}, \mathcal{B})$, it holds that $$\TV\left( \mL(X), \mL(X') \right) \le \TV\left( \mL(Y), \mL(Y') \right) + \bE_{y \sim Y} \left[ \TV\left( \mL(X | Y = y), \mL(X' | Y' = y) \right)\right]$$ where we define $\TV\left( \mL(X | Y = y), \mL(X' | Y' = y) \right) = 1$ for all $y \not \in \textnormal{supp}(Y')$.
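The tensorization inequality above can be checked exactly for small discrete distributions; a minimal sketch with Bernoulli marginals (for which $\TV(\textnormal{Bern}(p), \textnormal{Bern}(q)) = |p - q|$):

```python
from itertools import product

def tv_discrete(p, q):
    """Total variation between two distributions given as dicts
    mapping outcomes to probabilities."""
    keys = set(p) | set(q)
    return 0.5 * sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)

def bern(p):
    return {0: 1 - p, 1: p}

def product_dist(dists):
    """Product distribution over tuples of outcomes."""
    out = {}
    for combo in product(*[d.items() for d in dists]):
        key = tuple(k for k, _ in combo)
        prob = 1.0
        for _, v in combo:
            prob *= v
        out[key] = prob
    return out

# Exact check of tensorization for three Bernoulli coordinates.
ps, qs = [0.5, 0.6, 0.3], [0.4, 0.5, 0.35]
lhs = tv_discrete(product_dist([bern(p) for p in ps]),
                  product_dist([bern(q) for q in qs]))
rhs = sum(tv_discrete(bern(p), bern(q)) for p, q in zip(ps, qs))
assert lhs <= rhs
```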
Given an algorithm $\mathcal{A}$ and distribution $\mP$ on inputs, let $\mathcal{A}(\mP)$ denote the distribution of $\mathcal{A}(X)$ induced by $X \sim \mP$. If $\mathcal{A}$ has $k$ steps, let $\mathcal{A}_i$ denote the $i$th step of $\mathcal{A}$ and $\mathcal{A}_{i\text{-}j}$ denote the procedure formed by steps $i$ through $j$. Each time this notation is used, we clarify the intended initial and final variables when $\mathcal{A}_{i}$ and $\mathcal{A}_{i\text{-}j}$ are viewed as Markov kernels. The next lemma from [@brennan2019universality] encapsulates the structure of all of our analyses of average-case reductions. Its proof is simple and included in Appendix \[subsec:appendix-2-tv\] for completeness.
\[lem:tvacc\] Let $\mathcal{A}$ be an algorithm that can be written as $\mathcal{A} = \mathcal{A}_m \circ \mathcal{A}_{m-1} \circ \cdots \circ \mathcal{A}_1$ for a sequence of steps $\mathcal{A}_1, \mathcal{A}_2, \dots, \mathcal{A}_m$. Suppose that the probability distributions $\mP_0, \mP_1, \dots, \mP_m$ are such that $\TV(\mathcal{A}_i(\mP_{i-1}), \mP_i) \le \epsilon_i$ for each $1 \le i \le m$. Then it follows that $$\TV\left( \mathcal{A}(\mP_0), \mP_m \right) \le \sum_{i = 1}^m \epsilon_i$$
The next lemma bounds the total variation between unplanted and planted samples from binomial distributions. This will serve as a key computation in the proof of correctness for the reduction primitive $\pr{To-}k\pr{-Partite-Submatrix}$. We remark that the total variation upper bound in this lemma is tight in the following sense. When all of the $P_i$ are the same, the expected value of the sum of the coordinates of the first distribution is $k(P_i - Q)$ higher than that of the second. The standard deviation of the second sum is $\sqrt{kmQ(1 - Q)}$ and thus when $k(P_i - Q)^2 \gg mQ(1 - Q)$, the total variation below tends to one. The proof of this lemma can be found in Appendix \[subsec:appendix-2-tv\].
\[lem:bernproduct\] If $k, m \in \mathbb{N}$, $P_1, P_2, \dots, P_k \in [0, 1]$ and $Q \in (0, 1)$, then $$\TV\left( \otimes_{i = 1}^k \left( \textnormal{Bern}(P_i) + \textnormal{Bin}(m - 1, Q) \right), \textnormal{Bin}(m, Q)^{\otimes k} \right) \le \sqrt{\sum_{i = 1}^k \frac{(P_i - Q)^2}{2mQ(1 - Q)}}$$
Here, $\mL_1 + \mL_2$ denotes the convolution of two given probability measures $\mL_1$ and $\mL_2$. The next lemma bounds the total variation between two binomial distributions. Its proof can be found in Appendix \[subsec:appendix-2-tv\].
\[lem:bintv\] Given $P \in [0, 1]$, $Q \in (0, 1)$ and $n \in \mathbb{N}$, it follows that $$\TV\left( \textnormal{Bin}(n, P), \textnormal{Bin}(n, Q) \right) \le |P - Q| \cdot \sqrt{\frac{n}{2Q(1 - Q)}}$$
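The bound of Lemma \[lem:bintv\] can be checked numerically on small parameters by computing the total variation distance exactly; a sketch:

```python
import math

def binom_pmf(n, p, k):
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def tv_binom(n, p, q):
    """Exact total variation distance between Bin(n, p) and Bin(n, q)."""
    return 0.5 * sum(abs(binom_pmf(n, p, k) - binom_pmf(n, q, k))
                     for k in range(n + 1))

# Verify TV(Bin(n, P), Bin(n, Q)) <= |P - Q| * sqrt(n / (2 Q (1 - Q)))
# on a small grid of parameters.
for n in (5, 20, 50):
    for p, q in ((0.5, 0.55), (0.3, 0.5), (0.1, 0.2)):
        bound = abs(p - q) * math.sqrt(n / (2 * q * (1 - q)))
        assert tv_binom(n, p, q) <= bound
```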
Problem Formulations as Detection Tasks {#subsec:2-formulations}
---------------------------------------
In this section, we formulate each problem for which we will show computational lower bounds as a detection problem. More precisely, for each problem $\mP$ introduced in Section \[sec:1-problems\], we introduce a detection variant $\mP'$ such that a blackbox for $\mP$ also solves $\mP'$. Some of these formulations were already implicitly introduced or will be reintroduced in future sections. We gather all of these formulations here for convenience. Throughout this work, to simplify notation, we will refer to problems $\mP$ and their detection formulations $\mP'$ introduced in this section using the same notation. Furthermore, we will often denote the distribution over instances under the alternative hypothesis $H_1$ of the detection formulation for $\mP$ with the notation $\mP_D$, when $H_1$ is a simple hypothesis. We will also often parameterize $\mP_D$ by $\theta$ to denote $\mP_D$ conditioned on the latent hidden structure $\theta$. When $H_1$ is composite, $\mP_D$ denotes the set of distributions permitted under $H_1$. These general conventions are introduced on a per problem basis in this section. In Section \[subsec:2-estimation\], we show that our reductions and lower bounds for these detection formulations also imply lower bounds for analogous estimation and recovery variants.
#### Robust Sparse Mean Estimation.
Our hypothesis testing formulation for the problem $\pr{rsme}(n, k, d, \tau, \epsilon)$ has hypotheses given by $$\begin{aligned}
H_0 : (X_1, X_2, \dots, X_n) &\sim_{\textnormal{i.i.d.}} \mN(0, I_d) \\
H_1 : (X_1, X_2, \dots, X_n) &\sim_{\textnormal{i.i.d.}} \pr{mix}_{\epsilon}\left( \mN(\tau \cdot \mu_R, I_d), \mD_O \right)\end{aligned}$$ where $\mD_O$ is any adversarially chosen outlier distribution on $\mathbb{R}^d$ and $\mu_R \in \mathbb{R}^d$ is a $k$-sparse unit vector chosen uniformly at random from all such vectors with entries in $\{0, 1/\sqrt{k}\}$. Note that $H_1$ is a composite hypothesis here since $\mD_O$ is arbitrary. Note also that this is a formulation of $\pr{rsme}$ in Huber’s contamination model, and therefore lower bounds for this detection problem imply corresponding lower bounds under stronger $\epsilon$-corruption adversaries.
As discussed in Section \[subsec:1-problems-rsme\], $\pr{rsme}$ is only information-theoretically feasible when $\tau = \Omega(\epsilon)$. Consider any algorithm that produces an estimate $\hat{\mu}$ satisfying $\| \hat{\mu} - \mu \|_2 < \tau/2$ with probability $1/2 + \Omega(1)$ in the estimation formulation for $\pr{rsme}$ with hidden $k$-sparse vector $\mu$, as described in Section \[subsec:1-problems-rsme\]. This algorithm would necessarily output some $\hat{\mu}$ with $\| \hat{\mu} \|_2 < \tau/2$ under $H_0$ and some $\hat{\mu}$ with $\| \hat{\mu} \|_2 > \tau/2$ under $H_1$, each with probability $1/2 + \Omega(1)$ in the hypothesis testing formulation above, thus solving it in the sense of Section \[sec:1-problems\]. Thus any computational lower bound for this hypothesis testing formulation also implies a lower bound for the typical estimation formulation of $\pr{rsme}$.
#### Dense Stochastic Block Models.
Given a subset $C_1 \subseteq [n]$ of size $n/k$, let $\pr{isbm}_D(n, C_1, P_{11}, P_{12}, P_{22})$ denote the distribution on $n$-vertex graphs $G'$ introduced in Section \[subsec:1-problems-sbm\] conditioned on $C_1$. Furthermore, let $\pr{isbm}_D(n, k, P_{11}, P_{12}, P_{22})$ denote the mixture of these distributions induced by choosing $C_1$ uniformly at random from the $(n/k)$-subsets of $[n]$. The problem $\pr{isbm}(n, k, P_{11}, P_{12}, P_{22})$ introduced in Section \[subsec:1-problems-sbm\] is already a hypothesis testing problem, with hypotheses $$H_0 : G \sim \mG\left(n, P_0 \right) \quad \text{and} \quad H_1 : G \sim \pr{isbm}_D(n, k, P_{11}, P_{12}, P_{22})$$ where $H_0$ is a composite hypothesis and $P_0$ can vary over all edge densities in $(0, 1)$. As we will discuss at the end of this section, computational lower bounds for this hypothesis testing problem imply lower bounds for the problem of recovering the hidden community $C_1$.
#### Testing Hidden Partition Models.
Let $C = (C_1, C_2, \dots, C_r)$ and $D = (D_1, D_2, \dots, D_r)$ be two fixed sequences, each consisting of disjoint $K$-subsets of $[n]$. Let $\pr{ghpm}_D(n, r, C, D, \gamma)$ denote the distribution over random matrices $M \in \mathbb{R}^{n \times n}$ introduced in Section \[subsec:1-problems-hidden-partition\] conditioned on the fixed sequences $C$ and $D$. We denote the mixture over these distributions induced by choosing $C$ and $D$ independently and uniformly at random from all admissible such sequences as $\pr{ghpm}_D(n, r, K, \gamma)$. Similarly, we let $\pr{bhpm}_D(n, r, C, D, P_0, \gamma)$ denote the distribution over bipartite graphs $G$ with two parts of size $n$, each indexed by $[n]$, with edges included independently with probability $$\bP\left[ (i, j) \in E(G) \right] = \left\{ \begin{array}{ll} P_0 + \gamma &\textnormal{if } i \in C_h \textnormal{ and } j \in D_h \textnormal{ for some } h \in [r] \\ P_0 - \frac{\gamma}{r - 1} &\textnormal{if } i \in C_{h_1} \textnormal{ and } j \in D_{h_2} \textnormal{ where } h_1 \neq h_2 \\ P_0 &\textnormal{otherwise} \end{array} \right.$$ where $P_0, \gamma \in (0, 1)$ are such that $\gamma/r \le P_0 \le 1 - \gamma$. Then let $\pr{bhpm}_D(n, r, K, P_0, \gamma)$ denote the mixture formed by choosing $C$ and $D$ randomly as in $\pr{ghpm}_D$. The problems $\pr{ghpm}(n, r, K, \gamma)$ and $\pr{bhpm}(n, r, K, P_0, \gamma)$ are simple hypothesis testing problems given by $$\begin{array}{lll}
H_0: M \sim \mN(0, 1)^{\otimes n \times n} &\text{and} &H_1: M \sim \pr{ghpm}_D(n, r, K, \gamma) \\
H_0: G \sim \mG_B(n, n, P_0) &\text{and} &H_1: G \sim \pr{bhpm}_D(n, r, K, P_0, \gamma)
\end{array}$$ where $\mG_B(n, n, P_0)$ denotes the Erdős-Rényi distribution over bipartite graphs with two parts each indexed by $[n]$ and where each edge is included independently with probability $P_0$.
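The off-diagonal depression $-\gamma/(r-1)$ in the edge probabilities above exactly balances the planted elevation $+\gamma$: every vertex has expected degree $P_0 n$ under both hypotheses, so first-moment degree statistics carry no signal. A minimal sketch checking this cancellation (the block choices and parameter values are illustrative):

```python
n, r, K = 12, 3, 2
P0, gamma = 0.5, 0.2
# Illustrative hidden partitions: C_h and D_h are disjoint K-subsets of [n]
C = [set(range(h * K, (h + 1) * K)) for h in range(r)]
D = [set(range(h * K, (h + 1) * K)) for h in range(r)]

def edge_prob(i, j):
    """Edge probability of (i, j) under H_1, following the displayed cases."""
    hi = next((h for h in range(r) if i in C[h]), None)
    hj = next((h for h in range(r) if j in D[h]), None)
    if hi is None or hj is None:
        return P0
    return P0 + gamma if hi == hj else P0 - gamma / (r - 1)

for i in range(n):
    expected_degree = sum(edge_prob(i, j) for j in range(n))
    assert abs(expected_degree - P0 * n) < 1e-9  # matches H_0 exactly
```

Each left vertex in $C_h$ sees $K$ right vertices at probability $P_0 + \gamma$ and $(r-1)K$ at probability $P_0 - \gamma/(r-1)$, and the two contributions cancel.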
#### Semirandom Planted Dense Subgraph.
Our hypothesis testing formulation for $\pr{semi-cr}(n, k, P_1, P_0)$ has observation $G \in \mG_n$ and two composite hypotheses given by $$\begin{aligned}
&H_0 : G \sim \mathbb{P}_0 \quad \textnormal{for some } \mathbb{P}_0 \in \pr{adv}\left(\mG(n, P_0)\right) \\
&H_1 : G \sim \mathbb{P}_1 \quad \textnormal{for some } \mathbb{P}_1 \in \pr{adv}\left(\mG(n, k, P_1, P_0)\right)\end{aligned}$$ Here, $\pr{adv}\left(\mG(n, k, P_1, P_0)\right)$ denotes the set of distributions induced by a semirandom adversary that can only remove edges outside of the planted dense subgraph $S$. Similarly, the set $\pr{adv}\left(\mG(n, P_0)\right)$ corresponds to an adversary that can remove any edges from the Erdős-Rényi graph $\mG(n, P_0)$. As we will discuss at the end of this section, computational lower bounds for this hypothesis testing formulation imply lower bounds for the problem of approximately recovering the vertex subset corresponding to the planted dense subgraph.
#### Negative Sparse PCA.
Our hypothesis testing formulation for $\pr{neg-spca}(n, k, d, \theta)$ is the spiked covariance model introduced in [@johnstoneSparse04] and used to formulate ordinary $\pr{spca}$ in [@gao2017sparse; @brennan2018reducibility; @brennan2019optimal]. This problem has hypotheses given by $$\begin{aligned}
H_0 : (X_1, X_2, \dots, X_n) &\sim_{\textnormal{i.i.d.}} \mN(0, I_d) \\
H_1 : (X_1, X_2, \dots, X_n) &\sim_{\textnormal{i.i.d.}} \mN\left( 0, I_d - \theta vv^\top \right)\end{aligned}$$ where $v \in \mathbb{R}^d$ is a $k$-sparse unit vector with entries in $\{0, 1/\sqrt{k}\}$ chosen uniformly at random.
#### Unsigned and Mixtures of SLRs.
Given a vector $v \in \mathbb{R}^d$, let $\pr{lr}_d(v)$ be the distribution of a single sample-label pair $(X, y) \in \mathbb{R}^d \times \mathbb{R}$ given by $$y = \langle v, X \rangle + \eta \quad \text{where } X \sim \mN(0, I_d) \text{ and } \eta \sim \mN(0, 1) \text{ are independent}$$ Given a subset $S \subseteq [d]$, let $\pr{mslr}_D(n, S, d, \tau, 1/2)$ denote the distribution over $n$ independent sample-label pairs $(X_1, y_1), (X_2, y_2), \dots, (X_n, y_n)$ each distributed as $$(X_i, y_i) \sim \pr{lr}_d(\tau s_i v_S) \quad \text{where } s_i \sim_{\text{i.i.d.}} \text{Rad}$$ where $v_S = |S|^{-1/2} \cdot \mathbf{1}_S$ and $\text{Rad}$ denotes the Rademacher distribution, which is uniform over $\{-1, 1\}$. Note that this is an even mixture of sparse linear regressions with hidden unit vectors $v_S$ and $-v_S$ and signal strength $\tau$. Let $\pr{mslr}_D(n, k, d, \tau, 1/2)$ denote the mixture of these distributions induced by choosing $S$ uniformly at random from all $k$-subsets of $[d]$. Our hypothesis testing formulation for $\pr{mslr}(n, k, d, \tau)$ has two simple hypotheses given by $$\begin{aligned}
H_0 : \left\{ (X_i, y_i) \right\}_{i \in [n]} &\sim \left( \mN(0, I_d) \otimes \mN\left(0, 1 + \tau^2\right) \right)^{\otimes n} \\
H_1 : \left\{ (X_i, y_i) \right\}_{i \in [n]} &\sim \pr{mslr}_D(n, k, d, \tau, 1/2)\end{aligned}$$ Our hypothesis testing formulation of $\pr{uslr}(n, k, d, \tau)$ is a simple derivative of this formulation obtained by replacing each observation $(X_i, y_i)$ with $(X_i, |y_i|)$. We remark that, unlike $\pr{rsme}$ where an estimation algorithm trivially solved the hypothesis testing formulation, the hypothesis $H_0$ here is not an instance of $\pr{mslr}$ corresponding to a hidden vector of zero. This is because the labels $y_i$ under $H_0$ have variance $1 + \tau^2$, whereas they would have variance $1$ if they were this instance of $\pr{mslr}$. However, this detection problem still yields hardness for the estimation variants of $\pr{mslr}$ and $\pr{uslr}$ described in Section \[subsec:1-problems-mslr\], albeit with a slightly more involved argument. This is discussed in Section \[subsec:2-estimation\].
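The calibration of $H_0$ in the remark above is easy to check by simulation: the marginal law of each $y_i$ under $H_1$ is exactly $\mN(0, 1 + \tau^2)$, matching $H_0$, while the dependence between $X_i$ and $y_i$ survives only through $|\langle v_S, X_i \rangle|$ since the sign $s_i$ is symmetric. A quick Monte Carlo sketch (the dimension, sparsity and $\tau$ below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k, tau = 200_000, 10, 3, 1.5
v = np.zeros(d)
v[:k] = 1 / np.sqrt(k)                       # hidden k-sparse unit vector v_S

X = rng.standard_normal((n, d))
s = rng.choice([-1.0, 1.0], size=n)          # Rademacher signs s_i
y = tau * s * (X @ v) + rng.standard_normal(n)

# Marginal label variance matches the N(0, 1 + tau^2) used to define H_0 ...
assert abs(y.var() - (1 + tau**2)) < 0.05
# ... but X and y remain dependent under H_1, through |<v_S, X>|:
assert np.corrcoef(np.abs(X @ v), np.abs(y))[0, 1] > 0.1
```

This is exactly why a blackbox distinguishing the two hypotheses cannot rely on the labels alone: it must exploit the sample-label dependence.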
#### Robust SLR.
Our hypothesis testing formulation for $\pr{rslr}(n, k, d, \tau, \epsilon)$ has hypotheses given by $$\begin{aligned}
H_0 : \left\{ (X_i, y_i) \right\}_{i \in [n]} &\sim \left( \mN(0, I_d) \otimes \mN\left(0, 1 + \tau^2\right) \right)^{\otimes n} \\
H_1 : \left\{ (X_i, y_i) \right\}_{i \in [n]} &\sim_{\textnormal{i.i.d.}} \pr{mix}_{\epsilon}\left( \pr{lr}_d(\tau v), \mD_O \right)\end{aligned}$$ where $\mD_O$ is any adversarially chosen outlier distribution on $\mathbb{R}^d \times \mathbb{R}$ and $v \in \mathbb{R}^d$ is a $k$-sparse unit vector chosen uniformly at random from all such vectors with entries in $\{0, 1/\sqrt{k}\}$. As with the other formulations of SLR, we defer discussing the implications of lower bounds in this formulation for the estimation task described in Section \[subsec:1-problems-robust-slr\] to Section \[subsec:2-estimation\].
#### Tensor PCA.
Let $\pr{tpca}^s_D(n, \theta)$ denote the distribution on order $s$ tensors $T \in \mathbb{R}^{n^{\otimes s}}$ with dimensions all equal to $n$ given by $T = v^{\otimes s} + G$ where $G \sim \mN(0, 1)^{\otimes n^{\otimes s}}$ and $v \in \{-1, 1\}^n$ is chosen independently and uniformly at random. As already introduced in Section \[subsec:1-problems-tpca\], our hypothesis testing formulation for $\pr{tpca}^s(n, \theta)$ is given by $$H_0: T \sim \mN(0, 1)^{\otimes n^{\otimes s}} \quad\text{ and }\quad H_1: T \sim \pr{tpca}^s_D(n, \theta)$$ Unlike the other problems we consider, our reductions only show computational lower bounds for blackboxes solving this hypothesis testing problem with a low false positive probability. As we will show in Section \[sec:3-tensor\], this implies a lower bound for the canonical estimation formulation for tensor PCA.
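A minimal numpy sketch of sampling from $H_1$ as displayed above, together with the $s$-fold contraction $\langle T, v^{\otimes s} \rangle$, which concentrates around $n^s$ and carries the planted signal (the dimension and order are illustrative):

```python
import numpy as np

def sample_tpca(n, s, rng):
    """Sample T = v^{(x)s} + G under H_1, with v uniform on {-1, 1}^n."""
    v = rng.choice([-1.0, 1.0], size=n)
    spike = v
    for _ in range(s - 1):
        spike = np.multiply.outer(spike, v)   # builds the rank-one tensor v^{(x)s}
    G = rng.standard_normal((n,) * s)
    return spike + G, v

rng = np.random.default_rng(0)
n, s = 6, 3
T, v = sample_tpca(n, s, rng)
assert T.shape == (n,) * s

# <T, v^{(x)s}> = n^s + <G, v^{(x)s}>, with the noise term of order n^{s/2}
contraction = np.einsum('ijk,i,j,k->', T, v, v, v)
assert contraction > 0.5 * n**s
```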
#### Generalized Learning Sparse Mixtures.
Let $\{\mP_{\mu}\}_{\mu \in \mathbb{R}}$ and $\mQ$ be distributions on an arbitrary measurable space $(\mathcal{X}, \mathcal{B})$ and let $\mD$ be a mixture distribution on $\mathbb{R}$. Let $\pr{glsm}_D(n, S, d, \{\mP_{\mu}\}_{\mu \in \mathbb{R}}, \mQ, \mD)$ denote the distribution over $X_1, X_2, \dots, X_n \in \mathcal{X}^d$ introduced in Section \[subsec:1-problems-universality\] and let $\pr{glsm}_D(n, k, d, \{\mP_{\mu}\}_{\mu \in \mathbb{R}}, \mQ, \mD)$ denote the mixture over these distributions induced by sampling $S$ uniformly at random from the family of $k$-subsets of $[d]$. Our general sparse mixtures detection problem $\pr{glsm}(n, k, d, \{\mP_{\mu}\}_{\mu \in \mathbb{R}}, \mQ, \mD)$ is the following simple vs. simple hypothesis testing formulation $$H_0 : (X_1, X_2, \dots, X_n) \sim_{\textnormal{i.i.d.}} \mQ^{\otimes d} \quad \text{and} \quad H_1 : (X_1, X_2, \dots, X_n) \sim \pr{glsm}_D\left(n, k, d, \{\mP_{\mu}\}_{\mu \in \mathbb{R}}, \mQ, \mD\right)$$ Lower bounds for this formulation directly imply lower bounds for algorithms that return an estimate $\hat{S}$ of $S$ given samples from $\pr{glsm}_D(n, S, d, \{\mP_{\mu}\}_{\mu \in \mathbb{R}}, \mQ, \mD)$ with $|\hat{S} \Delta S| < k/2$ with probability $1/2 + \Omega(1)$ for all $|S| \le k$. Note that under $H_0$, such an algorithm would output some set $\hat{S}$ of size less than $k/2$ and, under $H_1$, it would output a set of size greater than $k/2$, each with probability $1/2 + \Omega(1)$. Thus thresholding $|\hat{S}|$ at $k/2$ solves this detection formulation in the sense of Section \[sec:1-problems\].
Notation {#subsec:2-notation}
--------
In this section, we establish notation that will be used repeatedly throughout this paper. Some of these definitions are repeated later upon use for convenience. Let $\mL(X)$ denote the distribution law of a random variable $X$ and given two laws $\mL_1$ and $\mL_2$, let $\mL_1 + \mL_2$ denote $\mL(X + Y)$ where $X \sim \mL_1$ and $Y \sim \mL_2$ are independent. Given a distribution $\mathcal{P}$, let $\mathcal{P}^{\otimes n}$ denote the distribution of $(X_1, X_2, \dots, X_n)$ where the $X_i$ are i.i.d. according to $\mathcal{P}$. Similarly, let $\mathcal{P}^{\otimes m \times n}$ denote the distribution on $\mathbb{R}^{m \times n}$ with i.i.d. entries distributed as $\mathcal{P}$. We let $\mathbb{R}^{n^{\otimes s}}$ denote the set of all order $s$ tensors with all dimensions equal to $n$, which contain $n^s$ entries. The distribution $\mP^{\otimes n^{\otimes s}}$ denotes a tensor of these dimensions with entries independently sampled from $\mP$. We say that two parameters $a$ and $b$ are polynomial in one another if there is a constant $C > 0$ such that $a^{1/C} \le b \le a^C$ as $a \to \infty$. In this paper, we adopt the standard asymptotic notation $O(\cdot), \Omega(\cdot), o(\cdot), \omega(\cdot)$ and $\Theta(\cdot)$. We let $a \asymp b$, $a \lesssim b$ and $a \gtrsim b$ be shorthands for $a = \Theta(b), a = O(b)$ and $a = \Omega(b)$, respectively. In all problems that we consider, our main focus is on the polynomial order of growth at computational barriers, usually in terms of a natural parameter $n$. Given a natural parameter $n$ that will usually be clear from context, we let $a = \tilde{O}(b)$ be a shorthand for $a = O\left(b \cdot (\log n)^c \right)$ for some constant $c > 0$, and define $\tilde{\Omega}(\cdot), \tilde{o}(\cdot), \tilde{\omega}(\cdot)$ and $\tilde{\Theta}(\cdot)$ analogously. Oftentimes, it will be true that $b$ is polynomial in $n$, in which case $n$ can be replaced by $b$ in the definition above.
Given a finite or measurable set $\mathcal{X}$, let $\text{Unif}[\mathcal{X}]$ denote the uniform distribution on $\mathcal{X}$. Let $\text{Rad}$ be shorthand for $\text{Unif}[\{-1, 1\}]$, corresponding to the special case of a Rademacher random variable. Let $\TV$, $\KL$ and $\chi^2$ denote total variation distance, KL divergence and $\chi^2$ divergence, respectively. Let $\mN(\mu, \Sigma)$ denote a multivariate normal random vector with mean $\mu \in \mathbb{R}^d$ and covariance matrix $\Sigma$, where $\Sigma$ is a $d \times d$ positive semidefinite matrix, and let $\text{Bern}(p)$ denote the Bernoulli distribution with probability $p$. Let $[n] = \{1, 2, \dots, n\}$ and $\mG_n$ be the set of simple graphs on $n$ vertices. Let $\mG(n, p)$ denote the Erdős-Rényi distribution over $n$-vertex graphs where each edge is included independently with probability $p$. Let $\mG_B(m, n, p)$ denote the Erdős-Rényi distribution over $(m + n)$-vertex bipartite graphs with $m$ left vertices and $n$ right vertices, such that each of the $mn$ possible edges is included independently with probability $p$. Throughout this paper, we will refer to bipartite graphs with $m$ left vertices and $n$ right vertices and matrices in $\{0, 1\}^{m \times n}$ interchangeably. Let $\mathbf{1}_S$ denote the vector $v \in \mathbb{R}^n$ with $v_i = 1$ if $i \in S$ and $v_i = 0$ if $i \not \in S$ where $S \subseteq [n]$. Let $\pr{mix}_{\epsilon}(\mD_1, \mD_2)$ denote the $\epsilon$-mixture distribution formed by sampling $\mD_1$ with probability $(1 - \epsilon)$ and $\mD_2$ with probability $\epsilon$. Given a partition $E$ of $[N]$ with $k$ parts, let $\mU_N(E)$ denote the uniform distribution over all $k$-subsets of $[N]$ containing exactly one element from each part of $E$.
Given a matrix $M \in \mathbb{R}^{n \times n}$, the matrix $M_{S, T} \in \mathbb{R}^{k \times k}$ where $S, T$ are $k$-subsets of $[n]$ refers to the minor of $M$ restricted to the row indices in $S$ and column indices in $T$. Furthermore, $(M_{S, T})_{i, j} = M_{\sigma_S(i), \sigma_T(j)}$ where $\sigma_S : [k] \to S$ is the unique order-preserving bijection and $\sigma_T$ is analogously defined. Given an index set $I$, subset $S \subseteq I$ and pair of distributions $(\mP, \mQ)$, let $\mathcal{M}_I(S, \mP, \mQ)$ denote the distribution of a collection of independent random variables $(X_i : i \in I)$ with $X_i \sim \mP$ if $i \in S$ and $X_i \sim \mQ$ if $i \not \in S$. When $S$ is a random set, $\mathcal{M}_I(S, \mP, \mQ)$ denotes the corresponding mixture over the randomness of $S$; e.g. $\mathcal{M}_{[N]}(\mU_N(E), \mP, \mQ)$ denotes the mixture of $\mathcal{M}_{[N]}(S, \mP, \mQ)$ over $S \sim \mU_N(E)$. More generally, given an index set $I$ and $|I|$ distributions $\mP_1, \mP_2, \dots, \mP_{|I|}$, let $\mathcal{M}_I(\mP_i : i \in I)$ denote the distribution of independent random variables $(X_i : i \in I)$ with $X_i \sim \mP_i$ for each $i \in I$. The planted Bernoulli distribution $\pr{pb}(n, i, p, q)$ is over $V \in \{0, 1\}^n$ with independent entries satisfying that $V_j \sim \text{Bern}(q)$ unless $j = i$, in which case $V_i \sim \text{Bern}(p)$. In other words, $\pr{pb}(n, i, p, q)$ is a shorthand for $\mathcal{M}_{[n]}\left(\{ i \}, \text{Bern}(p), \text{Bern}(q)\right)$. Similarly, the planted dense subgraph distribution $\mG(n, S, p, q)$ can be written as $\mathcal{M}_I\left(\binom{S}{2}, \text{Bern}(p), \text{Bern}(q) \right)$ where $I = \binom{[n]}{2}$.
Rejection Kernels and Reduction Preprocessing {#sec:2-rejection-kernels}
=============================================
In this section, we present several average-case reduction primitives that will serve as the key subroutines and preprocessing steps in our reductions. These include pre-existing subroutines from the rejection kernels framework introduced in [@brennan2018reducibility; @brennan2019universality; @brennan2019optimal], such as univariate rejection kernels from binary inputs and $\pr{Gaussianize}$. We introduce the primitive $\pr{To-}k\textsc{-Partite-Submatrix}$, which is a generalization of $\pr{To-Submatrix}$ from [@brennan2019universality] that maps from the $k$-partite variant of planted dense subgraph to Bernoulli matrices, by filling in the missing diagonal and symmetrizing. We also introduce a new variant of rejection kernels called symmetric 3-ary rejection kernels that will be crucial in our reductions showing universality of lower bounds for sparse mixtures.
Gaussian Rejection Kernels {#subsec:2-gaussian}
--------------------------
**Algorithm** $\textsc{rk}_G(\mu, B)$
*Parameters*: Input $B \in \{0, 1\}$, Bernoulli probabilities $0 < q < p \le 1$, Gaussian mean $\mu$, number of iterations $N$, let $\varphi_\mu(x) = \frac{1}{\sqrt{2\pi}} \cdot \exp\left(- \frac{1}{2}(x - \mu)^2 \right)$ denote the density of $\mN(\mu, 1)$
1. Initialize $z \gets 0$.
2. Until $z$ is set or $N$ iterations have elapsed:
1. Sample $z' \sim \mN(0, 1)$ independently.
2. If $B = 0$, if the condition $$p \cdot \varphi_0(z') \ge q \cdot \varphi_{\mu}(z')$$ holds, then set $z \gets z'$ with probability $1 - \frac{q \cdot \varphi_\mu(z')}{p \cdot \varphi_0(z')}$.
3. If $B = 1$, if the condition $$(1 - q) \cdot \varphi_\mu(z' + \mu) \ge (1 - p) \cdot \varphi_0(z' + \mu)$$ holds, then set $z \gets z' + \mu$ with probability $1 - \frac{(1 - p) \cdot \varphi_0(z' + \mu)}{(1 - q) \cdot \varphi_\mu(z' + \mu)}$.
3. Output $z$.
**Algorithm** $\textsc{Gaussianize}$
*Parameters*: Collection of variables $X_i \in \{0, 1\}$ for $i \in I$ where $I$ is some index set with $|I| = n$, rejection kernel parameter $R_{\pr{rk}}$, Bernoulli probabilities $0 < q < p \le 1$ with $p - q \ge R_{\pr{rk}}^{-O(1)}$ and $\min(q, 1 - q) = \Omega(1)$, and target means $0 \le \mu_{i} \le \tau$ for each $i \in I$ where $\tau > 0$ is a parameter
1. Form the collection of variables $Y \in \mathbb{R}^{I}$ by setting $$Y_i \gets \textsc{rk}_{G}(\mu_i, X_i)$$ for each $i \in I$ where each $\textsc{rk}_{G}$ is run with parameter $R_{\pr{rk}}$ and $N = \lceil 6\delta^{-1} \log R_{\pr{rk}} \rceil$ iterations where $\delta = \min \left\{ \log \left( \frac{p}{q} \right), \log \left( \frac{1 - q}{1 - p} \right) \right\}$.
2. Output the collection of variables $(Y_i : i \in I)$.
Rejection kernels are a framework in [@brennan2018reducibility; @brennan2019universality; @brennan2019optimal] for algorithmic changes of measure based on rejection sampling. Related reduction primitives for changes of measure to Gaussians and binomial random variables appeared earlier in [@ma2015computational; @hajek2015computational]. Rejection kernels mapping a pair of Bernoulli distributions to a target pair of scalar distributions were introduced in [@brennan2018reducibility]. These were extended to arbitrary high-dimensional target distributions and applied to obtain universality results for submatrix detection in [@brennan2019universality]. A surprising and key feature of both of these rejection kernels is that they are not lossy in mapping one computational barrier to another. For instance, in [@brennan2019universality], multivariate rejection kernels were applied to increase the relative size $k$ of the planted submatrix, faithfully mapping instances tight to the computational barrier at lower $k$ to tight instances at higher $k$. This feature is also true of the scalar rejection kernels applied in [@brennan2018reducibility].
In this work, we will only need a subset of prior results on rejection kernels. In this section, we give an overview of the key guarantees for Gaussian rejection kernels with binary inputs from [@brennan2018reducibility] and for $\textsc{Gaussianize}$ from [@brennan2019optimal]. We will also need a new ternary input variant of rejection kernels that will be introduced in Section \[subsec:srk\]. We begin by introducing the Gaussian rejection kernel $\pr{rk}_G(\mu, B)$, which maps $B \in \{0, 1\}$ to a real-valued output and is parameterized by some $0 < q < p \le 1$. The map $\pr{rk}_G(\mu, B)$ approximately transforms its Bernoulli input into a Gaussian. Specifically, it satisfies the two Markov transition properties $$\pr{rk}_G(\mu, B) \approx \mN(0, 1) \quad \text{if } B \sim \text{Bern}(q) \quad \quad \text{and} \quad \quad \pr{rk}_G(\mu, B) \approx \mN(\mu, 1) \quad \text{if } B \sim \text{Bern}(p)$$ where $\pr{rk}_G(\mu, B)$ can be computed in $\text{poly}(n)$ time, the $\approx$ above are up to $O_n(n^{-3})$ total variation distance and $\mu = \Theta(1/\sqrt{\log n})$. The maps $\pr{rk}_G(\mu, B)$ can be implemented with the rejection sampling scheme shown in Figure \[fig:rej-kernel\]. The total variation guarantees for Gaussian rejection kernels are captured formally in the following theorem.
\[lem:5c\] Let $R_{\pr{rk}}$ be a parameter and suppose that $p = p(R_{\pr{rk}})$ and $q = q(R_{\pr{rk}})$ satisfy that $0 < q < p \le 1$, $\min(q, 1 - q) = \Omega(1)$ and $p - q \ge R_{\pr{rk}}^{-O(1)}$. Let $\delta = \min \left\{ \log \left( \frac{p}{q} \right), \log \left( \frac{1 - q}{1 - p} \right) \right\}$. Suppose that $\mu = \mu(R_{\pr{rk}}) \in (0, 1)$ satisfies that $$\mu \le \frac{\delta}{2 \sqrt{6\log R_{\pr{rk}} + 2\log (p-q)^{-1}}}$$ Then the map $\textsc{rk}_{\text{G}}$ with $N = \left\lceil 6\delta^{-1} \log R_{\pr{rk}} \right\rceil$ iterations can be computed in $\text{poly}(R_{\pr{rk}})$ time and satisfies $$\TV\left(\textsc{rk}_{\text{G}}(\mu, \textnormal{Bern}(p)), \mN(\mu, 1) \right) = O\left(R_{\pr{rk}}^{-3}\right) \quad \text{and} \quad \TV\left(\textsc{rk}_{\text{G}}(\mu, \textnormal{Bern}(q)), \mN(0, 1) \right) = O\left(R_{\pr{rk}}^{-3}\right)$$
The proof of this lemma consists of showing that the distributions of the outputs $\pr{rk}_G(\mu, \text{Bern}(p))$ and $\pr{rk}_G(\mu, \text{Bern}(q))$ are close to $\mN(\mu, 1)$ and $\mN(0, 1)$ when conditioned to lie in the set of $x$ with $\frac{1 - p}{1 - q} \le \frac{\varphi_\mu(x)}{\varphi_0(x)} \le \frac{p}{q}$, and then showing that this event occurs with probability close to one. The original framework in [@brennan2018reducibility] mapped binary inputs to more general pairs of target distributions than $\mN(\mu, 1)$ and $\mN(0, 1)$; however, we will only require binary-input rejection kernels in the Gaussian case. A multivariate extension of this framework appeared in [@brennan2019universality].
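For concreteness, the rejection scheme in Figure \[fig:rej-kernel\] can be transcribed directly. The sketch below uses illustrative parameters $p = 3/4$, $q = 1/4$ and $\mu = 0.1$, which satisfy the constraint on $\mu$ in Lemma \[lem:5c\] with $R_{\pr{rk}} = 100$, and empirically checks the two Markov transition properties:

```python
import math
import random

def phi(x, mu):
    """Density of N(mu, 1) at x."""
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi)

def rk_gaussian(B, mu, p, q, N):
    """One draw of rk_G(mu, B), following Steps 1-3 of the scheme."""
    for _ in range(N):
        z = random.gauss(0.0, 1.0)
        if B == 0 and p * phi(z, 0.0) >= q * phi(z, mu):
            if random.random() < 1 - (q * phi(z, mu)) / (p * phi(z, 0.0)):
                return z
        if B == 1 and (1 - q) * phi(z + mu, mu) >= (1 - p) * phi(z + mu, 0.0):
            accept = 1 - ((1 - p) * phi(z + mu, 0.0)) / ((1 - q) * phi(z + mu, mu))
            if random.random() < accept:
                return z + mu
    return 0.0  # z was never set within N iterations

random.seed(0)
p, q, mu, N = 0.75, 0.25, 0.1, 30   # N >= ceil(6 log(100) / delta), delta = log 3
null = [rk_gaussian(int(random.random() < q), mu, p, q, N) for _ in range(20000)]
alt = [rk_gaussian(int(random.random() < p), mu, p, q, N) for _ in range(20000)]
m0 = sum(null) / len(null)
m1 = sum(alt) / len(alt)
assert abs(m0) < 0.05 and abs(m1 - mu) < 0.05   # means of ~N(0,1) and ~N(mu,1)
```

The per-iteration acceptance probabilities work out to $(p-q)/p$ and $(p-q)/(1-q)$ for the two branches, so with these parameters the loop rarely runs more than a few times.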
Given an index set $I$, subset $S \subseteq I$ and pair of distributions $(\mP, \mQ)$, let $\mathcal{M}_I(S, \mP, \mQ)$ denote the distribution of a collection of independent random variables $(X_i : i \in I)$ with $X_i \sim \mP$ if $i \in S$ and $X_i \sim \mQ$ if $i \not \in S$. More generally, given an index set $I$ and $|I|$ distributions $\mP_1, \mP_2, \dots, \mP_{|I|}$, let $\mathcal{M}_I(\mP_i : i \in I)$ denote the distribution of independent random variables $(X_i : i \in I)$ with $X_i \sim \mP_i$ for each $i \in I$. For example, a planted clique in $\mG(n, 1/2)$ on the set $S \subseteq [n]$ can be written as $\mathcal{M}_I\left(\binom{S}{2}, \text{Bern}(1), \text{Bern}(1/2) \right)$ where $I = \binom{[n]}{2}$.
We now review the guarantees for the subroutine $\textsc{Gaussianize}$. The variant presented here is restated from [@brennan2019optimal] to be over a general index set $I$ rather than matrices, and with the rejection kernel parameter $R_{\pr{rk}}$ decoupled from the size $n$ of $I$, as shown in Figure \[fig:rej-kernel\]. $\textsc{Gaussianize}$ maps a set of planted Bernoulli random variables to a set of independent Gaussian random variables with corresponding planted means. The procedure applies a Gaussian rejection kernel entrywise and its total variation guarantees follow by a simple application of the tensorization property of $\TV$ from Fact \[tvfacts\].
\[lem:gaussianize\] Let $I$ be an index set with $|I| = n$ and let $R_{\pr{rk}}$, $0 < q < p \le 1$ and $\delta$ be as in Lemma \[lem:5c\]. Let $\mu_i$ be such that $0 \le \mu_i \le \tau$ for each $i \in I$ where the parameter $\tau > 0$ satisfies that $$\tau \le \frac{\delta}{2 \sqrt{6\log R_{\pr{rk}} + 2\log (p - q)^{-1}}}$$ The algorithm $\mathcal{A} = \textsc{Gaussianize}$ runs in $\textnormal{poly}(n, R_{\pr{rk}})$ time and satisfies that $$\begin{aligned}
\TV\left( \mathcal{A}(\mathcal{M}_I(S, \textnormal{Bern}(p), \textnormal{Bern}(q))), \, \mathcal{M}_I\left( \mN(\mu_i \cdot \mathbf{1}(i \in S), 1) : i \in I \right) \right) &= O\left(n \cdot R_{\pr{rk}}^{-3}\right)\end{aligned}$$ for all subsets $S \subseteq I$.
Cloning and Planting Diagonals {#subsec:2-planting-diagonals}
------------------------------
We begin by reviewing the subroutine $\textsc{Graph-Clone}$, shown in Figure \[fig:clone\], which was introduced in [@brennan2019universality] and produces several independent samples from a planted subgraph problem given a single sample. Its properties as a Markov kernel are stated in the next lemma, which is proven by showing the two explicit expressions for $\bP[x^{ij} = v]$ in Step 1 define valid probability distributions and then explicitly writing the mass functions of $\mathcal{A}\left( \mG(n, q) \right)$ and $\mathcal{A}\left( \mG(n, S, p, q) \right)$.
**Algorithm** <span style="font-variant:small-caps;">Graph-Clone</span>
*Inputs*: Graph $G \in \mG_n$, the number of copies $t$, parameters $0 < q < p \le 1$ and $0 < Q < P \le 1$ satisfying $\frac{1 - p}{1 - q} \le \left( \frac{1 - P}{1 - Q} \right)^t$ and $\left( \frac{P}{Q} \right)^t \le \frac{p}{q}$
1. Generate $x^{ij} \in \{0, 1\}^t$ for each $1 \le i < j \le n$ such that:
- If $\{i, j \} \in E(G)$, sample $x^{ij}$ from the distribution on $\{0, 1\}^t$ with $$\bP[x^{ij} = v] = \frac{1}{p - q} \left[ (1 - q) \cdot P^{|v|_1} (1 - P)^{t - |v|_1} - (1 - p) \cdot Q^{|v|_1} (1 - Q)^{t - |v|_1} \right]$$
- If $\{i, j \} \not \in E(G)$, sample $x^{ij}$ from the distribution on $\{0, 1\}^t$ with $$\bP[x^{ij} = v] = \frac{1}{p - q} \left[ p \cdot Q^{|v|_1} (1 - Q)^{t - |v|_1} - q \cdot P^{|v|_1} (1 - P)^{t - |v|_1} \right]$$
2. Output the graphs $(G_1, G_2, \dots, G_t)$ where $\{i, j\} \in E(G_k)$ if and only if $x^{ij}_k = 1$.
\[lem:graphcloning\] Let $t \in \mathbb{N}$, $0 < q < p \le 1$ and $0 < Q < P \le 1$ satisfy that $$\frac{1 - p}{1 - q} \le \left( \frac{1 - P}{1 - Q} \right)^t \quad \text{and} \quad \left( \frac{P}{Q} \right)^t \le \frac{p}{q}$$ Then the algorithm $\mathcal{A} = \textsc{Graph-Clone}$ runs in $\textnormal{poly}(t, n)$ time and satisfies that for each $S \subseteq [n]$, $$\mathcal{A}\left( \mG(n, q) \right) \sim \mG(n, Q)^{\otimes t} \quad \text{and} \quad \mathcal{A}\left( \mG(n, S, p, q) \right) \sim \mG(n, S, P, Q)^{\otimes t}$$
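Lemma \[lem:graphcloning\] hinges on two facts about Step 1 of $\textsc{Graph-Clone}$: under the stated conditions on $(p, q, P, Q, t)$, both displayed expressions define valid probability distributions on $\{0, 1\}^t$, and mixing them with edge probability $q$ (resp. $p$) recovers i.i.d. $\textnormal{Bern}(Q)$ (resp. $\textnormal{Bern}(P)$) coordinates. Both identities can be checked numerically; the parameters below are illustrative and satisfy the admissibility conditions:

```python
from itertools import product

p, q, P, Q, t = 0.9, 0.4, 0.7, 0.5, 2
# Admissibility conditions from the lemma:
assert (1 - p) / (1 - q) <= ((1 - P) / (1 - Q)) ** t
assert (P / Q) ** t <= p / q

def edge_pmf(v):
    """P[x^{ij} = v] when {i, j} is an edge of G."""
    a = sum(v)
    return ((1 - q) * P**a * (1 - P)**(t - a)
            - (1 - p) * Q**a * (1 - Q)**(t - a)) / (p - q)

def nonedge_pmf(v):
    """P[x^{ij} = v] when {i, j} is not an edge of G."""
    a = sum(v)
    return (p * Q**a * (1 - Q)**(t - a)
            - q * P**a * (1 - P)**(t - a)) / (p - q)

for v in product([0, 1], repeat=t):
    a = sum(v)
    assert edge_pmf(v) >= 0 and nonedge_pmf(v) >= 0   # valid pmfs
    # Mixing with edge probability q recovers i.i.d. Bern(Q)^t coordinates ...
    assert abs(q * edge_pmf(v) + (1 - q) * nonedge_pmf(v)
               - Q**a * (1 - Q)**(t - a)) < 1e-12
    # ... and with edge probability p recovers Bern(P)^t:
    assert abs(p * edge_pmf(v) + (1 - p) * nonedge_pmf(v)
               - P**a * (1 - P)**(t - a)) < 1e-12
```

The mixing identities hold exactly by construction of the two pmfs; the admissibility conditions are what make both expressions nonnegative.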
Graph cloning generalizes to a method for cloning a set of Bernoulli random variables indexed by a general index set $I$ rather than by the possible edges of a graph on the vertex set $[n]$. The guarantees for this subroutine are stated in the following lemma. We remark that both of these lemmas will always be applied with $t = O(1)$, resulting in only a constant factor loss in signal strength.
\[lem:bern-clone\] Let $I$ be an index set with $|I| = n$, let $t \in \mathbb{N}$, $0 < q < p \le 1$ and $0 < Q < P \le 1$ satisfy that $$\frac{1 - p}{1 - q} \le \left( \frac{1 - P}{1 - Q} \right)^t \quad \text{and} \quad \left( \frac{P}{Q} \right)^t \le \frac{p}{q}$$ There is an algorithm $\mathcal{A} = \textsc{Bernoulli-Clone}$ that runs in $\textnormal{poly}(t, n)$ time and satisfies that $$\begin{aligned}
&\mathcal{A}\left( \mathcal{M}_I(\textnormal{Bern}(q)) \right) \sim \mathcal{M}_I(\textnormal{Bern}(Q))^{\otimes t} \quad \text{and} \\
&\mathcal{A}\left( \mathcal{M}_I(S, \textnormal{Bern}(p), \textnormal{Bern}(q)) \right) \sim \mathcal{M}_I(S, \textnormal{Bern}(P), \textnormal{Bern}(Q))^{\otimes t}\end{aligned}$$ for each $S \subseteq I$.
**Algorithm** $\pr{To-}k\textsc{-Partite-Submatrix}$
*Inputs*: $k$-partite planted dense subgraph instance $G \in \mG_N$ with planted subgraph size $k$ dividing $N$ and partition $E$ of $[N]$, edge probabilities $0 < q < p \le 1$ with $q = N^{-O(1)}$, and target dimension $n$ divisible by $k$ with $n \ge \left(\frac{p}{Q} + 1 \right)N$ where $Q = 1 - \sqrt{(1 - p)(1 - q)} + \mathbf{1}_{\{p = 1\}} \left( \sqrt{q} - 1 \right)$
1. Apply $\textsc{Graph-Clone}$ to $G$ with edge probabilities $P = p$ and $Q = 1 - \sqrt{(1 - p)(1 - q)} + \mathbf{1}_{\{p = 1\}} \left( \sqrt{q} - 1 \right)$ and $t = 2$ clones to obtain $(G_1, G_2)$.
2. Let $F$ be a partition of $[n]$ with $[n] = F_1 \cup F_2 \cup \cdots \cup F_k$ and $|F_i| = n/k$. Form the matrix $M_{\text{PD}} \in \{0, 1\}^{n \times n}$ as follows:
1. For each $t \in [k]$, sample $s_1^t \sim \text{Bin}(N/k, p)$ and $s_2^t \sim \text{Bin}(n/k, Q)$ and let $S_t$ be a subset of $F_t$ with $|S_t| = N/k$ selected uniformly at random. Sample $T_1^t \subseteq S_t$ and $T_2^t \subseteq F_t \backslash S_t$ with $|T_1^t| = s_1^t$ and $|T_2^t| = \max\{s_2^t - s_1^t, 0 \}$ uniformly at random.
2. Now form the matrix $M_{\text{PD}}$ such that its $(i, j)$th entry is $$(M_{\text{PD}})_{ij} = \left\{ \begin{array}{ll} \mathbf{1}_{\{\pi_t(i), \pi_t(j)\} \in E(G_1)} & \text{if } i < j \text{ and } i, j \in S_t \\ \mathbf{1}_{\{\pi_t(i), \pi_t(j)\} \in E(G_2)} & \text{if } i > j \text{ and } i, j \in S_t \\ \mathbf{1}_{\{ i \in T_1^t \}} & \text{if } i = j \text{ and } i, j \in S_t \\ \mathbf{1}_{\{i \in T_2^t\}} & \text{if } i = j \text{ and } i, j \in F_t \backslash S_t \\ \sim_{\text{i.i.d.}} \text{Bern}(Q) & \text{if } i \neq j \text{ and } (i, j) \not \in S_t^2 \text{ for all } t \in [k] \end{array} \right.$$ where $\pi_t : S_t \to E_t$ is a bijection chosen uniformly at random.
3. Output the matrix $M_{\text{PD}}$ and the partition $F$.
We now introduce the procedure $\pr{To-}k\textsc{-Partite-Submatrix}$, which is shown in Figure \[fig:tosubmatrix\] and will be crucial in our reductions to dense variants of the stochastic block model. This reduction clones the upper half of the adjacency matrix of the input graph to produce an independent lower half, and plants diagonal entries while randomly embedding into a larger matrix so as to hide the diagonal entries in total variation. $\pr{To-}k\textsc{-Partite-Submatrix}$ is similar to $\textsc{To-Submatrix}$ in [@brennan2019universality] and $\textsc{To-Bernoulli-Submatrix}$ in [@brennan2019optimal] but ensures that the random embedding step accounts for the $k$-partite promise of the input instance. Completing the missing diagonal entries in the adjacency matrix will be crucial to apply one of our main techniques, Bernoulli rotations, which will be introduced in the next section.
The next lemma states the total variation guarantees of $\pr{To-}k\textsc{-Partite-Submatrix}$ and is a $k$-partite variant of Theorem 6.1 in [@brennan2019universality]. Although technically more subtle than the analysis of $\textsc{To-Submatrix}$ in [@brennan2019universality], this proof is tangential to our main reduction techniques and deferred to Appendix \[subsec:appendix-2-k-partite\]. Given a partition $E$ of $[N]$ with $k$ parts, let $\mU_N(E)$ denote the uniform distribution over all $k$-subsets of $[N]$ containing exactly one element from each part of $E$.
\[lem:submatrix\] Let $0 < q < p \le 1$ and $Q = 1 - \sqrt{(1 - p)(1 - q)} + \mathbf{1}_{\{p = 1\}} \left( \sqrt{q} - 1 \right)$. Suppose that $n$ and $N$ are such that $$n \ge \left( \frac{p}{Q} + 1 \right) N \quad \text{and} \quad k \le QN/4$$ Also suppose that $q = N^{-O(1)}$ and both $N$ and $n$ are divisible by $k$. Let $E = (E_1, E_2, \dots, E_k)$ and $F = (F_1, F_2, \dots, F_k)$ be partitions of $[N]$ and $[n]$, respectively. Then it follows that the algorithm $\mathcal{A} = \pr{To-}k\textsc{-Partite-Submatrix}$ runs in $\textnormal{poly}(N)$ time and satisfies $$\begin{aligned}
\TV\left( \mathcal{A}(\mG(N, \mU_N(E), p, q)), \, \mathcal{M}_{[n] \times [n]} \left(\mU_n(F), \textnormal{Bern}(p), \textnormal{Bern}(Q) \right) \right) &\le 4k \cdot \exp \left( - \frac{Q^2N^2}{48pkn} \right) + \sqrt{\frac{C_Q k^2}{2n}} \\
\TV\left( \mathcal{A}(\mG(N, q)), \, \textnormal{Bern}\left( Q \right)^{\otimes n \times n} \right) &\le 4k \cdot \exp \left( - \frac{Q^2N^2}{48pkn} \right)\end{aligned}$$ where $C_Q = \max \left\{ \frac{Q}{1 - Q}, \frac{1 - Q}{Q} \right\}$.
For completeness, we give an intuitive summary of the technical subtleties arising in the proof of this lemma. After applying $\textsc{Graph-Clone}$, the adjacency matrix of the input graph $G$ is still missing its diagonal entries. The main difficulty in producing these diagonal entries is to ensure that entries corresponding to vertices in the planted subgraph are properly sampled from $\text{Bern}(p)$. To do this, we randomly embed the original $N \times N$ adjacency matrix in a larger $n \times n$ matrix with i.i.d. entries from $\text{Bern}(Q)$ and sample all diagonal entries corresponding to entries of the original matrix from $\text{Bern}(p)$. The diagonal entries in the new $n - N$ columns are chosen so that the support of the diagonal within each $F_t$ has size distributed approximately as $\text{Bin}(n/k, Q)$. Even though this approximately matches the distribution of the diagonal support sizes in each $F_t$ under $H_0$ and $H_1$, the randomness of the embedding and the fact that $k = o(\sqrt{n})$ are still needed to ensure that the remaining discrepancy is hidden in total variation.
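As a small sanity check on this bookkeeping: in step 1 of $\pr{To-}k\textsc{-Partite-Submatrix}$, the diagonal support within $F_t$ has size $|T_1^t| + |T_2^t| = s_1^t + \max\{s_2^t - s_1^t, 0\} = \max\{s_1^t, s_2^t\}$. A minimal pure-Python sketch of this step (the function name and concrete parameter values are ours, for illustration only):

```python
import random

def diagonal_support_size(N, n, k, p, Q, rng=random):
    # Step 1 of To-k-Partite-Submatrix for a single part F_t: sample the two
    # binomial counts and return |T_1^t| + |T_2^t|.
    s1 = sum(rng.random() < p for _ in range(N // k))   # s_1^t ~ Bin(N/k, p)
    s2 = sum(rng.random() < Q for _ in range(n // k))   # s_2^t ~ Bin(n/k, Q)
    return s1 + max(s2 - s1, 0)

# The diagonal support size always equals max(s_1^t, s_2^t):
checks = [s1 + max(s2 - s1, 0) == max(s1, s2)
          for s1 in range(10) for s2 in range(10)]
```

Under $H_0$, $s_1^t$ is stochastically dominated by $s_2^t$ with high probability once $pN \ll Qn$, which is the source of the first term in the total variation bound above.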
Symmetric 3-ary Rejection Kernels {#subsec:srk}
---------------------------------
**Algorithm** <span style="font-variant:small-caps;">3-srk</span>$(B, \mP_+, \mP_-, \mQ)$
*Parameters*: Input $B \in \{-1, 0, 1\}$, number of iterations $N$, parameters $a \in (0, 1)$ and sufficiently small nonzero $\mu_1, \mu_2 \in \mathbb{R}$, distributions $\mP_+, \mP_-$ and $\mQ$ over a measurable space $(X, \mathcal{B})$ such that $(\mP_+, \mQ)$ and $(\mP_-, \mQ)$ are computable pairs
1. Initialize $z$ arbitrarily in the support of $\mQ$.
2. Until $z$ is set or $N$ iterations have elapsed:
1. Sample $z' \sim \mQ$ independently and compute the two quantities $$\mL_1(z') = \frac{d\mP_+}{d\mQ} (z') - \frac{d\mP_-}{d\mQ} (z') \quad \text{and} \quad \mL_2(z') = \frac{d\mP_+}{d\mQ} (z') + \frac{d\mP_-}{d\mQ} (z') - 2$$
2. Proceed to the next iteration if it does not hold that $$2|\mu_1| \ge \left| \mL_1(z') \right| \quad \text{and} \quad \frac{2|\mu_2|}{\max\{a, 1 - a\}} \ge |\mL_2(z')|$$
3. Set $z \gets z'$ with probability $P_A(z', B)$ where $$P_A(z', B) = \frac{1}{2} \cdot \left\{ \begin{array}{ll} 1+ \frac{a}{4\mu_2} \cdot \mL_2(z') + \frac{1}{4\mu_1} \cdot \mL_1(z') &\text{if } B = 1 \\ 1 - \frac{1 - a}{4\mu_2} \cdot \mL_2(z') &\text{if } B = 0 \\ 1+ \frac{1}{4\mu_2} \cdot \mL_2(z') - \frac{a}{4\mu_1} \cdot \mL_1(z') &\text{if } B = -1 \end{array} \right.$$
3. Output $z$.
In this section, we introduce symmetric 3-ary rejection kernels, which will be the key gadget in our reduction showing universality of lower bounds for learning sparse mixtures in Section \[sec:universality\]. In order to map to universal formulations of sparse mixtures, it is crucial to produce a nontrivial instance of a sparse mixture with multiple planted distributions. Since previous rejection kernels all begin with binary inputs, they do not have enough degrees of freedom to map to three output distributions. The symmetric 3-ary rejection kernels $3\pr{-srk}$ introduced in this section overcome this issue by mapping from distributions supported on $\{-1, 0, 1\}$ to three output distributions $\mP_+, \mP_-$ and $\mQ$. In order to produce clean total variation guarantees, these rejection kernels also exploit symmetry in their three input distributions on $\{-1, 0, 1\}$.
Let $\text{Tern}(a, \mu_1, \mu_2)$ where $a \in (0, 1)$ and $\mu_1, \mu_2 \in \mathbb{R}$ denote the probability distribution on $\{-1, 0, 1\}$ such that if $B \sim \text{Tern}(a, \mu_1, \mu_2)$ then $$\bP[B = -1] = \frac{1 - a}{2} - \mu_1 + \mu_2, \quad \bP[B = 0] = a - 2\mu_2, \quad \bP[B = 1] = \frac{1 - a}{2} + \mu_1 + \mu_2$$ if all three of these probabilities are nonnegative. The map $3\pr{-srk}(B)$, shown in Figure \[fig:srej-kernel\], sends an input $B \in \{-1, 0, 1\}$ to an output in $X$ simultaneously satisfying three Markov transition properties:
1. if $B \sim \text{Tern}(a, \mu_1, \mu_2)$, then $3\textsc{-srk}(B)$ is close to $\mP_+$ in total variation;
2. if $B \sim \text{Tern}(a, -\mu_1, \mu_2)$, then $3\textsc{-srk}(B)$ is close to $\mP_-$ in total variation; and
3. if $B \sim \text{Tern}(a, 0, 0)$, then $3\textsc{-srk}(B)$ is close to $\mQ$ in total variation.
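For concreteness, a minimal pure-Python sketch of the $\text{Tern}(a, \mu_1, \mu_2)$ distribution (the helper names are ours). Note that its mean is exactly $2\mu_1$, which is how the sign of $\mu_1$ separates the three input distributions above:

```python
import random

def tern_pmf(a, mu1, mu2):
    # Probability mass function of Tern(a, mu1, mu2) on {-1, 0, 1}.
    pmf = {
        -1: (1 - a) / 2 - mu1 + mu2,
         0: a - 2 * mu2,
         1: (1 - a) / 2 + mu1 + mu2,
    }
    assert all(p >= 0 for p in pmf.values()), "Tern(a, mu1, mu2) is not well-defined"
    return pmf

def sample_tern(a, mu1, mu2, rng=random):
    pmf = tern_pmf(a, mu1, mu2)
    return rng.choices([-1, 0, 1], weights=[pmf[-1], pmf[0], pmf[1]])[0]

pmf = tern_pmf(0.5, 0.05, 0.02)
total = sum(pmf.values())                    # = 1
mean = sum(x * p for x, p in pmf.items())    # = 2 * mu1 = 0.1
mean0 = sum(x * p for x, p in tern_pmf(0.5, 0.0, 0.0).items())  # = 0
```

Flipping the sign of $\mu_1$ flips the mean to $-2\mu_1$ while leaving $\bP[B = 0]$ unchanged, matching the three input distributions in the list above.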
In order to state our main results for $3\pr{-srk}(B)$, we will need the notion of computable pairs from [@brennan2019universality]. The definition below is that given in [@brennan2019universality], without the assumption of finiteness of KL divergences. This assumption was convenient for the Chernoff exponent analysis needed for multivariate rejection kernels in [@brennan2019universality]. Since our rejection kernels are univariate, we will be able to state our universality conditions directly in terms of tail bounds rather than Chernoff exponents.
\[def:computable\] Define a pair of sequences of distributions $(\mP, \mQ)$ over a measurable space $(X, \mathcal{B})$ where $\mP = (\mP_n)$ and $\mQ = (\mQ_n)$ to be computable if:
1. there is an oracle producing a sample from $\mQ_n$ in $\textnormal{poly}(n)$ time;
2. for all $n$, $\mP_n$ and $\mQ_n$ are mutually absolutely continuous and the likelihood ratio satisfies $$\bE_{x \sim \mQ_n} \left[\frac{d\mP_n}{d\mQ_n}(x) \right] = \bE_{x \sim \mP_n}\left[\left( \frac{d\mP_n}{d\mQ_n}(x) \right)^{-1} \right] = 1$$ where $\frac{d\mP_n}{d\mQ_n}$ is the Radon-Nikodym derivative; and
3. there is an oracle computing $\frac{d\mP_n}{d\mQ_n} (x)$ in $\textnormal{poly}(n)$ time for each $x \in X$.
We remark that the second condition above always holds for discrete distributions and generally for most well-behaved distributions $\mP$ and $\mQ$. We now state our main total variation guarantees for $3\pr{-srk}$. The proof of the next lemma follows a similar structure to the analysis of rejection sampling as in Lemma 5.1 of [@brennan2018reducibility] and Lemma 5.1 of [@brennan2019universality]. However, the bounds that we obtain are different from those in [@brennan2018reducibility; @brennan2019universality] because of the symmetry of the three input $\text{Tern}$ distributions. The proof of this lemma is deferred to Appendix \[subsec:appendix-3-ary\].
\[lem:srk\] Let $a \in (0, 1)$ and $\mu_1, \mu_2 \in \mathbb{R}$ be nonzero and such that $\textnormal{Tern}(a, \mu_1, \mu_2)$ is well-defined. Let $\mP_+, \mP_-$ and $\mQ$ be distributions over a measurable space $(X, \mathcal{B})$ such that $(\mP_+, \mQ)$ and $(\mP_-, \mQ)$ are computable pairs with respect to a parameter $n$. Let $S \subseteq X$ be the set $$S = \left\{x \in X : 2|\mu_1| \ge \left| \frac{d\mP_+}{d\mQ} (x) - \frac{d\mP_-}{d\mQ} (x) \right| \quad \textnormal{and} \quad \frac{2|\mu_2|}{\max\{a, 1 - a\}} \ge \left|\frac{d\mP_+}{d\mQ} (x) + \frac{d\mP_-}{d\mQ} (x) - 2 \right| \right\}$$ Given a positive integer $N$, then the algorithm $3\textsc{-srk} : \{-1, 0, 1\} \to X$ can be computed in $\textnormal{poly}(n, N)$ time and satisfies that $$\left. \begin{array}{r} \TV\left( 3\textsc{-srk}(\textnormal{Tern}(a, \mu_1, \mu_2)), \mP_+ \right) \\ \TV\left( 3\textsc{-srk}(\textnormal{Tern}(a, -\mu_1, \mu_2)), \mP_- \right) \\\TV\left( 3\textsc{-srk}(\textnormal{Tern}(a, 0, 0)), \mQ \right) \end{array} \right\} \le 2\delta \left(1 + |\mu_1|^{-1} + |\mu_2|^{-1} \right) + \left( \frac{1}{2} + \delta \left( 1 + |\mu_1|^{-1} + |\mu_2|^{-1} \right) \right)^N$$ where $\delta > 0$ is such that $\bP_{X \sim \mP_+}[X \not \in S]$, $\bP_{X \sim \mP_-}[X \not \in S]$ and $\bP_{X \sim \mQ}[X \not \in S]$ are upper bounded by $\delta$.
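To illustrate the set $S$ and the statistics $\mL_1, \mL_2$, consider the hypothetical instantiation $\mP_+ = \mN(\mu, 1)$, $\mP_- = \mN(-\mu, 1)$ and $\mQ = \mN(0, 1)$ (this particular choice of distributions is ours, for illustration only). Then $\frac{d\mP_\pm}{d\mQ}(x) = \exp(\pm \mu x - \mu^2/2)$, so that $\mL_1(x) = 2e^{-\mu^2/2}\sinh(\mu x)$ and $\mL_2(x) = 2e^{-\mu^2/2}\cosh(\mu x) - 2$, and $S$ is an interval around the origin. A pure-Python check of these closed forms:

```python
from math import exp, sinh, cosh

mu = 0.3

def lr_plus(x):   # dP+/dQ for P+ = N(mu, 1) and Q = N(0, 1)
    return exp(mu * x - mu ** 2 / 2)

def lr_minus(x):  # dP-/dQ for P- = N(-mu, 1)
    return exp(-mu * x - mu ** 2 / 2)

def L1(x):
    return lr_plus(x) - lr_minus(x)

def L2(x):
    return lr_plus(x) + lr_minus(x) - 2

# Closed forms claimed in the lead-in, evaluated at a test point:
x = 0.7
closed_L1 = 2 * exp(-mu ** 2 / 2) * sinh(mu * x)
closed_L2 = 2 * exp(-mu ** 2 / 2) * cosh(mu * x) - 2
```

Since $|\mL_1|$ and $|\mL_2|$ grow monotonically in $|x|$ here, the tail bound $\delta$ in the lemma reduces to a Gaussian tail probability in this instantiation.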
Dense Bernoulli Rotations {#sec:2-bernoulli-rotations}
=========================
In this section, we formally introduce dense Bernoulli rotations and constructions for their design matrices and tensors, which will play an essential role in all of our reductions. For an overview of the main high level ideas underlying these techniques, see Sections \[subsec:1-tech-dbr\] and \[subsec:1-tech-design-matrices\]. As mentioned in Section \[subsec:1-tech-dbr\], dense Bernoulli rotations map $\pr{pb}(T, i, p, q)$ to $\mN\left( \mu \lambda^{-1} \cdot A_{i}, I_m\right)$ for each $i \in [T]$ and $\textnormal{Bern}(q)^{\otimes T}$ to $\mN\left( 0, I_m\right)$ approximately in total variation, where $\mu = \tilde{\Theta}(1)$, the vectors $A_1, A_2, \dots, A_T \in \mathbb{R}^m$ are for us to design and $\lambda$ is an upper bound on the singular values of the matrix with columns $A_i$.
Simplifying some technical details, our reduction to $\pr{rsme}$ in Section \[subsec:3-rsme-reduction\] roughly proceeds as follows: (1) its input is a $k\pr{-bpc}$ instance with parts of size $M$ and $N$ and biclique dimensions $k = k_M$ and $k_N$; (2) it applies dense Bernoulli rotations with $p = 1$ and $q = 1/2$ to the $Mk_N$ vectors of length $T = N/k_N$ representing the adjacency patterns in $\{0, 1\}^{N/k_N}$ between each of the $M$ left vertices and each part in the partition of the right vertices; and (3) it pads the resulting matrix with standard normals so that it has $d$ rows. Under $H_1$, the result is a $d \times k_N m$ matrix $\mathbf{1}_S u^\top + \mN(0, 1)^{\otimes d \times k_N m}$ where $S$ is the left vertex set of the biclique and $u$ consists of scaled concatenations of the $A_i$. We design the adversary so that the target data matrix $D$ in $\pr{rsme}$ is roughly of the form $$D_{ij} \sim \left\{ \begin{array}{ll} \mN\left(\tau k^{-1/2}, 1\right) &\text{if } i \in S \text{ and } j \text{ is not corrupted} \\ \mN\left(\epsilon^{-1}(1 - \epsilon) \tau k^{-1/2}, 1\right) &\text{if } i \in S \text{ and } j \text{ is corrupted} \\ \mN(0, 1) &\text{otherwise} \end{array} \right.$$ for each $i \in [d]$ and $j \in [n]$ where $n = k_N m$. Matching the two distributions above, we arrive at the following desiderata for the $A_i$.
- We would like each $\lambda^{-1} A_i$ to consist of $(1 - \epsilon') m$ entries equal to $\tau k^{-1/2}$ and $\epsilon' m$ entries equal to $\epsilon'^{-1}(1 - \epsilon') \tau k^{-1/2}$ where $\tau$ is just below the desired computational barrier $\tau = \tilde{\Theta}(k^{1/2} \epsilon^{1/2} n^{-1/4})$ and $\epsilon' \le \epsilon$ where $\epsilon' = \Theta(\epsilon)$.
- Now observe that the norm of any such $\lambda^{-1} A_i$ is $\Theta\left( \tau \epsilon^{-1/2} m^{1/2} k^{-1/2} \right)$ which is just below a norm of $\tilde{\Theta}(m^{1/2} n^{-1/4})$ at the computational barrier for $\pr{rsme}$. Note that the normalization by $\lambda^{-1}$ ensures that each $\lambda^{-1} A_i$ has $\ell_2$ norm at most $1$. To be as close to the computational barrier as possible, it is necessary that $m^{1/2} n^{-1/4} = \tilde{\Theta}(1)$ which rearranges to $m = \tilde{\Theta}(k_N)$ since $n = k_N m$.
- When the input is an instance of $k\pr{-bpc}$ nearly at its computational barrier, we have that $N = \tilde{\Theta}(k_N^2)$ and thus our necessary condition above implies that $m = \tilde{\Theta}(N/k_N) = \tilde{\Theta}(T)$, and hence that $A$ is nearly square. Furthermore, if we take the $A_i$ to be unit vectors, our desiderata that the $\lambda^{-1} A_i$ have norm $\tilde{\Theta}(m^{1/2} n^{-1/4})$ reduces to $\lambda = \tilde{\Theta}(1)$.
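The norm computation in the second bullet can be checked directly: the squared norm of such a column is $\frac{m\tau^2}{k} \cdot \frac{1 - \epsilon'}{\epsilon'}$, i.e. the norm is $\Theta(\tau \epsilon^{-1/2} m^{1/2} k^{-1/2})$ as claimed. A pure-Python sketch with hypothetical parameter values (all concrete numbers are ours, and `eps` plays the role of $\epsilon'$):

```python
from math import sqrt

# Hypothetical small instance: a single column lambda^{-1} A_i with
# (1 - eps) m entries equal to tau / sqrt(k) and eps m entries equal to
# eps^{-1} (1 - eps) tau / sqrt(k), as in the first bullet.
m, k, eps, tau = 100, 25, 0.1, 0.3
small = tau / sqrt(k)
large = (1 - eps) * tau / (eps * sqrt(k))
col = [small] * round((1 - eps) * m) + [large] * round(eps * m)

# Squared norm simplifies to (m tau^2 / k) * (1 - eps) / eps, so the norm is
# Theta(tau eps^{-1/2} m^{1/2} k^{-1/2}).
norm_sq = sum(v * v for v in col)
predicted = (m * tau ** 2 / k) * (1 - eps) / eps
```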
Summarizing this discussion, we arrive at exactly the three conditions outlined in Section \[subsec:1-tech-design-matrices\]. We remark that while these desiderata are tailored to $\pr{rsme}$, they will also turn out to be related to the desired properties of $A$ in our other reductions. We now formally introduce dense Bernoulli rotations.
Mapping Planted Bits to Spiked Gaussian Tensors {#subsec:2-planted-bits}
-----------------------------------------------
**Algorithm** <span style="font-variant:small-caps;">Bern-Rotations</span>
*Inputs*: Vector $V \in \{0, 1\}^n$, rejection kernel parameter $R_{\pr{rk}}$, Bernoulli probability parameters $0 < q < p \le 1$, output dimension $m$, an $m \times n$ matrix $A$ with singular values all at most $\lambda > 0$, intermediate mean parameter $\mu > 0$
1. Form $V_1 \in \mathbb{R}^n$ by applying $\pr{Gaussianize}$ to the entries in the vector $V$ with rejection kernel parameter $R_{\pr{rk}}$, Bernoulli probabilities $q$ and $p$ and target mean parameters all equal to $\mu$.
2. Sample a vector $U \sim \mN(0, 1)^{\otimes m}$ and let $\left(I_m - \lambda^{-2} \cdot AA^\top\right)^{1/2}$ be the positive semidefinite square root of $I_m - \lambda^{-2} \cdot AA^\top$. Now form the vector $$V_2 = \lambda^{-1} \cdot AV_1 +\left(I_m - \lambda^{-2} \cdot AA^\top\right)^{1/2}U$$
3. Output the vector $V_2$.
**Algorithm** <span style="font-variant:small-caps;">Tensor-Bern-Rotations</span>
*Inputs*: Order $s$ tensor $T \in \mathcal{T}_{s, n}(\{0, 1\})$, rejection kernel parameter $R_{\pr{rk}}$, Bernoulli probability parameters $0 < q < p \le 1$, output dimension $m$, $m \times n$ matrices $A_1, A_2, \dots, A_s$ with singular values less than or equal to $\lambda_1, \lambda_2, \dots, \lambda_s > 0$, respectively, mean parameter $\mu > 0$
1. Flatten $T$ into the vector $V_1 \in \{0, 1\}^{n^s}$, form the Kronecker product $A = A_1 \otimes A_2 \otimes \cdots \otimes A_s$ and set $\lambda = \lambda_1 \lambda_2 \cdots \lambda_s$.
2. Let $V_2$ be the output of $\pr{Bern-Rotations}$ applied to $V_1$ with parameters $R_{\pr{rk}}$, $0 < q < p \le 1, A, \lambda, \mu$ and output dimension $m^s$.
3. Rearrange the entries of $V_2$ into a tensor $T_1 \in \mathcal{T}_{s, m}(\mathbb{R})$ and output $T_1$.
Let $\pr{pb}(n, i, p, q)$ and $\pr{pb}(S, i, p, q)$ denote the planted bit distributions defined in Sections \[subsec:1-tech-dbr\] and \[subsec:2-notation\]. The procedure $\textsc{Bern-Rotations}$ and its tensor analogue $\textsc{Tensor-Bern-Rotations}$ are shown in Figure \[fig:bern-rotations\]. Recall that the subroutine $\textsc{Gaussianize}$ was introduced in Figure \[fig:rej-kernel\]. Note that positive semidefinite square roots of $n \times n$ matrices can be computed in $\text{poly}(n)$ time. The two key Markov transition properties for these procedures that will be used throughout the paper are as follows.
\[lem:bern-rotations\] Let $m$ and $n$ be positive integers and let $A \in \mathbb{R}^{m \times n}$ be a matrix with singular values all at most $\lambda > 0$. Let $R_{\pr{rk}}$, $0 < q < p \le 1$ and $\mu$ be as in Lemma \[lem:5c\]. Let $\mathcal{A}$ denote $\textsc{Bern-Rotations}$ applied with rejection kernel parameter $R_{\pr{rk}}$, Bernoulli probability parameters $0 < q < p \le 1$, output dimension $m$, matrix $A$ with singular value upper bound $\lambda$ and mean parameter $\mu$. Then $\mathcal{A}$ runs in $\textnormal{poly}(n, R_{\pr{rk}})$ time and it holds that $$\begin{aligned}
\TV\left( \mathcal{A}\left( \pr{pb}(n, i, p, q) \right), \, \mN\left( \mu \lambda^{-1} \cdot A_{\cdot, i}, I_m\right) \right) &= O\left(n \cdot R_{\pr{rk}}^{-3}\right) \\
\TV\left( \mathcal{A}\left( \textnormal{Bern}(q)^{\otimes n} \right), \, \mN\left( 0, I_m\right) \right) &= O\left(n \cdot R_{\pr{rk}}^{-3}\right)\end{aligned}$$ for all $i \in [n]$, where $A_{\cdot, i}$ denotes the $i$th column of $A$.
Let $\mathcal{A}_1$ denote the first step of $\mathcal{A} = \pr{Bern-Rotations}$ with input $V$ and output $V_1$, and let $\mathcal{A}_2$ denote the second step of $\mathcal{A}$ with input $V_1$ and output $V_2$. Fix some index $i \in [n]$. Now Lemma \[lem:gaussianize\] implies $$\label{eqn:gaussianize1}
\TV\left( \mathcal{A}_1\left( \pr{pb}(n, i, p, q) \right), \, \mN\left( \mu \cdot e_i, I_n \right) \right) = O\left(n \cdot R_{\pr{rk}}^{-3}\right)$$ where $e_i \in \mathbb{R}^n$ is the $i$th canonical basis vector. Suppose that $V_1 \sim \mN\left( \mu \cdot e_i, I_n \right)$ and let $V_1 = \mu \cdot e_i + W$ where $W \sim \mN(0, I_n)$. Note that the entries of $AW$ are jointly Gaussian and $\text{Cov}(AW) = AA^\top$. Therefore, we have that $$AV_1 = \mu \cdot A_{\cdot, i} + AW \sim \mN\left( \mu \cdot A_{\cdot, i}, AA^\top \right)$$ If $U \sim \mN(0, 1)^{\otimes m}$ is independent of $W$, then the entries of $AW + \left(\lambda^2 \cdot I_m - AA^\top\right)^{1/2}U$ are jointly Gaussian. Furthermore, since both terms are mean zero and independent the covariance matrix of this vector is given by $$\begin{aligned}
\text{Cov}\left( AW + \left(\lambda^2 \cdot I_m - AA^\top\right)^{1/2}U \right) &= \text{Cov}\left( AW \right) + \text{Cov}\left( \left(\lambda^2 \cdot I_m - AA^\top\right)^{1/2}U \right) \\
&= AA^\top + (\lambda^2 \cdot I_m - AA^\top) = \lambda^2 \cdot I_m\end{aligned}$$ Therefore it follows that $AW + \left(\lambda^2 \cdot I_m - AA^\top\right)^{1/2}U \sim \mN(0, \lambda^2 \cdot I_m)$ and furthermore that $$V_2 = \lambda^{-1} \cdot AV_1 +\left(I_m - \lambda^{-2} \cdot AA^\top\right)^{1/2}U \sim \mN\left( \mu \lambda^{-1} \cdot A_{\cdot, i}, I_m\right)$$ where $V_2 \sim \mathcal{A}_2\left( \mN\left( \mu \cdot e_i, I_n \right) \right)$. Now applying $\mathcal{A}_2$ to both distributions in Equation (\[eqn:gaussianize1\]) and the data-processing inequality prove that $\TV\left( \mathcal{A}\left( \pr{pb}(n, i, p, q) \right), \, \mN\left( \mu \lambda^{-1} \cdot A_{\cdot, i}, I_m\right) \right) = O\left(n \cdot R_{\pr{rk}}^{-3}\right)$. The same analysis of $\mathcal{A}_2$ applied with $\mu = 0$ yields that $\mathcal{A}_2\left( \mN(0, I_n) \right) \sim \mN(0, I_m)$. Combining this with $$\TV\left( \mathcal{A}_1\left( \textnormal{Bern}(q)^{\otimes n} \right), \, \mN\left( 0, I_n \right) \right) = O\left(n \cdot R_{\pr{rk}}^{-3}\right)$$ from Lemma \[lem:gaussianize\] now yields the bound $\TV\left( \mathcal{A}\left( \textnormal{Bern}(q)^{\otimes n} \right), \, \mN\left( 0, I_m\right) \right) = O\left(n \cdot R_{\pr{rk}}^{-3}\right)$, which completes the proof of the lemma.
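To make the covariance calculation in this proof concrete, the following pure-Python sketch carries out step 2 of $\textsc{Bern-Rotations}$ in the special case where $A$ has orthogonal rows, so that $AA^\top$ is diagonal and the positive semidefinite square root of $I_m - \lambda^{-2} AA^\top$ can be taken entrywise (a general $A$ requires a full matrix square root; the function name and numbers are ours):

```python
import random
from math import sqrt

def bern_rotation(V1, A, lam, rng=random):
    # Step 2 of Bern-Rotations, specialized to A with orthogonal rows:
    # A A^T is then diagonal, so the PSD square root of I_m - lam^{-2} A A^T
    # can be taken entrywise.
    m, n = len(A), len(A[0])
    gram_diag = [sum(a * a for a in row) for row in A]   # diagonal of A A^T
    assert all(lam * lam + 1e-12 >= g for g in gram_diag), "lam must bound the singular values"
    U = [rng.gauss(0.0, 1.0) for _ in range(m)]
    AV = [sum(A[i][j] * V1[j] for j in range(n)) for i in range(m)]
    return [AV[i] / lam + sqrt(max(1.0 - gram_diag[i] / lam ** 2, 0.0)) * U[i]
            for i in range(m)]

A = [[1.0, 0.0, 0.0], [0.0, 2.0, 0.0]]   # orthogonal rows; singular values 1 and 2
V2 = bern_rotation([1.0, 0.0, 1.0], A, lam=2.0)
# The second row attains the singular value bound lam, so its coordinate gets
# no added Gaussian noise: V2[1] = (A V1)[1] / lam = 0 exactly here.
```

The added noise term is exactly what restores the identity covariance: directions in which $A$ already contributes variance $\lambda^2$ receive none.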
\[cor:tensor-bern-rotations\] Let $s, m$ and $n$ be positive integers, let $A_1, A_2, \dots, A_s \in \mathbb{R}^{m \times n}$ be matrices with singular values less than or equal to $\lambda_1, \lambda_2, \dots, \lambda_s > 0$, respectively. Let $R_{\pr{rk}}$, $0 < q < p \le 1$ and $\mu$ be as in Lemma \[lem:5c\]. Let $\mathcal{A}$ denote $\textsc{Tensor-Bern-Rotations}$ applied with parameters $0 < q < p \le 1$, output dimension $m$, matrix $A = A_1 \otimes A_2 \otimes \cdots \otimes A_s$ with singular value upper bound $\lambda = \lambda_1 \lambda_2 \cdots \lambda_s$ and mean parameter $\mu$. If $s$ is a constant, then $\mathcal{A}$ runs in $\textnormal{poly}(n, R_{\pr{rk}})$ time and it holds that for each $e \in [n]^s$, $$\begin{aligned}
\TV\left( \mathcal{A}\left( \pr{pb}_s(n, e, p, q) \right), \, \mN\left( \mu (\lambda_1\lambda_2 \cdots \lambda_s)^{-1} \cdot A_{\cdot, e_1} \otimes A_{\cdot, e_2} \otimes \cdots \otimes A_{\cdot, e_s}, I_m^{\otimes s} \right) \right) &= O\left( n^s \cdot R_{\pr{rk}}^{-3} \right) \\
\TV\left( \mathcal{A}\left( \textnormal{Bern}(q)^{\otimes n^{\otimes s}} \right), \, \mN\left( 0, I_m^{\otimes s} \right) \right) &= O\left( n^s \cdot R_{\pr{rk}}^{-3} \right)\end{aligned}$$ where $A_{\cdot, i}$ denotes the $i$th column of $A$.
Let $\sigma_i^j$ for $1 \le i \le r_j$ be the nonzero singular values of $A_j$ for each $1 \le j \le s$. Then the nonzero singular values of the Kronecker product $A = A_1 \otimes A_2 \otimes \cdots \otimes A_s$ are all of the products $\sigma_{i_1}^1 \sigma_{i_2}^2 \cdots \sigma_{i_s}^s$ for all $(i_1, i_2, \dots, i_s)$ with $1 \le i_j \le r_j$ for each $1 \le j \le s$. Thus if $\sigma_i^j \le \lambda_j$ for each $1 \le j \le s$, then $\lambda = \lambda_1 \lambda_2 \cdots \lambda_s$ is an upper bound on the singular values of $A$. The corollary now follows by applying Lemma \[lem:bern-rotations\] with parameters $p, q, \mu$ and $\lambda$, matrix $A$, output dimension $m^s$ and input dimension $n^s$.
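Both facts used here — that the Gram matrix of a Kronecker product factors as $(A_1 \otimes A_2)(A_1 \otimes A_2)^\top = (A_1 A_1^\top) \otimes (A_2 A_2^\top)$, which is why the singular values multiply, and that with row-major flattening $(A_1 \otimes A_2)\,\textnormal{vec}(T) = \textnormal{vec}(A_1 T A_2^\top)$, which is why step 2 acts on each index of the tensor separately — can be checked exactly on small integer matrices. A pure-Python sketch for $s = 2$ (helper names ours):

```python
def kron(A, B):
    # Kronecker product of matrices given as lists of rows (row-major blocks).
    return [[a * b for a in ra for b in rb] for ra in A for rb in B]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(c) for c in zip(*A)]

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

A1 = [[1, 2, 0], [0, 1, 1]]        # 2 x 3
A2 = [[2, 1], [0, 3]]              # 2 x 2

# Mixed-product property: the Gram of the Kronecker product is the Kronecker
# product of the Grams, so singular values of A1 (x) A2 are pairwise products.
gram_lhs = matmul(kron(A1, A2), transpose(kron(A1, A2)))
gram_rhs = kron(matmul(A1, transpose(A1)), matmul(A2, transpose(A2)))

# Row-major vec identity: (A1 (x) A2) vec(T) = vec(A1 T A2^T).
T = [[1, 4], [2, 5], [0, 3]]       # 3 x 2 input "tensor" (s = 2)
vecT = [x for row in T for x in row]
vec_lhs = matvec(kron(A1, A2), vecT)
vec_rhs = [x for row in matmul(matmul(A1, T), transpose(A2)) for x in row]
```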
$\mathbb{F}_r^t$ Design Matrices {#subsec:2-design-matrices}
--------------------------------
In this section, we introduce a family of matrices $K_{r, t}$ that plays a key role in constructing the matrices $A$ in our applications of dense Bernoulli rotations. Throughout this section, $r$ will denote a prime number and $t$ will denote a fixed positive integer. As outlined in the beginning of this section and in Section \[subsec:1-tech-design-matrices\], there are three desiderata of the matrices $K_{r, t}$ that are needed for our applications of dense Bernoulli rotations. In the context of $K_{r, t}$, these three properties are:
1. The rows of $K_{r, t}$ are unit vectors and close to orthogonal in the sense that the largest singular value of $K_{r, t}$ is bounded above by a constant.
2. The matrices $K_{r, t}$ contain exactly two distinct real values as entries.
3. The matrices $K_{r, t}$ contain a fraction of approximately $1/r$ negative entries per column.
The matrices $K_{r, t}$ are constructed based on the incidence structure of the points in $\mathbb{F}_r^t$ with the Grassmanian of hyperplanes in $\mathbb{F}_r^t$ and their affine shifts. The construction of $K_{r, t}$ is motivated by the projective geometry codes and their applications to constructing 2-block designs. We remark that a classic trick counting the number of ordered $d$-tuples of linearly independent vectors in $\mathbb{F}_r^t$ shows that the number of $d$-dimensional subspaces of $\mathbb{F}_r^t$ is $$|\text{Gr}(d, \mathbb{F}_r^t)| = \frac{(r^t - 1)(r^t - r) \cdots (r^t - r^{d - 1})}{(r^d - 1)(r^d - r) \cdots (r^d - r^{d - 1})}$$ This implies that the number of hyperplanes in $\mathbb{F}_r^t$ is $\ell = \frac{r^t - 1}{r - 1}$. We now give the definition of the matrix $K_{r, t}$ as a weighted incidence matrix between the points of $\mathbb{F}_r^t$ and affine shifts of the hyperplanes in the Grassmanian $\text{Gr}(t - 1, \mathbb{F}_r^t)$.
\[defn:Krt\] Let $P_1, P_2, \dots, P_{r^t}$ be an enumeration of the points in $\mathbb{F}_r^t$ and $V_1, V_2, \dots, V_\ell$, where $\ell = \frac{r^t - 1}{r - 1}$, be an enumeration of the hyperplanes in $\mathbb{F}_r^t$. For each $V_i$, let $u_i \neq 0$ denote a vector in $\mathbb{F}_r^t$ not contained in $V_i$. Define $K_{r, t} \in \mathbb{R}^{r\ell \times r^t}$ to be the matrix with the following entries $$(K_{r, t})_{r(i-1) + a + 1, j} = \frac{1}{\sqrt{r^t(r - 1)}} \cdot \left\{ \begin{matrix} 1 & \textnormal{if } P_j \not \in V_i + au_i \\ 1 - r & \textnormal{if } P_j \in V_i + au_i \end{matrix} \right.$$ for each $a \in \{0, 1, \dots, r - 1\}$ where $V_i + v$ denotes the affine shift of $V_i$ by $v$.
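For small parameters, the objects in Definition \[defn:Krt\] can be enumerated by brute force. The sketch below (pure Python, helper name ours) lists the hyperplanes of $\mathbb{F}_r^t$ as kernels of nonzero linear functionals, identifying functionals that are scalar multiples of one another, and confirms the count $\ell = \frac{r^t - 1}{r - 1}$:

```python
from itertools import product

def hyperplanes(r, t):
    # Hyperplanes of F_r^t are kernels {x : <w, x> = 0 mod r} of nonzero
    # linear functionals w; scalar multiples of w cut out the same hyperplane.
    points = list(product(range(r), repeat=t))
    kernels = set()
    for w in points:
        if any(w):
            kernels.add(frozenset(
                p for p in points
                if sum(a * b for a, b in zip(w, p)) % r == 0))
    return kernels

r, t = 3, 2
ell = (r ** t - 1) // (r - 1)   # predicted count: (9 - 1) / 2 = 4
planes = hyperplanes(r, t)
```

Each hyperplane returned has exactly $r^{t-1}$ points, consistent with the proof of Lemma \[lem:suborthogonalmatrices\] below.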
We now establish the key properties of $K_{r, t}$ in the following simple lemma. Note that the lemma implies that the submatrix consisting of the rows of $K_{r, t}$ corresponding to hyperplanes in $\mathbb{F}_r^t$ has rows that are exactly orthogonal. However, the additional rows of $K_{r, t}$ corresponding to affine shifts of these hyperplanes will prove crucial in preserving *tightness to algorithms* in our average-case reductions. As established in the subsequent lemma, these additional rows only mildly perturb the largest singular value of the matrix.
\[lem:suborthogonalmatrices\] If $r \ge 2$ is prime, then $K_{r, t}$ satisfies that:
1. for each $1 \le i \le r\ell$, it holds that $\|(K_{r, t})_i\|_2 = 1$;
2. the inner product between the rows $(K_{r, t})_i$ and $(K_{r, t})_j$ where $i \neq j$ is given by $$\langle (K_{r, t})_i, (K_{r, t})_j \rangle = \left\{ \begin{array}{ll} -(r - 1)^{-1} & \textnormal{if } \lfloor (i-1)/r\rfloor = \lfloor (j-1)/r\rfloor \\ 0 & \textnormal{otherwise} \end{array} \right.$$
3. each column of $K_{r, t}$ contains exactly $\frac{r^t - 1}{r - 1}$ entries equal to $\frac{1 - r}{\sqrt{r^t(r - 1)}}$.
Let $r_i$ denote the $i$th row $(K_{r, t})_i$ of $K_{r, t}$. Fix a pair $1 \le i < j \le r\ell$ and let $1 \le i' \le j' \le \ell$ and $a, b \in \{0, 1, \dots, r - 1\}$ be such that $i = r(i' - 1) + a + 1$ and $j = r(j' - 1) + b + 1$. The affine subspaces of $\mathbb{F}_r^t$ corresponding to $r_i$ and $r_j$ are then $A_i = V_{i'} + au_{i'}$ and $A_j = V_{j'} + bu_{j'}$. Observe that $$\| r_i \|_2^2 = (r^t - |A_i|) \cdot \frac{1}{r^t(r - 1)} + |A_i| \cdot \frac{(1 - r)^2}{r^t(r - 1)} = 1$$ Similarly, we have that $$\langle r_i, r_j \rangle = (r^t - |A_i \cup A_j|) \cdot \frac{1}{r^t(r - 1)} + (|A_i \cup A_j| - |A_i \cap A_j|) \cdot \frac{1 - r}{r^t(r - 1)} + |A_i \cap A_j| \cdot \frac{(1 - r)^2}{r^t(r - 1)}$$ for each $1 \le i, j \le r\ell$. Since the size of a subspace is invariant under affine shifts, we have that $|A_i| = |V_{i'}| = |A_j| = |V_{j'}| = r^{t - 1}$. Furthermore, since $A_i \cap A_j$ is the intersection of two affine shifts of subspaces of dimension $t - 1$ of $\mathbb{F}_r^t$, it follows that $A_i \cap A_j$ is either empty, an affine shift of a $(t - 2)$-dimensional subspace or equal to both $A_i$ and $A_j$. Note that if $i \neq j$, then $A_i$ and $A_j$ are distinct. We remark that when $t = 1$, each $A_i$ is an affine shift of the trivial hyperplane $\{0\}$ and thus is a singleton. Now note that the intersection $A_i \cap A_j$ is empty only if $A_i$ and $A_j$ are distinct affine shifts of the same hyperplane, which occurs if and only if $i' = j'$, or equivalently $\lfloor (i-1)/r\rfloor = \lfloor (j-1)/r\rfloor$. In this case, it follows that $|A_i \cup A_j| = |A_i| + |A_j| = 2r^{t - 1}$ and we have $$\begin{aligned}
\langle r_i, r_j \rangle &= (r^t - 2r^{t - 1}) \cdot \frac{1}{r^t(r - 1)} + 2r^{t - 1} \cdot \frac{1 - r}{r^t(r - 1)} = - (r - 1)^{-1}\end{aligned}$$ If $i' \neq j'$, then $A_i \cap A_j$ is the affine shift of a $(t - 2)$-dimensional subspace, which implies that $|A_i \cap A_j| = r^{t - 2}$. Furthermore, $|A_i \cup A_j| = |A_i| + |A_j| - |A_i \cap A_j| = 2r^{t - 1} - r^{t - 2}$. In this case, we have that $$\langle r_i, r_j \rangle = (r - 1)^2 \cdot \frac{1}{r^2(r - 1)} - 2(r - 1) \cdot \frac{1}{r^2} + \frac{r - 1}{r^2} = 0$$ This completes the proof of (2). We remark that this last case never occurs if $t = 1$. Now note that any point is in exactly one affine shift of each $V_i$. Therefore each column contains exactly $\ell$ negative entries, which proves (3).
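The properties in Lemma \[lem:suborthogonalmatrices\] can be verified numerically for small parameters. A pure-Python sketch building $K_{3,2}$ per Definition \[defn:Krt\], realizing the shift $V_i + a u_i$ as $\{x : \langle w_i, x\rangle = a\}$ for a defining functional $w_i$ (the labeling of shifts by $a$ depends on the choice of $w_i$, which does not affect the lemma; helper names ours):

```python
from itertools import product
from math import sqrt

def build_K(r, t):
    # Rows of K_{r,t} are indexed by (hyperplane V_i, shift a) and columns by
    # points P_j; the entry is (1 - r) if P_j lies in the shifted hyperplane
    # and 1 otherwise, scaled by 1 / sqrt(r^t (r - 1)).
    points = list(product(range(r), repeat=t))
    functionals, seen = [], set()
    for w in points:
        if any(w):
            V = frozenset(p for p in points
                          if sum(a * b for a, b in zip(w, p)) % r == 0)
            if V not in seen:
                seen.add(V)
                functionals.append(w)
    scale = 1.0 / sqrt(r ** t * (r - 1))
    return [[scale * ((1 - r) if sum(wc * pc for wc, pc in zip(w, p)) % r == a else 1)
             for p in points]
            for w in functionals for a in range(r)]

r, t = 3, 2
K = build_K(r, t)
ell = (r ** t - 1) // (r - 1)
dot = lambda u, v: sum(x * y for x, y in zip(u, v))

row_norms = [dot(row, row) for row in K]       # all equal to 1
same_block = dot(K[0], K[1])                   # -(r - 1)^{-1} = -0.5
cross_block = dot(K[0], K[r])                  # 0
negatives = [sum(1 for row in K if row[j] < 0) for j in range(r ** t)]  # all ell
```

The block structure of the inner products is exactly the block-diagonal Gram matrix used in Lemma \[lem:Krtsv\] below to compute the singular values.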
The next lemma uses the computation of $\langle (K_{r, t})_i, (K_{r, t})_j \rangle$ above to compute the singular values of $K_{r, t}$.
\[lem:Krtsv\] The nonzero singular values of $K_{r, t}$ are $\sqrt{1 + (r - 1)^{-1}}$ with multiplicity $(r - 1)\ell$.
Lemma \[lem:suborthogonalmatrices\] shows that $(K_{r, t})(K_{r, t})^\top$ is block-diagonal with $\ell$ blocks of dimension $r \times r$. Furthermore, each block is of the form $\left(1 + (r - 1)^{-1} \right) I_r - (r - 1)^{-1} \mathbf{1} \mathbf{1}^\top$. The eigenvalues of each of these blocks are $1 + (r - 1)^{-1}$ with multiplicity $r - 1$ and $0$ with multiplicity $1$. Thus the eigenvalues of $(K_{r, t})(K_{r, t})^\top$ are $1 + (r - 1)^{-1}$ and $0$ with multiplicities $(r - 1)\ell$ and $\ell$, respectively, implying the result.
$\mathbb{F}_r^t$ Design Tensors {#subsec:2-design-tensors}
-------------------------------
In this section, we introduce a family of tensors $T_{r, t}^{(V_i, V_j, L)}$ that will be used in $\pr{Tensor-Bern-Rotations}$ in the matrix case with $s = 2$ to map to hidden partition models in Section \[sec:3-hidden-partition\]. An overview of how these tensors will be used in dense Bernoulli rotations was given in Section \[subsec:1-tech-design-matrices\]. Similar to the previous section, the $T_{r, t}^{(V_i, V_j, L)}$ are constructed to have the following properties:
1. Given a pair of hyperplanes $(V_i, V_j)$ and a linear function $L : \mathbb{F}_r \to \mathbb{F}_r$, the slice $T_{r, t}^{(V_i, V_j, L)}$ of the constructed tensor is an $r^t \times r^t$ matrix with Frobenius norm $\left\| T_{r, t}^{(V_i, V_j, L)} \right\|_F = 1$.
2. These slices are approximately orthogonal in the sense that the Gram matrix with entries given by the matrix inner products $\text{Tr}\left( T_{r, t}^{(V_i, V_j, L)} \cdot T_{r, t}^{(V_{i'}, V_{j'}, L')} \right)$ has a bounded spectral norm.
3. Each slice $T_{r, t}^{(V_i, V_j, L)}$ contains two distinct entries and is an average signed adjacency matrix of a hidden partition model, i.e., it has these two entries arranged into an $r$-block community structure.
4. Matrices formed by specific concatenations of $T_{r, t}^{(V_i, V_j, L)}$ into larger matrices remain the average signed adjacency matrices of hidden partition models. This will be made precise in Lemma \[lem:comm-align-tensors\] and will be important in our reduction from $k\pr{-pc}$.
The construction of the family of tensors $T_{r, t}^{(V_i, V_j, L)}$ is another construction using the incidence geometry of $\mathbb{F}_r^t$, but is more involved than the two constructions in the previous section. Throughout this section, we let $V_1, V_2, \dots, V_\ell$ and $P_1, P_2, \dots, P_{r^t}$ be an enumeration of the hyperplanes and points of $\mathbb{F}_r^t$ as in Definition \[defn:Krt\]. Furthermore, for each $V_i$, we fix a particular point $u_i \neq 0$ of $\mathbb{F}_r^t$ not contained in $V_i$. In order to introduce the family $T_{r, t}^{(V_i, V_j, L)}$, we first define the following important class of bipartite graphs.
\[defn:Grt\] For each $1 \le i \le \ell$, let $A^i_0 \cup A_1^i \cup \cdots \cup A_{r - 1}^i$ be the partition of $\mathbb{F}_r^t$ given by the affine shifts $A^i_x = (V_i + xu_i)$ for each $x \in \mathbb{F}_r$. Given two hyperplanes $V_i, V_j$ and linear function $L : \mathbb{F}_r \to \mathbb{F}_r$, define the bipartite graph $G_{r, t}(V_i, V_j, L)$ with two parts of size $r^t$, each indexed by points in $\mathbb{F}_r^t$, as follows:
1. all of the edges between the points with indices in $A^i_x$ in the left part of $G_{r, t}(V_i, V_j, L)$ and the points with indices in $A^j_y$ in the right part are present if $L(x) = y$; and
2. none of the edges between the points of $A^i_x$ on the left and $A^j_y$ on the right are present if $L(x) \neq y$.
We now define the slices of the tensor $T_{r, t}$ to be weighted adjacency matrices of the bipartite graphs $G_{r, t}(V_i, V_j, L)$ as in the following definition.
\[defn:Trt\] For any two hyperplanes $V_i, V_j$ and linear function $L : \mathbb{F}_r \to \mathbb{F}_r$, define the $r^t \times r^t$ matrix $T_{r, t}^{(V_i, V_j, L)}$ to have entries given by $$\left( T_{r, t}^{(V_i, V_j, L)} \right)_{k,l} = \frac{1}{r^t \sqrt{r - 1}} \cdot \left\{ \begin{array}{ll} r - 1 & \textnormal{if } ( P_k, P_l ) \in E\left( G_{r, t}(V_i, V_j, L) \right) \\ -1 & \textnormal{otherwise} \end{array} \right.$$ for each $1 \le k, l \le r^t$.
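As a concrete instance of Definition \[defn:Trt\], take $r = 3$ and $t = 1$, where the only hyperplane is $V = \{0\}$ with shifts $A_x = \{x\}$, so a slice is a $3 \times 3$ matrix determined by $L$ alone. The pure-Python sketch below (names ours) checks that a slice has unit Frobenius norm and that two slices whose linear functions differ by a nonzero additive shift have entrywise (Frobenius) inner product $-(r - 1)^{-1}$ — the quantity that the edge-count expansion in the subsequent proof evaluates:

```python
from math import sqrt

def T_slice(r, L):
    # Slice T_{r,1}^{(V, V, L)} for t = 1: the unique hyperplane is V = {0},
    # its shifts are the singletons A_x = {x}, and the edge (k, l) of
    # G_{r,1}(V, V, L) is present iff L(k) = l.
    scale = 1.0 / (r * sqrt(r - 1))
    return [[scale * ((r - 1) if L(k) % r == l else -1) for l in range(r)]
            for k in range(r)]

r = 3
T1 = T_slice(r, lambda x: x)        # L(x) = x
T2 = T_slice(r, lambda x: x + 1)    # L'(x) = x + 1, so L' = L + a with a = 1

frob_sq = sum(v * v for row in T1 for v in row)                       # = 1
inner = sum(a * b for ra, rb in zip(T1, T2) for a, b in zip(ra, rb))  # = -0.5
```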
The next two lemmas establish that the tensor $T_{r, t}$ satisfies the four desiderata discussed above, which will be crucial in our reduction to hidden partition models.
\[lem:suborthogonaltensors\] If $r \ge 2$ is prime, then $T_{r, t}$ satisfies that:
1. for each $1 \le i, j \le \ell$ and linear function $L$, it holds that $\left\| T_{r, t}^{(V_i, V_j, L)} \right\|_F = 1$;
2. the inner product between the slices $T_{r, t}^{(V_i, V_j, L)}$ and $T_{r, t}^{(V_{i'}, V_{j'}, L')}$ where $(V_i, V_j, L) \neq (V_{i'}, V_{j'}, L')$ is $$\textnormal{Tr}\left( T_{r, t}^{(V_i, V_j, L)} \cdot T_{r, t}^{(V_{i'}, V_{j'}, L')} \right) = \left\{ \begin{array}{ll} -(r - 1)^{-1} & \textnormal{if } (V_i, V_j) = (V_{i'}, V_{j'}) \textnormal{ and } L = L' + a \textnormal{ for some } a \neq 0 \\ 0 & \textnormal{if } (V_i, V_j) \neq (V_{i'}, V_{j'}) \textnormal{ or } L \neq L' + a \textnormal{ for all } a \in \mathbb{F}_r \end{array} \right.$$
Fix two triples $(V_i, V_j, L)$ and $(V_{i'}, V_{j'}, L')$ and let $G_1 = G_{r, t}(V_i, V_j, L)$ and $G_2 = G_{r, t}(V_{i'}, V_{j'}, L')$. Now observe that $$\begin{aligned}
\textnormal{Tr}\left( T_{r, t}^{(V_i, V_j, L)} \cdot T_{r, t}^{(V_{i'}, V_{j'}, L')} \right) &= \frac{1}{r^{2t}(r - 1)} \cdot (r - 1)^2 \cdot |E(G_1) \cap E(G_2)| \nonumber \\
&\quad \quad - \frac{1}{r^{2t}(r - 1)} \cdot (r - 1) \cdot \left( |E(G_1) \cup E(G_2)| - |E(G_1) \cap E(G_2)| \right) \nonumber \\
&\quad \quad + \frac{1}{r^{2t}(r - 1)} \cdot \left( r^{2t} - |E(G_1) \cup E(G_2)| \right) \label{eqn:inner-matrix}\end{aligned}$$ Now note that since $L$ is a function, there are exactly $r$ pairs $(x, y) \in \mathbb{F}_r^2$ such that $L(x) = y$ and thus exactly $r$ pairs of left and right sets $(A^i_x, A^j_y)$ that are completely connected by edges in $G_1$. This implies that there are $|E(G_1)| = |E(G_2)| = r^{2t - 1}$ edges in both $G_1$ and $G_2$. We now will show that $$\label{eqn:int-sizes}
|E(G_1) \cap E(G_2)| = \left\{ \begin{array}{ll} r^{2t - 1} & \textnormal{if } (V_i, V_j, L) = (V_{i'}, V_{j'}, L') \\ r^{2t - 2} & \textnormal{if } (V_i, V_j) \neq (V_{i'}, V_{j'}) \textnormal{ or } L \neq L' + a \textnormal{ for all } a \in \mathbb{F}_r \\ 0 & \textnormal{if } (V_i, V_j) = (V_{i'}, V_{j'}) \textnormal{ and } L = L' + a \textnormal{ for some } a \neq 0 \end{array} \right.$$ We remark that, as in the proof of Lemma \[lem:suborthogonalmatrices\], it is never true that $(V_i, V_j) \neq (V_{i'}, V_{j'})$ if $t = 1$. The first case follows immediately from the fact that $|E(G_1)| = r^{2t - 1}$. Now consider the case in which $V_i \neq V_{i'}$ and $V_j \neq V_{j'}$. As in the proof of Lemma \[lem:suborthogonalmatrices\], any pair of affine subspaces $A^{i}_x$ and $A^{i'}_{x'}$ intersects either in an affine subspace of dimension $t - 2$, in an affine subspace of dimension $t - 1$ if $A^{i}_x$ and $A^{i'}_{x'}$ are equal, or in the empty set if $A^{i}_x$ and $A^{i'}_{x'}$ are distinct affine shifts of one another. Since $V_i \neq V_{i'}$, only the first of these three options is possible. Therefore, for all $x, x', y, y' \in \mathbb{F}_r$, it follows that $(A_x^i \times A_y^j) \cap (A_{x'}^{i'} \times A_{y'}^{j'}) = (A_x^i \cap A_{x'}^{i'}) \times (A_y^j \cap A_{y'}^{j'})$ has size $r^{2t - 4}$, since both $A_x^i \cap A_{x'}^{i'}$ and $A_y^j \cap A_{y'}^{j'}$ are affine subspaces of dimension $t - 2$. Now observe that $$|E(G_1) \cap E(G_2)| = \sum_{L(x) = y} \sum_{L'(x') = y'} \left| \left(A_x^i \times A_y^j\right) \cap \left(A_{x'}^{i'} \times A_{y'}^{j'}\right) \right| = r^2 \cdot r^{2t - 4} = r^{2t - 2}$$ since there are exactly $r$ pairs $(x, y)$ with $L(x) = y$. Now suppose that $V_i = V_{i'}$ and $V_j \neq V_{j'}$. In this case, we have that $A_x^i \cap A_{x'}^{i'}$ is empty if $x \neq x'$ and otherwise has size $|A_x^i| = r^{t - 1}$. 
Thus it follows that $$\left| \left(A_x^i \times A_y^j\right) \cap \left(A_{x'}^{i'} \times A_{y'}^{j'}\right) \right| = \left\{ \begin{array}{ll} r^{2t - 3} & \textnormal{if } x = x' \\ 0 &\textnormal{otherwise} \end{array} \right.$$ This implies that $$|E(G_1) \cap E(G_2)| = \sum_{L(x) = y} \sum_{L'(x') = y'} \left| \left(A_x^i \times A_y^j\right) \cap \left(A_{x'}^{i'} \times A_{y'}^{j'}\right) \right| = r \cdot r^{2t - 3} = r^{2t - 2}$$ since for each fixed $x = x'$, there is a unique pair $(y, y')$ with $L(x) = y$ and $L(x') = y'$. The case in which $V_i \neq V_{i'}$ and $V_j = V_{j'}$ is handled by a symmetric argument. Now suppose that $(V_i, V_j) = (V_{i'}, V_{j'})$. It follows that $(A_x^i \times A_y^j) \cap (A_{x'}^{i'} \times A_{y'}^{j'})$ has size $r^{2t - 2}$ if $x = x'$ and $y = y'$, and is empty otherwise. The formula above therefore implies that $|E(G_1) \cap E(G_2)|$ is $r^{2t - 2}$ times the number of solutions to $L(x) = L'(x)$. Since $L - L'$ is linear, the number of solutions is $0$ if $L - L'$ is constant and not equal to zero, $1$ if $L - L'$ is not constant, and $r$ if $L = L'$. This completes the proof of Equation (\[eqn:int-sizes\]). Now observe that $|E(G_1) \cup E(G_2)| = |E(G_1)| + |E(G_2)| - |E(G_1) \cap E(G_2)| = 2r^{2t - 1} - |E(G_1) \cap E(G_2)|$. Substituting this expression for $|E(G_1) \cup E(G_2)|$ into Equation (\[eqn:inner-matrix\]) yields that $$\textnormal{Tr}\left( T_{r, t}^{(V_i, V_j, L)} \cdot T_{r, t}^{(V_{i'}, V_{j'}, L')} \right) = \frac{r^2}{r^{2t}(r - 1)} \cdot |E(G_1) \cap E(G_2)| - \frac{1}{r - 1}$$ Combining this with the different cases of Equation (\[eqn:int-sizes\]) shows part (2) of the lemma. Part (1) of the lemma follows from this computation and the fact that $$\left\| T_{r, t}^{(V_i, V_j, L)} \right\|_F^2 = \textnormal{Tr}\left( \left( T_{r, t}^{(V_i, V_j, L)} \right)^2 \right)$$ This completes the proof of the lemma.
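The case analysis of Equation (\[eqn:int-sizes\]) can also be verified directly on a small example. The following sketch (helper names are ours) builds the edge sets of the bipartite graphs $G_{3, 2}(V_i, V_j, L)$ and checks the intersection counts $r^{2t-1} = 27$, $r^{2t-2} = 9$ and $0$ in the three cases:

```python
import itertools
import numpy as np

r, t = 3, 2
points = [np.array(p) for p in itertools.product(range(r), repeat=t)]
normals = [np.array(v) for v in itertools.product(range(r), repeat=t)
           if any(v) and v[next(i for i, c in enumerate(v) if c)] == 1]

def edges(vi, vj, a, b):
    """Boolean adjacency matrix of G_{r,t}(V_i, V_j, L), L(x) = a x + b,
    using the classes A^i_x = {P : <v_i, P> = x mod r}."""
    E = np.zeros((r ** t, r ** t), dtype=bool)
    for k, P in enumerate(points):
        y = (a * (P @ vi) + b) % r
        E[k] = np.array([(Q @ vj) % r == y for Q in points])
    return E

E1 = edges(normals[0], normals[1], 1, 0)
assert E1.sum() == 27                                          # |E(G)| = r^{2t-1}
assert (E1 & edges(normals[0], normals[1], 1, 1)).sum() == 0   # L' = L + 1
assert (E1 & edges(normals[0], normals[1], 2, 0)).sum() == 9   # L - L' non-constant
assert (E1 & edges(normals[2], normals[3], 1, 0)).sum() == 9   # different (V_i, V_j)
```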
We now define an unfolded matrix variant of the tensor $T_{r, t}$ that will be used in our applications of $\pr{Tensor-Bern-Rotations}$ to map to hidden partition models. The row indexing in $M_{r, t}$ will be important and related to the community alignment property of $T_{r, t}$ that will be established in Lemma \[lem:comm-align-tensors\].
\[defn:unfolded-Trt\] Let $M_{r, t}$ be an $(r - 1)^2 \ell^2 \times r^{2t}$ matrix with entries given by $$\left( M_{r, t} \right)_{a(r - 1)\ell^2 + i'(r - 1)\ell + b\ell + j' + 1, ir^{t} + j + 1} = \left( T_{r, t}^{(V_{i' + 1}, V_{j' + 1}, L_{a + 1,b + 1})} \right)_{i, j}$$ for each $0 \le i', j' \le \ell - 1$, $0 \le a, b \le r - 2$ and $0 \le i, j \le r^t - 1$, where $L_{c, d} : \mathbb{F}_r \to \mathbb{F}_r$ denotes the linear function given by $L_{c, d}(x) = cx + d$.
The next lemma is the analogue of Lemma \[lem:Krtsv\] and deduces the singular values of $M_{r, t}$ from Lemma \[lem:suborthogonaltensors\]; its proof closely parallels that of Lemma \[lem:Krtsv\].
\[lem:Mrtsv\] The nonzero singular values of $M_{r, t}$ are $\sqrt{1 + (r - 1)^{-1}}$ with multiplicity $(r - 1)(r - 2)\ell^2$ and $(r - 1)^{-1/2}$ with multiplicity $(r - 1)\ell^2$.
Observe that the rows of $M_{r, t}$ are formed by vectorizing the slices of $T_{r, t}$. Thus Lemma \[lem:suborthogonaltensors\] implies that $(M_{r, t})(M_{r, t})^\top$ is block-diagonal with $(r - 1)\ell^2$ blocks of dimension $(r - 1) \times (r - 1)$, where each block corresponds to slices with indices $(V_i, V_j, L_{c, d})$ where $i, j$ and $c$ are fixed over each block while $d$ ranges over $\{1, 2, \dots, r - 1\}$. Furthermore, each block is of the form $\left(1 + (r - 1)^{-1} \right) I_{r - 1} - (r - 1)^{-1} \mathbf{1} \mathbf{1}^\top$. The eigenvalues of each of these blocks are $1 + (r - 1)^{-1}$ with multiplicity $r - 2$ and $(r - 1)^{-1}$ with multiplicity $1$. Thus the eigenvalues of $(M_{r, t})(M_{r, t})^\top$ are $1 + (r - 1)^{-1}$ and $(r - 1)^{-1}$ with multiplicities $(r - 1)(r - 2)\ell^2$ and $(r - 1)\ell^2$, respectively, which implies the result.
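Lemma \[lem:Mrtsv\] can be checked numerically for $r = 3$, $t = 2$: here $\ell = 4$, so $M_{3, 2}$ is $64 \times 81$ and the lemma predicts $32$ singular values equal to $\sqrt{3/2}$ and $32$ equal to $1/\sqrt{2}$. The sketch below (helper names ours) builds $M_{3,2}$ with rows ordered as in Definition \[defn:unfolded-Trt\] and computes its singular values:

```python
import itertools
import numpy as np

r, t = 3, 2
points = [np.array(p) for p in itertools.product(range(r), repeat=t)]
normals = [np.array(v) for v in itertools.product(range(r), repeat=t)
           if any(v) and v[next(i for i, c in enumerate(v) if c)] == 1]
ell = len(normals)                          # ell = 4

def slice_T(vi, vj, a, b):
    """The slice T_{r,t}^{(V_i, V_j, L)} for L(x) = a x + b over F_r."""
    n = r ** t
    T = np.full((n, n), -1.0)
    for k, P in enumerate(points):
        y = (a * (P @ vi) + b) % r
        mask = np.array([(Q @ vj) % r == y for Q in points])
        T[k, mask] = r - 1.0
    return T / (n * np.sqrt(r - 1.0))

# Row order of Definition [defn:unfolded-Trt]:
# zero-based row index a(r-1)ell^2 + i'(r-1)ell + b*ell + j'.
M = np.array([slice_T(normals[ip], normals[jp], a + 1, b + 1).reshape(-1)
              for a in range(r - 1) for ip in range(ell)
              for b in range(r - 1) for jp in range(ell)])
sv = np.sort(np.linalg.svd(M, compute_uv=False))
print(M.shape)                                         # (64, 81)
print(bool(np.allclose(sv[:32], 1 / np.sqrt(2))))      # (r - 1)^{-1/2}
print(bool(np.allclose(sv[32:], np.sqrt(1.5))))        # sqrt(1 + (r - 1)^{-1})
```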
Given $k^2$ matrices $M^{1,1}, M^{1,2}, \dots, M^{k,k} \in \mathbb{R}^{n \times n}$, let $\mathcal{C}\left(M^{1,1}, M^{1,2}, \dots, M^{k,k}\right)$ denote the matrix $X \in \mathbb{R}^{kn \times kn}$ formed by concatenating the $M^{i,j}$ with $$X_{an + b + 1, cn + d + 1} = M^{a + 1, c + 1}_{b+1, d+1} \quad \text{for all } 0 \le a, c \le k - 1 \text{ and } 0 \le b, d \le n - 1$$ We refer to a matrix $M \in \mathbb{R}^{n \times n}$ as a $k$-block matrix for some $k$ that divides $n$ if there are two values $x_1, x_2 \in \mathbb{R}$ and two partitions $[n] = E_1 \cup E_2 \cup \cdots \cup E_k = F_1 \cup F_2 \cup \cdots \cup F_k$, both into parts of size $n/k$, such that $$M_{ij} = \left\{ \begin{array}{ll} x_1 & \text{if } (i, j) \in E_h \times F_h \text{ for some } 1 \le h \le k \\ x_2 &\text{otherwise} \end{array} \right.$$ The next lemma shows an alignment property of different slices of $T_{r, t}$ that will be crucial in stitching together the local applications of $\pr{Tensor-Bern-Rotations}$ with $M_{r, t}$ in our reduction to hidden partition models. This lemma uses the indexing in $M_{r, t}$ and the role of the linear functions $L$ in defining the affine block graphs $G_{r, t}$.
\[lem:comm-align-tensors\] Let $1 \le s_1, s_2, \dots, s_k \le (r - 1)\ell$ be arbitrary indices and $$M^{i, j} = T_{r, t}^{(V_{i'}, V_{j'}, L)} \quad \textnormal{for each } 1 \le i, j \le k$$ where $i'$ and $j'$ are the unique $1 \le i', j' \le \ell$ such that $i' \equiv s_i \pmod{\ell}$ and $j' \equiv s_j \pmod{\ell}$ and $L(x) = ax + b$ where $a = \lceil s_i/\ell \rceil$ and $b = \lceil s_j/\ell \rceil$. Then it follows that $\mathcal{C}\left(M^{1,1}, M^{1,2}, \dots, M^{k,k}\right)$ is an $r$-block matrix.
Let $t_i = i'$ be the unique $1 \le i' \le \ell$ such that $i' \equiv s_i \pmod{\ell}$ and let $a_i = \lceil s_i/\ell \rceil \in \{1, 2, \dots, r - 1\}$ for each $1 \le i \le k$. Furthermore, let $L_{ij}(x) = a_i x + a_j$ for $1 \le i, j \le k$ and, for each $x \in \mathbb{F}_r$ and $1 \le i \le \ell$, let $A^i_x$ be the affine spaces as in Definition \[defn:Grt\]. Note that since $0 < a_i < r$, it follows that each $L_{ij}$ is a non-constant and hence invertible linear function. Given a subset $S \subseteq \mathbb{F}_r^t$ and some $s \in \mathbb{N}$, let $I(s, S)$ denote the set of indices $I(s, S) = \{ s + i : P_i \in S\}$.
Now define the partition $[kr^t] = E_0 \cup E_1 \cup \cdots \cup E_{r-1}$ as follows $$E_i = \bigcup_{j = 1}^k I\left((j - 1)r^t, A^{t_j}_{x_{ij}}\right) \quad \text{where } x_{ij} = L_{j1}^{-1}(L_{11}(i))$$ and similarly define the partition $[kr^t] = F_0 \cup F_1 \cup \cdots \cup F_{r-1}$ as follows $$F_i = \bigcup_{j = 1}^k I\left((j - 1)r^t, A^{t_j}_{y_{ij}}\right) \quad \text{where } y_{ij} = L_{1j}(i)$$ Let $X \in \mathbb{R}^{kr^t \times kr^t}$ denote the matrix $X = \mathcal{C}\left(M^{1,1}, M^{1,2}, \dots, M^{k,k}\right)$. We will show that $$\label{eqn:blockstruct}
X_{a, b} = \frac{r - 1}{r^t\sqrt{r - 1}} \quad \text{if } (a, b) \in E_i \times F_i \text{ for some } 0 \le i \le r - 1$$ Suppose that $(a, b) \in E_i \times F_i$ and specifically that $(j_a - 1)r^t + 1 \le a \le j_a r^t$ and $(j_b - 1)r^t + 1 \le b \le j_b r^t$ for some $1 \le j_a, j_b \le k$. The definitions of $E_i$ and $F_i$ imply that $z_a \in A_{x_{ij_a}}^{t_{j_a}}$ where $z_a = P_{a - (j_a - 1)r^t}$ and $z_b \in A_{y_{ij_b}}^{t_{j_b}}$ where $z_b = P_{b - (j_b - 1)r^t}$. Note that $$X_{a, b} = M^{j_a, j_b}_{a - (j_a - 1)r^t, b - (j_b - 1)r^t}$$ by the definition of $\mathcal{C}$. Therefore by Definition \[defn:Trt\], it suffices to show that $(z_a, z_b)$ is an edge of the bipartite graph $G_{r, t}(V_{t_{j_a}}, V_{t_{j_b}}, L_{j_a j_b})$ for all such $(a, b)$ to establish (\[eqn:blockstruct\]). By Definition \[defn:Grt\], $(z_a, z_b)$ is an edge if and only if $L_{j_a j_b}(x_{ij_a}) = y_{ij_b}$. Observe that the definitions of $x_{ij_a}$ and $y_{ij_b}$ yield that $$\begin{aligned}
a_{j_a} x_{ij_a} + a_1 &= L_{j_a 1}(x_{ij_a}) = L_{11}(i) = a_1 \cdot i + a_1 \label{eqn:linear-cons} \\
y_{ij_b} &= L_{1j_b}(i) = a_1 \cdot i + a_{j_b} \nonumber \\
L_{j_a j_b}(x) &= a_{j_a} x + a_{j_b} \nonumber\end{aligned}$$ Adding $a_{j_b} - a_1$ to both sides of Equation (\[eqn:linear-cons\]) therefore yields that $$L_{j_a j_b}(x_{ij_a}) = a_{j_a} x_{ij_a} + a_{j_b} = a_1 \cdot i + a_{j_b} = y_{ij_b}$$ which completes the proof of (\[eqn:blockstruct\]). Now note that each $M^{i, j}$ contains exactly $r^{2t - 1}$ entries equal to $\frac{r - 1}{r^t\sqrt{r - 1}}$ and thus $X$ contains exactly $k^2 r^{2t - 1}$ such entries. The definitions of $E_i$ and $F_i$ imply that they each contain exactly $kr^{t - 1}$ elements. Thus $\cup_{i = 0}^{r - 1} E_i \times F_i$ contains $k^2 r^{2t - 1}$ elements. Therefore (\[eqn:blockstruct\]) also implies that $X_{a, b} = -\frac{1}{r^t\sqrt{r - 1}}$ for all $(a, b) \not \in \cup_{i = 0}^{r - 1} E_i \times F_i$. This proves that $X$ is an $r$-block matrix and completes the proof of the lemma.
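The community alignment property of Lemma \[lem:comm-align-tensors\] can be checked numerically. Since the block structure of $X$ depends only on which entries are large, the sketch below (helper names and the particular indices $s_i$ are ours) works with the boolean edge patterns of the graphs $G_{3, 2}(V_i, V_j, L)$ and verifies that the concatenated matrix has exactly $r$ distinct row patterns and $r$ distinct column patterns, each of support size $k r^{t-1}$, i.e. that it is an $r$-block matrix:

```python
import itertools
import numpy as np

r, t, k = 3, 2, 3
points = [np.array(p) for p in itertools.product(range(r), repeat=t)]
normals = [np.array(v) for v in itertools.product(range(r), repeat=t)
           if any(v) and v[next(i for i, c in enumerate(v) if c)] == 1]
ell = len(normals)                              # 4, so s_i ranges over [8]

def edges(vi, vj, a, b):
    """Boolean adjacency matrix of G_{r,t}(V_i, V_j, L) with L(x) = a x + b."""
    E = np.zeros((r ** t, r ** t), dtype=bool)
    for kk, P in enumerate(points):
        y = (a * (P @ vi) + b) % r
        E[kk] = np.array([(Q @ vj) % r == y for Q in points])
    return E

s = [1, 6, 4]                                   # arbitrary indices s_i
hyp = [(si - 1) % ell for si in s]              # zero-based hyperplane index
slope = [(si - 1) // ell + 1 for si in s]       # a_i = ceil(s_i / ell)

X = np.block([[edges(normals[hyp[i]], normals[hyp[j]], slope[i], slope[j])
               for j in range(k)] for i in range(k)])
row_patterns = {tuple(row) for row in X}
col_patterns = {tuple(col) for col in X.T}
print(len(row_patterns), len(col_patterns))     # r distinct patterns each
print(all(sum(p) == k * r ** (t - 1) for p in row_patterns))
```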
The community alignment property shown in this lemma is directly related to the indexing of rows in $M_{r, t}$. More precisely, the above lemma implies that for any subset $S \subseteq [(r - 1)\ell]$, the rows of $M_{r, t}$ indexed by elements in the support of $\mathbf{1}_S \otimes \mathbf{1}_S$ can be arranged as sub-matrices of an $|S| r^t \times |S| r^t$ matrix that is an $r$-block matrix. This property will be crucial in our reduction from $k\pr{-pc}$ and $k\pr{-pds}$ to hidden partition models in Section \[sec:3-hidden-partition\].
A Random Matrix Alternative to $K_{r, t}$ {#subsec:2-Rne}
-----------------------------------------
In this section, we introduce the random matrix analogue $R_{n, \epsilon}$ of $K_{r, t}$ defined below. Rather than having all of its entries independent, $R_{n, \epsilon}$ is constrained to be symmetric. This ends up being technically convenient, as it suffices to bound the eigenvalues of $R_{n, \epsilon}$ in order to upper bound its largest singular value. This symmetry also yields a direct connection between the eigenvalues of $R_{n, \epsilon}$ and the eigenvalues of sparse random graphs, which have been studied extensively.
\[defn:Rne\] If $\epsilon \in (0, 1/2]$, let $R_{n, \epsilon} \in \mathbb{R}^{n \times n}$ denote the random symmetric matrix with independent entries sampled as follows $$(R_{n, \epsilon})_{ij} = (R_{n, \epsilon})_{ji} \sim \left\{ \begin{array}{ll} -\sqrt{\frac{1 - \epsilon}{\epsilon n}} & \textnormal{with prob. } \epsilon \\ \sqrt{\frac{\epsilon}{(1 - \epsilon)n}} & \textnormal{with prob. } 1 - \epsilon \end{array} \right.$$ for all $1 \le i < j \le n$, and $(R_{n, \epsilon})_{ii} = \sqrt{\frac{\epsilon}{(1 - \epsilon)n}}$ for each $1 \le i \le n$.
We now establish the key properties of the matrix $R_{n, \epsilon}$. Consider the graph $G$ where $\{i, j\} \in E(G)$ if and only if $(R_{n, \epsilon})_{ij}$ is negative. By definition, we have that $G$ is an $\epsilon$-sparse Erdős-Rényi graph with $G \sim \mG(n, \epsilon)$. Furthermore, if $A$ is the adjacency matrix of $G$, a direct calculation yields that $R_{n, \epsilon}$ can be expressed as $$\label{eqn:Rne-decomp}
R_{n, \epsilon} = \sqrt{\frac{\epsilon}{(1 - \epsilon)n}} \cdot I_n + \frac{1}{\sqrt{\epsilon(1 - \epsilon)n}} \cdot \left( \bE[A] - A \right)$$ A line of work has given high probability upper bounds on the largest eigenvalue of $\bE[A] - A$ in order to study concentration of sparse Erdős-Rényi graphs in the spectral norm of their adjacency matrices [@furedi1981eigenvalues; @vu2005spectral; @feige2005spectral; @lu2013spectra; @bandeira2016sharp; @le2017concentration]. As outlined in [@le2017concentration], the works [@furedi1981eigenvalues; @vu2005spectral; @lu2013spectra] apply Wigner’s trace method to obtain spectral concentration results for general random matrices that, in this context, imply with high probability that $$\left\| \bE[A] - A \right\| = 2\sqrt{d} \left( 1 + o_n(1) \right) \quad \text{for } d \gg (\log n)^4$$ where $d = \epsilon n$ and $\| \cdot \|$ denotes the spectral norm on $n \times n$ symmetric matrices. In [@feige2005spectral; @bandeira2016sharp; @le2017concentration], it is shown that this requirement on $d$ can be relaxed and that it holds with high probability that $$\left\| \bE[A] - A \right\| = O_n(\sqrt{d}) \quad \text{for } d = \Omega_n(\log n)$$ These results, the fact that $R_{n, \epsilon}$ is symmetric and the above expression for $R_{n, \epsilon}$ in terms of $A$ are enough to establish our main desired properties of $R_{n, \epsilon}$, which are stated formally in the following lemma.
\[lem:Rne\] If $\epsilon \in (0, 1/2]$ satisfies that $\epsilon n = \omega_n(\log n)$, there is a constant $C > 0$ such that the random matrix $R_{n, \epsilon}$ satisfies the following two conditions with probability $1 - o_n(1)$:
1. the largest singular value of $R_{n, \epsilon}$ is at most $C$; and
2. every column of $R_{n, \epsilon}$ contains between $\epsilon n - C\sqrt{\epsilon n \log n}$ and $\epsilon n + C\sqrt{\epsilon n \log n}$ negative entries.
The number of negative entries in the $i$th column of $R_{n, \epsilon}$ is distributed as $\text{Bin}(n - 1, \epsilon)$. A standard Chernoff bound for the binomial distribution yields that if $X \sim \text{Bin}(n - 1, \epsilon)$, then $$\bP\left[ |X - (n - 1)\epsilon | \ge \delta (n - 1)\epsilon \right] \le 2 \exp\left( - \frac{\delta^2 (n - 1)\epsilon}{3} \right)$$ for all $\delta \in (0, 1)$. Setting $\delta = C' \sqrt{n^{-1} \epsilon^{-1} \log n}$ for a sufficiently large constant $C' > 0$ and taking a union bound over all columns $i$ implies that property (2) in the lemma statement holds with probability $1 - o_n(1)$. We now apply Theorem 1.1 in [@le2017concentration] as in the first example in Section 1.4, where the graph is not modified. Since $\epsilon n = \omega_n(\log n)$, this yields with probability $1 - o_n(1)$ that $$\left\| \bE[A] - A \right\| \le C''\sqrt{d}$$ for some constant $C'' > 0$, where $A$ and $d$ are as defined above. The decomposition of $R_{n, \epsilon}$ in Equation (\[eqn:Rne-decomp\]) now implies that with probability $1 - o_n(1)$ $$\| R_{n, \epsilon} \| \le \sqrt{\frac{\epsilon}{(1 - \epsilon)n}} + \frac{1}{\sqrt{\epsilon(1 - \epsilon)n}} \cdot C'' \sqrt{d} = O_n(1)$$ since $\epsilon \in (0, 1/2]$ and $d = \epsilon n$. This establishes that property (1) holds with probability $1 - o_n(1)$. A union bound over (1) and (2) now completes the proof of the lemma.
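The two properties in Lemma \[lem:Rne\] can be observed empirically on a single sample of $R_{n, \epsilon}$. The sketch below uses illustrative parameters and generous constants of our own choosing in place of the unspecified constant $C$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, eps = 2000, 0.05                    # eps * n = 100 = omega_n(log n)
neg = -np.sqrt((1 - eps) / (eps * n))
pos = np.sqrt(eps / ((1 - eps) * n))

# Sample the upper triangle i.i.d., symmetrize, and set the diagonal.
U = np.triu(np.where(rng.random((n, n)) < eps, neg, pos), 1)
R = U + U.T + pos * np.eye(n)

print(bool(np.allclose(R, R.T)))               # symmetric by construction
print(bool(np.linalg.norm(R, 2) < 5))          # spectral norm O_n(1)
cols = (R < 0).sum(axis=0)                     # negative entries per column
print(bool(np.all(np.abs(cols - eps * n) < 6 * np.sqrt(eps * n))))
```

The observed spectral norm is close to $2/\sqrt{1 - \epsilon} \approx 2$, consistent with the $2\sqrt{d}(1 + o_n(1))$ concentration of $\|\bE[A] - A\|$ discussed above.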
While $R_{n, \epsilon}$ and $K_{r, t}$ satisfy similar conditions needed by our reductions, they also have differences that will dictate when one is used over the other. The following highlights several key points in comparing these two matrices.
- $R_{n, \epsilon}$ and $K_{r, t}$ are analogous when $n = r^t$ and $\epsilon = 1/r$. In this case, both matrices contain the same two values $1/\sqrt{r^t(r - 1)}$ and $-\sqrt{(r - 1)/r^t}$. The rows of $K_{r, t}$ are unit vectors and the rows of $R_{n, \epsilon}$ are approximately unit vectors – property (2) in Lemma \[lem:Rne\] implies that the norm of each row is $1 \pm O_n(\sqrt{(\epsilon n)^{-1} \log n})$. Like $K_{r, t}$, Lemma \[lem:Rne\] implies that $R_{n, \epsilon}$ is also approximately orthogonal with largest singular value bounded above by a constant.
- While $K_{r, t}$ has exactly a $(1/r)$-fraction of entries in each column that are negative, $R_{n, \epsilon}$ only has *approximately* an $\epsilon$-fraction of entries in each of its columns that are negative. For some of our reductions, such as our reductions to $\pr{rsme}$ and $\pr{rslr}$, having approximately an $\epsilon$-fraction of its entries equal to the negative value in Definition \[defn:Rne\] is sufficient. In our reductions to $\pr{isbm}$, $\pr{ghpm}$, $\pr{bhpm}$ and $\pr{semi-cr}$, it will be important that $K_{r, t}$ contains exactly $(1/r)$-fraction of negative entries per column. The approximate guarantee of $R_{n, \epsilon}$ would correspond to only showing lower bounds against algorithms that are *adaptive* and do not need to know the sizes of the hidden communities.
- As is mentioned in Section \[subsec:1-problems-sbm\] and will be discussed in Section \[sec:3-robust-and-supervised\], our applications of dense Bernoulli rotations with $K_{r, t}$ will generally be tight when a natural parameter $n$ in our problems satisfies that $\sqrt{n} = \tilde{\Theta}(r^t)$. This imposes a number-theoretic condition () on the pair $(n, r)$, arising from the fact that $t$ must be an integer, which generally remains a condition in the computational lower bounds we show for $\pr{isbm}$, $\pr{ghpm}$ and $\pr{bhpm}$. In contrast, $R_{n, \epsilon}$ places no number-theoretic constraints on $n$ and $\epsilon$, which can be arbitrary, and thus the condition () can be removed from our computational lower bounds for $\pr{rsme}$ and $\pr{rslr}$. We remark that when $r = n^{o(1)}$, which often is the regime of interest in problems such as $\pr{rsme}$, then the condition is trivial and places no further constraints on $(n, r)$ as will be shown in Lemma \[lem:propT\].
- $R_{n, \epsilon}$ is random while $K_{r, t}$ is fixed. In our reductions, it is often important that the same design matrix is used throughout multiple applications of dense Bernoulli rotations. Since $R_{n, \epsilon}$ is a random matrix, this requires generating a single instance of $R_{n, \epsilon}$ and using this one instance throughout our reductions. In each of our reductions, we will rejection sample $R_{n, \epsilon}$ until it satisfies the two criteria in Lemma \[lem:Rne\], for a maximum of $O((\log n)^2)$ rounds, and then use the resulting matrix throughout all applications of dense Bernoulli rotations in that reduction. The probability bounds in Lemma \[lem:Rne\] imply that the probability that no sample of $R_{n, \epsilon}$ satisfying these criteria is found is $n^{-\omega_n(1)}$. This is a failure mode for our reductions and contributes a negligible $n^{-\omega_n(1)}$ to the total variation distance between the output of our reductions and their target distributions.
- For some of our reductions, applying dense Bernoulli rotations with either of the two matrices $R_{n, \epsilon}$ or $K_{r, t}$ yields the same guarantees. This is the case for our reductions to $\pr{mslr}$, $\pr{glsm}$ and $\pr{tpca}$, where $r = 2$ and the condition () is trivial and mapping to columns with approximately half of their entries negative is sufficient. As mentioned above, this is also the case when $r \asymp \epsilon^{-1} = n^{o(1)}$ in $\pr{rsme}$.
- Some differences between $R_{n, \epsilon}$ and $K_{r, t}$ that are unimportant for our reductions include that $R_{n, \epsilon}$ is exactly square while $K_{r, t}$ is only approximately square and that $R_{n, \epsilon}$ is symmetric while $K_{r, t}$ is not.
For consistency, the pseudocode and analysis for all of our reductions are written with $K_{r, t}$ rather than $R_{n, \epsilon}$. Modifying our reductions to use $R_{n, \epsilon}$ is straightforward and consists of adding the rejection sampling step to sample $R_{n, \epsilon}$ discussed above. In Sections \[subsec:3-rsme-reduction\], \[subsec:2-mixtures-slr\] and \[sec:3-robust-and-supervised\], we discuss in more detail how to make these modifications to our reductions to $\pr{rsme}$ and $\pr{rslr}$ and the computational lower bounds they yield.
There are several reasons why we present our reductions with $K_{r, t}$ rather than $R_{n, \epsilon}$. The analysis of $K_{r, t}$ in Section \[subsec:2-design-matrices\] is simple and self-contained, while the analysis of $R_{n, \epsilon}$ requires fairly involved results from random matrix theory. The construction of $K_{r, t}$ naturally extends to $T_{r, t}$, while a random tensor analogue of $T_{r, t}$ seems as though it would be prohibitively difficult to analyze. Reductions with $K_{r, t}$ also give an explicit encoding of cliques into the hidden structure of our target problems as discussed in Section \[subsec:1-tech-encoding\], yielding slightly stronger and more explicit computational lower bounds in this sense.
Negatively Correlated Sparse PCA {#sec:2-neg-spca}
================================
This section is devoted to giving a reduction from bipartite planted dense subgraph to negatively correlated sparse PCA, a high-level overview of which was given in Section \[subsec:1-tech-inverse-wishart\]. This reduction will be used in the next section as a crucial subroutine in reductions establishing conjectured statistical-computational gaps for two supervised problems: mixtures of sparse linear regressions and robust sparse linear regression. The analysis of this reduction relies on a result on the convergence of the Wishart distribution and its inverse, which is proven in the second half of this section.
Reducing to Negative Sparse PCA {#subsec:2-neg-spca-reduction}
-------------------------------
**Algorithm** $\chi^2\textsc{-Random-Rotation}$
*Inputs*: Matrix $M \in \{0, 1\}^{m \times n}$, Bernoulli probabilities $0 < q < p \le 1$, planted subset size $k_n$ that divides $n$ and a parameter $\tau > 0$
1. Sample $r_1, r_2, \dots, r_n \sim_{\text{i.i.d.}} \sqrt{\chi^2(n/k_n)}$ and truncate the $r_j$ with $r_j \gets \min\left\{ r_j, 2 \sqrt{n/k_n} \right\}$ for each $j \in [n]$.
2. Compute $M'$ by applying $\textsc{Gaussianize}$ to $M$ with Bernoulli probabilities $p$ and $q$, rejection kernel parameter $R_{\pr{rk}} = mn$, parameter $\tau$ and target mean values $\mu_{ij} = \frac{1}{2} \tau \cdot r_j \cdot \sqrt{k_n/n}$ for each $i \in [m]$ and $j \in [n]$.
3. Sample an orthogonal matrix $R \in \mathbb{R}^{n\times n}$ from the Haar measure on the orthogonal group $\mathcal{O}_n$ and output the columns of the matrix $M'R$.
**Algorithm** $\textsc{bpds-to-neg-spca}$
*Inputs*: Matrix $M \in \{0, 1\}^{m \times n}$, Bernoulli probabilities $0 < q < p \le 1$, planted subset size $k_n$ that divides $n$, a parameter $\tau > 0$ and a target dimension $d \ge m$
1. Compute $X = (X_1, X_2, \dots, X_n)$ where $X_i \in \mathbb{R}^m$ as the columns of the matrix output by $\chi^2\textsc{-Random-Rotation}$ applied to $M$ with parameters $p, q, k_n$ and $\tau$.
2. Compute $\hat{\Sigma} = \sum_{i = 1}^n X_i X_i^\top$ and let $R \in \mathbb{R}^{m \times n}$ be the top $m$ rows of an orthogonal matrix sampled from the Haar measure on the orthogonal group $\mathcal{O}_n$ and compute the matrix $$M' = \sqrt{n(n - m - 1)} \cdot \hat{\Sigma}^{-1/2} R$$ where $\hat{\Sigma}^{-1/2}$ is the positive semidefinite square root of the inverse of $\hat{\Sigma}$.
3. Output the columns of the $d \times n$ matrix with upper left $m \times n$ submatrix $M'$ and all remaining entries sampled i.i.d. from $\mN(0, 1)$.
In this section, we give our reduction $\textsc{bpds-to-neg-spca}$ from bipartite planted dense subgraph to negatively correlated sparse PCA, which is shown in Figure \[fig:neg-spca-reduction\]. This reduction is described with the input bipartite graph represented as its adjacency matrix of Bernoulli random variables. A key subroutine in this reduction is the procedure $\chi^2\textsc{-Random-Rotation}$ from [@brennan2019optimal], which is also shown in Figure \[fig:neg-spca-reduction\]. The lemma below provides total variation guarantees for $\chi^2\textsc{-Random-Rotation}$ and is adapted from Lemma 4.6 of [@brennan2019optimal], restated in our notation and generalized to the case where the input matrix $M$ is rectangular instead of square.
This lemma can be proven with an argument identical to that of Lemma 4.6 of [@brennan2019optimal], with the following adjustment of parameters to the rectangular case. The first two steps of $\chi^2\textsc{-Random-Rotation}$ map $\mathcal{M}_{[m] \times [n]}(S \times T, p, q)$ approximately to $$\frac{\tau}{2} \sqrt{\frac{k_n}{n}} \cdot \mathbf{1}_S u_T^\top + \mN(0, 1)^{\otimes m \times n}$$ where $u_T$ is the vector with $(u_T)_i = r_i$ if $i \in T$ and $(u_T)_i = 0$ otherwise. The argument in Lemma 4.6 of [@brennan2019optimal] shows that the final step of $\chi^2\textsc{-Random-Rotation}$ maps this distribution approximately to $$\frac{\tau}{2} \sqrt{\frac{k_n}{n}} \cdot \mathbf{1}_S w^\top + \mN(0, 1)^{\otimes m \times n}$$ where $w \sim \mN(0, I_n)$. Now observe that the entries of this matrix are zero mean and jointly Gaussian. Furthermore, the columns are independent and have covariance matrix $I_m + \frac{\tau^2 k_n |S|}{4n} \cdot v_S v_S^\top$ where $v_S = |S|^{-1/2} \cdot \mathbf{1}_S$. Summarizing the result of this argument, we have the following lemma.
\[lem:randomrotations\] Given parameters $m, n$, let $0 < q < p \le 1$ be such that $p - q = (mn)^{-O(1)}$ and $\min(q, 1 - q) = \Omega(1)$, let $k_n \le n$ be such that $k_n$ divides $n$ and let $\tau > 0$ be such that $$\tau \le \frac{\delta}{2 \sqrt{6\log (mn) + 2\log (p - q)^{-1}}} \quad \text{where} \quad \delta = \min \left\{ \log \left( \frac{p}{q} \right), \log \left( \frac{1 - q}{1 - p} \right) \right\}$$ The algorithm $\mathcal{A} = \chi^2\textsc{-Random-Rotation}$ runs in $\textnormal{poly}(m, n)$ time and satisfies that $$\begin{aligned}
\TV\left( \mathcal{A}\left(\mathcal{M}_{[m] \times [n]}(S \times T, p, q)\right), \, \mN\left(0, I_m + \frac{\tau^2 k_n |S|}{4n} \cdot v_S v_S^\top \right)^{\otimes n} \right) &\le O\left((mn)^{-1}\right) + k_n(4e^{-3})^{n/2k_n} \\
\TV\left( \mathcal{A}\left( \textnormal{Bern}(q)^{\otimes m \times n}\right), \, \mN(0, 1)^{\otimes m \times n} \right) &= O\left((mn)^{-1}\right)\end{aligned}$$ where $v_S = \frac{1}{\sqrt{|S|}} \cdot \mathbf{1}_S \in \mathbb{R}^m$ for all subsets $S \subseteq [m]$ and $T \subseteq [n]$ with $|T| = k_n$.
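The covariance structure in Lemma \[lem:randomrotations\] can be illustrated numerically with an idealized version of $\chi^2\textsc{-Random-Rotation}$ in which Step 2's $\textsc{Gaussianize}$ subroutine is replaced by directly sampling Gaussian entries with the target means, bypassing the Bernoulli input entirely. The parameters below are illustrative and chosen to make the planted spike visible; they are not meant to satisfy the smallness constraint on $\tau$ in the lemma. Note that the empirical second moment is invariant under the Haar rotation, whose role in the actual reduction is to make the column distribution Gaussian.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, k_n, tau = 20, 1500, 30, 4.0
S = np.arange(8)                                  # planted row subset, |S| = 8
T_cols = np.arange(k_n)                           # planted column subset T

# Step 1: truncated sqrt-chi-squared scalings (only those indexed by T matter).
rads = np.minimum(np.sqrt(rng.chisquare(n // k_n, size=k_n)),
                  2 * np.sqrt(n / k_n))

# Idealized Step 2: plant mean (tau/2) r_j sqrt(k_n/n) on S x T plus noise.
M = rng.standard_normal((m, n))
M[np.ix_(S, T_cols)] += 0.5 * tau * np.sqrt(k_n / n) * rads

# Step 3: right-multiply by a Haar-distributed orthogonal matrix (QR of a
# Gaussian matrix with a sign correction samples from the Haar measure).
Q, Rq = np.linalg.qr(rng.standard_normal((n, n)))
X = M @ (Q * np.sign(np.diag(Rq)))

# Columns should resemble N(0, I_m + theta' v_S v_S^T), theta' = tau^2 k_n |S| / 4n.
w = np.linalg.eigh(X @ X.T / n)[1][:, -1]          # top eigenvector
v_S = np.zeros(m); v_S[S] = 1 / np.sqrt(len(S))
print(bool(abs(w @ v_S) > 0.8))                    # aligned with v_S
```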
Throughout the remainder of this section, we will need to use properties of the Wishart and inverse Wishart distributions. These distributions on random matrices are defined as follows.
\[defn:wishart\] Let $n$ and $d$ be positive integers and $\Sigma \in \mathbb{R}^{d \times d}$ be a positive semidefinite matrix. The Wishart distribution $\mathcal{W}_d(n, \Sigma)$ is the distribution of the matrix $\hat{\Sigma} = \sum_{i = 1}^n X_i X_i^\top$ where $X_1, X_2, \dots, X_n \sim_{\textnormal{i.i.d.}} \mN(0, \Sigma)$.
\[defn:inverted-wishart\] Let $n, d$ and $\Sigma$ be as in Definition \[defn:wishart\]. The inverted Wishart distribution $\mathcal{W}^{-1}_d(n, \Sigma)$ is the distribution of $\hat{\Sigma}^{-1}$ where $\hat{\Sigma} \sim \mathcal{W}_d(n, \Sigma)$.
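As a quick consistency check of these definitions, the scaling $\beta^{-1} = n(n - m - 1)$ used in $\textsc{bpds-to-neg-spca}$ matches the two distributions at the level of first moments: $\bE[\mathcal{W}_m(n, I_m)] = n I_m$, while the standard identity $\bE[\mathcal{W}^{-1}_m(n, I_m)] = (n - m - 1)^{-1} I_m$ for $n > m + 1$ gives $\bE[n(n - m - 1) \cdot \mathcal{W}^{-1}_m(n, I_m)] = n I_m$ as well. A short Monte Carlo check (illustrative sizes; helper names ours):

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, trials = 4, 200, 2000
mean_inv = np.zeros((m, m))
for _ in range(trials):
    X = rng.standard_normal((m, n))          # columns i.i.d. N(0, I_m)
    mean_inv += np.linalg.inv(X @ X.T)       # one inverse Wishart sample
mean_inv *= n * (n - m - 1) / trials         # rescale and average

# The rescaled mean should be close to n * I_m, the Wishart mean.
print(bool(np.linalg.norm(mean_inv - n * np.eye(m), 2) / n < 0.05))
```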
In order to analyze $\textsc{bpds-to-neg-spca}$, we also will need the following observation from [@brennan2019optimal]. This is a simple consequence of the fact that the distribution $\mN(0, I_n)$ is isotropic and thus invariant under multiplication by elements of the orthogonal group $\mO_n$.
\[lem:invariance\] Suppose that $n \ge d$, let $\Sigma \in \mathbb{R}^{d \times d}$ be a fixed positive definite matrix and let $\Sigma_e \sim \mathcal{W}_d(n, \Sigma)$. Let $R \in \mathbb{R}^{d \times n}$ be the matrix consisting of the first $d$ rows of an $n \times n$ matrix chosen randomly and independently of $\Sigma_e$ from the Haar measure $\mu_{\mathcal{O}_n}$ on $\mathcal{O}_n$. If $(Y_1, Y_2, \dots, Y_n)$ are the $n$ columns of $\Sigma_e^{1/2} R$, then $Y_1, Y_2, \dots, Y_n \sim_{\textnormal{i.i.d.}} \mN(0, \Sigma)$.
We now will state and prove the main total variation guarantees for $\textsc{bpds-to-neg-spca}$ in the theorem below. The proof of the theorem below crucially relies on the upper bound in Theorem \[thm:inverse-wishart\] on the KL divergence between Wishart matrices and their inverses. Proving this KL divergence bound is the focus of the next subsection.
\[thm:neg-spca\] Let $m, n, p, q, k_n$ and $\tau$ be as in Lemma \[lem:randomrotations\] and suppose that $d \ge m$ and $n \gg m^3$ as $n \to \infty$. Fix any subset $S \subseteq [m]$ and let $\theta_S$ be given by $$\theta_S = \frac{\tau^2 k_n |S|}{4n + \tau^2 k_n |S|}$$ Then the algorithm $\mathcal{A} = \pr{bpds-to-neg-spca}$ runs in $\textnormal{poly}(m, n)$ time and satisfies that $$\begin{aligned}
\TV\left( \mathcal{A}\left(\mathcal{M}_{[m] \times [n]}(S \times T, p, q)\right), \, \mN\left(0, I_d - \theta_S v_S v_S^\top \right)^{\otimes n} \right) &\le O\left(m^{3/2} n^{-1/2} \right) + k_n(4e^{-3})^{n/2k_n} \\
\TV\left( \mathcal{A}\left( \textnormal{Bern}(q)^{\otimes m \times n}\right), \, \mN(0, 1)^{\otimes d \times n} \right) &= O\left(m^{3/2} n^{-1/2}\right)\end{aligned}$$ where $v_S = \frac{1}{\sqrt{|S|}} \cdot \mathbf{1}_S \in \mathbb{R}^d$ for all subsets $S \subseteq [m]$ and $T \subseteq [n]$ with $|T| = k_n$.
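To see why the output covariance acquires a *negative* spike with the parameter $\theta_S$ above, note that if Step 1 outputs columns close to $\mN(0, I_m + \theta' v_S v_S^\top)$ with $\theta' = \tau^2 k_n |S|/4n$, then $\frac{1}{n} M' (M')^\top = (n - m - 1) \hat{\Sigma}^{-1}$ concentrates around $(I_m + \theta' v_S v_S^\top)^{-1} = I_m - \theta_S v_S v_S^\top$ with $\theta_S = \theta'/(1 + \theta')$, which is exactly the formula in the theorem. The sketch below (illustrative parameters of our own choosing) checks this numerically starting from the idealized output of Step 1:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, theta_p = 10, 50000, 1.0
v = np.zeros(m); v[:4] = 0.5                       # a unit vector on a subset
Sigma = np.eye(m) + theta_p * np.outer(v, v)       # positively spiked covariance

# Sample columns X_i ~ N(0, Sigma) and form the rescaled inverse of Sigma-hat.
evals, evecs = np.linalg.eigh(Sigma)
X = evecs @ (np.sqrt(evals)[:, None] * (evecs.T @ rng.standard_normal((m, n))))
Sigma_hat = X @ X.T
Y = (n - m - 1) * np.linalg.inv(Sigma_hat)         # = n(n-m-1) Sigma_hat^{-1} / n

theta_S = theta_p / (1 + theta_p)                  # spike flips sign and shrinks
target = np.eye(m) - theta_S * np.outer(v, v)      # negatively spiked covariance
print(bool(np.linalg.norm(Y - target, 2) < 0.15))
```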
Let $\mathcal{A}_{\text{1}}$ denote the application of $\chi^2\textsc{-Random-Rotation}$ with input $M$ and output $X$ in Step 1 of $\mathcal{A}$. Let $\mathcal{A}_{\text{2a}}$ denote the Markov transition with input $X$ and output $n(n - m - 1) \cdot \hat{\Sigma}^{-1}$, as defined in Step 2 of $\mathcal{A}$, and let $\mathcal{A}_{\text{2b-3}}$ denote the Markov transition with input $Y = n(n - m - 1) \cdot \hat{\Sigma}^{-1}$ and output $Z$ formed by padding $Y^{1/2} R$ with i.i.d. $\mN(0, 1)$ random variables to be $d \times n$ i.e. the output of $\mathcal{A}$. Furthermore, let $\mathcal{A}_{\text{2-3}} = \mathcal{A}_{\text{2b-3}} \circ \mathcal{A}_{\text{2a}}$ denote Steps 2 and 3 with input $X$ and output $Z$.
Now fix some positive semidefinite matrix $\Sigma \in \mathbb{R}^{m \times m}$ and observe that if $A = \sum_{i = 1}^n Z_i Z_i^\top \sim \mathcal{W}_m(n, I_m)$ where $Z_1, Z_2, \dots, Z_n \sim_{\text{i.i.d.}} \mN(0, I_m)$, then it also follows that $$\Sigma^{1/2} A \Sigma^{1/2} = \sum_{i = 1}^n \left(\Sigma^{1/2}Z_i\right) \left(\Sigma^{1/2}Z_i\right)^\top \sim \mathcal{W}_m(n, \Sigma)$$ since $\Sigma^{1/2}Z_i \sim \mN(0, \Sigma)$. Now observe that $(\Sigma^{1/2} A \Sigma^{1/2})^{-1} = \Sigma^{-1/2} A^{-1} \Sigma^{-1/2}$ and thus if $B \sim \mathcal{W}^{-1}_m(n, I_m)$ then $\Sigma^{-1/2} B \Sigma^{-1/2} \sim \mathcal{W}^{-1}_m(n, \Sigma)$. Let $\beta^{-1} = n(n - m - 1)$ and $C \sim \mathcal{W}^{-1}_m(n, \beta \cdot I_m)$. Therefore we have by the data processing inequality for total variation in Fact \[tvfacts\] that $$\begin{aligned}
\TV\left( \mathcal{W}_m(n, \Sigma), \, \mathcal{W}^{-1}_m\left(n, \beta \cdot \Sigma^{-1}\right) \right) &= \TV\left( \mL\left( \Sigma^{1/2} A \Sigma^{1/2} \right), \, \mL\left( \Sigma^{1/2} C \Sigma^{1/2} \right) \right) \\
&\le \TV\left( \mL\left( A \right), \, \mL\left( C \right) \right) \\
&\le \sqrt{\frac{1}{2} \cdot \KL\left( \mathcal{W}_m(n, I_m) \, \Big\| \, \mathcal{W}^{-1}_m(n, \beta \cdot I_m) \right)} \\
&= O\left( m^{3/2} n^{-1/2} \right)\end{aligned}$$ where the last inequality follows from the fact that $n \gg m^3$, Theorem \[thm:inverse-wishart\] and Pinsker’s inequality.
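The conjugation step above rests on the identity $(\Sigma^{1/2} A \Sigma^{1/2})^{-1} = \Sigma^{-1/2} A^{-1} \Sigma^{-1/2}$, which lets the comparison between $A$ and $C$ be transported to an arbitrary $\Sigma$. A minimal deterministic check on illustrative $2 \times 2$ matrices (pure Python; the helper names are ours):

```python
def matmul(A, B):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*B)] for row in A]

def inv2(M):
    # closed-form inverse of a 2 x 2 matrix
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def sqrt2(M):
    # closed-form square root of a 2 x 2 symmetric positive definite matrix
    (a, b), (_, c) = M
    s = (a * c - b * b) ** 0.5
    t = (a + c + 2 * s) ** 0.5
    return [[(a + s) / t, b / t], [b / t, (c + s) / t]]

Sigma = [[2.0, 0.5], [0.5, 1.0]]
A = [[3.0, 1.0], [1.0, 2.0]]

Sigma_half = sqrt2(Sigma)
Sigma_neg_half = inv2(Sigma_half)  # Sigma^{-1/2}

# (Sigma^{1/2} A Sigma^{1/2})^{-1} equals Sigma^{-1/2} A^{-1} Sigma^{-1/2}
lhs = inv2(matmul(Sigma_half, matmul(A, Sigma_half)))
rhs = matmul(Sigma_neg_half, matmul(inv2(A), Sigma_neg_half))
assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-12 for i in range(2) for j in range(2))
```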
Suppose that $X \sim \mN\left(0, I_m + \theta_S' v_S v_S^\top \right)^{\otimes n}$ where $\theta_S' = \frac{\tau^2 k_n |S|}{4n}$. Then we have that the output $Y$ of $\mathcal{A}_{\text{2a}}$ satisfies $Y = n(n - m - 1) \cdot \hat{\Sigma}^{-1} \sim \mathcal{W}^{-1}_m\left(n, \beta \cdot \Sigma^{-1}\right)$ where $$\Sigma = \left( I_m + \theta_S' v_S v_S^\top \right)^{-1} = I_m - \frac{\theta_S'}{1 + \theta_S'} \cdot v_S v_S^\top = I_m - \theta_S v_S v_S^\top$$ Therefore it follows from the inequality above that $$\TV\left( \mathcal{A}_{\text{2a}}\left( \mN\left(0, I_m + \theta_S' v_S v_S^\top \right)^{\otimes n} \right) , \, \mathcal{W}_m\left(n, I_m - \theta_S v_S v_S^\top\right) \right) = O\left( m^{3/2} n^{-1/2} \right)$$ Similarly, if $X \sim \mN\left(0, I_m \right)^{\otimes n}$ then we have that $$\TV\left( \mathcal{A}_{\text{2a}}\left( \mN\left(0, I_m \right)^{\otimes n} \right) , \, \mathcal{W}_m\left(n, I_m \right) \right) = O\left( m^{3/2} n^{-1/2} \right)$$ applying the same argument with $\Sigma = I_m$. Now note that if $Y \sim \mathcal{W}_m\left(n, I_m - \theta_S v_S v_S^\top\right)$ then Lemma \[lem:invariance\] implies that $\mathcal{A}_{\text{2b-3}}$ produces $Z \sim \mN\left(0, I_d - \theta_S v_S v_S^\top \right)^{\otimes n}$. Similarly, it follows that if $Y \sim \mathcal{W}_m(n, I_m)$ then Lemma \[lem:invariance\] implies that $Z \sim \mN\left(0, I_d \right)^{\otimes n}$.
We now will use Lemma \[lem:tvacc\] applied to the steps $\mathcal{A}_i$ above and the following sequence of distributions $$\begin{aligned}
\mathcal{P}_0 &= \mathcal{M}_{[m] \times [n]}(S \times T, p, q) \\
\mathcal{P}_1 &= \mN\left(0, I_m + \theta'_S v_S v_S^\top \right)^{\otimes n}\\
\mathcal{P}_{\text{2a}} &= \mathcal{W}_m\left(n, I_m - \theta_S v_S v_S^\top\right) \\
\mathcal{P}_{\text{2b-3}} &= \mN\left(0, I_d - \theta_S v_S v_S^\top \right)^{\otimes n}\end{aligned}$$ As in the statement of Lemma \[lem:tvacc\], let $\epsilon_i$ be any real numbers satisfying $\TV\left( \mathcal{A}_i(\mP_{i-1}), \mP_i \right) \le \epsilon_i$ for each step $i$. A direct application of Lemma \[lem:randomrotations\] shows that we can take $\epsilon_1 = O(m^{-1} n^{-1}) + k_n(4e^{-3})^{n/2k_n}$. The arguments above show that we can take $\epsilon_{\text{2a}} = O(m^{3/2} n^{-1/2})$ and $\epsilon_{\text{2b-3}} = 0$. Lemma \[lem:tvacc\] now implies the first bound in the theorem statement. The second bound follows from an analogous argument for the distributions $$\mathcal{P}_0 = \text{Bern}(q)^{\otimes m \times n}, \quad \mathcal{P}_1 = \mN\left(0, I_m \right)^{\otimes n}, \quad \mathcal{P}_{\text{2a}} = \mathcal{W}_m\left(n, I_m \right) \quad \text{and} \quad \mathcal{P}_{\text{2b-3}} = \mN\left(0, I_d \right)^{\otimes n}$$ with $\epsilon_1 = O(m^{-1} n^{-1})$, $\epsilon_{\text{2a}} = O(m^{3/2} n^{-1/2})$ and $\epsilon_{\text{2b-3}} = 0$. This completes the proof of the theorem.
Comparing Wishart and Inverse Wishart {#subsec:2-inverse-wishart}
-------------------------------------
This section is devoted to proving the upper bound on the KL divergence between Wishart matrices and their inverses in Theorem \[thm:inverse-wishart\] used in the proof of Theorem \[thm:neg-spca\]. As noted in the previous subsection, the next theorem also implies total variation convergence between Wishart and inverse Wishart when $n \gg d^3$ by Pinsker’s inequality. This theorem is related to a line of recent research examining the total variation convergence between ensembles of random matrices in the regime where $n \gg d$. A number of recent papers have investigated the total variation convergence between the fluctuations of the Wishart and Gaussian orthogonal ensembles, showing that these also converge when $n \gg d^3$ [@jiang2015approximation; @bubeck2016testing; @bubeck2016entropic; @racz2019smooth], established convergence with other matrix ensembles at intermediate asymptotic scales $d \ll n \ll d^3$ [@chetelat2019middle], and given applications of these results to random geometric graphs [@bubeck2016testing; @eldan2016information; @brennan2019phase].
Let $\Gamma_d(x)$ and $\psi_d(x)$ denote the multivariate gamma and digamma functions given by $$\Gamma_d(a) = \pi^{d(d-1)/4} \cdot \prod_{i = 1}^d \Gamma\left( a - \frac{i - 1}{2} \right) \quad \text{and} \quad \psi_d(a) = \frac{\partial \log \Gamma_d(a)}{\partial a} = \sum_{i = 1}^d \psi\left( a - \frac{i - 1}{2} \right)$$ where $\Gamma(z)$ and $\psi(z) = \Gamma'(z)/\Gamma(z)$ denote the ordinary gamma and digamma functions. We will need several approximations to the log-gamma and digamma functions to prove our desired bound on the KL divergence. The classical Stirling series for the log-gamma function is $$\log \Gamma(z) \sim \frac{1}{2} \log(2\pi) + \left( z - \frac{1}{2} \right) \log z - z + \sum_{k = 1}^\infty \frac{B_{2k}}{2k(2k-1)z^{2k - 1}}$$ where $B_m$ denotes the $m$th Bernoulli number. While this series does not converge absolutely for any $z$ because of the growth rate of the coefficients $B_{2k}$, its partial sums are increasingly accurate. More precisely, we have the following series approximation to the log-gamma function (see e.g. pg. 67 of [@remmert2013classical]) up to second order $$\log \Gamma(z) = \frac{1}{2} \log(2\pi) + \left( z - \frac{1}{2} \right) \log z - z + \frac{1}{12z} + O(z^{-3})$$ as $z \to \infty$. A similar series expansion exists for the digamma function, given by $$\psi(z) \sim \log z - \frac{1}{2z} - \sum_{k = 1}^\infty \frac{B_{2k}}{2kz^{2k}}$$ This series exhibits the same phenomenon: while it does not converge absolutely for any $z$, its partial sums are increasingly accurate. We have the following third-order expansion of $\psi(z)$ given by $$\psi(z) = \log z - \frac{1}{2z} - \frac{1}{12z^2} + \frac{2}{z^2} \int_0^\infty \frac{t^3}{(t^2 + z^2)(e^{2\pi t} - 1)} \, dt = \log z - \frac{1}{2z} - \frac{1}{12z^2} + O(z^{-4})$$ as $z \to \infty$. We now state and prove the main theorem of this section.
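Before using these truncations, they can be sanity-checked numerically against Python's `math.lgamma` (a pure-stdlib sketch; the helper names are ours). The first check confirms the $\Theta(z^{-3})$ error of the second-order log-gamma truncation, and the second checks the digamma truncation against a central difference of `lgamma`:

```python
import math

def log_gamma_stirling(z):
    # Second-order truncation of the Stirling series above; the first omitted
    # term is -1/(360 z^3), so the error should scale like z^{-3}.
    return 0.5 * math.log(2 * math.pi) + (z - 0.5) * math.log(z) - z + 1 / (12 * z)

def digamma_approx(z):
    # Third-order truncation of the digamma series above; error is O(z^{-4}).
    return math.log(z) - 1 / (2 * z) - 1 / (12 * z * z)

err10 = abs(log_gamma_stirling(10) - math.lgamma(10))
err100 = abs(log_gamma_stirling(100) - math.lgamma(100))
assert err10 < 5e-6 and err100 < 5e-9
assert 900 < err10 / err100 < 1100   # error shrinks roughly like z^{-3}

# Check the digamma truncation against a central difference of lgamma.
h = 1e-5
psi50 = (math.lgamma(50 + h) - math.lgamma(50 - h)) / (2 * h)
assert abs(digamma_approx(50) - psi50) < 1e-7
```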
\[thm:inverse-wishart\] Let $n \ge d + 1$ and $m \ge d$ be positive integers such that $n = \Theta(m)$, $|m - n| = o(n)$ and $n - d = \Omega(n)$ as $m, n, d \to \infty$, and let $\beta = \frac{1}{m(n - d - 1)}$. Then $$\begin{aligned}
\KL\left( \mathcal{W}_d(n, I_d) \, \Big\| \, \mathcal{W}^{-1}_d(m, \beta \cdot I_d) \right) &= \frac{d^3}{6n} + \frac{s^2d(d + 1)}{8n^2} - \frac{5sd^3}{24n^2} + \frac{sd^3}{12mn} \\
&\quad \quad + O\left( d^2 n^{-3} |s|^3 + d^4 n^{-2} + d^2 n^{-1} \right)\end{aligned}$$ where $s = n - m$. In particular, when $m = n$ and $n \gg d^3$ it follows that $$\KL\left( \mathcal{W}_d(n, I_d) \, \Big\| \, \mathcal{W}^{-1}_d(n, \beta \cdot I_d) \right) = o(1)$$
Note that the given conditions also imply that $m - d = \Omega(m)$. Let $X \sim \mathcal{W}_d(n, I_d)$ and $Y \sim \mathcal{W}^{-1}_d(m, \beta \cdot I_d)$. Throughout this section, $A \in \mathbb{R}^{d \times d}$ will denote a positive semidefinite matrix. It is well known that the Wishart distribution $\mathcal{W}_d(n, I_d)$ is absolutely continuous with respect to the Lebesgue measure on the cone $\mathcal{C}^{\text{PSD}}_d$ of positive semidefinite matrices in $\mathbb{R}^{d \times d}$ [@wishart1928generalised]. Furthermore the density of $X$ with respect to the Lebesgue measure can be written as $$f_X(A) = \frac{1}{2^{nd/2} \cdot \Gamma_d\left( \frac{n}{2} \right)} \cdot |A|^{(n - d - 1)/2} \cdot \exp\left( - \frac{1}{2} \text{Tr}(A) \right)$$ A change of variables from $A \to \beta^{-1} \cdot A^{-1}$ shows that the distribution $\mathcal{W}_d^{-1}(m, \beta \cdot I_d)$ is also absolutely continuous with respect to the Lebesgue measure on $\mathcal{C}^{\text{PSD}}_d$. It is well-known (see e.g. [@gelman2013bayesian]) that the density of $Y$ can be written as $$f_Y(A) = \frac{\beta^{-md/2}}{2^{md/2} \cdot \Gamma_d\left( \frac{m}{2} \right)} \cdot |A|^{-(m + d + 1)/2} \cdot \exp\left( - \frac{\beta^{-1}}{2} \cdot \text{Tr}\left(A^{-1}\right) \right)$$ Now note that $$\begin{aligned}
\log f_X(A) - \log f_Y(A) &= \frac{(m - n)d}{2} \cdot \log 2 + \log \Gamma_d \left( \frac{m}{2} \right) - \log \Gamma_d \left( \frac{n}{2} \right) + \frac{md}{2} \cdot \log \beta \\
&\quad \quad + \frac{m + n}{2} \cdot \log |A| - \frac{1}{2} \text{Tr}(A) + \frac{\beta^{-1}}{2} \cdot \text{Tr}\left(A^{-1}\right)\end{aligned}$$ The expectation of $\log |A|$ where $A \sim \mathcal{W}_d(n, I_d)$ is well known (e.g. see pg. 693 of [@bishop2006pattern]) to be equal to $$\bE_{A \sim \mathcal{W}_d(n, I_d)} \left[ \log |A| \right] = \psi_d\left( \frac{n}{2} \right) + d \log 2$$ Furthermore, it is well known (e.g. see pg. 85 [@mardia1979multivariate]) that the mean of $A^{-1}$ if $A \sim \mathcal{W}_d(n, I_d)$ is $$\bE_{A \sim \mathcal{W}_d(n, I_d)} \left[ A^{-1} \right] = \frac{I_d}{n - d - 1}$$ Therefore we have that $\bE_{A \sim \mathcal{W}_d(n, I_d)} \left[ \text{Tr}\left(A^{-1}\right) \right] = d/(n - d - 1)$. Similarly, we have that $\bE_{A \sim \mathcal{W}_d(n, I_d)} \left[ A \right] = n \cdot I_d$ and thus $\bE_{A \sim \mathcal{W}_d(n, I_d)} \left[ \text{Tr}(A) \right] = nd$. Combining these identities yields that $$\begin{aligned}
\KL\left( \mathcal{W}_d(n, I_d) \, \Big\| \, \mathcal{W}^{-1}_d(m, \beta \cdot I_d) \right) &= \bE_{A \sim \mathcal{W}_d(n, I_d)} \left[ \log f_X(A) - \log f_Y(A) \right] \nonumber \\
&= \frac{(m - n)d}{2} \cdot \log 2 + \log \Gamma_d \left( \frac{m}{2} \right) - \log \Gamma_d \left( \frac{n}{2} \right) + \frac{md}{2} \cdot \log \beta \nonumber \\
&\quad \quad + \frac{m + n}{2} \cdot \left( \psi_d\left( \frac{n}{2} \right) + d \log 2 \right) - \frac{nd}{2} + \frac{\beta^{-1}d}{2(n - d - 1)} \label{eqn:klequation}\end{aligned}$$ We now use the series approximations for $\Gamma(z)$ and $\psi(z)$ mentioned above to approximate each of these terms. Note that since $m - d = \Omega(m)$, we have that $$\begin{aligned}
\log \Gamma_d \left( \frac{m}{2} \right) &= \frac{d(d - 1)}{4} \log \pi + \sum_{i = 1}^d \log \Gamma\left( \frac{m - i + 1}{2} \right) \\
&= \frac{d(d - 1)}{4} \log \pi + \sum_{i = 1}^d \left( \frac{1}{2} \log(2\pi) + \left( \frac{m - i}{2} \right) \log \left( \frac{m - i + 1}{2} \right) \right. \\
&\quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \left. - \left( \frac{m - i + 1}{2} \right) + \frac{1}{6(m - i + 1)} + O(m^{-3}) \right) \\
&= \frac{d(d - 1)}{4} \log \pi + \frac{d}{2} \log(2\pi) - \frac{dm}{2} + \frac{d(d - 1)}{4} + O(dm^{-3}) \\
&\quad \quad + \sum_{i = 1}^d \left( \left( \frac{m - i}{2} \right) \log \left( \frac{m}{2} \right) + \left( \frac{m - i}{2} \right) \log \left( 1 - \frac{i - 1}{m} \right) + \frac{1}{6(m - i + 1)} \right)\end{aligned}$$ using the fact that $\sum_{i = 1}^d (i - 1) = d(d - 1)/2$. Let $H_n$ denote the harmonic series $H_n = \sum_{i =1 }^n 1/i$. Using the well-known fact that $\psi(n + 1) = H_n - \gamma$ where $\gamma$ is the Euler-Mascheroni constant, we have that $$\begin{aligned}
\sum_{i = 1}^d \frac{1}{m - i + 1} &= H_m - H_{m - d} \\
&= \log(m + 1) - \log(m - d + 1) + O(m^{-1}) \\
&= \frac{d}{m + 1} + \frac{d^2}{2(m + 1)^2} + O(d^3 m^{-3}) + O(m^{-1}) \\
&= O\left(dm^{-1}\right)\end{aligned}$$ where the second-to-last estimate follows by applying the Taylor approximation $\log(1 - x) = - x - \frac{1}{2} x^2 + O(x^{3})$ for $x = \frac{d}{m + 1} \in (0, 1)$. Applying this Taylor approximation again, we have that $$\begin{aligned}
&\sum_{i = 1}^d \left( \frac{m - i}{2} \right) \log \left( 1 - \frac{i - 1}{m} \right) \\
&\quad \quad = - \frac{1}{2} \sum_{i = 1}^d \left( \frac{(m - i)(i - 1)}{m} + \frac{(m - i)(i - 1)^2}{2m^2} + O\left(i^3m^{-2}\right) \right) \\
&\quad \quad = O(d^4 m^{-2}) - \frac{1}{2} \sum_{i = 1}^d \left( \frac{(m - 1)(i - 1)}{m} - \frac{(i - 1)^2}{m} + \frac{(m - 1)(i - 1)^2}{2m^2} - \frac{(i - 1)^3}{2m^2}\right) \\
&\quad \quad = O(d^4 m^{-2}) - \frac{(m - 1)d(d - 1)}{4m} + \frac{d(d - 1)(2d - 1)}{12m} - \frac{(m - 1)d(d - 1)(2d - 1)}{24m^2} + \frac{d^2(d - 1)^2}{16m^2} \\
&\quad \quad = O(d^4 m^{-2}) - \frac{d(d - 1)}{4} + \frac{d(d - 1)(2d + 5)}{24m}\end{aligned}$$ using the identities $\sum_{i = 1}^d (i - 1)^2 = d(d - 1)(2d - 1)/6$ and $\sum_{i = 1}^d (i - 1)^3 = d^2(d - 1)^2/4$. Combining all of these approximations and simplifying using the fact that $m - d = \Omega(m)$ yields that $$\begin{aligned}
\log \Gamma_d \left( \frac{m}{2} \right) &= \frac{d(d - 1)}{4} \log \pi + \frac{d}{2} \log(2\pi) - \frac{dm}{2} + \frac{dm}{2} \log \left( \frac{m}{2} \right) - \frac{d(d + 1)}{4} \log \left( \frac{m}{2} \right) \\
&\quad \quad + \frac{d(d - 1)(2d + 5)}{24m} + O\left(d^4 m^{-2} + dm^{-1} \right)\end{aligned}$$ as $m, d \to \infty$ and $m - d = \Omega(m)$. An analogous estimate is also true for $\log \Gamma_d \left( \frac{n}{2} \right)$. Similar approximations now yield since $n - d = \Omega(n)$, we have that $$\begin{aligned}
\psi_d\left( \frac{n}{2} \right) &= \sum_{i = 1}^d \left( \log \left( \frac{n - i + 1}{2} \right) - \frac{1}{n - i + 1} + O(n^{-2}) \right) \\
&= d \log \left( \frac{n}{2} \right) + \sum_{i = 1}^d \log \left( 1 - \frac{i - 1}{n} \right) - H_{n} + H_{n - d} + O(dn^{-2}) \\
&= d \log \left( \frac{n}{2} \right) - \sum_{i = 1}^d \left( \frac{i - 1}{n} + \frac{(i - 1)^2}{2n^2} + O\left(i^3 n^{-3}\right) \right) - \frac{d}{n + 1} - \frac{d^2}{2(n + 1)^2} \\
&\quad \quad + O\left(d^3 n^{-3} + dn^{-2} \right) \\
&= d \log \left( \frac{n}{2} \right) - \frac{d(d - 1)}{2n} - \frac{d(d - 1)(2d - 1)}{12n^2} - \frac{d}{n + 1} + O\left(d^4 n^{-3} + d^2 n^{-2} \right)\end{aligned}$$ Here we have expanded $\psi(n + 1) = H_n - \gamma$ to an additional order with the approximation $$\begin{aligned}
H_{n} - H_{n - d} &= \log(n + 1) - \log(n - d + 1) - \frac{1}{2(n + 1)} + \frac{1}{2(n - d + 1)} + O(n^{-2}) \\
&= \frac{d}{n + 1} + \frac{d^2}{2(n + 1)^2} + O(dn^{-2})\end{aligned}$$ Combining all of these estimates and simplifying with $\beta^{-1} = m(n - d - 1)$ now yields that $$\begin{aligned}
&\KL\left( \mathcal{W}_d(n, I_d) \, \Big\| \, \mathcal{W}^{-1}_d(m, \beta \cdot I_d) \right) \\
&\quad \quad = md \log 2 + \frac{md}{2} \log \beta - \frac{nd}{2} + \frac{\beta^{-1}d}{2(n - d - 1)} + \log \Gamma_d \left( \frac{m}{2} \right) - \log \Gamma_d \left( \frac{n}{2} \right) + \frac{m + n}{2} \cdot \psi_d\left( \frac{n}{2} \right) \\
&\quad \quad = md \log 2 + \frac{md}{2} \log \beta - \frac{nd}{2} + \frac{\beta^{-1}d}{2(n - d - 1)} - \frac{d(m - n)}{2} + \frac{dm}{2} \log \left( \frac{m}{2} \right) - \frac{dn}{2} \log \left( \frac{n}{2} \right) \\
&\quad \quad \quad \quad - \frac{d(d + 1)}{4} \log \left( \frac{m}{n} \right) + \frac{d(d - 1)(2d + 5)}{24} \cdot (m^{-1} - n^{-1}) + \frac{(m + n)d}{2} \log \left( \frac{n}{2} \right) \\
&\quad \quad \quad \quad - \frac{(m + n)d(d - 1)}{4n} - \frac{(m + n)d(d - 1)(2d - 1)}{24n^2} - \frac{(m + n)d}{2(n + 1)} + O\left( d^4 n^{-2} + d^2 n^{-1} \right) \\
&\quad \quad = - \frac{(m + n)d(d - 1)}{4n} - \frac{(m + n)d}{2(n + 1)} - \frac{d(d + 1)}{4} \log \left( \frac{m}{n} \right) - \frac{dm}{2} \log \left( 1 - \frac{d + 1}{n} \right) \\
&\quad \quad \quad \quad - \frac{(m + n)d(d - 1)(2d - 1)}{24n^2} + \frac{(n - m)d(d - 1)(2d + 5)}{24mn} + O\left( d^4 n^{-2} + d^2 n^{-1} \right) \\
&\quad \quad = - \frac{(m + n)d(d + 1)}{4n} + \frac{d(d + 1)}{4} \left( \frac{n - m}{n} + \frac{(n - m)^2}{2n^2} + O\left(n^{-3} |s|^3\right) \right) \\
&\quad \quad \quad \quad + \frac{dm}{2} \left( \frac{d + 1}{n} + \frac{(d + 1)^2}{2n^2} + O(d^3 n^{-3}) \right) - \frac{(m + n)d(d - 1)(2d - 1)}{24n^2} \\
&\quad \quad \quad \quad + \frac{sd(d - 1)(2d + 5)}{24mn} + O\left( d^4 n^{-2} + d^2 n^{-1} \right) \\
&\quad \quad = \frac{d^3}{6n} + \frac{s^2d(d + 1)}{8n^2} - \frac{5sd^3}{24n^2} + \frac{sd^3}{12mn} + O\left( d^2 n^{-3} |s|^3 + d^4 n^{-2} + d^2 n^{-1} \right)\end{aligned}$$ In the fourth equality, we used the fact that $1/(n + 1) = 1/n + O(n^{-2})$, that $s = n - m = o(n)$ and the Taylor approximation $\log(1 - x) = - x - \frac{1}{2} x^2 + O(x^{3})$ for $|x| < 1$. The last line follows from absorbing small terms into the error term. The second part of the theorem statement follows immediately from substituting $m = n$ and $s = 0$ into the bound above and noting that the dominant term is $d^3/6n$ when $n \gg d^3$.
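The exact KL divergence in Equation (\[eqn:klequation\]) can also be evaluated numerically. The sketch below (pure Python; the hand-rolled digamma helper and all parameter choices are illustrative) specializes to $m = n$, where the $\log \Gamma_d$ terms cancel and $\beta^{-1} = n(n - d - 1)$. Note that at fixed small $d$ the error term $O(d^2 n^{-1})$ is of the same order as $d^3/6n$, so the sketch checks only positivity and the $\Theta(1/n)$ decay rate, not the leading constant:

```python
import math

def digamma(x):
    # Push x above 30 with psi(x) = psi(x + 1) - 1/x, then apply the
    # asymptotic series psi(z) ~ log z - 1/(2z) - 1/(12 z^2) + 1/(120 z^4) - ...
    acc = 0.0
    while x < 30:
        acc -= 1.0 / x
        x += 1.0
    inv2 = 1.0 / (x * x)
    return acc + math.log(x) - 0.5 / x - inv2 * (1.0 / 12 - inv2 * (1.0 / 120 - inv2 / 252))

def kl_wishart_vs_inverse(n, d):
    # Exact KL( W_d(n, I_d) || W_d^{-1}(n, beta I_d) ) from the equation above,
    # specialized to m = n: the log Gamma_d terms cancel, the (m - n) log 2
    # term vanishes, and beta^{-1} = n (n - d - 1).
    beta_inv = n * (n - d - 1)
    psi_d = sum(digamma((n - i + 1) / 2) for i in range(1, d + 1))
    return n * (psi_d + d * math.log(2)) - (n * d / 2) * math.log(beta_inv)

kl1 = kl_wishart_vs_inverse(10_000, 4)
kl2 = kl_wishart_vs_inverse(100_000, 4)
assert 0 < kl2 < kl1 < 0.01   # small and positive
assert 9 < kl1 / kl2 < 11     # Theta(1/n) decay at fixed d
```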
We now make two remarks on the theorem above. The first motivates the choice of the parameter $\beta$ to satisfy $\beta^{-1} = m(n - d - 1)$. Note that the KL divergence in Equation (\[eqn:klequation\]) depends on $\beta$ only through the terms $$\frac{md}{2} \log \beta + \frac{\beta^{-1}d}{2(n - d - 1)}$$ an expression that is minimized at the stationary point $\beta^{-1} = m(n - d - 1)$. Thus the KL divergence in Equation (\[eqn:klequation\]) is minimized for a fixed pair $(m, n)$ at this value of $\beta$. We also remark that the distributions $\mathcal{W}_d(n, I_d)$ and $\mathcal{W}^{-1}_d(m, \beta \cdot I_d)$ only converge in KL divergence if $n \gg d^3$, as the expression in Theorem \[thm:inverse-wishart\] is easily seen not to converge to zero if $n = O(d^3)$.
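The stationarity claim in the first remark can be verified directly: writing $g(\beta)$ for the two $\beta$-dependent terms, $g(c \beta^*) - g(\beta^*) = \frac{md}{2}\left(\log c + 1/c - 1\right) \ge 0$ for all $c > 0$, with equality only at $c = 1$. A tiny stdlib sketch with illustrative parameter values:

```python
import math

# Illustrative parameter values; g(beta) collects the two beta-dependent
# terms of the KL divergence in Equation (eqn:klequation).
m = n = 50
d = 5

def g(beta):
    return (m * d / 2) * math.log(beta) + d / (2 * beta * (n - d - 1))

beta_star = 1 / (m * (n - d - 1))
for c in (0.5, 0.9, 1.1, 2.0):
    assert g(beta_star) < g(c * beta_star)
```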
Negative Correlations, Sparse Mixtures and Supervised Problems {#sec:2-supervised}
==============================================================
In the first part of this section, we introduce and give a reduction to the intermediate problem of imbalanced sparse Gaussian mixtures, as outlined in Section \[subsec:1-tech-design-matrices\] and the beginning of Section \[sec:2-bernoulli-rotations\]. This reduction is then used in the second part of this section, along with the reduction to negative sparse PCA in the previous section, as a subroutine in a reduction to robust sparse linear regression and mixtures of sparse linear regressions, as outlined in Section \[subsec:1-tech-decomposing\]. Our reduction to imbalanced sparse Gaussian mixtures will also be used in Section \[sec:3-robust-and-supervised\] to show computational lower bounds for robust sparse mean estimation.
Reduction to Imbalanced Sparse Gaussian Mixtures {#subsec:3-rsme-reduction}
------------------------------------------------
**Algorithm** $k$<span style="font-variant:small-caps;">-bpds-to-isgm</span>
*Inputs*: Matrix $M \in \{0, 1\}^{m \times n}$, dense subgraph dimensions $k_m$ and $k_n$ where $k_n$ divides $n$ and the following parameters
- partition $F$ of $[n]$ into $k_n$ parts of size $n/k_n$, edge probabilities $0 < q < p \le 1$ and a slow growing function $w(n) = \omega(1)$
- target $\pr{isgm}$ parameters $(N, d, \mu, \epsilon)$ satisfying that $\epsilon = 1/r$ for some prime number $r$, $$wN \le k_nr\ell, \quad m \le d, \quad n \le k_nr^t \le \textnormal{poly}(n) \quad \text{and} \quad \mu \le \frac{c}{\sqrt{r^t(r - 1) \log(k_nmr^t)}}$$ for some $t \in \mathbb{N}$, a sufficiently small constant $c > 0$ and where $\ell = \frac{r^t - 1}{r - 1}$
1. *Pad*: Form $M_{\text{PD}} \in \{0, 1\}^{m \times k_nr^t}$ by adding $k_nr^t - n$ new columns sampled i.i.d. from $\text{Bern}(q)^{\otimes m}$ to the right end of $M$. Let $F'$ be the partition formed by letting $F'_i$ be $F_i$ together with exactly $r^t - n/k_n$ of the new columns.
2. *Bernoulli Rotations*: Fix a partition $[k_nr\ell] = F_1'' \cup F_2'' \cup \cdots \cup F_{k_n}''$ into $k_n$ parts each of size $r\ell$ and compute the matrix $M_{\text{R}} \in \mathbb{R}^{m \times k_n r\ell}$ as follows:
1. For each row $i$ and part $F_j'$, apply $\pr{Bern-Rotations}$ to the vector $(M_{\text{PD}})_{i, F_j'}$ of entries in row $i$ and in columns from $F_j'$ with matrix parameter $K_{r, t}$, rejection kernel parameter $R_{\pr{rk}} = k_n mr^t$, Bernoulli probabilities $0 < q < p \le 1$, $\lambda = \sqrt{1 + (r - 1)^{-1}}$, mean parameter $\lambda \sqrt{r^t(r - 1)} \cdot \mu$ and output dimension $r\ell$.
2. Set the entries of $(M_{\text{R}})_{i, F''_j}$ to be the entries in order of the vector output in (1).
3. *Permute and Output*: Form $X \in \mathbb{R}^{d \times N}$ by choosing $N$ distinct columns of $M_{\text{R}}$ uniformly at random, embedding the resulting matrix as the first $m$ rows of $X$ and sampling the remaining $d - m$ rows of $X$ i.i.d. from $\mN(0, I_N)$. Output the columns $(X_1, X_2, \dots, X_N)$ of $X$.
In this section, we give our reduction from $k\pr{-bpds}$ to the intermediate problem $\pr{isgm}$, which we will reduce from in subsequent sections to obtain several of our main computational lower bounds. We present our reduction to $\pr{isgm}$ with dense Bernoulli rotations applied with the design matrix $K_{r, t}$ from Definition \[defn:Krt\], and at the end of this section sketch the variant using the random design matrix alternative $R_{n, \epsilon}$ introduced in Section \[subsec:2-Rne\]. Throughout this section, the input $k\pr{-bpds}$ instance will be described by its $m \times n$ adjacency matrix of Bernoulli random variables. The problem $\pr{isgm}$, imbalanced sparse Gaussian mixtures, is a simple vs. simple hypothesis testing problem defined formally below. A similar distribution was also used in [@diakonikolas2017statistical] to construct an instance of robust sparse mean estimation inducing the tight statistical-computational gap in the statistical query model.
Given some $\mu \in \mathbb{R}$ and $\epsilon \in (0, 1)$, let $\mu'$ be such that $\epsilon \cdot \mu' + (1 - \epsilon) \cdot \mu = 0$. For each subset $S \subseteq [d]$, $\pr{isgm}_D(n, S, d, \mu, \epsilon)$ denotes the distribution over $X = (X_1, X_2, \dots, X_n)$ with $X_i \in \mathbb{R}^d$, where $$X_1, X_2, \dots, X_n \sim_{\textnormal{i.i.d.}} \pr{mix}_{\epsilon}\left( \mN(\mu \cdot \mathbf{1}_S, I_d), \mN(\mu' \cdot \mathbf{1}_S, I_d) \right)$$
We will use the notation $\pr{isgm}(n, k, d, \mu, \epsilon)$ to refer to the hypothesis testing problem between $H_0: X_1, X_2, \dots, X_n \sim_{\text{i.i.d.}} \mN(0, I_d)$ and an alternative hypothesis $H_1$ sampling the distribution above where $S$ is chosen uniformly at random among all $k$-subsets of $[d]$. Our reduction $k$<span style="font-variant:small-caps;">-bpds-to-isgm</span> is shown in Figure \[fig:isgmreduction\]. The next theorem encapsulates the total variation guarantees of this reduction. A key parameter is the prime number $r$, which is used to parameterize the design matrices $K_{r, t}$ in the $\pr{Bern-Rotations}$ step.
To show the tightest possible statistical-computational gaps in applications of this theorem, we would ideally want to take $n$ such that $n = \Theta(k_nr^t)$. When $r$ is growing with $N$, this induces number theoretic constraints on our choices of parameters that require careful attention and will be discussed in Section \[subsec:3-rsme\]. Because of this subtlety, we have kept the statement of our next theorem technically precise and in terms of all of the free parameters of the reduction $k\pr{-bpds-to-isgm}$. Ignoring these number theoretic constraints, the reduction $k\pr{-bpds-to-isgm}$ can be interpreted as essentially mapping an instance of $k\pr{-bpds}$ with parameters $(m, n, k_m, k_n, p, q)$, where $k_n = o(\sqrt{n})$, $k_m = o(\sqrt{m})$ and the planted row indices are $S$ with $|S| = k_m$, to the instance $\pr{isgm}_D(N, S, d, \mu, \epsilon)$ where $\epsilon \in (0, 1)$ is arbitrary and can vary with $n$. The target parameters $N, d$ and $\mu$ satisfy that $$d = \Omega(m), \quad N = o(n) \quad \text{and} \quad \mu \asymp \frac{1}{\sqrt{\log n}} \cdot \sqrt{\frac{\epsilon k_n}{n}}$$ All of our applications will handle the number theoretic constraints to set parameters so that they nearly satisfy these conditions. The slow-growing function $w(n)$ is chosen so that Step 3 subsamples the produced samples by a large enough factor to enable an application of finite de Finetti’s theorem.
We now state our total variation guarantees for $k$<span style="font-variant:small-caps;">-bpds-to-isgm</span>. Given a partition $F$ of $[n]$ with $[n] = F_1 \cup F_2 \cup \cdots \cup F_{k_n}$, let $\mU_n(F)$ denote the distribution of $k_n$-subsets of $[n]$ formed by choosing one member element of each of $F_1, F_2, \dots, F_{k_n}$ uniformly at random. Let $\mU_{n, k_n}$ denote the uniform distribution on $k_n$-subsets of $[n]$.
\[thm:isgmreduction\] Let $n$ be a parameter, $r = r(n) \ge 2$ be a prime number and $w(n) = \omega(1)$ be a slow-growing function. Fix initial and target parameters as follows:
- [Initial]{.nodecor} $k\pr{-bpds}$ [Parameters:]{.nodecor} vertex counts on each side $m$ and $n$ that are polynomial in one another, dense subgraph dimensions $k_m$ and $k_n$ where $k_n$ divides $n$, edge probabilities $0 < q < p \le 1$ with $\min\{q, 1 - q\} = \Omega(1)$ and $p - q \ge (mn)^{-O(1)}$, and a partition $F$ of $[n]$.
- [Target]{.nodecor} $\pr{isgm}$ [Parameters:]{.nodecor} $(N, d, \mu, \epsilon)$ where $\epsilon = 1/r$ and there is a parameter $t = t(N) \in \mathbb{N}$ with $$wN \le \frac{k_nr(r^t - 1)}{r - 1}, \quad m \le d \le \textnormal{poly}(n), \quad n \le k_nr^t \le \textnormal{poly}(n) \quad \textnormal{and}$$ $$0 \le \mu \le \frac{\delta}{2 \sqrt{6\log (k_nmr^t) + 2\log (p - q)^{-1}}} \cdot \frac{1}{\sqrt{r^t(r - 1)(1 + (r - 1)^{-1})}}$$ where $\delta = \min \left\{ \log \left( \frac{p}{q} \right), \log \left( \frac{1 - q}{1 - p} \right) \right\}$.
Let $\mathcal{A}(G)$ denote $k$<span style="font-variant:small-caps;">-bpds-to-isgm</span> applied with the parameters above to a bipartite graph $G$ with $m$ left vertices and $n$ right vertices. Then $\mathcal{A}$ runs in $\textnormal{poly}(m, n)$ time and it follows that $$\begin{aligned}
\TV\left( \mathcal{A}\left( \mathcal{M}_{[m] \times [n]}(S \times T, p, q) \right), \, \pr{isgm}_D(N, S, d, \mu, \epsilon) \right) &= O\left( w^{-1} + k_n^{-2}m^{-2}r^{-2t} \right) \\
\TV\left( \mathcal{A}\left( \textnormal{Bern}(q)^{\otimes m \times n} \right), \, \mN(0, I_d)^{\otimes N} \right) &= O\left( k_n^{-2}m^{-2}r^{-2t} \right)\end{aligned}$$ for all subsets $S \subseteq [m]$ with $|S| = k_m$ and subsets $T \subseteq [n]$ with $|T| = k_n$ and $|T \cap F_i| = 1$ for each $1 \le i \le k_n$.
In the rest of this section, let $\mathcal{A}$ denote the reduction $k\pr{-bpds-to-isgm}$ with input $(M, F)$ where $F$ is a partition of $[n]$ and output $(X_1, X_2, \dots, X_N)$. Let $\text{Hyp}(N, K, n)$ denote a hypergeometric distribution with $n$ draws from a population of size $N$ with $K$ success states. We will also need the upper bound on the total variation between hypergeometric and binomial distributions given by $$\TV\left( \text{Hyp}(N, K, n), \text{Bin}(n, K/N) \right) \le \frac{4n}{N}$$ This bound is a simple case of finite de Finetti’s theorem and is proven in Theorem (4) in [@diaconis1980finite]. We now proceed to establish the total variation guarantees for Bernoulli rotations and subsampling as in Steps 2 and 3 of $\mathcal{A}$ in the next two lemmas.
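The hypergeometric-binomial bound above can be checked numerically for small, illustrative parameter values, computing both probability mass functions exactly with `math.comb` (a pure-stdlib sketch):

```python
from math import comb

def hyp_pmf(N, K, n, k):
    # P[Hyp(N, K, n) = k]: k successes in n draws without replacement
    # from a population of size N containing K success states
    return comb(K, k) * comb(N - K, n - k) / comb(N, n)

def bin_pmf(n, p, k):
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

N, K, n = 1000, 300, 20
tv = 0.5 * sum(abs(hyp_pmf(N, K, n, k) - bin_pmf(n, K / N, k)) for k in range(n + 1))
assert 0 < tv <= 4 * n / N   # the bound from [@diaconis1980finite]: 4n/N = 0.08
```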
Before proceeding to prove these lemmas, we make a definition that will be used in the next few sections. Suppose that $M$ is a $b \times a$ matrix, $F$ and $F'$ are partitions of $[ka]$ and $[kb]$ into $k$ equally sized parts and $S \subseteq [ka]$ is such that $|S \cap F_i| = 1$ for each $1 \le i \le k$. Then define the vector $v = v_{S, F, F'}(M) \in \mathbb{R}^{kb}$ to be such that the restriction $v_{F'_i}$ to the elements of $F'_i$ is given by $$v_{F'_i} = M_{\cdot, \sigma_{F_i}(j)} \quad \textnormal{where } j \text{ is the unique element in } S \cap F_i$$ Here, $M_{\cdot, j}$ denotes the $j$th column of $M$ and $\sigma_{F_i}$ denotes the order preserving bijection from $F_i$ to $[a]$. In other words, $v_{S, F, F'}$ is the vector formed by concatenating the columns of $M$ along the partition $F'$, where the elements $S \cap F_i$ select which column appears along each part $F_i'$. In this section, whenever $S \cap F_i$ has size one, we will abuse notation and also use $S \cap F_i$ to denote its unique element.
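In $0$-indexed code, this definition reads as follows (a sketch; the helper name `v_select` is ours, and the parts of each partition are passed as lists):

```python
def v_select(M, F, Fprime, S):
    """Build v_{S,F,F'}(M) (0-indexed): for each part F_i, the unique element
    j of S intersect F_i selects column sigma_{F_i}(j) of the b x a matrix M,
    and that column fills the positions of v indexed by F'_i."""
    v = [None] * sum(len(part) for part in Fprime)
    for Fi, Fpi in zip(F, Fprime):
        j = next(x for x in sorted(Fi) if x in S)  # unique element of S ∩ F_i
        col = sorted(Fi).index(j)                  # order-preserving bijection F_i -> [a]
        for row, pos in enumerate(sorted(Fpi)):    # the b entries of that column
            v[pos] = M[row][col]
    return v

# Example: M is 2 x 2 with columns (1,3) and (2,4); k = 2 parts.
M = [[1, 2], [3, 4]]
F = [[0, 1], [2, 3]]        # partition of [ka] = {0,1,2,3}
Fprime = [[0, 1], [2, 3]]   # partition of [kb] = {0,1,2,3}
S = {1, 2}                  # selects column 2 of M for part 1, column 1 for part 2
assert v_select(M, F, Fprime, S) == [2, 4, 1, 3]
```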
\[lem:isgm-rotations\] Let $F'$ and $F''$ be fixed partitions of $[k_nr^t]$ and $[k_nr\ell]$ into $k_n$ parts of size $r^t$ and $r\ell$, respectively, and let $S \subseteq [m]$ be a fixed $k_m$-subset. Let $T \subseteq [k_n r^t]$ where $|T \cap F_i'| = 1$ for each $1 \le i \le k_n$. Let $\mathcal{A}_{\textnormal{2}}$ denote Step 2 of $k\pr{-bpds-to-isgm}$ with input $M_{\textnormal{PD}}$ and output $M_{\textnormal{R}}$. Suppose that $p, q$ and $\mu$ are as in Theorem \[thm:isgmreduction\]. Then it follows that $$\begin{aligned}
&\TV\left( \mathcal{A}_{\textnormal{2}} \left( \mathcal{M}_{[m] \times [k_n r^t]} \left( S \times T, \textnormal{Bern}(p), \textnormal{Bern}(q) \right) \right), \, \mL\left( \mu \sqrt{r^t(r - 1)} \cdot \mathbf{1}_S v_{T, F', F''}(K_{r, t})^\top + \mN(0, 1)^{\otimes m \times k_nr\ell} \right) \right) \\
&\quad \quad = O\left(k_n^{-2}m^{-2}r^{-2t} \right) \\
&\TV\left( \mathcal{A}_{\textnormal{2}} \left(\textnormal{Bern}(q)^{\otimes m \times k_nr^t} \right), \, \mN(0, 1)^{\otimes m \times k_nr\ell} \right) = O\left(k_n^{-2}m^{-2}r^{-2t} \right)\end{aligned}$$
First consider the case where $M_{\textnormal{PD}} \sim \mathcal{M}_{[m] \times [k_nr^t]} \left( S \times T, \textnormal{Bern}(p), \textnormal{Bern}(q) \right)$. Observe that the subvectors of $M_{\textnormal{PD}}$ are distributed as $$(M_{\textnormal{PD}})_{i, F_j'} \sim \left\{ \begin{array}{ll} \pr{pb}\left(F_j', T \cap F_j', p, q\right) &\textnormal{if } i \in S \\ \textnormal{Bern}(q)^{\otimes r^t} &\textnormal{otherwise} \end{array} \right.$$ and are independent. Combining the upper bound on the singular values of $K_{r, t}$ in Lemma \[lem:Krtsv\] with Lemma \[lem:bern-rotations\] applied with $R_{\pr{rk}} = k_n m r^t$ and the condition on $\mu$ in the statement of Theorem \[thm:isgmreduction\] yields that $$\begin{aligned}
\TV\left( (M_{\textnormal{R}})_{i, F''_j}, \, \mN\left( \mu \sqrt{r^t(r - 1)} \cdot (K_{r, t})_{\cdot, T \cap F_j'}, I_{r\ell} \right) \right) &= O\left(k_n^{-3}m^{-3}r^{-2t} \right) \quad \textnormal{if } i \in S\\
\TV\left( (M_{\textnormal{R}})_{i, F''_j}, \, \mN\left( 0, I_{r\ell} \right) \right) &= O\left(k_n^{-3}m^{-3}r^{-2t} \right) \quad \textnormal{otherwise}\end{aligned}$$ Now observe that the subvectors $(M_{\textnormal{R}})_{i, F''_j}$ are also independent. Therefore the tensorization property of total variation in Fact \[tvfacts\] implies that $\TV\left( M_{\textnormal{R}}, \mL(Z) \right) = O\left(k_n^{-2}m^{-2}r^{-2t} \right)$ where $Z$ is defined so that its subvectors $Z_{i, F_j''}$ are independent and distributed as $$Z_{i, F_j''} \sim \left\{ \begin{array}{ll} \mN\left( \mu \sqrt{r^t(r - 1)} \cdot (K_{r, t})_{\cdot, T \cap F_j'}, I_{r\ell} \right) &\textnormal{if } i \in S \\ \mN\left( 0, I_{r\ell} \right) &\textnormal{otherwise} \end{array} \right.$$ Note that the entries of $Z$ are independent Gaussians each with variance $1$. Furthermore, the mean of $Z$ can be verified to be exactly $\mu \sqrt{r^t(r - 1)} \cdot \mathbf{1}_S v_{T, F', F''}(K_{r, t})^\top$. This completes the proof of the first total variation upper bound in the statement of the lemma. The second bound follows from the same argument above applied with $S = \emptyset$.
\[lem:subsampling\] Let $F', F'', S$ and $T$ be as in Lemma \[lem:isgm-rotations\]. Let $\mathcal{A}_{\textnormal{3}}$ denote Step 3 of $k\pr{-bpds-to-isgm}$ with input $M_{\textnormal{R}}$ and output $(X_1, X_2, \dots, X_N)$. Then $$\TV\left( \mathcal{A}_{\textnormal{3}} \left( \tau \cdot \mathbf{1}_S v_{T, F', F''}(K_{r, t})^\top + \mN(0, 1)^{\otimes m \times k_n r\ell} \right), \pr{isgm}_D(N, S, d, \mu, \epsilon) \right) \le 4w^{-1}$$ where $\epsilon = 1/r$ and $\mu = \frac{\tau}{\sqrt{r^t(r - 1)}}$. Furthermore, it holds that $\mathcal{A}_{\textnormal{3}} \left( \mN(0, 1)^{\otimes m \times k_n r\ell} \right) \sim \mN(0, I_d)^{\otimes N}$.
Suppose that $M_{\textnormal{R}} \sim \tau \cdot \mathbf{1}_S v_{T, F', F''}(K_{r, t})^\top + \mN(0, 1)^{\otimes m \times k_nr\ell}$. For fixed $S, T, F'$ and $F''$, the entries of $M_{\textnormal{R}}$ are independent. Observe that the columns of $M_{\text{R}}$ are independent and distributed according to either $\mN(\mu \cdot \mathbf{1}_S, I_m)$ or $\mN(\mu' \cdot \mathbf{1}_S, I_m)$ where $\mu' = \tau(1 - r)/\sqrt{r^t(r - 1)}$, depending on whether the entry of $v_{T, F', F''}(K_{r, t})$ at the index corresponding to the column is $1/\sqrt{r^t(r - 1)}$ or $(1 - r)/\sqrt{r^t(r - 1)}$.
By Lemma \[lem:suborthogonalmatrices\], it follows that each column of $K_{r, t}$ contains exactly $\ell$ entries equal to $(1 - r)/\sqrt{r^t(r - 1)}$. This implies that exactly $k_n(r - 1)\ell$ entries of $v_{T, F', F''}(K_{r, t})$ are equal to $1/\sqrt{r^t(r - 1)}$. Define $\mR_{N}(s)$ to be the distribution on $\mathbb{R}^N$ with a sample $v \sim \mR_{N}(s)$ generated by first choosing an $s$-subset $U$ of $[N]$ uniformly at random and then setting $v_i = 1/\sqrt{r^t(r - 1)}$ if $i \in U$ and $v_i = (1 - r)/\sqrt{r^t(r - 1)}$ if $i \not \in U$. Note that the number of columns distributed as $\mN(\mu \cdot \mathbf{1}_S, I_m)$ in $M_{\text{R}}$ chosen to be in $X$ is distributed according to $\text{Hyp}(k_nr\ell, k_n(r - 1)\ell, N)$. Step 3 of $\mathcal{A}$ therefore ensures that, if $M_{\textnormal{R}}$ is distributed as above, then $$X \sim \mL\left( \tau \cdot \mathbf{1}_{S} \mR_N(\text{Hyp}(k_nr\ell, k_n(r - 1)\ell, N))^\top + \mN(0, 1)^{\otimes d \times N} \right)$$ Observe that the data matrix for a sample from $\pr{isgm}_D(N, S, d, \mu, \epsilon)$ can be expressed similarly as $$\pr{isgm}_D(N, S, d, \mu, \epsilon) = \mL\left( \tau \cdot \mathbf{1}_{S} \mR_N(\text{Bin}(N, 1 - \epsilon))^\top + \mN(0, 1)^{\otimes d \times N} \right)$$ where again we set $\mu = \tau/\sqrt{r^t(r - 1)}$. The conditioning property of $\TV$ in Fact \[tvfacts\] now implies that $$\TV\left( \mL(X), \pr{isgm}_D(N, S, d, \mu, \epsilon) \right) \le \TV\left(\text{Bin}(N, 1 - \epsilon), \text{Hyp}\left(k_nr\ell, k_n(r - 1)\ell, N\right) \right) \le \frac{4N}{k_nr\ell} \le 4w^{-1}$$ The last inequality follows from the application of Theorem 4 in [@diaconis1980finite] to the hypergeometric distribution above along with the fact that $1 - \epsilon = (k_n(r - 1)\ell)/k_nr\ell$ and $wN \le k_n r\ell$. This completes the proof of the upper bound in the lemma statement. Now consider applying the above argument with $\tau = 0$. 
It follows that $\mathcal{A}_{\textnormal{3}} \left( \mN(0, 1)^{\otimes m \times k_n r\ell} \right) \sim \mN(0, 1)^{\otimes d \times N} = \mN(0, I_d)^{\otimes N}$, which completes the proof of the lemma.
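The binomial-versus-hypergeometric comparison at the heart of this lemma can also be checked numerically. The sketch below, with small illustrative parameters far from the asymptotic regime of the lemma, computes the exact total variation between $\textnormal{Bin}(N, 1 - \epsilon)$ and $\textnormal{Hyp}(k_nr\ell, k_n(r - 1)\ell, N)$ and verifies the $4N/(k_nr\ell)$ bound.

```python
# Exact TV between the binomial and hypergeometric laws compared in the
# proof above, versus the 4N/(k_n * r * l) bound. Parameters are illustrative.
from math import comb

def tv_bin_hyp(N, pop, succ):
    """TV between Bin(N, succ/pop) and Hyp(pop, succ, N)."""
    p = succ / pop
    total = 0.0
    for j in range(N + 1):
        b = comb(N, j) * p**j * (1 - p)**(N - j)
        h = comb(succ, j) * comb(pop - succ, N - j) / comb(pop, N)
        total += abs(b - h)
    return 0.5 * total

k_n, r, ell, N = 10, 3, 9, 27           # population k_n * r * ell = 270
pop, succ = k_n * r * ell, k_n * (r - 1) * ell
tv = tv_bin_hyp(N, pop, succ)
assert tv <= 4 * N / pop                # the bound used in the proof
```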
We now combine these lemmas to complete the proof of Theorem \[thm:isgmreduction\].
We apply Lemma \[lem:tvacc\] to the steps $\mathcal{A}_i$ of $\mathcal{A}$ under each of $H_0$ and $H_1$ to prove Theorem \[thm:isgmreduction\]. Define the steps of $\mathcal{A}$ to map inputs to outputs as follows $$(M, F) \xrightarrow{\mathcal{A}_1} (M_{\text{PD}}, F') \xrightarrow{\mathcal{A}_2} (M_{\text{R}}, F'') \xrightarrow{\mathcal{A}_{\text{3}}} (X_1, X_2, \dots, X_N)$$ We first prove the desired result in the case that $H_1$ holds. Consider Lemma \[lem:tvacc\] applied to the steps $\mathcal{A}_i$ above and the following sequence of distributions $$\begin{aligned}
\mathcal{P}_0 &= \mathcal{M}_{[m] \times [n]}(S \times T, \textnormal{Bern}(p), \textnormal{Bern}(q)) \\
\mathcal{P}_1 &= \mathcal{M}_{[m] \times [k_nr^t]} \left( S \times T, \textnormal{Bern}(p), \textnormal{Bern}(q) \right) \\
\mathcal{P}_2 &=\mu \sqrt{r^t(r - 1)} \cdot \mathbf{1}_{S} v_{T, F', F''}(K_{r, t})^\top + \mN(0, 1)^{\otimes m \times k_nr\ell} \\
\mathcal{P}_{\text{3}} &= \pr{isgm}_D(N, S, d, \mu, \epsilon)\end{aligned}$$ As in the statement of Lemma \[lem:tvacc\], let $\epsilon_i$ be any real numbers satisfying $\TV\left( \mathcal{A}_i(\mP_{i-1}), \mP_i \right) \le \epsilon_i$ for each step $i$. By construction, the step $\mathcal{A}_1$ is exact and we can take $\epsilon_1 = 0$. Lemma \[lem:isgm-rotations\] yields that we can take $\epsilon_2 = O\left(k_n^{-2}m^{-2}r^{-2t} \right)$. Applying Lemma \[lem:subsampling\] yields that we can take $\epsilon_{\text{3}} = 4w^{-1}$. By Lemma \[lem:tvacc\], we therefore have that $$\TV\left( \mathcal{A}\left( \mathcal{M}_{[m] \times [n]}(S \times T, p, q) \right), \, \pr{isgm}_D(N, S, d, \mu, \epsilon) \right) = O\left( w^{-1} + k_n^{-2}m^{-2}r^{-2t} \right)$$ which proves the desired result in the case of $H_1$. Now consider the case that $H_0$ holds, and consider Lemma \[lem:tvacc\] applied to the steps $\mathcal{A}_i$ and the following sequence of distributions $$\mathcal{P}_0 = \text{Bern}(q)^{\otimes m \times n}, \quad \mathcal{P}_1 = \text{Bern}(q)^{\otimes m \times k_nr^t}, \quad \mathcal{P}_2 = \mN(0, 1)^{\otimes m \times k_nr\ell} \quad \text{and} \quad \mathcal{P}_{\text{3}} = \mN(0, I_d)^{\otimes N}$$ As above, Lemmas \[lem:isgm-rotations\] and \[lem:subsampling\] imply that we can take $$\epsilon_1 = 0, \quad \epsilon_2 = O\left(k_n^{-2}m^{-2}r^{-2t} \right) \quad \text{and} \quad \epsilon_{\text{3}} = 0$$ By Lemma \[lem:tvacc\], we therefore have that $$\TV\left( \mathcal{A}\left( \textnormal{Bern}(q)^{\otimes m \times n} \right), \, \mN(0, I_d)^{\otimes N} \right) = O\left(k_n^{-2}m^{-2}r^{-2t} \right)$$ which completes the proof of the theorem.
As discussed in Section \[subsec:2-Rne\], we can replace $K_{r, t}$ in $k\textsc{-bpds-to-isgm}$ with the random matrix alternative $R_{L, \epsilon}$. More precisely, let $k\textsc{-bpds-to-isgm}_R$ denote the reduction in Figure \[fig:isgmreduction\] with the following changes:
- At the beginning of the reduction, rejection sample $R_{L, \epsilon}$ for at most $\Theta((\log L)^2)$ iterations until the criteria of Lemma \[lem:Rne\] are met, as outlined in Section \[subsec:2-Rne\]. Let $A \in \mathbb{R}^{L \times L}$ be the resulting matrix or stop the reduction if no such matrix is found. The latter case contributes $L^{-\omega(1)}$ to each of the total variation errors in Corollary \[thm:mod-isgmreduction\].
- The dimensions $r\ell$ and $r^t$ of the matrix $K_{r, t}$ used in $\pr{Bern-Rotations}$ in Step 2 are both replaced throughout the reduction by the parameter $L$. This changes the output dimensions of $M_{\text{PD}}$ and $M_{\text{R}}$ in Steps 1 and 2 to both be $m \times k_n L$.
- In Step 2, apply $\pr{Bern-Rotations}$ with $A$ instead of $K_{r, t}$ and let $\lambda = C$ where $C$ is the constant in Lemma \[lem:Rne\].
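The rejection-sampling wrapper in the first modification can be sketched as follows. The acceptance predicate below is a hypothetical stand-in (near-orthogonality of the columns of a random sign matrix) rather than the actual criteria of Lemma \[lem:Rne\], and the candidate distribution is not the actual $R_{L, \epsilon}$; only the control flow, including the failure branch contributing the $L^{-\omega(1)}$ total variation error, mirrors the reduction.

```python
import math
import random

def rejection_sample(sample_once, meets_criteria, max_iters):
    """Draw candidates until one meets the criteria; None signals failure,
    in which case the reduction stops (the L^{-omega(1)} error term)."""
    for _ in range(max_iters):
        candidate = sample_once()
        if meets_criteria(candidate):
            return candidate
    return None

L = 32
rng = random.Random(0)

def sample_once():
    # toy candidate: L x L random sign matrix with unit-norm columns
    return [[rng.choice((-1.0, 1.0)) / math.sqrt(L) for _ in range(L)]
            for _ in range(L)]

def meets_criteria(A):
    # hypothetical stand-in: all pairwise column inner products are small
    threshold = 2 * math.sqrt(math.log(L) / L)
    for i in range(L):
        for j in range(i + 1, L):
            if abs(sum(A[k][i] * A[k][j] for k in range(L))) > threshold:
                return False
    return True

max_iters = max(1, round(math.log(L) ** 2))  # Theta((log L)^2) iterations
A = rejection_sample(sample_once, meets_criteria, max_iters)
```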
The reduction $k\textsc{-bpds-to-isgm}_R$ eliminates a number-theoretic constraint in $k\textsc{-bpds-to-isgm}$ arising from the fact that the intermediate matrix $M_{\text{R}}$ has a dimension that must be of the form $k_n r^t$ for some integer $t$. In contrast, $k\textsc{-bpds-to-isgm}_R$ only requires that this dimension of $M_{\text{R}}$ be a multiple of $k_n$. This will remove the condition () from our computational lower bounds for $\pr{rsme}$, which is only restrictive in the very small $\epsilon$ regime of $\epsilon = n^{-\Omega(1)}$. We will formally deduce the computational lower bound for $\pr{rsme}$ implied by the reduction $k\textsc{-bpds-to-isgm}_R$ in Section \[subsec:3-rsme\].
The reduction $k\textsc{-bpds-to-isgm}_R$ can be analyzed using an argument identical to the one above, with Lemma \[lem:Rne\] used in place of Lemma \[lem:Krtsv\] and accounting for the additional $L^{-\omega(1)}$ total variation error incurred by failing to obtain an $R_{L, \epsilon}$ satisfying the criteria in Lemma \[lem:Rne\]. Carrying this out yields the following corollary. We remark that the new condition $\epsilon \gg L^{-1} \log L$ in the corollary below will amount to the condition $\epsilon \gg N^{-1/2} \log N$ in our computational lower bounds. This is because, in our applications, we will typically set $N = \tilde{\Theta}(k_n L)$ and $k_n$ to be very close to but slightly smaller than $\sqrt{n} = \tilde{\Theta}(\sqrt{N})$, to ensure that the input $k\pr{-bpds}$ instance is hard. These conditions together with $\epsilon \gg L^{-1} \log L$ amount to the condition on the target parameters given by $\epsilon \gg N^{-1/2} \log N$.
\[thm:mod-isgmreduction\] Let $n$ be a parameter and let $w(n) = \omega(1)$ be a slow-growing function. Fix initial and target parameters as follows:
- [Initial]{.nodecor} $k\pr{-bpds}$ [Parameters:]{.nodecor} $m, n, k_m, k_n, p, q$ and $F$ as in Theorem \[thm:isgmreduction\].
- [Target]{.nodecor} $\pr{isgm}$ [Parameters:]{.nodecor} $(N, d, \mu, \epsilon)$ such that there is a parameter $L = L(N) \in \mathbb{N}$ such that $L(N) \to \infty$ and it holds that $$\max\{wN, n\} \le k_n L \le \textnormal{poly}(n), \quad m \le d \le \textnormal{poly}(n), \quad \frac{w\log L}{L} \le \epsilon \le \frac{1}{2} \quad \textnormal{and}$$ $$0 \le \mu \le \frac{C \delta}{\sqrt{\log (k_nmL) + \log (p - q)^{-1}}} \cdot \sqrt{\frac{\epsilon}{L}}$$ for some sufficiently small constant $C > 0$, where $\delta$ is as in Theorem \[thm:isgmreduction\].
If $\mathcal{A}$ denotes $k\textsc{-bpds-to-isgm}_R$ applied with the parameters above, then $\mathcal{A}$ runs in $\textnormal{poly}(m, n)$ time and $$\begin{aligned}
\TV\left( \mathcal{A}\left( \mathcal{M}_{[m] \times [n]}(S \times T, p, q) \right), \, \pr{isgm}_D(N, S, d, \mu, \epsilon) \right) &= o(1) \\
\TV\left( \mathcal{A}\left( \textnormal{Bern}(q)^{\otimes m \times n} \right), \, \mN(0, I_d)^{\otimes N} \right) &= o(1)\end{aligned}$$ for all $k_m$-subsets $S \subseteq [m]$ and $k_n$-subsets $T \subseteq [n]$ with $|T \cap F_i| = 1$ for each $1 \le i \le k_n$.
Sparse Mixtures of Regressions and Negative Sparse PCA {#subsec:2-mixtures-slr}
------------------------------------------------------
**Algorithm** $k\textsc{-bpds-to-mslr}$
*Inputs*: Matrix $M \in \{0, 1\}^{m \times n}$, dense subgraph dimensions $k_m$ and $k_n$ where $k_n$ divides $n$ and the following parameters
- partition $F$, edge probabilities $0 < q < p \le 1$ and $w(n)$ as in Figure \[fig:isgmreduction\]
- target $\pr{mslr}$ parameters $(N, d, \gamma, \epsilon)$ and prime $r$ and $t \in \mathbb{N}$ where $N, d, r, t, \ell$ and $\epsilon = 1/r$ are as in Figure \[fig:isgmreduction\] with the additional requirement that $N \le n$ and where $\gamma \in (0, 1)$ satisfies that $$\gamma^2 \le c \cdot \min\left\{ \frac{k_m}{r^{t + 1}\log(k_nmr^t) \log N}, \, \frac{k_n k_m}{n \log(mn)} \right\}$$ for a sufficiently small constant $c > 0$.
1. *Clone*: Compute the matrices $M_{\pr{isgm}} \in \{0, 1\}^{m \times n}$ and $M_{\pr{neg-spca}} \in \{0, 1\}^{m \times n}$ by applying $\pr{Bernoulli-Clone}$ with $t = 2$ copies to the entries of the matrix $M$ with input Bernoulli probabilities $p$ and $q$, and output probabilities $p$ and $Q = 1 - \sqrt{(1 - p)(1 - q)} + \mathbf{1}_{\{p = 1\}} \left( \sqrt{q} - 1 \right)$.
2. *Produce* <span style="font-variant:small-caps;">isgm</span> *Instance*: Form $(Z_1, Z_2, \dots, Z_N)$ where $Z_i \in \mathbb{R}^d$ as the output of $k\pr{-bpds-to-isgm}$ applied to the matrix $M_{\pr{isgm}}$ with partition $F$, edge probabilities $0 < Q < p \le 1$, slow-growing function $w$, target $\pr{isgm}$ parameters $(N, d, \mu, \epsilon)$ and $\mu > 0$ given by $$\mu = 4 \gamma \cdot \sqrt{\frac{\log N}{k_m}}$$
3. *Produce* <span style="font-variant:small-caps;">neg-spca</span> *Instance*: Form $(W_1, W_2, \dots, W_n)$ where $W_i \in \mathbb{R}^d$ as the output of $\pr{bpds-to-neg-spca}$ applied to the matrix $M_{\pr{neg-spca}}$ with edge probabilities $0 < Q < p \le 1$, target dimension $d$ and parameter $\tau > 0$ satisfying that $$\tau^2 = \frac{8n \gamma^2}{k_n k_m(1 - \gamma^2)}$$
4. *Scale and Label* <span style="font-variant:small-caps;">isgm</span> *Instance*: Generate $y_1, y_2, \dots, y_N \sim_{\text{i.i.d.}} \mN(0, 1 + \gamma^2)$ and truncate each $y_i$ to satisfy $|y_i| \le 2 \sqrt{(1 + \gamma^2) \log N}$. Generate $G_1, G_2, \dots, G_N \sim_{\text{i.i.d.}} \mN(0, I_d)$ and form $(Z_1', Z_2', \dots, Z_N')$ where $Z_i' \in \mathbb{R}^d$ as $$Z_i' = \frac{y_i}{4(1 + \gamma^2)} \sqrt{\frac{2}{\log N}} \cdot Z_i + \sqrt{1 - \frac{y_i^2}{4(1 + \gamma^2)^2\log N}} \cdot G_i$$
5. *Merge and Output*: For each $1 \le i \le N$, let $X_i = \frac{1}{\sqrt{2}} \left( Z_i' + W_i \right)$ and output the $N$ labelled pairs $(X_1, y_1), (X_2, y_2), \dots, (X_N, y_N)$.
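The mixing coefficients in Step 4 are chosen so that unplanted samples stay exactly standard Gaussian: conditionally on the truncated label $y_i$, if $Z_i \sim \mN(0, I_d)$ then $Z_i' = c\,Z_i + \sqrt{1 - c^2}\,G_i$ is again $\mN(0, I_d)$, since $|c| \le 1$. The simulation below is an illustrative one-dimensional sanity check of this fact, not part of the reduction.

```python
import math
import random
import statistics

rng = random.Random(1)
gamma, N = 0.2, 10_000
cap = 2 * math.sqrt((1 + gamma**2) * math.log(N))

def truncated_label():
    # sample y ~ N(0, 1 + gamma^2) conditioned on |y| <= cap
    while True:
        y = rng.gauss(0, math.sqrt(1 + gamma**2))
        if abs(y) <= cap:
            return y

samples = []
for _ in range(20_000):
    y = truncated_label()
    c = (y / (4 * (1 + gamma**2))) * math.sqrt(2 / math.log(N))
    assert abs(c) <= 1  # guaranteed by the truncation of y
    z, g = rng.gauss(0, 1), rng.gauss(0, 1)  # one coordinate of Z_i and G_i
    samples.append(c * z + math.sqrt(1 - c * c) * g)

# each coordinate of Z_i' should be exactly N(0, 1)
assert abs(statistics.fmean(samples)) < 0.08
assert abs(statistics.pvariance(samples) - 1.0) < 0.08
```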
In this section, we combine the previous two reductions to $\pr{neg-spca}$ and $\pr{isgm}$ with some additional observations to produce a single reduction that will be used to prove two of our main results in Section \[subsec:3-slr\] – computational lower bounds for mixtures of SLRs and robust SLR. We begin this section by generalizing our definition of the distribution $\pr{mslr}_D(n, S, d, \gamma, 1/2)$ from Section \[subsec:2-formulations\] to simultaneously capture the mixtures of SLRs distributions we will reduce to and our adversarial construction for robust SLR.
Recall from Section \[subsec:2-formulations\] that $\pr{lr}_d(v)$ denotes the distribution of a single sample-label pair $(X, y) \in \mathbb{R}^d \times \mathbb{R}$ given by $y = \langle v, X \rangle + \eta$ where $X \sim \mN(0, I_d)$ and $\eta \sim \mN(0, 1)$. Our generalization of $\pr{mslr}_D$ will be parameterized by $\epsilon \in (0, 1)$. The canonical setup for mixtures of SLRs from Section \[subsec:2-formulations\] corresponds to setting $\epsilon = 1/2$ and formally is restated in the following definition for convenience.
\[defn:mslr-balanced\] Let $\gamma \in \mathbb{R}$ be such that $\gamma > 0$. For each subset $S \subseteq [d]$, let $\pr{mslr}_D(n, S, d, \gamma, 1/2)$ denote the distribution over $n$-tuples of independent data-label pairs $(X_1, y_1), (X_2, y_2), \dots, (X_n, y_n)$ where $X_i \in \mathbb{R}^d$ and $y_i \in \mathbb{R}$ are sampled as follows:
- first sample $n$ independent Rademacher random variables $s_1, s_2, \dots, s_n \sim_{\textnormal{i.i.d.}} \textnormal{Rad}$; and
- then form data-label pairs $(X_i, y_i) \sim \pr{lr}_d(\gamma s_i v_S)$ for each $1 \le i \le n$.
where $v_S \in \mathbb{R}^d$ is the $|S|$-sparse unit vector $v_S = |S|^{-1/2} \cdot \mathbf{1}_S$.
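A direct sampler following this definition can be written in a few lines; the sketch below is illustrative only and not part of the reduction machinery.

```python
# Sampler for mslr_D(n, S, d, gamma, 1/2): Rademacher signs s_i, then
# data-label pairs (X_i, y_i) ~ LR_d(gamma * s_i * v_S).
import math
import random

def sample_mslr_balanced(n, S, d, gamma, rng):
    """Return n data-label pairs from the balanced mixture of sparse regressions."""
    v = [1 / math.sqrt(len(S)) if j in S else 0.0 for j in range(d)]
    out = []
    for _ in range(n):
        s = rng.choice((-1, 1))                  # Rademacher sign
        X = [rng.gauss(0, 1) for _ in range(d)]  # X ~ N(0, I_d)
        noise = rng.gauss(0, 1)                  # eta ~ N(0, 1)
        y = gamma * s * sum(vj * xj for vj, xj in zip(v, X)) + noise
        out.append((X, y))
    return out

pairs = sample_mslr_balanced(5, {0, 2}, 4, 0.5, random.Random(0))
```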
Our more general formulation when $\epsilon < 1/2$ is described in the definition below. When $\epsilon < 1/2$, the distribution $\pr{mslr}_D(n, S, d, \gamma, \epsilon)$ can always be produced by an adversary in robust SLR. This observation will be discussed in more detail and used in Section \[subsec:3-slr\] to show computational lower bounds for robust SLR. The reason we have chosen to write these two different distributions under a common notation is that the main reduction of this section, $k\textsc{-bpds-to-mslr}$, will simultaneously map to both mixtures of SLRs and robust SLR. Lower bounds for the mixture problem will be obtained by setting $r = 2$ in the reduction to $\pr{isgm}$ used as a subroutine in $k\textsc{-bpds-to-mslr}$, while lower bounds for robust sparse regression will be obtained by taking $r > 2$. These implications of $k\textsc{-bpds-to-mslr}$ are discussed further in Section \[sec:3-robust-and-supervised\].
\[defn:mslr-imbalanced\] Let $\gamma > 0$, $\epsilon \in (0, 1/2)$ and let $a$ denote $a = \epsilon^{-1}(1 - \epsilon)$. For each subset $S \subseteq [d]$, let $\pr{mslr}_D(n, S, d, \gamma, \epsilon)$ denote the distribution over $n$-tuples of data-label pairs $(X_1, y_1), (X_2, y_2), \dots, (X_n, y_n)$ sampled as follows:
- the pairs $(b_1, X_1, y_1), (b_2, X_2, y_2), \dots, (b_n, X_n, y_n)$ are i.i.d. and $b_1, b_2, \dots, b_n \sim_{\textnormal{i.i.d.}} \textnormal{Bern}(1 - \epsilon)$;
- if $b_i = 1$, then $(X_i, y_i) \sim \pr{lr}_d(\gamma v_S)$ where $v_S$ is as in Definition \[defn:mslr-balanced\]; and
- if $b_i = 0$, then $(X_i, y_i)$ is jointly Gaussian with mean zero and $(d + 1) \times (d + 1)$ covariance matrix $$\left[\begin{matrix} \Sigma_{XX} & \Sigma_{Xy} \\ \Sigma_{yX} & \Sigma_{yy} \end{matrix} \right] = \left[\begin{matrix} I_d + \frac{(a^2 - 1)\gamma^2}{1 + \gamma^2} \cdot v_S v_S^\top & -a\gamma \cdot v_S \\ -a\gamma \cdot v_S^\top & 1 + \gamma^2 \end{matrix} \right]$$
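One convenient way to sample the contaminated pairs ($b_i = 0$) is through the standard Gaussian conditioning formula: the conditional covariance $\Sigma_{XX} - \Sigma_{Xy}\Sigma_{yX}/\Sigma_{yy}$ works out to $I_d - \frac{\gamma^2}{1 + \gamma^2} \cdot v_Sv_S^\top$, so one can draw $y \sim \mN(0, 1 + \gamma^2)$ and then $X \mid y$. The sketch below is illustrative only.

```python
import math
import random

def sample_contaminated(S, d, gamma, eps, rng):
    """One contaminated pair (X, y), i.e. the b_i = 0 branch of the definition."""
    a = (1 - eps) / eps
    v = [1 / math.sqrt(len(S)) if j in S else 0.0 for j in range(d)]
    y = rng.gauss(0, math.sqrt(1 + gamma**2))
    theta = gamma**2 / (1 + gamma**2)
    # W ~ N(0, I_d - theta v v^T) via W = G - (1 - sqrt(1 - theta)) <G, v> v
    G = [rng.gauss(0, 1) for _ in range(d)]
    ip = sum(gj * vj for gj, vj in zip(G, v))
    shrink = (1 - math.sqrt(1 - theta)) * ip
    mean_coef = -a * gamma * y / (1 + gamma**2)   # E[X | y] = mean_coef * v_S
    X = [mean_coef * vj + gj - shrink * vj for vj, gj in zip(v, G)]
    return X, y

X, y = sample_contaminated({0, 1}, 4, 0.3, 0.25, random.Random(0))
```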
The main reduction of this section from $k\pr{-bpds}$ to $\pr{mslr}$ is shown in Figure \[fig:mixtures-slr-reduction\]. This reduction inherits the number-theoretic constraints of our reduction to $\pr{isgm}$ mentioned in the previous section. These will be discussed in more detail when $k\textsc{-bpds-to-mslr}$ is used to deduce computational lower bounds in Section \[subsec:3-slr\]. The following theorem gives the total variation guarantees for $k\textsc{-bpds-to-mslr}$.
\[thm:slr-reduction\] Let $n$ be a parameter, $r = r(n) \ge 2$ be a prime number and $w(n) = \omega(1)$ be a slow-growing function. Fix initial and target parameters as follows:
- [Initial]{.nodecor} $k\pr{-bpds}$ [Parameters:]{.nodecor} vertex counts on each side $m$ and $n$ that are polynomial in one another and satisfy the condition that $n \gg m^3$, subgraph dimensions $k_m$ and $k_n$ where $k_n$ divides $n$, constant densities $0 < q < p \le 1$ and a partition $F$ of $[n]$.
- [Target]{.nodecor} $\pr{mslr}$ [Parameters:]{.nodecor} $(N, d, \gamma, \epsilon)$ where $\epsilon = 1/r$ and there is a parameter $t = t(N) \in \mathbb{N}$ with $$N \le n, \quad wN \le \frac{k_nr(r^t - 1)}{r - 1}, \quad m \le d \le \textnormal{poly}(n), \quad \textnormal{and} \quad n \le k_nr^t \le \textnormal{poly}(n)$$ and where $\gamma \in (0, 1/2)$ satisfies that $$\gamma^2 \le c \cdot \min\left\{ \frac{k_m}{r^{t + 1}\log(k_nmr^t) \log N}, \, \frac{k_n k_m}{n \log(mn)} \right\}$$ for a sufficiently small constant $c > 0$.
Let $\mathcal{A}(G)$ denote $k$<span style="font-variant:small-caps;">-bpds-to-mslr</span> applied with the parameters above to a bipartite graph $G$ with $m$ left vertices and $n$ right vertices. Then $\mathcal{A}$ runs in $\textnormal{poly}(m, n)$ time and it follows that $$\begin{aligned}
\TV\left( \mathcal{A}\left( \mathcal{M}_{[m] \times [n]}(S \times T, p, q) \right), \, \pr{mslr}_D(N, S, d, \gamma, \epsilon) \right) &= O\left( w^{-1} + k_n^{-2}m^{-2}r^{-2t} + m^{3/2} n^{-1/2} \right) \\
&\quad \quad + O\left( k_n(4e^{-3})^{n/2k_n} + N^{-1} \right) \\
\TV\left( \mathcal{A}\left( \textnormal{Bern}(q)^{\otimes m \times n} \right), \, \left( \mN(0, I_d) \otimes \mN\left(0, 1 + \gamma^2\right) \right)^{\otimes N} \right) &= O\left( k_n^{-2}m^{-2}r^{-2t} + m^{3/2} n^{-1/2} \right)\end{aligned}$$ for all subsets $S \subseteq [m]$ with $|S| = k_m$ and subsets $T \subseteq [n]$ with $|T| = k_n$ and $|T \cap F_i| = 1$ for each $1 \le i \le k_n$.
The proof of this theorem will be broken into several lemmas for clarity. The following four lemmas analyze the approximate Markov transition properties of Steps 4 and 5 of $k$<span style="font-variant:small-caps;">-bpds-to-mslr</span>. The first three lemmas establish total variation upper bounds in the single-sample case. The fourth lemma is a simple consequence of the first three and establishes the Markov transition properties for Steps 4 and 5 together.
\[lem:planted-label\] Let $N$ be a parameter, $\gamma, \mu' \in (0, 1)$, $C > 0$ be a constant and $u \in \mathbb{R}^d$ be such that $\| u \|_2 = 1$ and $4C^2 \gamma^2 \le (\mu')^2/\log N $. Define the random variables $(X, y)$ and $(X', y')$ where $X, X' \in \mathbb{R}^d$ and $y, y' \in \mathbb{R}$ as follows:
- Let $X \sim \mN\left(0, I_d \right)$ and $\eta \sim \mN(0, 1)$ be independent, and define $$y = \gamma \cdot \langle u, X \rangle + \eta$$
- Let $y'$ be a sample from $\mN(0, 1 + \gamma^2)$ truncated to satisfy $|y'| \le C\sqrt{(1 + \gamma^2) \log N}$, and let $Z \sim \mN(\mu' \cdot u, I_d)$, $G \sim \mN(0, I_d)$ and $W \sim \mN\left(0, I_d - \frac{2\gamma^2}{1 + \gamma^2} \cdot uu^\top\right)$ be independent. Now let $X'$ be $$\label{eqn:observation-mslr}
X' = \frac{1}{\sqrt{2}} \left( \frac{\gamma \cdot y'\sqrt{2}}{\mu'(1 + \gamma^2)} \cdot Z + \sqrt{1 - 2\left( \frac{\gamma \cdot y'}{\mu'(1 + \gamma^2)} \right)^2} \cdot G + W \right)$$
Then it follows that, as $N \to \infty$, $$\TV\left( \mL(X, y), \mL(X', y') \right) = O\left( N^{-C^2/2} \right)$$
First observe that $|y'| \le C \sqrt{(1 + \gamma^2) \log N}$ holds almost surely. Since $4C^2 \gamma^2 \le (\mu')^2/\log N$ and $\gamma \in (0, 1)$, it follows that $$2\left( \frac{\gamma \cdot y'}{\mu'(1 + \gamma^2)} \right)^2 \le 2(1 + \gamma^2) C^2 \gamma^2 (\mu')^{-2} \log N \le 1$$ and hence $X'$ is well-defined almost surely.
Now note that since $y$ is a linear function of $X$ and $\eta$, which are independent Gaussians, it follows that the $d + 1$ entries of $(X, y)$ are jointly Gaussian. Since $\| u \|_2 = 1$, it follows that $\text{Var}(y) = 1 + \gamma^2$ and furthermore $\text{Cov}(y, X) = \bE[Xy] = \gamma \cdot u$. This implies that the covariance matrix of $(X, y)$ is given by $$\left[\begin{matrix} I_d & \gamma \cdot u \\ \gamma \cdot u^\top & 1 + \gamma^2 \end{matrix} \right]$$ It is well known that $X | y$ is a Gaussian vector with mean and covariance matrix given by $$\mL(X | y) = \mN\left( \frac{\gamma \cdot y}{1 + \gamma^2} \cdot u, \, I_d - \frac{\gamma^2}{1 + \gamma^2} \cdot uu^\top \right)$$ Now consider $\mL(X' | y')$. Let $Z = \mu' \cdot u + G'$ where $G' \sim \mN(0, I_d)$ and note that $$X' = \frac{\gamma \cdot y'}{1 + \gamma^2} \cdot u + \frac{\gamma \cdot y'}{\mu'(1 + \gamma^2)} \cdot G' + \frac{1}{\sqrt{2}} \cdot \sqrt{1 - 2\left( \frac{\gamma \cdot y'}{\mu'(1 + \gamma^2)} \right)^2} \cdot G + \frac{1}{\sqrt{2}} \cdot W$$ Note that since $y', G', G$ and $W$ are independent, it follows that all of the entries of the second, third and fourth terms in the expression above are jointly Gaussian conditioned on $y'$. Therefore the entries of $X' | y'$ are also jointly Gaussian. Furthermore the second, third and fourth terms in the expression above for $X'$ have covariance matrices given by $$\left( \frac{\gamma \cdot y'}{\mu'(1 + \gamma^2)} \right)^2 \cdot I_d, \quad \left( \frac{1}{2} - \left( \frac{\gamma \cdot y'}{\mu'(1 + \gamma^2)} \right)^2 \right) \cdot I_d \quad \textnormal{and} \quad \frac{1}{2} \cdot I_d - \frac{\gamma^2}{1 + \gamma^2} \cdot uu^\top$$ respectively, conditioned on $y'$. 
Since these three terms are independent conditioned on $y'$, it follows that $X' | y'$ has covariance matrix $I_d - \frac{\gamma^2}{1 + \gamma^2} \cdot uu^\top$ and therefore that $$\mL(X' | y') = \mN\left( \frac{\gamma \cdot y'}{1 + \gamma^2} \cdot u, \, I_d - \frac{\gamma^2}{1 + \gamma^2} \cdot uu^\top \right)$$ and is hence identically distributed to $\mL(X|y)$. Let $\Phi(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^x e^{-t^2/2} dt$ be the CDF of $\mN(0, 1)$. The conditioning property of total variation in Fact \[tvfacts\] therefore implies that $$\begin{aligned}
\TV\left( \mL(X, y), \mL(X', y') \right) &\le \TV\left( \mL(y), \mL(y') \right) \\
&= \bP\left[ |y| > C \sqrt{(1 + \gamma^2) \log N} \right] \\
&= 2 \cdot \left( 1 - \Phi\left( C \sqrt{\log N} \right) \right) \\
&= O\left( N^{-C^2/2} \right) \end{aligned}$$ where the first equality holds due to the conditioning on an event property of total variation in Fact \[tvfacts\] and the last upper bound follows from the standard estimate $1 - \Phi(x) \le \frac{1}{\sqrt{2\pi}} \cdot x^{-1} \cdot e^{-x^2/2}$ for $x \ge 1$. This completes the proof of the lemma.
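The standard estimate $1 - \Phi(x) \le \frac{1}{\sqrt{2\pi}} \cdot x^{-1} \cdot e^{-x^2/2}$ invoked in the last step can be checked numerically via the complementary error function:

```python
import math

def gauss_upper_tail(x):
    """1 - Phi(x) computed via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def mills_bound(x):
    """The bound (1/sqrt(2*pi)) * x^{-1} * exp(-x^2 / 2), valid for x >= 1."""
    return math.exp(-x * x / 2) / (math.sqrt(2 * math.pi) * x)

for x in (1.0, 1.5, 2.0, 3.0, 5.0, 8.0):
    assert gauss_upper_tail(x) <= mills_bound(x)
```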
The next lemma establishes single sample guarantees that will be needed to analyze the case in which $\epsilon < 1/2$. The proof of this lemma is very similar to that of Lemma \[lem:planted-label\] and is deferred to Appendix \[sec:app-label-generation\].
\[lem:imbalanced-planted-label\] Let $N, \gamma, \mu', C$ and $u$ be as in Lemma \[lem:planted-label\] and let $\mu'' \in (0, 1)$. Define the random variables $(X, y)$ and $(X', y')$ as follows:
- Let $(X, y)$ where $X \in \mathbb{R}^d$ and $y \in \mathbb{R}$ be jointly Gaussian with mean zero and $(d + 1) \times (d + 1)$ covariance matrix given by $$\left[\begin{matrix} \Sigma_{XX} & \Sigma_{Xy} \\ \Sigma_{yX} & \Sigma_{yy} \end{matrix} \right] = \left[\begin{matrix} I_d + \frac{(a^2 - 1)\gamma^2}{1 + \gamma^2} \cdot uu^\top & a\gamma \cdot u \\ a\gamma \cdot u^\top & 1 + \gamma^2 \end{matrix} \right]$$
- Let $y', Z, G$ and $W$ be independent where $y', G$ and $W$ are distributed as in Lemma \[lem:planted-label\] and $Z \sim \mN(\mu'' \cdot u, I_d)$. Let $X'$ be defined by Equation (\[eqn:observation-mslr\]) as in Lemma \[lem:planted-label\].
Then it follows that, as $N \to \infty$, $$\TV\left( \mL(X, y), \mL(X', y') \right) = O\left( N^{-C^2/2} \right)$$
We now state a similar lemma analyzing a single sample in Step 4 of $k$<span style="font-variant:small-caps;">-bpds-to-mslr</span> in the case where $X$ and $W$ are not planted. Its proof is also deferred to Appendix \[sec:app-label-generation\].
\[lem:unplanted-label\] Let $N, \gamma, \mu', C$ and $u$ be as in Lemma \[lem:planted-label\]. Suppose that $y'$ is a sample from $\mN(0, 1 + \gamma^2)$ truncated to satisfy $|y'| \le C\sqrt{(1 + \gamma^2) \log N}$ and $Z, G, W \sim_{\textnormal{i.i.d.}} \mN(0, I_d)$ are independent. Let $X'$ be defined by Equation (\[eqn:observation-mslr\]) as in Lemma \[lem:planted-label\]. Then, as $N \to \infty$, $$\TV\left( \mL(X', y'), \mN(0, I_d) \otimes \mN(0, 1 + \gamma^2) \right) = O\left( N^{-C^2/2} \right)$$
Combining these three lemmas, we can now analyze Step 4 and Step 5 of $\mathcal{A}$. Let $\mathcal{A}_{\text{4-5}}(Z, W)$ denote Steps 4 and 5 of $\mathcal{A}$ with inputs $Z = (Z_1, Z_2, \dots, Z_N)$ and $W = (W_1, W_2, \dots, W_n)$ and output $\left((X_1, y_1), (X_2, y_2), \dots, (X_N, y_N)\right)$. The next lemma applies the previous three lemmas to establish the Markov transition properties of $\mathcal{A}_{\text{4-5}}$.
\[lem:isgm-label\] Let $r, N, d, \gamma, \epsilon, m, n, k_n, k_m$ and $S \subseteq [m]$ where $|S| = k_m$ be as in Theorem \[thm:slr-reduction\] and let $\mu, \tau, \theta > 0$ be such that $$\mu = 4 \gamma \cdot \sqrt{\frac{\log N}{k_m}}, \quad \tau^2 = \frac{8n \gamma^2}{k_n k_m(1 - \gamma^2)} \quad \textnormal{and} \quad \theta = \frac{\tau^2 k_n k_m}{4n + \tau^2 k_n k_m}$$ If $Z \sim \pr{isgm}_D(N, S, d, \mu, \epsilon)$ and $W \sim \mN\left(0, \, I_d - \theta v_S v_S^\top\right)^{\otimes n}$, then $$\TV\left( \mathcal{A}_{\textnormal{4-5}}(Z, W), \, \pr{mslr}_D(N, S, d, \gamma, \epsilon) \right) = O\left(N^{-1}\right)$$ If $Z \sim \mN(0, I_d)^{\otimes N}$ and $W \sim \mN(0, 1)^{\otimes d \times n}$, then $$\TV\left( \mathcal{A}_{\textnormal{4-5}}(Z, W), \, \left( \mN(0, I_d) \otimes \mN(0, 1 + \gamma^2) \right)^{\otimes N} \right) = O\left(N^{-1}\right)$$
We treat the cases in which $\epsilon = 1/2$ and $\epsilon < 1/2$ as well as the two possible distributions of $(Z, W)$ in the lemma statement separately. We first consider the case where $\epsilon = 1/2$ and $r = 2$ and $Z \sim \pr{isgm}_D(N, S, d, \mu, \epsilon)$ and $W \sim \mN\left(0, \, I_d - \theta v_S v_S^\top\right)^{\otimes n}$. The $Z_i$ are independent and can be generated by first sampling $s_1, s_2, \dots, s_N \sim_{\text{i.i.d.}} \text{Bern}(1/2)$ and then setting $$Z_i \sim \left\{ \begin{array}{ll} \mN(\mu \sqrt{k_m} \cdot v_S, I_d) &\text{if } s_i = 1 \\ \mN(-\mu \sqrt{k_m} \cdot v_S, I_d) &\text{if } s_i = 0 \end{array} \right.$$ where $v_S = k_m^{-1/2} \cdot \mathbf{1}_S$. Let $\mu' = \mu \sqrt{k_m}$. It can be verified that the settings of $\mu, \gamma$ and $\theta$ above ensure that $$\frac{\gamma \sqrt{2}}{\mu'(1 + \gamma^2)} = \frac{1}{4(1 + \gamma^2)} \cdot \sqrt{\frac{2}{\log N}} \quad \text{and} \quad \theta = \frac{2\gamma^2}{1 + \gamma^2}$$ Let $X \sim \mN(0, I_d)$ and $\eta \sim \mN(0, 1)$ be independent. Applying Lemma \[lem:planted-label\] with $\mu' = \mu \sqrt{k_m}$, $C = 2$, $u = v_S$ and $u = -v_S$, the equalities above and the definition of $X_i$ in Figure \[fig:mixtures-slr-reduction\] now imply that $$\begin{aligned}
\TV\left( \mL(X_i, y_i | s_i = 1), \mL\left(X, \gamma \cdot \langle v_S, X \rangle + \eta \right) \right) &= O(N^{-2}) \\
\TV\left( \mL(X_i, y_i | s_i = 0), \mL\left(X, -\gamma \cdot \langle v_S, X \rangle + \eta \right) \right) &= O(N^{-2})\end{aligned}$$ for each $1 \le i \le N$. The conditioning property of total variation from Fact \[tvfacts\] now implies that if $\mL_1 = \mL\left(X, \gamma \cdot \langle v_S, X \rangle + \eta \right)$ and $\mL_2 = \mL\left(X, -\gamma \cdot \langle v_S, X \rangle + \eta \right)$, then we have that $$\TV\left( \mL(X_i, y_i), \pr{mix}_{1/2}(\mL_1, \mL_2) \right) = O(N^{-2})$$ For the given distribution on $(Z, W)$, observe that the pairs $(X_i, y_i)$ for $1 \le i \le N$ are independent by construction in $\mathcal{A}$. Thus the tensorization property of total variation from Fact \[tvfacts\] implies that $$\TV\left( \mL\left( (X_1, y_1), (X_2, y_2), \dots, (X_N, y_N) \right),\, \pr{mslr}_D(N, S, d, \gamma, 1/2) \right) = O(N^{-1})$$ where $\pr{mslr}_D(N, S, d, \gamma, 1/2) = \pr{mix}_{1/2}(\mL_1, \mL_2)^{\otimes N}$, which establishes the desired bound when $\epsilon = 1/2$ and for the first distribution of $(Z, W)$.
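The identities stated as "it can be verified" above follow from short algebra, substituting $\mu' = \mu\sqrt{k_m} = 4\gamma\sqrt{\log N}$ and $\tau^2 k_nk_m = 8n\gamma^2/(1 - \gamma^2)$:

```latex
\frac{\gamma \sqrt{2}}{\mu'(1 + \gamma^2)}
  = \frac{\gamma \sqrt{2}}{4 \gamma \sqrt{\log N} \, (1 + \gamma^2)}
  = \frac{1}{4(1 + \gamma^2)} \cdot \sqrt{\frac{2}{\log N}}, \qquad
\theta = \frac{\tau^2 k_n k_m}{4n + \tau^2 k_n k_m}
  = \frac{8 n \gamma^2 / (1 - \gamma^2)}{4n + 8 n \gamma^2 / (1 - \gamma^2)}
  = \frac{8 \gamma^2}{4(1 - \gamma^2) + 8 \gamma^2}
  = \frac{2 \gamma^2}{1 + \gamma^2}
```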
The other two cases follow by nearly identical arguments. First consider the case where $\epsilon$ is arbitrary, $Z \sim \mN(0, I_d)^{\otimes N}$ and $W \sim \mN(0, 1)^{\otimes d \times n}$. Applying Lemma \[lem:unplanted-label\] with $C = 2$ and $\mu' = \mu \sqrt{k_m}$ yields that $$\TV\left( \mL(X_i, y_i), \mN(0, I_d) \otimes \mN(0, 1 + \gamma^2) \right) = O(N^{-2})$$ Applying the tensorization property of total variation from Fact \[tvfacts\] as above then implies the second bound in the lemma statement. Finally, consider the case in which $\epsilon < 1/2$, $r > 2$ and $(Z, W)$ is still distributed as $Z \sim \pr{isgm}_D(N, S, d, \mu, \epsilon)$ and $W \sim \mN\left(0, \, I_d - \theta v_S v_S^\top\right)^{\otimes n}$. If the $s_i$ are now sampled as $s_i \sim_{\textnormal{i.i.d.}} \textnormal{Bern}(1 - \epsilon)$, then the $Z_i$ are distributed as $$Z_i \sim \left\{ \begin{array}{ll} \mN\left(\mu \sqrt{k_m} \cdot v_S, I_d\right) &\text{if } s_i = 1 \\ \mN\left(-a\mu \sqrt{k_m} \cdot v_S, I_d\right) &\text{if } s_i = 0 \end{array} \right.$$ where $a = \epsilon^{-1}(1 - \epsilon)$. Now consider applying Lemma \[lem:imbalanced-planted-label\] with $\mu' = \mu \sqrt{k_m}$, $\mu'' = a \mu' = a\mu \sqrt{k_m}$, $C = 2$ and $u = - v_S$. This yields that $$\TV\left( \mL(X_i, y_i | s_i = 0), \mL(X, y) \right) = O(N^{-2})$$ where $X$ and $y$ are as in the statement of Lemma \[lem:imbalanced-planted-label\]. Combining this with the conditioning property of total variation from Fact \[tvfacts\], the application of Lemma \[lem:planted-label\] in the first case above, the tensorization property of total variation from Fact \[tvfacts\] as in the previous argument and Definition \[defn:mslr-imbalanced\] yields that $$\TV\left( \mL\left( (X_1, y_1), (X_2, y_2), \dots, (X_N, y_N) \right), \, \pr{mslr}_D(N, S, d, \gamma, \epsilon) \right) = O\left(N^{-1}\right)$$ which completes the proof of the lemma.
With this lemma, the proof of Theorem \[thm:slr-reduction\] reduces to an application of Lemma \[lem:tvacc\] through a similar argument to the proof of Theorem \[thm:isgmreduction\].
Define the steps of $\mathcal{A}$ to map inputs to outputs as follows $$M \xrightarrow{\mathcal{A}_1} (M_{\pr{isgm}}, M_{\pr{neg-spca}}) \xrightarrow{\mathcal{A}_2} \left(Z, M_{\pr{neg-spca}}\right) \xrightarrow{\mathcal{A}_3} \left(Z, W\right) \xrightarrow{\mathcal{A}_{\text{4-5}}} \left((X_1, y_1), (X_2, y_2), \dots, (X_N, y_N) \right)$$ where $Z = (Z_1, Z_2, \dots, Z_N)$ and $W = (W_1, W_2, \dots, W_n)$ in Figure \[fig:mixtures-slr-reduction\]. First note that the condition on $\gamma$ in the theorem statement along with the settings of $\mu$ and $\tau$ in Figure \[fig:mixtures-slr-reduction\] imply that $$\begin{aligned}
\tau &\le \frac{\delta}{2 \sqrt{6\log (mn) + 2\log (p - Q)^{-1}}} \quad \text{where} \quad \delta = \min \left\{ \log \left( \frac{p}{Q} \right), \log \left( \frac{1 - Q}{1 - p} \right) \right\} \\
\mu &\le \frac{\delta}{2 \sqrt{6\log (k_nmr^t) + 2\log (p - Q)^{-1}}} \cdot \frac{1}{\sqrt{r^t(r - 1)(1 + (r - 1)^{-1})}}\end{aligned}$$ for a sufficiently small constant $c > 0$ since $0 < q < p \le 1$ are constants. Let $\theta$ and $v_S$ be as in Lemma \[lem:isgm-label\]. Consider Lemma \[lem:tvacc\] applied to the steps $\mathcal{A}_i$ above and the following sequence of distributions $$\begin{aligned}
\mathcal{P}_0 &= \mathcal{M}_{[m] \times [n]}(S \times T, \textnormal{Bern}(p), \textnormal{Bern}(q)) \\
\mathcal{P}_1 &= \mathcal{M}_{[m] \times [n]}(S \times T, \textnormal{Bern}(p), \textnormal{Bern}(Q)) \otimes \mathcal{M}_{[m] \times [n]}(S \times T, \textnormal{Bern}(p), \textnormal{Bern}(Q)) \\
\mathcal{P}_2 &= \pr{isgm}_D(N, S, d, \mu, \epsilon) \otimes \mathcal{M}_{[m] \times [n]}(S \times T, \textnormal{Bern}(p), \textnormal{Bern}(Q)) \\
\mathcal{P}_{\text{3}} &= \pr{isgm}_D(N, S, d, \mu, \epsilon) \otimes \mN\left(0, \, I_d - \theta v_S v_S^\top\right)^{\otimes n} \\
\mathcal{P}_{\text{4-5}} &= \pr{mslr}_D(N, S, d, \gamma, \epsilon)\end{aligned}$$ Combining the inequalities above for $\mu$ and $\tau$ with Lemmas \[lem:bern-clone\] and \[lem:isgm-label\] and Theorems \[thm:isgmreduction\] and \[thm:neg-spca\] implies that we can take $$\epsilon_1 = 0, \quad \epsilon_2 = O\left( w^{-1} + k_n^{-2}m^{-2}r^{-2t} \right), \quad \epsilon_3 = O\left( m^{3/2} n^{-1/2} + k_n(4e^{-3})^{n/2k_n} \right) \quad \text{and} \quad \epsilon_{\text{4-5}} = O(N^{-1})$$ Applying Lemma \[lem:tvacc\] now yields the first total variation upper bound in the theorem. Now consider Lemma \[lem:tvacc\] applied to $$\begin{aligned}
\mathcal{P}_0 &= \text{Bern}(q)^{\otimes m \times n} \\
\mathcal{P}_1 &= \text{Bern}(Q)^{\otimes m \times n} \otimes \text{Bern}(Q)^{\otimes m \times n} \\
\mathcal{P}_2 &= \mN(0, I_d)^{\otimes N} \otimes \text{Bern}(Q)^{\otimes m \times n} \\
\mathcal{P}_{\text{3}} &= \mN(0, I_d)^{\otimes N} \otimes \mN(0, I_d)^{\otimes n} \\
\mathcal{P}_{\text{4-5}} &= \left( \mN(0, I_d) \otimes \mN(0, 1 + \gamma^2) \right)^{\otimes N}\end{aligned}$$ By Lemmas \[lem:bern-clone\] and \[lem:isgm-label\] and Theorems \[thm:isgmreduction\] and \[thm:neg-spca\], we can take $$\epsilon_1 = 0, \quad \epsilon_2 = O\left( k_n^{-2}m^{-2}r^{-2t} \right), \quad \epsilon_3 = O\left( m^{3/2} n^{-1/2} \right) \quad \text{and} \quad \epsilon_{\text{4-5}} = O(N^{-1})$$ Applying Lemma \[lem:tvacc\] now yields the second total variation upper bound in the theorem and completes the proof of the theorem.
As in the previous section, the random matrix $R_{L, \epsilon}$ can be used in place of $K_{r, t}$ in our reduction $k$<span style="font-variant:small-caps;">-bpds-to-mslr</span>. Specifically, replacing $k\pr{-bpds-to-isgm}$ in Step 2 with $k\pr{-bpds-to-isgm}_R$ and again replacing $r^t$ with the more flexible parameter $L$ yields an alternative reduction $k\textsc{-bpds-to-mslr}_R$. The guarantees below for this modified reduction follow from the same argument as in the proof of Theorem \[thm:slr-reduction\], using Corollary \[thm:mod-isgmreduction\] in place of Theorem \[thm:isgmreduction\].
\[thm:mod-slr-reduction\] Let $n$ be a parameter and let $w(n) = \omega(1)$ be a slow-growing function. Fix initial and target parameters as follows:
- [Initial]{.nodecor} $k\pr{-bpds}$ [Parameters:]{.nodecor} $m, n, k_m, k_n, p, q$ and $F$ as in Theorem \[thm:slr-reduction\].
- [Target]{.nodecor} $\pr{mslr}$ [Parameters:]{.nodecor} $(N, d, \gamma, \epsilon)$ and a parameter $L = L(N) \in \mathbb{N}$ such that $N \le n$ and $(N, d, \epsilon, L)$ satisfy the conditions in Corollary \[thm:mod-isgmreduction\]. Suppose that $\gamma \in (0, 1/2)$ satisfies that $$\gamma^2 \le c \cdot \min\left\{ \frac{\epsilon k_m}{L \log(k_nmL) \log N}, \, \frac{k_n k_m}{n \log(mn)} \right\}$$ for a sufficiently small constant $c > 0$.
If $\mathcal{A}$ denotes $k\textsc{-bpds-to-mslr}_R$ applied with the parameters above, then $\mathcal{A}$ runs in $\textnormal{poly}(m, n)$ time and $$\begin{aligned}
\TV\left( \mathcal{A}\left( \mathcal{M}_{[m] \times [n]}(S \times T, p, q) \right), \, \pr{mslr}_D(N, S, d, \gamma, \epsilon) \right) &= o(1) \\
\TV\left( \mathcal{A}\left( \textnormal{Bern}(q)^{\otimes m \times n} \right), \, \left( \mN(0, I_d) \otimes \mN\left(0, 1 + \gamma^2\right) \right)^{\otimes N} \right) &= o(1)\end{aligned}$$ for all $k_m$-subsets $S \subseteq [m]$ and $k_n$-subsets $T \subseteq [n]$ with $|T \cap F_i| = 1$ for each $1 \le i \le k_n$.
Completing Tensors from Hypergraphs {#sec:2-hypergraph-planting}
===================================
**Algorithm** <span style="font-variant:small-caps;">Advice-Complete-Tensor</span>
*Inputs*: instance $H \in \mG_N^s$ with edge probabilities $0 < q < p \le 1$, an $(s - 1)$-set of advice vertices $V = \{v_1, v_2, \dots, v_{s - 1}\}$ of $H$
1. *Clone Hyperedges*: Compute the $(s!)^2$ hypergraphs $H^{\sigma_1, \sigma_2} \in \mG_N^s$ for each pair $\sigma_1, \sigma_2 \in S_s$ by applying $\pr{Bernoulli-Clone}$ with $t = (s!)^2$ to the $\binom{N}{s}$ hyperedge indicators of $H$ with input Bernoulli probabilities $p$ and $q$ and output probabilities $p$ and $$Q = 1 - (1 - p)^{1 - 1/t}(1 - q)^{1/t} + \mathbf{1}_{\{p = 1\}}\left( q^{1/t} - 1 \right)$$
2. *Form Tensor Entries*: For each $I = (i_1, i_2, \dots, i_s) \in \left( [N] \backslash V \right)^s$, set the $(i_1, i_2, \dots, i_s)$th entry of the tensor $T$ with dimensions $(N - s + 1)^{\otimes s}$ to be the following hyperedge indicator $$T_{i_1, i_2, \dots, i_s} = \mathbf{1}\left\{ \{v_1, v_2, \dots, v_{s - |P(I)|} \} \cup \{i_1, i_2, \dots, i_s\} \in E\left( H^{\tau_{\textnormal{P}}(I), \tau_{\textnormal{V}}(I)} \right) \right\}$$ where $P(I)$, $\tau_{\textnormal{P}}(I)$ and $\tau_{\textnormal{V}}(I)$ are as in Definition \[defn:tuple-stats\].
3. *Output*: Output the order $s$ tensor $T$ with axes indexed by the set $[N] \backslash V$.
**Algorithm** <span style="font-variant:small-caps;">Iterate-and-Reduce</span>
*Inputs*: $k\pr{-hpds}^s$ instance $H \in \mG_N^s$ with edge probabilities $0 < q < p \le 1$, a partition $E$ of $[N]$ into $k$ equally-sized parts, a one-sided blackbox $\mathcal{B}$ for the corresponding planted tensor problem
1. For every $(s - 1)$-set of vertices $\{v_1, v_2, \dots, v_{s - 1}\}$ all from different parts of $E$, form the tensor $T$ by applying <span style="font-variant:small-caps;">Advice-Complete-Tensor</span> to $H$ and $\{v_1, v_2, \dots, v_{s - 1}\}$, remove the indices of $T$ that are in the same part of $E$ as at least one of $\{v_1, v_2, \dots, v_{s - 1}\}$ and run the blackbox $\mathcal{B}$ on the resulting tensor $T$.
2. Output $H_1$ if any application of $\mathcal{B}$ in Step 1 outputs $H_1$, and otherwise output $H_0$.
In this section we introduce a key subroutine that will be used in our reduction to tensor PCA in Section \[sec:3-tensor\]. The starting point for our reduction $k\pr{-hpds-to-tpca}$ is the hypergraph problem $k\pr{-hpds}$. The adjacency tensor of this instance is missing all entries with at least one pair of equal indices. The first procedure <span style="font-variant:small-caps;">Advice-Complete-Tensor</span> in this section gives a method of completing these missing entries and producing an instance of the planted sub-tensor problem, given access to a set of $s - 1$ vertices in the clique, where $s$ is the order of the target tensor. In order to translate this into a reduction, we iterate over all $(s - 1)$-sets of vertices and carry out this reduction for each one, as will be described in more detail later in this section. For the motivation and high-level ideas behind the reductions in this section, we refer to the discussion in Section \[subsec:1-tech-completing\].
In order to describe our reduction <span style="font-variant:small-caps;">Advice-Complete-Tensor</span>, we will need the following definition which will be crucial in indexing the missing entries of the tensor.
\[defn:tuple-stats\] Given a tuple $I = (i_1, i_2, \dots, i_s)$ where each $i_j \in U$ for some set $U$, we define the partition $P(I)$ and permutations $\tau_{\textnormal{P}}(I)$ and $\tau_{\textnormal{V}}(I)$ of $[s]$ as follows:
1. Let $P(I)$ be the unique partition of $[s]$ into nonempty parts $P_1, P_2, \dots, P_t$ where $i_k = i_l$ if and only if $k, l \in P_j$ for some $1 \le j \le t$, and let $|P(I)| = t$.
2. Given the partition $P(I)$, let $\tau_{\textnormal{P}}(I)$ be the permutation of $[s]$ formed by ordering the parts $P_j$ in increasing order of their largest element, and then listing the elements of the parts $P_j$ according to this order, where the elements of each individual part are written in decreasing order.
3. Let $P'_1, P'_2, \dots, P'_t$ be the ordering of the parts of $P(I)$ as defined above and let $v_1, v_2, \dots, v_t$ be such that $v_j = i_k$ for all $k \in P'_j$ or in other words $v_j$ is the common value of $i_k$ of all indices $k$ in the part $P'_j$. The values $v_1, v_2, \dots, v_t$ are by definition distinct and their ordering induces a permutation $\sigma$ on $[t]$. Let $\tau_{\textnormal{V}}(I)$ be the permutation on $[s]$ formed by setting $\left( \tau_{\textnormal{V}}(I) \right)_{[t]} = \sigma$ and extending $\sigma$ to $[s]$ by taking $\left( \tau_{\textnormal{V}}(I) \right)(j) = j$ for all $t < j \le s$.
Note that $|P(I)|$ is the number of distinct values in $I$ and thus $|P(I)| = |\{i_1, i_2, \dots, i_s\}|$ for each $I$. For example, if $I = (4, 4, 1, 2, 2, 5, 3, 5, 2)$ and $s = 9$, then $P(I)$, $\tau_{\textnormal{P}}(I)$ and $\tau_{\textnormal{V}}(I)$ are $$\begin{aligned}
P(I) &= \left\{ \{ 1, 2 \}, \{ 3 \}, \{4, 5, 9\}, \{6, 8\}, \{ 7 \} \right\}, \quad \tau_{\textnormal{P}}(I) = (2, 1, 3, 7, 8, 6, 9, 5, 4) \quad \text{and} \\
\tau_{\textnormal{V}}(I) &= (4, 1, 3, 5, 2, 6, 7, 8, 9)\end{aligned}$$
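As a concrete companion to this worked example, the quantities in Definition \[defn:tuple-stats\] can be computed mechanically. The following Python sketch is our own illustration (not part of any reduction); it reads the clause "their ordering induces a permutation $\sigma$ on $[t]$" as taking the ranks of the values $v_1, v_2, \dots, v_t$, which matches the example above.

```python
from collections import defaultdict

def tuple_stats(I):
    """Compute P(I), tau_P(I) and tau_V(I) from Definition [defn:tuple-stats].

    I is a tuple of 1-indexed values; returns (parts, tau_P, tau_V) where
    parts lists the parts of P(I) ordered by increasing largest element.
    """
    s = len(I)
    # P(I): group the positions 1..s by their common value.
    groups = defaultdict(list)
    for pos, val in enumerate(I, start=1):
        groups[val].append(pos)
    # Order the parts by their largest element.
    parts = sorted(groups.values(), key=max)
    # tau_P: list each part's elements in decreasing order, parts in the order above.
    tau_P = tuple(p for part in parts for p in sorted(part, reverse=True))
    # tau_V: the ranks of the common values v_1, ..., v_t, extended by the
    # identity on the remaining coordinates t+1, ..., s.
    values = [I[part[0] - 1] for part in parts]
    t = len(parts)
    rank = {v: i + 1 for i, v in enumerate(sorted(values))}
    tau_V = tuple(rank[v] for v in values) + tuple(range(t + 1, s + 1))
    return parts, tau_P, tau_V
```

Running this on $I = (4, 4, 1, 2, 2, 5, 3, 5, 2)$ reproduces the partition and both permutations displayed above.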
We now establish the main Markov transition properties of $\textsc{Advice-Complete-Tensor}$. Given a set $X$, let $\mathcal{E}_{X, s}$ be the set $\binom{X}{s}$ of all subsets of $X$ of size $s$.
\[lem:completing\] Let $0 < q < p \le 1$ be such that $\min\{q, 1 - q\} = \Omega_N(1)$ and let $s$ be a constant. Let $0 < Q < p$ be given by $$Q = 1 - (1 - p)^{1 - 1/t}(1 - q)^{1/t} + \mathbf{1}_{\{p = 1\}}\left( q^{1/t} - 1 \right)$$ where $t = (s!)^2$. Let $V$ be an arbitrary $(s - 1)$-subset of $[N]$ and let $\mathcal{A}$ denote $\textsc{Advice-Complete-Tensor}$ with input $H$, output $T$, advice vertices $V$ and parameters $p$ and $q$. Then $\mathcal{A}$ runs in $\textnormal{poly}(N)$ time and satisfies $$\begin{aligned}
\mathcal{A}\left( \mathcal{M}_{\mathcal{E}_{[N], s}}\left( \mathcal{E}_{S \cup V, s}, \textnormal{Bern}(p), \textnormal{Bern}(q) \right) \right) &\sim \mathcal{M}_{\left( [N] \backslash V \right)^s}\left( S^{s}, \textnormal{Bern}(p), \textnormal{Bern}(Q) \right) \\
\mathcal{A}\left( \mathcal{M}_{\mathcal{E}_{[N], s}}\left( \textnormal{Bern}(q) \right) \right) &\sim \mathcal{M}_{\left( [N] \backslash V \right)^s}\left( \textnormal{Bern}(Q) \right)\end{aligned}$$ for all subsets $S \subseteq [N]$ disjoint from $V$.
First note that Step 2 of $\mathcal{A}$ is well defined since the fact that $|P(I)| = |\{i_1, i_2, \dots, i_s\}|$ implies that $\{v_1, v_2, \dots, v_{s - |P(I)|} \} \cup \{i_1, i_2, \dots, i_s\}$ is always a set of size $s$. We first consider the case in which $H \sim \mathcal{M}_{\mathcal{E}_{[N], s}}\left( \mathcal{E}_{S \cup V, s}, \textnormal{Bern}(p), \textnormal{Bern}(q) \right)$. By Lemma \[lem:bern-clone\], it follows that the hyperedge indicators of $H^{\sigma_1, \sigma_2}$ are all independent and distributed as $$\mathbf{1}\left\{ e \in E\left( H^{\sigma_1, \sigma_2} \right) \right\} \sim \left\{ \begin{array}{ll} \textnormal{Bern}(p) &\textnormal{if } e \subseteq S \cup V \\ \textnormal{Bern}(Q) &\textnormal{otherwise} \end{array} \right.$$ for each $\sigma_1, \sigma_2 \in S_s$ and subset $e \subseteq [N]$ with $|e| = s$. We now observe that $T$ agrees in its entrywise marginal distributions with $\mathcal{M}_{\left( [N] \backslash V \right)^s}\left( S^{s}, \textnormal{Bern}(p), \textnormal{Bern}(Q) \right)$. In particular, we have that:
- if $(i_1, i_2, \dots, i_s)$ is such that $i_j \in S$ for all $1 \le j \le s$ then we have that $\{v_1, v_2, \dots, v_{s - |P(I)|} \} \cup \{i_1, i_2, \dots, i_s\} \subseteq S \cup V$ and hence $$T_{i_1, i_2, \dots, i_s} = \mathbf{1}\left\{ \{v_1, v_2, \dots, v_{s - |P(I)|} \} \cup \{i_1, i_2, \dots, i_s\} \in E\left( H^{\tau_{\textnormal{P}}(I), \tau_{\textnormal{V}}(I)} \right) \right\} \sim \textnormal{Bern}(p)$$
- if $(i_1, i_2, \dots, i_s)$ is such that there is some $j$ such that $i_j \not \in S$, then $\{v_1, v_2, \dots, v_{s - |P(I)|} \} \cup \{i_1, i_2, \dots, i_s\} \not \subseteq S \cup V$ and $T_{i_1, i_2, \dots, i_s} \sim \text{Bern}(Q)$.
It remains to verify that the entries of $T$ are independent. Since all of the hyperedge indicators of the $H^{\sigma_1, \sigma_2}$ are independent, it suffices to verify that the entries of $T$ correspond to distinct hyperedge indicators.
To show this, we will show that $\{i_1, i_2, \dots, i_s\}$, $\tau_{\textnormal{P}}(I)$ and $\tau_{\textnormal{V}}(I)$ determine the tuple $I = (i_1, i_2, \dots, i_s)$, from which the desired result follows. Consider the longest increasing subsequence of $\tau_{\textnormal{P}}(I)$ starting with $\left(\tau_{\textnormal{P}}(I) \right)(1)$. The elements of this subsequence partition $\tau_{\textnormal{P}}(I)$ into contiguous subsequences corresponding to the parts of $P(I)$. Thus $\tau_{\textnormal{P}}(I)$ determines $P(I)$. Now the first $|P(I)|$ elements of $\tau_{\textnormal{V}}(I)$ along with $\{i_1, i_2, \dots, i_s\}$ determine the values $v_j$ in Definition \[defn:tuple-stats\] corresponding to $I$ on each part of $P(I)$. This uniquely determines the tuple $I$. Therefore the entries $T_{i_1, i_2, \dots, i_s}$ all correspond to distinct hyperedge indicators and are therefore independent. Applying this argument with $S = \emptyset$ yields the second identity in the statement of the lemma. This completes the proof of the lemma.
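As a quick numerical sanity check on the cloning probability (our own illustration, separate from the proof), note that for $p < 1$ the definition of $Q$ rearranges to $(1 - Q)^t = (1 - p)^{t - 1}(1 - q)$, and that $q < Q < p$:

```python
def clone_Q(p, q, t):
    """Output probability of the Bernoulli-Clone step for p < 1; the
    correction term 1_{p = 1}(q^(1/t) - 1) vanishes in this case."""
    return 1 - (1 - p) ** (1 - 1 / t) * (1 - q) ** (1 / t)
```

For instance, $p = 0.9$, $q = 0.5$ and $t = 4$ give $Q \approx 0.85$, with $(1 - Q)^4 = (0.1)^3 \cdot 0.5$ holding up to floating-point error.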
We now analyze the additional subroutine <span style="font-variant:small-caps;">Iterate-and-Reduce</span>. This will show that it suffices to design a reduction with low total variation error in order to prove computational lower bounds for tensor PCA. Let $k\pr{-pst}^s_E(N, k, p, q)$ denote the following *planted subtensor* hypothesis testing problem with hypotheses $$H_0 : T \sim \mathcal{M}_{[N]^s}\left( \textnormal{Bern}(q) \right) \quad \text{and} \quad H_1 : T \sim \mathcal{M}_{[N]^s}\left( S^{s}, \textnormal{Bern}(p), \textnormal{Bern}(q) \right)$$ where $S$ is chosen uniformly at random from all $k$-subsets of $[N]$ intersecting each part of $E$ in one element. The next lemma captures our key guarantee of <span style="font-variant:small-caps;">Iterate-and-Reduce</span>.
\[cor:one-side-reduction\] Fix a pair $0 < q < p \le 1$ with $\min\{q, 1 - q\} = \Omega(1)$, a constant $s$ and let $Q$ be as in Figure \[fig:hyp-to-tensors\]. Suppose that there is a reduction mapping both hypotheses of $k\pr{-pst}^s_E(N - (s - 1)N/k, k - s + 1, p, Q)$ with $k = o(\sqrt{N})$ to the corresponding hypotheses $H_0$ and $H_1$ of a testing problem $\mP$ within total variation $O(N^{-s})$. Then the $k\pr{-hpc}^s$ or $k\pr{-hpds}^s$ conjecture for constant $0 < q < p \le 1$ implies that there cannot be a $\textnormal{poly}(N)$ time algorithm $\mathcal{A}$ solving $\mP$ with a low false positive probability of $\bP_{H_0}[\mathcal{A}(X) = H_1] = O(N^{-s})$, where $X$ denotes the observed variable in $\mP$.
Assume for contradiction that there is such a $\textnormal{poly}(N)$ time algorithm $\mathcal{A}$ for $\mP$ with $\bP_{H_0}[\mathcal{A}(X) = H_1] = O(N^{-s})$ and Type I$+$II error $$\bP_{H_0}[\mathcal{A}(X) = H_1] + \bP_{H_1}[\mathcal{A}(X) = H_0] \le 1 - \epsilon$$ for some $\epsilon = \Omega(1)$. Furthermore, let $\mathcal{R}$ denote the reduction described in the lemma. If $H_0'$ and $H_1'$ denote the hypotheses of $k\pr{-pst}^s_E(N - (s - 1)N/k, k - s + 1, p, Q)$ and $T$ denotes an instance of this problem, then $\mathcal{R}$ satisfies that $$\TV\left( \mathcal{R}\left( \mL_{H_0'}(T) \right), \mL_{H_0}(T) \right) + \TV\left( \mathcal{R}\left( \mL_{H_1'}(T) \right), \mL_{H_1}(T) \right) = O(N^{-s})$$ Now consider applying <span style="font-variant:small-caps;">Iterate-and-Reduce</span> to: (1) a hard instance $H$ of $k\pr{-hpds}(N, k, p, q)$ with $k = o(\sqrt{N})$; and (2) the blackbox $\mathcal{B} = \mathcal{A} \circ \mathcal{R}$. Let $\pr{ir}(H) \in \{H_0'', H_1''\}$ denote the output of <span style="font-variant:small-caps;">Iterate-and-Reduce</span> on input $H$, and let $H_0''$ and $H_1''$ be the hypotheses of $k\pr{-hpds}(N, k, p, q)$. Furthermore, let $T_1, T_2, \dots, T_K$ denote the tensors formed in the $K = \left( \frac{N}{k} \right)^{s - 1} \binom{k}{s - 1}$ iterations of Step 1 of <span style="font-variant:small-caps;">Iterate-and-Reduce</span>. Note that each $T_i$ has all of its $s$ dimensions equal to $N - (s - 1)N/k$ since exactly $s - 1$ parts of $E$ of size $N/k$ are removed from $[N]$ in each iteration of Step 1 of <span style="font-variant:small-caps;">Iterate-and-Reduce</span>. First consider the case in which $H_0''$ holds. Each tensor in the sequence $T_1, T_2, \dots, T_K$ is marginally distributed as $\mathcal{M}_{[N - (s - 1)N/k]^s}\left( \textnormal{Bern}(Q) \right)$ by Lemma \[lem:completing\]. By definition $\pr{ir}(H) = H_1''$ if and only if some application of $\mathcal{B}(T_i)$ outputs $H_1$.
Now note that by a union bound, the definition of $\TV$ and the data-processing inequality, we have that $$\begin{aligned}
\bP_{H_0''}\left[\pr{ir}(H) = H_1''\right] &\le \sum_{i = 1}^K \bP_{H_0''}[\mathcal{A} \circ \mathcal{R}(T_i) = H_1] \\
&\le \sum_{i = 1}^K \left[ \bP_{H_0}[\mathcal{A}(X) = H_1] + \TV\left( \mathcal{R}\left( \mL_{H_0'}(T) \right), \mL_{H_0}(T) \right) \right] \\
&= O\left( K \cdot N^{-s} \right) = O(N^{-1})\end{aligned}$$ since $K = O(N^{s - 1})$. Now suppose that $H_1''$ holds and let $i^*$ be the first iteration of <span style="font-variant:small-caps;">Iterate-and-Reduce</span> in which each of the vertices $\{v_1, v_2, \dots, v_{s - 1}\}$ are in the planted dense sub-hypergraph of $H$. Lemma \[lem:completing\] shows that $T_{i^*}$ is distributed as $\mathcal{M}_{[N - (s - 1)N/k]^s}\left( S^s, \text{Bern}(p), \textnormal{Bern}(Q) \right)$ where $S$ is chosen uniformly at random over all $(k - s + 1)$-subsets of $[N - (s - 1)N/k]$ with one element per part of the input partition $E$ associated with $H$. We now have that $$\begin{aligned}
\bP_{H_1''}\left[\pr{ir}(H) = H_0''\right] &\le 1 - \bP_{H_1''}\left[\pr{ir}(H) = H_1''\right] \le 1 - \bP_{H_1''}[\mathcal{A} \circ \mathcal{R}(T_{i^*}) = H_1] \\
&\le 1 - \bP_{H_1}[\mathcal{A}(X) = H_1] + \TV\left( \mathcal{R}\left( \mL_{H_1'}(T) \right), \mL_{H_1}(T) \right) \\
&= \bP_{H_1}[\mathcal{A}(X) = H_0] + O(N^{-s})\end{aligned}$$ Therefore the Type I$+$II error of <span style="font-variant:small-caps;">Iterate-and-Reduce</span> is $$\bP_{H_0''}\left[\pr{ir}(H) = H_1''\right] + \bP_{H_1''}\left[\pr{ir}(H) = H_0''\right] \le \bP_{H_1}[\mathcal{A}(X) = H_0] + O(N^{-1}) \le 1 - \epsilon + O(N^{-1})$$ and <span style="font-variant:small-caps;">Iterate-and-Reduce</span> solves $k\pr{-hpds}$, contradicting the $k\pr{-hpds}$ conjecture.
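As a small consistency check on the count of iterations (our own illustration), the number $K$ of $(s - 1)$-sets of vertices with all elements in different parts of $E$ is $\left( \frac{N}{k} \right)^{s - 1} \binom{k}{s - 1}$, which can be verified by brute force for small parameters:

```python
from itertools import combinations
from math import comb

def count_advice_sets(N, k, s):
    """Count the (s - 1)-subsets of [N] whose elements lie in pairwise
    distinct parts of the partition of [N] into k contiguous parts."""
    part = lambda v: (v - 1) // (N // k)  # part index of vertex v
    return sum(
        1
        for A in combinations(range(1, N + 1), s - 1)
        if len({part(v) for v in A}) == s - 1
    )
```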
\[part:lower-bounds\]
Secret Leakage and Hardness Assumptions {#sec:2-secret-leakage}
=======================================
In this section, we further discuss the conditions in the $\pr{pc}_\rho$ conjecture and provide evidence for it and for the specific hardness assumptions we use in our reductions. In Section \[subsec:2-sl-verifying\], we show that $k\pr{-hpc}^s$ is our strongest hardness assumption, explicitly give the $\rho$ corresponding to each of these hardness assumptions and show that the barriers in Conjecture \[conj:hard-conj\] are supported by the $\pr{pc}_\rho$ conjecture for these $\rho$. In Section \[subsec:2-low-degree\], we give more general evidence for the $\pr{pc}_{\rho}$ conjecture through the failure of low-degree polynomial tests. We also discuss technical conditions in variants of the low-degree conjecture and how these relate to the $\pr{pc}_\rho$ conjecture. Finally, in Section \[subsec:2-sq\], we give evidence supporting several of the barriers in Conjecture \[conj:hard-conj\] from statistical query lower bounds.
We remark that, as mentioned at the end of Section \[sec:1-PC\], all of our results and conjectures for $\pr{pc}_\rho$ appear to also hold for $\pr{pds}_\rho$ at constant edge densities $0 < q < p \le 1$. Evidence for these extensions to $\pr{pds}_\rho$ from the failure of low-degree polynomials and SQ algorithms can be obtained through computations analogous to those in Sections \[subsec:2-low-degree\] and \[subsec:2-sq\].
Hardness Assumptions and the $\pr{pc}_\rho$ Conjecture {#subsec:2-sl-verifying}
------------------------------------------------------
In this section, we continue the discussion of the $\pr{pc}_{\rho}$ conjecture from Section \[sec:1-PC\]. We first show that $k\pr{-hpc}^s$ reduces to the other conjectured barriers in Conjecture \[conj:hard-conj\]. We then formalize the discussion in Section \[sec:1-PC\] and explicitly construct secret leakage distributions $\rho$ such that the graph problems in Conjecture \[conj:hard-conj\] can be obtained from instances of $\pr{pc}_\rho$ with these $\rho$. We then verify that the $\pr{pc}_\rho$ conjecture implies Conjecture \[conj:hard-conj\] up to arbitrarily small polynomial factors. More precisely, we verify that these $\rho$, when constrained to be in the conjecturally hard parameter regimes in Conjecture \[conj:hard-conj\], satisfy the tail bound conditions on $p_\rho(s)$ in the $\pr{pc}_{\rho}$ conjecture.
#### The $k\pr{-hpc}^s$ Conjecture is the Strongest Hardness Assumption.
First note that when $s = 2$, our conjectured hardness for $k\pr{-hpc}^s$ is exactly our conjectured hardness for $k\pr{-pc}$ in Conjecture \[conj:hard-conj\]. Thus it suffices to show that Conjecture \[conj:hard-conj\] for $k\pr{-hpc}^s$ implies the conjecture for $k\pr{-bpc}$ and $\pr{bpc}$. This is the content of the following lemma.
\[lem:khpc-strong\] Let $\alpha$ be a fixed positive rational number and $w = w(n)$ be an arbitrarily slow-growing function with $w(n) \to \infty$. Then there is a positive integer $s$ and a $\textnormal{poly}(n)$ time reduction from $k\pr{-hpc}^s(n, k, 1/2)$ with $k = o(\sqrt{n})$ to either $k\pr{-bpc}(M, N, k_M, k_N, 1/2)$ or $\pr{bpc}(M, N, k_M, k_N, 1/2)$ for some parameters satisfying $M = \Theta(N^\alpha)$ and $Cw^{-1} \sqrt{N} \le k_N = o(\sqrt{N})$ and $Cw^{-1} \sqrt{M} \le k_M = o(\sqrt{M})$ for some positive constant $C > 0$.
We first describe the desired reduction to $k\pr{-bpc}$. Let $\alpha = a/b$ for two fixed integers $a$ and $b$, and let $H$ be an input instance of $k\pr{-hpc}^{a + b}_E(n, k, 1/2)$ where $E$ is a fixed known partition of $[n]$. Suppose that $H$ is a nearly tight instance with $w^{-1/\max(a, b)} \sqrt{n} \le k = o(\sqrt{n})$. Now consider the following reduction:
1. Let $R_1, R_2, \dots, R_{a + b}$ be a partition of $[k]$ into $a + b$ sets of sizes differing by at most $1$, and let $E(R_j) = \bigcup_{i \in R_j} E_i$ for each $j \in [a + b]$.
2. Form the bipartite graph $G$ with left vertex set indexed by $V_1 = E(R_1) \times E(R_2) \times \cdots \times E(R_a)$ and right vertex set $V_2 = E(R_{a + 1}) \times E(R_{a + 2}) \times \cdots \times E(R_{a + b})$ such that $(u_1, u_2, \dots, u_a) \in V_1$ and $(v_1, v_2, \dots, v_b) \in V_2$ are adjacent if and only if $\{u_1, \dots, u_a, v_1, \dots, v_b\}$ is a hyperedge of $H$.
3. Output $G$ with left parts $E_{i_1} \times E_{i_2} \times \cdots \times E_{i_a}$ for all $(i_1, i_2, \dots, i_a) \in R_1 \times R_2 \times \cdots \times R_a$ and right parts $E_{i_1} \times E_{i_2} \times \cdots \times E_{i_b}$ for all $(i_1, i_2, \dots, i_b) \in R_{a+1} \times R_{a+2} \times \cdots \times R_{a+b}$, after randomly permuting the vertex labels of $G$ within each of these parts.
Note that since $a + b = \Theta(1)$, we have that $|E(R_i)| = \Theta(n)$ for each $i$ and thus $N = |V_2| = \Theta(n^b)$ and $M = |V_1| = \Theta(n^a) = \Theta(N^\alpha)$. Under $H_0$, each possible hyperedge of $H$ is included independently with probability $1/2$. Since each edge indicator of $G$ corresponds to a distinct hyperedge indicator of $H$ in Step 2 above, it follows that each edge of $G$ is also included with probability $1/2$ and thus $G \sim \mG_B(M, N, 1/2)$.
In the case of $H_1$, suppose that $H$ is distributed according to the hypergraph planted clique distribution with clique vertices $S \subseteq [n]$ where $S \sim \mU_n(E)$. Examining the definition of the edge indicators in Step 2 above yields that $G$ is a sample from $H_1$ of $k\pr{-bpc}(M, N, k_M, k_N, 1/2)$ conditioned on having left biclique set $\prod_{i = 1}^a (S \cap E(R_i))$ and right biclique set $\prod_{i = a + 1}^{a+b} (S \cap E(R_i))$. Observe that these biclique sets intersect each of the parts described in Step 3 above in exactly one vertex. Now note that since $S$ has one vertex per part of $E$, we have that $|S \cap E(R_i)| = |R_i| = \Theta(k)$ since $a + b = \Theta(1)$. Thus $k_M = \left| \prod_{i = 1}^a (S \cap E(R_i)) \right| = \Theta(k^a)$ and $k_N = \Theta(k^b)$. The bound on $k$ now implies that the two desired bounds on $k_N$ and $k_M$ hold for a sufficiently small constant $C > 0$. Thus the permutations in Step 3 produce a sample exactly from $k\pr{-bpc}(M, N, k_M, k_N, 1/2)$ in the desired parameter regime. If instead of only permuting vertex labels within each part, we randomly permute all left vertex labels and all right vertex labels in Step 3, the resulting reduction produces $\pr{bpc}$ instead of $k\pr{-bpc}$. The correctness of this reduction follows from the same argument as for $k\pr{-bpc}$.
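The edge construction in Step 2 can be sketched concretely. The following Python fragment is our own illustration, with an arbitrary hyperedge oracle standing in for $H$; since the parts $E(R_j)$ are disjoint, each pair of tuples queries a distinct hyperedge, which is the key fact used in the argument above.

```python
from itertools import product

def build_bipartite_edges(left_parts, right_parts, is_hyperedge):
    """Step 2: left vertices are tuples in E(R_1) x ... x E(R_a), right
    vertices are tuples in E(R_{a+1}) x ... x E(R_{a+b}); (u, v) is an
    edge iff the union of their coordinates is a hyperedge of H."""
    edges = set()
    for u in product(*left_parts):
        for v in product(*right_parts):
            if is_hyperedge(frozenset(u) | frozenset(v)):
                edges.add((u, v))
    return edges
```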
We remark that since $m$ and $n$ are polynomial in each other in the setup in Conjecture \[conj:hard-conj\] for $k\pr{-bpc}$ and $\pr{bpc}$, the lemma above fills out a dense subset of this entire parameter regime – where $m = \Theta(n^\alpha)$ for some rational $\alpha$. In the case where $\alpha$ is irrational, the reduction in Lemma \[lem:khpc-strong\], when composed with our other reductions beginning with $k\pr{-bpc}$ and $\pr{bpc}$, shows tight computational lower bounds up to arbitrarily small polynomial factors $n^{\epsilon}$ by approximating $\alpha$ arbitrarily closely with a rational number.
#### Hardness Conjectures as Instances of $\pr{pc}_\rho$.
We now will verify that each of the graph problems in Conjecture \[conj:hard-conj\] can be obtained from $\pr{pc}_\rho$. To do this, we explicitly construct several $\rho$ and give simple reductions from the corresponding instances of $\pr{pc}_\rho$ to these graph problems. We begin with $k\pr{-pc}$, $\pr{bpc}$ and $k\pr{-bpc}$ as their discussion will be brief.\
*Secrets for $k\pr{-pc}$, $\pr{bpc}$ and $k\pr{-bpc}$.* Below are the $\rho$ corresponding to these three graph problems. Both $\pr{bpc}$ and $k\pr{-bpc}$ can be obtained by restricting to bipartite subgraphs of the $\pr{pc}_\rho$ instances with these $\rho$.
- **$k$-partite :** Suppose that $k$ divides $n$ and $E$ is a partition of $[n]$ into $k$ parts of size $n/k$. By definition, $k\pr{-pc}_E(n, k, 1/2)$ is $\pr{pc}_\rho(n, k, 1/2)$ where $\rho = \rho_{k\pr{-pc}}(E, n, k)$ is the uniform distribution $\mU_n(E)$ over all $k$-sets of $[n]$ intersecting each part of $E$ in one element.
- **bipartite :** Let $\rho_{\pr{bpc}}(m, n, k_m, k_n)$ be the uniform distribution over all $(k_n + k_m)$-sets of $[n + m]$ with $k_n$ elements in $\{1, 2, \dots, n\}$ and $k_m$ elements in $\{n + 1, n + 2, \dots, n + m\}$. An instance of $\pr{bpc}(m, n, k_m, k_n, 1/2)$ can then be obtained by outputting the bipartite subgraph of $\pr{pc}_\rho(m + n, k_m + k_n, 1/2)$ with this $\rho$, consisting of the edges between left vertex set $\{n + 1, n + 2, \dots, n + m\}$ and right vertex set $\{1, 2, \dots, n\}$.
- **$k$-part bipartite :** Suppose that $k_n$ divides $n$, $k_m$ divides $m$, and $E$ and $F$ are partitions of $[n]$ and $[m]$ into $k_n$ and $k_m$ parts of equal size, respectively. Let $\rho_{k\pr{-bpc}}(E, F, m, n, k_m, k_n)$ be uniform over all $(k_n + k_m)$-subsets of $[n+m]$ with exactly one vertex in each part of both $E$ and $n + F$. Here, $n + F$ denotes the partition of $\{n + 1, n + 2, \dots, n + m\}$ induced by shifting indices in $F$ by $n$. As with $\pr{bpc}$, $k\pr{-bpc}(m, n, k_m, k_n, 1/2)$ can be realized as the bipartite subgraph of $\pr{pc}_\rho(m+n, k_m + k_n, 1/2)$, with this $\rho$, between the vertex sets $\{n + 1, n + 2, \dots, n + m\}$ and $\{1, 2, \dots, n\}$.
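These secret distributions are straightforward to sample directly from their definitions; the following Python sketch (our own illustration) samples $\mU_n(E)$ and $\rho_{\pr{bpc}}(m, n, k_m, k_n)$:

```python
import random

def sample_U_n_E(parts):
    """Sample from U_n(E): pick one uniformly random element of each part."""
    return {random.choice(sorted(part)) for part in parts}

def sample_rho_bpc(m, n, k_m, k_n):
    """Sample from rho_bpc: a (k_n + k_m)-subset of [n + m] with k_n
    elements in {1, ..., n} and k_m elements in {n + 1, ..., n + m}."""
    right = random.sample(range(1, n + 1), k_n)
    left = random.sample(range(n + 1, n + m + 1), k_m)
    return set(right) | set(left)
```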
*Secret for $k\pr{-hpc}^s$.* We first will give the secret $\rho$ corresponding to $k\pr{-hpc}^s$ for even $s$, which can be viewed as roughly the pushforward of $\mU_n(E)$ after unfolding the adjacency tensor of $k\pr{-hpc}^s$. The secret for odd $s$ will then be obtained through a slight modification of the even case.
Suppose that $s = 2t$. Given a set $S \subseteq [n]$, let $P_t^n(S)$ denote the subset of $[n^t]$ given by $$P_t^n(S) = \left\{ 1 + \sum_{j = 0}^{t - 1} (a_j - 1) n^j : a_0, a_1, \dots, a_{t - 1} \in S \right\}$$ In other words, $P_t^n(S)$ is the set of all numbers $x$ in $[n^t]$ such that the base-$n$ representation of $x - 1$ only has digits in $S - 1$, where $S - 1$ denotes the set $\{ x - 1 : x \in S \}$. Note that if $|S| = k$ then $|P_t^n(S)| = k^t$. Given a partition $E$ of $[n]$ into $k$ parts of size $n/k$, let $\rho_{k\pr{-hpc}^s}(E, n, k)$ be the distribution over $k^t$-subsets of $[n^t]$ sampled by choosing $S$ at random from $\mU_n(E)$ and outputting $P_t^n(S)$. Throughout the rest of this section, we will let $I(a_0, a_1, \dots, a_{t - 1})$ denote the sum $1 + \sum_{j = 0}^{t - 1} (a_j - 1) n^j$. We now will show that $k\pr{-hpc}^s_E(n, k, 1/2)$ can be obtained from $\pr{pc}_\rho(n^t, k^t, 1/2)$ where $\rho = \rho_{k\pr{-hpc}^s}(E, n, k)$. Intuitively, this instance of $\pr{pc}_\rho$ has a subset of edges corresponding to the unfolded adjacency tensor of $k\pr{-hpc}^s_E$. More formally, consider the following steps.
1. Let $G$ be an input instance of $\pr{pc}_\rho(n^t, k^t, 1/2)$ and let $H$ be the output hypergraph with vertex set $[n]$.
2. Construct $H$ as follows: for each possible hyperedge $e = \{a_1, a_2, \dots, a_{2t}\}$, with $1 \le a_1 < a_2 < \cdots < a_{2t} \le n$, include $e$ in $H$ if and only if there is an edge between vertices $I(a_1, a_2, \dots, a_t)$ and $I(a_{t + 1}, a_{t + 2}, \dots, a_{2t})$ in $G$.
Under $H_0$, it follows that $G \sim \mG(n^t, 1/2)$. Note that each hyperedge $e$ in Step 2 identifies a unique pair of distinct vertices $I(a_1, a_2, \dots, a_t)$ and $I(a_{t + 1}, a_{t + 2}, \dots, a_{2t})$ in $G$, and thus the hyperedges of $H$ are independently included with probability $1/2$. Under $H_1$, it follows that the instance of $\pr{pc}_\rho(n^t, k^t, 1/2)$ is sampled from the planted clique distribution with clique vertices $P_t^n(S)$ where $S \sim \mU_n(E)$. By the definition of $P_t^n(S)$, it follows that $I(a_1, a_2, \dots, a_t)$ is in this clique if and only if $a_1, a_2, \dots, a_t \in S$. Examining the edge indicators of $H$ then yields that $H$ is a sample from the hypergraph planted clique distribution with clique vertex set $S$. Since $S \sim \mU_n(E)$, it follows that under both $H_0$ and $H_1$ the hypergraph $H$ is a sample from the corresponding hypothesis of $k\pr{-hpc}^s_E(n, k, 1/2)$.
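The indexing used in this construction can be made concrete. The following Python sketch is our own illustration of $I$ and $P_t^n(S)$; it also checks the two facts used above, namely $|P_t^n(S)| = k^t$ and that distinct hyperedges in Step 2 read off distinct, non-loop edges of $G$.

```python
from itertools import combinations, product

def index_I(digits, n):
    """I(a_0, ..., a_{t-1}) = 1 + sum_j (a_j - 1) n^j."""
    return 1 + sum((a - 1) * n ** j for j, a in enumerate(digits))

def P_t_n(S, t, n):
    """P_t^n(S): the x in [n^t] whose base-n digits of x - 1 lie in S - 1."""
    return {index_I(digits, n) for digits in product(sorted(S), repeat=t)}

def edge_for_hyperedge(e, n):
    """The pair of G-vertices queried for a hyperedge e = {a_1 < ... < a_2t}."""
    a = sorted(e)
    t = len(a) // 2
    return (index_I(a[:t], n), index_I(a[t:], n))
```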
Now suppose that $s$ is odd with $s = 2t + 1$. The idea in this case is to pair up adjacent digits in base-$n$ expansions and use these pairs to label the vertices of $k\pr{-hpc}^s$. More precisely, suppose that $n = N^2$ and $k = K^2$ for some positive integers $K$ and $N$. Let $E$ be a fixed partition of $[n]$ into $k = K^2$ equally sized parts and let $\rho_{k\pr{-hpc}^s}(E, n, k)$ be $\rho_{k\pr{-hpc}^{2s}}(F, N, K)$ as defined above for the even number $2s$, where $F$ is a fixed partition of $[N]$ into $K$ equally sized parts. We now will show that $k\pr{-hpc}^s_E(n, k, 1/2)$ can be obtained from $\pr{pc}_\rho(N^s, K^s, 1/2)$ where $\rho = \rho_{k\pr{-hpc}^{2s}}(F, N, K)$. Let $I'$ be the analogue of $I$ for base-$N$ expansions, i.e., let $I'(b_0, b_1, \dots, b_{t - 1})$ denote the sum $1 + \sum_{j = 0}^{t - 1} (b_j - 1) N^j$. Consider the following steps.
1. Let $G$ be an instance of $\pr{pc}_\rho(N^{s}, K^{s}, 1/2)$ and let $H$ be the output hypergraph with vertex set $[n]$.
2. Let $\sigma : [n] \to [n]$ be a bijection such that, for each $i \in [k]$, we have that $$\sigma(E_i) = \left\{ I'(b_0, b_1) : b_0 \in F_{c_0} \text{ and } b_1 \in F_{c_1} \right\}$$ where $c_0, c_1$ are the unique elements of $[K]$ with $i - 1 = (c_0 - 1) + (c_1 - 1)K$.
3. Construct $H$ as follows. For each possible hyperedge $e = \{a_1, a_2, \dots, a_{s}\}$, with $1 \le a_1 < a_2 < \cdots < a_{s} \le n$, let $b_{2i - 1}, b_{2i}$ be the unique elements of $[N]$ with $I'(b_{2i - 1}, b_{2i}) = \sigma(a_i)$ for each $i$. Now include $e$ in $H$ if and only if there is an edge between the two vertices $I(b_1, b_2, \dots, b_s)$ and $I(b_{s + 1}, b_{s + 2}, \dots, b_{2s})$ in $G$.
4. Permute the vertex labels of $H$ within each part $E_i$ uniformly at random.
Note that $\sigma$ always trivially exists because the $K^2$ sets $E_1, E_2, \dots, E_{K^2}$ and the $K^2$ sets $F'_{i, j} = \{I'(b_0, b_1) : b_0 \in F_{i} \text{ and } b_1 \in F_{j} \}$ for $1 \le i, j \le K$ are both partitions of $[n]$ into parts of size $N^2/K^2$. As in the case where $s$ is even, under $H_0$ we have that $G \sim \mG(N^{s}, 1/2)$ and the hyperedges of $H$ are independently included with probability $1/2$, since Step 3 identifies distinct pairs of vertices for each hyperedge $e$. Under $H_1$, let $S \sim \mU_N(F)$ be such that the clique vertices in $G$ are $P_s^N(S)$. By the same reasoning as in the even case, after Step 3, the hypergraph $H$ is distributed as a sample from the hypergraph planted clique distribution with clique vertex set $\sigma^{-1}(I'(S, S))$ where $I'(S, S) = \{ I'(s_0, s_1) : s_0, s_1 \in S\}$. The definition of $\sigma$ now ensures that this clique has one vertex per part of $E$. Step 4 ensures that the resulting hypergraph is exactly a sample from $H_1$ of $k\pr{-hpc}^{s}$. We remark that the conditions $n = N^2$ and $k = K^2$ do not affect our lower bounds when composing the reduction above with our other reductions. This is due to the subsequence criterion for computational lower bounds in Condition \[cond:lb\].
#### Verifying the Conditions of the $\pr{pc}_\rho$ Conjecture.
We now verify that the $\pr{pc}_{\rho}$ conjecture corresponds to the hard regimes in Conjecture \[conj:hard-conj\] up to arbitrarily small polynomial factors. To do this, it suffices to verify the tail bound on $p_{\rho}(s)$ in the $\pr{pc}_{\rho}$ conjecture for each $\rho$ described above, which is done in the theorem below. In the next section, we will show that a slightly stronger variant of the $\pr{pc}_\rho$ conjecture implies Conjecture \[conj:hard-conj\] exactly, without the small polynomial factors.
\[thm:verify\] Suppose that $m$ and $n$ are polynomial in one another and let $\epsilon > 0$ be an arbitrarily small constant. Let $\rho$ be any one of the following distributions:
1. $\rho_{k\pr{-pc}}(E, n, k)$ where $k = O(n^{1/2 - \epsilon})$;
2. $\rho_{\pr{bpc}}(m, n, k_m, k_n)$ where $k_n = O(n^{1/2 - \epsilon})$ and $k_m = O(m^{1/2 - \epsilon})$;
3. $\rho_{k\pr{-bpc}}(E, F, m, n, k_m, k_n)$ where $k_n = O(n^{1/2 - \epsilon})$ and $k_m = O(m^{1/2 - \epsilon})$; and
4. $\rho_{k\pr{-hpc}^t}(E, n, k, 1/2)$ for $t \ge 3$ where $k = O(n^{1/2 - \epsilon})$.
Then there is a constant $\delta > 0$ such that: for any parameter $d = O_n((\log n)^{1 + \delta})$, there is some $p_0 = o_n(1)$ such that $p_{\rho}(s)$ satisfies the tail bounds $$p_{\rho}(s) \le p_0 \cdot \left\{ \begin{array}{ll} 2^{-s^2} &\textnormal{if } 1 \le s^2 < d \\ s^{-2d-4} &\textnormal{if } s^2 \ge d \end{array} \right.$$
We first prove the desired tail bounds hold for (1). Let $C > 0$ be a constant such that $k \le C n^{1/2 - \epsilon}$. Note that the probability that $S$ and $S'$ independently sampled from $\rho = \rho_{k\pr{-pc}}(E, n, k)$ intersect in their elements in $E_i$ is $1/|E_i| = k/n$ for each $1 \le i \le k$. Furthermore, these events are independent. Thus it follows that if $\rho = \rho_{k\pr{-pc}}(E, n, k)$, then $p_{\rho}$ is the PMF of $\text{Bin}(k, k/n)$. In particular, we have that $$p_{\rho}(s) = \binom{k}{s} \left( \frac{k}{n} \right)^s \left( 1- \frac{k}{n} \right)^{k - s} \le k^s \cdot \left( \frac{k}{n} \right)^s = \left( \frac{k^2}{n} \right)^s \le C^{2s} \cdot n^{-2\epsilon s}$$ Let $p_0 = p_0(n)$ be a function tending to zero arbitrarily slowly. The bound above implies that $p_{\rho}(s) \le p_0 \cdot 2^{-s^2}$ as long as $s \le C_1 \log n$ for some sufficiently small constant $C_1 > 0$. Furthermore a direct computation verifies that $p_{\rho}(s) \le p_0 \cdot s^{-2d-4}$ as long as $$s \ge \frac{C_2 d \log d}{\log n}$$ for some sufficiently large constant $C_2 > 0$. Thus if $d = O_n((\log n)^{1 + \delta})$ for some $\delta \in (0, 1)$, then $\frac{C_2 d \log d}{\log n} < \sqrt{d}$ and $C_1 \log n > \sqrt{d}$ for sufficiently large $n$. This implies the desired tail bound for (1).
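The binomial bound above is easy to sanity-check numerically. The sketch below (our own code, with illustrative parameters) verifies $p_\rho(s) \le (k^2/n)^s$ for the $\text{Bin}(k, k/n)$ PMF in a regime with $k < \sqrt{n}$:

```python
from math import comb

def p_rho(s, n, k):
    # PMF of |S ∩ S'| ~ Bin(k, k/n) under the k-pc prior
    p = k / n
    return comb(k, s) * p**s * (1 - p)**(k - s)

# Illustrative parameters with k = O(n^{1/2 - eps}): n = 10^4, k = 50
n, k = 10_000, 50
assert all(p_rho(s, n, k) <= (k**2 / n)**s for s in range(1, k + 1))
```

The assertion mirrors the chain of inequalities in the proof: $\binom{k}{s} \le k^s$ and $(1 - k/n)^{k - s} \le 1$.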
The other three cases are similar. In the case of (3), if $S$ and $S'$ are independently sampled from $\rho = \rho_{k\pr{-bpc}}(E, F, m, n, k_m, k_n)$, then the probability that $S$ and $S'$ intersect in their elements in $E_i$ is $k_n/n$ for each $1 \le i \le k_n$, and the probability that they intersect in their elements in $n + F_i$ is $k_m/m$ for each $1 \le i \le k_m$. Thus $p_\rho$ is the PMF of the independent sum of samples from $\text{Bin}(k_m, k_m/m)$ and $\text{Bin}(k_n, k_n/n)$. It follows that $$\begin{aligned}
p_{\rho}(s) &= \sum_{\ell = 0}^s \binom{k_n}{\ell} \left( \frac{k_n}{n} \right)^{\ell} \left( 1- \frac{k_n}{n} \right)^{k_n - \ell} \cdot \binom{k_m}{s - \ell} \left( \frac{k_m}{m} \right)^{s - \ell} \left( 1- \frac{k_m}{m} \right)^{k_m - s + \ell} \nonumber \\
&\le \sum_{\ell = 0}^s \left( \frac{k_n^2}{n} \right)^{\ell} \left( \frac{k_m^2}{m} \right)^{s - \ell} \le (s + 1) \cdot \max\left\{ \left( \frac{k_n^2}{n} \right)^s, \left( \frac{k_m^2}{m} \right)^s \right\} \label{eqn:case-3}\end{aligned}$$ Repeating the bounding argument as in (1) shows that the desired tail bound holds for (3) if $d = O_n((\log n)^{1 + \delta})$ for some $\delta \in (0, 1)$. Since $m$ and $n$ being polynomial in one another implies that $\log m = \Theta(\log n)$, the $(k_m^2/m)^s$ term and the additional factor of $s + 1$ do not affect this bounding argument other than changing the constants $C_1$ and $C_2$. In the case of (2), similar reasoning as in (3) yields that $p_{\rho}$, where $\rho = \rho_{\pr{bpc}}(m, n, k_m, k_n)$, is the PMF of the independent sum of samples from $\text{Hyp}\left( n, k_n, k_n \right)$ and $\text{Hyp}\left( m, k_m, k_m \right)$. Now note that $$\bP\left[ \text{Hyp}\left( n, k_n, k_n \right) = \ell \right] = \frac{\binom{k_n}{\ell} \binom{n - k_n}{k_n - \ell}}{\binom{n}{k_n}} \le \frac{k_n^{\ell} \binom{n - \ell}{k_n - \ell}}{\binom{n}{k_n}} = k_n^{\ell} \prod_{i = 0}^{\ell - 1} \frac{k_n - i}{n - i} \le \left( \frac{k_n^2}{n} \right)^{\ell}$$ This implies that the same upper bound on $p_{\rho}(s)$ as in Equation (\[eqn:case-3\]) also holds for $\rho$ in the case of (2). The argument above for (3) now establishes the desired tail bounds for (2).
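The hypergeometric bound used for case (2), $\bP[\text{Hyp}(n, k_n, k_n) = \ell] \le (k_n^2/n)^\ell$, can likewise be checked numerically (our own sketch, illustrative parameters only):

```python
from math import comb

def hyp_pmf(ell, n, k):
    # P[Hyp(n, k, k) = ell]: overlap of two independent uniform k-subsets of [n]
    return comb(k, ell) * comb(n - k, k - ell) / comb(n, k)

n, k = 10_000, 50
assert all(hyp_pmf(ell, n, k) <= (k**2 / n)**ell for ell in range(k + 1))
```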
We first handle the case in (4) where $t$ is even with $t = 2r$. We have that $\rho = \rho_{k\pr{-hpc}^t}(E, n, k, 1/2)$ can be sampled as $P^n_r(S) \subseteq [n^r]$ where $S \sim \mU_n(E)$. Thus $p_{\rho}(s)$ is the PMF of $|P^n_r(S) \cap P^n_r(S')|$ where $S, S' \sim_{\text{i.i.d.}} \mU_n(E)$. Furthermore the definition of $P^n_r$ implies that $|P^n_r(S) \cap P^n_r(S')| = |S \cap S'|^r$ and, from case (1), we have that $|S \cap S'| \sim \text{Bin}(k, k/n)$. It now follows that $$p_{\rho}(s) = \left\{ \begin{array}{ll} \binom{k}{s^{1/r}} \left( \frac{k}{n} \right)^{s^{1/r}} \left( 1- \frac{k}{n} \right)^{k - s^{1/r}} &\text{if } s \text{ is an } r\text{th power} \\ 0 &\text{otherwise} \end{array} \right.$$ The same bounds as in case (1) therefore imply that $p_{\rho}(s) \le (k^2/n)^{s^{1/r}}$ for all $s \ge 0$. A similar analysis as in (1) now shows that $p_{\rho}(s) \le p_0 \cdot 2^{-s^2}$ holds if $s \le C_1 (\log n)^{r/(2r - 1)}$ for some sufficiently small constant $C_1 > 0$, and that $p_{\rho}(s) \le p_0 \cdot s^{-2d-4}$ holds if $$s \ge C_2 \left( \frac{d \log d}{\log n} \right)^r$$ for some sufficiently large constant $C_2 > 0$. As long as $d = O_n((\log n)^{1 + \delta})$ for some $0 < \delta < 1/(2r - 1)$, we have that $C_2 \left( \frac{d \log d}{\log n} \right)^r < \sqrt{d}$ and $C_1 (\log n)^{r/(2r - 1)} > \sqrt{d}$ for sufficiently large $n$. Since $t$ and $r$ are constants here, $\delta$ can be taken to be constant as well. In the case where $t$ is odd, it follows that $\rho_{k\pr{-hpc}^t}(E, n, k, 1/2)$ is the same as $\rho_{k\pr{-hpc}^{2t}}(F, \sqrt{n}, \sqrt{k}, 1/2)$ for some partition $F$ as long as $n$ and $k$ are squares. The same argument establishes the desired tail bound for this prior, completing the case of (4) and proof of the theorem.
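The key identity $|P^n_r(S) \cap P^n_r(S')| = |S \cap S'|^r$ used above can be verified on small instances. The sketch below (our own code) takes $P^n_r(S)$ to be the set of labels $I(a_1, \dots, a_r)$ with every $a_i \in S$, consistent with the characterization that $I(a_1, \dots, a_r)$ lies in the planted set if and only if $a_1, \dots, a_r \in S$:

```python
import itertools

def index(digits, n):
    # I(a_1, ..., a_r) = 1 + sum_j (a_j - 1) * n^j
    return 1 + sum((a - 1) * n**j for j, a in enumerate(digits))

def P(S, n, r):
    # Labels of all r-tuples with every coordinate in S; |P(S, n, r)| = |S|^r
    return {index(d, n) for d in itertools.product(sorted(S), repeat=r)}

n, r = 6, 3
S, Sp = {1, 2, 4}, {2, 4, 5}
assert len(P(S, n, r) & P(Sp, n, r)) == len(S & Sp) ** r
```

The intersection of the two label sets consists exactly of tuples with all coordinates in $S \cap S'$, which gives the $|S \cap S'|^r$ count.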
Low-Degree Polynomials and the $\pr{pc}_\rho$ Conjecture {#subsec:2-low-degree}
--------------------------------------------------------
In this section, we show that the low-degree conjecture – that low-degree polynomials are optimal for a class of average-case hypothesis testing problems – implies the $\pr{pc}_\rho$ conjecture. In particular, we will obtain a simple expression capturing the power of the optimal low-degree polynomial for $\pr{pc}_\rho$ in Proposition \[prop:sl-ld\]. We then will apply this proposition to prove Theorem \[thm:sl-ld\], showing that the power of this optimal low-degree polynomial tends to zero under the tail bounds on $p_\rho$ in the $\pr{pc}_{\rho}$ conjecture. We also will discuss a stronger version of the $\pr{pc}_\rho$ conjecture that exactly implies Conjecture \[conj:hard-conj\]. First, we informally introduce the low-degree conjecture and the technical conditions arising in its various formalizations in the literature.
#### Polynomial Tests and the Low-Degree Conjecture.
In this section, we will draw heavily from similar discussions in [@hopkins2017efficient] and Hopkins’s thesis [@hopkinsThesis]. Throughout, we will consider discrete hypothesis testing problems with observations taken without loss of generality to lie in the discrete hypercube $\{-1, 1\}^N$. For example, an $n$-vertex instance of planted clique can be represented in the discrete hypercube by the above-diagonal entries of its signed adjacency matrix when $N = \binom{n}{2}$. Given a hypothesis $H_0$, the term $D$-simple statistic refers to polynomials $f : \{-1, 1\}^N \to \mathbb{R}$ of degree at most $D$ in the coordinates of $\{-1, 1\}^N$ that are calibrated and normalized so that $\E_{H_0}f(X)=0$ and $\E_{H_0}f(X)^2=1$.
For a broad range of hypothesis testing problems, it has been observed in the literature that $D$-simple statistics seem to capture the full power of the SOS hierarchy [@hopkins2017efficient; @hopkinsThesis]. This trend prompted a further conjecture that $D$-simple statistics often capture the full power of efficient algorithms, leading more concretely to the *low-degree conjecture* which is stated informally below. This conjecture has been used to gather evidence of hardness for a number of natural detection problems and has generally emerged as a convenient tool to predict statistical-computational gaps [@hopkins2017efficient; @hopkinsThesis; @kunisky2019notes; @bandeira2019computational]. Variants of this low-degree conjecture have appeared as Hypothesis 2.1.5 and Conjecture 2.2.4 in [@hopkinsThesis] and Conjectures 1.16 and 4.6 in [@kunisky2019notes].
\[c:lowDeg\] For a broad class of hypothesis testing problems $H_0$ versus $H_1$, there is a test running in time $N^{\Ot(D)}$ with Type I$+$II error tending to zero if and only if there is a successful $D$-simple statistic i.e. a polynomial $f$ of degree at most $D$ such that $\E_{H_0}f(X)=0$ and $\E_{H_0}f(X)^2=1$ yet $\E_{H_1}f(X)\to \infty$.
Detailed discussions of the low-degree conjecture and the connections between $D$-simple statistics and other types of algorithms can be found in [@kunisky2019notes] and [@holmgren2020counterexamples]. The informality in the conjecture above is the undefined “broad class” of hypothesis testing problems. In [@hopkinsThesis], several candidate technical conditions defining this class were proposed and subsequently have been further refined in [@kunisky2019notes] and [@holmgren2020counterexamples]. These conditions are discussed in more detail later in this section.
The utility of the low-degree conjecture in predicting statistical-computational gaps arises from the fact that the optimal $D$-simple statistic can be explicitly characterized. By the Neyman-Pearson lemma, the optimal test with respect to Type I$+$II error is the likelihood ratio test, which declares $H_1$ if $\lr(X) = \P_{H_1}(X)/\P_{H_0}(X) > 1$ and $H_0$ otherwise, given a sample $X$. Computing the likelihood ratio is typically intractable in problems in high-dimensional statistical inference. The low-degree likelihood ratio $\lrd$ is the orthogonal projection of the likelihood ratio onto the subspace of polynomials of degree at most $D$. When $H_0$ is a product distribution on the discrete hypercube $\{-1,1\}^N$, the following theorem asserts that $\lrd$ is the optimal test of a given degree. Here, the projection is with respect to the inner product $\la f,g\ra = \E_{H_0} f(X) g(X)$, which also defines a norm $\|f\|_2^2 = \la f,f\ra$.
The optimal $D$-simple statistic is the low-degree likelihood ratio, i.e. it holds that $$\max_{{f\in \bR[x]_{{\leq D}}}\atop \E_{H_0}f(X)=0} \frac{\E_{H_1} f(X)}{\sqrt{\E_{H_0} f(X)^2}} = \|\lrd - 1\|_2$$
Thus existence of low-degree tests for a given problem boils down to computing the norm of the low-degree likelihood ratio. When $H_0$ is the uniform distribution on $\{-1, 1\}^N$, the norm above can be re-expressed in terms of the standard Boolean Fourier basis. Let the collection of functions $\{\chi_\alpha(X) = \prod_{e\in \alpha} X_e: \alpha \subseteq [N]\}$ denote this basis, which is orthonormal over the space $\{-1,1\}^{N}$ with inner product defined above. By orthonormality, any $\chi_\alpha$ with $1\leq |\alpha|\leq D$ satisfies that $$\la \chi_\alpha, \lrd -1\ra = \la \chi_\alpha, \lr \ra = \E_{H_0} \chi_\alpha(X) \lr(X) = \E_{H_1}\chi_\alpha(X)$$ and $\E_{H_0} \lrd=\E_{H_1} 1=1$ so that $\la 1, \lrd -1\ra=0$. It then follows by Parseval’s identity that $$\label{e:energy}
\|\lrd - 1\|_2 = \left( \sum_{1\leq|\alpha|\leq D} \big(\E_{H_1}\chi_\alpha(X)\big)^2\right)^{1/2}$$ which is exactly the Fourier energy up to degree $D$.
#### Technical Conditions, $S_n$-Invariance and Counterexamples.
While Conjecture \[c:lowDeg\] is believed to accurately predict the computational barriers in nearly any natural high-dimensional statistical problem including all of the problems we consider, a precise set of criteria exactly characterizing this “broad class” has yet to be pinned down in the literature. The following was the first formalization of the low-degree conjecture, which appeared as Conjecture 2.2.4 in [@hopkinsThesis].
\[c:low-deg-formal\] Let $\Omega$ be a finite set or $\mathbb{R}$, and let $k$ be a fixed integer. Let $N = \binom{n}{k}$, let $\nu$ be a product distribution on $\Omega^N$ and let $\mu$ be another distribution on $\Omega^N$. Suppose that $\mu$ is $S_n$-invariant and $(\log n)^{1 + \Omega(1)}$-wise almost independent with respect to $\nu$. Then no polynomial time test distinguishes $T_{\delta} \mu$ and $\nu$ with probability $1 - o(1)$, for any $\delta > 0$. Formally, for all $\delta > 0$ and every polynomial-time test $t : \Omega^N \to \{0, 1\}$ there exists $\delta' > 0$ such that for every large enough $n$, $$\frac{1}{2} \bP_{x \sim \nu}\left[ t(x) = 0 \right] + \frac{1}{2} \bP_{x \sim T_{\delta} \mu}\left[ t(x) = 1 \right] \le 1 - \delta'$$
This conjecture has several key technical stipulations attempting to conservatively pin down the $\tilde{O}$ in Conjecture \[c:lowDeg\] and a set of *sufficient conditions* to be in this “broad class”. We highlight and explain these key conditions below.
1. The distribution $\mu$ is required to be $S_n$-invariant. Here, a distribution $\mu$ on $\Omega^N$ is said to be $S_n$-invariant if $\bP_\mu(x) = \bP_\mu(\pi \cdot x)$ for all $\pi \in S_n$ and $x \in \Omega^N$, where $\pi$ acts on $x$ by identifying the coordinates of $x$ with the $k$-subsets of $[n]$ and permuting these coordinates according to the permutation on $k$-subsets induced by $\pi$.
2. The $(\log n)^{1 + \Omega(1)}$-wise almost independence requirement on $\mu$ essentially enforces that polynomials of degree at most $(\log n)^{1 + \Omega(1)}$ are unable to distinguish between $\mu$ and $\nu$. More formally, a distribution $\mu$ is $D$-wise almost independent with respect to $\nu$ if every $D$-simple statistic $f$, calibrated and normalized with respect to $\nu$, satisfies that $\bE_{x \sim \mu} f(x) = O(1)$.
3. Rather than $\mu$, the distribution the conjecture asserts is hard to distinguish from $\nu$ is the result $T_\delta \mu$ of applying the noise operator $T_{\delta}$. Here, the distribution $T_{\delta} \mu$ is defined by first sampling $x \sim \mu$, then sampling $y \sim \nu$ and replacing each $x_i$ with $y_i$ independently with probability $\delta$.
These technical conditions are intended to conservatively rule out specific pathological examples. As mentioned in [@hopkinsThesis], the purpose of $T_\delta$ is to destroy algebraic structure that may lead to efficient algorithms that cannot be implemented with low-degree polynomials. For example, if $\mu$ is uniform over the solution set to a satisfiable system of equations mod $2$ and $\nu$ is the uniform distribution, it is possible to distinguish these two distributions through Gaussian elimination while the lowest $D$ for which a $D$-simple statistic does so can be as large as $D = \Omega(N)$. The noise operator $T_{\delta}$ rules out distributions with this kind of algebraic structure. The $(\log n)^{1 + \Omega(1)}$-wise requirement on the almost independence of $\mu$ and the $\tilde{O}(D)$ in Conjecture \[c:lowDeg\] are both to account for the fact that some common polynomial time algorithms for natural hypothesis testing problems can only be implemented as degree $O(\log n)$ polynomials. For example, Section 4.2.3 of [@kunisky2019notes] shows that spectral methods can typically be implemented as degree $O(\log n)$ polynomials.
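As a concrete illustration of the noise operator for a product null $\nu$ on $\{-1, 1\}^N$, here is a minimal sketch of the definition above (our own code, not a library API); since $\nu$ is a product distribution, replacing $x_i$ with $y_i$ for $y \sim \nu$ is equivalent to resampling each coordinate independently:

```python
import random

def apply_T_delta(x, delta, nu_coord=lambda: random.choice([-1, 1])):
    # Resample each coordinate from nu independently with probability delta;
    # for a product nu this matches "replace x_i by y_i w.p. delta" with y ~ nu.
    return [nu_coord() if random.random() < delta else xi for xi in x]
```

At $\delta = 0$ the operator is the identity, and at $\delta = 1$ the output is a fresh sample from $\nu$, so $T_\delta \mu$ interpolates between $\mu$ and $\nu$.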
In [@hopkinsThesis], it was mentioned that the $S_n$-invariance condition was included in Conjecture \[c:low-deg-formal\] mainly because most canonical inference problems satisfy this property and, furthermore, that there were no existing counterexamples to the conjecture without it. Recently, [@holmgren2020counterexamples] gave two constructions of hypothesis testing problems based on efficiently-correctable binary codes and Reed-Solomon codes. The first construction is for binary $\Omega$ and admits a polynomial-time test despite being $\Omega(n)$-wise almost independent. This shows that $T_{\delta}$ is insufficient to always rule out high-degree algebraic structure that can be used in efficient algorithms. However, this construction also is highly asymmetric and ruled out by the $S_n$-invariance condition in Conjecture \[c:low-deg-formal\]. The second construction is for $\Omega = \mathbb{R}$ and admits a polynomial-time test despite being both $\Omega(n)$-wise almost independent and $S_n$-invariant, thus falsifying Conjecture \[c:low-deg-formal\] as stated. However, as discussed in [@holmgren2020counterexamples], the conjecture can easily be remedied by replacing $T_{\delta}$ with another operator, such as the Ornstein-Uhlenbeck noise operator. In this work, only the case of binary $\Omega$ will be relevant to the $\pr{pc}_\rho$ conjecture.
#### The $\pr{pc}_\rho$ Conjecture, Technical Conditions and a Generalization.
The $\pr{pc}_\rho$ hypothesis testing problems and their planted dense subgraph generalizations $\pr{pds}_\rho$ that we consider in this work can be shown to satisfy a wide range of properties sufficient to rule out known counterexamples to the low-degree conjecture. In particular, these problems almost satisfy all three conservative conditions proposed in [@hopkinsThesis], instead satisfying a milder symmetry requirement than full $S_n$-invariance.
1. By definition, a general instance of $\pr{pc}_\rho$ with an arbitrary $\rho$ is only invariant to permutations $\pi \in S_n$ that $\rho$ is also invariant to. However, each of the specific hardness assumptions we use in our reductions corresponds to a $\rho$ with a large amount of symmetry and that is invariant to large subgroups of $S_n$. For example, $k\pr{-pc}$ and $k\pr{-pds}$ are invariant to permutations within each part $E_i$, each of which has size $n/k = \omega(\sqrt{n})$. This symmetry seems sufficient to break the error-correcting code approach used to construct counterexamples to the low-degree conjecture in [@holmgren2020counterexamples].
2. As will be shown subsequently in this section, the conditions in the $\pr{pc}_\rho$ conjecture require that a $\pr{pc}_\rho$ instance be $(\log n)^{1 + \Omega(1)}$-wise almost independent in order for it to be conjectured hard.
3. While $\pr{pc}_\rho$ is not of the form $T_\delta \mu$, its generalization $\pr{pds}_\rho$ at any pair of constant edge densities $0 < q < p < 1$ always is. All of our reductions also apply to input instances of $\pr{pds}_\rho$ and thus a $\pr{pds}_\rho$ variant of the $\pr{pc}_\rho$ conjecture is sufficient to deduce our computational lower bounds. That said, we do not expect the computational complexity of $\pr{pc}_\rho$ and $\pr{pds}_\rho$ to differ as long as $p$ and $q$ are constant.
As mentioned in Section \[sec:1-PC\], while we restrict our formal statement of the $\pr{pc}_\rho$ conjecture to the specific hardness assumptions we need for our reductions, we believe it should hold generally for $\rho$ with sufficient symmetry. A candidate condition is that $\rho$ is invariant to a subgroup $H \subseteq S_n$ of permutations such that, for each index $i \in [n]$, there are at least $n^{\Omega(n)}$ permutations $\pi \in H$ with $\pi(i) \neq i$. This ensures that $\rho$ has a large number of nontrivial symmetries that are not just permuting coordinates known not to lie in the clique.
We also remark that there are many examples of hypothesis testing problems where the three conditions in [@hopkinsThesis] are violated but low-degree polynomials still seem to accurately predict the performance of the best known efficient algorithms. As mentioned in [@holmgren2020counterexamples], the spiked Wishart model does not quite satisfy $S_n$-invariance, but low-degree predictions are still conjecturally accurate. Ordinary $\pr{pc}$ is not of the form $T_\delta \mu$, and yet the low-degree conjecture accurately predicts the $\pr{pc}$ conjecture, which is widely believed to be true.
#### The Degree Requirement and a Stronger $\pr{pc}_\rho$ Conjecture.
Furthermore, the degree requirement for the almost independence condition of Conjecture \[c:low-deg-formal\] is often not exactly necessary. It is discussed in Section 4.2.5 of [@kunisky2019notes] that, for sufficiently nice distributions $H_0$ and $H_1$, low-degree predictions are often still accurate when the almost independence condition is relaxed to only be $\omega(1)$-wise for any $\omega(1)$ function of $n$. This yields the following stronger variant of the $\pr{pc}_\rho$ conjecture.
\[conj:inf-strong-slpc\] For sufficiently symmetric $\rho$, there is no polynomial time algorithm solving $\pr{pc}_\rho(n, k, 1/2)$ if there is some function $w(n) = \omega_n(1)$ such that the tail bounds on $p_\rho(s)$ in Conjecture \[conj:sl-conj\] are only guaranteed to hold for all $d \le w(n)$.
We conjecture that the $\rho$ in Conjecture \[conj:hard-conj\] are symmetric enough for this conjecture to hold. A nearly identical argument to that in Theorem \[thm:verify\] can be used to show that this stronger $\pr{pc}_\rho$ conjecture implies the exact boundaries in Conjecture \[conj:hard-conj\], without the small polynomial error factors of $O(n^\epsilon)$ and $O(m^\epsilon)$.
We now make several notes on the degree requirement in the $\pr{pc}_\rho$ conjecture, as stated in Conjecture \[conj:sl-conj\]. As will be shown later in this section, the tail bounds on $p_{\rho}(s)$ for a particular $d$ directly imply the $d$-wise almost independence of $\pr{pc}_\rho$. Now note that for any $\rho$ and $k \gg \log n$, there is always a $d$-simple statistic solving $\pr{pc}_\rho$ with $d = O((\log n)^2)$. Specifically, $\mG(n, 1/2)$ has its largest clique of size less than $(2 + \epsilon) \log_2 n$ with probability $1 - o_n(1)$ and any instance of $H_1$ of $\pr{pc}_\rho$ with $k \gg \log n$ has $n^{\omega(1)}$ cliques of size $\lceil 3 \log_2 n \rceil$. Furthermore, the number of cliques of this size can be expressed as a degree $O((\log n)^2)$ polynomial in the edge indicators of a graph. Similarly, the largest clique in an $s$-uniform Erdős-Rényi hypergraph is in general of size $O((\log n)^{1/(s - 1)})$ and a simple clique-counting test distinguishing this from the planted clique hypergraph distribution can be expressed as an $O((\log n)^{s/(s - 1)})$ degree polynomial. This shows that for all $\rho$, the problem $\pr{pc}_\rho$ is not $O((\log n)^2)$-wise almost independent. Furthermore, for any $\delta > 0$, there is some $\rho$ corresponding to a hypergraph variant of $\pr{pc}$ such that $\pr{pc}_\rho$ is not $O((\log n)^{1+ \delta})$-wise almost independent. Thus the tail bounds in Conjecture \[conj:sl-conj\] never hold for $\delta \ge 1$ and, for any $\delta' > 0$, there is some $\rho$ requiring $\delta \le \delta'$ for these tail bounds to be true.
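The clique-counting statistic described above is a polynomial of degree $\binom{r}{2}$ in the $\{0, 1\}$ edge indicators, since each $r$-clique contributes the product of its $\binom{r}{2}$ indicators. A brute-force sketch (our own illustration; exponential-time as written, since the point here is only the degree of the polynomial, not its runtime):

```python
import itertools

def r_clique_count(edges, n, r):
    # Sum over r-subsets of the product of their C(r, 2) edge indicators:
    # a polynomial of degree r(r-1)/2 in the indicators, which is
    # O((log n)^2) when r = O(log n).
    return sum(
        all((u, v) in edges for u, v in itertools.combinations(c, 2))
        for c in itertools.combinations(range(n), r)
    )
```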
Finally, we remark that there are highly asymmetric examples of $\rho$ for which Conjecture \[conj:inf-strong-slpc\] is not true. Suppose that $n$ is even, let $c > 0$ be an arbitrarily large integer and let $S_1, S_2, \dots, S_{n^c} \subseteq [n/2]$ be a known family of subsets of size $\lceil 3 \log_2 n \rceil$. Now let $\rho$ be sampled by taking the union of an $S_i$ chosen uniformly at random and a size $k - \lceil 3 \log_2 n \rceil$ subset of $\{n/2 + 1, n/2 + 2, \dots, n\}$ chosen uniformly at random. The resulting $\pr{pc}_\rho$ problem can be solved in polynomial time by exhaustively searching for the subset $S_i$. However, this $\rho$ only violates the tail bounds on $p_\rho$ in Conjecture \[conj:sl-conj\] for $d = \Omega_n(\log n/\log \log n)$. If $S_1, S_2, \dots, S_{n^c}$ are sufficiently pseudorandom, then the structure of this $\rho$ only appears in the tails of $p_\rho(s)$ when $s \ge \lceil 3 \log_2 n \rceil$. In particular, the probability that $s \ge \lceil 3 \log_2 n \rceil$ under $p_\rho$ is at least the chance that two independent samples from $\rho$ choose the same $S_i$, which occurs with probability $n^{-c}$. It can be verified that the tail bound of $p_0 \cdot s^{-2d-4}$ in Conjecture \[conj:sl-conj\] only excludes this possibility when $d = \Omega_n(\log n/\log \log n)$. We remark though that this $\rho$ is highly asymmetric and any mild symmetry assumption that would effectively cause the number of $S_i$ to be super-polynomial would break this example.
#### The Low-Degree Conjecture and $\pr{pc}_\rho$.
We now will characterize the power of the optimal $D$-simple statistics for $\pr{pc}_\rho$. The following proposition establishes an explicit formula for $\lrd$ in $\pr{pc}_\rho$, which will be shown in the subsequent theorem to naturally yield the PMF decay condition in the $\pr{pc}_\rho$ conjecture.
\[prop:sl-ld\] Let $\lrd$ be the low-degree likelihood ratio for the hypothesis testing problem $\pr{pc}_\rho(n, k, 1/2)$ between $\mG(n, 1/2)$ and $\mG_\rho(n, k, 1/2)$. For any $D \ge 1$, it follows that $$\|\lrd - 1\|_2^2 = \bE_{S, S' \sim \rho^{\otimes 2}} \left[ \# \textnormal{ of nonempty edge subsets of } S \cap S' \textnormal{ of size at most } D \right]$$
In the notation above, let $N= \binom{n}{2}$ and identify $\{-1, 1\}^N$ with the space of signed adjacency matrices of $n$-vertex graphs. Let $P_S$ be the distribution on graphs in this space induced by $\pr{pc}(n, k, 1/2)$ conditioned on the clique being planted on the vertices in the subset $S$, i.e. such that $X_{ij}=1$ if $i\in S$ and $j\in S$ and otherwise $X_{ij}=\pm 1$ with probability half each. Now let $\alpha\subseteq \mathcal{E}_0$ be a subset of possible edges. The set of functions $\{\chi_\alpha(X) = \prod_{e\in \alpha} X_e: \alpha \subseteq \mathcal{E}_0\}$ comprises the standard Fourier basis on $\{-1, 1\}^{\mathcal{E}_0}$. For each fixed clique $S$, because $\E_{P_S} X_e=0$ if $e\notin {S\choose 2}$ and non-clique edges are independent, we see that $$\E_{P_S} [\chi_\alpha (X) ]= \mathbf{1} \{V(\alpha)\subseteq S\}$$ We therefore have that $$\bE_{H_1} [\chi_\alpha (X) ] = \bE_{S \sim \rho} \E_{P_S} [\chi_\alpha (X) ] = \bE_{S \sim \rho} \left[ \mathbf{1} \{V(\alpha)\subseteq S\} \right] = \bP_{\rho}\left[ V(\alpha) \subseteq S \right]$$ Now suppose that $S'$ is drawn from $\rho$ independently of $S$. It now follows that $$\begin{aligned}
\bE_{H_1} [\chi_\alpha (X) ]^2 &= \bE_{S \sim \rho} \left[ \mathbf{1} \{V(\alpha)\subseteq S\} \right]^2 \\
&= \bE_{S \sim \rho} \left[ \mathbf{1} \{V(\alpha)\subseteq S\} \right] \cdot \bE_{S' \sim \rho} \left[ \mathbf{1} \{V(\alpha)\subseteq S' \} \right] \\
&= \bE_{S, S' \sim \rho^{\otimes 2}} \left[ \mathbf{1} \{V(\alpha)\subseteq S\} \cdot \mathbf{1} \{V(\alpha)\subseteq S'\} \right] \\
&= \bE_{S, S' \sim \rho^{\otimes 2}} \left[ \mathbf{1} \left\{V(\alpha)\subseteq S \cap S' \right\} \right]\end{aligned}$$ From Equation (\[e:energy\]), we therefore have that $$\|\lrd - 1\|_2^2 = \sum_{1\leq|\alpha|\leq D} \E_{H_1}\left[\chi_\alpha(X)\right]^2 = \bE_{S, S' \sim \rho^{\otimes 2}} \left[ \sum_{1\leq|\alpha|\leq D} \mathbf{1} \left\{V(\alpha)\subseteq S \cap S' \right\} \right]$$ Now observe that the sum $$\sum_{1\leq|\alpha|\leq D} \mathbf{1} \left\{V(\alpha)\subseteq S \cap S' \right\}$$ is exactly the number of nonempty edge subsets of $S \cap S'$ of size at most $D$, which completes the proof.
This proposition now allows us to show the main result of this section, which is that the condition in the $\pr{pc}_\rho$ conjecture is enough to show the failure of low-degree polynomials for $\pr{pc}_\rho$. Combining the next theorem with Conjecture \[c:lowDeg\] would suggest that whenever the PMF decay condition of the $\pr{pc}_\rho$ conjecture holds, there is no polynomial time algorithm solving $\pr{pc}_\rho(n, k, 1/2)$.
\[thm:sl-ld\] Suppose that $\rho$ satisfies that for any parameter $d = O_n(\log n)$, there is some $p_0 = o_n(1)$ such that $p_{\rho}(s)$ satisfies the tail bounds $$p_{\rho}(s) \le p_0 \cdot \left\{ \begin{array}{ll} 2^{-s^2} &\textnormal{if } 1 \le s^2 < d \\ s^{-2d-4} &\textnormal{if } s^2 \ge d \end{array} \right.$$ Let $\lrd$ be the low-degree likelihood ratio for the hypothesis testing problem $\pr{pc}_\rho(n, k, 1/2)$. Then it also follows that for any parameter $D = O_n(\log n)$, we have $$\|\lrd - 1\|_2 = o_n(1)$$
First observe that the number of nonempty edge subsets of $S \cap S'$ of size at most $D$ can be expressed explicitly as $$f_D(s) = \sum_{\ell = 1}^D \binom{s(s - 1)/2}{\ell}$$ if $s = |S \cap S'|$. Furthermore, we can crudely upper bound $f_D$ in two separate ways. Note that the number of nonempty edge subsets of $S \cap S'$ is exactly $2^{\binom{s}{2}} - 1$ if $s = |S \cap S'|$. Therefore we have that $f_D(s) \le 2^{\binom{s}{2}}$. Furthermore using the upper bound that $\binom{x}{\ell} \le x^\ell$, we have that if $s \ge 3$ then $$f_D(s) = \sum_{\ell = 1}^D \binom{s(s - 1)/2}{\ell} \le \sum_{\ell = 1}^D \left( \frac{s(s - 1)}{2} \right)^\ell \le \frac{\left( \frac{s(s - 1)}{2} \right)^{D + 1} - 1}{\left( \frac{s(s - 1)}{2} \right) - 1} \le s^{2(D + 1)}$$ Combining these two crude upper bounds, we have that $f_D(s) \le \min\left\{ 2^{\binom{s}{2}}, s^{2(D + 1)} \right\}$. Also note that $f_D(0) = f_D(1) = 0$. Combining this with the given bounds on $p_{\rho}(s)$, applied with $d = D$, we have that $$\begin{aligned}
\|\lrd - 1\|_2^2 &= \bE_{S, S' \sim \rho^{\otimes 2}} \left[ f_D(|S \cap S'|) \right] \\
&= \sum_{s = 2}^k p_{\rho}(s) \cdot f_D(s) \\
&\le p_0 \cdot \sum_{1 \le s^2 < D} 2^{-s^2} \cdot f_D(s) + p_0 \cdot \sum_{D \le s^2 \le k^2} s^{-2D-4} \cdot f_D(s) \\
&\le p_0 \cdot \sum_{1 \le s^2 < D} 2^{-s^2} \cdot 2^{\binom{s}{2}} + p_0 \cdot \sum_{D \le s^2 \le k^2} s^{-2D-4} \cdot s^{2(D + 1)} \\
&= p_0 \cdot \sum_{s = 1}^\infty 2^{-\binom{s+1}{2}} + p_0 \cdot \sum_{s = 1}^\infty s^{-2} = O_n(p_0)\end{aligned}$$ which completes the proof of the theorem.
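As a numeric sanity check of this proof (our own illustrative Python, not part of the argument), the snippet below verifies the combined crude bound on $f_D$, the exponent identity $2^{-s^2} \cdot 2^{\binom{s}{2}} = 2^{-\binom{s+1}{2}}$ used in the first sum, and the convergence of the two final series:

```python
from math import comb

def f_D(s: int, D: int) -> int:
    """Number of nonempty edge subsets of a clique on s vertices of size at most D."""
    return sum(comb(s * (s - 1) // 2, ell) for ell in range(1, D + 1))

# the two crude upper bounds combined: f_D(s) <= min{2^C(s,2), s^(2(D+1))}
for D in range(1, 8):
    for s in range(2, 12):
        assert f_D(s, D) <= min(2 ** comb(s, 2), s ** (2 * (D + 1)))
assert f_D(0, 5) == 0 and f_D(1, 5) == 0

# exponent identity in the first series: -s^2 + C(s,2) = -C(s+1,2)
for s in range(1, 30):
    assert -s * s + comb(s, 2) == -comb(s + 1, 2)

# both series in the final display converge, so ||LR_D - 1||_2^2 = O(p_0)
geom = sum(2.0 ** (-comb(s + 1, 2)) for s in range(1, 60))
basel = sum(s ** (-2.0) for s in range(1, 10 ** 5))
assert geom < 1.0 and basel < 2.0
```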
Statistical Query Algorithms and the $\pr{pc}_\rho$ Conjecture {#subsec:2-sq}
--------------------------------------------------------------
In this section, we verify that the lower bounds shown by [@feldman2013statistical] for <span style="font-variant:small-caps;">pc</span> in a generalization of the statistical query model hold essentially unchanged for SQ variants of $k\pr{-pc}, k\pr{-bpc}$ and $\pr{bpc}$. At the end of this section, we remark on why the statistical query model seems ill-suited to characterizing the computational barriers in tensor and hypergraph problems such as $k\pr{-hpc}$. Since it was shown in Section \[subsec:2-sl-verifying\] that there are specific $\rho$ in $\pr{pc}_\rho$ corresponding to $k\pr{-hpc}$, it similarly follows that the SQ model seems ill-suited to characterizing the barriers in $\pr{pc}_\rho$ for general $\rho$. Throughout this section, we focus on $k\pr{-pc}$, as lower bounds in the statistical query model for $k\pr{-bpc}$ and $\pr{bpc}$ will follow from nearly identical arguments.
#### Distributional Problems and SQ Dimension.
The Statistical Algorithm framework of [@feldman2013statistical] applies to distributional problems, where the input is a sequence of i.i.d. observations from a distribution $D$. In order to obtain lower bounds in the statistical query model supporting Conjecture \[conj:hard-conj\], we need to define a distributional analogue of $k\pr{-pc}$. As in [@feldman2013statistical], a natural distributional version can be obtained by considering a bipartite version of $k\pr{-pc}$, which we define as follows.
Let $k$ divide $n$ and fix a known partition $E$ of $[n]$ into $k$ parts $E_1, E_2, \dots, E_k$ with $|E_i|=n/k$. Let $S\subseteq [n]$ be a subset of indices with $|S\cap E_i|=1$ for each $i\in [k]$. The distribution $D_S$ over $\{0,1\}^n$ produces with probability $1-k/n$ a uniform point $X\sim \mathrm{Unif}(\{0,1\}^n)$ and with probability $k/n$ a point $X$ with $X_i=1$ for all $i\in S$ and $X_{S^c}\sim \mathrm{Unif}(\{0,1\})^{n-k}$. The distributional bipartite $k$-<span style="font-variant:small-caps;">pc</span> problem is to find the subset $S$ given some number of independent samples $m$ from $D_S$.
In other words, the distributional $k\pr{-pc}$ problem is $k\pr{-bpc}$ with $n$ left and $n$ right vertices, a randomly-sized right part of the planted biclique and no $k$-partite structure on the right vertex set. We remark that many of our reductions, such as our reductions to $\pr{rsme}$, $\pr{neg-spca}$, $\pr{mslr}$ and $\pr{rslsr}$, only need the $k$-partite structure along one vertex set of $k\pr{-pc}$ or $k\pr{-bpc}$. This distributional formulation of $k\pr{-pc}$ is thus a valid starting point for these reductions.
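For concreteness, a minimal sampler for $D_S$ can be sketched as follows (our own illustrative code; the parameter choices are hypothetical, and the planted event is implemented by overwriting a uniform draw, which is equivalent to the definition above):

```python
import random

def sample_D_S(n: int, S: set, rng: random.Random) -> list:
    """One draw from D_S: with prob. 1 - k/n a uniform point of {0,1}^n,
    with prob. k/n a point whose coordinates in S are planted to 1
    (the remaining n - k coordinates stay uniform)."""
    k = len(S)
    x = [rng.randint(0, 1) for _ in range(n)]
    if rng.random() < k / n:
        for i in S:
            x[i] = 1
    return x

rng = random.Random(0)
n, k = 12, 3
S = {i * (n // k) for i in range(k)}  # one index per part E_i of the partition
samples = [sample_D_S(n, S, rng) for _ in range(4000)]
# a planted coordinate equals 1 with probability 1/2 + k/(2n) = 0.625 here
freq = sum(x[min(S)] for x in samples) / len(samples)
assert 0.55 < freq < 0.7
```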
We now formally introduce the Statistical Algorithm framework of [@feldman2013statistical] and SQ dimension. Let $\cX=\{0,1\}^n$ denote the space of configurations and let $\cD$ be a set of distributions over $\cX$. Let $\cF$ be a set of solutions and $\cZ:\cD\to 2^\cF$ be a map taking each distribution $D\in \cD$ to a subset of solutions $\cZ(D)\subseteq \cF$ that are defined to be valid solutions for $D$. In our setting, $\cF$ corresponds to clique positions $S$ respecting the partition $E$. Furthermore, since each clique position is in one-to-one correspondence with distributions, there is a single clique $\cZ(D)$ corresponding to each distribution $D$. For $m>0$, the *distributional search problem* $\cZ$ over $\cD$ and $\cF$ using $m$ samples is to find a valid solution $f\in \cZ(D)$ given access to $m$ random samples from an unknown $D\in \cD$.
Classes of algorithms in the framework of [@feldman2013statistical] are defined in terms of access to oracles. The most basic oracle is an unbiased oracle, which evaluates a simple function on a single sample as follows.
Let $D$ be the true unknown distribution. A query to the oracle consists of any function $h:\cX\to \{0,1\}$, and the oracle then takes an independent random sample $X\sim D$ and returns $h(X)$.
Algorithms with access to an unbiased oracle are referred to as *unbiased statistical algorithms*. Since these algorithms access the sampled data only through the oracle, it is possible to prove *unconditional* lower bounds using information-theoretic methods. Another oracle is the $VSTAT$ oracle, defined below, which is similar but is also allowed to make an adversarial perturbation of the function evaluation. It is shown in [@feldman2013statistical] via a simulation argument that the two oracles are approximately equivalent.
Let $D$ be the true distribution and $t>0$ a sample size parameter. A query to the $VSTAT(t)$ oracle again consists of any function $h:\cX\to [0,1]$, and the oracle returns an arbitrary value $v\in[\E_{ D}h(X)-\tau, \E_{ D}h(X)+\tau]$, where $\tau = \max\{1/t,\sqrt{\E_{ D}h(X)(1-\E_{ D}h(X))/t}\}$.
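The oracle can be sketched as follows (an illustrative simulation of the adversarial tolerance, written by us; the function and variable names are our own):

```python
import random
from math import sqrt

def vstat_tolerance(p: float, t: float) -> float:
    """The width tau = max{1/t, sqrt(p(1-p)/t)} around E_D h(X)."""
    return max(1.0 / t, sqrt(p * (1.0 - p) / t))

def vstat_query(p_true: float, t: float, adversary: random.Random) -> float:
    """An adversarial VSTAT(t) oracle: any value within tau of the truth is legal."""
    tau = vstat_tolerance(p_true, t)
    return p_true + adversary.uniform(-tau, tau)

adv = random.Random(1)
p, t = 0.3, 10_000.0
v = vstat_query(p, t, adv)
assert abs(v - p) <= vstat_tolerance(p, t)
# for p bounded away from {0, 1}, tau is of order sqrt(p(1-p)/t) -- the natural
# fluctuation of an empirical mean of t samples, which is why t acts as a sample size
assert abs(vstat_tolerance(p, t) - sqrt(0.21 / 10_000.0)) < 1e-12
```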
We borrow some definitions from [@feldman2013statistical]. Given a distribution $D$, we define the inner product $\la f,g\ra_D = \E_{X\sim D}f(X) g(X)$ and the corresponding norm $\|f\|_D = \sqrt{\la f,f\ra_D}$. Given two distributions $D_1$ and $D_2$ both absolutely continuous with respect to $D$, their pairwise correlation is defined to be $$\chi_D(D_1,D_2) = \Big|\Big\la \frac{D_1}D-1,\frac{D_2}D-1\Big\ra_D \Big|=|\la \Dh_1, \Dh_2\ra_D|\,.$$ where $ \Dh_1 = \frac{D_1}D-1$. The *average correlation* $\rho(\cD, D)$ of a set of distributions $\cD$ relative to distribution $D$ is then given by $$\rho(\cD, D) = \frac1{|\cD|^2} \sum_{D_1,D_2\in \cD}\chi_D(D_1,D_2) = \frac1{|\cD|^2} \sum_{D_1,D_2\in \cD}\Big|\Big\la \frac{D_1}D-1,\frac{D_2}D-1\Big\ra_D \Big|\,.$$ Given these definitions, we can now introduce the key quantity from [@feldman2013statistical], statistical dimension, which is defined in terms of average correlation.
\[d:statDimProblem\] Fix $\gamma>0,\eta>0$, and search problem $\cZ$ over set of solutions $\cF$ and class of distributions $\cD$ over $\cX$. We consider pairs $(D,\cD_D)$ consisting of a “reference distribution" $D$ over $\cX$ and a finite set of distributions $\cD_D\subseteq \cD$ with the following property: for any solution $f\in \cF$, the set $\cD_f = \cD_D\setminus \cZ\inv (f)$ has size at least $(1-\eta)\cdot |\cD_D|$. Let $\ell(D,\cD_D)$ be the largest integer $\ell$ so that for any subset $\cD'\subseteq \cD_f$ with $|\cD'|\geq |\cD_f|/\ell$, the average correlation is $|\rho(\cD',D)|<\gamma$ (if there is no such $\ell$ one can take $\ell=0$). The *statistical dimension* with average correlation $\gamma$ and solution set bound $\eta$ is defined to be the largest $\ell(D,\cD_D)$ for valid pairs $(D,\cD_D)$ as described, and is denoted by $\mathrm{SDA}(\cZ,\gamma,\eta)$.
In [@feldman2013statistical], it is shown that statistical dimension immediately yields a lower bound on the number of queries to an unbiased oracle or a $VSTAT$ oracle needed to solve a given distributional search problem.
\[t:sampleBound\] Let $\cX$ be a domain and $\cZ$ a search problem over a set of solutions $\cF$ and a class of distributions $\cD$ over $\cX$. For $\gamma>0$ and $\eta\in (0,1)$, let $\ell = \mathrm{SDA}(\cZ,\gamma,\eta)$. Any (possibly randomized) statistical query algorithm that solves $\cZ$ with probability $\delta>\eta$ requires at least $\ell$ calls to the $VSTAT(1/(3\gamma))$ oracle to solve $\cZ$.
Moreover, any statistical query algorithm requires at least $m$ calls to the Unbiased Oracle for $m = \min\left\{ \frac{\ell(\delta- \eta)}{2(1-\eta)},\frac{(\delta-\eta)^2}{12\gamma}\right\}$. In particular, if $\eta \leq 1/6$, then any algorithm with success probability at least $2/3$ requires at least $\min\{ \ell/4,1/(48\gamma)\}$ samples from the Unbiased Oracle.
We remark that the number of queries to an oracle is a lower bound on the runtime of the statistical algorithm in question. Furthermore, the number of “samples” $m$ corresponding to a $VSTAT(t)$ oracle is $t$, as this is the number needed to approximately obtain the confidence interval of width $2\tau$ in the definition of the $VSTAT$ oracle above.
#### SQ Lower Bounds for Distributional $k\pr{-pc}$.
We now will use the theorem above to deduce SQ lower bounds for distributional $k\pr{-pc}$. Let $\cS$ be the set of all $k$-subsets of $[n]$ respecting the partition $E$ i.e. $\cS = \{S:|S|=k\text{ and } |S\cap E_i|=1\text{ for }i\in [k]\}$. Note that $|\cS| = (n/k)^k$. We henceforth use $D$ to denote the uniform distribution on $\{0,1\}^n$. The following lemma is as in [@feldman2013statistical], except that we further restrict $S$ and $T$ to be in $\cS$ rather than arbitrary size $k$ subsets of $[n]$, which does not change the bound.
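The count $|\cS| = (n/k)^k$ is easy to confirm by enumeration (our own illustrative code with toy parameters):

```python
from itertools import product

n, k = 8, 4  # k divides n; parts E_i of size n/k
parts = [range(i * (n // k), (i + 1) * (n // k)) for i in range(k)]
# S ranges over all ways to choose exactly one index from each part
S_all = [frozenset(choice) for choice in product(*parts)]
assert len(S_all) == (n // k) ** k  # |S| = (n/k)^k, i.e. 2^4 = 16 here
assert all(len(S) == k for S in S_all)
```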
\[l:avgCorr\] For $S,T\in \cS$, $\chi_D(D_S,D_T) = |\la \Dh_S, \Dh_T\ra_D|\leq 2^{|S\cap T|} k^2 / n^2$.
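Lemma \[l:avgCorr\] can be checked exactly for small $n$ by brute-force enumeration. The closed form $(k/n)^2(2^{|S\cap T|}-1)$ below is our own calculation, consistent with the lemma's bound; it depends only on $|S \cap T|$, so restricting to partition-respecting $S, T \in \cS$ does not change it:

```python
from itertools import product

def density(x, S, n, k):
    """Pointwise density of D_S on {0,1}^n."""
    planted = 2.0 ** (-(n - k)) if all(x[i] == 1 for i in S) else 0.0
    return (1 - k / n) * 2.0 ** (-n) + (k / n) * planted

def chi(S, T, n, k):
    """chi_D(D_S, D_T) for D = Unif({0,1}^n), by exact enumeration."""
    D = 2.0 ** (-n)
    return abs(sum(D * (density(x, S, n, k) / D - 1) * (density(x, T, n, k) / D - 1)
                   for x in product((0, 1), repeat=n)))

n, k = 8, 3
S, T = frozenset({0, 2, 5}), frozenset({0, 2, 7})  # |S intersect T| = 2
val = chi(S, T, n, k)
assert abs(val - (k / n) ** 2 * (2 ** len(S & T) - 1)) < 1e-12  # closed form
assert val <= 2 ** len(S & T) * k ** 2 / n ** 2  # the bound of the lemma
```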
The following lemma is crucial to deriving the SQ dimension of distributional $k\pr{-pc}$ and is similar to Lemma 5.2 in [@feldman2013statistical]. Its proof is deferred to Appendix \[sec:appendix-4\].
\[l:avgCorrLargeSets\] Let $\delta \geq 1/\log n$ and $k\leq n^{1/2 - \delta}$. For any integer $\ell \leq k$, $S\in \cS$, and set $A\subseteq \cS$ with $|A|\geq 2|\cS|/ n^{2\ell \delta}$, $$\frac1{|A|} \sum_{T\in A} \big| \la \Dh_S, \Dh_T\ra_D \big| \leq 2^{\ell + 3}\frac{k^2}{n^2}\,.$$
This lemma now implies the following SQ dimension lower bound for distributional $k\pr{-pc}$.
\[t:SQdim\] For $\delta\geq 1/\log n$ and $k\leq n^{1/2-\delta}$, let $\cZ$ denote the distributional bipartite $k$-<span style="font-variant:small-caps;">pc</span> problem. If $\ell\leq k$, then $SDA(\cZ, 2^{\ell+3} k^2/n^2,\big(\frac nk\big) ^{-k} ) \geq n^{2\ell \delta}/8$.
For each clique position $S$ let $\cD_S = \cD\setminus\{D_S\}$. Then $|\cD_S| = \big(\frac nk\big) ^k -1=\big(1-\big(\frac nk\big) ^{-k}\big)|\cD|$. Now for any $\cD'$ with $|\cD'|\geq 2|\cS|/ n^{2\ell \delta}$ we can apply Lemma \[l:avgCorrLargeSets\] to conclude that $\rho(\cD',D)\leq 2^{\ell + 3}k^2/n^2$. By Definition \[d:statDimProblem\] of statistical dimension this implies the bound stated in the theorem.
Applying Theorem \[t:sampleBound\] to this statistical dimension lower bound yields the following hardness for statistical query algorithms.
For any constant $\delta>0$ and $k\leq n^{1/2-\delta}$, any SQ algorithm that solves the distributional bipartite $k$-<span style="font-variant:small-caps;">pc</span> problem requires $\Omega(n^2/(k^2\log n))=\tilde \Omega(n^{1+2\delta})$ queries to the Unbiased Oracle.
This is to be interpreted as impossible, as only $n$ right vertices are available in the actual bipartite graph. Because all the quantities in Theorem \[t:SQdim\] are the same as in [@feldman2013statistical] up to constants, the same logic as used there allows us to deduce a statement regarding the hypothesis testing version, stated there as Theorems 2.9 and 2.10.
For any constant $\delta>0$, suppose $k\leq n^{1/2-\delta}$. Let $D=\mathrm{Unif}(\{0,1\}^n)$ and let $\cD$ be the set of all planted bipartite distributions (one for each clique position). Any SQ algorithm that solves the hypothesis testing problem between $\cD$ and $D$ with probability better than $2/3$ requires $\Omega(n^2/k^2)$ queries to the Unbiased Oracle.
A similar statement holds for VSTAT. There is a $t = n^{\Omega(\log n)}$ such that any randomized SQ algorithm that solves the hypothesis testing problem between $\cD$ and $D$ with probability better than $2/3$ requires at least $t$ queries to $VSTAT(n^{2-\delta}/k^2)$.
We conclude this section by outlining how to extend these lower bounds to distributional versions of $k\pr{-bpc}$ and $\pr{bpc}$ and why the statistical query model is not suitable to deduce hardness of problems that are implicitly tensor or hypergraph problems such as $k\pr{-hpc}$.
#### Extending these SQ Lower Bounds.
Extending to the bipartite case is straightforward and follows by changing the probability of including each right vertex from $k/n$ to $k_m/m$ where $k_m = O(m^{1/2 - \delta})$. This causes the upper bound in Lemma \[l:avgCorr\] to become $\chi_D(D_S,D_T) = |\la \Dh_S, \Dh_T\ra_D|\leq 2^{|S\cap T|} k_m^2 / m^2$. Similarly, the upper bound in Lemma \[l:avgCorrLargeSets\] becomes $2^{\ell + 3} k_m^2/m^2$, the relevant statistical dimension becomes $SDA(\cZ, 2^{\ell+3} k_m^2/m^2,\big(\frac nk\big) ^{-k} ) \geq n^{2\ell \delta}/8$ and the query lower bound in the final corollary becomes $\Omega(m^2/(k_m^2 \log n)) = \tilde{\Omega}(m^{1 + 2\delta})$, which yields the desired lower bound for $k\pr{-bpds}$. The lower bound for $\pr{bpds}$ follows by the same extension to the ordinary $\pr{pc}$ lower bound in [@feldman2013statistical].
#### Hypergraph and SQ Lower Bounds.
A key component of formulating SQ lower bounds is devising a distributional version of the problem with analogous limits in the SQ model. While there was a natural bipartite extension for $\pr{pc}$, for hypergraph <span style="font-variant:small-caps;">pc</span>, no such extension seems to exist. Treating slices as individual samples yields a problem that statistical query algorithms can solve, albeit with query functions that are not computable in polynomial time. Consider the function that, given a slice, searches for a clique of size $k$ in the induced $(s - 1)$-uniform hypergraph on the neighbors of the vertex corresponding to the slice, outputting $1$ if such a clique is found. Without a planted clique, the probability that a slice contains such a clique is exponentially small, while it is $k/n$ if there is a planted clique. An alternative is to consider individual entries as samples, but this discards the hypergraph structure of the problem entirely.
Robustness, Negative Sparse PCA and Supervised Problems {#sec:3-robust-and-supervised}
=======================================================
In this section, we apply reductions in Part \[part:reductions\] to deduce computational lower bounds for robust sparse mean estimation, negative sparse PCA, mixtures of SLRs and robust SLR that follow from specific instantiations of the $\pr{pc}_\rho$ conjecture. Specifically, we apply the reduction $k\pr{-bpds-to-isgm}$ to deduce a lower bound for $\pr{rsme}$, the reduction $\pr{bpds-to-neg-spca}$ to deduce a lower bound for $\pr{neg-spca}$ and the reduction $k\pr{-bpds-to-mslr}$ to deduce lower bounds for $\pr{mslr}$, $\pr{uslr}$ and $\pr{rslr}$. This section is primarily devoted to summarizing the implications of these reductions and making explicit how their input parameters need to be set to deduce our lower bounds. The implications of these lower bounds and the relation between them and algorithms was previously discussed in Section \[sec:1-problems\]. In cases where the discussion in Section \[sec:1-problems\] was not exhaustive, such as the details of starting with different hardness assumptions, the number theoretic condition $\pr{(t)}$ or the adversary implied by our reductions for $\pr{rslr}$, we include omitted details in this section.
All lower bounds that will be shown in this section are *computational lower bounds* in the sense introduced in the beginning of Section \[sec:1-problems\]. To deduce our computational lower bounds from reductions, it suffices to verify the three criteria in Condition \[cond:lb\]. We remark that this section is technical due to the number-theoretic constraints imposed by the prime number $r$ in our reductions. However, these technical details are tangential to the primary focus of the paper, which is reduction techniques.
Robust Sparse Mean Estimation {#subsec:3-rsme}
-----------------------------
We first observe that the instances of $\pr{isgm}$ output by the reduction $k\pr{-bpds-to-isgm}$ are instances of $\pr{rsme}$ in Huber’s contamination model. Let $r$ be a prime number and $\epsilon \ge 1/r$. It then follows that a sample from $\pr{isgm}_D(n, k, d, \mu, 1/r)$ is of the form $$\pr{mix}_{\epsilon}\left( \mN(\mu \cdot \mathbf{1}_S, I_d), \mD_O \right)^{\otimes n} \quad \text{where} \quad \mD_O = \pr{mix}_{\epsilon^{-1} r^{-1}} \left( \mN(\mu \cdot \mathbf{1}_S, I_d), \mN(\mu' \cdot \mathbf{1}_S, I_d) \right)$$ for some possibly random $S$ with $|S| = k$ and where $(1 - r^{-1}) \mu + r^{-1} \cdot \mu' = 0$. Note that this is a distribution in the composite hypothesis $H_1$ of $\pr{rsme}(n, k, d, \tau, \epsilon)$ in Huber’s contamination model with outlier distribution $\mD_O$ and where $\tau = \| \mu \cdot \mathbf{1}_S \|_2 = \mu \sqrt{k}$. This observation and the discussion in Section \[subsec:2-tvreductions\] yields that it suffices to exhibit a reduction to $\pr{isgm}$ to show the lower bound for $\pr{rsme}$ in Theorem \[thm:rsme-lb\].
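The bookkeeping of mixture weights in this identity is elementary but easy to get wrong, so we verify it numerically (our own check, with hypothetical values of $r$, $\epsilon$ and $\mu$; here $\mathrm{mix}_\delta(A, B)$ places weight $1-\delta$ on $A$ and $\delta$ on $B$):

```python
r = 7            # the prime from the reduction
eps = 1.0 / 5    # any eps >= 1/r
inner = 1.0 / (eps * r)   # weight of N(mu' 1_S, I_d) inside the outlier D_O

# total weights of the two Gaussian components in mix_eps(N(mu 1_S), D_O)
weight_clean = (1 - eps) + eps * (1 - inner)
weight_prime = eps * inner
assert abs(weight_clean - (1 - 1 / r)) < 1e-12   # matches (1 - r^{-1})
assert abs(weight_prime - 1 / r) < 1e-12         # matches r^{-1}

# the centering condition (1 - r^{-1}) mu + r^{-1} mu' = 0 forces mu' = -(r - 1) mu
mu = 0.8
mu_prime = -(r - 1) * mu
assert abs((1 - 1 / r) * mu + mu_prime / r) < 1e-12
```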
We now discuss the condition $\pr{(t)}$ and the number-theoretic constraint arising from applying Theorem \[thm:isgmreduction\] to prove Theorem \[thm:rsme-lb\]. As mentioned in Section \[subsec:1-problems-rsme\], while this condition does not restrict our computational lower bound for $\pr{rsme}$ in the main regime of interest where $\epsilon^{-1} = n^{o(1)}$, it also can be removed using the design matrices $R_{n, \epsilon}$ in place of $K_{r, t}$. Despite this, we introduce the condition $\pr{(t)}$ in this section as it will be a necessary condition in subsequent lower bounds in Part \[part:lower-bounds\].
As discussed in Section \[sec:2-supervised\], the prime power $r^t$ in $k\pr{-bpds-to-isgm}$ is intended to be a fairly close approximation to each of $k_n, \sqrt{n}$ and $\sqrt{N}$. We will now see that in order to show tight computational lower bounds for $\pr{rsme}$, this approximation needs to be very close to asymptotically exact, leading to the technical condition $\pr{(t)}$. First note that the level of signal $\mu$ produced by the reduction $k\pr{-bpds-to-isgm}$ is $$\mu \le \frac{\delta}{2 \sqrt{6\log (k_nmr^t) + 2\log (p - q)^{-1}}} \cdot \frac{1}{\sqrt{r^t(r - 1)(1 + (r - 1)^{-1})}} = \tilde{\Theta}\left( r^{-(t + 1)/2} \right)$$ where $\delta = \Theta(1)$ and the estimate above holds whenever $p$ and $q$ are constants. Therefore the corresponding $\tau$ is given by $\tau = \mu \sqrt{k} = \tilde{O}(k^{1/2} r^{-(t + 1)/2})$. Furthermore, in Theorem \[thm:isgmreduction\], the output number of samples $N$ is constrained to satisfy that $N = o(k_n r^t)$ and $n = O(k_n r^t)$. Combining this with the fact that in order to be starting with a hard $k\pr{-bpds}$ instance, we need $k_n = o(\sqrt{n})$ to hold, it is straightforward to see that these constraints together require that $N = o(r^{2t})$. If this is close to tight with $N = \tilde{\Theta}(r^{2t})$, the computational lower bound condition on $\tau$ becomes $$\tau = \tilde{O}\left(k^{1/2} r^{-(t + 1)/2}\right) = \tilde{\Theta}\left(k^{1/2} \epsilon^{1/2} N^{-1/4} \right)$$ where we also use the fact that $\epsilon = \Theta(1/r)$. Note that this corresponds exactly to the desired computational lower bound of $N = \tilde{o}(k^2 \epsilon^2/\tau^4)$. Furthermore, if instead $N = \tilde{\Theta}(a^{-1}r^{2t})$ for some $a = \omega(1)$, then the lower bound we show degrades to $N = \tilde{o}(k^2 \epsilon^2/a\tau^4)$, and is suboptimal by a factor $a = \omega(1)$. 
Thus, ideally, we would like the pair of parameters $(N, r)$ to be such that there are infinitely many $N$ with $N = \tilde{\Theta}(r^{2t})$ for some positive integer $t \in \mathbb{N}$. This leads exactly to the condition $\pr{(t)}$ below.
Suppose that $(N, r)$ is a pair of parameters with $N \in \mathbb{N}$ and $r = r(N)$ is non-decreasing. The pair $(N, r)$ satisfies $\pr{(t)}$ if either $r = N^{o(1)}$ as $N \to \infty$ or $r = \tilde{\Theta}(N^{1/t})$ where $t \in \mathbb{N}$ is a constant even integer.
The key property arising from condition $\pr{(t)}$ is captured in the following lemma.
\[lem:propT\] Suppose that $(N, r)$ satisfies $\pr{(t)}$ and let $r' = r'(N)$ be any non-decreasing positive integer parameter satisfying that $r' = \tilde{\Theta}(r)$. Then there are infinitely many values of $N$ with the following property: there exists $s \in \mathbb{N}$ such that $\sqrt{N} = \tilde{\Theta}\left( (r')^s \right)$.
If $r = \tilde{\Theta}(N^{1/t})$ where $t \in \mathbb{N}$ is a constant even integer, then this property is satisfied trivially by taking $s = t/2$. Now suppose that $r = N^{o(1)}$ and note that this also implies that $r' = N^{o(1)}$. Now consider the function $$f(N) = \frac{\log N}{2\log r'(N)}$$ Since $r' = N^{o(1)}$, it follows that $f(N) \to \infty$ as $N \to \infty$. Suppose that $N$ is sufficiently large so that $f(N) > 1$. Note that, for each $N$, either $r'(N + 1) \ge r'(N) + 1$ or $r'(N + 1) = r'(N)$. If $r'(N + 1) = r'(N)$, then $f(N + 1) > f(N)$. If $r'(N + 1) \ge r'(N) + 1$, then $$\frac{f(N+1)}{f(N)} \le \frac{g(N)}{g(r'(N))} \quad \text{where} \quad g(x) = \frac{\log (x + 1)}{\log x}$$ Note that $g(x)$ is a decreasing function of $x$ for $x \ge 2$. Since $f(N) > 1$, it follows that $r'(N) < N$ and hence the above inequality implies that $f(N + 1) < f(N)$. Summarizing these observations, every time $f(N)$ increases it must follow that $r'(N + 1) = r'(N)$. Fix a sufficiently large positive integer $s$ and consider the first $N$ for which $f(N) \ge s$. It follows by our observation that $r'(N) = r'(N - 1)$ and furthermore that $f(N - 1) < s$. This implies that $N - 1 < r'(N)^{2s}$ and $N \ge r'(N)^{2s}$. Since $r'(N)$ is a positive integer, it then must follow that $N = r'(N)^{2s}$. Since such an $N$ exists for every sufficiently large $s$, this completes the proof of the lemma.
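The pivotal observation in this proof — that $f$ can only increase while $r'$ stays constant, and strictly decreases whenever $r'$ jumps — can be observed numerically for a concrete nondecreasing choice of $r'$ (a hypothetical choice of ours, roughly $N^{1/4}$):

```python
from math import log

def r_prime(N: int) -> int:
    """A nondecreasing integer parameter, here the integer fourth root of N."""
    r = 2
    while (r + 1) ** 4 <= N:
        r += 1
    return r

def f(N: int) -> float:
    return log(N) / (2 * log(r_prime(N)))

jumps = 0
for N in range(20, 5000):
    if f(N) <= 1 or f(N + 1) <= 1:
        continue
    if f(N + 1) > f(N):                      # f increased ...
        assert r_prime(N + 1) == r_prime(N)  # ... so r' stayed constant
    if r_prime(N + 1) > r_prime(N):          # r' jumped ...
        assert f(N + 1) < f(N)               # ... so f strictly decreased
        jumps += 1
assert jumps > 0  # e.g. at N + 1 = 81 = 3^4, where f drops back to exactly s = 2
```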
This condition $\pr{(t)}$ will arise in a number of other problems that we map to, including robust SLR and dense stochastic block models, for a nearly identical reason. We now formally prove Theorem \[thm:rsme-lb\]. All remaining proofs in this section will be of a similar flavor, and where details are similar, we only sketch them to avoid redundancy.
[thm:rsme-lb]{} \[Lower Bounds for $\pr{rsme}$\] If $k, d$ and $n$ are polynomial in each other, $k = o(\sqrt{d})$ and $\epsilon < 1/2$ is such that $(n, \epsilon^{-1})$ satisfies $\pr{(t)}$, then the $k\pr{-bpc}$ conjecture or $k\pr{-bpds}$ conjecture for constant $0 < q < p \le 1$ both imply that there is a computational lower bound for $\pr{rsme}(n, k, d, \tau, \epsilon)$ at all sample complexities $n = \tilde{o}(k^2 \epsilon^2/\tau^4)$.
To prove this theorem, we will show that Theorem \[thm:isgmreduction\] implies that $k\pr{-bpds-to-isgm}$ fills out all of the possible growth rates specified by the computational lower bound $n = \tilde{o}(k^2 \epsilon^2/\tau^4)$ and the other conditions in the theorem statement. As discussed earlier in this section, it suffices to reduce in total variation to $\pr{isgm}(n, k, d, \mu, 1/r)$ where $1/r \le \epsilon$ and $\mu = \tau/\sqrt{k}$.
Fix a constant pair of probabilities $0 < q < p \le 1$ and any sequence of parameters $(n, k, d, \tau, \epsilon)$ all of which are implicitly functions of $n$ such that $(n, \epsilon^{-1})$ satisfies $\pr{(t)}$ and $(n, k, d, \tau, \epsilon)$ satisfy the conditions $$n \le c \cdot \frac{k^2 \epsilon^2}{\tau^4 \cdot (\log n)^{2+2c'}} \quad \text{and} \quad w k^2 \le d$$ for sufficiently large $n$, an arbitrarily slow-growing function $w = w(n) \to \infty$ at least satisfying that $w(n) = n^{o(1)}$, a sufficiently small constant $c > 0$ and a sufficiently large constant $c' > 0$. In order to fulfill the criteria in Condition \[cond:lb\], we now will specify:
1. a sequence of parameters $(M, N, k_M, k_N, p, q)$ such that the $k\pr{-bpds}$ instance with these parameters is hard according to Conjecture \[conj:hard-conj\]; and
2. a sequence of parameters $(n', k, d, \tau, \epsilon)$ with a subsequence that satisfies three conditions: (2.1) the parameters on the subsequence are in the regime of the desired computational lower bound for $\pr{rsme}$; (2.2) they have the same growth rate as $(n, k, d, \tau, \epsilon)$ on this subsequence; and (2.3) such that $\pr{rsme}$ with the parameters on this subsequence can be produced by the reduction $k\pr{-bpds-to-isgm}$ with input $k\pr{-bpds}(M, N, k_M, k_N, p, q)$.
By the discussion in Section \[subsec:2-tvreductions\], this would be sufficient to show the desired computational lower bound. We choose these parameters as follows:
- let $r$ be a prime with $r \ge \epsilon^{-1}$ and $r \le 2\epsilon^{-1}$, which exists by Bertrand’s postulate and can be found in $\text{poly}(\epsilon^{-1}) \le \text{poly}(n)$ time;
- let $t$ be such that $r^t$ is the closest power of $r$ to $\sqrt{n}$, let $n' = \lfloor w^{-2} r^{2t} \rfloor$, let $k_N = \lfloor \sqrt{n'} \rfloor$ and let $N = wk_N^2 \le k_N r^t$; and
- set $\mu = \tau/\sqrt{k}$, $k_M = k$ and $M = w k^2$.
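A sketch of how the first two bullets could be instantiated (our own illustrative code; the concrete numbers $n$, $w$ and $\epsilon$ are hypothetical):

```python
from math import isqrt

def is_prime(m: int) -> bool:
    return m >= 2 and all(m % p for p in range(2, isqrt(m) + 1))

def bertrand_prime(lo: int) -> int:
    """Smallest prime r with lo <= r <= 2*lo, guaranteed by Bertrand's postulate."""
    for r in range(lo, 2 * lo + 1):
        if is_prime(r):
            return r
    raise AssertionError("contradicts Bertrand's postulate")

n, w = 10 ** 6, 10
eps = 1 / 30
r = bertrand_prime(int(1 / eps))       # prime in [1/eps, 2/eps]; here r = 31
t, root = 1, isqrt(n)
while abs(r ** (t + 1) - root) < abs(r ** t - root):
    t += 1                             # r^t is the closest power of r to sqrt(n)
n_prime = r ** (2 * t) // w ** 2       # n' = floor(w^{-2} r^{2t})
k_N = isqrt(n_prime)                   # k_N = floor(sqrt(n'))
N = w * k_N ** 2
assert is_prime(r) and 1 / eps <= r <= 2 / eps
assert N <= k_N * r ** t               # the constraint N = w k_N^2 <= k_N r^t
```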
The given inequality and parameter settings above rearrange to the following condition on $n'$: $$n' \le w^{-2} r^{2t} = O\left( \frac{r^{2t}}{n} \cdot \frac{k^2 \epsilon^2}{\tau^4 \cdot (\log n)^{2+2c'}} \right)$$ Furthermore, the given inequality yields the constraint on $\mu$ that $$\mu = \tau \cdot k^{-1/2} \le \frac{c^{1/4} \epsilon^{1/2}}{n^{1/4} (\log n)^{(1 + c')/2}} = \Theta \left( \frac{r^{t/2}}{n^{1/4}} \cdot \frac{1}{\sqrt{r^{t + 1} (\log n)^{1+c'}}} \right)$$ As long as $\sqrt{n} = \tilde{\Theta}(r^t)$ then: (2.1) the inequality above on $n'$ would imply that $(n', k, d, \tau, \epsilon)$ is in the desired hard regime; (2.2) $n$ and $n'$ have the same growth rate since $w = n^{o(1)}$; and (2.3) taking $c'$ large enough would imply that $\mu$ satisfies the conditions needed to apply Theorem \[thm:isgmreduction\] to yield the desired reduction. By Lemma \[lem:propT\], there is an infinite subsequence of the input parameters such that $\sqrt{n} = \tilde{\Theta}(r^t)$. This verifies the three criteria in Condition \[cond:lb\]. Following the argument in Section \[subsec:2-tvreductions\], Lemma \[lem:3a\] now implies the theorem.
As alluded to in Section \[subsec:1-problems-rsme\], replacing $K_{r, t}$ with $R_{n, \epsilon}$ in the applications of dense Bernoulli rotations in $k\pr{-bpds-to-isgm}$ removes condition $\pr{(t)}$ from this lower bound. Specifically, applying $k\pr{-bpds-to-isgm}_R$ and Corollary \[thm:mod-isgmreduction\] in place of $k\pr{-bpds-to-isgm}$ and replacing the dimension $r^t$ with $L$ in the argument above yields the lower bound shown below. Note that condition $\pr{(t)}$ in Theorem \[thm:rsme-lb\] is replaced by the looser requirement that $\epsilon = \tilde{\Omega}(n^{-1/2})$. As discussed at the end of Section \[subsec:3-rsme-reduction\], this requirement arises from the condition $\epsilon \gg L^{-1} \log L$ in Corollary \[thm:mod-isgmreduction\]. We remark that the condition $\epsilon = \tilde{\Omega}(n^{-1/2})$ is implicit in $\pr{(t)}$ and hence the following corollary is strictly stronger than Theorem \[thm:rsme-lb\].
\[cor:rsme-lb-mod\] If $k, d$ and $n$ are polynomial in each other, $k = o(\sqrt{d})$ and $\epsilon < 1/2$ is such that $\epsilon = \tilde{\Omega}(n^{-1/2})$, then the $k\pr{-bpc}$ conjecture or $k\pr{-bpds}$ conjecture for constant $0 < q < p \le 1$ both imply that there is a computational lower bound for $\pr{rsme}(n, k, d, \tau, \epsilon)$ at all sample complexities $n = \tilde{o}(k^2 \epsilon^2/\tau^4)$.
We remark that only assuming the $k\pr{-pc}$ conjecture also yields hardness for $\pr{rsme}$. In particular $k\pr{-pc}$ can be mapped to the asymmetric bipartite case by considering the bipartite subgraph with $k/2$ parts on one side and $k/2$ on the other. Showing hardness for $\pr{rsme}$ from $k\pr{-pc}$ then reduces to the hardness yielded by $k\pr{-bpc}$ with $M = N$. Examining this restricted setting in the theorem above and passing through an analogous argument yields a computational lower bound at the slightly suboptimal rate $$n = \tilde{o}\left(k^2 \epsilon/\tau^2\right) \quad \text{as long as} \quad \tau^2 \log n = o(\epsilon)$$ When $(\log n)^{-O(1)} \lesssim \epsilon \lesssim 1/\log n$, then the optimal $k$-to-$k^2$ gap is recovered up to $\text{polylog}(n)$ factors by this result.
Negative Sparse PCA {#subsec:3-neg-spca}
-------------------
In this section, we deduce Theorem \[thm:neg-spca-lb\] on the hardness of $\pr{neg-spca}$ using the reduction $\pr{bpds-to-neg-spca}$ and Theorem \[thm:neg-spca\]. Because this reduction does not bear the number-theoretic considerations of the reduction to $\pr{rsme}$, this proof will be substantially more straightforward.
[thm:neg-spca-lb]{} \[Lower Bounds for $\pr{neg-spca}$\] If $k, d$ and $n$ are polynomial in each other, $k = o(\sqrt{d})$ and $k = o(n^{1/6})$, then the $\pr{bpc}$ or $\pr{bpds}$ conjecture for constant $0 < q < p \le 1$ both imply a computational lower bound for $\pr{neg-spca}(n, k, d, \theta)$ at all levels of signal $\theta = \tilde{o}(\sqrt{k^2/n})$.
We show that Theorem \[thm:neg-spca\] implies that $\pr{bpds-to-neg-spca}$ fills out all of the possible growth rates specified by the computational lower bound $\theta = \tilde{o}(\sqrt{k^2/n})$ and the other conditions in the theorem statement. Fix a constant pair of probabilities $0 < q < p \le 1$ and a sequence of parameters $(n, k, d, \theta)$ all of which are implicitly functions of $n$ such that $$\theta \le cw^{-1} \cdot \sqrt{\frac{k^2}{n (\log n)^2}}, \quad wk \le n^{1/6} \quad \text{and} \quad w k^2 \le d$$ for sufficiently large $n$, an arbitrarily slow-growing function $w = w(n) \to \infty$ where $w(n) = n^{o(1)}$ and a sufficiently small constant $c > 0$. In order to fulfill the criteria in Condition \[cond:lb\], we now will specify: a sequence of parameters $(M, N, k_M, k_N, p, q)$ such that the $\pr{bpds}$ instance with these parameters is hard according to Conjecture \[conj:hard-conj\], and such that $\pr{neg-spca}$ with the parameters $(n, k, d, \theta)$ can be produced by the reduction $\pr{bpds-to-neg-spca}$ applied to $\pr{bpds}(M, N, k_M, k_N, p, q)$. These parameters along with the internal parameter $\tau$ of the reduction can be chosen as follows:
- let $N = n$, $k_N = w^{-1} \sqrt{n}$, $k_M = k$ and $M = w k^2$; and
- let $\tau > 0$ be such that $$\tau^2 = \frac{4n\theta}{k_N k(1 - \theta)}$$
It is straightforward to verify that the inequality above upper bounding $\theta$ implies that $\tau^2 \le 8c/\log n$ for sufficiently large $n$, so $\tau = O(\sqrt{c/\log n})$ and thus satisfies the condition on $\tau$ needed to apply Lemma \[lem:randomrotations\] and Theorem \[thm:neg-spca\] for a sufficiently small $c > 0$. Furthermore, this setting of $\tau$ yields $$\theta = \frac{\tau^2 k_N k}{4n + \tau^2 k_N k}$$ Note also that $d \ge M$ and $n \gg M^3$ by construction. Applying Theorem \[thm:neg-spca\] now verifies the desired property above. This verifies the criteria in Condition \[cond:lb\] and, following the argument in Section \[subsec:2-tvreductions\], Lemma \[lem:3a\] now implies the theorem.
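The algebra relating $\theta$ and $\tau$ can be double-checked numerically (the parameter values below are hypothetical, and the bound $\tau^2 \le 8c/\log n$ is our rearrangement of the displayed inequality on $\theta$ together with $\theta \le 1/2$):

```python
from math import sqrt, log

n, k, w, c = 10 ** 6, 10, 5, 0.01
theta = (c / w) * sqrt(k ** 2 / (n * log(n) ** 2))  # theta at its allowed maximum
k_N = sqrt(n) / w                                   # k_N = w^{-1} sqrt(n)

# the internal parameter of bpds-to-neg-spca: tau^2 = 4 n theta / (k_N k (1 - theta))
tau_sq = 4 * n * theta / (k_N * k * (1 - theta))

# inverting recovers theta = tau^2 k_N k / (4n + tau^2 k_N k)
theta_back = tau_sq * k_N * k / (4 * n + tau_sq * k_N * k)
assert abs(theta_back - theta) < 1e-15

# tau^2 <= 4c / ((1 - theta) log n) <= 8c / log n once theta <= 1/2,
# so tau = O(sqrt(c / log n)) as required by the reduction
assert tau_sq <= 8 * c / log(n)
```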
We remark that the constraint $k = o(n^{1/6})$, as mentioned in Section \[subsec:1-problems-negspca\], is a technical condition that we believe should not be necessary for the theorem to hold. This is similar to the constraint arising in the strong reduction to sparse PCA given by $\pr{Clique-to-Wishart}$ in [@brennan2019optimal]. In $\pr{Clique-to-Wishart}$, the random matrix comparison between Wishart and $\pr{goe}$ produced the technical condition that $k = o(n^{1/6})$ in a similar manner to how our comparison result between Wishart and inverse Wishart produces the same constraint here. We also remark that the reduction $\pr{Clique-to-Wishart}$ can be used here to yield the same hardness for $\pr{neg-spca}$ as in Theorem \[thm:neg-spca-lb\] based only on the $\pr{pc}$ conjecture. This is achieved by the reduction that maps from $\pr{pc}$ to sparse PCA with $d = wk^2$ as a first step using $\pr{Clique-to-Wishart}$ and then uses the second step of $\pr{bpds-to-neg-spca}$ to map to $\pr{neg-spca}$.
Mixtures of Sparse Linear Regressions and Robustness {#subsec:3-slr}
----------------------------------------------------
In this section, we deduce Theorems \[thm:uslr-lb\], \[thm:mslr-lb\] and \[thm:rslr-lb\] on the hardness of unsigned, mixtures of and robust sparse linear regression, all using the reduction $k\pr{-bpds-to-mslr}$ with different parameters $(r, \epsilon)$ and Theorem \[thm:slr-reduction\]. We begin by showing bounds for $\pr{uslr}(n, k, d, \tau)$.
We first make the following simple but important observation. Note that a single sample from $\pr{uslr}$ is of the form $y = | \tau \cdot \langle v_S, X \rangle + \mN(0, 1)|$, which has the same distribution as $|y'|$ where $y' = \tau r \cdot \langle v_S, X \rangle + \mN(0, 1)$ and $r$ is an independent Rademacher random variable. Note that $y'$ is a sample from $\pr{mslr}_D(n, k, d, \gamma, 1/2)$ with $\gamma = \tau$. Thus to show a computational lower bound for $\pr{uslr}(n, k, d, \tau)$, it suffices to show a lower bound for $\pr{mslr}(n, k, d, \tau)$.
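The pointwise identity behind this observation can be checked numerically. In the sketch below (illustrative, not the paper's code), the Rademacher sign $r$ is absorbed into the noise, using that $r\eta \sim \mN(0,1)$:

```python
import random

# For r in {-1, +1}, |tau*r*<v,X> + eta| = |tau*<v,X> + r*eta| pointwise,
# and r*eta ~ N(0, 1); hence |y'| for an MSLR sample y' with gamma = tau
# has exactly the distribution of a USLR sample.
random.seed(0)
tau = 0.7
for _ in range(1000):
    ip = random.gauss(0, 1)     # stands in for <v_S, X>
    eta = random.gauss(0, 1)    # the N(0, 1) noise
    r = random.choice([-1, 1])  # Rademacher sign
    assert abs(abs(tau * r * ip + eta) - abs(tau * ip + r * eta)) < 1e-12
```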
\[thm:uslr-lb\] If $k, d$ and $n$ are polynomial in each other, $k = o(\sqrt{d})$ and $k = o(n^{1/6})$, then the $k\pr{-bpc}$ or $k\pr{-bpds}$ conjecture for constant $0 < q < p \le 1$ both imply that there is a computational lower bound for $\pr{uslr}(n, k, d, \tau)$ at all sample complexities $n = \tilde{o}(k^2/\tau^4)$.
To prove this theorem, we will show that Theorem \[thm:slr-reduction\] implies that $k\pr{-bpds-to-mslr}$ applied with $r = 2$ fills out all of the possible growth rates specified by the computational lower bound $n = \tilde{o}(k^2/\tau^4)$ and the other conditions in the theorem statement. As mentioned above, it suffices to reduce in total variation to $\pr{mslr}(n, k, d, \tau)$. Fix a constant pair of probabilities $0 < q < p \le 1$ and any sequence of parameters $(n, k, d, \tau)$ all of which are implicitly functions of $n$ with $$n \le c \cdot \frac{k^2}{w^2 \cdot \tau^4 \cdot (\log n)^{4}}, \quad wk \le n^{1/6} \quad \text{and} \quad w k^2 \le d$$ for sufficiently large $n$, an arbitrarily slow-growing function $w = w(n) \to \infty$ and a sufficiently small constant $c > 0$. In order to fulfill the criteria in Condition \[cond:lb\], we now will specify: a sequence of parameters $(M, N, k_M, k_N, p, q)$ such that the $k\pr{-bpds}$ instance with these parameters is hard according to Conjecture \[conj:hard-conj\], and such that $\pr{mslr}$ with the parameters $(n, k, d, \tau, 1/2)$ can be produced by the reduction $k\pr{-bpds-to-mslr}$ applied with $r = 2$ to $\pr{bpds}(M, N, k_M, k_N, p, q)$. By the discussion in Section \[subsec:2-tvreductions\], this would be sufficient to show the desired computational lower bound. We choose these parameters as follows:
- let $t$ be such that $2^t$ is the smallest power of two greater than $w\sqrt{n}$, let $k_N = \lfloor \sqrt{n} \rfloor$ and let $N = wk_N^2 \le k_N 2^t$; and
- set $k_M = k$ and $M = w k^2$.
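The parameter choices in the two bullets above can be traced with a short sketch (the numeric values are illustrative assumptions, not from the paper):

```python
import math

def mslr_params(n, k, w):
    """Toy sketch of the parameter choices in the bullets above
    (illustrative, not the paper's code)."""
    t = 1
    while 2 ** t <= w * math.sqrt(n):  # smallest power of two > w*sqrt(n)
        t += 1
    k_N = math.isqrt(n)                # floor(sqrt(n))
    N = w * k_N ** 2
    k_M, M = k, w * k ** 2
    return t, k_N, N, k_M, M

n, k, w = 10**6, 20, 4
t, k_N, N, k_M, M = mslr_params(n, k, w)
assert 2 ** t > w * math.sqrt(n) >= 2 ** (t - 1)
assert N <= k_N * 2 ** t  # the containment N = w*k_N^2 <= k_N * 2^t
```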
Now note that $\tau^2$ is upper bounded by $$\tau^2 \le \frac{c^{1/2} \cdot k}{wn^{1/2} \cdot (\log n)^{2}} = O\left( \frac{k_N k_M}{N \log (MN)} \right)$$ Furthermore, we have that $$\tau^2 \le \frac{c^{1/2} \cdot k}{wn^{1/2} \cdot (\log n)^{2}} = \Theta\left( \frac{k_M}{2^{t + 1} \log (k_N M \cdot 2^t) \log n} \right)$$ Therefore $\tau$ satisfies the conditions needed to apply Theorem \[thm:slr-reduction\] for a sufficiently small $c > 0$. Also note that $n \gg M^3$ and $d \ge M$ by construction. Applying Theorem \[thm:slr-reduction\] now verifies the desired property above. This verifies the criteria in Condition \[cond:lb\] and, following the argument in Section \[subsec:2-tvreductions\], Lemma \[lem:3a\] now implies the theorem.
The proof of the theorem above also directly implies Theorem \[thm:mslr-lb\]. This yields our main computational lower bounds for $\pr{mslr}$, which are stated below.
[thm:mslr-lb]{} \[Lower Bounds for $\pr{mslr}$\] If $k, d$ and $n$ are polynomial in each other, $k = o(\sqrt{d})$ and $k = o(n^{1/6})$, then the $k\pr{-bpc}$ or $k\pr{-bpds}$ conjecture for constant $0 < q < p \le 1$ both imply that there is a computational lower bound for $\pr{mslr}(n, k, d, \tau)$ at all sample complexities $n = \tilde{o}(k^2/\tau^4)$.
Now observe that the instances of $\pr{mslr}$ output by the reduction $k\pr{-bpds-to-mslr}$ applied with $r > 2$ are instances of $\pr{rslr}$ in Huber’s contamination model. Let $r$ be a prime number and $\epsilon \ge 1/r$. Also let $X \sim \mN(0, I_d)$ and $y = \tau \cdot \langle v_S, X \rangle + \eta$ where $\eta \sim \mN(0, 1)$ and $|S| = k$. By Definition \[defn:mslr-imbalanced\], $\pr{mslr}_D(n, k, d, \tau, 1/r)$ is of the form $$\pr{mix}_{\epsilon}\left( \mL(X, y), \mD_O \right)^{\otimes n} \quad \text{where} \quad \mD_O = \pr{mix}_{\epsilon^{-1} r^{-1}} \left( \mL(X, y), \mL' \right)$$ for some possibly random $S$ with $|S| = k$ and where $\mL'$ denotes the distribution on pairs $(X, y)$ that are jointly Gaussian with mean zero and $(d + 1) \times (d + 1)$ covariance matrix $$\left[\begin{matrix} \Sigma_{XX} & \Sigma_{Xy} \\ \Sigma_{yX} & \Sigma_{yy} \end{matrix} \right] = \left[\begin{matrix} I_d + \frac{(a^2 - 1)\gamma^2}{1 + \gamma^2} \cdot v_S v_S^\top & -a\gamma \cdot v_S \\ -a\gamma \cdot v_S^\top & 1 + \gamma^2 \end{matrix} \right]$$ This yields a very particular construction of an adversary in Huber’s contamination model, which we show in the next theorem yields a computational lower bound for $\pr{rslr}$. With the observations above, the proof of this theorem is similar to that of Theorem \[thm:rsme-lb\] and is deferred to Appendix \[subsec:appendix-3-part-3\].
If $k, d$ and $n$ are polynomial in each other, $\epsilon < 1/2$ is such that $(n, \epsilon^{-1})$ satisfies $\pr{(t)}$, $k = o(\sqrt{d})$ and $k = o(n^{1/6})$, then the $k\pr{-bpc}$ conjecture or $k\pr{-bpds}$ conjecture for constant $0 < q < p \le 1$ both imply that there is a computational lower bound for $\pr{rslr}(n, k, d, \tau, \epsilon)$ at all sample complexities $n = \tilde{o}(k^2 \epsilon^2/\tau^4)$.
Our main computational lower bound for $\pr{rslr}$ follows from the same argument applied to the reduction $k\pr{-bpds-to-mslr}_R$ instead of $k\pr{-bpds-to-mslr}$ and using Corollary \[thm:mod-slr-reduction\] instead of Theorem \[thm:slr-reduction\]. As in Corollary \[cor:rsme-lb-mod\], this replaces condition $(\pr{t})$ with the weaker condition that $\epsilon = \tilde{\Omega}(n^{-1/2})$.
[thm:rslr-lb]{} \[Lower Bounds for $\pr{rslr}$\] If $k, d$ and $n$ are polynomial in each other, $\epsilon < 1/2$ is such that $\epsilon = \tilde{\Omega}(n^{-1/2})$, $k = o(\sqrt{d})$ and $k = o(n^{1/6})$, then the $k\pr{-bpc}$ conjecture or $k\pr{-bpds}$ conjecture for constant $0 < q < p \le 1$ both imply that there is a computational lower bound for $\pr{rslr}(n, k, d, \tau, \epsilon)$ at all sample complexities $n = \tilde{o}(k^2 \epsilon^2/\tau^4)$.
Community Recovery and Partition Models {#sec:3-all-community}
=======================================
In this section, we devise several reductions based on $\pr{Bern-Rotations}$ and $\pr{Tensor-Bern-Rotations}$ using the design matrices and tensors from Section \[sec:2-bernoulli-rotations\] to reduce from $k\pr{-pc}, k\pr{-pds}, k\pr{-bpc}$ and $k\pr{-bpds}$ to dense stochastic block models, hidden partition models and semirandom planted dense subgraph. These reductions are briefly outlined in Section \[subsec:1-tech-design-matrices\].
Furthermore, the heuristic presented at the end of Section \[subsec:1-tech-design-matrices\] predicts the computational barriers for the problems in this section. The $\ell_2$ norm of the matrix $\bE[X]$ corresponding to a $k\pr{-pc}$ instance is $\Theta(k)$, which is just below $\tilde{\Theta}(\sqrt{n})$ when this $k\pr{-pc}$ instance is near its computational barrier. Furthermore, it can be verified that the $\ell_2$ norms of the matrices $\bE[X]$ corresponding to the problems in this section are:
- If $\gamma = P_{11} - P_0$ in the $\pr{isbm}$ notation of Section \[subsec:1-problems-sbm\], then a direct calculation yields that the $\ell_2$ norm corresponding to $\pr{isbm}$ is $\Theta(n\gamma/k)$.
- In $\pr{ghpm}$ and $\pr{bhpm}$, the corresponding $\ell_2$ norm can be verified to be $\Theta(K\gamma\sqrt{r})$.
- In our adversarial construction for $\pr{semi-cr}$, the corresponding $\ell_2$ norm is $\Theta(k \gamma)$ where $\gamma = P_1 - P_0$.
Following the heuristic, setting these equal to $\tilde{\Theta}(\sqrt{n})$ yields the predicted computational barriers of $\gamma^2 = \tilde{\Theta}(k^2/n)$ in $\pr{isbm}$, $\gamma^2 = \tilde{\Theta}(n/rK^2)$ in $\pr{ghpm}$ and $\pr{bhpm}$ and $\gamma^2 = \tilde{\Theta}(n/k^2)$ in $\pr{semi-cr}$. We now present our reduction to $\pr{isbm}$.
Dense Stochastic Block Models with Two Communities {#sec:3-community}
--------------------------------------------------
We begin by recalling the definition of the imbalanced 2-block stochastic block model from Section \[subsec:1-problems-sbm\].
Let $k$ and $n$ be positive integers such that $k$ divides $n$. The distribution $\pr{isbm}_D(n, k, P_{11}, P_{12}, P_{22})$ over $n$-vertex graphs $G$ is sampled by first choosing an $(n/k)$-subset $C \subseteq [n]$ uniformly at random and sampling the edges of $G$ independently with the following probabilities $$\bP\left[ \{i, j \} \in E(G) \right] = \left\{ \begin{array}{ll} P_{11} &\textnormal{if } i, j \in C \\ P_{12} &\textnormal{if exactly one of } i, j \textnormal{ is in } C \\ P_{22} &\textnormal{if } i, j \in [n] \backslash C \end{array} \right.$$
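A minimal sampler for this definition, written as an illustrative Python sketch (the function and variable names are ours, not the paper's), is:

```python
import itertools, random

def sample_isbm(n, k, P11, P12, P22, rng=random):
    """Illustrative sampler for isbm_D(n, k, P11, P12, P22): plant a
    hidden community C of size n/k, then draw edges independently."""
    assert n % k == 0
    C = set(rng.sample(range(n), n // k))
    edges = set()
    for i, j in itertools.combinations(range(n), 2):
        both, one = (i in C) and (j in C), (i in C) != (j in C)
        p = P11 if both else (P12 if one else P22)
        if rng.random() < p:
            edges.add((i, j))
    return C, edges

# Degenerate sanity check: P11 = 1, P12 = P22 = 0 gives exactly a clique on C.
random.seed(1)
C, E = sample_isbm(12, 3, 1.0, 0.0, 0.0)
assert len(C) == 4 and len(E) == 6  # binom(4, 2) intra-community edges
```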
Given a subset $C \subseteq [n]$ of size $n/k$, we let $\pr{isbm}_D(n, C, P_{11}, P_{12}, P_{22})$ denote $\pr{isbm}$ as defined above conditioned on the latent subset $C$. As discussed in Section \[subsec:2-formulations\], this naturally leads to a composite hypothesis testing problem between $$H_0 : G \sim \mG\left(n, P_0 \right) \quad \text{and} \quad H_1 : G \sim \pr{isbm}_D(n, k, P_{11}, P_{12}, P_{22})$$ where $P_0$ is any edge density in $(0, 1)$. This section is devoted to showing reductions from $k\pr{-pds}$ and $k\pr{-pc}$ to $\pr{isbm}$ formulated as this hypothesis testing problem. In particular, we will focus on $P_{11}, P_{12}, P_{22}$ and $P_0$ all of which are bounded away from $0$ and $1$ by a constant, and which satisfy that $$\label{eqn:deg-isbm}
P_0 = \frac{1}{k} \cdot P_{11} + \left( 1 - \frac{1}{k} \right) P_{12} = \frac{1}{k} \cdot P_{12} + \left( 1 - \frac{1}{k} \right) P_{22}$$ These two constraints allow $P_{11}, P_{12}, P_{22}$ to be reparameterized in terms of a signal parameter $\gamma$ as $$\label{eqn:isbm-param}
P_{11} = P_0 + \gamma, \quad P_{12} = P_0 - \frac{\gamma}{k - 1} \quad \text{and} \quad P_{22} = P_0 + \frac{\gamma}{(k - 1)^2}$$ There are two main reasons why we restrict to the parameter regime enforced by the density constraints in (\[eqn:deg-isbm\]): it yields a model with nearly uniform expected degrees, and it is a mean-field analogue of recovering the first community in the $k$-block stochastic block model.
- *Nearly Uniform Expected Degrees*: Observe that, conditioned on $C$, the expected degree of a vertex $i \in [n]$ in $\pr{isbm}(n, k, P_{11}, P_{12}, P_{22})$ is given by $$\bE\left[ \deg(i) | C \right] = \left\{ \begin{array}{ll} \left( \frac{n}{k} - 1 \right) \cdot P_{11} + \frac{n(k - 1)}{k} \cdot P_{12} &\textnormal{if } i \in C \\ \frac{n}{k} \cdot P_{12} + \left( \frac{n(k - 1)}{k} - 1 \right) \cdot P_{22} &\textnormal{if } i \in [n] \backslash C \end{array} \right.$$ Thus the density constraints in (\[eqn:deg-isbm\]) ensure that these differ by at most $1$ from each other and from $(n - 1)P_0$. Thus all of the vertices in $\pr{isbm}(n, k, P_{11}, P_{12}, P_{22})$ and the $H_0$ model $\mG\left(n, P_0 \right)$ have approximately the same expected degree. This precludes simple degree and total edge thresholding tests that are optimal in models of single community detection that are not degree-corrected. As discussed in Section \[subsec:1-problems-semicr\], the planted dense subgraph model has a detection threshold that differs from the conjectured Kesten-Stigum threshold for recovery of the planted dense subgraph. Thus to obtain computational lower bounds for a hypothesis testing problem that give tight recovery lower bounds, calibrating degrees is crucial. The main result of this section can be viewed as showing approximate degree correction is sufficient to obtain the Kesten-Stigum threshold for $\pr{isbm}$ through a reduction from $k\pr{-pds}$ and $k\pr{-pc}$.
- *Mean-Field Analogue of First Community Recovery in $k\pr{-sbm}$*: As discussed in Section \[subsec:1-problems-sbm\], the imbalanced 2-block stochastic block model $\pr{isbm}_D(n, k, P_{11}, P_{12}, P_{22})$ is roughly a mean-field analogue of recovering the first community $C_1$ in a $k$-block stochastic block model. More precisely, consider a graph $G$ wherein the vertex set $[n]$ is partitioned into $k$ latent communities $C_1, C_2, \dots, C_k$ each of size $n/k$ and edges are then included in the graph $G$ independently such that intra-community edges appear with probability $p$ while inter-community edges appear with probability $q < p$. The distribution $\pr{isbm}_D(n, k, P_{11}, P_{12}, P_{22})$ can be viewed as a mean-field analogue of recovering a first community $C = C_1$ in the $k$-block model, when $$P_{11} = p, \quad P_{12} = q \quad \text{and} \quad P_{22} = \frac{1}{k - 1} \cdot p + \left(1 - \frac{1}{k - 1} \right) q$$ Here, $P_{22}$ approximately corresponds to the average edge density on the subgraph of the $k$-block model restricted to $[n] \backslash C_1$. This analogy between $\pr{isbm}$ and $k\pr{-sbm}$ is also why we choose to parameterize $\pr{isbm}$ in terms of $k$ rather than the size $n/k$ of $C$.
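Both density constraints in (\[eqn:deg-isbm\]) and the near-uniformity of expected degrees can be verified exactly in rational arithmetic; the following sketch uses illustrative parameter values:

```python
from fractions import Fraction as F

def isbm_densities(P0, gamma, k):
    """The reparameterization above, in exact rational arithmetic
    (an illustrative check, not the paper's code)."""
    return P0 + gamma, P0 - gamma / (k - 1), P0 + gamma / (k - 1) ** 2

P0, gamma, k, n = F(1, 2), F(1, 10), 5, 100
P11, P12, P22 = isbm_densities(P0, gamma, k)
# Both density constraints hold exactly:
assert F(1, k) * P11 + (1 - F(1, k)) * P12 == P0
assert F(1, k) * P12 + (1 - F(1, k)) * P22 == P0
# Expected degrees inside and outside C are within 1 of (n - 1) * P0:
deg_in = (F(n, k) - 1) * P11 + F(n * (k - 1), k) * P12
deg_out = F(n, k) * P12 + (F(n * (k - 1), k) - 1) * P22
assert abs(deg_in - (n - 1) * P0) < 1 and abs(deg_out - (n - 1) * P0) < 1
```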
As discussed in Section \[subsec:1-problems-sbm\], if $k = o(\sqrt{n})$, the conjectured threshold for efficient recovery in $k\pr{-sbm}$ is the Kesten-Stigum threshold of $$\frac{(p - q)^2}{q(1 - q)} \gtrsim \frac{k^2}{n}$$ while the statistically optimal rate of recovery is when this level of signal is instead $\tilde{\Omega}(k^4/n^2)$. Furthermore, the information-theoretic threshold and conjectured computational barrier are the same for $\pr{isbm}$ in the regime defined by (\[eqn:deg-isbm\]). Parameterizing $\pr{isbm}$ in terms of $\gamma$ as in (\[eqn:isbm-param\]), the Kesten-Stigum threshold can be expressed as $\gamma^2 = \tilde{\Omega}(k^2/n)$. The objective of this section is to give a reduction from $k\pr{-pds}$ to $\pr{isbm}$ in the dense regime with $\min\{P_0, 1 - P_0\} = \Omega(1)$ up to the Kesten-Stigum threshold.
The first reduction of this section, $k$<span style="font-variant:small-caps;">-pds-to-isbm</span>, is shown in Figure \[fig:isbm-reduction\] and maps to the case where $P_0 = 1/2$ and (\[eqn:isbm-param\]) is only approximately true. In a subsequent corollary, a simple modification of this reduction will map to all $P_0$ with $\min\{P_0, 1 - P_0\} = \Omega(1)$ and show (\[eqn:isbm-param\]) holds exactly. The following theorem establishes the approximate Markov transition properties of $k$<span style="font-variant:small-caps;">-pds-to-isbm</span>. The proof of this theorem follows a similar structure to the proof of Theorem \[thm:isgmreduction\]. Recall that $\Phi(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^x e^{-t^2/2} dt$ denotes the standard normal CDF.
\[thm:isbm\] Let $N$ be a parameter and $r = r(N) \ge 2$ be a prime number. Fix initial and target parameters as follows:
- [Initial]{.nodecor} $k\pr{-pds}$ [Parameters:]{.nodecor} vertex count $N$, subgraph size $k = o(N)$ dividing $N$, edge probabilities $0 < q < p \le 1$ with $\min\{q, 1 - q\} = \Omega(1)$ and $p - q \ge N^{-O(1)}$, and a partition $E$ of $[N]$.
- [Target]{.nodecor} $\pr{isbm}$ [Parameters:]{.nodecor} $(n, r)$ where $\ell = \frac{r^t - 1}{r - 1}$ and $n = kr\ell$ for some parameter $t = t(N) \in \mathbb{N}$ satisfying that $$m \le kr^t \le kr\ell \le \textnormal{poly}(N)$$ where $m$ is the smallest multiple of $k$ larger than $\left( \frac{p}{Q} + 1 \right) N$ and where $$Q = 1 - \sqrt{(1 - p)(1 - q)} + \mathbf{1}_{\{ p = 1\}} \left( \sqrt{q} - 1 \right)$$
- [Target]{.nodecor} $\pr{isbm}$ [Edge Strengths:]{.nodecor} $(P_{11}, P_{12}, P_{22})$ given by $$P_{11} = \Phi\left( \frac{\mu(r - 1)^2}{r^{t +1}}\right), \quad P_{12} = \Phi\left( - \frac{\mu(r - 1)}{r^{t+1}}\right) \quad \textnormal{and} \quad P_{22} = \Phi\left( \frac{\mu}{r^{t +1}}\right)$$ where $\mu \in (0, 1)$ satisfies that $$\mu \le \frac{1}{2 \sqrt{6\log (kr\ell) + 2\log (p - Q)^{-1}}} \cdot \min \left\{ \log \left( \frac{p}{Q} \right), \log \left( \frac{1 - Q}{1 - p} \right) \right\}$$
Let $\mathcal{A}(G)$ denote $k$<span style="font-variant:small-caps;">-pds-to-isbm</span> applied to the graph $G$ with these parameters. Then $\mathcal{A}$ runs in $\textnormal{poly}(N)$ time and it follows that $$\begin{aligned}
\TV\left( \mathcal{A}\left( \mG_E(N, k, p, q) \right), \, \pr{isbm}_D(n, r, P_{11}, P_{12}, P_{22}) \right) &= O\left( \frac{k}{\sqrt{N}} + e^{-\Omega(N^2/km)} + (kr\ell)^{-1} \right) \\
\TV\left( \mathcal{A}\left( \mG(N, q) \right), \, \mG(n, 1/2) \right) &= O\left( e^{-\Omega(N^2/km)} + (kr\ell)^{-1} \right)\end{aligned}$$
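The parameter bookkeeping in the theorem can be traced with a short numerical sketch (the concrete values are illustrative assumptions, not from the paper):

```python
import math

def isbm_reduction_params(N, k, p, q, r):
    """Sketch of the parameter bookkeeping in the theorem above
    (illustrative values; the p = 1 branch follows the definition of Q)."""
    Q = 1 - math.sqrt((1 - p) * (1 - q)) + (math.sqrt(q) - 1 if p == 1 else 0)
    m = k * (math.floor((p / Q + 1) * N / k) + 1)  # smallest multiple of k > (p/Q + 1)N
    t = 1
    while k * r ** t < m:  # smallest t with m <= k * r^t
        t += 1
    ell = (r ** t - 1) // (r - 1)
    mu = min(math.log(p / Q), math.log((1 - Q) / (1 - p))) / (
        2 * math.sqrt(6 * math.log(k * r * ell) + 2 * math.log(1 / (p - Q))))
    return Q, m, t, ell, mu

Q, m, t, ell, mu = isbm_reduction_params(N=1000, k=10, p=0.9, q=0.4, r=3)
assert m % 10 == 0 and m > (0.9 / Q + 1) * 1000  # multiple of k, large enough
assert m <= 10 * 3 ** t <= 10 * 3 * ell          # m <= k r^t <= k r ell
assert 0 < mu < 1
```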
**Algorithm** $k$<span style="font-variant:small-caps;">-pds-to-isbm</span>
*Inputs*: $k\pr{-pds}$ instance $G \in \mG_N$ with dense subgraph size $k$ that divides $N$, and the following parameters
- partition $E$ of $[N]$ into $k$ parts of size $N/k$, edge probabilities $0 < q < p \le 1$
- let $m$ be the smallest multiple of $k$ larger than $\left( \frac{p}{Q} + 1 \right) N$ where $$Q = 1 - \sqrt{(1 - p)(1 - q)} + \mathbf{1}_{\{ p = 1\}} \left( \sqrt{q} - 1 \right)$$
- output number of vertices $n = kr\ell$ where $r$ is a prime number, $\ell = \frac{r^t - 1}{r - 1}$ for some $t \in \mathbb{N}$ and $$m \le kr^t \le kr\ell \le \text{poly}(N)$$
- mean parameter $\mu \in (0, 1)$ satisfying that $$\mu \le \frac{1}{2 \sqrt{6\log n + 2\log (p - Q)^{-1}}} \cdot \min \left\{ \log \left( \frac{p}{Q} \right), \log \left( \frac{1 - Q}{1 - p} \right) \right\}$$
1. *Symmetrize and Plant Diagonals*: Compute $M_{\text{PD1}} \in \{0, 1\}^{m \times m}$ with partition $F$ of $[m]$ as $$M_{\text{PD1}} \gets \pr{To-}k\textsc{-Partite-Submatrix}(G)$$ applied with initial dimension $N$, partition $E$, edge probabilities $p$ and $q$ and target dimension $m$.
2. *Pad*: Form $M_{\text{PD2}} \in \{0, 1\}^{kr^t \times kr^t}$ by embedding $M_{\text{PD1}}$ as the upper left principal submatrix of $M_{\text{PD2}}$ and then adding $kr^t - m$ new indices for columns and rows, with all missing entries sampled i.i.d. from $\text{Bern}(Q)$. Let $F'_i$ be $F_i$ together with $r^t - m/k$ of the new indices. Sample $k$ random permutations $\sigma_i$ of $F_i'$ independently for each $1 \le i \le k$ and permute the indices of the rows and columns of $M_{\text{PD2}}$ within each part $F'_i$ according to $\sigma_i$.
3. *Bernoulli Rotations*: Let $F''$ be a partition of $[kr\ell]$ into $k$ equally sized parts. Now compute the matrix $M_{\text{R}} \in \mathbb{R}^{kr\ell \times kr\ell}$ as follows:
1. For each $i, j \in [k]$, apply $\pr{Tensor-Bern-Rotations}$ to the matrix $(M_{\text{PD2}})_{F_i', F_j'}$ with matrix parameter $A_1 = A_2 = K_{r, t}$, rejection kernel parameter $R_{\pr{rk}} = kr\ell$, Bernoulli probabilities $0 < Q < p \le 1$, output dimension $r\ell$, $\lambda_1 = \lambda_2 = \sqrt{1 + (r - 1)^{-1}}$ and mean parameter $\mu$.
2. Set the entries of $(M_{\text{R}})_{F''_i, F''_j}$ to be the entries in order of the matrix output in (1).
4. *Threshold and Output*: Now construct the graph $G'$ with vertex set $[kr\ell]$ such that for each $i > j$ with $i, j \in [kr\ell]$, we have $\{i, j \} \in E(G')$ if and only if $(M_{\text{R}})_{ij} \ge 0$. Output $G'$ with randomly permuted vertex labels.
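Step 4 is the reason the edge strengths in Theorem \[thm:isbm\] are values of $\Phi$: thresholding a unit-variance Gaussian entry with mean $m_0$ at zero produces a $\text{Bern}(\Phi(m_0))$ edge indicator. A quick numerical sanity check of this fact, with an assumed mean of $0.5$ (illustrative, not the paper's code):

```python
import random
from statistics import NormalDist

Phi = NormalDist().cdf

# Thresholding a Gaussian with mean 0.5 and variance 1 at zero yields an
# indicator distributed as Bern(Phi(0.5)); estimate the frequency by
# Monte Carlo and compare against Phi.
random.seed(0)
mean, n_samples = 0.5, 20000
freq = sum(random.gauss(mean, 1) >= 0 for _ in range(n_samples)) / n_samples
assert abs(freq - Phi(mean)) < 0.02
```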
To prove this theorem, we begin by proving a lemma analyzing the dense Bernoulli rotations step of $k$<span style="font-variant:small-caps;">-pds-to-isbm</span>. Define $v_{S, F', F''}(M)$ as in Section \[subsec:3-rsme-reduction\]. The proof of the next lemma follows similar steps to the proof of Lemma \[lem:isgm-rotations\].
\[lem:isbm-rotations\] Let $F'$ and $F''$ be fixed partitions of $[kr^t]$ and $[kr\ell]$ into $k$ parts of size $r^t$ and $r\ell$, respectively, and let $S \subseteq [kr^t]$ where $|S \cap F_i'| = 1$ for each $1 \le i \le k$. Let $\mathcal{A}_{\textnormal{3}}$ denote Step 3 of $k\pr{-pds-to-isbm}$ with input $M_{\textnormal{PD2}}$ and output $M_{\textnormal{R}}$. If $p, Q$ and $\mu$ are as in Theorem \[thm:isbm\], then it follows that $$\begin{aligned}
&\TV\Big( \mathcal{A}_{\textnormal{3}} \left( \mathcal{M}_{[kr^t] \times [kr^t]} \left( S \times S, \textnormal{Bern}(p), \textnormal{Bern}(Q) \right) \right), \\
&\quad \quad \quad \quad \left. \mL\left( \frac{\mu(r -1)}{r} \cdot v_{S, F', F''}(K_{r, t}) v_{S, F', F''}(K_{r, t})^\top + \mN(0, 1)^{\otimes kr\ell \times kr\ell} \right) \right) = O\left((kr\ell)^{-1}\right) \\
&\TV\left( \mathcal{A}_{\textnormal{3}} \left(\textnormal{Bern}(Q)^{\otimes kr^t \times kr^t} \right), \, \mN(0, 1)^{\otimes kr\ell \times kr\ell} \right) = O\left((kr\ell)^{-1}\right)\end{aligned}$$
First consider the case where $M_{\textnormal{PD2}} \sim \mathcal{M}_{[kr^t] \times [kr^t]} \left( S \times S, \textnormal{Bern}(p), \textnormal{Bern}(Q) \right)$. Observe that the submatrices of $M_{\textnormal{PD2}}$ are distributed as follows $$(M_{\textnormal{PD2}})_{F_i', F_j'} \sim \pr{pb}\left(F_i' \times F_j', (S \cap F_i', S \cap F_j'), p, Q\right)$$ and are independent. Combining the upper bound on the singular values of $K_{r, t}$ in Lemma \[lem:Krtsv\] with Corollary \[cor:tensor-bern-rotations\] implies that $$\TV\left( (M_{\textnormal{R}})_{F''_i, F''_j}, \, \mL\left( \frac{\mu(r -1)}{r} \cdot (K_{r, t})_{\cdot, S \cap F_i'} (K_{r, t})_{\cdot, S \cap F_j'}^\top + \mN(0, 1)^{\otimes r\ell \times r\ell} \right) \right) = O\left(r^{2t} \cdot (kr\ell)^{-3} \right)$$ Since the submatrices $(M_{\textnormal{R}})_{F''_i, F''_j}$ are independent, the tensorization property of total variation in Fact \[tvfacts\] implies that $\TV\left( M_{\textnormal{R}}, \mL(Z) \right) = O\left(k^2r^{2t} \cdot (kr\ell)^{-3} \right) = O\left((kr\ell)^{-1}\right)$ where the submatrices $Z_{F''_i, F_j''}$ are independent and satisfy $$Z_{F''_i, F_j''} \sim \mL\left( \frac{\mu(r -1)}{r} \cdot (K_{r, t})_{\cdot, S \cap F_i'} (K_{r, t})_{\cdot, S \cap F_j'}^\top + \mN(0, 1)^{\otimes r\ell \times r\ell} \right)$$ Note that the entries of $Z$ are independent Gaussians each with variance $1$ and $Z$ has mean given by $\frac{\mu(r - 1)}{r} \cdot v_{S, F', F''}(K_{r, t}) v_{S, F', F''}(K_{r, t})^\top$, by the definition of $v_{S, F', F''}(K_{r, t})$. This proves the first total variation upper bound in the statement of the lemma. Now suppose that $M_{\textnormal{PD2}} \sim \textnormal{Bern}(Q)^{\otimes kr^t \times kr^t}$. Corollary \[cor:tensor-bern-rotations\] implies that $$\TV\left( (M_{\textnormal{R}})_{F''_i, F''_j}, \, \mN(0, 1)^{\otimes r\ell \times r\ell} \right) = O\left(r^{2t} \cdot (kr\ell)^{-3} \right)$$ for each $1 \le i, j \le k$.
Since the submatrices $(M_{\textnormal{R}})_{F''_i, F''_j}$ of $M_{\textnormal{R}}$ are independent, it follows that $$\TV\left( M_{\textnormal{R}}, \, \mN(0, 1)^{\otimes kr\ell \times kr\ell} \right) = O\left(k^2r^{2t} \cdot (kr\ell)^{-3} \right) = O\left((kr\ell)^{-1}\right)$$ by the tensorization property of total variation in Fact \[tvfacts\], completing the proof of the lemma.
The next lemma is immediate but makes explicit the precise guarantees for Step 4 of $k\pr{-pds-to-isbm}$.
\[lem:thresholding-isbm\] Let $F', F''$ and $S$ be as in Lemma \[lem:isbm-rotations\]. Let $\mathcal{A}_{\textnormal{4}}$ denote Step 4 of $k\pr{-pds-to-isbm}$ with input $M_{\textnormal{R}}$ and output $G'$. Then $$\begin{aligned}
\mathcal{A}_{\textnormal{4}}\left( \frac{\mu(r -1)}{r} \cdot v_{S, F', F''}(K_{r, t}) v_{S, F', F''}(K_{r, t})^\top + \mN(0, 1)^{\otimes kr\ell \times kr\ell} \right) &\sim \pr{isbm}_D(kr\ell, r, P_{11}, P_{12}, P_{22}) \\
\mathcal{A}_{\textnormal{4}} \left( \mN(0, 1)^{\otimes kr\ell \times kr\ell} \right) &\sim \mG(kr\ell, 1/2)\end{aligned}$$ where $P_{11}, P_{12}$ and $P_{22}$ are as in Theorem \[thm:isbm\].
First observe that, since Lemma \[lem:suborthogonalmatrices\] implies that each column of $K_{r, t}$ contains exactly $(r - 1)\ell$ entries equal to $1/\sqrt{r^t(r - 1)}$ and $\ell$ entries equal to $(1 - r)/\sqrt{r^t(r - 1)}$, it follows that $v_{S, F', F''}(K_{r, t})$ contains $k(r - 1)\ell$ entries equal to $1/\sqrt{r^t(r - 1)}$ and $k\ell$ entries equal to $(1 - r)/\sqrt{r^t(r - 1)}$. Therefore there is a subset $T \subseteq [kr\ell]$ with $|T| = k\ell$ such that the $kr\ell \times kr\ell$ mean matrix $Z = v_{S, F', F''}(K_{r, t}) v_{S, F', F''}(K_{r, t})^\top$ has entries $$Z_{ij} = \frac{1}{r^t(r - 1)} \cdot \left\{ \begin{array}{ll} (r - 1)^2 &\textnormal{if } i, j \in T \\ -(r - 1) &\textnormal{if } i \in T \text{ and } j \not \in T \text{ or } i \not \in T \text{ and } j \in T \\ 1 &\textnormal{if } i, j \not \in T \end{array} \right.$$ Since the vertices of $G'$ are randomly permuted, it now follows by definition that if $$M_{\textnormal{R}} \sim \mL\left( \frac{\mu(r -1)}{r} \cdot v_{S, F', F''}(K_{r, t}) v_{S, F', F''}(K_{r, t})^\top + \mN(0, 1)^{\otimes kr\ell \times kr\ell} \right)$$ then $G' \sim \pr{isbm}_D(kr\ell, r, P_{11}, P_{12}, P_{22})$, proving the first distributional equality in the lemma. The second distributional equality follows from the fact that $\Phi(0) = 1/2$.
We now complete the proof of Theorem \[thm:isbm\] using a similar application of Lemma \[lem:tvacc\] as in the proof of Theorem \[thm:isgmreduction\].
We apply Lemma \[lem:tvacc\] to the steps $\mathcal{A}_i$ of $\mathcal{A}$ under each of $H_0$ and $H_1$. Define the steps of $\mathcal{A}$ to map inputs to outputs as follows $$(G, E) \xrightarrow{\mathcal{A}_1} (M_{\text{PD1}}, F) \xrightarrow{\mathcal{A}_2} (M_{\text{PD2}}, F') \xrightarrow{\mathcal{A}_3} (M_{\text{R}}, F'') \xrightarrow{\mathcal{A}_{\text{4}}} G'$$ Under $H_1$, consider Lemma \[lem:tvacc\] applied to the following sequence of distributions $$\begin{aligned}
\mathcal{P}_0 &= \mG_E(N, k, p, q) \\
\mathcal{P}_1 &= \mathcal{M}_{[m] \times [m]}(S \times S, \textnormal{Bern}(p), \textnormal{Bern}(Q)) \quad \text{where } S \sim \mU_m(F) \\
\mathcal{P}_2 &= \mathcal{M}_{[kr^t] \times [kr^t]}(S \times S, \textnormal{Bern}(p), \textnormal{Bern}(Q)) \quad \text{where } S \sim \mU_{kr^t}(F') \\
\mathcal{P}_3 &= \frac{\mu(r -1)}{r} \cdot v_{S, F', F''}(K_{r, t}) v_{S, F', F''}(K_{r, t})^\top + \mN(0, 1)^{\otimes kr\ell \times kr\ell} \quad \text{where } S \sim \mU_{kr^t}(F') \\
\mathcal{P}_{\text{4}} &= \pr{isbm}_D(kr\ell, r, P_{11}, P_{12}, P_{22})\end{aligned}$$ Applying Lemma \[lem:submatrix\], we can take $$\epsilon_1 = 4k \cdot \exp\left( - \frac{Q^2N^2}{48pkm} \right) + \sqrt{\frac{C_Q k^2}{2m}}$$ where $C_Q = \max\left\{ \frac{Q}{1 - Q}, \frac{1 - Q}{Q} \right\}$. The step $\mathcal{A}_2$ is exact and we can take $\epsilon_2 = 0$. Applying Lemma \[lem:isbm-rotations\] and averaging over $S \sim \mU_{kr^t}(F')$ using the conditioning property of total variation in Fact \[tvfacts\] yields that we can take $\epsilon_3 = O\left((kr\ell)^{-1}\right)$. By Lemma \[lem:thresholding-isbm\], Step 4 is exact and we can take $\epsilon_4 = 0$. By Lemma \[lem:tvacc\], we therefore have that $$\TV\left( \mathcal{A}\left( \mG_E(N, k, p, q) \right), \, \pr{isbm}(n, r, P_{11}, P_{12}, P_{22}) \right) = O\left( \frac{k}{\sqrt{N}} + e^{-\Omega(N^2/km)} + (kr\ell)^{-1} \right)$$ which proves the desired result in the case of $H_1$. Under $H_0$, consider the distributions $$\begin{aligned}
\mathcal{P}_0 &= \mG(N, q) \\
\mathcal{P}_1 &= \text{Bern}(Q)^{\otimes m \times m} \\
\mathcal{P}_2 &= \text{Bern}(Q)^{\otimes kr^t \times kr^t} \\
\mathcal{P}_3 &= \mN(0, 1)^{\otimes kr\ell \times kr\ell} \\
\mathcal{P}_{\text{4}} &= \mG(kr\ell, 1/2)\end{aligned}$$ As above, Lemmas \[lem:submatrix\], \[lem:isbm-rotations\] and \[lem:thresholding-isbm\] imply that we can take $$\epsilon_1 = 4k \cdot \exp\left( - \frac{Q^2N^2}{48pkm} \right), \quad \epsilon_2 = 0, \quad \epsilon_3 = O\left((kr\ell)^{-1}\right) \quad \text{and} \quad \epsilon_{\text{4}} = 0$$ By Lemma \[lem:tvacc\], we therefore have that $$\TV\left( \mathcal{A}\left( \mG(N, q) \right), \mG(n, 1/2) \right) = O\left( e^{-\Omega(N^2/km)} + (kr\ell)^{-1} \right)$$ which completes the proof of the theorem.
We now prove that a slight modification to this reduction will map to all $P_0$ with $\min\{P_0, 1 - P_0\} = \Omega(1)$ and to the setting where the density constraints in (\[eqn:deg-isbm\]) hold exactly.
\[thm:isbm-mod\] Let $0 < q < p \le 1$ be constant and let $N, r, k, E, \ell$ and $n$ be as in Theorem \[thm:isbm\] with the additional condition that $kr^{3/2} = o(r^{2t})$. Suppose that $P_0$ satisfies $\min\{P_0, 1 - P_0 \} = \Omega(1)$ and $\gamma \in (0, 1)$ satisfies that $$\gamma \le \frac{c}{r^{t - 1} \sqrt{\log (k r \ell)}}$$ for a sufficiently small constant $c > 0$. Then there is a $\textnormal{poly}(N)$ time reduction $\mathcal{A}$ from graphs on $N$ vertices to graphs on $n$ vertices satisfying that $$\begin{aligned}
&\TV\left( \mathcal{A}\left( \mG_E(N, k, p, q) \right), \, \pr{isbm}_D\left(n, r, P_0 + \gamma, P_0 - \frac{\gamma}{r - 1}, P_0 + \frac{\gamma}{(r - 1)^2} \right) \right) \\
&\quad \quad = O\left( \frac{k \mu^3 r^{3/2}}{r^{2t}} + \frac{k}{\sqrt{N}} + e^{-\Omega(N^2/km)} + (kr\ell)^{-1} \right) \\
&\TV\left( \mathcal{A}\left( \mG(N, q) \right), \, \mG(n, P_0) \right) = O\left( e^{-\Omega(N^2/km)} + (kr\ell)^{-1} \right)\end{aligned}$$
Consider the reduction $\mathcal{A}$ that adds a simple post-processing step to $k$<span style="font-variant:small-caps;">-pds-to-isbm</span> as follows. On input graph $G$ with $N$ vertices:
1. Form the graph $G_1$ by applying $k$<span style="font-variant:small-caps;">-pds-to-isbm</span> to $G$ with parameters $N, r, k, E, \ell, n$ and $\mu$ where $\mu$ is given by $$\mu = \frac{r^{t + 1}}{(r - 1)^2} \cdot \Phi^{-1}\left( \frac{1}{2} + \frac{1}{2} \cdot \min\{P_0, 1 - P_0\}^{-1} \cdot \gamma \right)$$ and $\Phi^{-1}$ is the inverse of the standard normal CDF.
2. If $P_0 \le 1/2$, output the graph $G_2$ formed by independently including each edge of $G_1$ in $G_2$ with probability $2P_0$. If $P_0 > 1/2$, form $G_2$ instead by including each edge of $G_1$ in $G_2$ and including each non-edge of $G_1$ in $G_2$ as an edge independently with probability $2P_0 - 1$.
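The role of the choice of $\mu$ in Step 1 can be checked numerically: composing Step 2 with $P_{11} = \Phi(\mu(r-1)^2/r^{t+1})$ returns exactly the within-community density $P_0 + \gamma$. The sketch below uses illustrative values and Python's standard normal CDF and quantile function (not the paper's code):

```python
from statistics import NormalDist

nd = NormalDist()

def mu_for(P0, gamma, r, t):
    """The mean parameter from Step 1 (a numerical sketch)."""
    return r ** (t + 1) / (r - 1) ** 2 * nd.inv_cdf(0.5 + 0.5 * gamma / min(P0, 1 - P0))

def shifted_edge_prob(P0, p_edge):
    """Edge probability after the sparsification/densification in Step 2."""
    if P0 <= 0.5:
        return 2 * P0 * p_edge                   # keep each edge w.p. 2*P0
    return p_edge + (1 - p_edge) * (2 * P0 - 1)  # add each non-edge w.p. 2*P0 - 1

r, t, gamma = 3, 4, 0.05
for P0 in (0.3, 0.7):
    mu = mu_for(P0, gamma, r, t)
    P11 = nd.cdf(mu * (r - 1) ** 2 / r ** (t + 1))
    # Post-processing maps the within-community density to exactly P0 + gamma:
    assert abs(shifted_edge_prob(P0, P11) - (P0 + gamma)) < 1e-6
```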
This clearly runs in $\text{poly}(N)$ time and it suffices to establish its approximate Markov transition properties. Let $\mathcal{A}_1$ and $\mathcal{A}_2$ denote the two steps above with input-output pairs $(G, G_1)$ and $(G_1, G_2)$, respectively. Let $C \subseteq [n]$ be a fixed subset of size $n/r$ and define $$\begin{aligned}
&P_{11} = \Phi\left( \frac{\mu(r - 1)^2}{r^{t +1}}\right), \quad P_{12} = \Phi\left( - \frac{\mu(r - 1)}{r^{t+1}}\right) \quad \textnormal{and} \quad P_{22} = \Phi\left( \frac{\mu}{r^{t +1}}\right) \\
&P_{11}' = P_0 + \gamma, \quad P_{12}' = P_0 - \frac{\gamma}{r - 1} \quad \text{and} \quad P_{22}' = P_0 + \frac{\gamma}{(r - 1)^2}\end{aligned}$$ We will show that $$\label{eqn:density-comparison}
\TV\left( \mathcal{A}_2\left( \pr{isbm}_D\left(n, C, P_{11}, P_{12}, P_{22} \right) \right), \, \pr{isbm}_D\left(n, C, P_{11}', P_{12}', P_{22}' \right) \right) = O\left( \frac{k \mu^3 r^{3/2}}{r^{2t}} \right) = o(1)$$ where the upper bound is $o(1)$ since $kr^{3/2} = o(r^{2t})$. First consider the case where $P_0 \le 1/2$. Step 2 above yields by construction that $$\mathcal{A}_2\left( \pr{isbm}_D\left(n, C, P_{11}, P_{12}, P_{22} \right) \right) \sim \pr{isbm}_D\left(n, C, 2P_0 P_{11}, 2P_0 P_{12}, 2P_0 P_{22} \right)$$ Suppose that $X(\rho) \in \{0, 1\}^m$ is sampled by first sampling $X' \sim \text{Bin}(m, \rho)$ and then letting $X(\rho)$ be selected uniformly at random from all elements of $\{0, 1\}^m$ with support size $X'$. It follows that $X(\rho) \sim \text{Bern}(\rho)^{\otimes m}$ since both distributions are permutation-invariant and their support sizes have the same distribution. Now the data-processing inequality in Fact \[tvfacts\] implies that $$\TV\left( \text{Bern}(\rho)^{\otimes m}, \, \text{Bern}(\rho')^{\otimes m} \right) = \TV\left( X(\rho), X(\rho') \right) \le \TV\left( \text{Bin}(m, \rho), \text{Bin}(m, \rho') \right)$$ which can be upper bounded with Lemma \[lem:bintv\]. Using the fact that the edge indicators of $\pr{isbm}$ conditioned on $C$ are independent, the tensorization property in Fact \[tvfacts\] and Lemma \[lem:bintv\], we now have that $$\begin{aligned}
&\TV\left( \pr{isbm}_D\left(n, C, 2P_0 P_{11}, 2P_0 P_{12}, 2P_0 P_{22} \right), \, \pr{isbm}_D\left(n, C, P_{11}', P_{12}', P_{22}' \right) \right) \\
&\quad \quad \le \TV\left( \text{Bern}(2P_0 P_{11})^{\otimes \binom{n/r}{2}}, \, \text{Bern}(P_{11}')^{\otimes \binom{n/r}{2}} \right) + \TV\left( \text{Bern}(2P_0 P_{12})^{\otimes \frac{n^2(r - 1)}{r^2}}, \, \text{Bern}(P_{12}')^{\otimes \frac{n^2(r - 1)}{r^2}} \right) \\
&\quad \quad \quad \quad + \TV\left( \text{Bern}(2P_0 P_{22})^{\otimes \binom{n(1 - 1/r)}{2}}, \, \text{Bern}(P_{22}')^{\otimes \binom{n(1 - 1/r)}{2}} \right) \\
&\quad \quad \le \left| 2P_0 P_{11} - P_{11}' \right| \cdot \sqrt{\frac{\binom{n/r}{2}}{2P'_{11}(1 - P'_{11})}} + \left| 2P_0 P_{12} - P_{12}' \right| \cdot \sqrt{\frac{n^2(r - 1)}{2r^2 P'_{12}(1 - P'_{12})}} \\
&\quad \quad \quad \quad + \left| 2P_0 P_{22} - P_{22}' \right| \cdot \sqrt{\frac{\binom{n(1 - 1/r)}{2}}{2P'_{22}(1 - P'_{22})}} \\
&\quad \quad \le \left| 2P_0 P_{11} - P_{11}' \right| \cdot O\left( \frac{n}{r} \right) + \left| 2P_0 P_{12} - P_{12}' \right| \cdot O\left( \frac{n}{\sqrt{r}} \right) + \left| 2P_0 P_{22} - P_{22}' \right| \cdot O(n)\end{aligned}$$ where the third inequality uses the fact that $P'_{11}, P'_{12}$ and $P'_{22}$ are each bounded away from $0$ and $1$. Note that the definition of $\mu$ ensures $$\frac{1}{2} + \frac{1}{2P_0} \cdot \gamma = \Phi\left( \frac{\mu (r - 1)^2}{r^{t + 1}} \right)$$ which implies that $2P_0 P_{11} = P_{11}'$. We now use the standard Taylor approximation of the Gaussian distribution function $\Phi$ around zero, given by $\Phi(x) = \frac{1}{2} + \frac{x}{\sqrt{2\pi}} + O(x^3)$ when $x \in (-1, 1)$. Observe that $$\begin{aligned}
\left| 2P_0 P_{12} - P_{12}' \right| &= 2P_0 \cdot \left| \Phi\left( - \frac{\mu(r - 1)}{r^{t+1}}\right) - \frac{1}{2} + \frac{\gamma}{2P_0 (r - 1)} \right| \\
&= 2P_0 \cdot \left| \Phi\left( - \frac{\mu(r - 1)}{r^{t+1}}\right) - \frac{1}{2} + \frac{1}{r - 1} \left( \Phi\left( \frac{\mu (r - 1)^2}{r^{t + 1}} \right) - \frac{1}{2} \right) \right| \\
&= O\left( \frac{\mu^3 r^2}{r^{3t}} \right)\end{aligned}$$ An analogous computation shows that $\left| 2P_0 P_{22} - P_{22}' \right| = O\left( \mu^3/r^{3t - 1} \right)$. Combining all of these bounds now yields Equation (\[eqn:density-comparison\]) after noting that $n = kr\ell = O(kr^t)$ implies that $n\mu^3r^{3/2}/r^{3t} = O(kr^{3/2}/r^{2t})$. A nearly identical argument considering the complement of the graph $G_1$ and replacing $P_0$ with $1 - P_0$ establishes Equation (\[eqn:density-comparison\]) in the case when $P_0 > 1/2$. Now observe that $$\mathcal{A}_2 \left( \mG(n, 1/2) \right) \sim \mG(n, P_0)$$ by definition. Next, consider applying Lemma \[lem:tvacc\] to the steps $\mathcal{A}_1$ and $\mathcal{A}_2$ using a recipe analogous to that in the proof of Theorem \[thm:isbm\]. We have that $\epsilon_1$ is bounded by Theorem \[thm:isbm\] and $\epsilon_2$ is bounded by the argument above. Note that in order to apply Theorem \[thm:isbm\] here, it must be verified that the required bound on $\mu$ is met. Observe that $$\gamma = 2P_0 \left( \Phi\left( \frac{\mu (r - 1)^2}{r^{t + 1}} \right) - \frac{1}{2} \right) = \Theta\left( \frac{\mu}{r^{t - 1}} \right)$$ and hence if $\gamma$ satisfies the upper bound in the statement of the corollary for a sufficiently small constant $c$, then $\mu$ satisfies the requirement in Theorem \[thm:isbm\] since $p$ and $q$ are constant. This application of Lemma \[lem:tvacc\] now yields the desired two approximate Markov transition properties and completes the proof of the corollary.
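As a quick numerical sanity check of the cancellation used in this proof, the following sketch (with illustrative parameter values, not ones prescribed by the argument) verifies that $2P_0 P_{11} = P_{11}'$ holds exactly by the choice of $\mu$, while $2P_0 P_{12} - P_{12}'$ is smaller than the first-order scale $\gamma/(r - 1)$ by a cubic factor:

```python
import math

def Phi(x):
    # Standard Gaussian CDF, expressed via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Illustrative parameter values (not prescribed by the proof)
P0, mu, r, t = 0.4, 0.1, 3, 2

x = mu * (r - 1) / r ** (t + 1)
P11 = Phi(mu * (r - 1) ** 2 / r ** (t + 1))
P12 = Phi(-x)
gamma = 2 * P0 * (P11 - 0.5)      # definition of gamma in terms of mu
P11p = P0 + gamma
P12p = P0 - gamma / (r - 1)

exact = abs(2 * P0 * P11 - P11p)  # cancels exactly by construction
cubic = abs(2 * P0 * P12 - P12p)  # only the O(x^3) Taylor terms survive
linear_scale = 2 * P0 * x / math.sqrt(2 * math.pi)
```

Here `exact` is zero up to floating-point error, while `cubic` is several orders of magnitude below `linear_scale`, reflecting the first-order cancellation in the Taylor expansion of $\Phi$.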
We now show that setting parameters in the reduction of Corollary \[thm:isbm-mod\] according to the recipe set out in Theorems \[thm:rsme-lb\] and \[thm:uslr-lb\] fills out the parameter space for $\pr{isbm}$ obeying the edge density constraints of (\[eqn:isbm-param\]) below the Kesten-Stigum threshold. This proves the following computational lower bound for $\pr{isbm}$. We remark that typically the parameter regime of interest for the $k$-block stochastic block model is when $k = n^{o(1)}$, and thus the conditions $\pr{(t)}$ and $k = o(n^{1/3})$ are only mild restrictions here. Note that the condition $\pr{(t)}$ here is the same condition that was introduced in Section \[subsec:3-rsme\].
[thm:isbm-lb]{} \[Lower Bounds for $\pr{isbm}$\] Suppose that $(n, k)$ satisfy condition $\pr{(t)}$, that $k$ is prime or $k = \omega_n(1)$ and $k = o(n^{1/3})$, and suppose that $P_0 \in (0, 1)$ satisfies $\min\{P_0, 1 - P_0 \} = \Omega_n(1)$. Consider the testing problem $\pr{isbm}(n, k, P_{11}, P_{12}, P_{22})$ where $$P_{11} = P_0 + \gamma, \quad P_{12} = P_0 - \frac{\gamma}{k - 1} \quad \text{and} \quad P_{22} = P_0 + \frac{\gamma}{(k - 1)^2}$$ Then either the $k\pr{-pc}$ conjecture or the $k\pr{-pds}$ conjecture for constant $0 < q < p \le 1$ implies that there is a computational lower bound for $\pr{isbm}(n, k, P_{11}, P_{12}, P_{22})$ at all levels of signal below the Kesten-Stigum threshold of $\gamma^2 = \tilde{o}(k^2/n)$.
It suffices to show that the reduction $\mathcal{A}$ in Corollary \[thm:isbm-mod\] applied with $r \ge 2$ fills out all of the possible growth rates specified by the computational lower bound $\gamma^2 = \tilde{o}(k^2/n)$ and the other conditions in the theorem statement. Fix a constant pair of probabilities $0 < q < p \le 1$ and any sequence of parameters $(n, k, \gamma, P_0)$ all of which are implicitly functions of $n$ such that $(n, k)$ satisfies $\pr{(t)}$ and $$\gamma^2 \le \frac{k^2}{w' \cdot n \log n}, \quad 2(w')^2 k \le n^{1/3} \quad \text{and} \quad \min\{P_0, 1 - P_0 \} = \Omega_n(1)$$ for sufficiently large $n$ and $w' = w'(n) = (\log n)^{c}$ for a sufficiently large constant $c > 0$. Now let $w = w(n) \to \infty$ be an arbitrarily slow-growing increasing positive integer-valued function at least satisfying that $w(n) = n^{o(1)}$. As in the proof of Theorem \[thm:rsme-lb\], we now specify the following in order to fulfill the criteria in Condition \[cond:lb\]:
1. a sequence $(N, k_N)$ such that $k\pr{-pds}(N, k_N, p, q)$ is hard according to Conjecture \[conj:hard-conj\]; and
2. a sequence $(n', k', \gamma, P_0)$ with a subsequence that satisfies three conditions: (2.1) the parameters on the subsequence are in the regime of the desired computational lower bound for $\pr{isbm}$; (2.2) they have the same growth rate as $(n, k, \gamma, P_0)$ on this subsequence; and (2.3) such that $\pr{isbm}$ with the parameters on this subsequence can be produced by $\mathcal{A}$ with input $k\pr{-pds}(N, k_N, p, q)$.
As discussed in Section \[subsec:2-tvreductions\], this is sufficient to prove the theorem. We choose these parameters as follows:
- let $k' = r$ be the smallest prime satisfying that $k \le r \le 2k$, which exists by Bertrand’s postulate and can be found in $\text{poly}(n)$ time;
- let $t$ be such that $r^t$ is the closest power of $r$ to $\sqrt{n}$ and let $$k_N = \left\lfloor \frac{1}{2}\left( 1 + \frac{p}{Q} \right)^{-1} w^{-2} \cdot \min\left\{ r^t, \sqrt{n} \right\} \right\rfloor$$ where $Q = 1 - \sqrt{(1 - p)(1 - q)} + \mathbf{1}_{\{ p = 1\}} \left( \sqrt{q} - 1 \right)$; and
- let $n' = k_N r\ell$ where $\ell = \frac{r^t - 1}{r - 1}$ and let $N = wk_N^2$.
Note that we have that $w^2 r \le n^{1/3}$ since $r \le 2k$. Now observe that we have the following bounds $$\begin{aligned}
n' &\asymp k_N r^t \asymp \left( w^{-2} \cdot \min\left\{ \frac{r^t}{\sqrt{n}}, 1 \right\} \cdot \frac{r^t}{\sqrt{n}} \right) n \\
k_N r^{3/2} &\lesssim w^{-2} \cdot \min\left\{ r^t, \sqrt{n} \right\} \cdot w^{-3} \sqrt{n} \lesssim \left( w^{-4} \cdot \frac{n}{r^{2t}} \right) r^{2t} \\
m &\le 2\left( \frac{p}{Q} + 1 \right) wk_N^2 \le \left( w^{-3} \cdot \frac{\sqrt{n}}{r^t} \right) k_N r^t \\
k_N r \ell &\le \text{poly}(N) \\
\gamma^2 &\le \frac{k^2}{w' \cdot n\log n} = \frac{1}{w' \cdot r^{2t - 2} \log(k_N r \ell)} \cdot \frac{r^{2t} \log(k_N r \ell)}{n\log n} \\
\gamma^2 &\lesssim \frac{r^2}{w' \cdot n' \log n'} \left( w^{-2} \cdot \min\left\{ \frac{r^t}{\sqrt{n}}, 1 \right\} \cdot \frac{r^t}{\sqrt{n}} \right) \cdot \frac{\log n'}{\log n} \lesssim \frac{r^2}{w' \cdot w^2 \cdot n' \log n'} \cdot \frac{r^t}{\sqrt{n}} \end{aligned}$$ where $m$ is the smallest multiple of $k_N$ larger than $\left( \frac{p}{Q} + 1 \right) N$. Now observe that as long as $\sqrt{n} = \tilde{\Theta}(r^t)$ then: (2.1) the last inequality above on $\gamma^2$ would imply that $(n', k', \gamma, P_0)$ is in the desired hard regime; (2.2) $n$ and $n'$ have the same growth rate since $w = n^{o(1)}$, and $k$ and $k' = r$ have the same growth rate since either $k' = k$ or $k' = \Theta(k) = \omega(1)$; and (2.3) the middle four bounds above imply that taking $c$ large enough yields the conditions needed to apply Corollary \[thm:isbm-mod\] to yield the desired reduction. By Lemma \[lem:propT\], there is an infinite subsequence of the input parameters such that $\sqrt{n} = \tilde{\Theta}(r^t)$, which concludes the proof as in Theorem \[thm:rsme-lb\].
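The parameter choices in this proof can be computed directly. The sketch below (the helper names are ours, purely for illustration) finds the smallest prime $r \in [k, 2k]$ guaranteed by Bertrand's postulate via trial division, and the exponent $t$ for which $r^t$ is closest to $\sqrt{n}$:

```python
import math

def is_prime(m: int) -> bool:
    # Trial division suffices here since r <= 2k = poly(n)
    if m < 2:
        return False
    return all(m % d for d in range(2, math.isqrt(m) + 1))

def choose_r(k: int) -> int:
    # Smallest prime r with k <= r <= 2k; exists by Bertrand's postulate
    return next(r for r in range(k, 2 * k + 1) if is_prime(r))

def choose_t(n: int, r: int) -> int:
    # Exponent t >= 1 minimizing |r^t - sqrt(n)|; the gap is unimodal in t
    target = math.sqrt(n)
    t = 1
    while abs(r ** (t + 1) - target) < abs(r ** t - target):
        t += 1
    return t
```

For instance, `choose_r(10)` returns the prime $11$, and for $n = 10^6$ the power of $11$ closest to $\sqrt{n} = 1000$ is $11^3 = 1331$.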
Testing Hidden Partition Models {#sec:3-hidden-partition}
-------------------------------
In this section, we establish statistical-computational gaps based on the $k\pr{-pc}$ and $k\pr{-pds}$ conjectures for detection in the Gaussian and bipartite hidden partition models introduced in Sections \[subsec:1-problems-hidden-partition\] and \[subsec:2-formulations\]. These two models are bipartite analogues of the subgraph variants of the $k$-block stochastic block model in the constant edge density regime. Specifically, they are multiple-community variants of the subgraph stochastic block model considered in [@brennan2018reducibility].
The motivation for considering these two models is to illustrate the versatility of Bernoulli rotations as a reduction primitive. These two models are structurally very different from planted clique yet can be produced through Bernoulli rotations for appropriate choices of the output mean vectors $A_1, A_2, \dots, A_m$. The mean vectors specified in the reduction are vectorizations of the slices of the design tensor $T_{r, t}$ constructed based on the incidence geometry of $\mathbb{F}_r^t$. The definition of $T_{r, t}$ and several of its properties can be found in Section \[subsec:2-design-tensors\]. The reduction in this section demonstrates that natural applications of Bernoulli rotations can require more involved constructions than $K_{r, t}$ in order to produce tight computational lower bounds.
We begin by reviewing the definitions of the two main models considered in this section – Gaussian and bipartite hidden partition models – which were introduced in Sections \[subsec:1-problems-hidden-partition\] and \[subsec:2-formulations\].
\[defn:ghpm\] Let $n, r$ and $K$ be positive integers, let $\gamma \in \mathbb{R}$ and let $C = (C_1, C_2, \dots, C_r)$ be a sequence of disjoint $K$-subsets of $[n]$. Let $D = (D_1, D_2, \dots, D_r)$ be another such sequence. The distribution $\pr{ghpm}_D(n, r, C, D, \gamma)$ over matrices $M \in \mathbb{R}^{n \times n}$ is such that $M_{ij} \sim_{\textnormal{i.i.d.}} \mN(\mu_{ij}, 1)$ where $$\mu_{ij} = \left\{ \begin{array}{ll} \gamma &\textnormal{if } i \in C_h \textnormal{ and } j \in D_h \textnormal{ for some } h \in [r] \\ -\frac{\gamma}{r - 1} &\textnormal{if } i \in C_{h_1} \textnormal{ and } j \in D_{h_2} \textnormal{ where } h_1 \neq h_2 \\ 0 &\textnormal{otherwise} \end{array} \right.$$ for each $i, j \in [n]$. Furthermore, let $\pr{ghpm}_D(n, r, K, \gamma)$ denote the mixture over $\pr{ghpm}_D(n, r, C, D, \gamma)$ induced by choosing $C$ and $D$ independently and uniformly at random.
\[defn:bhpm\] Let $n, r, K, C$ and $D$ be as in Definition \[defn:ghpm\] and let $P_0, \gamma \in (0, 1)$ be such that $\gamma/r \le P_0 \le 1 - \gamma$. The distribution $\pr{bhpm}_D(n, r, C, D, P_0, \gamma)$ over bipartite graphs $G$ with two parts of size $n$, each indexed by $[n]$, is such that each edge $(i, j)$ is included in $G$ independently with the following probabilities $$\bP\left[ (i, j) \in E(G) \right] = \left\{ \begin{array}{ll} P_0 + \gamma &\textnormal{if } i \in C_h \textnormal{ and } j \in D_h \textnormal{ for some } h \in [r] \\ P_0 - \frac{\gamma}{r - 1} &\textnormal{if } i \in C_{h_1} \textnormal{ and } j \in D_{h_2} \textnormal{ where } h_1 \neq h_2 \\ P_0 &\textnormal{otherwise} \end{array} \right.$$ for each $i, j \in [n]$. Let $\pr{bhpm}_D(n, r, K, P_0, \gamma)$ denote the mixture over $\pr{bhpm}_D(n, r, C, D, P_0, \gamma)$ induced by choosing $C$ and $D$ independently and uniformly at random.
The problems we consider in this section are the two simple hypothesis testing problems $\pr{ghpm}$ and $\pr{bhpm}$ from Section \[subsec:2-formulations\], given by $$\begin{array}{lll}
H_0: M \sim \mN(0, 1)^{\otimes n \times n} &\text{and} &H_1: M \sim \pr{ghpm}(n, r, K, \gamma) \\
H_0: G \sim \mG_B(n, n, P_0) &\text{and} &H_1: G \sim \pr{bhpm}(n, r, K, P_0, \gamma)
\end{array}$$ An important remark is that the hypothesis testing formulations above for these two problems seem to have different computational and statistical barriers from the tasks of recovering $C$ and $D$. We now state the following lemma, giving guarantees for a natural polynomial-time test and exponential time test for $\pr{ghpm}$. The proof of this lemma is tangential to the main focus of this section – computational lower bounds for $\pr{ghpm}$ and $\pr{bhpm}$ – and is deferred to Appendix \[subsec:appendix-3-part-3\].
\[lem:ghpm-test\] Given a matrix $M \in \mathbb{R}^{n \times n}$, let $s_C(M) = \sum_{i, j = 1}^n M_{ij}^2 - n^2$ and $$s_I(M) = \max_{C, D} \left\{ \sum_{h = 1}^r \sum_{i \in C_h} \sum_{j \in D_h} M_{ij} \right\}$$ where the maximum is over all pairs $(C, D)$ of sequences of disjoint $K$-subsets of $[n]$. Let $w = w(n)$ be any increasing function with $w(n) \to \infty$ as $n \to \infty$. We prove the following:
1. If $M \sim \pr{ghpm}_D(n, r, K, \gamma)$, then with probability $1 - o_n(1)$ it holds that $$s_C(M) \ge rK^2\gamma^2 + \frac{rK^2}{r - 1} \cdot \gamma^2 - w\left(n + \gamma K \sqrt{r} + \frac{K\gamma}{r} \right) \quad \textnormal{and} \quad s_I(M) \ge rK^2 \gamma - wr^{1/2} K$$
2. If $M \sim \mN(0, 1)^{\otimes n \times n}$, then with probability $1 - o_n(1)$ it holds that $$s_C(M) \le wn \quad \textnormal{and} \quad s_I(M) \le 2r K^{3/2} w\sqrt{\log n + \log r}$$
This lemma implies upper bounds on the computational and statistical barriers for $\pr{ghpm}$. Specifically, it implies that the variance test $s_C$ succeeds above $\gamma_{\text{comp}}^2 = \tilde{\Theta}(n/rK^2)$ and the search test $s_I$ succeeds above $\gamma_{\text{IT}}^2 = \tilde{\Theta}(1/K)$. Thus, showing that there is a computational barrier at this level of signal $\gamma_{\text{comp}}$ is sufficient to show that there is a nontrivial statistical-computational gap for $\pr{ghpm}$. For $P_0$ with $\min\{P_0, 1 - P_0\} = \Omega(1)$, analogous tests show the same upper bounds on $\gamma_{\text{comp}}$ and $\gamma_{\text{IT}}$ for $\pr{bhpm}$.
Consider the case when $n = rK$, which corresponds to a testing variant of the bipartite $k$-block stochastic block model. In this case, the upper bounds shown by the previous lemma coincide at $\gamma_{\text{comp}}^2, \gamma_{\text{IT}}^2 = O(r/n)$ and hence do not support the existence of a statistical-computational gap. The subgraph formulation in which $rK \ll n$ seems crucial to yielding a testing problem with a statistical-computational gap. We also remark that while this testing formulation when $n = rK$ may not have a gap, the task of recovering $C$ and $D$ likely shares the gap conjectured in the $k$-block stochastic block model. Specifically, the conjectured computational barrier at the Kesten-Stigum threshold is at $\gamma^2 = \tilde{\Theta}(r^2/n)$, which lies well above the $r/n$ limit in the testing formulation.
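To make the behavior of the variance test concrete, the following is a small simulation sketch of $s_C$ with toy parameter values and a convenient planted alignment $C_h = D_h$, neither of which is prescribed by the lemma. The planted mean matrix contributes roughly $rK^2\gamma^2 + \frac{rK^2\gamma^2}{r - 1}$ to $s_C$, separating it from null fluctuations of order $n$:

```python
import random

random.seed(7)

def s_C(M):
    # Variance statistic: sum of squared entries, centered by its null mean n^2
    n = len(M)
    return sum(x * x for row in M for x in row) - n * n

n, r, K, gamma = 100, 2, 20, 1.0  # illustrative toy values

# Null: M ~ N(0, 1)^{n x n}
null = [[random.gauss(0.0, 1.0) for _ in range(n)] for _ in range(n)]

# Planted: communities C_h = D_h = {hK, ..., (h + 1)K - 1} for convenience
def mean(i, j):
    ci, cj = i // K, j // K
    if ci >= r or cj >= r:
        return 0.0
    return gamma if ci == cj else -gamma / (r - 1)

planted = [[mean(i, j) + random.gauss(0.0, 1.0) for j in range(n)]
           for i in range(n)]
```

With these values the planted signal contributes about $1600$ to $s_C$, while the null statistic fluctuates on the scale of $\sqrt{2}\,n \approx 141$, so a threshold between the two separates the hypotheses.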
**Algorithm** $k$<span style="font-variant:small-caps;">-pds-to-ghpm</span>
*Inputs*: $k\pr{-pds}$ instance $G \in \mG_N$ with dense subgraph size $k$ that divides $N$, and the following parameters
- partition $E$, edge probabilities $0 < q < p \le 1$, $Q \in (0, 1)$ and $m$ as in Figure \[fig:isbm-reduction\]
- refinement parameter $s$ and number of vertices $n = ksr^t$, where $r$ is a prime number and $\ell = \frac{r^t - 1}{r - 1}$ for some $t \in \mathbb{N}$ satisfying that $m \le ks(r - 1)\ell \le \text{poly}(N)$
- mean parameter $\mu \in (0, 1)$ as in Figure \[fig:isbm-reduction\]
1. *Symmetrize and Plant Diagonals*: Compute $M_{\text{PD1}} \in \{0, 1\}^{m \times m}$ and $F$ as in Step 1 of Figure \[fig:isbm-reduction\].
2. *Pad and Further Partition*: Form $M_{\text{PD2}}$ and $F'$ as in Step 2 of Figure \[fig:isbm-reduction\] modified so that $M_{\text{PD2}}$ is a $ks(r-1)\ell \times ks(r-1)\ell$ matrix and each $F'_i$ has size $s(r-1)\ell$. Let $F^s$ be the partition of $[ks(r - 1)\ell]$ into $ks$ parts of size $(r - 1)\ell$ obtained by refining $F'$, splitting each of its parts arbitrarily into $s$ parts of equal size.
3. *Bernoulli Rotations*: Let $F^o$ be a partition of $[ksr^t]$ into $ks$ equally sized parts. Now compute the matrix $M_{\text{R}} \in \mathbb{R}^{ksr^t \times ksr^t}$ as follows:
1. For each $i, j \in [ks]$, flatten the $(r-1)\ell \times (r-1)\ell$ submatrix $(M_{\text{PD2}})_{F_i^s, F_j^s}$ into a vector $V_{ij} \in \mathbb{R}^{(r-1)^2 \ell^2}$ and let $A = M_{r, t}^\top \in \mathbb{R}^{r^{2t} \times (r-1)^2 \ell^2}$ as in Definition \[defn:unfolded-Trt\].
2. Apply $\pr{Bern-Rotations}$ to $V_{ij}$ with matrix $A$, rejection kernel parameter $R_{\pr{rk}} = ksr^t$, Bernoulli probabilities $0 < Q < p \le 1$, output dimension $r^{2t}$, $\lambda = \sqrt{1 + (r - 1)^{-1}}$ and mean parameter $\mu$.
3. Set the entries of $(M_{\text{R}})_{F^o_i, F^o_j}$ to be the entries of the output in (2) unflattened into a matrix.
4. *Permute and Output*: Output the matrix $M_{\text{R}}$ with its rows and columns independently permuted uniformly at random.
The rest of this section is devoted to giving our main reduction $k$<span style="font-variant:small-caps;">-pds-to-ghpm</span> showing a computational barrier at $\gamma^2 = \tilde{o}(n/rK^2)$. This reduction is shown in Figure \[fig:sbmtesting\] and its approximate Markov transition guarantees are stated in the theorem below. The intuition for why our reduction is tight against the test $s_C$ is as follows. Bernoulli rotations approximately preserve the signal to noise ratio in $\ell_2$ norm when the output dimension is comparable to the input dimension, i.e. when $m \asymp n$. Much of the effort in constructing $T_{r, t}$ and $M_{r, t}$ in Section \[subsec:2-design-tensors\] was devoted to the linear functions $L$, which are crucial in designing $M_{r, t}$ to be nearly square and hence achieve $m \asymp n$ in Bernoulli rotations. Any reduction that approximately preserves the signal to noise ratio in $\ell_2$ norm will be tight against a variance test such as $s_C$.
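The norm-preservation intuition can be illustrated in a toy setting. The $2 \times 4$ matrix below is a hypothetical stand-in for the role of $M_{r, t}^\top$, not the actual design matrix: any matrix with orthonormal rows maps isotropic Gaussian noise to isotropic Gaussian noise, and preserves the $\ell_2$ norm of any signal lying in its row space.

```python
import math

# Toy matrix with orthonormal rows (2 x 4); a stand-in for M_{r,t}^T
A = [[0.5, 0.5, 0.5, 0.5],
     [0.5, -0.5, 0.5, -0.5]]

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Rows are orthonormal: A A^T = I, so A z ~ N(0, I_2) when z ~ N(0, I_4)
gram = [[dot(A[i], A[j]) for j in range(2)] for i in range(2)]

# A signal u in the row space of A keeps its l2 norm: ||A u|| = ||u||
u = [1.0, 1.0, 1.0, 1.0]           # proportional to the first row of A
Au = matvec(A, u)
norm_in = math.sqrt(dot(u, u))
norm_out = math.sqrt(dot(Au, Au))
```

When the output dimension is much smaller than the input dimension, part of the signal is necessarily projected away, which is why the near-square shape of $M_{r, t}$ matters.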
The key to the reduction $k$<span style="font-variant:small-caps;">-pds-to-ghpm</span> lies in the construction of $T_{r, t}$ and $M_{r, t}$ in Section \[subsec:2-design-tensors\]. The rest of the proof of the following theorem is similar to the proofs in the previous section, and we omit the overlapping details for brevity. We recall from Section \[subsec:2-notation\] that, given a matrix $M \in \mathbb{R}^{n \times n}$, the matrix $M_{S, T} \in \mathbb{R}^{k \times k}$ where $S, T$ are $k$-subsets of $[n]$ refers to the minor of $M$ restricted to the row indices in $S$ and column indices in $T$. Furthermore, $(M_{S, T})_{i, j} = M_{\sigma_S(i), \sigma_T(j)}$ where $\sigma_S : [k] \to S$ is the unique order-preserving bijection and $\sigma_T$ is analogously defined.
\[thm:ghpm\] Let $N$ be a parameter and $r = r(N) \ge 2$ be a prime number. Fix initial and target parameters as follows:
- [Initial]{.nodecor} $k\pr{-pds}$ [Parameters:]{.nodecor} $k, N, p, q$ and $E$ as in Theorem \[thm:isbm\].
- [Target]{.nodecor} $\pr{ghpm}$ [Parameters:]{.nodecor} $(n, r, K, \gamma)$ where $n = ksr^t$, $K = kr^{t - 1}$ and $\ell = \frac{r^t - 1}{r - 1}$ for some parameters $t = t(N), s = s(N) \in \mathbb{N}$ satisfying that $$m \le ks(r - 1)\ell \le \textnormal{poly}(N)$$ where $m$ and $Q$ are as in Theorem \[thm:isbm\]. The target level of signal $\gamma$ is given by $\gamma = \frac{\mu(r - 1)}{r^t\sqrt{r}}$ where $$\mu \le \frac{1}{2 \sqrt{6\log (ksr^t) + 2\log (p - Q)^{-1}}} \cdot \min \left\{ \log \left( \frac{p}{Q} \right), \log \left( \frac{1 - Q}{1 - p} \right) \right\}$$
Let $\mathcal{A}(G)$ denote $k$<span style="font-variant:small-caps;">-pds-to-ghpm</span> applied to the graph $G$ with these parameters. Then $\mathcal{A}$ runs in $\textnormal{poly}(N)$ time and it follows that $$\begin{aligned}
\TV\left( \mathcal{A}\left( \mG_E(N, k, p, q) \right), \, \pr{ghpm}_D(n, r, K, \gamma) \right) &= O\left( \frac{k}{\sqrt{N}} + e^{-\Omega(N^2/km)} + (ksr^t)^{-1} \right) \\
\TV\left( \mathcal{A}\left( \mG(N, q) \right), \, \mN(0, 1)^{\otimes n \times n} \right) &= O\left( e^{-\Omega(N^2/km)} + (ksr^t)^{-1} \right)\end{aligned}$$
In order to state the approximate Markov transition guarantees of the Bernoulli rotations step of $k$<span style="font-variant:small-caps;">-pds-to-ghpm</span>, we need the formalism from Section \[subsec:2-design-tensors\] to describe the matrix $M_{r, t}$, tensor $T_{r, t}$ and their community alignment properties. While this will require a plethora of cumbersome notation, the goal of the ensuing discussion is simple – we will show that Lemma \[lem:comm-align-tensors\] guarantees that stitching together the individual applications of $\pr{Bern-Rotations}$ in Step 3 of $k$<span style="font-variant:small-caps;">-pds-to-ghpm</span> yields a valid instance of $\pr{ghpm}$.
Recall $\mathcal{C}(M^{1, 1}, M^{1, 2}, \dots, M^{ks, ks})$ denotes the concatenation of $k^2s^2$ matrices $M^{i, j} \in \mathbb{R}^{r^t \times r^t}$ into a $ksr^t \times ksr^t$ matrix, as introduced in Section \[subsec:2-design-tensors\]. Given a partition $F$ of $[ksr^t]$ into $ks$ equally sized parts, let $\mathcal{C}_{F}(M^{1, 1}, M^{1, 2}, \dots, M^{ks, ks})$ denote the concatenation of the $M^{i, j}$, where now the entries of $M^{i, j}$ appear in $\mathcal{C}_{F}$ on the index set $F_i \times F_j$. For consistency, we fix a canonical embedding of the row and column indices of $\mathbb{R}^{r^t \times r^t}$ to $F_i \times F_j$ by always preserving the order of indices.
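The index bookkeeping of $\mathcal{C}_F$ amounts to placing each block on $F_i \times F_j$ through the order-preserving embeddings. A minimal sketch (the function name is ours, for illustration only):

```python
def concat_blocks(parts, blocks):
    """Place blocks[i][j] on parts[i] x parts[j], preserving index order.

    parts: list of sorted, disjoint index lists partitioning range(n)
    blocks: blocks[i][j] is a len(parts[i]) x len(parts[j]) matrix
    """
    n = sum(len(p) for p in parts)
    M = [[0 for _ in range(n)] for _ in range(n)]
    for i, Fi in enumerate(parts):
        for j, Fj in enumerate(parts):
            for a, row_idx in enumerate(Fi):
                for b, col_idx in enumerate(Fj):
                    M[row_idx][col_idx] = blocks[i][j][a][b]
    return M

# Tiny example: two interleaved parts of {0, 1, 2, 3}
parts = [[0, 2], [1, 3]]
blocks = [[[[1, 1], [1, 1]], [[2, 2], [2, 2]]],
          [[[3, 3], [3, 3]], [[4, 4], [4, 4]]]]
M = concat_blocks(parts, blocks)
```

In the example, the entries of block $(i, j)$ land exactly on the index set $F_i \times F_j$, even though the parts interleave.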
Let $F^o$ and $F^s$ be fixed partitions of $[ksr^t]$ and $[ks(r - 1)\ell]$ into $ks$ parts of size $r^t$ and $(r - 1)\ell$, respectively, and let $S \subseteq [ks(r - 1)\ell]$ be such that $|S| = k$ and $S$ intersects each part of $F^s$ in at most one element. Now let $\mathbf{M}_{S, F^s, F^o}(T_{r, t}) \in \mathbb{R}^{ksr^t \times ksr^t}$ be the matrix $$\mathbf{M}_{S, F^s, F^o}(T_{r, t}) = \mathcal{C}_{F^o}\left(M^{1, 1}, M^{1, 2}, \dots, M^{ks, ks}\right) \quad \text{where} \quad M^{i, j} = \left\{ \begin{array}{ll} T_{r, t}^{(V_{t_i}, V_{t_j}, L_{ij})} &\text{if } S \cap F_i^s \neq \emptyset \text{ and } S \cap F_j^s \neq \emptyset \\ 0 &\text{otherwise} \end{array} \right.$$ where $t_i, t_j$ and $L_{ij}$ are given by:
- let $\sigma : [ks(r - 1)\ell] \to [ks(r - 1)\ell]$ be the unique bijection transforming the partition $F^s$ to the canonical contiguous partition $\{1, \dots, (r - 1)\ell\} \cup \cdots \cup \{(ks - 1)(r - 1)\ell + 1, \dots, ks(r -1)\ell\}$ while preserving ordering on each part $F^s_i$ for $1 \le i \le ks$;
- let $s'_i$ be the unique element in $\sigma(S \cap F^s_i)$ for each $i$ for which this intersection is nonempty, and let $s_i$ be the unique positive integer with $1 \le s_i \le (r - 1)\ell$ and $s_i \equiv s_i' \pmod{(r - 1)\ell}$; and
- $t_i, t_j$ and $L_{ij}$ are as in Lemma \[lem:comm-align-tensors\] given these $s_i$; that is, $t_i$ and $t_j$ are the unique $1 \le t_i, t_j \le \ell$ such that $t_i \equiv s_i \pmod{\ell}$ and $t_j \equiv s_j \pmod{\ell}$, and $L_{ij} : \mathbb{F}_r \to \mathbb{F}_r$ is given by $L_{ij}(x) = a_i x + a_j$ where $a_i = \lceil s_i/\ell \rceil$ and $a_j = \lceil s_j/\ell \rceil$.
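The bookkeeping defining $t_i$, $a_i$ and $L_{ij}$ is simple modular arithmetic. A sketch with hypothetical helper names:

```python
def decompose(s_i: int, ell: int):
    """Given 1 <= s_i <= (r - 1) * ell, return (t_i, a_i) with
    1 <= t_i <= ell, t_i congruent to s_i mod ell, and a_i = ceil(s_i / ell)."""
    t_i = ((s_i - 1) % ell) + 1
    a_i = -(-s_i // ell)  # ceiling division
    return t_i, a_i

def L(s_i: int, s_j: int, ell: int, r: int):
    """The affine map L_{ij}(x) = a_i * x + a_j over F_r."""
    _, a_i = decompose(s_i, ell)
    _, a_j = decompose(s_j, ell)
    return lambda x: (a_i * x + a_j) % r
```

For example, with $\ell = 4$ and $s_i = 5$ this yields $t_i = 1$ and $a_i = 2$, matching the representative-and-quotient decomposition above.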
The next lemma makes explicit the implications of Lemma \[lem:bern-rotations\] and Lemma \[lem:comm-align-tensors\] for the approximate Markov transition guarantees of Step 3 in $k$<span style="font-variant:small-caps;">-pds-to-ghpm</span>. The proof follows a similar structure to the proof of Lemma \[lem:isbm-rotations\] and we omit identical details.
\[lem:ghpm-rotations\] Let $F^o$ and $F^s$ be fixed partitions of $[ksr^t]$ and $[ks(r - 1)\ell]$ into $ks$ parts of size $r^t$ and $(r - 1)\ell$, respectively, and let $S \subseteq [ks(r - 1)\ell]$ be such that $|S| = k$ and $|S \cap F_i^s| \le 1$ for each $1 \le i \le ks$. Let $\mathcal{A}_{\textnormal{3}}$ denote Step 3 of $k\pr{-pds-to-ghpm}$ with input $M_{\textnormal{PD2}}$ and output $M_{\textnormal{R}}$. Suppose that $p, Q$ and $\mu$ are as in Theorem \[thm:isbm\]. Then it follows that $$\begin{aligned}
&\TV\Big( \mathcal{A}_{\textnormal{3}} \left( \mathcal{M}_{[ks(r - 1)\ell] \times [ks(r - 1)\ell]} \left( S \times S, \textnormal{Bern}(p), \textnormal{Bern}(Q) \right) \right), \\
&\quad \quad \quad \quad \left. \mL\left( \mu \sqrt{\frac{r - 1}{r}} \cdot \mathbf{M}_{S, F^s, F^o}(T_{r, t}) + \mN(0, 1)^{\otimes ksr^t \times ksr^t} \right) \right) = O\left((ksr^t)^{-1}\right) \\
&\TV\left( \mathcal{A}_{\textnormal{3}} \left(\textnormal{Bern}(Q)^{\otimes ks(r - 1)\ell \times ks(r - 1)\ell} \right), \, \mN(0, 1)^{\otimes ksr^t \times ksr^t} \right) = O\left((ksr^t)^{-1}\right)\end{aligned}$$ and furthermore, for all such subsets $S$, it holds that the matrix $\mathbf{M}_{S, F^s, F^o}(T_{r, t})$ has zero entries other than in a $kr^t \times kr^t$ submatrix, which is also $r$-block as defined in Section \[subsec:2-design-tensors\].
Define $s_i', s_i, t_i$ and $L_{ij}$ as in the preceding discussion for all $i, j$ with $S \cap F_i^s$ and $S \cap F_j^s$ nonempty. Let (1) and (2) denote the following two cases:
1. $M_{\textnormal{PD2}} \sim \mathcal{M}_{[ks(r - 1)\ell] \times [ks(r - 1)\ell]} \left( S \times S, \textnormal{Bern}(p), \textnormal{Bern}(Q) \right)$; and
2. $M_{\textnormal{PD2}} \sim \textnormal{Bern}(Q)^{\otimes ks(r - 1)\ell \times ks(r - 1)\ell}$.
Now define the matrix $M_{\text{R}}'$ with independent entries such that $$\left( M_{\text{R}}' \right)_{F_i^o, F_j^o} \sim \left\{ \begin{array}{ll} \mu \sqrt{\frac{r - 1}{r}} \cdot T_{r, t}^{(V_{t_i}, V_{t_j}, L_{ij})} + \mN(0, 1)^{\otimes r^t \times r^t} &\text{if (1) holds, } S \cap F_i^s \neq \emptyset \text{ and } S \cap F_j^s \neq \emptyset \\ \mN(0, 1)^{\otimes r^t \times r^t} &\text{otherwise if (1) holds or if (2) holds} \end{array} \right.$$ for each $1 \le i, j \le ks$. The vectorization and ordering conventions we adopt imply that if $S \cap F_i^s \neq \emptyset$ and $S \cap F_j^s \neq \emptyset$, then the unflattening of the row with index $(s_i - 1) (r - 1)\ell + s_j$ in $M_{r, t}$ is the approximate mean of the output minor $( M_{\text{R}} )_{F_i^o, F_j^o}$ when applying Lemma \[lem:bern-rotations\] under (1). By Definition \[defn:unfolded-Trt\] and the definitions of $a_i, t_i$ and $L_{ij}$, this unflattened row is exactly the matrix $$M^{i, j} = T_{r, t}^{(V_{t_i}, V_{t_j}, L_{ij})}$$ Combining this observation with Lemmas \[lem:bern-rotations\] and \[lem:Mrtsv\] yields that under both (1) and (2), we have that $$\TV\left( \left( M_{\text{R}} \right)_{F_i^o, F_j^o}, \left( M_{\text{R}}' \right)_{F_i^o, F_j^o} \right) = O\left( r^{2t} \cdot (ksr^t)^{-3} \right)$$ for all $1 \le i, j \le ks$. Through the same argument as in Lemma \[lem:isbm-rotations\], the tensorization property of total variation in Fact \[tvfacts\] now yields that $\TV\left( \mL(M_{\text{R}}), \mL(M_{\text{R}}') \right) = O\left( (ksr^t)^{-1} \right)$ under both (1) and (2). 
Now note that the definition of $\mathcal{C}_{F^o}$ implies that $$M_{\text{R}}' \sim \left\{ \begin{array}{ll} \mu \sqrt{\frac{r - 1}{r}} \cdot \mathbf{M}_{S, F^s, F^o}(T_{r, t}) + \mN(0, 1)^{\otimes ksr^t \times ksr^t} &\text{if (1) holds} \\ \mN(0, 1)^{\otimes ksr^t \times ksr^t} &\text{if (2) holds} \end{array} \right.$$ which completes the proof of the approximate Markov transition guarantees in the lemma statement. Now note that $\mathbf{M}_{S, F^s, F^o}(T_{r, t})$ is zero everywhere other than on the union $U$ of the $F^o_i$ over the $i$ such that $S \cap F^s_i \neq \emptyset$. There are exactly $k$ such $i$ and thus $|U| = kr^t$. Note that $r$-block matrices remain $r$-block matrices under permutations of column and row indices, and therefore Lemma \[lem:comm-align-tensors\] implies the same conclusion if $\mathcal{C}$ is replaced by $\mathcal{C}_{F^o}$. Applying Lemma \[lem:comm-align-tensors\] to the submatrix of $\mathbf{M}_{S, F^s, F^o}(T_{r, t})$ restricted to the indices of $U$ now completes the proof of the lemma.
We now complete the proof of Theorem \[thm:ghpm\], again applying Lemma \[lem:tvacc\] as in the proofs of Theorems \[thm:isgmreduction\] and \[thm:isbm\]. In this theorem, we let $\mathcal{U}_n^k(F)$ denote the uniform distribution over subsets $S \subseteq [n]$ of size $k$ intersecting each part of the partition $F$ in at most one element. When $F$ has exactly $k$ parts, this definition recovers the previously defined distribution $\mathcal{U}_n(F)$.
Let the steps of $\mathcal{A}$ map inputs to outputs as follows $$(G, E) \xrightarrow{\mathcal{A}_1} (M_{\text{PD1}}, F) \xrightarrow{\mathcal{A}_2} (M_{\text{PD2}}, F^s) \xrightarrow{\mathcal{A}_3} (M_{\text{R}}, F^o) \xrightarrow{\mathcal{A}_{\text{4}}} M_{\text{R}}'$$ where here $M_{\text{R}}'$ denotes the permuted form of $M_{\text{R}}$ after Step 4. Under $H_1$, consider Lemma \[lem:tvacc\] applied to the following sequence of distributions $$\begin{aligned}
\mathcal{P}_0 &= \mG_E(N, k, p, q) \\
\mathcal{P}_1 &= \mathcal{M}_{[m] \times [m]}(S \times S, \textnormal{Bern}(p), \textnormal{Bern}(Q)) \quad \text{where } S \sim \mU_m(F) \\
\mathcal{P}_2 &= \mathcal{M}_{[ks(r - 1)\ell] \times [ks(r - 1)\ell]}(S \times S, \textnormal{Bern}(p), \textnormal{Bern}(Q)) \quad \text{where } S \sim \mU_{ks(r - 1)\ell}^k(F^s) \\
\mathcal{P}_3 &= \mu \sqrt{\frac{r - 1}{r}} \cdot \mathbf{M}_{S, F^s, F^o}(T_{r, t}) + \mN(0, 1)^{\otimes ksr^t \times ksr^t} \quad \text{where } S \sim \mU_{ks(r - 1)\ell}^k(F^s) \\
\mathcal{P}_{\text{4}} &= \pr{ghpm}_D\left(ksr^t, r, kr^{t - 1}, \frac{\mu(r - 1)}{r^t \sqrt{r}} \right)\end{aligned}$$ Let $C_Q = \max\left\{ \frac{Q}{1 - Q}, \frac{1 - Q}{Q} \right\}$ and consider setting $$\epsilon_1 = 4k \cdot \exp\left( - \frac{Q^2N^2}{48pkm} \right) + \sqrt{\frac{C_Q k^2}{2m}}, \quad \epsilon_2 = 0, \quad \epsilon_3 = O\left( (ksr^t)^{-1} \right) \quad \text{and} \quad \epsilon_4 = 0$$ As in the proof of Theorem \[thm:isbm\], Lemma \[lem:submatrix\] implies this is a valid choice of $\epsilon_1$ and $\mathcal{A}_2$ is exact so we can take $\epsilon_2 = 0$. The choice of $\epsilon_3$ is valid by applying Lemma \[lem:ghpm-rotations\] and averaging over $S \sim \mU^k_{ks(r - 1)\ell}(F^s)$ using the conditioning property of total variation in Fact \[tvfacts\]. Now note that the $kr^t \times kr^t$ $r$-block submatrix of $\mathbf{M}_{S, F^s, F^o}(T_{r, t})$ has entries $\frac{r - 1}{r^t\sqrt{r - 1}}$ and $-\frac{1}{r^t \sqrt{r - 1}}$. Thus the matrix $\mu \sqrt{\frac{r - 1}{r}} \cdot \mathbf{M}_{S, F^s, F^o}(T_{r, t})$ is of the form of the mean matrix $(\mu_{ij})_{1 \le i, j \le ksr^t}$ in Definition \[defn:ghpm\] for some choice of $C$ and $D$ where $K = kr^{t - 1}$ and $$\gamma = \mu \sqrt{\frac{r - 1}{r}} \cdot \frac{r - 1}{r^t\sqrt{r - 1}} = \frac{\mu(r - 1)}{r^t \sqrt{r}}$$ This implies that permuting the rows and columns of $\mP_3$ yields $\mP_4$ exactly with $\epsilon_4 = 0$. Applying Lemma \[lem:tvacc\] now yields the first bound in the theorem statement. 
Under $H_0$, consider the distributions $$\mathcal{P}_0 = \mG(N, q), \quad \mathcal{P}_1 = \text{Bern}(Q)^{\otimes m \times m}, \quad \mathcal{P}_2 = \text{Bern}(Q)^{\otimes ks(r - 1)\ell \times ks(r - 1)\ell}, \quad \mathcal{P}_3 = \mathcal{P}_{\text{4}} = \mN(0, 1)^{\otimes ksr^t \times ksr^t}$$ As above, Lemmas \[lem:submatrix\] and \[lem:ghpm-rotations\] imply that we can take $\epsilon_1 = 4k \cdot \exp\left( - \frac{Q^2N^2}{48pkm} \right)$ and $\epsilon_2, \epsilon_3$ and $\epsilon_4$ as above. Lemma \[lem:tvacc\] now yields the second bound in the theorem statement.
We now append a final post-processing step to the reduction $k$<span style="font-variant:small-caps;">-pds-to-ghpm</span> to map to $\pr{bhpm}$. The proof of the following corollary is similar to that of Corollary \[thm:isbm-mod\] and is deferred to Appendix \[subsec:appendix-3-part-3\].
\[cor:bhpm\] Let $0 < q < p \le 1$ be constant and let the parameters $k, N, E, r, \ell, n, s$ and $K$ be as in Theorem \[thm:ghpm\] with the additional condition that $k\sqrt{r} = o(r^{2t})$. Let $\gamma \in (0, 1)$ be such that $$\gamma \le \frac{c(r - 1)}{r^t \sqrt{r\log(ksr^t)}}$$ for a sufficiently small constant $c > 0$. Suppose that $P_0$ satisfies $\min\{P_0, 1 - P_0 \} = \Omega(1)$. Then there is a $\textnormal{poly}(N)$ time reduction $\mathcal{A}$ from graphs on $N$ vertices to graphs on $n$ vertices satisfying that $$\begin{aligned}
\TV\left( \mathcal{A}\left( \mG_E(N, k, p, q) \right), \, \pr{bhpm}_D(n, r, K, P_0, \gamma) \right) &= O\left( \frac{k \mu^3\sqrt{r}}{r^{2t}} + \frac{k}{\sqrt{N}} + e^{-\Omega(N^2/km)} + (ksr^t)^{-1} \right) \\
\TV\left( \mathcal{A}\left( \mG(N, q) \right), \, \mG_B(n, n, P_0) \right) &= O\left( e^{-\Omega(N^2/km)} + (ksr^t)^{-1} \right)\end{aligned}$$
Collecting the results of this section, we arrive at the following computational lower bounds for $\pr{ghpm}$ and $\pr{bhpm}$ matching the efficient test $s_C$ in Lemma \[lem:ghpm-test\].
[thm:ghpm-lb]{} \[Lower Bounds for $\pr{ghpm}$ and $\pr{bhpm}$\] Suppose that $r^2 K^2 = \tilde{\omega}(n)$ and $(\lceil r^2 K^2/n \rceil, r)$ satisfies condition $\pr{(t)}$, suppose $r$ is prime or $r = \omega_n(1)$, and suppose that $P_0 \in (0, 1)$ satisfies $\min\{P_0, 1 - P_0 \} = \Omega_n(1)$. Then the $k\pr{-pc}$ conjecture or $k\pr{-pds}$ conjecture for constant $0 < q < p \le 1$ both imply that there is a computational lower bound for $\pr{ghpm}(n, r, K, \gamma)$ at all levels of signal $\gamma^2 = \tilde{o}(n/rK^2)$. This same lower bound also holds for $\pr{bhpm}(n, r, K, P_0, \gamma)$ given the additional condition $n = o(rK^{4/3})$.
The proof of this theorem will follow that of Theorem \[thm:isbm-lb\] with several modifications. We begin by showing a lower bound for $\pr{ghpm}$. It suffices to show that the reduction $k\pr{-pds-to-ghpm}$ fills out all of the possible growth rates specified by the computational lower bound $\gamma^2 = \tilde{o}(n/rK^2)$ and the other conditions in the theorem statement. Fix a constant pair of probabilities $0 < q < p \le 1$ and any sequence of parameters $(n, r, K, \gamma)$ all of which are implicitly functions of $n$ such that $(\lceil r^2 K^2/n \rceil, r)$ satisfies $\pr{(t)}$ and $$\gamma^2 \le c \cdot \frac{n}{w' \cdot rK^2 \log n} \quad \text{and} \quad r^2 K^2 \ge w'n$$ for sufficiently large $n$ and $w' = w'(n) = (\log n)^{c}$ for a sufficiently large constant $c > 0$. Now let $w = w(n) \to \infty$ be an arbitrarily slow-growing increasing positive integer-valued function at least satisfying that $w(n) = n^{o(1)}$. As in Theorem \[thm:isbm-lb\], we now specify the following parameters which are sufficient to establish the lower bound for $\pr{ghpm}$:
1. a sequence $(N, k_N)$ such that $k\pr{-pds}(N, k_N, p, q)$ is hard according to Conjecture \[conj:hard-conj\]; and
2. a sequence $(n', r', K', \gamma, s, t, \mu)$ with a subsequence that satisfies three conditions: (2.1) the parameters on the subsequence are in the regime of the desired computational lower bound for $\pr{ghpm}$; (2.2) the parameters $(n', r', K', \gamma)$ have the same growth rate as $(n, r, K, \gamma)$ on this subsequence; and (2.3) such that $\pr{ghpm}(n', r', K', \gamma)$ with the parameters on this subsequence can be produced by $k\pr{-pds-to-ghpm}$ with input $k\pr{-pds}(N, k_N, p, q)$ applied with additional parameters $s, t$ and $\mu$.
We choose these parameters as follows:
- let $r'$ be the smallest prime satisfying that $r \le r' \le 2r$, which exists by Bertrand’s postulate and can be found in $\text{poly}(n)$ time;
- let $t$ be such that $(r')^{t}$ is the closest power of $r'$ to $r'K/\sqrt{n}$, let $s = \lceil n/r'K \rceil$ and let $\mu = \frac{\gamma (r')^t \sqrt{r'}}{r' - 1}$;
- now let $k_N$ be given by $$k_N = \left\lfloor \frac{1}{2}\left( 1 + \frac{p}{Q} \right)^{-1} w^{-2} \cdot \min\left\{ \frac{K}{(r')^{t - 1}}, \sqrt{n} \right\} \right\rfloor$$ where $Q = 1 - \sqrt{(1 - p)(1 - q)} + \mathbf{1}_{\{ p = 1\}} \left( \sqrt{q} - 1 \right)$; and
- let $K' = k_N(r')^{t-1}$, let $n' = k_N s (r')^t$ and let $N = wk_N^2$.
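The bookkeeping in this parameter instantiation is mechanical and can be checked in code. The following Python sketch is ours, not from the text: the function name `instantiate` and the toy inputs are illustrative, primality is tested by trial division (which runs in $\text{poly}(n)$ time, more than sufficient here), and $w, p$ and $Q$ are taken as given.

```python
import math

def is_prime(m):
    # trial division suffices since r' <= 2r = poly(n)
    if m < 2:
        return False
    return all(m % d for d in range(2, math.isqrt(m) + 1))

def instantiate(n, r, K, gamma, w, p, Q):
    # smallest prime r' with r <= r' <= 2r (exists by Bertrand's postulate)
    rp = next(m for m in range(r, 2 * r + 1) if is_prime(m))
    # (r')^t : the power of r' closest to r'K/sqrt(n)
    t = max(1, round(math.log(rp * K / math.sqrt(n), rp)))
    s = math.ceil(n / (rp * K))
    mu = gamma * rp**t * math.sqrt(rp) / (rp - 1)
    k_N = math.floor(0.5 / (1 + p / Q) / w**2
                     * min(K / rp**(t - 1), math.sqrt(n)))
    return dict(r=rp, t=t, s=s, mu=mu, k_N=k_N,
                K=k_N * rp**(t - 1),     # K'
                n=k_N * s * rp**t,       # n'
                N=w * k_N**2)
```

For instance, with $n = 10^6$, $r = 10$ and $K = 10^4$ this selects $r' = 11$ and $t = 2$, so that $(r')^t$ is within a factor of $r'$ of $r'K/\sqrt{n} = 100$.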
Now observe that we have the following bounds $$\begin{aligned}
n' &\asymp k_N s (r')^t \asymp \left( w^{-2} \cdot \min\left\{ 1, \frac{(r')^{t -1}\sqrt{n}}{K} \right\} \right) n \\
K' &\asymp k_N (r')^{t - 1} = \frac{n'}{r's} \asymp \left( w^{-2} \cdot \min\left\{ 1, \frac{(r')^{t -1}\sqrt{n}}{K} \right\} \right) K \\
m &\le 2\left( \frac{p}{Q} + 1 \right) wk_N^2 \le \left( w^{-1} \cdot \min\left\{ \frac{K}{(r')^{t - 1} \sqrt{n}}, 1 \right\} \cdot \frac{K}{(r')^{t - 1} \sqrt{n}} \right) k_N s (r' - 1)\ell \\
k_N s (r' - 1)\ell&\le \text{poly}(N) \\
(r')^2 (K')^2 &\ge \left( \frac{r'K'}{rK} \right)^2 \cdot \frac{n}{n'} \cdot w' n' \\
\mu &= \frac{\gamma (r')^t \sqrt{r'}}{r' - 1} \le \sqrt{\frac{r'}{r}} \cdot \frac{(r')^{t - 1}\sqrt{n}}{K} \cdot \frac{2}{(w')^{1/2} \sqrt{\log n}} \\
\gamma^2 &\lesssim \frac{n'}{w' \cdot r'(K')^2 \log n'} \cdot \frac{r'}{r} \cdot \frac{n'}{n} \cdot \frac{(K')^2}{K^2} \cdot \frac{\log n'}{\log n}\end{aligned}$$ where $m$ is the smallest multiple of $k_N$ larger than $\left( \frac{p}{Q} + 1 \right) N$ and $\ell = \frac{(r')^t - 1}{r' - 1}$. Now observe that as long as $r'K/\sqrt{n} = \tilde{\Theta}((r')^t)$ then: (2.1) the last inequality above on $\gamma^2$ would imply that $(n', r', K', \gamma)$ is in the desired hard regime; (2.2) the pairs of parameters $(n, n')$, $(K, K')$ and $(r, r')$ have the same growth rates since $w = n^{o(1)}$ and either $r' = r$ or $r' = \Theta(r) = \omega(1)$; and (2.3) the third through sixth bounds above imply that taking $c$ large enough yields the conditions needed to apply Theorem \[thm:ghpm\] to yield the desired reduction. By Lemma \[lem:propT\], there is an infinite subsequence of the input parameters such that $r'K/\sqrt{n} = \tilde{\Theta}((r')^t)$, which concludes the proof of the lower bound for $\pr{ghpm}$ as in Theorems \[thm:rsme-lb\] and \[thm:isbm-lb\].
The computational lower bound for $\pr{bhpm}$ follows from the same argument applied to $\mathcal{A}$ from Corollary \[cor:bhpm\] with the following modification. The conditions in the theorem statement for $\pr{bhpm}$ add the initial condition that $rK^{4/3} \ge w'n$. The parameter settings above then imply that $k_N \sqrt{r'} = \tilde{o}((r')^{2t})$ holds on the parameter subsequence with $r'K/\sqrt{n} = \tilde{\Theta}((r')^t)$. The same reasoning above then yields the desired computational lower bound for $\pr{bhpm}$ and completes the proof of the theorem.
Semirandom Single Community Recovery {#sec:semirandom}
------------------------------------
In this section, we show that the $k\pr{-pc}$ and $k\pr{-pds}$ conjectures with constant edge density imply the $\pr{pds}$ Recovery Conjecture under a semirandom adversary in the regime of constant ambient edge density. The $\pr{pds}$ Recovery Conjecture and formulations of semirandom single community recovery here are as introduced in Sections \[subsec:1-problems-semicr\] and \[subsec:2-formulations\]. Our reduction from $k\pr{-pds}$ to $\pr{semi-cr}$ is shown in Figure \[fig:semirandreduction\]. On a high level, our main observation is that an adversary in $\pr{semi-cr}$ with subgraph size $k$ can simulate the problem of detecting the presence of a hidden $\pr{isbm}$ instance on a subgraph with $O(k)$ vertices in an $n$-vertex Erdős-Rényi graph. Combining the Bernoulli rotations step with $K_{3, t}$ as in $k\pr{-pds-to-isbm}$ with the partition refinement of $k\pr{-pds-to-ghpm}$ can be shown to map to this detection problem. Moreover, this reduction faithfully recovers the Kesten-Stigum bound from the $\pr{pds}$ Recovery Conjecture, as opposed to the slower detection rate. The key proofs in this section resemble similar proofs in the previous two sections, and we omit details that are similar for brevity.
Before proceeding with the main proofs of this section, we discuss the relationship between our results and the reduction of [@cai2015computational]. In [@cai2015computational], the authors prove a detection-recovery gap in the context of sub-Gaussian submatrix localization based on the hardness of finding a planted $k$-clique in a random $n/2$-regular graph. This degree-regular formulation of $\pr{pc}$ was previously considered in [@deshpande2015finding] and differs in a number of ways from $\pr{pc}$. For example, it is unclear how to generate a sample from the degree-regular variant in polynomial time. We remark that the reduction of [@cai2015computational], when instead applied to the usual formulation of $\pr{pc}$, produces a matrix with highly dependent entries. Specifically, the sum of the entries of the output matrix has variance $n^2/\mu$ where $\mu \ll 1$ is the mean parameter for the submatrix localization instance, whereas an output matrix with independent entries of unit variance would have a sum of entries of variance $n^2$. Note that, in general, any reduction beginning with $\pr{pc}$ that also preserves the natural $H_0$ hypothesis cannot show the existence of a detection-recovery gap, as any lower bounds for localization would also apply to detection.
Formally, the goal of this section is to show that the reduction $k\pr{-pds-to-semi-cr}$ in Figure \[fig:semirandreduction\] maps from $k\pr{-pc}$ and $k\pr{-pds}$ to the following distribution under $H_1$, for a particular choice of $\mu_1, \mu_2$ and $\mu_3$ just below the $\pr{pds}$ Recovery Conjecture. We remark that $k\pr{-pds-to-semi-cr}$ maps to the specific case where $P_0 = 1/2$. This reduction is extended in Corollary \[cor:semi-cr-gen\] to handle $P_0 \neq 1/2$ with $\min\{P_0, 1 - P_0\} = \Omega(1)$.
Given positive integers $k, k' \le n$ and $P_0, \mu_1, \mu_2, \mu_3 \in (0, 1)$ satisfying that $\mu_1, \mu_2 \le P_0 \le 1 - \mu_3$, let $\pr{tsi}(n, k, k', P_0, \mu_1, \mu_2, \mu_3)$ be the distribution over $G \in \mG_n$ sampled as follows:
1. choose two disjoint subsets $S \subseteq [n]$ and $S' \subseteq [n]$ of sizes $|S| = k$ and $|S'| = k'$, respectively, uniformly at random; and
2. include the edge $\{i, j\}$ in $E(G)$ independently with probability $p_{ij}$ where $$p_{ij} = \left\{ \begin{array}{ll} P_0 &\textnormal{if } (i, j) \in S'^2 \\ P_0 - \mu_1 &\textnormal{if } (i, j) \in [n]^2 \backslash (S \cup S')^2 \\ P_0 - \mu_2 &\textnormal{if } (i, j) \in S \times S' \textnormal{ or } (i, j) \in S' \times S \\ P_0 + \mu_3 &\textnormal{if } (i, j) \in S^2 \end{array} \right.$$
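Unpacking the definition, a direct sampler for $\pr{tsi}$ is easy to write down. The following numpy sketch is our own illustration (the function name and the convention of sampling only the upper triangle are ours, using that $p_{ij}$ is symmetric):

```python
import numpy as np

def sample_tsi(n, k, kp, P0, mu1, mu2, mu3, rng):
    # Step 1: disjoint uniformly random sets S and S' of sizes k and k'.
    perm = rng.permutation(n)
    S, Sp = perm[:k], perm[k:k + kp]
    # Step 2: the matrix of edge probabilities p_{ij} by region.
    P = np.full((n, n), P0 - mu1)    # outside (S ∪ S')^2
    P[np.ix_(Sp, Sp)] = P0           # S' × S'
    P[np.ix_(S, Sp)] = P0 - mu2      # S × S'
    P[np.ix_(Sp, S)] = P0 - mu2      # S' × S
    P[np.ix_(S, S)] = P0 + mu3       # S × S
    # Include each unordered pair {i, j} independently with probability p_{ij}.
    upper = np.triu(rng.random((n, n)) < P, 1)
    A = upper | upper.T
    return A, set(S), set(Sp)
```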
Note that this distribution can be produced by a semirandom adversary in $\pr{semi-cr}(n, k, P_0 + \mu_3, P_0)$ under $H_1$ as follows:
1. sample $S'$ of size $k'$ uniformly at random from all $k'$-subsets of $[n] \backslash S$ where $S$ is the vertex set of the planted dense subgraph; and
2. if the edge $\{i, j \}$ is in $E(G)$, remove it from $G$ independently with probability $q_{ij}$ where $$q_{ij} = \left\{ \begin{array}{ll} 0 &\text{if } (i, j) \in S^2 \cup S'^2 \\ \mu_1/P_0 &\text{if } (i, j) \not \in (S \cup S')^2 \\ \mu_2/P_0 &\text{if } (i, j) \in S \times S' \text{ or } (i, j) \in S' \times S \end{array} \right.$$
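As a quick sanity check (ours, not from the text), composing the $H_1$ edge densities of $\pr{semi-cr}(n, k, P_0 + \mu_3, P_0)$ with the removal probabilities $q_{ij}$ above recovers the four $\pr{tsi}$ densities exactly, since an edge in each region survives with probability equal to its planted density times $1 - q_{ij}$:

```python
def composed_densities(P0, mu1, mu2, mu3):
    # Edge densities of the semi-cr H_1 instance before the adversary acts:
    # P0 + mu3 on S^2 and P0 everywhere else.
    planted = {"S^2": P0 + mu3, "S'^2": P0, "cross": P0, "outside": P0}
    # The adversary's independent removal probabilities q_{ij} by region.
    removal = {"S^2": 0.0, "S'^2": 0.0, "cross": mu2 / P0, "outside": mu1 / P0}
    # An edge survives with probability (planted density) * (1 - q_{ij}).
    return {region: planted[region] * (1 - removal[region])
            for region in planted}
```

With $P_0 = 1/2$ and $(\mu_1, \mu_2, \mu_3) = (0.05, 0.1, 0.2)$ the four outputs are $P_0 + \mu_3$, $P_0$, $P_0 - \mu_2$ and $P_0 - \mu_1$, matching $\pr{tsi}$.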
Note that $\mG(n, P_0')$ can be produced by the adversary under $H_0$ of $\pr{semi-cr}(n, k, P_0 + \mu_1, P_0)$ as long as $P_0' \le P_0$ by removing all edges independently with probability $1 - P_0'/P_0$. Thus it suffices to map to a testing problem between some $\pr{tsi}(n, k, k', P_0, \mu_1, \mu_2, \mu_3)$ and $\mG(n, P_0')$.
The next theorem establishes our main Markov transition guarantees for the reduction $k\pr{-pds-to-semi-cr}$, which maps to such a testing problem when $P_0 = 1/2$.
\[thm:semi-cr-reduction\] Let $N$ be a parameter and fix other parameters as follows:
- [Initial]{.nodecor} $k\pr{-pds}$ [Parameters:]{.nodecor} $k, N, p, q$ and $E$ as in Theorem \[thm:isbm\].
- [Target]{.nodecor} $\pr{semi-cr}$ [Parameters:]{.nodecor} $(n, K, 1/2 + \gamma, 1/2)$ where $n = 3ks \cdot \frac{3^t - 1}{2}$ and $K = (3^t - 1)k$ for some parameters $t = t(N), s = s(N) \in \mathbb{N}$ satisfying that $$m \le 3^t ks \le n \le \textnormal{poly}(N)$$ where $m$ and $Q$ are as in Theorem \[thm:ghpm\]. The target level of signal $\gamma$ is given by $\gamma = \Phi\left( \frac{\mu}{3^{t}} \right) - 1/2$ and the target $\pr{tsi}$ densities are $$\mu_1 = \Phi\left( \frac{\mu}{3^{t+1}} \right) - \frac{1}{2} \quad \textnormal{and} \quad \mu_2 = \mu_3 = \Phi\left( \frac{\mu}{3^{t}} \right) - \frac{1}{2}$$ where $\mu \in (0, 1)$ satisfies that $$\mu \le \frac{1}{2 \sqrt{6\log n + 2\log (p - Q)^{-1}}} \cdot \min \left\{ \log \left( \frac{p}{Q} \right), \log \left( \frac{1 - Q}{1 - p} \right) \right\}$$
Let $\mathcal{A}(G)$ denote $k$<span style="font-variant:small-caps;">-pds-to-semi-cr</span> applied to the graph $G$ with these parameters. Then $\mathcal{A}$ runs in $\textnormal{poly}(N)$ time and it follows that $$\begin{aligned}
\TV\left( \mathcal{A}\left( \mG_E(N, k, p, q) \right), \, \pr{tsi}(n, K, K/2, 1/2, \mu_1, \mu_2, \mu_3) \right) &= O\left( \frac{k}{\sqrt{N}} + e^{-\Omega(N^2/km)} + (3^t ks)^{-1} \right) \\
\TV\left( \mathcal{A}\left( \mG(N, q) \right), \, \mG\left(n, 1/2 - \mu_1 \right) \right) &= O\left( e^{-\Omega(N^2/km)} + (3^t ks)^{-1} \right)\end{aligned}$$
**Algorithm** $k$<span style="font-variant:small-caps;">-pds-to-semi-cr</span>
*Inputs*: $k\pr{-pds}$ instance $G \in \mG_N$ with dense subgraph size $k$ that divides $N$, and the following parameters
- partition $E$, edge probabilities $0 < q < p \le 1$, $Q \in (0, 1)$ and $m$ as in Figure \[fig:isbm-reduction\]
- refinement parameter $s$ and number of vertices $n = 3ks \cdot \frac{3^t - 1}{2}$ for some $t \in \mathbb{N}$ satisfying that $m \le 3^t ks \le n \le \text{poly}(N)$
- mean parameter $\mu \in (0, 1)$ as in Figure \[fig:isbm-reduction\]
1. *Symmetrize and Plant Diagonals*: Compute $M_{\text{PD1}} \in \{0, 1\}^{m \times m}$ and $F$ as in Step 1 of Figure \[fig:isbm-reduction\].
2. *Pad and Further Partition*: Form $M_{\text{PD2}}$ and $F'$ as in Step 2 of Figure \[fig:isbm-reduction\] modified so that $M_{\text{PD2}}$ is a $3^t ks \times 3^t ks$ matrix and each $F'_i$ has size $3^t s$. Let $F^s$ be the partition of $[3^t ks]$ into $ks$ parts of size $3^t$ by refining $F'$ by splitting each of its parts into $s$ parts of equal size arbitrarily.
3. *Bernoulli Rotations*: Let $F^o$ be a partition of $[n]$ into $ks$ equally sized parts. Now compute the matrix $M_{\text{R}} \in \mathbb{R}^{n \times n}$ as follows:
1. For each $i, j \in [ks]$, apply $\pr{Tensor-Bern-Rotations}$ to the matrix $(M_{\text{PD2}})_{F_i^s, F_j^s}$ with matrix parameters $A_1 = A_2 = K_{3, t}$, Bernoulli probabilities $0 < Q < p \le 1$, output dimension $\frac{3}{2} (3^{t} - 1)$, singular value upper bounds $\lambda_1 = \lambda_2 = \sqrt{3/2}$ and mean parameter $\mu$.
2. Set the entries of $(M_{\text{R}})_{F^o_i, F^o_j}$ to be the entries in order of the matrix output in (1).
4. *Threshold and Output*: Output the graph generated by Step 4 of Figure \[fig:isbm-reduction\] modified so that $G'$ has vertex set $[n]$ and $M_{\text{R}}$ is thresholded at $\frac{\mu}{3^{t + 1}}$.
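To see numerically why Step 4 thresholds $M_{\text{R}}$ at $\mu/3^{t+1}$, note that each entry of $M_{\text{R}}$ is approximately a unit-variance Gaussian whose mean takes one of four values depending on its block. The following Python sketch is our own sanity check (the region labels are informal, with $S_1$ and $S_2$ denoting the two planted index sets) and computes the induced edge densities via the standard normal CDF:

```python
import math

def Phi(x):
    # standard normal CDF
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def edge_densities(mu, t):
    tau = mu / 3 ** (t + 1)              # threshold used in Step 4
    means = {                            # entry means of M_R by region
        "S1 x S1": tau + mu / 3 ** t,
        "S1 x S2": tau - mu / 3 ** t,
        "S2 x S2": tau,
        "outside": 0.0,
    }
    # an edge is included when the Gaussian entry exceeds tau
    return {region: 1.0 - Phi(tau - m) for region, m in means.items()}
```

The four outputs are $\Phi(\mu/3^t) = 1/2 + \mu_3$, $1 - \Phi(\mu/3^t) = 1/2 - \mu_2$, exactly $1/2$, and $1 - \Phi(\mu/3^{t+1}) = 1/2 - \mu_1$, matching the $\pr{tsi}$ densities with $P_0 = 1/2$.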
To prove this theorem, we prove a lemma analyzing the Bernoulli rotations step in Figure \[fig:semirandreduction\]. The proof of this lemma is similar to those of Lemmas \[lem:isbm-rotations\] and \[lem:ghpm-rotations\], and we omit details that are identical. Recall from Section \[subsec:3-rsme-reduction\] the definition of the vector $v_{S, F^s, F^o}(M) \in \mathbb{R}^{ab}$ where $F^s$ and $F^o$ are partitions of $[ab]$ into $a$ equally sized parts and $S$ is a set intersecting each $F^s_i$ in exactly one element. Here we extend this definition to sets $S$ intersecting each $F^s_i$ in at most one element, by setting $$\left( v_{S, F^s, F^o}(M) \right)_{F_i^o} = \left\{ \begin{array}{ll} M_{\cdot, S \cap F_i^s} &\text{if } S \cap F_i^s \neq \emptyset \\ 0 &\text{if } S \cap F_i^s = \emptyset \end{array} \right.$$ for each $1 \le i \le a$. We can now state the approximate Markov transition guarantees for the Bernoulli rotations step of $k\pr{-pds-to-semi-cr}$ in this notation.
\[lem:rotthres\] Let $F^s$ and $F^o$ be fixed partitions of $[3^t ks]$ and $[n]$ into $ks$ parts of size $3^t$ and $\frac{3}{2}(3^t - 1)$, respectively, and let $S \subseteq [3^t ks]$ where $|S| = k$ and $|S \cap F_i^s| \le 1$ for each $1 \le i \le ks$. Let $\mathcal{A}_{\textnormal{3}}$ denote Step 3 of $k\pr{-pds-to-semi-cr}$ with input $M_{\textnormal{PD2}}$ and output $M_{\textnormal{R}}$. Suppose that $p, Q$ and $\mu$ are as in Theorem \[thm:semi-cr-reduction\]. Then it follows that $$\begin{aligned}
&\TV\Big( \mathcal{A}_{\textnormal{3}} \left( \mathcal{M}_{[3^t ks] \times [3^t ks]} \left( S \times S, \textnormal{Bern}(p), \textnormal{Bern}(Q) \right) \right), \\
&\quad \quad \quad \quad \left. \mL\left( \frac{2\mu}{3} \cdot v_{S, F^s, F^o}(K_{3, t}) v_{S, F^s, F^o}(K_{3, t})^\top + \mN(0, 1)^{\otimes n \times n} \right) \right) = O\left((3^t k s)^{-1}\right) \\
&\TV\left( \mathcal{A}_{\textnormal{3}} \left(\textnormal{Bern}(Q)^{\otimes 3^t ks \times 3^t ks} \right), \, \mN(0, 1)^{\otimes n \times n} \right) = O\left((3^t k s)^{-1}\right)\end{aligned}$$
Let (1) and (2) denote the following two cases:
1. $M_{\textnormal{PD2}} \sim \mathcal{M}_{[3^t ks] \times [3^t ks]} \left( S \times S, \textnormal{Bern}(p), \textnormal{Bern}(Q) \right)$; and
2. $M_{\textnormal{PD2}} \sim \textnormal{Bern}(Q)^{\otimes 3^t ks \times 3^t ks}$.
Now define the matrix $M_{\text{R}}'$ with independent entries such that $$M_{\text{R}}' \sim \left\{ \begin{array}{ll} \frac{2\mu}{3} \cdot v_{S, F^s, F^o}(K_{3, t}) v_{S, F^s, F^o}(K_{3, t})^\top + \mN(0, 1)^{\otimes n \times n} &\text{if (1) holds} \\ \mN(0, 1)^{\otimes n \times n} &\text{if (2) holds} \end{array} \right.$$ Similarly to Lemma \[lem:ghpm-rotations\], Lemmas \[lem:bern-rotations\] and \[lem:Krtsv\] yield that under both (1) and (2), we have that $$\TV\left( \left( M_{\text{R}} \right)_{F_i^o, F_j^o}, \left( M_{\text{R}}' \right)_{F_i^o, F_j^o} \right) = O\left( 3^{2t} \cdot (3^t ks)^{-3} \right)$$ for all $1 \le i, j \le ks$. The tensorization property of total variation in Fact \[tvfacts\] now yields that $$\TV\left( \mL(M_{\text{R}}), \mL(M_{\text{R}}') \right) = O\left( (3^t ks)^{-1} \right)$$ under both (1) and (2), proving the lemma.
We now complete the proof of Theorem \[thm:semi-cr-reduction\], which follows a similar structure as in Theorem \[thm:isbm\].
Let the steps of $\mathcal{A}$ map inputs to outputs as follows $$(G, E) \xrightarrow{\mathcal{A}_1} (M_{\text{PD1}}, F) \xrightarrow{\mathcal{A}_2} (M_{\text{PD2}}, F^s) \xrightarrow{\mathcal{A}_3} (M_{\text{R}}, F^o) \xrightarrow{\mathcal{A}_{\text{4}}} G'$$ Under $H_1$, consider Lemma \[lem:tvacc\] applied to the following sequence of distributions $$\begin{aligned}
\mathcal{P}_0 &= \mG_E(N, k, p, q) \\
\mathcal{P}_1 &= \mathcal{M}_{[m] \times [m]}(S \times S, \textnormal{Bern}(p), \textnormal{Bern}(Q)) \quad \text{where } S \sim \mU_m(F) \\
\mathcal{P}_2 &= \mathcal{M}_{[3^t ks] \times [3^t ks]}(S \times S, \textnormal{Bern}(p), \textnormal{Bern}(Q)) \quad \text{where } S \sim \mU_{3^t ks}^k(F^s) \\
\mathcal{P}_3 &= \frac{2\mu}{3} \cdot v_{S, F^s, F^o}(K_{3, t}) v_{S, F^s, F^o}(K_{3, t})^\top + \mN(0, 1)^{\otimes n \times n} \quad \text{where } S \sim \mU_{3^t ks}^k(F^s) \\
\mathcal{P}_{\text{4}} &= \pr{tsi}(n, K, K/2, 1/2, \mu_1, \mu_2, \mu_3)\end{aligned}$$ Let $C_Q = \max\left\{ \frac{Q}{1 - Q}, \frac{1 - Q}{Q} \right\}$ and consider setting $$\epsilon_1 = 4k \cdot \exp\left( - \frac{Q^2N^2}{48pkm} \right) + \sqrt{\frac{C_Q k^2}{2m}}, \quad \epsilon_2 = 0, \quad \epsilon_3 = O\left( (3^t ks)^{-1} \right) \quad \text{and} \quad \epsilon_4 = 0$$ Lemma \[lem:submatrix\] implies this is a valid choice of $\epsilon_1$, $\mathcal{A}_2$ is exact so we can take $\epsilon_2 = 0$ and $\epsilon_3$ is valid by applying Lemma \[lem:rotthres\] and averaging over $S \sim \mU_{3^t ks}^k(F^s)$ using the conditioning property of total variation in Fact \[tvfacts\]. Now note that for each $S$ the definition of $v_{S, F^s, F^o}(K_{3, t})$ implies that there are sets $S_1$ and $S_2$ with $|S_1| = (3^t - 1)k$ and $|S_2| = \frac{3^t - 1}{2} \cdot k$ such that $$\left( \frac{2\mu}{3} \cdot v_{S, F^s, F^o}(K_{3, t}) v_{S, F^s, F^o}(K_{3, t})^\top \right)_{ij} = \frac{\mu}{3^{t + 1}} + \left\{ \begin{array}{ll} \mu/3^t &\text{if } i, j \in S_1 \\ -\mu/3^t &\text{if } (i, j) \in S_1 \times S_2 \text{ or } (i, j) \in S_2 \times S_1 \\ 0 &\text{if } i, j \in S_2 \\ -\mu/3^{t + 1} &\text{if } i, j \not \in (S_1 \cup S_2) \end{array} \right.$$ for each $1 \le i, j \le n$. Permuting the rows and columns of $\mP_3$ therefore yields $\mP_4$ exactly with $\epsilon_4 = 0$. Lemma \[lem:tvacc\] thus establishes the first bound. Under $H_0$, consider the distributions $$\begin{aligned}
&\mathcal{P}_0 = \mG(N, q), \quad \mathcal{P}_1 = \text{Bern}(Q)^{\otimes m \times m}, \quad \mathcal{P}_2 = \text{Bern}(Q)^{\otimes 3^t ks \times 3^t ks}, \\
&\mathcal{P}_3 = \mN(0, 1)^{\otimes n \times n} \quad \text{and} \quad \mP_4 = \mG\left(n, 1/2 - \mu_1 \right)\end{aligned}$$ As in Theorems \[thm:isbm\] and \[thm:ghpm\], Lemmas \[lem:submatrix\] and \[lem:rotthres\] imply $\epsilon_1 = 4k \cdot \exp\left( - \frac{Q^2N^2}{48pkm} \right)$ and the choices of $\epsilon_2, \epsilon_3$ and $\epsilon_4$ above are valid. Lemma \[lem:tvacc\] now yields the second bound and completes the proof of the theorem.
We now add a simple final step to $k\pr{-pds-to-semi-cr}$, reducing to arbitrary $P_0 \neq 1/2$. The guarantees for this modified reduction are captured in the following corollary.
\[cor:semi-cr-gen\] Define all parameters as in Theorem \[thm:semi-cr-reduction\] and let $P_0 \in (0, 1)$ be such that $\eta = \min \{P_0, 1 - P_0\} = \Omega(1)$. Then there is a $\textnormal{poly}(N)$ time reduction $\mathcal{A}$ from graphs on $N$ vertices to graphs on $n$ vertices satisfying that $$\begin{aligned}
\TV\left( \mathcal{A}\left( \mG_E(N, k, p, q) \right), \, \pr{tsi}(n, K, K/2, P_0, 2\eta\mu_1, 2\eta\mu_2, 2\eta\mu_3) \right) &= O\left( \frac{k}{\sqrt{N}} + e^{-\Omega(N^2/km)} + (3^t ks)^{-1} \right) \\
\TV\left( \mathcal{A}\left( \mG(N, q) \right), \, \mG\left(n, P_0 - 2\eta \mu_1 \right) \right) &= O\left( e^{-\Omega(N^2/km)} + (3^t ks)^{-1} \right)\end{aligned}$$
This corollary follows from the same reduction in the first part of the proof of Corollary \[thm:isbm-mod\]. Consider the reduction $\mathcal{A}$ that adds a simple post-processing step to $k$<span style="font-variant:small-caps;">-pds-to-semi-cr</span> as follows. On input graph $G$ with $N$ vertices:
1. Form the graph $G_1$ by applying $k$<span style="font-variant:small-caps;">-pds-to-semi-cr</span> to $G$ with parameters $N, k, E, \ell, n, s, t$ and $\mu$.
2. Form $G_2$ as in $\mathcal{A}_2$ of Corollary \[thm:isbm-mod\].
This clearly runs in $\text{poly}(N)$ time and the second step can be verified to map $\pr{tsi}(n, K, K/2, 1/2, \mu_1, \mu_2, \mu_3)$ to $\pr{tsi}(n, K, K/2, P_0, 2\eta\mu_1, 2\eta\mu_2, 2\eta\mu_3)$ and $\mG\left(n, 1/2 - \mu_1 \right)$ to $\mG\left(n, P_0 - 2\eta \mu_1 \right)$ exactly. Applying Theorem \[thm:semi-cr-reduction\] and Lemma \[lem:tvacc\] to each of these two steps proves the bounds in the corollary statement.
Summarizing the results of this section, we arrive at the desired computational lower bound for $\pr{semi-cr}$. The proof of the next theorem follows the usual recipe for deducing computational lower bounds and is deferred to Appendix \[subsec:appendix-3-part-3\].
[thm:semi-cr-lb]{} \[Lower Bounds for $\pr{semi-cr}$\] If $k$ and $n$ are polynomial in each other with $k = \Omega(\sqrt{n})$ and $0 < P_0 < P_1 \le 1$ where $\min\{P_0, 1 - P_0 \} = \Omega(1)$, then the $k\pr{-pc}$ conjecture or $k\pr{-pds}$ conjecture for constant $0 < q < p \le 1$ both imply that there is a computational lower bound for $\pr{semi-cr}(n, k, P_1, P_0)$ at $\frac{(P_1 - P_0)^2}{P_0(1 - P_0)} = \tilde{o}(n/k^2)$.
Tensor Principal Component Analysis {#sec:3-tensor}
===================================
**Algorithm** $k$<span style="font-variant:small-caps;">-pst-to-tpca</span>
*Inputs*: $k\pr{-pst}$ instance $T \in \{0, 1\}^{N^{\otimes s}}$ of order $s$ with planted sub-tensor size $k$ that divides $N$, and the following parameters
- partition $F$ of $[N]$ into $k$ parts of size $N/k$ and edge probabilities $0 < q < p \le 1$
- output dimension $n$ and a parameter $t \in \mathbb{N}$ satisfying that $$n \le D = 2k(2^t - 1), \quad N \le 2^t k \quad \text{and} \quad t = O(\log N)$$
- target level of signal $\theta \in (0, 1)$ where $$\theta \le \frac{c \cdot \delta}{2^{st/2} \cdot \sqrt{t + \log (p - q)^{-1}}}$$ for a sufficiently small constant $c > 0$, where $\delta = \min \left\{ \log \left( \frac{p}{q} \right), \log \left( \frac{1 - q}{1 - p} \right) \right\}$.
1. *Pad*: Form $T_{\text{PD}} \in \{0, 1\}^{(2^t k)^{\otimes s}}$ by embedding $T$ as the upper left principal sub-tensor of $T_{\text{PD}}$ and then adding $2^t k - N$ new indices along each axis of $T$ and filling all missing entries with i.i.d. samples from $\text{Bern}(q)$. Let $F'_i$ be $F_i$ with $2^t - N/k$ of the new indices. Sample $k$ random permutations $\sigma_i$ of $F_i'$ independently for each $1 \le i \le k$ and permute the indices along each axis of $T_{\text{PD}}$ within each part $F'_i$ according to $\sigma_i$.
2. *Bernoulli Rotations*: Let $F''$ be a partition of $[D]$ into $k$ equally sized parts. Now compute the tensor $T_{\text{R}} \in \mathbb{R}^{D^{\otimes s}}$ as follows:
1. For each block index $(i_1, i_2, \dots, i_s) \in [k]^s$, apply $\pr{Tensor-Bern-Rotations}$ to the tensor $(T_{\text{PD}})_{F_{i_1}', F_{i_2}', \dots, F_{i_s}'}$ with matrix parameters $A_1 = A_2 = \cdots = A_s = K_{2, t}$, rejection kernel parameter $R_{\pr{rk}} = (2^t k)^s$, Bernoulli probabilities $0 < q < p \le 1$, output dimension $D/k = 2(2^t - 1)$, singular value upper bounds $\lambda_1 = \lambda_2 = \cdots = \lambda_s = \sqrt{2}$ and mean parameter $\mu = \theta \cdot 2^{s(t+1)/2}$.
2. Set the entries of $(T_{\text{R}})_{F_{i_1}'', F_{i_2}'', \dots, F_{i_s}''}$ to be the entries in order of the tensor output in (1).
3. *Subsample, Sign and Output*: Randomly choose a subset $U \subseteq [D]$ of size $|U| = n$, randomly sample a vector $b \sim \text{Unif}\left[ \{-1, 1\}\right]^{\otimes D}$, and output the tensor $b^{\otimes s} \odot T_{\text{R}}$ restricted to the indices in $U$, or in other words $\left(b^{\otimes s} \odot T_{\text{R}}\right)_{U, U, \dots, U}$, where $\odot$ denotes the entrywise product of two tensors.
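Step 3 is mechanical and can be written directly in code. The following numpy sketch is our illustration (the helper name is hypothetical): it multiplies the entry at index $(i_1, \dots, i_s)$ by $b_{i_1} \cdots b_{i_s}$ and then restricts every axis to $U$.

```python
import numpy as np

def sign_and_subsample(T_R, n, rng):
    # Step 3: compute b^{⊗s} ⊙ T_R, then restrict each axis to U.
    s = T_R.ndim
    D = T_R.shape[0]
    U = rng.choice(D, size=n, replace=False)
    b = rng.choice([-1.0, 1.0], size=D)
    out = T_R.copy()
    for axis in range(s):
        shape = [1] * s
        shape[axis] = D
        out = out * b.reshape(shape)      # sign the current axis by b
        out = np.take(out, U, axis=axis)  # keep only the indices in U
    return out, U, b
```

Applied to a noiseless rank-one tensor $\theta u^{\otimes s}$, this returns $\theta (b \odot u)^{\otimes s}$ restricted to $U$, as in the proof of Lemma \[lem:signing\].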
In this section, we: (1) give our reduction $k$<span style="font-variant:small-caps;">-pst-to-tpca</span> from $k$-partite planted sub-tensor to tensor PCA; (2) combine this with the completing hypergraphs technique of Section \[sec:2-hypergraph-planting\] to prove our main computational lower bound for the hypothesis testing formulation of tensor PCA, Theorem \[thm:tpca-lb\]; and (3) show that Theorem \[thm:tpca-lb\] implies computational lower bounds for the recovery formulation of tensor PCA. We remark that the heuristic at the end of Section \[subsec:1-tech-design-matrices\] yields the predicted computational barrier for $\pr{tpca}$. Specifically, the $\ell_2$ norm of the data tensor $\bE[X]$ corresponding to $k\pr{-hpc}^s$ is $\Theta(k^{s/2})$, which is $\tilde{\Theta}(n^{s/4})$ when $k$ is just below the conjectured computational barrier for $k\pr{-hpc}^s$. Furthermore, the corresponding $\ell_2$ norm for $H_1$ of $\pr{tpca}^s$ is $\tilde{\Theta}(\theta n^{s/2})$. Equating these norms correctly predicts the computational barrier of $\theta = \tilde{\Theta}(n^{-s/4})$.
Our reduction $k$<span style="font-variant:small-caps;">-pst-to-tpca</span> is shown in Figure \[fig:tpca-reduction\], which applies dense Bernoulli rotations with Kronecker products of the matrices $K_{2, t}$ to the planted sub-tensor problem. The following theorem establishes the approximate Markov transition properties of this reduction. Its proof is similar to the proofs of Theorems \[thm:isgmreduction\] and \[thm:isbm\]. We omit details that are similar for brevity.
\[thm:tpca\] Fix initial and target parameters as follows:
- [Initial]{.nodecor} $k\pr{-pst}$ [Parameters:]{.nodecor} dimension $N$, sub-tensor size $k$ that divides $N$, order $s$, a partition $F$ of $[N]$ into $k$ parts of size $N/k$ and edge probabilities $0 < q < p \le 1$ where $\min\{q, 1 - q\} = \Omega_N(1)$.
- [Target]{.nodecor} $\pr{tpca}$ [Parameters:]{.nodecor} dimension $n$ and a parameter $t = t(N) \in \mathbb{N}$ satisfying that $$n \le D = 2k(2^t - 1), \quad N \le 2^t k \quad \text{and} \quad t = O(\log N)$$ and target level of signal $\theta \in (0, 1)$ where $$\theta \le \frac{c \cdot \delta}{2^{st/2} \cdot \sqrt{t + \log (p - q)^{-1}}}$$ for a sufficiently small constant $c > 0$, where $\delta = \min \left\{ \log \left( \frac{p}{q} \right), \log \left( \frac{1 - q}{1 - p} \right) \right\}$.
Let $\mathcal{A}(T)$ denote $k$<span style="font-variant:small-caps;">-pst-to-tpca</span> applied to the tensor $T$ with these parameters. Then $\mathcal{A}$ runs in $\textnormal{poly}(N)$ time and it follows that $$\begin{aligned}
\TV\left( \mathcal{A}\left( \mathcal{M}_{[N]^s}\left( S^{s}, \textnormal{Bern}(p), \textnormal{Bern}(q) \right) \right), \, \pr{tpca}^s_D(n, \theta) \right) &= O\left( k^{-2s} 2^{-2st} \right) \\
\TV\left( \mathcal{A}\left( \mathcal{M}_{[N]^s}\left( \textnormal{Bern}(q) \right) \right), \, \mN(0, 1)^{\otimes n^{\otimes s}} \right) &= O\left( k^{-2s} 2^{-2st} \right)\end{aligned}$$ for any set $S \subseteq [N]$ with $|S \cap F_i| = 1$ for each $1 \le i \le k$.
We now prove two lemmas stating the guarantees for the dense Bernoulli rotations step and final step of $k$<span style="font-variant:small-caps;">-pst-to-tpca</span>. Define $v_{S, F', F''}(M)$ as in Section \[subsec:3-rsme-reduction\]. Note that the matrix $K_{2,t}$ has dimensions $2(2^t - 1) \times 2^t$. The proof of the next lemma follows from the same argument as in the proof of Lemma \[lem:isgm-rotations\].
\[lem:tpca-rotations\] Let $F'$ and $F''$ be fixed partitions of $[2^t k]$ and $[D]$ into $k$ parts of size $2^t$ and $2(2^t - 1)$, respectively, and let $S \subseteq [2^t k]$ where $|S \cap F_i'| = 1$ for each $1 \le i \le k$. Let $\mathcal{A}_{\textnormal{2}}$ denote Step 2 of $k$<span style="font-variant:small-caps;">-pst-to-tpca</span> with input $T_{\textnormal{PD}}$ and output $T_{\textnormal{R}}$. Suppose that $p, q$ and $\theta$ are as in Theorem \[thm:tpca\]. Then it follows that $$\begin{aligned}
&\TV\Big( \mathcal{A}_{\textnormal{2}} \left( \mathcal{M}_{[2^t k]^s} \left( S^s, \textnormal{Bern}(p), \textnormal{Bern}(q) \right) \right), \\
&\quad \quad \quad \quad \left. \mL\left( 2^{st/2} \theta \cdot v_{S, F', F''}(K_{2, t})^{\otimes s} + \mN(0, 1)^{\otimes D^{\otimes s}} \right) \right) = O\left( k^{-2s} 2^{-2st} \right) \\
&\TV\left( \mathcal{A}_{\textnormal{2}} \left( \mathcal{M}_{[2^t k]^s} \left( \textnormal{Bern}(q) \right) \right), \, \mN(0, 1)^{\otimes D^{\otimes s}} \right) = O\left( k^{-2s} 2^{-2st} \right) \end{aligned}$$
This lemma follows from the same argument as in the proof of Lemma \[lem:isgm-rotations\]. We outline the details that differ. Specifically, consider the case in which $T_{\textnormal{PD}} \sim \mathcal{M}_{[2^t k]^s} \left( S^s, \textnormal{Bern}(p), \textnormal{Bern}(q) \right)$. Observe that $$(T_{\textnormal{PD}})_{F'_{i_1}, F'_{i_2}, \dots, F'_{i_s}} \sim \pr{pb}\left(F_{i_1}' \times F'_{i_2} \times \cdots \times F_{i_s}', (S \cap F_{i_1}', S \cap F_{i_2}', \dots, S \cap F_{i_s}'), p, q\right)$$ for all $(i_1, i_2, \dots, i_s) \in [k]^s$. The singular value upper bound on $K_{2, t}$ in Lemma \[lem:Krtsv\] and the same application of Corollary \[cor:tensor-bern-rotations\] as in Lemma \[lem:isgm-rotations\] yields that $$\TV\left( (T_{\textnormal{R}})_{F''_{i_1}, \dots, F''_{i_s}}, \, \mL\left( 2^{-s/2} \mu \cdot (K_{2, t})_{\cdot, S \cap F_{i_1}'} \otimes \cdots \otimes (K_{2, t})_{\cdot, S \cap F_{i_s}'} + \mN(0, 1)^{\otimes (D/k)^{\otimes s}} \right) \right) = O\left( k^{-3s} 2^{-2st} \right)$$ for all $(i_1, i_2, \dots, i_s) \in [k]^s$ since $\prod_{j = 1}^s \lambda_j = 2^{s/2}$. Note that the exponent of $8$ is guaranteed by changing the parameter in Gaussian rejection kernels from $n$ to $n^{10}$ to decrease their total variation error. Note that this step still runs in $\text{poly}(n^{10})$ time. Combining this bound for all such $(i_1, i_2, \dots, i_s)$ and the tensorization property of total variation in Fact \[tvfacts\] yields that $$\TV\left( T_{\textnormal{R}}, \, \mL\left( 2^{-s/2} \mu \cdot v_{S, F', F''}(K_{2, t})^{\otimes s} + \mN(0, 1)^{\otimes D^{\otimes s}} \right) \right) = O\left( k^{-2s} 2^{-2st} \right)$$ Combining this with the fact that $\mu = \theta \cdot 2^{s(t + 1)/2}$ now yields the first bound in the lemma. The second bound follows by the same argument but now applying Corollary \[cor:tensor-bern-rotations\] to the distribution $(T_{\textnormal{PD}})_{F'_{i_1}, \dots, F'_{i_s}} \sim \text{Bern}(q)^{\otimes (2^t)^{\otimes s}}$. 
This completes the proof of the lemma.
\[lem:signing\] Let $F', F''$ and $S$ be as in Lemma \[lem:tpca-rotations\] and let $p, q$ and $\theta$ be as in Theorem \[thm:tpca\]. Let $\mathcal{A}_{\textnormal{3}}$ denote Step 3 of $k$<span style="font-variant:small-caps;">-pst-to-tpca</span> with input $T_{\textnormal{R}}$ and whose output is the final output $T'$ of $\mathcal{A}$. Then $$\begin{aligned}
\mathcal{A}_{\textnormal{3}} \left( 2^{st/2} \theta \cdot v_{S, F', F''}(K_{2, t})^{\otimes s} + \mN(0, 1)^{\otimes D^{\otimes s}} \right) &\sim \pr{tpca}^s_D(n, \theta) \\
\mathcal{A}_{\textnormal{3}} \left( \mN(0, 1)^{\otimes D^{\otimes s}} \right) &\sim \mN(0, 1)^{\otimes n^{\otimes s}}\end{aligned}$$
Suppose that $T_{\textnormal{R}} \sim \mL\left( 2^{st/2} \theta \cdot v_{S, F', F''}(K_{2, t})^{\otimes s} + \mN(0, 1)^{\otimes D^{\otimes s}} \right)$ and let $b \sim \text{Unif}\left[ \{-1, 1\}\right]^{\otimes D}$ be as in Step 3 of $\mathcal{A}$. The symmetry of zero-mean Gaussians and independence among the entries of $\mN(0, 1)^{\otimes D^{\otimes s}}$ imply that $$b^{\otimes s} \odot T_{\textnormal{R}} \sim \mL\left( 2^{st/2} \theta \cdot u^{\otimes s} + b^{\otimes s} \odot \mN(0, 1)^{\otimes D^{\otimes s}} \right) = \mL\left( 2^{st/2} \theta \cdot u^{\otimes s} + \mN(0, 1)^{\otimes D^{\otimes s}} \right)$$ where $u = b \odot v_{S, F', F''}(K_{2, t})$ and the two terms $u^{\otimes s}$ and $\mN(0, 1)^{\otimes D^{\otimes s}}$ above are independent. Now note that each entry of $v_{S, F', F''}(K_{2, t})$ is one of $\pm 2^{-t/2}$ by the definition of $K_{2, t}$. This implies that $2^{t/2} u$ is distributed as $\text{Unif}\left[ \{-1, 1\}\right]^{\otimes D}$ and hence that $$\mL\left(b^{\otimes s} \odot T_{\textnormal{R}} \right) = \mL\left( \theta \cdot (2^{t/2} u)^{\otimes s} + \mN(0, 1)^{\otimes D^{\otimes s}} \right) = \pr{tpca}^s_D(D, \theta)$$ Subsampling the same set $U$ of $n$ coordinates of this tensor along each axis by definition yields $\pr{tpca}^s_D(n, \theta)$, proving the first claim in the lemma. The second claim is immediate by the fact that if $T_{\textnormal{R}} \sim \mN(0, 1)^{\otimes D^{\otimes s}}$ then it also holds that $b^{\otimes s} \odot T_{\textnormal{R}} \sim \mN(0, 1)^{\otimes D^{\otimes s}}$. This completes the proof of the lemma.
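The two distributional facts driving this proof, that signing by a fixed $b^{\otimes s}$ is a measure-preserving bijection for i.i.d. Gaussian noise and that rescaling the signed spike produces uniform $\pm 1$ entries, can be checked concretely. The following Python sketch (our own illustration with a toy tensor representation, not part of the reduction) verifies the deterministic parts of both claims.

```python
import itertools
import math
import random

def sign_tensor(T, b, s, D):
    """Apply the entrywise signing b^{(tensor s)} ⊙ T to an order-s tensor on
    [D]^s, stored as a dict mapping index tuples to floats."""
    return {idx: T[idx] * math.prod(b[i] for i in idx)
            for idx in itertools.product(range(D), repeat=s)}

random.seed(0)
s, D, t = 3, 4, 2
b = [random.choice([-1, 1]) for _ in range(D)]

# Signing is an involution: applying b^{(tensor s)} twice recovers the tensor
# exactly. For fixed b it is thus a bijection of R^{D^s} preserving the
# sign-symmetric Gaussian density, which is why the noise law is invariant.
G = {idx: random.gauss(0, 1) for idx in itertools.product(range(D), repeat=s)}
assert sign_tensor(sign_tensor(G, b, s, D), b, s, D) == G

# A spike with entries ±2^{-t/2}, as in v_{S,F',F''}(K_{2,t}), becomes a
# vector with entries in {-1, 1} after signing by b and rescaling by 2^{t/2}.
v = [random.choice([-1, 1]) * 2 ** (-t / 2) for _ in range(D)]
u = [2 ** (t / 2) * b[i] * v[i] for i in range(D)]
assert all(x in (-1.0, 1.0) for x in u)
```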
We now complete the proof of Theorem \[thm:tpca\] by applying Lemma \[lem:tvacc\] as in Theorems \[thm:isgmreduction\] and \[thm:isbm\].
Define the steps of $\mathcal{A}$ to map inputs to outputs as follows $$(T, F) \xrightarrow{\mathcal{A}_1} (T_{\text{PD}}, F) \xrightarrow{\mathcal{A}_2} (T_{\text{R}}, F'') \xrightarrow{\mathcal{A}_{\text{3}}} T'$$ Consider Lemma \[lem:tvacc\] applied to the following sequence of distributions $$\begin{aligned}
\mathcal{P}_0 &= \mathcal{M}_{[N]^s}\left( S^{s}, \textnormal{Bern}(p), \textnormal{Bern}(q) \right) \\
\mathcal{P}_1 &= \mathcal{M}_{[2^t k]^s} \left( S^s, \textnormal{Bern}(p), \textnormal{Bern}(q) \right) \quad \text{where } S \sim \mU_{2^t k}(F') \\
\mathcal{P}_2 &= 2^{st/2} \theta \cdot v_{S, F', F''}(K_{2, t})^{\otimes s} + \mN(0, 1)^{\otimes D^{\otimes s}} \quad \text{where } S \sim \mU_{2^t k}(F') \\
\mathcal{P}_3 &= \pr{tpca}^s_D(n, \theta)\end{aligned}$$ Consider applying Lemmas \[lem:tpca-rotations\] and \[lem:signing\] while averaging over $S \sim \mU_{2^t k}(F')$ and applying the conditioning property of total variation in Fact \[tvfacts\]. This yields that we may take $\epsilon_1 = 0$, $\epsilon_2 = O\left( k^{-2s} 2^{-2st} \right)$ and $\epsilon_3 = 0$. Applying Lemma \[lem:tvacc\] proves the first bound in the theorem. Now consider the following sequence of distributions $$\mathcal{P}_0 = \mathcal{M}_{[N]^s}\left( \textnormal{Bern}(q) \right), \quad \mathcal{P}_1 = \mathcal{M}_{[2^t k]^s}\left( \textnormal{Bern}(q) \right), \quad \mathcal{P}_2 = \mN(0, 1)^{\otimes D^{\otimes s}} \quad \text{and} \quad \mathcal{P}_3 = \mN(0, 1)^{\otimes n^{\otimes s}}$$ Lemmas \[lem:tpca-rotations\] and \[lem:signing\] imply we can again take $\epsilon_1 = 0$, $\epsilon_2 = O\left( k^{-2s} 2^{-2st} \right)$ and $\epsilon_3 = 0$. The second bound in the theorem now follows from Lemma \[lem:tvacc\].
We now apply this theorem to deduce our main computational lower bounds for tensor PCA by verifying its guarantees are sufficient to apply Lemma \[cor:one-side-reduction\].
[thm:tpca-lb]{} \[Lower Bounds for $\pr{tpca}$\] Let $n$ be a parameter and let $s \ge 3$ be a constant. Then the $k\pr{-hpc}^s$ and $k\pr{-hpds}^s$ conjectures for constant $0 < q < p \le 1$ both imply a computational lower bound for $\pr{tpca}^s(n, \theta)$ at all levels of signal $\theta = \tilde{o}(n^{-s/4})$ against $\textnormal{poly}(n)$ time algorithms $\mathcal{A}$ solving $\pr{tpca}^s(n, \theta)$ with a low false positive probability of $\bP_{H_0}[\mathcal{A}(T) = H_1] = O(n^{-s})$.
We will verify that the approximate Markov transition guarantees for $k$<span style="font-variant:small-caps;">-pst-to-tpca</span> in Theorem \[thm:tpca\] are sufficient to apply Lemma \[cor:one-side-reduction\] for the set of $\mP = \pr{tpca}^s(n, \theta)$ with parameters $(n, \theta)$ that fill out the region $\theta = \tilde{o}(n^{-s/4})$. Fix a constant pair of probabilities $0 < q < p \le 1$, a constant positive integer $s$ and any sequence of parameters $(n, \theta)$ where $\theta \in (0, 1)$ is implicitly a function of $n$ with $$\theta \le \frac{c}{w^{s/2}n^{s/4} \sqrt{\log n}}$$ for sufficiently large $n$, an arbitrarily slow-growing function $w = w(n) \to \infty$ and a sufficiently small constant $c > 0$. Now consider the parameters $(N, k)$ and input $t$ to $k\pr{-pst-to-tpca}$ defined as follows:
- let $t$ be such that $2^t$ is the smallest power of two greater than $w\sqrt{n}$; and
- let $k = \lceil w^{-1} \sqrt{n} \rceil$ and let $N$ be the largest multiple of $k$ less than $n$.
Now observe that these choices of parameters ensure that $k$ divides $N$, it holds that $k = o(\sqrt{N})$ and $$N \le n \le 2^t k \le D = 2k(2^t - 1)$$ Furthermore, we have that $N = \Theta(n)$ and $2^t = \Theta(w \sqrt{n})$. For a sufficiently small choice of $c > 0$, we also have that $$\theta \le \frac{c}{w^{s/2}n^{s/4} \sqrt{\log n}} \le \frac{c' \cdot \delta}{2^{st/2} \cdot \sqrt{t + \log(p - q)^{-1}}}$$ where $c' > 0$ and $\delta$ are as in Theorem \[thm:tpca\]. This verifies all of the conditions needed to apply Theorem \[thm:tpca\], which implies that $k$<span style="font-variant:small-caps;">-pst-to-tpca</span> maps $k\pr{-pst}_E^s(N, k, p, q)$ to $\pr{tpca}^s(n, \theta)$ under both $H_0$ and $H_1$ to within total variation error $O\left( k^{-2s} 2^{-2st} \right) = O(n^{-2s})$. By Lemma \[cor:one-side-reduction\], the $k\pr{-hpds}^s$ conjecture for $k\pr{-hpds}^s_E(N', k', p, q)$ where $N = N' - (s - 1)N'/k'$ and $k = k' - s + 1$ now implies that there is no $\textnormal{poly}(n)$ time algorithm $\mathcal{A}$ solving $\pr{tpca}^s(n, \theta)$ with a low false positive probability of $\bP_{H_0}[\mathcal{A}(T) = H_1] = O(n^{-s})$. This completes the proof of the theorem.
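For a concrete sanity check of this parameter selection, the short sketch below (with illustrative values of $n$ and $w$ of our choosing) computes $(t, k, N)$ and verifies the claimed divisibility and chain of inequalities.

```python
import math

def choose_parameters(n, w):
    """Pick (t, k, N) as in the proof: 2^t is the smallest power of two
    greater than w*sqrt(n), k = ceil(sqrt(n)/w), and N is the largest
    multiple of k less than n."""
    t = 1
    while 2 ** t <= w * math.sqrt(n):
        t += 1
    k = math.ceil(math.sqrt(n) / w)
    N = ((n - 1) // k) * k
    return t, k, N

n, w = 10 ** 6, 10  # illustrative values
t, k, N = choose_parameters(n, w)
D = 2 * k * (2 ** t - 1)

assert N % k == 0 and N < n    # k divides N and N <= n
assert n <= 2 ** t * k <= D    # N <= n <= 2^t k <= D = 2k(2^t - 1)
assert k < math.sqrt(N)        # consistent with k = o(sqrt(N))
```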
We conclude this section with the following lemma observing that this theorem implies a computational lower bound for estimating $v$ in $\pr{tpca}^s(n, \theta)$ where $\theta = \tilde{\omega}(n^{-s/2})$ and $\theta = \tilde{o}(n^{-s/4})$. Note that the requirement $\theta = \tilde{\omega}(n^{-s/2})$ is weaker than the condition $\theta = \tilde{\omega}(n^{(1-s)/2})$, which is necessary for recovering $v$ to be information-theoretically possible, as discussed in Section \[subsec:1-problems-tpca\]. The next lemma shows that any estimator yields a test in the hypothesis testing formulation of tensor PCA that must have a low false positive probability of error, since thresholding $\langle \hat{v}^{\otimes s}, T\rangle$, where $\hat{v}$ is an estimator of $v$, yields a means to distinguish $H_0$ and $H_1$ with high probability. We remark that the requirement $\langle v, \hat{v} \rangle = \Omega(\| v \|_2)$ is weaker than the condition $\| v - \hat{v} \cdot \sqrt{n} \|_2 = o(\sqrt{n})$ when $\hat{v}$ is a unit vector and $v \in \{-1, 1\}^n$. Thus any estimation algorithm with $\ell_2$ error $o(\sqrt{n})$, directly yields an algorithm $\mathcal{A}_E$ satisfying the conditions of the lemma.
\[lem:one-side-estimation\] Let $s \ge 2$ be a fixed constant and suppose that there is a $\textnormal{poly}(n)$ time algorithm $\mathcal{A}_E$ that, on input sampled from $\theta v^{\otimes s} + \mN(0, 1)^{\otimes n^{\otimes s}}$ where $v \in \{-1, 1\}^n$ is fixed but unknown to $\mathcal{A}_E$ and $\theta = \omega(n^{-s/2} \sqrt{s \log n})$, outputs a unit vector $\hat{v} \in \mathbb{R}^n$ with $\langle v, \hat{v} \rangle = \Omega(\| v \|_2)$. Then there is a $\textnormal{poly}(n)$ time algorithm $\mathcal{A}_D$ solving $\pr{tpca}^s(n, \theta)$ with a low false positive probability of $\bP_{H_0}[\mathcal{A}_D(T) = H_1] = O(n^{-s})$.
Let $T$ be an instance of $\pr{tpca}^s(n, \theta)$ with $T = \theta v^{\otimes s} + G$ under $H_1$ and $T = G$ under $H_0$ where $G \sim \mN(0, 1)^{\otimes n^{\otimes s}}$. Consider the following algorithm $\mathcal{A}_D$ for $\pr{tpca}^s(n, \theta)$:
1. Independently sample $G' \sim \mN(0, 1)^{\otimes n^{\otimes s}}$ and form $T_1 = \frac{1}{\sqrt{2}} (T + G')$ and $T_2 = \frac{1}{\sqrt{2}} (T - G')$.
2. Compute $\hat{v}(T_1)$ as the output of $\mathcal{A}_E$ applied to $T_1$.
3. Output $H_0$ if $\langle \hat{v}(T_1)^{\otimes s}, T_2 \rangle < 2\sqrt{s \log n}$ and output $H_1$ otherwise.
First note that the entries of $\frac{1}{\sqrt{2}} (G + G')$ and $\frac{1}{\sqrt{2}} (G - G')$ are jointly Gaussian but uncorrelated, which implies that these two tensors are independent. This implies that $T_1$ and $T_2$ are independent. Since $\hat{v}(T_1)$ is a unit vector and independent of $T_2$, it follows that $\langle \hat{v}(T_1)^{\otimes s}, T_2 \rangle$ is distributed as $\mN(0, 1)$ conditioned on $\hat{v}(T_1)$ if $T$ is distributed according to $H_0$ of $\pr{tpca}^s(n, \theta)$. Now we have that $$\bP_{H_0}[\mathcal{A}_D(T) = H_1] = \mP\left[ \langle \hat{v}(T_1)^{\otimes s}, T_2 \rangle \ge 2 \sqrt{s \log n} \right] = O(n^{-2s})$$ where the second equality follows from standard Gaussian tail bounds. If $T$ is distributed according to $H_1$, then $\langle \hat{v}(T_1)^{\otimes s}, T_2 \rangle \sim \mN( \theta \langle \hat{v}(T_1), v \rangle^s, 1)$. In this case, $\mathcal{A}_E$ ensures that $\langle \hat{v}(T_1), v \rangle^s = \Omega(n^{s/2})$ since $\| v \|_2 = \sqrt{n}$, and therefore $\theta \langle \hat{v}(T_1), v \rangle^s = \omega(\sqrt{s \log n})$. It therefore follows that $$\bP_{H_1}[\mathcal{A}_D(T) = H_0] \le \mP\left[ \langle \hat{v}(T_1)^{\otimes s}, T_2 \rangle - \theta \langle \hat{v}(T_1), v \rangle^s < - 2 \sqrt{s \log n} \right] = O(n^{-2s})$$ Thus $\mathcal{A}_D$ has Type I$+$II error that is $o(1)$ and the desired low false positive probability, which completes the proof of the lemma.
Universality of Lower Bounds for Learning Sparse Mixtures {#sec:universality}
=========================================================
In this section, we combine our reduction to $\pr{isgm}$ from Section \[subsec:3-rsme-reduction\] with symmetric 3-ary rejection kernels, which were introduced and analyzed in Section \[subsec:srk\]. We remark that the $k$-partite promise in $k\pr{-pds}$ is crucially used in our reduction to obtain this universality. In particular, this promise ensures that the entries of the intermediate $\pr{isgm}$ instance are from one of three distinct distributions, when conditioned on the part of the mixture the sample is from. This is necessary for our application of symmetric 3-ary rejection kernels. An overview of the ideas in this section can be found in Section \[subsec:1-tech-universality\].
Our general lower bound holds given tail bounds on the likelihood ratios between the planted and noise distributions, and applies to a wide range of natural distributional formulations of learning sparse mixtures. For example, our general lower bound recovers the tight computational lower bounds for sparse PCA in the spiked covariance model from [@gao2017sparse; @brennan2018reducibility; @brennan2019optimal]. The results in this section can also be interpreted as a universality principle for computational lower bounds in sparse PCA. We prove the approximate Markov transition guarantees for our reduction to $\pr{glsm}$ in Section \[subsec:universalitybounds\] and discuss the universality conditions needed for our lower bounds in Section \[subsec:universalitydiscussion\].
Reduction to Generalized Learning Sparse Mixtures {#subsec:universalitybounds}
-------------------------------------------------
**Algorithm** $k$<span style="font-variant:small-caps;">-bpds-to-glsm</span>
*Inputs*: Matrix $M \in \{0, 1\}^{m \times n}$, dense subgraph dimensions $k_m$ and $k_n$ where $k_n$ divides $n$ and the following parameters
- partition $F$, edge probabilities $0 < q < p \le 1$ and $w(n)$ as in Figure \[fig:isgmreduction\]
- target $\pr{glsm}$ parameters $(N, k_m, d)$ satisfying $wN \le n$ and $m \le d$, a mixture distribution $\mD$ and target distributions $\{ \mP_{\nu} \}_{\nu \in \mathbb{R}}$ and $\mQ$
1. *Map to Gaussian Sparse Mixtures*: Form the sample $Z_1, Z_2, \dots, Z_N \in \mathbb{R}^d$ by setting $$(Z_1, Z_2, \dots, Z_N) \gets k\pr{-bpds-to-isgm}(M, F)$$ where $k\pr{-bpds-to-isgm}$ is applied with $r = 2$, slow-growing function $w(n)$, $t = \lceil \log_2(n/k_n) \rceil$, target parameters $(N, k_m, d)$, $\epsilon = 1/2$ and $\mu = c_1\sqrt{\frac{k_n}{n \log n}}$ for a sufficiently small constant $c_1 > 0$.
2. *Truncate and 3-ary Rejection Kernels*: Sample $\nu_1, \nu_2, \dots, \nu_N \sim_{\text{i.i.d.}} \mD$, truncate the $\nu_i$ to lie within $[-1, 1]$ and form the vectors $X_1, X_2, \dots, X_N \in \mathbb{R}^d$ by setting $$X_{ij} \gets 3\pr{-srk}(\pr{tr}_{\tau}(Z_{ij}), \mP_{\nu_i}, \mP_{-\nu_i}, \mQ)$$ for each $i \in [N]$ and $j \in [d]$. Here $3\pr{-srk}$ is applied with $N_{\text{it}} = \lceil 4 \log (dN) \rceil$ iterations and with the parameters $$\begin{aligned}
a &= \Phi(\tau) - \Phi(-\tau), \quad \mu_1 = \frac{1}{2} \left( \Phi(\tau + \mu) - \Phi(\tau - \mu) \right), \\
\mu_2 &= \frac{1}{2} \left( 2 \cdot \Phi(\tau) - \Phi(\tau + \mu) - \Phi(\tau - \mu) \right)\end{aligned}$$
3. *Output*: The vectors $(X_1, X_2, \dots, X_N)$.
In this section, we combine symmetric 3-ary rejection kernels with the reduction $k\pr{-bpds-to-isgm}$ to map from $k\pr{-bpds}$ to generalized sparse mixtures. The details of this reduction $k$<span style="font-variant:small-caps;">-bpds-to-glsm</span> are shown in Figure \[fig:universalityreduction\]. As mentioned in Sections \[subsec:1-tech-universality\] and \[subsec:srk\], to reduce to sparse mixtures near their computational barrier, it is crucial to produce multiple planted distributions. Previous rejection kernels do not have enough degrees of freedom to map to three output distributions given their binary inputs. The symmetric 3-ary rejection kernels introduced in Section \[subsec:srk\] overcome this issue by mapping three input distributions to three output distributions. In particular, we will see in this section that their approximate Markov transition guarantees established in Lemma \[lem:srk\] exactly lead to tight computational lower bounds for $\pr{glsm}$. Throughout this section, we will adopt the definitions of $\pr{glsm}$ and $\pr{glsm}_D$ introduced in Sections \[subsec:1-problems-universality\] and \[subsec:2-formulations\].
In order to establish computational lower bounds for $\pr{glsm}$, it is crucial to define a meaningful notion of the level of signal in a set of target distributions $\mD, \mQ$ and $\{ \mP_{\nu} \}_{\nu \in \mathbb{R}}$. This level of signal was defined in Section \[subsec:1-problems-universality\] and is reviewed below for convenience. We remark that this definition will turn out to coincide with the conditions needed to apply symmetric 3-ary rejection kernels. This notion of signal also implicitly defines the universality class over which our computational lower bounds hold.
[defn:univ-signal]{} \[Universal Class and Level of Signal\] Given a parameter $N$, define the collection of distributions $\mathcal{U} = (\mD, \mQ, \{ \mP_{\nu} \}_{\nu \in \mathbb{R}})$ implicitly parameterized by $N$ to be in the universality class $\pr{uc}(N)$ if
- the pairs $(\mP_{\nu}, \mQ)$ are all computable pairs, as in Definition \[def:computable\], for all $\nu \in \mathbb{R}$;
- $\mD$ is a symmetric distribution about zero and $\bP_{\nu \sim \mD}[\nu \in [-1, 1]] = 1 - o(N^{-1})$; and
- there is a level of signal $\tau_{\mathcal{U}} \in \mathbb{R}$ such that, for all $\nu \in [-1, 1]$ and any fixed constant $K > 0$, it holds that $$\left| \frac{d\mP_{\nu}}{d\mQ} (x) - \frac{d\mP_{-\nu}}{d\mQ} (x) \right| = O_N\left(\tau_{\mathcal{U}} \right) \quad \textnormal{and} \quad \left|\frac{d\mP_{\nu}}{d\mQ} (x) + \frac{d\mP_{-\nu}}{d\mQ} (x) - 2 \right| = O_N\left( \tau_{\mathcal{U}}^2 \right)$$ with probability at least $1 - O\left(N^{-K}\right)$ over each of $\mP_{\nu}, \mP_{-\nu}$ and $\mQ$.
In Step 2 of $k$<span style="font-variant:small-caps;">-bpds-to-glsm</span>, we truncate the Gaussians produced by $k\pr{-bpds-to-isgm}$ to generate the input distributions $\text{Tern}$. In Figure \[fig:universalityreduction\], $\pr{tr}_{\tau} : \mathbb{R} \to \{-1, 0, 1\}$ denotes the truncation map given by $$\pr{tr}_{\tau}(x) = \left\{ \begin{array}{ll} 1 &\text{if } x > |\tau| \\ 0 &\text{if } -|\tau| \le x \le |\tau| \\ -1 &\text{if } x < -|\tau| \end{array} \right.$$ The following simple lemma on truncating symmetric triples of Gaussian distributions will be important in the proofs in this section. Its proof is a direct computation and is deferred to Appendix \[subsec:appendix-3-part-3\].
\[lem:truncgauss\] Let $\tau > 0$ be constant, $\mu > 0$ be tending to zero and let $a, \mu_1, \mu_2$ be such that $$\begin{aligned}
&\pr{tr}_\tau(\mN(\mu, 1)) \sim \textnormal{Tern}(a, \mu_1, \mu_2) \\
&\pr{tr}_\tau(\mN(-\mu, 1)) \sim \textnormal{Tern}(a, -\mu_1, \mu_2) \\
&\pr{tr}_\tau(\mN(0, 1)) \sim \textnormal{Tern}(a, 0, 0)\end{aligned}$$ Then it follows that $a > 0$ is constant, $0 < \mu_1 = \Theta(\mu)$ and $0 < \mu_2 = \Theta(\mu^2)$.
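The parameters of the truncated distributions can be computed exactly from Gaussian tail probabilities, which makes the asymptotics in Lemma \[lem:truncgauss\] easy to verify numerically. The sketch below (our own check, using the formulas for $a, \mu_1, \mu_2$ from Figure \[fig:universalityreduction\]) uses Python's `statistics.NormalDist`.

```python
from statistics import NormalDist

Phi = NormalDist().cdf

def tern_params(tau, mu):
    """(a, mu1, mu2) such that tr_tau(N(±mu,1)) ~ Tern(a, ±mu1, mu2) and
    tr_tau(N(0,1)) ~ Tern(a, 0, 0), following Figure fig:universalityreduction."""
    a = Phi(tau) - Phi(-tau)
    mu1 = 0.5 * (Phi(tau + mu) - Phi(tau - mu))
    mu2 = 0.5 * (2 * Phi(tau) - Phi(tau + mu) - Phi(tau - mu))
    return a, mu1, mu2

tau = 1.0
a, m1_small, m2_small = tern_params(tau, 0.01)
_, m1_big, m2_big = tern_params(tau, 0.02)

assert 0 < a < 1                      # a is a constant in (0, 1)
assert 0 < m1_small and 0 < m2_small  # both signals positive for tau > 0
assert 1.9 < m1_big / m1_small < 2.1  # mu1 = Theta(mu): linear in mu
assert 3.8 < m2_big / m2_small < 4.2  # mu2 = Theta(mu^2): quadratic in mu
```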
We will now prove our main approximate Markov transition guarantees for $k\textsc{-bpds-to-glsm}$. The proof follows by combining Theorem \[thm:isgmreduction\], Lemma \[lem:srk\] and an application of the tensorization property of $\TV$.
\[lem:univlem\] Let $n$ be a parameter and $w(n) = \omega(1)$ be a slow-growing function. Fix initial and target parameters as follows:
- [Initial]{.nodecor} $k\pr{-bpds}$ [Parameters:]{.nodecor} vertex counts on each side $m$ and $n$ that are polynomial in one another, dense subgraph dimensions $k_m$ and $k_n$ where $k_n$ divides $n$, constant edge probabilities $0 < q < p \le 1$ and a partition $F$ of $[n]$.
- [Target]{.nodecor} $\pr{glsm}$ [Parameters:]{.nodecor} $(N, d)$ satisfying $wN \le n$, $N \ge n^{c'}$ for some constant $c' > 0$ and $m \le d \le \textnormal{poly}(n)$, target distribution collection $\mathcal{U} = (\mD, \mQ, \{ \mP_{\nu} \}_{\nu \in \mathbb{R}}) \in \pr{uc}(N)$ satisfying $$0 < \tau_{\mU} \le c \cdot \sqrt{\frac{k_n}{n\log n}}$$ for a sufficiently small constant $c > 0$.
Let $\mathcal{A}(M)$ denote $k\textsc{-bpds-to-glsm}$ applied to the adjacency matrix $M$ with these parameters. Then $\mathcal{A}$ runs in $\textnormal{poly}(m, n)$ time and it follows that $$\begin{aligned}
\TV\left( \mathcal{A}\left( \mathcal{M}_{[m] \times [n]}(S \times T, p, q) \right), \, \pr{glsm}_D(N, S, d, \mU) \right) &= o(1) + O\left( w^{-1} + k_n^{-2}m^{-2}r^{-2t} + n^{-2} + N^{-3} d^{-3} \right) \\
\TV\left( \mathcal{A}\left( \textnormal{Bern}(q)^{\otimes m \times n} \right), \, \mQ^{\otimes d \times N} \right) &= O\left( k_n^{-2}m^{-2}r^{-2t} + n^{-2} + N^{-3} d^{-3} \right)\end{aligned}$$ for all subsets $S \subseteq [m]$ with $|S| = k_m$ and subsets $T \subseteq [n]$ with $|T| = k_n$ and $|T \cap F_i| = 1$ for each $1 \le i \le k_n$.
Let $\mathcal{A}_1$ denote Step 1 of $\mathcal{A}$ with input $M$ and output $(Z_1, Z_2, \dots, Z_N)$. First note that $2^t = \Theta(n/k_n)$ by the definition of $t$ and $\log m = \Theta(\log n)$ since $m$ and $n$ are polynomial in one another. Thus for a small enough choice of $c_1 > 0$, we have $$\mu = c_1 \cdot \sqrt{\frac{k_n}{n \log n}} \le \frac{2^{-(t + 1)/2}}{2 \sqrt{6\log (k_n m \cdot 2^t) + 2\log (p - q)^{-1}}} \cdot \min \left\{ \log \left( \frac{p}{q} \right), \log \left( \frac{1 - q}{1 - p} \right) \right\}$$ since $p$ and $q$ are constants. Therefore $\mu$ satisfies the conditions needed to apply Theorem \[thm:isgmreduction\] to $\mathcal{A}_1$. Now let $\mathcal{A}_2$ denote Step 2 of $\mathcal{A}$ with input $(Z_1, Z_2, \dots, Z_N)$ and output $(X_1, X_2, \dots, X_N)$. First suppose that $(Z_1, Z_2, \dots, Z_N) \sim \pr{isgm}_D(N, S, d, \mu, 1/2)$ or in other words where $$Z_i \sim_{\text{i.i.d.}} \pr{mix}_{1/2}\left( \mN( \mu \cdot \mathbf{1}_S, I_d), \mN( -\mu \cdot \mathbf{1}_S, I_d) \right)$$ For the next part of this argument, we condition on: (1) the entire vector $\nu = (\nu_1, \nu_2, \dots, \nu_N)$; and (2) the subset $P \subseteq [N]$ of sample indices corresponding to the positive part $\mN(\mu \cdot \mathbf{1}_S, I_d)$ of the mixture. Let $\mathcal{C}(\nu, P)$ denote the event corresponding to this conditioning. After truncating according to $\pr{tr}_{\tau}$, by Lemma \[lem:truncgauss\] the resulting entries are distributed as $$\pr{tr}_{\tau}(Z_{ij}) \sim \left\{ \begin{array}{ll} \text{Tern}(a, \mu_1, \mu_2) &\text{if } (i, j) \in S \times P \\ \text{Tern}(a, -\mu_1, \mu_2) &\text{if } (i, j) \in S \times P^C \\ \text{Tern}(a, 0, 0) &\text{if } i \not \in S \end{array} \right.$$ Furthermore, these entries are all independent conditioned on $(\nu, P)$. Since $\tau$ is constant, Lemma \[lem:truncgauss\] also implies that $a \in (0, 1)$ is constant, $\mu_1 = \Theta(\mu)$ and $\mu_2 = \Theta(\mu^2)$. 
Let $S_\nu$ be $$S_{\nu} = \left\{ x \in X : 2|\mu_1| \ge \left| \frac{d\mP_{\nu}}{d\mQ} (x) - \frac{d\mP_{-\nu}}{d\mQ} (x) \right| \quad \textnormal{and} \quad \frac{2|\mu_2|}{\max\{a, 1 - a\}} \ge \left|\frac{d\mP_{\nu}}{d\mQ} (x) + \frac{d\mP_{-\nu}}{d\mQ} (x) - 2 \right| \right\}$$ as in Lemma \[lem:srk\]. Since $\mU = (\mD, \mQ, \{ \mP_{\nu} \}_{\nu \in \mathbb{R}}) \in \pr{uc}(N)$ has level of signal $\tau_{\mU} \le c_2 \cdot \mu$ for a sufficiently small constant $c_2 > 0$, we have by definition that the event $\{x \in S_{\nu_i}\}$ occurs with probability at least $1 - \delta_1$ where $\delta_1 = O(n^{-4 - K_1})$ over each of $\mP_{\nu_i}, \mP_{-\nu_i}$ and $\mQ$, where $K_1 > 0$ is a constant for which $d = O(n^{K_1})$. Here, we are implicitly using the fact that $N \ge n^{c'}$ for some constant $c' > 0$.
Now consider applying Lemma \[lem:srk\] to each application of $3\pr{-srk}$ in Step 2 of $\mathcal{A}$. Note that $|\mu_1|^{-1} = O(\sqrt{n \log n})$ and $|\mu_2|^{-1} = O(n \log n)$ since $\mu = \Omega(\sqrt{k_n/n\log n})$ and $k_n \ge 1$. Now consider the $d$-dimensional vectors $X_1', X_2', \dots, X_N'$ with independent entries distributed as $$X'_{ij} \sim \left\{ \begin{array}{ll} \mP_{\nu_i} &\text{if } (i, j) \in S \times P \\ \mP_{-\nu_i} &\text{if } (i, j) \in S \times P^C \\ \mQ &\text{if } i \not \in S \end{array} \right.$$ The tensorization property of $\TV$ from Fact \[tvfacts\] implies that $$\begin{aligned}
&\TV\left( \mL(X_1, X_2, \dots, X_N | \nu, P), \mL(X_1', X_2', \dots, X_N'| \nu, P) \right) \\
&\quad \quad \le \sum_{i = 1}^N \sum_{j = 1}^d \TV\left( \mL(X_{ij} | \nu, P), \mL(X_{ij}' | \nu, P) \right) \\
&\quad \quad \le \sum_{i = 1}^N \sum_{j = 1}^d \TV\left( 3\pr{-srk}(\pr{tr}_{\tau}(Z_{ij}), \mP_{\nu_i}, \mP_{-\nu_i}, \mQ), \mL(X_{ij}' | \nu, P) \right) \\
&\quad \quad \le Nd \left[ 2\delta_1 \left(1 + |\mu_1|^{-1} + |\mu_2|^{-1} \right) + \left( \frac{1}{2} + \delta_1 \left( 1 + |\mu_1|^{-1} + |\mu_2|^{-1} \right) \right)^{N_{\text{it}}} \right] \\
&\quad \quad = O\left( n^{-2} + N^{-3} d^{-3} \right)\end{aligned}$$ since $N \le n$, $\delta_1 = O(n^{-4} d^{-1})$, $N_{\text{it}} = \lceil 4 \log(dN) \rceil$ and by the total variation upper bounds in Lemma \[lem:srk\].
We will now drop the conditioning on $(\nu, P)$ and average over $\nu \sim \mD'$ and $P \sim \text{Unif}\left[2^{[N]}\right]$. Observe that, when not conditioned on $(\nu, P)$, it holds that $$(X_1', X_2', \dots, X_N') \sim \pr{glsm}_D\left(N, S, d, \left( \mD', \mQ, \{ \mP_{\nu} \}_{\nu \in \mathbb{R}} \right) \right)$$ where $\mD'$ is $\mD$ conditioned to lie in $[-1, 1]$. Note that here we used the fact that $\mD$ and therefore $\mD'$ is symmetric about zero. Coupling the latent $\nu_1, \nu_2, \dots, \nu_N$ sampled from $\mD$ and $\mD'$ and then applying the tensorization property of Fact \[tvfacts\] yields that $$\begin{aligned}
&\TV\left( \pr{glsm}_D\left(N, S, d, \left( \mD', \mQ, \{ \mP_{\nu} \}_{\nu \in \mathbb{R}} \right) \right), \pr{glsm}_D\left(N, S, d, \left( \mD, \mQ, \{ \mP_{\nu} \}_{\nu \in \mathbb{R}} \right) \right) \right) \\
&\quad \quad \le \TV( \mD^{\otimes N}, \mD'^{\otimes N}) \le N \cdot \TV( \mD, \mD') \le N \cdot o(N^{-1}) = o(1)\end{aligned}$$ where $\TV( \mD, \mD') = o(N^{-1})$ follows from the conditioning property of $\TV$ from Fact \[tvfacts\] and the fact that $\bP_{\nu \sim \mD}[\nu \in [-1, 1]] = 1 - o(N^{-1})$. The triangle inequality and conditioning property of $\TV$ in Fact \[tvfacts\] now imply that $$\begin{aligned}
&\TV\left( \mathcal{A}_2\left( \pr{isgm}_D(N, S, d, \mu, 1/2) \right), \pr{glsm}_D\left(N, S, d, \mU \right) \right) \\
&\quad \quad \le \TV\left( \mL(X_1, X_2, \dots, X_N), \mL(X_1', X_2', \dots, X_N') \right) + \TV\left( \mL(X_1', X_2', \dots, X_N'), \pr{glsm}_D\left(N, S, d, \mU \right) \right) \\
&\quad \quad \le \bE_{\nu \sim \mD'} \, \bE_{P \sim \text{Unif}\left[ 2^{[N]} \right]} \, \TV\left( \mL(X_1, X_2, \dots, X_N | \nu, P), \mL(X_1', X_2', \dots, X_N'| \nu, P) \right) \\
&\quad \quad \quad \quad + \TV\left( \pr{glsm}_D\left(N, S, d, \left( \mD', \mQ, \{ \mP_{\nu} \}_{\nu \in \mathbb{R}} \right) \right), \pr{glsm}_D\left(N, S, d, \mU \right) \right) \\
&\quad \quad = o(1) + O\left( n^{-2} + N^{-3} d^{-3} \right)\end{aligned}$$ Now consider the case when $Z_1, Z_2, \dots, Z_N \sim_{\text{i.i.d.}} \mN(0, I_d)$. Repeating the argument above with $S = \emptyset$ and observing that $(X_1', X_2', \dots, X_N') \sim \mQ^{\otimes d \times N}$ yields that $$\TV\left( \mathcal{A}_2\left( \mN(0, I_d)^{\otimes N} \right), \mQ^{\otimes d \times N} \right) = O\left( n^{-2} + N^{-3} d^{-3} \right)$$ We now apply Lemma \[lem:tvacc\] to the steps $\mathcal{A}_1$ and $\mathcal{A}_2$ under each of $H_0$ and $H_1$, as in the proof of Theorem \[thm:isgmreduction\]. Under $H_1$, consider Lemma \[lem:tvacc\] applied to the following sequence of distributions $$\mathcal{P}_0 = \mathcal{M}_{[m] \times [n]}(S \times T, p, q), \quad \mathcal{P}_1 = \pr{isgm}_D(N, S, d, \mu, 1/2) \quad \text{and} \quad \mathcal{P}_2 = \pr{glsm}_D\left(N, S, d, \mU \right)$$ By Theorem \[thm:isgmreduction\] and the argument above, we can take $$\epsilon_1 = O\left( w^{-1} + k_n^{-2}m^{-2}r^{-2t} + n^{-2} + N^{-3} d^{-3} \right) \quad \text{and} \quad \epsilon_2 = o(1) + O\left( n^{-2} + N^{-3} d^{-3} \right)$$ By Lemma \[lem:tvacc\], we therefore have that $$\TV\left( \mathcal{A}\left( \mathcal{M}_{[m] \times [n]}(S \times T, p, q) \right), \, \pr{glsm}_D(N, S, d, \mU) \right) = o(1) + O\left( w^{-1} + k_n^{-2}m^{-2}r^{-2t} + n^{-2} + N^{-3} d^{-3} \right)$$ which proves the desired result in the case of $H_1$. Under $H_0$, similarly applying Theorem \[thm:isgmreduction\], the argument above and Lemma \[lem:tvacc\] to the distributions $$\mathcal{P}_0 = \textnormal{Bern}(q)^{\otimes m \times n}, \quad \mathcal{P}_1 = \mN(0, I_d)^{\otimes N} \quad \text{and} \quad \mathcal{P}_2 = \mQ^{\otimes d \times N}$$ yields the total variation bound $$\TV\left( \mathcal{A}\left( \textnormal{Bern}(q)^{\otimes m \times n} \right), \, \mQ^{\otimes d \times N} \right) = O\left( k_n^{-2}m^{-2}r^{-2t} + n^{-2} + N^{-3} d^{-3} \right)$$ which completes the proof of the lemma.
We now use this theorem to deduce our universality principle for lower bounds in $\pr{glsm}$. The proof of this next theorem is similar to that of Theorems \[thm:rsme-lb\] and \[thm:uslr-lb\] and is deferred to Appendix \[subsec:appendix-3-part-3\].
[thm:glsm-lb]{} \[Computational Lower Bounds for $\pr{glsm}$\] Let $n, k$ and $d$ be polynomial in each other and such that $k = o(\sqrt{d})$. Suppose that the collection of distributions $\mU = (\mD, \mQ, \{ \mP_{\nu} \}_{\nu \in \mathbb{R}})$ is in $\pr{uc}(n)$. Then the $k\pr{-bpc}$ and $k\pr{-bpds}$ conjectures for constant $0 < q < p \le 1$ both imply a computational lower bound for $\pr{glsm}\left(n, k, d, \mU \right)$ at all sample complexities $n = \tilde{o}\left(\tau_{\mU}^{-4}\right)$.
The Universality Class UC$(n)$ and Level of Signal $\tau_{\mU}$ {#subsec:universalitydiscussion}
---------------------------------------------------------------
The result in Theorem \[thm:glsm-lb\] shows universality of the computational sample complexity of $n = \tilde{\Omega}(\tau_{\mU}^{-4})$ for learning sparse mixtures under the mild conditions of $\pr{uc}(n)$. In this section, we discuss this lower bound, its implications, the universality class $\pr{uc}(n)$ and the level of signal $\tau_{\mU}$.
#### Remarks on UC$(n)$ and $\tau_{\mU}$.
The conditions for $\mU = (\mD, \mQ, \{ \mP_{\nu} \}_{\nu \in \mathbb{R}}) \in \pr{uc}(n)$ and the definition of $\tau_{\mU}$ have the following two notable properties.
- *They are defined in terms of marginals*: The class $\pr{uc}(n)$ and $\tau_{\mU}$ are defined entirely in terms of the likelihood ratios $d\mP_\nu/d\mQ$ between the planted and non-planted marginals. In particular, they are independent of the sparsity level $k$ and other high-dimensional properties of the distribution $\pr{glsm}$ constructed from the $\mP_{\nu}$ and $\mQ$. Theorem \[thm:glsm-lb\] thus establishes a computational lower bound for $\pr{glsm}$ at a sample complexity entirely based on properties of the marginals of $\mP_{\nu}$ and $\mQ$.
- *Their dependence on $n$ is negligible*: The parameter $n$ only enters the definitions of $\pr{uc}(n)$ and $\tau_{\mU}$ through requirements on tail probabilities. When the likelihood ratios $d\mP_\nu/d\mQ$ are relatively concentrated, the dependence of the conditions in $\pr{uc}(n)$ and $\tau_{\mU}$ on $n$ is nearly negligible. If the ratios $d\mP_\nu/d\mQ$ are concentrated under $\mP_{\nu}$ and $\mQ$ with exponentially decaying tails, then the tail probability bound requirement of $O(n^{-K})$ only appears as a $\text{polylog}(n)$ factor in $\tau_{\mU}$. This will be the case in the examples that appear later in this section.
#### $\mD$ and Parameterization over $[-1,1]$.
$\mD$ and the indices of $\mP_{\nu}$ can be reparameterized without changing the underlying problem. The assumption that $\mD$ is symmetric and mostly supported on $[-1, 1]$ is for notational convenience. As in the case of $\tau_{\mU}$ and the examples later in this section, the tail probability requirement of $o(n^{-1})$ for $\mD$ only appears as a $\text{polylog}(n)$ factor in the computational lower bound of $n = \tilde{\Omega}(\tau_{\mU}^{-4})$ if $\mD$ is concentrated with exponential tails.
While the output vectors $(X_1, X_2, \dots, X_N)$ of our reduction $k$<span style="font-variant:small-caps;">-bpds-to-glsm</span> are independent, their coordinates have dependence induced by the mixture $\mD$. The fact that our reduction samples the $\nu_i$ implies that if these values were revealed to the algorithm, the problem would still remain hard: an algorithm for the latter could be used together with the reduction to solve $k\pr{-bpds}$. However, even given the value $\nu_i$ for the $i$th sample, our reduction is such that whether the planted marginals in the $i$th sample are distributed according to $\mP_{\nu_i}$ or $\mP_{-\nu_i}$ remains unknown to the algorithm. Intuitively, our setup chooses to parameterize the distribution $\mD$ over $[-1, 1]$ such that the sign ambiguity between $\mP_{\nu_i}$ and $\mP_{-\nu_i}$ is what produces hardness below the sample complexity of $n = \tilde{\Omega}(\tau_{\mU}^{-4})$.
#### Implications for Concentrated LLR.
We now give several remarks on $\tau_{\mU}$ in the case that the log-likelihood ratios (LLR) $\log d\mP_{\nu}/d\mQ (x)$ are sufficiently well-concentrated if $x \sim \mQ$ or $x \sim \mP_{\nu}$. Suppose that $\mU = (\mD, \mQ, \{ \mP_{\nu} \}_{\nu \in \mathbb{R}}) \in \pr{uc}(n)$, fix some arbitrarily large constant $c > 0$ and fix some $\nu \in [-1,1]$. If $S_{\mQ}$ is the common support of the $\mP_{\nu}$ and $\mQ$, define $S$ to be $$S = \left\{ x \in S_{\mQ} : c \cdot \tau_{\mU} \ge \left| \frac{d\mP_{\nu}}{d\mQ} (x) - \frac{d\mP_{-\nu}}{d\mQ} (x) \right| \quad \textnormal{and} \quad c \cdot \tau_{\mU}^2 \ge \left|\frac{d\mP_{\nu}}{d\mQ} (x) + \frac{d\mP_{-\nu}}{d\mQ} (x) - 2 \right| \right\}$$ Suppose that $\tau_{\mU} = \Omega(n^{-K})$ for some constant $K > 0$ and let $c$ be large enough that $S$ occurs with probability at least $1 - O(n^{-K})$ under each of $\mP_{\nu}, \mP_{-\nu}$ and $\mQ$. Note that such a constant $c$ is guaranteed by Definition \[defn:univ-signal\]. Now observe that $$\begin{aligned}
\TV\left( \mP_{\nu}, \mP_{-\nu} \right) &= \frac{1}{2} \cdot \bE_{x \sim \mQ} \left[ \left| \frac{d\mP_{\nu}}{d\mQ} (x) - \frac{d\mP_{-\nu}}{d\mQ} (x) \right| \right] \\
&\le \frac{1}{2} \cdot \bE_{x \sim \mQ} \left[ \left| \frac{d\mP_{\nu}}{d\mQ} (x) - \frac{d\mP_{-\nu}}{d\mQ} (x) \right| \cdot \mathbf{1}_S(x) \right] + \frac{1}{2} \cdot \mP_{\nu}\left[S^C\right] + \frac{1}{2} \cdot \mP_{-\nu}\left[S^C\right] \\
&\le c \cdot \tau_{\mU} + O\left(n^{-K}\right) = O\left(\tau_{\mU}\right)\end{aligned}$$ A similar calculation with the second condition defining $S$ shows that $$\TV\left( \pr{mix}_{1/2}\left(\mP_{\nu}, \mP_{-\nu} \right), \mQ \right) = O\left( \tau_{\mU}^2 \right)$$ If the LLRs $\log d\mP_{\nu}/d\mQ$ are sufficiently well-concentrated, then the random variables $$\left| \frac{d\mP_{\nu}}{d\mQ} (x) - \frac{d\mP_{-\nu}}{d\mQ} (x) \right| \quad \text{and} \quad \left|\frac{d\mP_{\nu}}{d\mQ} (x) + \frac{d\mP_{-\nu}}{d\mQ} (x) - 2 \right|$$ will also concentrate around their means if $x \sim \mQ$. LLR concentration also implies that this is true if $x \sim \mP_\nu$ or $x \sim \mP_{-\nu}$. Thus, under sufficient concentration, the definition of the level of signal $\tau_{\mU}$ reduces to the much more interpretable pair of upper bounds $$\TV\left( \mP_{\nu}, \mP_{-\nu} \right) = O\left(\tau_{\mU}\right) \quad \text{and} \quad \TV\left( \pr{mix}_{1/2}\left(\mP_{\nu}, \mP_{-\nu} \right), \mQ \right) = O\left(\tau_{\mU}^2 \right)$$ These conditions directly measure the amount of statistical signal present in the planted marginals $\mP_{\nu}$. The relevant calculations for an example application of Theorem \[thm:glsm-lb\] when the LLR concentrates is shown below for sparse PCA. In [@brennan2019universality], various assumptions of concentration of the LLR and analogous implications for computational lower bounds in submatrix detection are analyzed in detail. We refer the reader to Sections 3 and 9 of [@brennan2019universality] for the calculations needed to make the discussion here precise.
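As a concrete illustration of these two scalings, the following numerical sketch (our own; the Gaussian pair $\mP_{\pm\nu} = \mN(\pm t, 1)$ with $\mQ = \mN(0, 1)$ is an assumption chosen to match the sparse PCA example below) estimates both total variation distances by quadrature:

```python
import math

def gauss_pdf(mu):
    # Density of N(mu, 1).
    c = 1.0 / math.sqrt(2.0 * math.pi)
    return lambda x: c * math.exp(-0.5 * (x - mu) ** 2)

def tv(p, q, lo=-12.0, hi=12.0, m=120001):
    # TV(P, Q) = (1/2) * integral of |p - q|, via the trapezoid rule.
    h = (hi - lo) / (m - 1)
    total = 0.0
    for i in range(m):
        w = 0.5 if i in (0, m - 1) else 1.0
        total += w * abs(p(lo + i * h) - q(lo + i * h))
    return 0.5 * total * h

def tv_pair(t):
    # Returns (TV(P_t, P_{-t}), TV(mix(P_t, P_{-t}), Q)): Theta(t) and Theta(t^2).
    p_plus, p_minus, q = gauss_pdf(t), gauss_pdf(-t), gauss_pdf(0.0)
    mix = lambda x: 0.5 * (p_plus(x) + p_minus(x))
    return tv(p_plus, p_minus), tv(mix, q)
```

Doubling $t$ should roughly double the first distance and quadruple the second, matching the $O(\tau_{\mU})$ and $O(\tau_{\mU}^2)$ bounds above.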
We remark that, assuming sufficient concentration on the LLR, the analysis of the $k$-sparse eigenvalue statistic from [@berthet2013complexity] yields an information-theoretic upper bound for $\pr{glsm}$. Given $\pr{glsm}$ samples $(X_1, X_2, \dots, X_n)$, consider forming the LLR-processed samples $Z_i$ with $$Z_{ij} = \bE_{\nu \sim \mD} \left[ \log \frac{d\mP_{\nu}}{d\mQ} (X_{ij}) \right]$$ for each $i \in [n]$ and $j \in [d]$. Now consider taking the $k$-sparse eigenvalue of the samples $Z_1, Z_2, \dots, Z_n$. Under sub-Gaussianity assumptions on the $Z_{ij}$, the analysis in Theorem 2 of [@berthet2013complexity] applies. Similarly, the analysis in Theorem 5 of [@berthet2013complexity] continues to hold, showing that the semidefinite programming algorithm for sparse PCA yields an algorithmic upper bound for $\pr{glsm}$. As information-theoretic limits and algorithms are not the focus of this paper, we omit the technical details needed to make this rigorous.
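For intuition, the $k$-sparse eigenvalue statistic can be written down directly. The sketch below (our own, brute force and hence exponential in $d$, with arbitrary toy parameters) maximizes the top eigenvalue of the empirical covariance over all $k$-subsets of coordinates and separates a toy null from a toy planted spike:

```python
import itertools
import numpy as np

def k_sparse_eigenvalue(Z, k):
    # Max over k-subsets S of [d] of the top eigenvalue of the empirical
    # covariance of Z restricted to S; brute force, so only for small d.
    n, d = Z.shape
    cov = Z.T @ Z / n
    return max(
        np.linalg.eigvalsh(cov[np.ix_(S, S)])[-1]
        for S in itertools.combinations(range(d), k)
    )

rng = np.random.default_rng(0)
n, d, k, theta = 2000, 8, 2, 2.0
v = np.zeros(d); v[:k] = 1.0 / np.sqrt(k)          # k-sparse unit spike
null = rng.normal(size=(n, d))                     # H_0 samples
spiked = rng.normal(size=(n, d)) + np.sqrt(theta) * rng.normal(size=(n, 1)) * v

stat_null = k_sparse_eigenvalue(null, k)           # close to 1
stat_spiked = k_sparse_eigenvalue(spiked, k)       # close to 1 + theta
```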
In many setups captured by $\pr{glsm}$ such as sparse PCA, learning sparse mixtures of Gaussians and learning sparse mixtures of Rademachers, these analyses and our lower bound in Theorem \[thm:glsm-lb\] together yield a $k$-to-$k^2$ statistical-computational gap. How our lower bound yields a $k^2$ dependence in the computational barriers for these problems is discussed below.
#### Sparse PCA and Specific Distributions.
One specific example that is captured by our universality principle and falls under the concentrated LLR setup discussed above is sparse PCA in the spiked covariance model. The statistical-computational gaps of sparse PCA have been characterized based on the planted clique conjecture in a line of work [@berthet2013optimal; @berthet2013complexity; @wang2016statistical; @gao2017sparse; @brennan2018reducibility; @brennan2019optimal]. We show that our universality principle faithfully recovers the $k$-to-$k^2$ gap for sparse PCA shown in [@berthet2013optimal; @berthet2013complexity; @wang2016statistical; @gao2017sparse; @brennan2018reducibility] assuming the $k\pr{-bpds}$ conjecture. As discussed in Section \[sec:2-secret-leakage\], any of the $k\pr{-bpc}$, $k\pr{-pds}$ or $k\pr{-pc}$ conjectures therefore also yields nontrivial lower bounds. We remark that [@brennan2019optimal] shows stronger hardness based on weaker forms of the $\pr{pc}$ conjecture.
We show in the next lemma that sparse PCA corresponds to $\pr{glsm}\left(n, k, d, \mU \right)$ for a proper choice of $\mU = (\mD, \mQ, \{ \mP_{\nu} \}_{\nu \in \mathbb{R}}) \in \pr{uc}(n)$ and $\tau_{\mU}$ so that the lower bound $n = \tilde{\Omega}(\tau_{\mU}^{-4})$ exactly corresponds to the conjectured computational barrier in Sparse PCA. Recall that the hypothesis testing problem $\pr{spca}(n, k, d, \theta)$ has hypotheses $$\begin{aligned}
&H_0 : (X_1, X_2, \dots, X_n) \sim_{\textnormal{i.i.d.}} \mN(0, I_d) \\
&H_1 : (X_1, X_2, \dots, X_n) \sim_{\textnormal{i.i.d.}} \mN\left(0, I_d + \theta vv^\top\right)\end{aligned}$$ where $v$ is a $k$-sparse unit vector in $\mathbb{R}^d$ chosen uniformly at random among all such vectors with nonzero entries equal to $1/\sqrt{k}$.
The problem $\pr{spca}(n, k, d, \theta)$ can be expressed as $\pr{glsm}(n, k, d, \mU)$ where $\mU = (\mD, \mQ, \{ \mP_{\nu} \}_{\nu \in \mathbb{R}}) \in \pr{uc}(n)$ is given by $$\mP_{\nu} = \mN\left( 2\nu \sqrt{\frac{\theta \log n}{k}}, 1 \right) \textnormal{ for all } \nu \in \mathbb{R}, \quad \mQ = \mN(0, 1) \quad \textnormal{and} \quad \mD = \mN\left(0, \frac{1}{4\log n} \right)$$ and has valid level of signal $\tau_{\mU} = \Theta\left( \sqrt{\frac{\theta (\log n)^2}{k}} \right)$ if it holds that $\theta (\log n)^2 = o(k)$.
Note that if $X \sim \mN\left(0, I_d + \theta vv^\top \right)$ then $X$ can be written as $$X = 2\sqrt{\theta \log n} \cdot gv + G \quad \text{where } g \sim \mN\left(0, \frac{1}{4\log n} \right) \text{ and } G \sim \mN(0, I_d)$$ and where $g$ and $G$ are independent. This follows from the fact that the random variable on the right-hand side above is a jointly Gaussian vector with covariance matrix given by the sum of the covariance matrices of the individual terms. This observation implies that $\pr{spca}(n, k, d, \theta)$ is exactly the problem $\pr{glsm}(n, k, d, \mU)$. Now observe that the probability that $x \sim \mD$ satisfies $x \in [-1, 1]$ is $1 - o(n^{-1})$ by standard Gaussian tail bounds. Fix some $\nu\in [-1, 1]$ and let $t = 2\nu \sqrt{\frac{\theta \log n}{k}}$. Note that $$\left|\frac{d\mP_\nu}{d\mQ}(x) - \frac{d\mP_{-\nu}}{d\mQ}(x) \right| = \left| e^{tx - t^2/2} - e^{-tx - t^2/2}\right| = \Theta \left( |tx| \right)$$ if $|tx| = o(1)$. As long as $x = O(\sqrt{\log n})$, it follows that $|tx| = O(\tau_{\mU}) = o(1)$ from the definition of $\tau_{\mU}$ and the fact that $\theta (\log n)^2 = o(k)$. Note that $x = O(\sqrt{\log n})$ occurs with probability at least $1 - O(n^{-K})$ for any constant $K > 0$ under each of $\mP_{\nu}$ where $\nu \in [-1, 1]$ and $\mQ$ by standard Gaussian tail bounds. Now observe that $$\left|\frac{d\mP_\nu}{d\mQ}(x) + \frac{d\mP_{-\nu}}{d\mQ}(x) - 2\right| = \left| e^{tx - t^2/2} + e^{-tx - t^2/2} - 2\right| = \Theta(t^2)$$ holds if $|tx| = o(1)$, which is true as long as $x = O(\sqrt{\log n})$ and thus holds with probability $1 - O(n^{-K})$ for any fixed $K > 0$. Since $t^2 = O(\tau_{\mU}^2)$ for any $\nu \in [-1, 1]$, this completes the proof that $\mU \in \pr{uc}(n)$ with level of signal $\tau_{\mU}$.
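The covariance decomposition in this argument can be sanity-checked by simulation. In the sketch below (our own; the parameter values are arbitrary, and $m$ denotes the number of Monte Carlo samples rather than the $n$ of the problem), the empirical covariance of samples of the form $2\sqrt{\theta \log n} \cdot gv + G$ matches $I_d + \theta vv^\top$:

```python
import numpy as np

rng = np.random.default_rng(0)
m, d, k = 200_000, 6, 3
theta, log_n = 0.5, 4.0

v = np.zeros(d); v[:k] = 1.0 / np.sqrt(k)                    # k-sparse unit spike
g = rng.normal(scale=np.sqrt(1.0 / (4.0 * log_n)), size=m)   # g ~ N(0, 1/(4 log n))
G = rng.normal(size=(m, d))                                  # rows ~ N(0, I_d)
X = 2.0 * np.sqrt(theta * log_n) * g[:, None] * v[None, :] + G

empirical = X.T @ X / m
target = np.eye(d) + theta * np.outer(v, v)                  # I_d + theta * v v^T
```

The scaling works out because $2\sqrt{\theta \log n} \cdot g$ has variance $4\theta \log n \cdot \frac{1}{4\log n} = \theta$.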
Combining this lemma with Theorem \[thm:glsm-lb\] yields that the $k\pr{-bpds}$ conjecture implies a computational lower bound for Sparse PCA at the barrier $n = \tilde{o}(k^2/\theta^2)$ as long as $\theta(\log n)^2 = o(k)$ and $k = o(\sqrt{d})$, which matches the planted clique lower bounds in [@berthet2013optimal; @berthet2013complexity; @wang2016statistical; @gao2017sparse; @brennan2018reducibility]. Similar calculations to those in the lemma above can be used to identify the computational lower bound implied by Theorem \[thm:glsm-lb\] for many other choices of $\mU = (\mD, \mQ, \{ \mP_{\nu} \}_{\nu \in \mathbb{R}}) \in \pr{uc}(n)$. Some examples are:
- Balanced sparse Gaussian mixtures where $\mQ = \mN(0, 1)$, $\mP_{\nu} = \mN(\theta \nu, 1)$ where $\mD$ is any symmetric distribution over $[-1, 1]$ can be shown to satisfy that $\tau_{\mU} = \Theta\left(\theta \sqrt{\log n}\right)$ if $\theta \sqrt{\log n} = o(1)$.
- The Bernoulli case where $\mQ = \text{Bern}(1/2)$, $\mP_{\nu} = \text{Bern}(1/2 + \theta \nu)$ and $\mD$ is any symmetric distribution over $[-1, 1]$ can be shown to satisfy that $\tau_{\mU} = \Theta\left(\theta \right)$ if $\theta \le 1/2$.
- Sparse mixtures of exponential distributions where $\mQ = \text{Exp}(\lambda)$, $\mP_{\nu} = \text{Exp}(\lambda + \theta \nu)$ and $\mD$ is any symmetric distribution over $[-1, 1]$ can be shown to satisfy that $\tau_{\mU} = \tilde{\Theta}\left( \theta \lambda^{-1} \log n \right)$ if it holds that $\theta \log n = o(\lambda)$.
- Sparse mixtures of centered Gaussians with different variances where $\mQ = \mN(0, 1)$, $\mP_{\nu} = \mN(0, 1 + \theta \nu)$ and $\mD$ is any symmetric distribution over $[-1, 1]$ can be shown to satisfy that $\tau_{\mU} = \Theta\left(\theta \log n \right)$ if $\theta \log n = o(1)$.
We remark that $\tau_{\mU}$ can be calculated for many more choices of $\mD, \mQ$ and $\mP_{\nu}$ using the computations outlined in the discussion above on the implications of our result for concentrated LLR.
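For example, the Bernoulli case admits an exact check: with $\mQ = \text{Bern}(1/2)$ and $\mP_{\nu} = \text{Bern}(1/2 + \theta\nu)$, the first likelihood ratio difference equals $4\theta|\nu|$ and the second is identically zero, so $\tau_{\mU} = \Theta(\theta)$. A short exact-arithmetic sketch (ours, with arbitrary rational parameters):

```python
from fractions import Fraction

def bern_ratio(p, x):
    # dP/dQ at x in {0, 1} for P = Bern(p) and Q = Bern(1/2).
    q = Fraction(1, 2)
    return (p if x == 1 else 1 - p) / (q if x == 1 else 1 - q)

theta, nu = Fraction(1, 10), Fraction(3, 4)
p_plus = Fraction(1, 2) + theta * nu               # parameter of P_nu
p_minus = Fraction(1, 2) - theta * nu              # parameter of P_{-nu}

diff = {x: abs(bern_ratio(p_plus, x) - bern_ratio(p_minus, x)) for x in (0, 1)}
ssum = {x: abs(bern_ratio(p_plus, x) + bern_ratio(p_minus, x) - 2) for x in (0, 1)}
# diff[x] = 4 * theta * nu exactly and ssum[x] = 0 for both x in {0, 1}.
```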
Computational Lower Bounds for Recovery and Estimation {#subsec:2-estimation}
======================================================
In this section, we outline several ways to deduce that our reductions to the hypothesis testing formulations in the previous section imply computational lower bounds for natural recovery and estimation formulations of the problems introduced in Section \[sec:1-problems\]. We first introduce a notion of average-case reductions in total variation between recovery problems and note that most of our reductions satisfy these stronger conditions in addition to those in Section \[subsec:2-tvreductions\]. We then discuss alternative methods of obtaining hardness of recovery and estimation in the problems that we consider directly from computational lower bounds for detection.
In the previous section, we showed that lower bounds for our detection formulations of $\pr{rsme}$ and $\pr{glsm}$ directly imply lower bounds for natural estimation and recovery variants, respectively. In Section \[sec:3-tensor\], we showed that our lower bounds against blackboxes solving the detection formulation of tensor PCA with a low false positive probability of error directly imply hardness of estimating $v$ in $\ell_2$ norm. As discussed in Section \[subsec:1-problems-hidden-partition\], the problems of recovering the hidden partitions in $\pr{ghpm}$ and $\pr{bhpm}$ have very different barriers than the testing problem we consider in this work. In this section, we will discuss recovery and estimation hardness for the remaining problems from Section \[sec:1-problems\].
Our Reductions and Computational Lower Bounds for Recovery
----------------------------------------------------------
Similar to the framework in Section \[subsec:2-tvreductions\] for reductions showing hardness of detection, there is a natural notion of a reduction in total variation transferring computational lower bounds between recovery problems. Let $\mP(n, \tau)$ denote the recovery problem of estimating $\theta \in \Theta_\mP$ within some small loss $\ell_{\mP}(\theta, \hat{\theta}) \le \tau$ given an observation from the distribution $\mP_D(\theta)$. Here, $n$ is any parameterization such that this observation has size $\text{poly}(n)$ and, as per usual, $\ell_\mP$, $\Theta_\mP$ and $\tau$ are implicitly functions of $n$. Define the problem $\mP'(N, \tau')$ analogously. The following is the definition of a reduction in total variation between $\mP$ and $\mP'$.
\[defn:tvreductions-recovery\] A $\textnormal{poly}(n)$ time algorithm $\mathcal{A}$ sending valid inputs for $\mP(n, \tau)$ to valid inputs for $\mP'(N, \tau')$ is a reduction in total variation from $\mP$ to $\mP'$ if the following criteria are met for all $\theta \in \Theta_{\mP}$:
1. There is a distribution $\mD(\theta)$ over $\Theta_{\mP'}$ such that $$\TV\left( \mathcal{A}(\mP_D(\theta)), \, \bE_{\theta' \sim \mD(\theta)} \, \mP'_D(\theta')\right) = o_n(1)$$
2. There is a $\textnormal{poly}(n)$ time randomized algorithm $\mathcal{B}(X, \hat{\theta'})$ mapping instances $X$ of $\mP(n, \tau)$ and $\hat{\theta'} \in \Theta_{\mP'}$ to $\hat{\theta} \in \Theta_{\mP}$ with the following property: if $X \sim \mP_D(\theta)$, $\theta'$ is an arbitrary element of $\textnormal{supp} \, \mD(\theta)$ and $\hat{\theta'}$ is guaranteed to satisfy that $\ell_{\mP'}(\theta', \hat{\theta'}) \le \tau'$, then $\mathcal{B}(X, \hat{\theta'})$ outputs some $\hat{\theta}$ with $\ell_{\mP}(\theta, \hat{\theta}) \le \tau$ with probability $1 - o_n(1)$.
While this definition has a number of technical conditions, it is conceptually simple. A randomized algorithm $\mathcal{A}$ is a reduction in total variation from $\mP$ to $\mP'$ if it maps a sample from the conditional distribution $\mP_D(\theta)$ approximately to a sample from a mixture of $\mP_D(\theta')$, where the mixture is over a distribution $\mD(\theta)$ determined by $\theta$. Furthermore, there must be an efficient way $\mathcal{B}$ to recover a good estimate $\hat{\theta}$ of $\theta$ given a good estimate $\hat{\theta'}$ of $\theta'$ and the original instance $X$ of $\mP$. The reason that (2) must be true for any $\theta' \in \textnormal{supp} \, \mD(\theta)$ is that, to transfer recovery hardness from $\mP$ to $\mP'$, the algorithm $\mathcal{B}$ will be applied to the output $\theta'$ of a blackbox solving $\mP'$ applied to $\mathcal{A}(X)$. In this setting, $\theta'$ and $X$ are dependent and allowing $\theta' \in \textnormal{supp} \, \mD(\theta)$ in the definition above accounts for this. Note that, as per usual, $\mathcal{A}$ must satisfy the properties in the definition above oblivious to $\theta$. The following lemma shows that Definition \[defn:tvreductions-recovery\] fulfills its objective and transfers hardness of recovery from $\mP$ to $\mP'$. Its proof is simple and deferred to Appendix \[subsec:appendix-2-tv\].
\[lem:tvreductions-recovery\] Suppose that there is a reduction $\mathcal{A}$ from $\mP(n, \tau)$ to $\mP'(N, \tau')$ satisfying the conditions in Definition \[defn:tvreductions-recovery\]. If there is a polynomial time algorithm $\mathcal{E}'$ solving $\mP'(N, \tau')$ with probability at least $p$, then there is a polynomial time algorithm $\mathcal{E}$ solving $\mP(n, \tau)$ with probability at least $p - o_n(1)$.
The recovery variants of the problems we consider all take the form of $\mP(n, \tau)$. For example, $\Theta_{\mP}$ is the set of $k$-sparse vectors of bounded norm and $\ell_{\mP}$ is $\ell_2$ in $\pr{mslr}$, and $\Theta_{\mP}$ is the set of $(n/k)$-subsets of $[n]$ and $\ell_{\mP}$ is the size of the symmetric difference between two $(n/k)$-subsets in $\pr{isbm}$. In $\pr{rslr}$, $\Theta_{\mP}$ can be taken to be the set of all $(u, \mathcal{A})$ where $u$ is a $k$-sparse vector of bounded norm and $\mathcal{A}$ is a valid adversary. The loss $\ell_{\mP}$ is then independent of $\mathcal{A}$ and given by the $\ell_2$ norm on $u$. Throughout Parts \[part:reductions\] and \[part:lower-bounds\], the guarantees we proved for our reductions among the hypothesis testing formulations from Section \[subsec:2-formulations\] generally took the form of condition (1) in Definition \[defn:tvreductions-recovery\]. Some reductions had a post-processing step where coordinates in the output instance are randomly permuted or subsampled, but these can simply be removed to yield a guarantee matching the form of (1). In light of this and Lemma \[lem:tvreductions-recovery\], it suffices to show that our reductions also satisfy condition (2) in Definition \[defn:tvreductions-recovery\]. We outline how to construct these algorithms $\mathcal{B}$ for each of our remaining problems below.
#### Reductions from $\pr{bpc}$ and $k\pr{-bpc}$.
All of our reductions from $\pr{bpc}$ and $k\pr{-bpc}$ to $\pr{rsme}$, $\pr{neg-spca}$, $\pr{mslr}$ and $\pr{rslr}$ map from an instance with left biclique vertex set $S$ with $|S| = k_m$ to an instance with hidden vector $u = \gamma \cdot k_m^{-1/2} \cdot \mathbf{1}_S$ for some $\gamma \in (0, 1)$. In the notation of Definition \[defn:tvreductions-recovery\], $\mD(S)$ is a point mass on $u$. We now outline how such reductions imply hardness of estimation up to any $\ell_2$ error $\tau' = o(\gamma)$.
To verify condition (2) of Definition \[defn:tvreductions-recovery\], it suffices to give an efficient algorithm $\mathcal{B}$ recovering $S$ and the right biclique vertices $S'$ from the original $\pr{bpc}$ or $k\pr{-bpc}$ instance $G$ and an estimate $\hat{u}$ satisfying that $\| \hat{u} - \gamma \cdot k_m^{-1/2} \cdot \mathbf{1}_S \|_2 \le \tau'$. Suppose that $|S| = k_m$ and $|S'|$ are both $\omega(\log n)$. Let $\hat{S}$ be the set of the largest $k_m$ entries of $\hat{u}$ and note that $\| \gamma^{-1} \cdot \hat{u} - k_m^{-1/2} \cdot \mathbf{1}_S \|_2 = o(1)$, which can be verified to imply that at least $(1 - o(1))k_m$ of $\hat{S}$ must be in $S$. A union bound and Chernoff bound can be used to show that, in a $\pr{bpc}$ instance with left and right biclique sets $S$ and $S'$, there is no right vertex in $[n] \backslash S'$ with at least $3k_m/4$ neighbors in $S$ with probability $1 - o_n(1)$ if $k_m \gg \log n$. Therefore $S'$ is exactly the set of right vertices with at least $5k_m/6$ neighbors in $\hat{S}$ with probability $1 - o_n(1)$. Taking the common neighbors of $S'$ now recovers $S$ with high probability. Thus this procedure of taking the $k_m$ largest entries $\hat{S}$ of $\hat{u}$, taking right vertices with many neighbors in $\hat{S}$ and then taking their common neighborhoods, exactly solves the $\pr{bpc}$ and $k\pr{-bpc}$ recovery problems. We remark that hardness for these exact recovery problems follows from detection hardness, as bipartite Erdős-Rényi random graphs do not contain bicliques of left and right sizes $\omega(\log n)$ with probability $1 - o_n(1)$.
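The three-stage recovery procedure just described — top-$k_m$ entries of $\hat{u}$, then right vertices with many neighbors in $\hat{S}$, then common neighborhoods — can be simulated end to end. The sketch below is our own toy version, with arbitrary small parameters and an artificial noisy estimate $\hat{u}$ standing in for the blackbox output:

```python
import numpy as np

rng = np.random.default_rng(7)
n, k_m = 400, 60                                   # k_m far exceeds log n

S = rng.choice(n, size=k_m, replace=False)         # left biclique vertices
Sp = rng.choice(n, size=k_m, replace=False)        # right biclique vertices
A = rng.random((n, n)) < 0.5                       # bipartite Erdos-Renyi graph
A[np.ix_(S, Sp)] = True                            # plant the biclique

u = np.zeros(n); u[S] = 1.0 / np.sqrt(k_m)         # hidden vector (gamma = 1 here)
u_hat = u + 0.01 * rng.normal(size=n)              # estimate with small ell_2 error

S_hat = np.argsort(u_hat)[-k_m:]                   # largest k_m entries of u_hat
deg = A[S_hat, :].sum(axis=0)                      # neighbors in S_hat per right vertex
Sp_rec = np.flatnonzero(deg >= 5 * k_m / 6)        # recover the right side
S_rec = np.flatnonzero(A[:, Sp_rec].all(axis=1))   # common neighbors give the left side
```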
We remark that for the values of $\gamma$ in our reductions, the condition $\tau' = o(\gamma)$ implies tight computational lower bounds for estimation in $\pr{rsme}$, $\pr{neg-spca}$, $\pr{mslr}$ and $\pr{rslr}$. In particular, for $\pr{rsme}$, $\pr{mslr}$ and $\pr{rslr}$, we may take $\tau'$ to be arbitrarily close to $\tau$ in our detection lower bound as long as $\tau' = o(\tau)$. For $\pr{neg-spca}$, a natural estimation analogue is to estimate some $k$-sparse $v$ within $\ell_2$ norm $\tau'$ given $n$ i.i.d. samples from $\mN(0, I_d + vv^\top)$. For this estimation formulation, we may take $\tau' = o(\sqrt{\theta})$ where $\theta$ is as in our detection lower bound.
#### Reductions from $k\pr{-pc}$.
We now outline how to construct such an algorithm $\mathcal{B}$ for $\pr{isbm}$. We only sketch the details of this construction as a more direct and simpler way to deduce hardness of recovery for $\pr{isbm}$ will be discussed in the next section. We remark that a similar construction of $\mathcal{B}$ also verifies condition (2) for our reduction to $\pr{semi-cr}$.
For simplicity, first consider $k\pr{-pds-to-isbm}$ without the initial $\pr{To-}k\pr{-Partite-Submatrix}$ step and the random permutations of vertex labels in Steps 2 and 4. Let $S \subseteq [kr^t]$ be the vertex set of the planted dense subgraph in $M_{\text{PD2}}$ and let $F'$ and $F''$ be the given partitions of the indices $[kr^t]$ of $M_{\text{PD2}}$ and the vertices $[kr\ell]$ of the output graph, respectively. Lemma \[lem:isbm-rotations\] shows that the output instance of $\pr{isbm}$ has its smaller hidden community $C_1$ of size $k\ell$ on the vertices corresponding to the negative entries of the vector $v_{S, F', F''}(K_{r, t})$. Note that, as a function of this set $S$, the mixture distribution $\mD(S)$ is again a point mass. We now will outline how to approximately recover $S$ given a close estimate $\hat{C}_1$ of $C_1$. Suppose that $\hat{C}_1$ is a $k\ell$-subset of $[kr\ell]$ such that $|C_1 \cap \hat{C}_1| \ge (1 - o(1))k\ell$. Construct the vector $\hat{v}$ given by $$\hat{v}_i = \frac{1}{\sqrt{r^t(r - 1)}} \cdot \left\{ \begin{matrix} 1 & \textnormal{if } i \not\in \hat{C}_1 \\ 1 - r & \textnormal{if } i \in \hat{C}_1 \end{matrix} \right.$$ Since $\ell = \Theta(r^{t - 1})$, a direct calculation shows that $\| \hat{v} - v_{S, F', F''}(K_{r, t}) \|_2 = o(\sqrt{k})$. For each part $F_i''$, consider the vector in $\mathbb{R}^{r\ell}$ formed by restricting $\hat{v}$ to the indices in $F_i''$ and identifying these indices with $[r\ell]$ in increasing order. For each such vector, find the closest column of $K_{r, t}$ to this vector in $\ell_2$ norm. If the index of this column is $j$, add the $j$th smallest element of $F_i'$ to $\hat{S}$. We claim that the resulting set $\hat{S}$ contains at least $(1 - o(1))k$ elements of $S$. The singular values of $K_{r, t}$ computed in Lemma \[lem:Krtsv\] can be used to show that any two columns of $K_{r, t}$ are separated by an $\ell_2$ distance of $\Omega(1)$. 
Any part $F_i'$ for which the correct $j \in S \cap F_i'$ was not added to $\hat{S}$ must have satisfied that $\hat{v}$ restricted to the part $F_i''$ was an $\ell_2$ distance of $\Omega(1)$ from the corresponding restriction of $v_{S, F', F''}(K_{r, t})$. Since $\| \hat{v} - v_{S, F', F''}(K_{r, t}) \|_2 = o(\sqrt{k})$, the number of such $j$ incorrectly added to $\hat{S}$ is $o(k)$, verifying the claim.
Now consider $k\pr{-pds-to-isbm}$ with its first step and the random permutations. Since the random index permutation in $\pr{To-}k\pr{-Partite-Submatrix}$ and the subsequent random permutations in Steps 2 and 4 are all generated by the reduction, they can also be remembered and used in the algorithm $\mathcal{B}$ recovering the clique of the input $k\pr{-pc}$ instance. When combined with the subroutine recovering $\hat{S}$ from $\hat{C}_1$, these permutations are sufficient to identify a set of $k$ vertices overlapping with the clique in at least $(1 - o(1))k$ vertices. Now using a similar procedure to the one mentioned above for $\pr{bpc}$, together with the input $k\pr{-pc}$ instance $G$, this is sufficient to exactly recover the hidden clique vertices.
Relationship Between Detection and Recovery
-------------------------------------------
As shown in the previous section, computational lower bounds from recovery can generally be deduced from our reductions because they are also reductions in total variation between recovery problems. We now will outline how our computational lower bounds for detection all either directly or almost directly imply hardness of recovery. As in Section 10 of [@brennan2018reducibility], our approach is to produce two independent instances $X$ and $X'$ from $\mP_D(\theta)$ without knowing $\theta$, to use $X$ to recover an estimate $\hat{\theta}$ of $\theta$ and then to verify that $\hat{\theta}$ is a good estimate of $\theta$ using $X'$. If $\hat{\theta}$ is confirmed to closely approximate $\theta$ using $X'$, then output $H_1$, and otherwise output $H_0$. This recipe shows detection is easier than recovery as long as there are efficient ways to produce the pair $(X, X')$ and to verify $\hat{\theta}$ is a good estimate given a fresh sample $X'$. In general, the purpose of cloning into the pair $(X, X')$ is to sidestep the fact that $X$ and $\hat{\theta}$ are dependent random variables, which complicates analyzing the verification step. In contrast, $\hat{\theta}$ and $X'$ are conditionally independent given $\theta$. We now show that this recipe applies to each of our problems.
#### Sample Splitting.
In problems with samples, a natural way to produce $X$ and $X'$ is to simply split the set of samples into two groups. This yields a means to directly transfer computational lower bounds from detection to recovery for $\pr{rsme}$, $\pr{neg-spca}$, $\pr{mslr}$ and $\pr{rslr}$. As we already discussed one way our reductions imply computational lower bounds for the recovery variants of these problems in the previous section, we only sketch the main ideas here.
We first show that an efficient algorithm for recovery in $\pr{mslr}$ yields an efficient algorithm for detection. Consider the detection problem $\pr{mslr}(2n, k, d, \tau)$, and assume there is a blackbox $\mathcal{E}$ solving the recovery problem $\pr{mslr}(n, k, d, \tau')$ with probability $1 - o_n(1)$ for some $\tau' = o(\tau)$. If the samples from $\pr{mslr}(2n, k, d, \tau)$ are $(X_1, y_1), (X_2, y_2), \dots, (X_{2n}, y_{2n})$, apply $\mathcal{E}$ to $(X_1, y_1), \dots, (X_{n}, y_{n})$ to produce an estimate $\hat{u}$. Under $H_1$, there is some true $u = \tau \cdot k^{-1/2} \cdot \mathbf{1}_S$ for some $k$-set $S$ and it holds that $\| \hat{u} - u \|_2 = o(\tau)$. As in the previous section, taking the largest $k$ coordinates of $\hat{u}$ yields a set $\hat{S}$ containing at least $(1 - o(1))k$ elements of $S$. The idea now is that, since we almost know the true set $S$, detection using the second group of $n$ samples essentially reduces to $\pr{mslr}$ without sparsity and is easy down to the information-theoretic limit. More precisely, consider using the second half of the samples to form the statistic $$Z = \frac{1}{\tau^2 (1 + \tau^2)} \sum_{i = n + 1}^{2n} \left( y_i^2 - 1 - \tau^2 \right) \cdot \left\langle (X_i)_{\hat{S}}, \hat{u}_{\hat{S}} \right\rangle^2$$ where $v_{\hat{S}}$ denotes the vector equal to $v$ on the indices in $\hat{S}$ and zero elsewhere. Note that conditioned on $S$, the second group of $n$ samples is independent of $\hat{S}$. Under $H_0$, it can be verified that $\bE[Z] = 0$ and $\text{Var}[Z] = O(n)$. Under $H_1$, it can be verified that $\| \hat{u} \|_2$ and $\| \hat{u}_{\hat{S}} \|_2$ are both $(1 + o(1))\tau$ and furthermore that $\langle u, \hat{u}_{\hat{S}} \rangle \ge (1 - o(1)) \tau^2$. Now note that since $y_i = R_i \cdot \langle X_i, u \rangle + g_i$ where $g_i \sim \mN(0, 1)$ and $R_i \sim \text{Rad}$, we have that $$\begin{aligned}
\left( y_i^2 - 1 - \tau^2 \right) \cdot \left\langle (X_i)_{\hat{S}}, \hat{u}_{\hat{S}} \right\rangle^2 &= \langle X_i, u \rangle^2 \cdot \left\langle X_i, \hat{u}_{\hat{S}} \right\rangle^2 - \tau^2 \cdot \left\langle X_i, \hat{u}_{\hat{S}} \right\rangle^2 + 2R_i g_i \cdot \langle X_i, u \rangle \cdot \left\langle X_i, \hat{u}_{\hat{S}} \right\rangle^2 \\
&\quad \quad + (g_i^2 - 1) \cdot \left\langle X_i, \hat{u}_{\hat{S}} \right\rangle^2 \end{aligned}$$ The last two terms are mean zero and the second term has expectation $-(1 + o(1))\tau^4$ since $\| \hat{u}_{\hat{S}} \|_2 = (1 + o(1))\tau$. Directly expanding the first term in terms of the components of $X_i$ yields that its expectation is given by $2 \langle u, \hat{u}_{\hat{S}} \rangle^2 + \| u \|_2^2 \cdot \| \hat{u}_{\hat{S}} \|_2^2 \ge 3(1 - o(1))\tau^4$. Combining these computations yields that $\bE[Z] \ge 2n(1 - o(1)) \tau^2$, and it can again be verified that $\text{Var}[Z] = O(n)$. Chebyshev’s inequality now yields that thresholding $Z$ at $n\tau^2$ distinguishes $H_0$ and $H_1$ as long as $\tau^2 \sqrt{n} \gg 1$. Since the information-theoretic limit of the detection formulation of $\pr{mslr}$ is when $n = \Theta(k \log d/\tau^4)$ [@fan2018curse], whenever this problem is possible it holds that $\tau^2 \sqrt{n} \gg 1$. Therefore, whenever detection is possible, the reduction outlined above shows how to produce a test solving detection in $\pr{mslr}$ using an estimator with $\ell_2$ error $\tau' = o(\tau)$.
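The test based on $Z$ can be simulated directly. The sketch below is our own simplification: it uses the oracle estimates $\hat{u} = u$ and $\hat{S} = S$, and takes $y_i \sim \mN(0, 1 + \tau^2)$ independent of $X_i$ under $H_0$ so that $\bE[Z] = 0$ there, as in the calculation above:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, k, tau = 20_000, 50, 5, 0.6

S = np.arange(k)
u = np.zeros(d); u[S] = tau / np.sqrt(k)           # ||u||_2 = tau

def z_stat(X, y, u_hat, S_hat):
    proj = X[:, S_hat] @ u_hat[S_hat]              # <(X_i)_S, u_hat_S>
    return np.sum((y**2 - 1 - tau**2) * proj**2) / (tau**2 * (1 + tau**2))

def sample(planted):
    X = rng.normal(size=(n, d))
    g = rng.normal(size=n)
    if planted:                                    # H_1: y_i = R_i <X_i, u> + g_i
        R = rng.choice([-1.0, 1.0], size=n)
        return X, R * (X @ u) + g
    return X, np.sqrt(1 + tau**2) * g              # H_0: y_i ~ N(0, 1 + tau^2)

z0 = z_stat(*sample(False), u, S)                  # concentrates near 0
z1 = z_stat(*sample(True), u, S)                   # near 2 n tau^2 / (1 + tau^2)
```

Thresholding at $n\tau^2$ then separates the two hypotheses, as in the argument above.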
Similar reductions transfer hardness of detection to recovery for $\pr{neg-spca}$, $\pr{rsme}$ and $\pr{rslr}$. For $\pr{neg-spca}$ and $\pr{rsme}$, the same argument as above can be shown to work with the test statistic given by $Z = \sum_{i = n + 1}^{2n} \langle X_i, \hat{u}_{\hat{S}} \rangle^2$, and the same $Z$ used above for $\pr{mslr}$ suffices in the case of $\pr{rslr}$. We remark that to show these statistics $Z$ solve the detection variants of $\pr{rsme}$ and $\pr{rslr}$, it is important to use detection formulations incorporating the exact form of our adversarial constructions, which are $\pr{isgm}$ in the case of $\pr{rsme}$ and the adversary described in Section \[sec:2-supervised\] in the case of $\pr{rslr}$. An arbitrary adversary could corrupt instances of $\pr{rsme}$ and $\pr{rslr}$ to cause these statistics $Z$ to not distinguish between $H_0$ and $H_1$. Because our detection lower bounds apply to these fixed adversaries rather than requiring an arbitrary adversary, this argument yields the desired hardness of estimation for $\pr{rsme}$ and $\pr{rslr}$.
#### Post-Reduction Cloning.
In problems without samples, producing the pair $(X, X')$ requires an additional reduction step. We now outline how to produce such a pair and verification step for $\pr{isbm}$. The high-level idea is to stop our reduction to $\pr{isbm}$ before the final thresholding step, apply Gaussian cloning as in Section 10 of [@brennan2018reducibility], then to continue the reduction with both copies, eventually using one to verify the output of a recovery blackbox applied to the other. A similar argument can be used to show computational lower bounds for recovery in $\pr{semi-cr}$.
Consider the reduction $k\pr{-pds-to-isbm}$ without the final thresholding step, outputting the matrix $M_{\text{R}} \in \mathbb{R}^{kr\ell \times kr\ell}$ at the end of Step 3. Now consider adding the following three steps to this reduction, given access to a recovery blackbox $\mathcal{E}$. More precisely, given an instance of $\pr{isbm}(n, k, P_{11}, P_{12}, P_{22})$ with $$P_{11} = P_0 + \gamma, \quad P_{12} = P_0 - \frac{\gamma}{k - 1} \quad \text{and} \quad P_{22} = P_0 + \frac{\gamma}{(k - 1)^2}$$ as in Section \[sec:3-community\], suppose $\mathcal{E}$ is guaranteed to output an $(n/k)$-subset of vertices $\hat{C}_1 \subseteq [n]$ with $|C_1 \cap \hat{C}_1| \ge (1 + \epsilon)n/k^2$ with probability $1 - o_n(1)$ for some $\epsilon = \Omega(1)$. Here, $C_1$ is the true hidden smaller community of the input $\pr{isbm}$ instance. Observe that when $\epsilon = \Theta(1)$, the blackbox $\mathcal{E}$ has the weak guarantee of recovering marginally more than a trivial $1/k$ fraction of $C_1$. This exactly matches the notion of weak recovery discussed in Section \[subsec:1-problems-sbm\].
1. Sample $W \sim \mN(0, 1)^{\otimes n \times n}$ and form $$M_{\text{R}}^1 = \frac{1}{\sqrt{2}} \left( M_{\text{R}} + W \right) \quad \text{and} \quad M_{\text{R}}^2 = \frac{1}{\sqrt{2}} \left( M_{\text{R}} - W \right)$$
2. Using each of $M_{\text{R}}^1$ and $M_{\text{R}}^2$, complete the reduction $k\pr{-pds-to-isbm}$ omitting the random permutation in Step 4, and complete the additional steps from Corollary \[thm:isbm-mod\] replacing $\mu$ with $\mu/\sqrt{2}$. Let the two output graphs be $G^1$ and $G^2$.
3. Let $\hat{C}_1$ be the output of $\mathcal{E}$ applied to $G^1$. Output $H_0$ if the subgraph of $G^2$ restricted to $\hat{C}_1$ has at least $M$ edges, and output $H_1$ otherwise.
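The Gaussian cloning in Step 1 can be illustrated with a small numerical sketch. The rank-one signal and dimensions below are hypothetical stand-ins for $M_{\text{R}}$, not the reduction itself; the point is that a single fresh Gaussian matrix $W$ yields two copies that are conditionally independent given the signal, each with the signal scaled down by $\sqrt{2}$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 60
# Hypothetical rank-one signal plus Gaussian noise, standing in for M_R.
signal = np.full((n, n), 1.0 / n)
M = signal + rng.standard_normal((n, n))

# Gaussian cloning (Step 1): one fresh Gaussian matrix W produces two copies
# that are conditionally independent given the signal, with the signal
# strength scaled down by sqrt(2) in each copy.
W = rng.standard_normal((n, n))
M1 = (M + W) / np.sqrt(2)
M2 = (M - W) / np.sqrt(2)

# Sanity check: the two copies deterministically reconstruct sqrt(2) * M,
# while their noise parts (Y + W)/sqrt(2) and (Y - W)/sqrt(2) are
# entrywise uncorrelated Gaussians.
assert np.allclose(M1 + M2, np.sqrt(2) * M)
```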
We now outline how this solves the detection variant of $\pr{isbm}$. Let $C_1$ be the true hidden smaller community of the instance that $k\pr{-pds-to-isbm}$ would produce if completed using $M_{\text{R}}$. We claim that $G^1$ and $G^2$ are $o(1)$ total variation from independent copies of $\pr{isbm}(n, C_1, P_{11}, P_{12}, P_{22})$ where $P_{11}, P_{12}$ and $P_{22}$ are as above and $\gamma$ is as in Corollary \[thm:isbm-mod\], but defined using $\mu/\sqrt{2}$ instead of $\mu$. To see this, note that $M_{\text{R}}$ is $o(1)$ total variation from the distribution $$M'_{\text{R}} = \frac{\mu(r - 1)}{r} \cdot v(C_1) v(C_1)^\top + Y \quad \text{where} \quad v(C_1)_i = \frac{1}{\sqrt{r^t(r - 1)}} \cdot \left\{ \begin{matrix} 1 & \textnormal{if } i \not\in C_1 \\ 1 - r & \textnormal{if } i \in C_1 \end{matrix} \right.$$ by Lemma \[lem:isbm-rotations\], where $Y \sim \mN(0, 1)^{\otimes n \times n}$ and $t$ is the internal parameter used in $k\pr{-pds-to-isbm}$. Now it follows that $M_{\text{R}}^1$ and $M_{\text{R}}^2$ are respectively $o(1)$ total variation from $$\begin{aligned}
\left( M_{\text{R}}^1 \right)' &= \frac{\mu(r - 1)}{r\sqrt{2}} \cdot v(C_1) v(C_1)^\top + \frac{1}{\sqrt{2}} \left( Y + W \right) \quad \text{and} \\
\left( M_{\text{R}}^2 \right)' &= \frac{\mu(r - 1)}{r\sqrt{2}} \cdot v(C_1) v(C_1)^\top + \frac{1}{\sqrt{2}} \left( Y - W \right)\end{aligned}$$ The entries of $\frac{1}{\sqrt{2}} \left( Y + W \right)$ and $\frac{1}{\sqrt{2}} \left( Y - W \right)$ are all jointly Gaussian and have variance $1$. Furthermore, they can all be verified to be uncorrelated, implying that these two matrices are independent copies of $\mN(0, 1)^{\otimes n \times n}$ and thus $\left( M_{\text{R}}^1 \right)'$ and $\left( M_{\text{R}}^2 \right)'$ are independent conditioned on $C_1$. Note that $\mu$ has essentially been scaled down by a factor of $\sqrt{2}$ in both of these instances as well. Thus Step 2 above ensures that $G^1$ and $G^2$ are $o(1)$ total variation from independent copies of $\pr{isbm}(n, C_1, P_{11}, P_{12}, P_{22})$.
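To spell out the uncorrelatedness claim: for independent $Y, W \sim \mN(0, 1)^{\otimes n \times n}$, the entries satisfy

```latex
\mathrm{Cov}\left( \frac{Y_{ij} + W_{ij}}{\sqrt{2}}, \frac{Y_{k\ell} - W_{k\ell}}{\sqrt{2}} \right)
= \frac{1}{2} \left( \bE[Y_{ij} Y_{k\ell}] - \bE[W_{ij} W_{k\ell}] \right)
= \frac{1}{2} \left( \mathbf{1}_{\{(i,j) = (k,\ell)\}} - \mathbf{1}_{\{(i,j) = (k,\ell)\}} \right) = 0
```

since the cross terms $\bE[Y_{ij} W_{k\ell}]$ vanish by the independence of $Y$ and $W$. As the entries are jointly Gaussian, uncorrelated implies independent.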
Now consider Step 3 above applied to two exact independent copies of $\pr{isbm}(n, C_1, P_{11}, P_{12}, P_{22})$. The guarantee for $\mathcal{E}$ ensures that $|C_1 \cap \hat{C}_1| \ge (1 + \epsilon)n/k^2$ with probability $1 - o_n(1)$. The variance of the number of edges in the subgraph of $G^2$ restricted to $\hat{C}_1$ is $O(n^2/k^2)$ under both $H_0$ and $H_1$, and the expected number of edges in this subgraph is $P_0 \binom{n/k}{2}$ under $H_0$. Under $H_1$, the expected number of edges is $$\begin{aligned}
\bE\left[ |E(G[\hat{C}_1])| \right] &= \binom{|C_1 \cap \hat{C}_1|}{2} P_{11} + |C_1 \cap \hat{C}_1| \cdot \left( \frac{n}{k} - |C_1 \cap \hat{C}_1| \right) P_{12} + \binom{\frac{n}{k} - |C_1 \cap \hat{C}_1|}{2} P_{22} \\
&= P_0 \binom{n/k}{2} + \frac{\gamma}{2(k - 1)^2} \cdot \left( k|C_1 \cap \hat{C}_1| - \frac{n}{k} \right)^2 - \frac{\gamma}{2(k - 1)^2} \cdot \left( k(k - 2) \cdot |C_1 \cap \hat{C}_1| + \frac{n}{k} \right) \\
&= P_0 \binom{n/k}{2} + \Omega\left( \frac{\gamma \epsilon^2 n^2}{k^4} \right)\end{aligned}$$ where the last bound holds since $\epsilon = \Omega(1)$ and $k^2 \ll n$.
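The displayed expectation can be checked in exact rational arithmetic for concrete parameter values. The parameters below are hypothetical, and the closed form is our expansion of the difference from the null mean, whose leading term is $\gamma(k m - n/k)^2 / (2(k-1)^2) = \Theta(\gamma \epsilon^2 n^2 / k^4)$ when $m = (1 + \epsilon)n/k^2$.

```python
from fractions import Fraction as F
from math import comb

# Hypothetical concrete parameters; m plays the role of |C_1 ∩ \hat{C}_1|.
k, n = 4, 1600
s = n // k                          # community size n/k
P0, gamma, eps = F(1, 2), F(1, 10), F(1, 2)
m = int((1 + eps) * n / k**2)       # = (1 + eps) n / k^2 = 150

P11 = P0 + gamma
P12 = P0 - gamma / (k - 1)
P22 = P0 + gamma / (k - 1) ** 2

# Direct expectation of the number of edges in G[\hat{C}_1] under H_1.
E = comb(m, 2) * P11 + m * (s - m) * P12 + comb(s - m, 2) * P22

# Our expansion of the quadratic; the leading correction term is
# gamma * (k m - n/k)^2 / (2 (k-1)^2) = Theta(gamma eps^2 n^2 / k^4).
closed = (P0 * comb(s, 2)
          + gamma * F((k * m - s) ** 2, 2 * (k - 1) ** 2)
          - gamma * F(k * (k - 2) * m + s, 2 * (k - 1) ** 2))
assert E == closed
assert E > P0 * comb(s, 2)          # planted subgraph has more edges on average
```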
By Chebyshev’s inequality, Step 3 solves the hypothesis testing problem exactly when this difference $\Omega( \gamma \epsilon^2 n^2/k^4)$ grows faster than the $O(n/k)$ standard deviation of the number of edges in the subgraph under $H_0$ and $H_1$. This implies that Step 3 succeeds if it holds that $\gamma \epsilon^2 \gg k^3/n$. The Kesten-Stigum threshold corresponds to $\gamma^2 = \tilde{\Theta}(k^2/n)$ and therefore as long as $\epsilon^4 n = \tilde{\omega}(k^4)$, this argument solves the detection problem just below the Kesten-Stigum threshold. When $\epsilon = \Theta(1)$, this argument shows a computational lower bound up to the Kesten-Stigum threshold for weak recovery in $\pr{isbm}$. Since $k^2 = o(n)$ is always true in our formulation of $\pr{isbm}$, setting $\epsilon = \Theta(\sqrt{k})$ yields that, for all $k$, it is hard to recover a $\Theta(1/\sqrt{k})$ fraction of the hidden community $C_1$. This guarantee is much stronger than the analysis in the previous section, which only showed hardness for a blackbox recovering a $1 - o(1)$ fraction of the hidden community. We remark that the same trick used in Step 1 above to produce two independent copies of a matrix with Gaussian noise was used to show estimation lower bounds for tensor PCA in Section \[sec:3-tensor\].
#### Pre-Reduction Cloning.
We remark that there is a general alternative method to obtain the pairs $(X, X')$ in our reductions that we sketch here. Consider applying Bernoulli cloning either directly to the input $\pr{pc}$ or $\pr{pds}$ instance or to the output of $\pr{To-}k\pr{-Partite-Submatrix}$, in the case of reductions from $k\pr{-pc}$, and then running the remaining parts of our reductions on each of the two resulting copies. Ignoring post-processing steps where we permute vertex labels or subsample the output instance, this general approach can be used to yield two copies of the outputs of our reductions that have the same hidden structure and are conditionally independent given this hidden structure. The same verification steps outlined above can then be applied to obtain our computational lower bounds for recovery.
Acknowledgements {#acknowledgements .unnumbered}
================
We are greatly indebted to Jerry Li for introducing the conjectured statistical-computational gap for robust sparse mean estimation and for discussions that helped lead to this work. We thank Ilias Diakonikolas for pointing out the statistical query model construction in [@diakonikolas2017statistical]. We thank the anonymous reviewers for helpful feedback that greatly improved the exposition. We also thank Frederic Koehler, Sam Hopkins, Philippe Rigollet, Enric Boix-Adserà, Dheeraj Nagaraj, Rares-Darius Buhai, Alex Wein, Ilias Zadik, Dylan Foster and Austin Stromme for inspiring discussions on related topics. This work was supported in part by MIT-IBM Watson AI Lab and NSF CAREER award CCF-1940205.
Deferred Proofs from Part \[part:reductions\] {#sec:appendix-2}
=============================================
Proofs of Total Variation Properties {#subsec:appendix-2-tv}
------------------------------------
In this section, we present several deferred proofs from Sections \[subsec:2-tvreductions\] and \[subsec:2-estimation\]. We first prove Lemma \[lem:tvacc\].
This follows from a simple induction on $m$. Note that the case when $m = 1$ follows by definition. Now observe that by the data-processing and triangle inequalities of total variation, we have that if $\mathcal{B} = \mathcal{A}_{m-1} \circ \mathcal{A}_{m-2} \circ \cdots \circ \mathcal{A}_1$ then $$\begin{aligned}
\TV\left( \mathcal{A}(\mP_0), \mP_m \right) &\le \TV\left( \mathcal{A}_m \circ \mathcal{B}(\mP_0), \mathcal{A}_m(\mP_{m - 1}) \right) + \TV\left(\mathcal{A}_m(\mP_{m - 1}), \mP_m \right) \\
&\le \TV\left( \mathcal{B}(\mP_0), \mP_{m - 1} \right) + \epsilon_m \\
&\le \sum_{i = 1}^m \epsilon_i\end{aligned}$$ where the last inequality follows from the induction hypothesis applied with $m - 1$ to $\mathcal{B}$. This completes the induction and proves the lemma.
We now prove Lemma \[lem:bernproduct\] upper bounding the total variation distance between vectors of unplanted and planted samples from binomial distributions.
Given some $P \in [0, 1]$, we begin by computing $\chi^2\left( \textnormal{Bern}(P) + \textnormal{Bin}(m - 1, Q), \textnormal{Bin}(m, Q) \right)$. For notational convenience, let $\binom{a}{b} = 0$ if $b > a$ or $b < 0$. It follows that $$\begin{aligned}
&1 + \chi^2\left( \textnormal{Bern}(P) + \textnormal{Bin}(m - 1, Q), \textnormal{Bin}(m, Q) \right) \\
&\quad \quad = \sum_{t = 0}^{m} \frac{\left((1 - P) \cdot \binom{m - 1}{t} Q^t (1 - Q)^{m - 1 - t} + P \cdot \binom{m - 1}{t - 1} Q^{t - 1} (1 - Q)^{m - t} \right)^2}{\binom{m}{t} Q^t (1 - Q)^{m - t}} \\
&\quad \quad = \sum_{t = 0}^{m} \binom{m}{t} Q^t (1 - Q)^{m - t} \left( \frac{m - t}{m} \cdot \frac{1 - P}{1 - Q} + \frac{t}{m} \cdot \frac{P}{Q} \right)^2 \\
&\quad \quad = \bE\left[ \left( \frac{m - X}{m} \cdot \frac{1 - P}{1 - Q} + \frac{X}{m} \cdot \frac{P}{Q} \right)^2 \right] \\
&\quad \quad = \bE\left[ \left( 1 + \frac{X - mQ}{m} \cdot \frac{P - Q}{Q(1 - Q)} \right)^2 \right] \\
&\quad \quad = 1 + \frac{2(P - Q)}{mQ(1 - Q)} \cdot \bE[X - mQ] + \frac{(P - Q)^2}{m^2Q^2(1 - Q)^2} \cdot \bE\left[(X - Qm)^2\right] \\
&\quad \quad = 1 + \frac{(P - Q)^2}{mQ(1 - Q)}\end{aligned}$$ where $X \sim \textnormal{Bin}(m, Q)$ and the second last equality follows from $\bE[X] = Qm$ and $\bE[(X - Qm)^2] = \text{Var}[X] = Q(1 - Q)m$. The concavity of $\log$ implies that $\KL(\mP, \mQ) \le \log\left( 1 + \chi^2(\mP, \mQ) \right) \le \chi^2(\mP, \mQ)$ for any two distributions with $\mP$ absolutely continuous with respect to $\mQ$. Pinsker’s inequality and tensorization of $\KL$ now imply that $$\begin{aligned}
&2 \cdot \TV\left( \otimes_{i = 1}^k \left( \textnormal{Bern}(P_i) + \textnormal{Bin}(m - 1, Q) \right), \textnormal{Bin}(m, Q)^{\otimes k} \right)^2 \\
&\quad \quad \le \KL\left( \otimes_{i = 1}^k \left( \textnormal{Bern}(P_i) + \textnormal{Bin}(m - 1, Q) \right), \textnormal{Bin}(m, Q)^{\otimes k} \right) \\
&\quad \quad = \sum_{i = 1}^k \KL\left( \textnormal{Bern}(P_i) + \textnormal{Bin}(m - 1, Q), \textnormal{Bin}(m, Q) \right) \\
&\quad \quad \le \sum_{i = 1}^k \chi^2\left( \textnormal{Bern}(P_i) + \textnormal{Bin}(m - 1, Q), \textnormal{Bin}(m, Q) \right) = \sum_{i = 1}^k \frac{(P_i - Q)^2}{mQ(1 - Q)}\end{aligned}$$ which completes the proof of the lemma.
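The $\chi^2$ computation above is an exact identity, and can be verified directly from the definition in exact rational arithmetic; the values of $m$, $P$ and $Q$ below are hypothetical.

```python
from fractions import Fraction as F
from math import comb

def bin_pmf(n, p, t):
    # pmf of Bin(n, p), with the convention binom(a, b) = 0 for b < 0 or b > a
    if t < 0 or t > n:
        return F(0)
    return comb(n, t) * p**t * (1 - p) ** (n - t)

m, P, Q = 7, F(2, 3), F(1, 4)

# 1 + chi^2( Bern(P) + Bin(m-1, Q), Bin(m, Q) ), computed term by term
# from its definition as a sum over the support of Bin(m, Q).
chi2_plus_one = sum(
    ((1 - P) * bin_pmf(m - 1, Q, t) + P * bin_pmf(m - 1, Q, t - 1)) ** 2
    / bin_pmf(m, Q, t)
    for t in range(m + 1)
)
assert chi2_plus_one == 1 + (P - Q) ** 2 / (m * Q * (1 - Q))
```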
We now prove Lemma \[lem:bintv\] on the total variation distance between two binomial distributions.
By applying the data processing inequality for $\TV$ to the function taking the sum of the coordinates of a vector, we have that $$\begin{aligned}
2 \cdot \TV\left( \textnormal{Bin}(n, P), \textnormal{Bin}(n, Q) \right)^2 &\le 2 \cdot \TV\left( \textnormal{Bern}(P)^{\otimes n}, \textnormal{Bern}(Q)^{\otimes n} \right)^2 \\
&\le \KL\left( \textnormal{Bern}(P)^{\otimes n}, \textnormal{Bern}(Q)^{\otimes n} \right) \\
&= n \cdot \KL\left( \textnormal{Bern}(P), \textnormal{Bern}(Q) \right) \\
&\le n \cdot \chi^2\left( \textnormal{Bern}(P), \textnormal{Bern}(Q) \right) \\
&= n \cdot \frac{(P - Q)^2}{Q(1 - Q)}\end{aligned}$$ The second inequality is an application of Pinsker’s, the first equality is tensorization of $\KL$ and the third inequality is the fact that $\chi^2$ upper bounds $\KL$ by the concavity of $\log$. This completes the proof of the lemma.
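As a quick numerical sanity check of this chain of inequalities, one can compare the exact total variation distance between two binomials against the resulting bound $\sqrt{n(P - Q)^2 / (2Q(1 - Q))}$; the parameters below are hypothetical.

```python
from fractions import Fraction as F
from math import comb, sqrt

def bin_pmf(n, p, t):
    return comb(n, t) * p**t * (1 - p) ** (n - t)

n, P, Q = 10, F(3, 5), F(1, 2)

# Exact total variation distance between Bin(n, P) and Bin(n, Q).
tv = F(1, 2) * sum(abs(bin_pmf(n, P, t) - bin_pmf(n, Q, t)) for t in range(n + 1))

# The bound from the lemma: TV <= sqrt( n (P - Q)^2 / (2 Q (1 - Q)) ).
bound = sqrt(float(n * (P - Q) ** 2 / (2 * Q * (1 - Q))))
assert float(tv) <= bound
```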
We conclude this section with a proof of Lemma \[lem:tvreductions-recovery\], establishing the key property of reductions in total variation among recovery problems.
As in the proof of Lemma \[lem:3a\] from [@brennan2018reducibility], this lemma follows from a simple application of the definition of $\TV$. Suppose that there is such an $\mathcal{E}'$. Now consider the algorithm $\mathcal{E}$ that proceeds as follows on an input $X$ of $\mP(n, \tau)$:
1. compute $\mathcal{A}(X)$ and the output $\hat{\theta'}$ of $\mathcal{E}'$ on input $\mathcal{A}(X)$; and
2. output the result $\hat{\theta} \gets \mathcal{B}(X, \hat{\theta'})$.
Suppose that $X \sim \mP_D(\theta)$ for some $\theta \in \Theta_{\mP}$. Consider a coupling of $X$, the randomness of $\mathcal{A}$ and $Y \sim \bE_{\theta' \sim \mD(\theta)} \, \mP'_D(\theta')$ such that $\P[\mathcal{A}(X) \neq Y] = o_n(1)$. Since $Y$ is distributed as a mixture of $\mP'_D(\theta')$, conditioned on $\theta'$, it holds that $\mathcal{E}'$ succeeds with probability $$\bP\left[ \ell_{\mP'}(\mathcal{E}'(Y), \theta') \le \tau' \, \Big| \, \theta' \right] \ge p$$ Marginalizing this over $\theta'$ yields that $\bP\left[ \ell_{\mP'}(\mathcal{E}'(Y), \theta') \le \tau' \text{ for some } \theta' \in \textnormal{supp} \, \mD(\theta) \right] \ge p$. Now since $\mathcal{A}(X) = Y$ is a probability $1 - o_n(1)$ event, we have that the intersection of this and the event above occurs with probability $p - o_n(1)$. Therefore $$\bP\left[ \ell_{\mP'}(\theta', \hat{\theta'}) \le \tau' \text{ for some } \theta' \in \textnormal{supp} \, \mD(\theta) \right] \ge \bP\left[ \mathcal{A}(X) = Y \textnormal{ and } \mathcal{E}' \textnormal{ succeeds} \right] \ge p - o_n(1)$$ Now note that the definition of $\mathcal{B}$ implies that $$\begin{aligned}
\bP\left[ \ell_{\mP}(\theta, \hat{\theta}) \le \tau \right] &\ge \bP\left[ \ell_{\mP'}(\theta', \hat{\theta'}) \le \tau' \text{ for some } \theta' \in \textnormal{supp} \, \mD(\theta) \text{ and } \mathcal{B} \text{ succeeds} \right] \\
&\ge \bP\left[ \ell_{\mP'}(\theta', \hat{\theta'}) \le \tau' \text{ for some } \theta' \in \textnormal{supp} \, \mD(\theta) \right] - \bP\left[ \mathcal{B} \text{ fails} \right] \\
&\ge p - o_n(1)\end{aligned}$$ which completes the proof of the lemma.
Proofs for To-$k$-Partite-Submatrix {#subsec:appendix-2-k-partite}
-----------------------------------
In this section, we prove Lemma \[lem:submatrix\], which establishes the approximate Markov transition properties of the reduction $\pr{To-}k\textsc{-Partite-Submatrix}$. We first establish an analogue of Lemma 6.4 from [@brennan2019universality] in the $k$-partite case to analyze the planted diagonal entries in Step 2 of $\pr{To-}k\textsc{-Partite-Submatrix}$.
\[lem:plantingdiagonals\] Suppose that $0 < Q < P \le 1$ and $n \ge \left( \frac{P}{Q} + 1 \right) N$ is such that both $N$ and $n$ are divisible by $k$ and $k \le QN/4$. Suppose that for each $t \in [k]$, $$z_1^t \sim \textnormal{Bern}(P), \quad z_2^t \sim \textnormal{Bin}(N/k - 1, P) \quad \textnormal{and} \quad z_3^t \sim \textnormal{Bin}(n/k, Q)$$ are independent. If $z_4^t = \max \{ z_3^t - z_1^t - z_2^t, 0 \}$, then it follows that $$\begin{aligned}
\TV\left( \otimes_{t = 1}^k \mL(z_1^t, z_2^t + z_4^t), \left( \textnormal{Bern}(P) \otimes \textnormal{Bin}(n/k - 1, Q) \right)^{\otimes k} \right) &\le 4k \cdot \exp \left( - \frac{Q^2N^2}{48Pkn} \right) + \sqrt{\frac{C_Q k^2}{2n}} \\
\TV\left( \otimes_{t = 1}^k \mL(z_1^t + z_2^t + z_4^t), \textnormal{Bin}(n/k, Q)^{\otimes k} \right) &\le 4k \cdot \exp \left( - \frac{Q^2N^2}{48Pkn} \right)\end{aligned}$$ where $C_Q = \max \left\{ \frac{Q}{1 - Q}, \frac{1 - Q}{Q} \right\}$.
Throughout this argument, let $v$ denote a vector in $\{0, 1\}^k$. Now define the event $$\mathcal{E} = \bigcap_{t = 1}^k \left\{ z_3^t = z_1^t + z_2^t + z_4^t \right\}$$ Now observe that if $z_3^t \ge Qn/k - QN/2k + 1$ and $z_2^t \le P(N/k - 1) + QN/2k$ then it follows that $z_3^t \ge 1 + z_2^t \ge v_t + z_2^t$ for any $v_t \in \{0, 1\}$ since $Qn \ge (P+Q)N$. Now union bounding the probability that $\mathcal{E}$ does not hold conditioned on $z_1$ yields that $$\begin{aligned}
\bP\left[ \mathcal{E}^C \Big| z_1 = v \right] &\le \sum_{t = 1}^k \bP\left[ z_3^t < v_t + z_2^t \right] \\
&\le \sum_{t = 1}^k \bP\left[ z_3^t < \frac{Qn}{k} - \frac{QN}{2k} + 1 \right] + \sum_{t = 1}^k \bP\left[ z_2^t > P\left(\frac{N}{k} - 1\right) + \frac{QN}{2k} \right] \\
&\le k \cdot \exp\left( - \frac{\left(QN/2k - 1 \right)^2}{3Qn/k} \right) + k \cdot \exp\left( - \frac{\left(QN/2k \right)^2}{2P(N/k - 1)} \right) \\
&\le 2k \cdot \exp \left( - \frac{Q^2N^2}{48Pkn} \right)\end{aligned}$$ where the third inequality follows from standard Chernoff bounds on the tails of the binomial distribution. Marginalizing this bound over $v \sim \mL(z_1) = \text{Bern}(P)^{\otimes k}$, we have that $$\bP\left[ \mathcal{E}^C \right] = \bE_{v \sim \mL(z_1)} \bP\left[ \mathcal{E}^C \Big| z_1 = v \right] \le 2k \cdot \exp \left( - \frac{Q^2N^2}{48Pkn} \right)$$ Now consider the total variation error induced by conditioning each of the product measures $\otimes_{t = 1}^k \mL(z_1^t + z_2^t + z_4^t)$ and $\otimes_{t = 1}^k \mL(z_3^t)$ on the event $\mathcal{E}$. Note that under $\mathcal{E}$, by definition, we have that $z_3^t = z_1^t + z_2^t + z_4^t$ for each $t \in [k]$. By the conditioning property of $\TV$ in Fact \[tvfacts\], we have $$\begin{aligned}
\TV\left( \otimes_{t = 1}^k \mL(z_1^t + z_2^t + z_4^t), \mL\left( \left(z_3^t : t \in [k]\right) \Big| \mathcal{E} \right) \right) &\le \bP\left[ \mathcal{E}^C \right] \\
\TV\left( \otimes_{t = 1}^k \mL(z_3^t), \mL\left( \left(z_3^t : t \in [k]\right) \Big| \mathcal{E} \right) \right) &\le \bP\left[ \mathcal{E}^C \right]\end{aligned}$$ The fact that $\otimes_{t = 1}^k \mL(z_3^t) = \text{Bin}(n/k, Q)^{\otimes k}$ and the triangle inequality now imply that $$\TV\left( \otimes_{t = 1}^k \mL(z_1^t + z_2^t + z_4^t), \textnormal{Bin}(n/k, Q)^{\otimes k} \right) \le 2 \cdot \bP\left[ \mathcal{E}^C \right] \le 4k \cdot \exp \left( - \frac{Q^2N^2}{48Pkn} \right)$$ which proves the second inequality in the statement of the lemma. It suffices to establish the first inequality. A similar conditioning step as above shows that for all $v \in \{0, 1\}^k$, we have that $$\begin{aligned}
\TV\left( \otimes_{t = 1}^k \mL\left(v_t + z_2^t + z_4^t \Big| z_1^t = v_t\right), \mL\left( \left(v_t + z_2^t + z_4^t : t \in [k]\right) \Big| z_1 = v \text{ and } \mathcal{E} \right) \right) &\le \bP\left[ \mathcal{E}^C \Big| z_1 = v \right] \\
\TV\left( \otimes_{t = 1}^k \mL\left(z_3^t \Big| z_1^t = v_t \right), \mL\left( \left(z_3^t : t \in [k]\right) \Big| z_1 = v \text{ and } \mathcal{E} \right) \right) &\le \bP\left[ \mathcal{E}^C \Big| z_1 = v \right]\end{aligned}$$ The triangle inequality and the fact that $z_3 \sim \text{Bin}(n/k, Q)^{\otimes k}$ is independent of $z_1$ imply that $$\TV\left( \otimes_{t = 1}^k \mL\left(v_t + z_2^t + z_4^t \Big| z_1^t = v_t\right), \text{Bin}(n/k, Q)^{\otimes k} \right) \le 4k \cdot \exp \left( - \frac{Q^2N^2}{48Pkn} \right)$$ By Lemma \[lem:bernproduct\] applied with $P_t = v_t \in \{0, 1\}$, we also have that $$\TV\left( \otimes_{t = 1}^k \left( v_t + \text{Bin}(n/k - 1, Q) \right), \text{Bin}(n/k, Q)^{\otimes k} \right) \le \sqrt{\sum_{t = 1}^k \frac{k(v_t - Q)^2}{2nQ(1 - Q)}} \le \sqrt{\frac{C_Q k^2}{2n}}$$ The triangle inequality now implies that for each $v \in \{0, 1\}^k$, $$\begin{aligned}
&\TV\left( \otimes_{t = 1}^k \mL\left(z_2^t + z_4^t \Big| z_1^t = v_t\right), \text{Bin}(n/k - 1, Q)^{\otimes k} \right) \\
&\quad \quad = \TV\left( \otimes_{t = 1}^k \mL\left(v_t + z_2^t + z_4^t \Big| z_1^t = v_t\right), \otimes_{t = 1}^k \left( v_t + \text{Bin}(n/k - 1, Q) \right) \right) \\
&\quad \quad \le 4k \cdot \exp \left( - \frac{Q^2N^2}{48Pkn} \right) + \sqrt{\frac{C_Q k^2}{2n}}\end{aligned}$$ We now marginalize over $v \sim \mL(z_1) = \text{Bern}(P)^{\otimes k}$. The conditioning on a random variable property of $\TV$ in Fact \[tvfacts\] implies that $$\begin{aligned}
&\TV\left( \otimes_{t = 1}^k \mL(z_1^t, z_2^t + z_4^t), \left( \textnormal{Bern}(P) \otimes \textnormal{Bin}(n/k - 1, Q) \right)^{\otimes k} \right) \\
&\quad \quad \le \bE_{v \sim \text{Bern}(P)^{\otimes k}} \, \TV\left( \otimes_{t = 1}^k \mL\left(z_2^t + z_4^t \Big| z_1^t = v_t\right), \text{Bin}(n/k - 1, Q)^{\otimes k} \right)\end{aligned}$$ which, when combined with the inequalities above, completes the proof of the lemma.
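The mechanism analyzed in this lemma, namely that the correction term $z_4^t = \max\{z_3^t - z_1^t - z_2^t, 0\}$ makes $z_1^t + z_2^t + z_4^t$ close to $\textnormal{Bin}(n/k, Q)$, can be checked exactly on a small hypothetical single-block instance ($k = 1$, with $n \ge (P/Q + 1)N$ as the lemma requires); the exact total variation is small but nonzero.

```python
from fractions import Fraction as F
from math import comb
from collections import defaultdict

def bin_pmf(n, p, t):
    if t < 0 or t > n:
        return F(0)
    return comb(n, t) * p**t * (1 - p) ** (n - t)

# Hypothetical single-block instance: N/k = 10, n/k = 40, so that
# n >= (P/Q + 1) N = 28 holds for P = 9/10 and Q = 1/2.
Nk, nk = 10, 40
P, Q = F(9, 10), F(1, 2)

law = defaultdict(F)                 # exact law of z1 + z2 + z4
for z1 in (0, 1):                    # z1 ~ Bern(P)
    p1 = P if z1 == 1 else 1 - P
    for z2 in range(Nk):             # z2 ~ Bin(N/k - 1, P)
        p2 = bin_pmf(Nk - 1, P, z2)
        for z3 in range(nk + 1):     # z3 ~ Bin(n/k, Q)
            z4 = max(z3 - z1 - z2, 0)
            law[z1 + z2 + z4] += p1 * p2 * bin_pmf(nk, Q, z3)

# Exact TV distance to the target Bin(n/k, Q): small but nonzero, since
# the correction only fails on the rare event z3 < z1 + z2.
tv = F(1, 2) * sum(abs(law[t] - bin_pmf(nk, Q, t)) for t in range(nk + 1))
assert 0 < tv < F(1, 50)
```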
We now apply this lemma to prove Lemma \[lem:submatrix\]. The proof of this lemma is a $k$-partite variant of the argument used to prove Theorem 6.1 in [@brennan2019universality]. However, it involves several technical subtleties that do not arise in the non-$k$-partite case.
Fix some subset $R \subseteq [N]$ such that $|R \cap E_i| = 1$ for each $i \in [k]$. We will first show that $\mathcal{A}$ maps an input $G \sim \mG(N, R, p, q)$ approximately in total variation to a sample from the planted submatrix distribution $\mathcal{M}_{[n] \times [n]} \left(\mU_n(F), \textnormal{Bern}(p), \textnormal{Bern}(Q) \right)$. By AM-GM, we have that $$\sqrt{pq} \le \frac{p + q}{2} = 1 - \frac{(1 - p) + (1 - q)}{2} \le 1 - \sqrt{(1 - p)(1 - q)}$$ If $p \neq 1$, it follows that $P = p > Q = 1 - \sqrt{(1 - p)(1 - q)}$. This implies that $\frac{1 - p}{1 - q} = \left( \frac{1 - P}{1 - Q} \right)^2$ and the inequality above rearranges to $\left( \frac{P}{Q} \right)^2 \le \frac{p}{q}$. If $p = 1$, then $Q = \sqrt{q}$ and $\left( \frac{P}{Q} \right)^2 = \frac{p}{q}$. Furthermore, the inequality $\frac{1 - p}{1 - q} \le \left( \frac{1 - P}{1 - Q} \right)^2$ holds trivially. Therefore we may apply Lemma \[lem:graphcloning\], which implies that $(G_1, G_2) \sim \mG(N, R, p, Q)^{\otimes 2}$.
Let the random set $U = \{ \pi_1^{-1}(R \cap E_1), \pi_2^{-1}(R \cap E_2), \dots, \pi_k^{-1}(R \cap E_k) \}$ denote the $k$-subset of $[n]$ that $R$ is mapped to in the embedding step of $\pr{To-}k\textsc{-Partite-Submatrix}$. Now fix some $k$-subset $R' \subseteq [n]$ with $|R' \cap F_i| = 1$ for each $i \in [k]$ and consider the distribution of $M_{\text{PD}}$ conditioned on the event $U = R'$. Since $(G_1, G_2) \sim \mG(N, R, p, Q)^{\otimes 2}$, Step 2 of $\pr{To-}k\textsc{-Partite-Submatrix}$ ensures that the off-diagonal entries of $M_{\text{PD}}$, given this conditioning, are independent and distributed as follows:
- $M_{ij} \sim \text{Bern}(p)$ if $i \neq j$ and $i, j \in R'$; and
- $M_{ij} \sim \text{Bern}(Q)$ if $i \neq j$ and $i \not \in R'$ or $j \not \in R'$.
which match the corresponding entries of $\mathcal{M}_{[n] \times [n]} \left(R' \times R', \textnormal{Bern}(p), \textnormal{Bern}(Q) \right)$. Furthermore, these entries are independent of the vector $\text{diag}(M_{\text{PD}}) = \left( (M_{\text{PD}})_{ii} : i \in [k] \right)$ of the diagonal entries of $M_{\text{PD}}$. It therefore follows that $$\begin{aligned}
&\TV\left( \mL \left( M_{\text{PD}} \Big| U = R' \right), \mathcal{M}_{[n] \times [n]} \left(R' \times R', \textnormal{Bern}(p), \textnormal{Bern}(Q) \right) \right) \\
&\quad \quad = \TV\left( \mL \left( \text{diag}(M_{\text{PD}}) \Big| U = R' \right), \mathcal{M}_{[n]} \left(R', \textnormal{Bern}(p), \textnormal{Bern}(Q) \right) \right)\end{aligned}$$ Let $(S_1', S_2', \dots, S_k')$ be any tuple of fixed subsets such that $|S_t'| = N/k$, $S_t' \subseteq F_t$ and $R' \cap F_t \in S_t'$ for each $t \in [k]$. Now consider the distribution of $\text{diag}(M_{\text{PD}})$ conditioned on both $U = R'$ and $(S_1, S_2, \dots, S_k) = (S_1', S_2', \dots, S_k')$. It holds by construction that the $k$ vectors $\text{diag}(M_{\text{PD}})_{F_t}$ are independent for $t \in [k]$ and each distributed as follows:
- $\text{diag}(M_{\text{PD}})_{S_t'}$ is an exchangeable distribution on $\{0, 1\}^{N/k}$ with support of size $s_1^t \sim \text{Bin}(N/k, p)$, by construction. This implies that $\text{diag}(M_{\text{PD}})_{S_t'} \sim \text{Bern}(p)^{\otimes N/k}$. This can trivially be restated as $\left(M_{R' \cap F_t, R' \cap F_t}, \text{diag}(M_{\text{PD}})_{S_t' \backslash R'}\right) \sim \text{Bern}(p) \otimes \text{Bern}(p)^{\otimes N/k - 1}$.
- $\text{diag}(M_{\text{PD}})_{F_t \backslash S_t'}$ is an exchangeable distribution on $\{0, 1\}^{(n - N)/k}$ with support of size $z_4^t = \max\{s_2^t - s_1^t, 0\}$. Furthermore, $\text{diag}(M_{\text{PD}})_{F_t \backslash S_t'}$ is independent of $\text{diag}(M_{\text{PD}})_{S_t'}$.
For each $t \in [k]$, let $z_1^t = M_{R' \cap F_t, R' \cap F_t} \sim \text{Bern}(p)$ and let $z_2^t \sim \text{Bin}(N/k - 1, p)$ be the size of the support of $\text{diag}(M_{\text{PD}})_{S_t' \backslash R'}$. As discussed in the first point above, we have that $z_1^t$ and $z_2^t$ are independent and $z_1^t + z_2^t = s_1^t$.
Now consider the distribution of $\text{diag}(M_{\text{PD}})$ relaxed to only be conditioned on $U = R'$, and no longer on $(S_1, S_2, \dots, S_k) = (S_1', S_2', \dots, S_k')$. Conditioned on $U = R'$, the $S_t$ are independent and each uniformly distributed among all $N/k$ size subsets of $F_t$ that contain the element $R' \cap F_t$. In particular, this implies that the distribution of $\text{diag}(M_{\text{PD}})_{F_t \backslash R'}$ is an exchangeable distribution on $\{0, 1\}^{n/k - 1}$ with support size $z_2^t + z_4^t$ for each $t$. Note that any $v \sim \mathcal{M}_{[n]} \left(R', \textnormal{Bern}(p), \textnormal{Bern}(Q) \right)$ also satisfies that $v_{F_t \backslash R'}$ is exchangeable. This implies that $\mathcal{M}_{[n]} \left(R', \textnormal{Bern}(p), \textnormal{Bern}(Q) \right)$ and $\text{diag}(M_{\text{PD}})$ are identically distributed when conditioned on their entries with indices in $R'$ and on their support sizes within the $k$ sets of indices $F_t \backslash R'$. The conditioning property of Fact \[tvfacts\] therefore implies that $$\begin{aligned}
&\TV\left( \mL \left( \text{diag}(M_{\text{PD}}) \Big| U = R' \right), \mathcal{M}_{[n]} \left(R', \textnormal{Bern}(p), \textnormal{Bern}(Q) \right) \right) \\
&\quad \quad \le \TV\left( \otimes_{t = 1}^k \mL(z_1^t, z_2^t + z_4^t), \left( \textnormal{Bern}(p) \otimes \textnormal{Bin}(n/k - 1, Q) \right)^{\otimes k} \right) \\
&\quad \quad \le 4k \cdot \exp \left( - \frac{Q^2N^2}{48Pkn} \right) + \sqrt{\frac{C_Q k^2}{2n}}\end{aligned}$$ by the first inequality in Lemma \[lem:plantingdiagonals\]. Now observe that $U \sim \mU_n(F)$ and thus marginalizing over $R' \sim \mL(U) = \mU_n(F)$ and applying the conditioning property of Fact \[tvfacts\] yields that $$\begin{aligned}
&\TV\left( \mathcal{A}(G(N, R, p, q)), \mathcal{M}_{[n] \times [n]} \left(\mU_n(F), \textnormal{Bern}(p), \textnormal{Bern}(Q) \right) \right) \\
&\quad \quad \le \bE_{R' \sim \mU_n(F)} \, \TV\left( \mL \left( M_{\text{PD}} \Big| U = R' \right), \mathcal{M}_{[n] \times [n]} \left(R' \times R', \textnormal{Bern}(p), \textnormal{Bern}(Q) \right) \right)\end{aligned}$$ since $M_{\text{PD}} \sim \mathcal{A}(\mG(N, R, p, q))$. Applying an identical marginalization over $R \sim \mU_N(E)$ completes the proof of the first inequality in the lemma statement.
It suffices to consider the case where $G \sim \mG(N, q)$, which follows from an analogous but simpler argument. By Lemma \[lem:graphcloning\], we have that $(G_1, G_2) \sim \mG(N, Q)^{\otimes 2}$. It follows that the entries of $M_{\text{PD}}$ are distributed as $(M_{\text{PD}})_{ij} \sim_{\text{i.i.d.}} \text{Bern}(Q)$ for all $i \neq j$ independently of $\text{diag}(M_{\text{PD}})$. Now note that the $k$ vectors $\text{diag}(M_{\text{PD}})_{F_t}$ for $t \in [k]$ are each exchangeable and have support size $s_1^t + \max\{ s_2^t - s_1^t, 0 \} = z_1^t + z_2^t + z_4^t$ where $z_1^t \sim \text{Bern}(p)$, $z_2^t \sim \text{Bin}(N/k - 1, p)$ and $s_2^t \sim \text{Bin}(n/k, Q)$ are independent. By the same argument as above, we have that $$\begin{aligned}
\TV\left( \mL(M_{\text{PD}}), \text{Bern}(Q)^{\otimes n \times n} \right) &= \TV\left( \mL(\text{diag}(M_{\text{PD}})), \text{Bern}(Q)^{\otimes n} \right) \\
&= \TV\left( \otimes_{t = 1}^k \mL\left( z_1^t + z_2^t + z_4^t \right), \text{Bin}(n/k, Q)^{\otimes k} \right) \\
&\le 4k \cdot \exp \left( - \frac{Q^2N^2}{48Pkn} \right)\end{aligned}$$ by Lemma \[lem:plantingdiagonals\]. Since $M_{\text{PD}} \sim \mathcal{A}(\mG(N, q))$, this completes the proof of the lemma.
Proofs for Symmetric 3-ary Rejection Kernels {#subsec:appendix-3-ary}
--------------------------------------------
In this section, we establish the approximate Markov transition properties for symmetric 3-ary rejection kernels introduced in Section \[subsec:srk\].
Define $\mL_1, \mL_2 : X \to \mathbb{R}$ to be $$\mL_1(x) = \frac{d\mP_+}{d\mQ} (x) - \frac{d\mP_-}{d\mQ} (x) \quad \text{and} \quad \mL_2(x) = \frac{d\mP_+}{d\mQ} (x) + \frac{d\mP_-}{d\mQ} (x) - 2$$ Note that if $x \in S$, then the triangle inequality implies that $$\begin{aligned}
P_A(x, 1) &\le \frac{1}{2} \left( 1 + \frac{a}{4|\mu_2|} \cdot |\mL_2(x)| + \frac{1}{4|\mu_1|} \cdot |\mL_1(x)| \right) \le 1 \\
P_A(x, 1) &\ge \frac{1}{2} \left( 1 - \frac{a}{4|\mu_2|} \cdot |\mL_2(x)| - \frac{1}{4|\mu_1|} \cdot |\mL_1(x)| \right) \ge 0\end{aligned}$$ Similar computations show that $0 \le P_A(x, 0) \le 1$ and $0 \le P_A(x, -1) \le 1$, implying that each of these probabilities is well-defined. Now let $R_1 = \bP_{X \sim \mP_+}[X \in S]$, $R_0 = \bP_{X \sim \mQ}[X \in S]$ and $R_{-1} = \bP_{X \sim \mP_-}[X \in S]$ where $R_1, R_0, R_{-1} \ge 1 - \delta$ by assumption.
We now define several useful events. For the sake of analysis, consider continuing to iterate Step 2 even after $z$ is set for the first time for a total of $N$ iterations. Let $A_i^1$, $A_i^0$ and $A_i^{-1}$ be the events that $z$ is set in the $i$th iteration of Step 2 when $B = 1$, $B = 0$ and $B = -1$, respectively. Let $B_i^1 = (A_1^1)^C \cap (A_2^1)^C \cap \cdots \cap (A^1_{i - 1})^C \cap A_i^1$ be the event that $z$ is set for the first time in the $i$th iteration of Step 2. Let $C^1 = A_1^1 \cup A_2^1 \cup \cdots \cup A_N^1$ be the event that $z$ is set in some iteration of Step 2. Define $B_i^0$, $C^0$, $B_i^{-1}$ and $C^{-1}$ analogously. Let $z_0$ be the initialization of $z$ in Step 1.
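To make the acceptance scheme concrete, the following is a minimal exact-arithmetic sketch on a two-point space with hypothetical choices of $\mP_+$, $\mP_-$, $\mQ$, $a$, $\mu_1$ and $\mu_2$, taking $S = X$ so that $\delta = 0$. It checks that the three acceptance probabilities are valid and that each single-iteration acceptance probability is exactly $1/2$, consistent with the computation of $\bP[A_i^t]$ below when $R_1 = R_0 = R_{-1} = 1$.

```python
from fractions import Fraction as F

# Two-point space {0, 1}; hypothetical Bernoulli choices for P+, P-, Q and
# hypothetical parameters a, mu1, mu2, with S = X so that delta = 0.
Q  = {0: F(1, 2), 1: F(1, 2)}
Pp = {0: F(1, 4), 1: F(3, 4)}    # P_+
Pm = {0: F(1, 2), 1: F(1, 2)}    # P_-
a, mu1, mu2 = F(1, 2), F(1), F(1)

def L1(x): return Pp[x] / Q[x] - Pm[x] / Q[x]
def L2(x): return Pp[x] / Q[x] + Pm[x] / Q[x] - 2

def P_accept(x, B):
    # single-iteration acceptance probabilities of the 3-ary rejection kernel
    if B == 1:
        return F(1, 2) * (1 + a / (4 * mu2) * L2(x) + 1 / (4 * mu1) * L1(x))
    if B == 0:
        return F(1, 2) * (1 - (1 - a) / (4 * mu2) * L2(x))
    return F(1, 2) * (1 + a / (4 * mu2) * L2(x) - 1 / (4 * mu1) * L1(x))

for B in (-1, 0, 1):
    # well-defined probabilities on all of X
    assert all(0 <= P_accept(x, B) <= 1 for x in Q)
    # with S = X, E_Q[L1] = E_Q[L2] = 0, so each iteration accepts w.p. 1/2
    assert sum(Q[x] * P_accept(x, B) for x in Q) == F(1, 2)
```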
Now let $Z_1 \sim \mD_1 = \mL(3\textsc{-srk}(1))$, $Z_0 \sim \mD_0 = \mL(3\textsc{-srk}(0))$ and $Z_{-1} \sim \mD_{-1} = \mL(3\textsc{-srk}(-1))$. Note that $\mL(Z_t|B_i^t) = \mL(Z_t|A_i^t)$ for each $t \in \{-1, 0, 1\}$ since $A_i^t$ is independent of $A_1^t, A_2^t, \dots, A_{i-1}^t$ and the sample $z'$ chosen in the $i$th iteration of Step 2. The independence between Steps 2.1 and 2.3 implies that $$\begin{aligned}
\bP\left[A_i^1\right] &= \bE_{x \sim \mQ}\left[ \frac{1}{2} \left( 1+ \frac{a}{4\mu_2} \cdot \mL_2(x) + \frac{1}{4\mu_1} \cdot \mL_1(x) \right) \cdot \mathbf{1}_{S}(x) \right] \\
&= \frac{1}{2} R_0 + \frac{a}{8\mu_2} \left( R_1 + R_{-1} - 2R_0 \right) + \frac{1}{8\mu_1} \left( R_1 - R_{-1} \right) \ge \frac{1}{2} - \frac{\delta}{2} \left( 1 + \frac{a}{2}|\mu_2|^{-1} + \frac{1}{4}|\mu_1|^{-1} \right) \\
\bP\left[A_i^0 \right] &= \bE_{x \sim \mQ}\left[ \frac{1}{2} \left( 1 - \frac{1 - a}{4\mu_2} \cdot \mL_2(x) \right) \cdot \mathbf{1}_{S}(x) \right] \\
&= \frac{1}{2} R_0 - \frac{1 - a}{8\mu_2} \left( R_1 + R_{-1} - 2R_0 \right) \ge \frac{1}{2} - \frac{\delta}{2} \left( 1 + \frac{1 - a}{4} \cdot |\mu_2|^{-1} \right) \\
\bP\left[A_i^{-1}\right] &= \bE_{x \sim \mQ}\left[ \frac{1}{2} \left( 1+ \frac{a}{4\mu_2} \cdot \mL_2(x) - \frac{1}{4\mu_1} \cdot \mL_1(x) \right) \cdot \mathbf{1}_{S}(x) \right] \\
&= \frac{1}{2} R_0 + \frac{a}{8\mu_2} \left( R_1 + R_{-1} - 2R_0 \right) - \frac{1}{8\mu_1} \left( R_1 - R_{-1} \right) \ge \frac{1}{2} - \frac{\delta}{2} \left( 1 + \frac{a}{2}|\mu_2|^{-1} + \frac{1}{4}|\mu_1|^{-1} \right)\end{aligned}$$ The independence of the $A_i^t$ for each $t \in \{-1, 0, 1\}$ implies that $$1 - \bP\left[ C^t \right] = \prod_{i = 1}^N \left( 1 - \bP\left[A_i^t\right] \right) \le \left( \frac{1}{2} + \frac{\delta}{2} \left( 1 + \frac{1}{2}|\mu_2|^{-1} + |\mu_1|^{-1} \right) \right)^N$$ Note that $\mL(Z_t|A_i^t)$ are each absolutely continuous with respect to $\mQ$ for each $t \in \{-1, 0, 1\}$, with Radon-Nikodym derivatives given by $$\begin{aligned}
\frac{d\mL(Z_1|B_i^1)}{d\mQ} (x) = \frac{d\mL(Z_1|A_i^1)}{d\mQ} (x) &= \frac{1}{2\cdot \bP\left[A_i^1\right]} \left( 1+ \frac{a}{4\mu_2} \cdot \mL_2(x) + \frac{1}{4\mu_1} \cdot \mL_1(x) \right) \cdot \mathbf{1}_S(x) \\
\frac{d\mL(Z_0|B_i^0)}{d\mQ} (x) = \frac{d\mL(Z_0|A_i^0)}{d\mQ} (x) &= \frac{1}{2\cdot \bP\left[A_i^0\right]} \left( 1 - \frac{1 - a}{4\mu_2} \cdot \mL_2(x) \right) \cdot \mathbf{1}_S(x) \\
\frac{d\mL(Z_{-1}|B_i^{-1})}{d\mQ} (x) = \frac{d\mL(Z_{-1}|A_i^{-1})}{d\mQ} (x) &= \frac{1}{2\cdot \bP\left[A_i^{-1}\right]} \left( 1+ \frac{a}{4\mu_2} \cdot \mL_2(x) - \frac{1}{4\mu_1} \cdot \mL_1(x) \right) \cdot \mathbf{1}_S(x)\end{aligned}$$ Fix some $t \in \{-1, 0, 1\}$ and note that since the conditional laws $\mL(Z_t|B_i^t)$ are all identical, we have that $$\frac{d\mD_t}{d\mQ} (x) = \bP\left[C^t \right] \cdot \frac{d\mL(Z_t|B_1^t)}{d\mQ} (x) + \left( 1 - \bP\left[C^t \right] \right) \cdot \mathbf{1}_{z_0}(x)$$ Therefore it follows that $$\begin{aligned}
\TV\left( \mD_t, \mL(Z_t|B_1^t) \right) &= \frac{1}{2} \cdot \bE_{x \sim \mQ} \left[\left| \frac{d\mD_t}{d\mQ} (x) - \frac{d\mL(Z_t|B_1^t)}{d\mQ} (x) \right| \right] \\
&\le \frac{1}{2} \left( 1 - \bP\left[ C^t \right] \right) \cdot \bE_{x \sim \mQ} \left[ \mathbf{1}_{z_0}(x) + \frac{d\mL(Z_t|B_1^t)}{d\mQ} (x) \right] = 1 - \bP\left[ C^t \right]\end{aligned}$$ by the triangle inequality. Since $1+ \frac{a}{4\mu_2} \cdot \mL_2(x) + \frac{1}{4\mu_1} \cdot \mL_1(x) \ge 0$ for $x \in S$, we have that $$\begin{aligned}
&\bE_{x \sim \mQ} \left[\left| \frac{d\mL(Z_1|B_1^1)}{d\mQ} (x) - \left( 1+ \frac{a}{4\mu_2} \cdot \mL_2(x) + \frac{1}{4\mu_1} \cdot \mL_1(x) \right) \right| \right] \\
&\quad \quad = \left|\frac{1}{2\cdot \bP\left[A_i^1\right]} - 1 \right| \cdot \bE_{x \sim \mQ} \left[\left( 1+ \frac{a}{4\mu_2} \cdot \mL_2(x) + \frac{1}{4\mu_1} \cdot \mL_1(x) \right) \cdot \mathbf{1}_S(x) \right] \\
&\quad \quad \quad \quad + \bE_{x \sim \mQ} \left[ \left| 1+ \frac{a}{4\mu_2} \cdot \mL_2(x) + \frac{1}{4|\mu_1|} \cdot \mL_1(x) \right| \cdot \mathbf{1}_{S^C}(x) \right] \\
&\quad \quad \le \left| \frac{1}{2} - \bP[A_i^1] \right| + \bE_{x \sim \mQ} \left[ \left( 1+ \frac{a}{4|\mu_2|} \cdot \left( \frac{d\mP_+}{d\mQ} (x) + \frac{d\mP_-}{d\mQ} (x) +2 \right) \right) \cdot \mathbf{1}_{S^C}(x) \right] \\
&\quad \quad \quad \quad + \bE_{x \sim \mQ} \left[ \frac{1}{4|\mu_1|} \cdot \left( \frac{d\mP_+}{d\mQ} (x) + \frac{d\mP_-}{d\mQ} (x) \right) \cdot \mathbf{1}_{S^C}(x) \right] \\
&\quad \quad \le \frac{\delta}{2} \left( 1 + \frac{a}{2}|\mu_2|^{-1} + \frac{1}{4}|\mu_1|^{-1} \right) + \delta \left( 1 + a|\mu_2|^{-1} + \frac{1}{2}|\mu_1|^{-1} \right) = \delta \left( \frac{3}{2} + \frac{5}{4} |\mu_2|^{-1} + \frac{5}{8} |\mu_1|^{-1} \right)\end{aligned}$$ By analogous computations, we have that $$\begin{aligned}
\bE_{x \sim \mQ} \left[\left| \frac{d\mL(Z_0|B_1^0)}{d\mQ} (x) - \left( 1 - \frac{1 - a}{4\mu_2} \cdot \mL_2(x) \right) \right| \right] &\le 2\delta \left(1 + |\mu_1|^{-1} + |\mu_2|^{-1} \right) \\
\bE_{x \sim \mQ} \left[\left| \frac{d\mL(Z_{-1}|B_1^{-1})}{d\mQ} (x) - \left( 1+ \frac{a}{4\mu_2} \cdot \mL_2(x) - \frac{1}{4\mu_1} \cdot \mL_1(x) \right) \right| \right] &\le 2\delta \left(1 + |\mu_1|^{-1} + |\mu_2|^{-1} \right) \end{aligned}$$ Now observe that $$\begin{aligned}
\frac{d\mP_+}{d\mQ}(x) &= \left( \frac{1 - a}{2} + \mu_1 + \mu_2 \right) \cdot \left( 1+ \frac{a}{4\mu_2} \cdot \mL_2(x) + \frac{1}{4\mu_1} \cdot \mL_1(x) \right) + (a - 2\mu_2) \cdot \left( 1 - \frac{1 - a}{4\mu_2} \cdot \mL_2(x) \right) \\
&\quad \quad + \left( \frac{1 - a}{2} - \mu_1 + \mu_2 \right) \cdot \left( 1+ \frac{a}{4\mu_2} \cdot \mL_2(x) - \frac{1}{4\mu_1} \cdot \mL_1(x) \right) \\
1 &= \frac{1 - a}{2} \cdot \left( 1+ \frac{a}{4\mu_2} \cdot \mL_2(x) + \frac{1}{4\mu_1} \cdot \mL_1(x) \right) + a \cdot \left( 1 - \frac{1 - a}{4\mu_2} \cdot \mL_2(x) \right) \\
&\quad \quad +\frac{1 - a}{2} \cdot \left( 1+ \frac{a}{4\mu_2} \cdot \mL_2(x) - \frac{1}{4\mu_1} \cdot \mL_1(x) \right) \\
\frac{d\mP_-}{d\mQ}(x) &= \left( \frac{1 - a}{2} - \mu_1 + \mu_2 \right) \cdot \left( 1+ \frac{a}{4\mu_2} \cdot \mL_2(x) + \frac{1}{4\mu_1} \cdot \mL_1(x) \right) + (a - 2\mu_2) \cdot \left( 1 - \frac{1 - a}{4\mu_2} \cdot \mL_2(x) \right) \\
&\quad \quad + \left( \frac{1 - a}{2} + \mu_1 + \mu_2 \right) \cdot \left( 1+ \frac{a}{4\mu_2} \cdot \mL_2(x) - \frac{1}{4\mu_1} \cdot \mL_1(x) \right)\end{aligned}$$ Let $\mD^*$ be the mixture of $\mL(Z_1 | B_1^1), \mL(Z_0 | B_1^0)$ and $\mL(Z_{-1} | B_1^{-1})$ with weights $\frac{1 - a}{2} + \mu_1 + \mu_2, a - 2\mu_2$ and $\frac{1 - a}{2} - \mu_1 + \mu_2$, respectively. It then follows by the triangle inequality that $$\begin{aligned}
&\TV\left( 3\textsc{-srk}(\textnormal{Tern}(a, \mu_1, \mu_2)), \mP_+ \right) \\
&\quad \quad \le \TV\left( \mD^*, \mP_+ \right) + \TV\left( \mD^*, 3\textsc{-srk}(\textnormal{Tern}(a, \mu_1, \mu_2)) \right) \\
&\quad \quad \le \left( \frac{1 - a}{2} + \mu_1 + \mu_2 \right) \cdot \bE_{x \sim \mQ} \left[\left| \frac{d\mL(Z_1|B_1^1)}{d\mQ} (x) - \left( 1+ \frac{a}{4\mu_2} \cdot \mL_2(x) + \frac{1}{4\mu_1} \cdot \mL_1(x) \right) \right| \right] \\
&\quad \quad \quad \quad + \left( a - 2\mu_2 \right) \cdot \bE_{x \sim \mQ} \left[\left| \frac{d\mL(Z_0|B_1^0)}{d\mQ} (x) - \left( 1 - \frac{1 - a}{4\mu_2} \cdot \mL_2(x) \right) \right| \right] \\
&\quad \quad \quad \quad + \left( \frac{1 - a}{2} - \mu_1 + \mu_2 \right) \cdot \bE_{x \sim \mQ} \left[\left| \frac{d\mL(Z_{-1}|B_1^{-1})}{d\mQ} (x) - \left( 1+ \frac{a}{4\mu_2} \cdot \mL_2(x) - \frac{1}{4\mu_1} \cdot \mL_1(x) \right) \right| \right] \\
&\quad \quad \quad \quad + \left( \frac{1 - a}{2} + \mu_1 + \mu_2 \right) \cdot \TV\left( \mD_1, \mL(Z_1|B_1^1) \right) + \left( a - 2\mu_2 \right) \cdot \TV\left( \mD_0, \mL(Z_0|B_1^0) \right) \\
&\quad \quad \quad \quad + \left( \frac{1 - a}{2} - \mu_1 + \mu_2 \right) \cdot \TV\left( \mD_{-1}, \mL(Z_{-1}|B_1^{-1}) \right) \\
&\quad \quad \le 2\delta \left(1 + |\mu_1|^{-1} + |\mu_2|^{-1} \right) + \left( \frac{1}{2} + \delta \left( 1 + |\mu_1|^{-1} + |\mu_2|^{-1} \right) \right)^N\end{aligned}$$ A symmetric argument shows analogous upper bounds on both $\TV\left( 3\textsc{-srk}(\textnormal{Tern}(a, -\mu_1, \mu_2)), \mP_- \right)$ and $\TV\left( 3\textsc{-srk}(\textnormal{Tern}(a, 0, 0)), \mQ \right)$, completing the proof of the lemma.
Proofs for Label Generation {#sec:app-label-generation}
---------------------------
In this section, we give the two deferred proofs from Section \[subsec:2-mixtures-slr\].
This lemma follows from a similar argument to Lemma \[lem:planted-label\]. As in Lemma \[lem:planted-label\], the given conditions on $C, \gamma, \mu'$ and $N$ imply that $$2\left( \frac{\gamma \cdot y'}{\mu'(1 + \gamma^2)} \right)^2 \le 1$$ and thus $X'$ is well-defined almost surely. First observe that if $Z = \mu'' \cdot u + G'$ where $G' \sim \mN(0, I_d)$ then $$X' = \frac{a\gamma \cdot y'}{1 + \gamma^2} \cdot u + \frac{\gamma \cdot y'}{\mu'(1 + \gamma^2)} \cdot G' + \frac{1}{\sqrt{2}} \cdot \sqrt{1 - 2\left( \frac{\gamma \cdot y'}{\mu'(1 + \gamma^2)} \right)^2} \cdot G + \frac{1}{\sqrt{2}} \cdot W$$ where $a = \mu''/\mu'$. Thus by the same argument as in Lemma \[lem:planted-label\], we have that $$\mL(X' | y') = \mN\left( \frac{a\gamma \cdot y'}{1 + \gamma^2} \cdot u, \, I_d - \frac{\gamma^2}{1 + \gamma^2} \cdot uu^\top \right)$$ Now note that by the conditioning property of multivariate Gaussians, we have that $$\mL(X|y) = \mN\left(\Sigma_{Xy}\Sigma_{yy}^{-1} \cdot y, \, \Sigma_{XX} - \Sigma_{Xy} \Sigma_{yy}^{-1} \Sigma_{yX} \right)$$ It is easily verified that $$\Sigma_{Xy}\Sigma_{yy}^{-1} = \frac{a\gamma}{1 + \gamma^2} \cdot u \quad \text{and} \quad \Sigma_{XX} - \Sigma_{Xy} \Sigma_{yy}^{-1} \Sigma_{yX} = I_d - \frac{\gamma^2}{1 + \gamma^2} \cdot uu^\top$$ and thus $\mL(X|y)$ and $\mL(X'|y')$ are equidistributed. Since $y \sim \mN(0, 1 + \gamma^2)$, the same application of the conditioning property in Fact \[tvfacts\] as in Lemma \[lem:planted-label\] implies that $$\TV\left( \mL(X, y), \mL(X', y') \right) \le \TV\left( \mL(y), \mL(y') \right) = O\left( N^{-C^2/2} \right)$$ which completes the proof of the lemma.
This lemma follows from a similar argument to Lemma \[lem:planted-label\]. As in Lemmas \[lem:planted-label\] and \[lem:imbalanced-planted-label\], the given conditions imply that $X'$ is well-defined almost surely. Conditioned on $y'$, it holds that $Z, G$ and $W$ are independent. Therefore the three terms in the definition of $X'$ are independent and distributed as $$\begin{aligned}
&\frac{\gamma \cdot y'}{\mu'(1 + \gamma^2)} \cdot Z \sim \mN\left(0, \, \left( \frac{\gamma \cdot y'}{\mu'(1 + \gamma^2)} \right)^2 \cdot I_d \right), \\
&\frac{1}{\sqrt{2}} \cdot \sqrt{1 - 2\left( \frac{\gamma \cdot y'}{\mu'(1 + \gamma^2)} \right)^2} \cdot G \sim \mN\left(0, \, \frac{1}{2} \cdot I_d - \left( \frac{\gamma \cdot y'}{\mu'(1 + \gamma^2)} \right)^2 \cdot I_d \right) \quad \text{and} \\
&\frac{1}{\sqrt{2}} \cdot W \sim \mN\left(0, \, \frac{1}{2} \cdot I_d \right)\end{aligned}$$ conditioned on $y'$. It follows that $X' | y' \sim \mN(0, I_d)$ and thus $X'$ is independent of $y'$. Now let $X \in \mathbb{R}^d$ and $y \in \mathbb{R}$ be such that $X \sim \mN(0, I_d)$ and $y \sim \mN(0, 1 + \gamma^2)$ are independent. The same application of the conditioning property in Fact \[tvfacts\] as in Lemmas \[lem:planted-label\] and \[lem:imbalanced-planted-label\] now completes the proof of the lemma.
Deferred Proofs from Part \[part:lower-bounds\] {#sec:appendix-3}
===============================================
Proofs from Secret Leakage and the $\pr{pc}_\rho$ Conjecture {#sec:appendix-4}
------------------------------------------------------------
In this section, we present the deferred proof of Lemma \[l:avgCorrLargeSets\] from Section \[sec:2-secret-leakage\]. The proof of this lemma is similar to the proof of Lemma 5.2 in [@feldman2013statistical].
The proof is almost identical to Lemma 5.2 in [@feldman2013statistical] and we give a sketch here. Lemma \[l:avgCorr\] implies that $\sum_{T\in A}\big| \la \Dh_S, \Dh_T\ra_D \big| \leq \sum_{T\in A} 2^{|S\cap T|} k^2 / n^2$. If the only constraint on $A$ is its cardinality, then the maximum value for the RHS is obtained by adding $S$ to $A$, next $\{T:|T\cap S|=k-1\}$, and so forth with decreasing size of $|T\cap S|$, and we assume that $A$ is defined in this manner. Letting $T_\lambda = \{T: |T\cap S|=\lambda\}$, set $\lambda_0 = \min\{\lambda: T_\lambda\neq \varnothing\}$ so that $T_\lambda\subseteq A$ for $\lambda>\lambda_0$. We bound the ratio $$\frac{|T_j|}{|T_{j+1}|} = \frac{{k\choose j}\big(\frac nk\big)^{k-j}}{{k\choose j+1}\big(\frac nk\big)^{k-j-1}}\geq \frac{jn}{k^2}=j n^{2\delta}\quad \text{hence}\quad |T_j|\leq \frac{|T_0|}{(j-1)! n^{2\delta j}}\leq \frac{|\cS|}{(j-1)! n^{2\delta j}}\,.$$ Now $$|A|\leq \sum_{j\geq \lambda_0} |T_j| \leq |\cS|n^{-2\delta \lambda_0}\sum_{j\geq \lambda_0} \frac1{(j-1)!n^{2\delta(j-\lambda_0)}}\leq 2 |\cS|n^{-2\delta \lambda_0}$$ for $n$ greater than some constant. Thus if $|A|\geq 2|\cS|/ n^{2\ell \delta}$, we must conclude that $\ell \geq \lambda_0$. We bound the quantity $\sum_{T\in A} 2^{|S\cap T|} \leq \sum_{j=\lambda_0}^k 2^j|T_j\cap A|\leq 2^{\lambda_0}|T_{\lambda_0}\cap A|+\sum_{j=\lambda_0+1}^k 2^j|T_j|\leq 2^{\lambda_0}|A| + 2^{\lambda_0+2}|T_{\lambda_0+1}|\leq 2^{\lambda_0+3}|A|\leq 2^{\ell +3}|A|$. Here we used that $|T_{j+1}|\leq |T_j| n^{-2\delta}$ to bound by a geometric series and also that $T_{\lambda_0+1}\subseteq A$. Rearranging and combining with the inequality at the start of the proof concludes the argument.
Proofs for Reductions and Computational Lower Bounds {#subsec:appendix-3-part-3}
----------------------------------------------------
In this section, we present a number of deferred proofs from Part \[part:lower-bounds\]. The majority of these proofs are similar to other proofs presented in the main body of the paper.
To prove this theorem, we will show that Theorem \[thm:slr-reduction\] implies that $k\pr{-bpds-to-mslr}$ applied with $r > 2$ fills out all of the possible growth rates specified by the computational lower bound $n = \tilde{o}(k^2 \epsilon^2/\tau^4)$ and the other conditions in the theorem statement. As discussed above, it suffices to reduce in total variation to $\pr{mslr}(n, k, d, \tau, 1/r)$ where $1/r \le \epsilon$.
Fix a constant pair of probabilities $0 < q < p \le 1$ and any sequence of parameters $(n, k, d, \tau, \epsilon)$ all of which are implicitly functions of $n$ such that $(n, \epsilon^{-1})$ satisfies $\pr{(t)}$ and $(n, k, d, \tau, \epsilon)$ satisfy the conditions $$n \le c \cdot \frac{k^2 \epsilon^2}{w^2 \cdot \tau^4 \cdot (\log n)^{4+2c'}}, \quad wk \le n^{1/6} \quad \text{and} \quad w k^2 \le d$$ for sufficiently large $n$, an arbitrarily slow-growing function $w = w(n) \to \infty$ at least satisfying that $w(n) = n^{o(1)}$, a sufficiently small constant $c > 0$ and a sufficiently large constant $c' > 0$. The rest of this proof will follow that of Theorem \[thm:rsme-lb\] very closely. In order to fulfill the criteria in Condition \[cond:lb\], we specify $M, N, k_M, k_N$ and $n'$ exactly as in Theorem \[thm:rsme-lb\]. As in Theorem \[thm:rsme-lb\], we have the inequalities $$n' \le w^{-2} r^{2t} = O\left( \frac{r^{2t}}{n} \cdot \frac{k^2 \epsilon^2}{\tau^4 \cdot (\log n)^{2+2c'}} \right)$$ $$\tau \le \frac{c^{1/4} \epsilon^{1/2} k^{1/2}}{n^{1/4} (\log n)^{(2 + c')/2}} = \Theta \left( \frac{r^{t/2}}{n^{1/4}} \cdot \frac{k_M^{1/2}}{\sqrt{r^{t + 1} (\log n)^{2+c'}}} \right)$$ Furthermore, we also have that $$\tau^2 \le \frac{c^{1/2} \cdot k}{wn^{1/2} \cdot (\log n)^{2+c'}} = O\left( \frac{r^t}{n} \cdot \frac{k_N k_M}{N \log (MN)} \right)$$ As long as $\sqrt{n} = \tilde{\Theta}(r^t)$ then: (2.1) the inequality above on $n'$ would imply that $(n', k, d, \tau, \epsilon)$ is in the desired hard regime; (2.2) $n$ and $n'$ have the same growth rate since $w = n^{o(1)}$; and (2.3) $n \gg M^3$, $d \ge M$ and taking $c'$ large enough would imply that $\tau$ satisfies the bounds needed to apply Theorem \[thm:slr-reduction\] to yield the desired reduction. By Lemma \[lem:propT\], there is an infinite subsequence of the input parameters such that $\sqrt{n} = \tilde{\Theta}(r^t)$, which concludes the proof as in Theorem \[thm:rsme-lb\].
First suppose that $M \sim \pr{ghpm}_D(n, r, C, D, \gamma)$ where $C$ and $D$ are each sequences of $r$ disjoint sets of size $K$. Since the $M_{ij}$ are independent for $1 \le i, j \le n$, we now have that $$\begin{aligned}
\bE[s_C(M)] &= \sum_{i, j = 1}^n \bE\left[M_{ij}^2 - 1\right] = rK^2 \cdot \gamma^2 + \frac{rK^2}{r - 1} \cdot \gamma^2 \\
\text{Var}\left[ s_C(M) \right] &= \sum_{i, j = 1}^n \text{Var}\left[M_{ij}^2 - 1\right] = rK^2 \cdot 4\gamma^2 + \frac{rK^2}{(r - 1)^3} \cdot \gamma^2 + 2n^2\end{aligned}$$ Here, we have used the following facts. If $X \sim \mN(0, 1)$, then $$\begin{aligned}
&\bE[(\gamma + X)^2 - 1] = \gamma^2, \quad \bE\left[\left(\frac{\gamma}{r - 1} + X\right)^2 - 1\right] = \frac{\gamma^2}{(r - 1)^2} \\
&\text{Var}[X^2 - 1] = 2, \quad \text{Var}[(\gamma + X)^2 - 1] = 4\gamma^2 + 2, \quad \text{Var}\left[\left(\frac{\gamma}{r - 1} + X\right)^2 - 1\right] = \frac{\gamma^2}{(r - 1)^4} + 2\end{aligned}$$ Note that $s_C(M)$ is invariant to permuting the rows and columns of $M$ and thus $s_C(M)$ is equidistributed under $M \sim \pr{ghpm}_D(n, r, C, D, \gamma)$ and $M \sim \pr{ghpm}_D(n, r, K, \gamma)$. Now Chebyshev’s inequality implies the desired lower bound on $s_C(M)$ in (1) holds with probability $1 - o_n(1)$. Now observe that $$s_I(M) \ge \sum_{h = 1}^r \sum_{i \in C_h} \sum_{j \in D_h} M_{ij} = Y$$ holds almost surely by definition when $M \sim \pr{ghpm}_D(n, r, C, D, \gamma)$. Note that $Y \sim \mN(rK^2 \gamma, rK^2)$ conditioned on $C$ and $D$ and therefore it holds that $Y \ge rK^2 \gamma - wr^{1/2} K$ with probability $1 - o_n(1)$. The second lower bound in (1) now follows since $s_I(M)$ is equidistributed under $M \sim \pr{ghpm}_D(n, r, C, D, \gamma)$ and $M \sim \pr{ghpm}_D(n, r, K, \gamma)$.
Now suppose that $M \sim \mN(0, 1)^{\otimes n \times n}$. In this case, $s_C(M) + n^2$ is distributed as $\chi^2(n^2)$ and the first upper bound in (2) holds by Chebyshev’s inequality and the fact that $\chi^2(n^2)$ has variance $2n^2$. Now note that $$Y(C, D) = \sum_{h = 1}^r \sum_{i \in C_h} \sum_{j \in D_h} M_{ij} \sim \mN(0, rK^2)$$ Standard Gaussian tail bounds imply that $$\begin{aligned}
\bP\left[ Y(C, D) > 2r K^{3/2}w \sqrt{\left(\log n + \log r \right)} \right] &\le \frac{1}{\sqrt{2\pi}} \exp\left( -\frac{1}{2rK^2} \left( 2r K^{3/2} w \sqrt{\left(\log n + \log r \right)} \right)^2 \right) \\
&\le (nr)^{-2rKw^2}\end{aligned}$$ A crude upper bound on the number of pairs $(C, D)$ is $$\left( \binom{n}{rK} r^{rK} \right)^2 = o\left( (nr)^{2rK} \right)$$ and therefore a union bound implies that $s_I(M) = \max_{C, D} Y(C, D) \le 2r K^{3/2}w \sqrt{\left(\log n + \log r \right)}$ with probability $1 - o_n(1)$. This completes the proof of the lemma.
Consider the following reduction $\mathcal{A}$ that adds a simple post-processing step to $k$<span style="font-variant:small-caps;">-pds-to-ghpm</span> as in Corollary \[thm:isbm-mod\]. On input graph $G$ with $N$ vertices:
1. Form the graph $M_{\text{R}}$ by applying $k$<span style="font-variant:small-caps;">-pds-to-ghpm</span> to $G$ with parameters $N, r, k, E, \ell, n, s$ and $\mu$ where $\mu$ is given by $$\mu = \frac{r^{t} \sqrt{r}}{(r - 1)} \cdot \Phi^{-1}\left( \frac{1}{2} + \frac{1}{2} \cdot \min\{P_0, 1 - P_0\}^{-1} \cdot \gamma \right)$$ and $\Phi^{-1}$ is the inverse of the standard normal CDF.
2. Let $G_1$ be the graph where edge $(i, j)$ is in $G_1$ if and only if $(M_{\text{R}})_{ij} \ge 0$. Now form $G_2$ as in Step 2 of Corollary \[thm:isbm-mod\], while restricting to edges between the two parts.
This clearly runs in $\text{poly}(N)$ time and it suffices to establish its approximate Markov transition properties. Let $\mathcal{A}_1$ denote the first step with input $G$ and output $M_{\text{R}}$, and let $\mathcal{A}_2$ denote the second step with input $M_{\text{R}}$ and output $G_2$. Let $C$ and $D$ be two fixed sequences, each consisting of $r$ disjoint subsets of $[ksr^t]$ of size $kr^{t - 1}$. Let $P_1, P_2 \in (0, 1)$ be $$P_{1} = \Phi\left( \frac{\mu(r - 1)}{r^t \sqrt{r}}\right) \quad \text{and} \quad P_{2} = \Phi\left( - \frac{\mu}{r^t \sqrt{r}} \right)$$ Note that by the definition of $\mu$, we have that $P_1 = \frac{1}{2} + \frac{1}{2} \cdot \min\{P_0, 1 - P_0\}^{-1} \cdot \gamma$. Now note that $\mathcal{A}_2$ applied to $M_{\text{R}} \sim \pr{ghpm}_D(ksr^t, r, C, D, \gamma)$ yields an instance of $\pr{bhpm}_D(ksr^t, r, C, D, \gamma)$ with the following modified edge probabilities:
1. The edge probabilities between vertices $C_h$ and $D_h$ for each $1 \le h \le r$ are still $P_0 + \gamma$.
2. The edge probabilities between $C_{h_1}$ and $D_{h_2}$ for each $h_1 \neq h_2$ are now $$P_0 + 2\min\{P_0, 1 - P_0\} \cdot \left( \Phi\left( - \frac{\mu}{r^t \sqrt{r}} \right) - \frac{1}{2} \right) = P_0 + 2 \min\{P_0, 1 - P_0\} \cdot \left( P_2 - \frac{1}{2} \right)$$
3. All other edge probabilities are still $P_0$.
We now apply a similar sequence of inequalities as in Corollary \[thm:isbm-mod\]. For now assume that $P_0 \le 1/2$. Using the fact that all of the edge indicators in this model and in the usual definition of $\pr{bhpm}$ are independent, the tensorization property in Fact \[tvfacts\] and Lemma \[lem:bintv\], we now have that $$\begin{aligned}
&\TV\left( \mathcal{A}_2\left( \pr{ghpm}_D(ksr^t, r, C, D, \gamma) \right), \, \pr{bhpm}_D(ksr^t, r, C, D, \gamma) \right) \\
&\quad \quad \le \TV\left( \text{Bern}\left( P_0 - \frac{\gamma}{r - 1} \right)^{\otimes k^2r^{2t - 1}(r - 1)}, \, \text{Bern}\left( P_0 + 2 P_0 \cdot \left( P_2 - \frac{1}{2} \right) \right)^{\otimes k^2r^{2t - 1}(r - 1)} \right) \\
&\quad \quad \le \left| \frac{\gamma}{r - 1} + 2P_0 \cdot \left( P_2 - \frac{1}{2} \right) \right| \cdot \sqrt{\frac{k^2r^{2t - 1}(r - 1)}{2\left( P_0 - \frac{\gamma}{r - 1} \right) \left(1 - P_0 + \frac{\gamma}{r - 1} \right)}} \\
&\quad \quad \le \left| \frac{\gamma}{r - 1} + 2P_0 \cdot \left( P_2 - \frac{1}{2} \right) \right| \cdot O\left( kr^{t} \right)\end{aligned}$$ where the third inequality uses the fact that $P_0$ is bounded away from $0$ and $1$ and $\gamma = o(1)$. Now note that $$\frac{\gamma}{r - 1} = \frac{2P_0}{r - 1} \cdot \left( \Phi\left( \frac{\mu(r - 1)}{r^t \sqrt{r}}\right) - \frac{1}{2} \right)$$ Using the standard Taylor approximation for $\Phi(x) - 1/2$ around zero when $x \in (-1, 1)$, we have $$\begin{aligned}
\left| \frac{\gamma}{r - 1} + 2P_0 \cdot \left( P_2 - \frac{1}{2} \right) \right| &= 2P_0 \cdot \left| \frac{1}{r - 1} \left( \Phi\left( \frac{\mu(r - 1)}{r^t \sqrt{r}}\right) - \frac{1}{2} \right) - \left( \Phi\left( - \frac{\mu}{r^t \sqrt{r}} \right) - \frac{1}{2} \right) \right| \\
&= O\left( \frac{\mu^3 \sqrt{r}}{r^{3t}} \right)\end{aligned}$$ Therefore we have that $$\TV\left( \mathcal{A}_2\left( \pr{ghpm}_D(ksr^t, r, C, D, \gamma) \right), \, \pr{bhpm}_D(ksr^t, r, C, D, \gamma) \right) = O\left( \frac{k\mu^3 \sqrt{r}}{r^{2t}} \right)$$ A nearly identical argument considering the complement of the graph $G_1$ and replacing $P_0$ with $1 - P_0$ establishes this bound in the case when $P_0 > 1/2$. Observe that $\mathcal{A}_2 \left( \mathcal{N}(0, 1)^{\otimes n \times n} \right) \sim \mG_B(n, n, P_0)$. Now consider applying Lemma \[lem:tvacc\] to the steps $\mathcal{A}_1$ and $\mathcal{A}_2$ as in Corollary \[thm:isbm-mod\]. It can be verified that the given bound on $\gamma$ yields the condition on $\mu$ needed to apply Theorem \[thm:ghpm\] if $c > 0$ is sufficiently small. Thus $\epsilon_1$ is bounded by Theorem \[thm:ghpm\] and $\epsilon_2$ is bounded by the argument above after averaging over $C$ and $D$ and applying the conditioning property of Fact \[tvfacts\]. This application of Lemma \[lem:tvacc\] therefore yields the desired two approximate Markov transition properties and completes the proof of the corollary.
As discussed in the beginning of this section, it suffices to map to $\mG(n, P_0 - \mu_1)$ under $H_0$ and $\pr{tsi}(n, k, k_1, P_0, \mu_1, \mu_2, \mu_3)$ under $H_1$ where $\mu_3 = P_1 - P_0$ and $\mu_1, \mu_2 \ge 0$. Thus it suffices to show that the reduction $\mathcal{A}$ in Corollary \[cor:semi-cr-gen\] fills out all of the possible growth rates specified by the computational lower bound $\frac{(P_1 - P_0)^2}{P_0(1 - P_0)} = \tilde{o}(n/k^2)$ and the other conditions in the theorem statement. Fix a constant pair of probabilities $0 < q < p \le 1$ and any sequence of parameters $(n, k, P_1, P_0)$ all of which are implicitly functions of $n$ such that $$\frac{(P_1 - P_0)^2}{P_0(1 - P_0)} \le c \cdot \frac{n}{w^3 \cdot k^2 \log n} \quad \text{and} \quad \min\{P_0, 1 - P_0 \} = \Omega_n(1)$$ for sufficiently large $n$, sufficiently small constant $c > 0$ and an arbitrarily slow-growing increasing positive integer-valued function $w = w(n) \to \infty$ at least satisfying that $w(n) = n^{o(1)}$. As in the proof of Theorem \[thm:rsme-lb\], it suffices to specify:
1. a sequence $(N, k_N)$ such that the $k\pr{-pds}(N, k_N, p, q)$ is hard according to Conjecture \[conj:hard-conj\]; and
2. a sequence $(n', k', P_1, P_0, s, t, \mu)$ satisfying: (2.1) the parameters $(n', k', P_1, P_0)$ are in the regime of the desired computational lower bound for $\pr{semi-cr}$; (2.2) $(n', k')$ have the same growth rates as $(n, k)$; and (2.3) such that $\mG(n', P_0 - \mu_1)$ and $\pr{tsi}(n', k', k'/2, P_0, \mu_1, \mu_2, P_1 - P_0)$, where $k'$ is even and $\mu_1, \mu_2 \ge 0$, can be produced by $\mathcal{A}$ with input $k\pr{-pds}(N, k_N, p, q)$.
We choose these parameters as follows:
- let $t$ be such that $3^t$ is the smallest power of 3 larger than $k/\sqrt{n}$ and let $s = \lceil 2n/3k \rceil$;
- let $\mu \in (0, 1)$ be given by $$\mu = 3^t \cdot \Phi^{-1} \left( \frac{1}{2} + \frac{1}{2} \cdot \min\{P_0, 1 - P_0 \}^{-1} (P_1 - P_0) \right)$$
- now let $$k_N = \left\lfloor \frac{1}{2}\left( 1 + \frac{p}{Q} \right)^{-1} w^{-2} \cdot \sqrt{n} \right\rfloor$$ where $Q = 1 - \sqrt{(1 - p)(1 - q)} + \mathbf{1}_{\{ p = 1\}} \left( \sqrt{q} - 1 \right)$; and
- let $n' = 3k_N s \cdot \frac{3^t - 1}{2}$, let $k' = (3^t - 1)k_N$ and let $N = wk_N^2$.
Note that $3^t = \Theta(k/\sqrt{n})$, $s = \Theta(n/k)$ and $3^t k_N s \le \text{poly}(N)$. Note that this choice of $\mu$ implies that $$P_1 = P_0 + 2\min\{P_0, 1 - P_0 \} \cdot \left( \Phi\left( \frac{\mu}{3^t} \right) - \frac{1}{2} \right)$$ which implies that the instance of $\pr{tsi}$ output by $\mathcal{A}$ has edge density $P_1$ on its $k'$-vertex planted dense subgraph. It follows that $$\begin{aligned}
n' &\asymp 3^t k_N s \asymp \frac{k}{\sqrt{n}} \cdot \frac{n}{k} w^{-2} \cdot \sqrt{n} \asymp w^{-2} \cdot n \quad \text{and} \quad k' \asymp 3^t k_N \asymp w^{-2} k \\
\frac{(P_1 - P_0)^2}{P_0(1 - P_0)} &\le c \cdot \frac{n}{w^3 \cdot k^2 \log n} \lesssim c \cdot \frac{n'}{w \cdot (k')^2 \log n'} \\
m &\le 2\left( \frac{p}{Q} + 1 \right) wk_N^2 \le w^{-1} \sqrt{n} \cdot k_N \le 3^t k_N s \\
\mu &\lesssim 3^t \cdot (P_1 - P_0) \lesssim 3^t \cdot \frac{\sqrt{n}}{w^{3/2} \cdot k \sqrt{\log n'}} \le \frac{c}{w^{3/2} \sqrt{\log n'}}\end{aligned}$$ where the last bound above follows from the fact that $\Phi(x) - 1/2 = \Theta(x)$ as $x \to 0$. Here, $m$ is the smallest multiple of $k_N$ larger than $\left( \frac{p}{Q} + 1 \right) N$. Now note that: (2.1) the third inequality above on $(P_1 - P_0)^2/P_0(1 - P_0)$ implies that $(n', k', P_1, P_0)$ is in the desired hard regime; (2.2) $(n, n')$ and $(k, k')$ have the same growth rates since $w = n^{o(1)}$; and (2.3) the last two bounds above imply that taking $c$ small enough yields the conditions needed to apply Corollary \[cor:semi-cr-gen\] to yield the desired reduction. This completes the proof of the theorem.
The parameters $a, \mu_1, \mu_2$ for which these distributional statements are true are given by $$\begin{aligned}
a &= \Phi(\tau) - \Phi(-\tau) \\
\mu_1 &= \frac{1}{2} \left( (1 - \Phi(\tau - \mu)) - \Phi(-\tau - \mu) \right) = \frac{1}{2} \left( \Phi(\tau + \mu) - \Phi(\tau - \mu) \right) \\
\mu_2 &= \frac{1}{2} \left( \Phi(\tau) - \Phi(-\tau) \right) - \frac{1}{2} \left( \Phi(\tau + \mu) - \Phi(-\tau + \mu) \right) = \frac{1}{2} \left( 2 \cdot \Phi(\tau) - \Phi(\tau + \mu) - \Phi(\tau - \mu) \right)\end{aligned}$$ Now note that $$\mu_1 = \frac{1}{2} \left( \Phi(\tau + \mu) - \Phi(\tau - \mu) \right) = \frac{1}{2\sqrt{2\pi}} \int_{\tau - \mu}^{\tau + \mu} e^{-t^2/2} dt = \Theta(\mu)$$ and is positive since $e^{-t^2/2}$ is bounded above and below by positive constants on $[\tau - \mu, \tau + \mu]$ as $\tau$ is constant and $\mu \to 0$. Furthermore, note that $$\begin{aligned}
\mu_2 &= \frac{1}{2} \left( 2 \cdot \Phi(\tau) - \Phi(\tau + \mu) - \Phi(\tau - \mu) \right) = \frac{1}{2\sqrt{2\pi}} \int_{\tau - \mu}^{\tau} e^{-t^2/2} dt - \frac{1}{2\sqrt{2\pi}} \int_{\tau}^{\tau + \mu} e^{-t^2/2} dt \\
&= \frac{1}{2\sqrt{2\pi}} \int_{\tau}^{\tau + \mu} \left( e^{-(t - \mu)^2/2} - e^{-t^2/2}\right) dt = \frac{1}{2\sqrt{2\pi}} \int_{\tau}^{\tau + \mu} e^{-t^2/2} \left(e^{t\mu - \mu^2/2} - 1 \right) dt \end{aligned}$$ Now note that as $\mu \to 0$ and for $t \in [\tau, \tau + \mu]$, it follows that $0 < e^{t\mu - \mu^2/2} - 1= \Theta(\mu)$. This implies that $0 < \mu_2 = \Theta(\mu^2)$, as claimed.
To prove this theorem, we will show that Theorem \[lem:univlem\] implies that $k\textsc{-bpds-to-glsm}$ fills out all of the possible growth rates specified by the computational lower bound $n = \tilde{o}\left(\tau_{\mU}^{-4}\right)$ and the other conditions in the theorem statement, as in the proof of Theorems \[thm:rsme-lb\] and \[thm:uslr-lb\]. Fix a constant pair of probabilities $0 < q < p \le 1$ and any sequence $(n, k, d, \mU)$ where $\mU = \left( \mD, \mQ, \{ \mP_{\nu} \}_{\nu \in \mathbb{R}} \right) \in \pr{uc}(n)$ all of which are implicitly functions of $n$ with $$n \le \frac{c}{\tau_{\mU}^4 \cdot w^2 \cdot (\log n)^{2}} \quad \text{and} \quad w k^2 \le d$$ for sufficiently large $n$, an arbitrarily slow-growing function $w = w(n) \to \infty$ and a sufficiently small constant $c > 0$. Now consider specifying the parameters $M, N, k_M, k_N$ and $t$ exactly as in Theorem \[thm:uslr-lb\]. Now note that under these parameter settings, we have that $$\tau_{\mU} \le \frac{c^{1/4}}{n^{1/4} w^{1/2} \sqrt{\log n}} \le 2c^{1/4} \cdot \sqrt{\frac{k_N}{N \log N}}$$ Therefore $\tau_{\mU}$ satisfies the conditions needed to apply Theorem \[lem:univlem\] for a sufficiently small $c > 0$. The other parameters $(n, k, d, \mU)$ and $(M, N, k_M, k_N, p, q)$ can also be verified to satisfy the conditions of this theorem. We now have that $k\pr{-bpds}(M, N, k_M, k_N, p, q)$ is hard according to Conjecture \[conj:hard-conj\], and that $\pr{glsm}(n, k, d, \mU)$ can be produced by the reduction $k\textsc{-bpds-to-glsm}$ applied to $k\pr{-bpds}(M, N, k_M, k_N, p, q)$. This verifies the criteria in Condition \[cond:lb\] and, following the argument in Section \[subsec:2-tvreductions\], Lemma \[lem:3a\] now implies the theorem.
[^1]: Massachusetts Institute of Technology. Department of EECS. Email: `brennanm@mit.edu`.
[^2]: Massachusetts Institute of Technology. Department of EECS. Email: `guy@mit.edu`.
Q:
Extract numeric part of strings of mixed numbers and characters in R
I have a lot of strings, each of which tends to have the following format: Ab_Cd-001234.txt
I want to replace each of them with its numeric part, e.g. 001234. How can I achieve this in R?
A:
The stringr package has lots of handy shortcuts for this kind of work:
# input data following @agstudy
data <- c('Ab_Cd-001234.txt','Ab_Cd-001234.txt')
# load library
library(stringr)
# prepare regular expression
regexp <- "[[:digit:]]+"
# process string
str_extract(data, regexp)
Which gives the desired result:
[1] "001234" "001234"
To explain the regexp a little:
[[:digit:]] is any number 0 to 9
+ means the preceding item (in this case, a digit) will be matched one or more times
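One caveat worth adding: if a string contains more than one run of digits, str_extract keeps only the first run. stringr's str_extract_all returns every run as a list (the filename below is a made-up example with a second digit run):

```r
# load library
library(stringr)

# str_extract() keeps only the first run of digits per string
str_extract("Ab_Cd-001234_v2.txt", "[[:digit:]]+")
# [1] "001234"

# str_extract_all() returns every run, one list element per input string
str_extract_all("Ab_Cd-001234_v2.txt", "[[:digit:]]+")
# [[1]]
# [1] "001234" "2"
```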
This page is also very useful for this kind of string processing: http://en.wikibooks.org/wiki/R_Programming/Text_Processing
A:
Using gsub or sub you can do this:
gsub('.*-([0-9]+).*','\\1','Ab_Cd-001234.txt')
"001234"
You can also use gregexpr with regmatches:
m <- gregexpr('[0-9]+','Ab_Cd-001234.txt')
regmatches('Ab_Cd-001234.txt',m)
"001234"
EDIT The two methods are vectorized and work for a vector of strings.
x <- c('Ab_Cd-001234.txt','Ab_Cd-001234.txt')
sub('.*-([0-9]+).*','\\1',x)
"001234" "001234"
m <- gregexpr('[0-9]+',x)
regmatches(x,m)
[[1]]
[1] "001234"
[[2]]
[1] "001234"
A:
You could use genXtract from the qdap package. This takes a left character string and a right character string and extracts the elements between.
library(qdap)
genXtract("Ab_Cd-001234.txt", "-", ".txt")
Though I much prefer agstudy's answer.
EDIT Extending answer to match agstudy's:
x <- c('Ab_Cd-001234.txt','Ab_Cd-001234.txt')
genXtract(x, "-", ".txt")
# $`- : .txt1`
# [1] "001234"
#
# $`- : .txt2`
# [1] "001234"
Authorities were called to the Peninsula Hotel on the Bellarine Highway, Moolap, shortly before midday after reports of a firearm incident.
It's understood a man and a woman were arrested and taken into custody around 4pm.
Q:
How to call Identity Server 4 with Postman for login
I have a Visual Studio solution, 'TourManagement', which contains two .NET Core projects. One is an IDP using IdentityServer4; the second is the TourManagement RESTful API, secured by the IDP project. My question is: how can I call IdentityServer4 from Postman to get tokens, and then call the TourManagement Bands API by passing the tokens returned from the identity server in the request header? My code is below.
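One note that matters for the Postman part of the question: the client in the Config class below is registered with `AllowedGrantTypes = GrantTypes.Implicit`, which is a browser-redirect flow, so IdentityServer4's `/connect/token` endpoint will not issue tokens directly to that client. A common way to test from Postman is to register an additional client that allows `GrantTypes.ResourceOwnerPassword` (with a client secret) and POST a test user's credentials to the token endpoint. A sketch of that request, assuming the IDP is hosted at `https://localhost:5000` and a hypothetical client with id `postmanclient` and secret `postmansecret` has been added to `Config.GetClients()`:

```http
POST https://localhost:5000/connect/token
Content-Type: application/x-www-form-urlencoded

grant_type=password&client_id=postmanclient&client_secret=postmansecret&scope=tourmanagementapi&username=Jon&password=jon123
```

The `access_token` field of the JSON response then goes into an `Authorization: Bearer <token>` header on requests to the TourManagement API.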
Startup Class in IDP Project
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.DependencyInjection;
namespace Marvin.IDP
{
public class Startup
{
public void ConfigureServices(IServiceCollection services)
{
services.AddMvc();
services.AddIdentityServer()
.AddDeveloperSigningCredential()
.AddTestUsers(Config.GetUsers())
.AddInMemoryApiResources(Config.GetApiResources())
.AddInMemoryIdentityResources(Config.GetIdentityResources())
.AddInMemoryClients(Config.GetClients());
services.AddCors();
}
public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
if (env.IsDevelopment())
{
app.UseDeveloperExceptionPage();
}
app.UseCors(c => c.AllowAnyOrigin().AllowAnyHeader().AllowAnyMethod());
app.UseIdentityServer();
app.UseStaticFiles();
app.UseMvcWithDefaultRoute();
}
}
}
Config Class in IDP Project
using IdentityServer4;
using IdentityServer4.Models;
using IdentityServer4.Test;
using System.Collections.Generic;
using System.Security.Claims;
namespace Marvin.IDP
{
public static class Config
{
public static List<TestUser> GetUsers()
{
return new List<TestUser>
{
new TestUser
{
SubjectId = "fec0a4d6-5830-4eb8-8024-272bd5d6d2bb",
Username = "Jon",
Password = "jon123",
Claims = new List<Claim>
{
new Claim("given_name", "Jon"),
new Claim("family_name", "Doe"),
new Claim("role", "Administrator"),
}
},
new TestUser
{
SubjectId = "c3b7f625-c07f-4d7d-9be1-ddff8ff93b4d",
Username = "Steve",
Password = "steve123",
Claims = new List<Claim>
{
new Claim("given_name", "Steve"),
new Claim("family_name", "Smith"),
new Claim("role", "Tour Manager"),
}
}
};
}
public static List<IdentityResource> GetIdentityResources()
{
return new List<IdentityResource>
{
new IdentityResources.OpenId(),
new IdentityResources.Profile(),
new IdentityResource("roles", "Your role(s)", new []{"role"}),
};
}
internal static IEnumerable<ApiResource> GetApiResources()
{
return new[] {
new ApiResource("tourmanagementapi", "Tour Management API", new[] { "role" })
};
}
public static List<Client> GetClients()
{
return new List<Client>
{
new Client
{
ClientName = "Tour Management",
ClientId="tourmanagementclient",
AllowedGrantTypes = GrantTypes.Implicit,
RequireConsent = false,
AllowAccessTokensViaBrowser = true,
RedirectUris =new List<string>
{
"https://localhost:4200/signin-oidc",
"https://localhost:4200/redirect-silentrenew"
},
AccessTokenLifetime = 180,
PostLogoutRedirectUris = new[]{
"https://localhost:4200/" },
AllowedScopes = new []
{
IdentityServerConstants.StandardScopes.OpenId,
IdentityServerConstants.StandardScopes.Profile,
"roles",
"tourmanagementapi",
}
}
};
}
}
}
Startup Class in TourManagement API Project
using IdentityServer4.AccessTokenValidation;
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc.Formatters;
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Newtonsoft.Json;
using Newtonsoft.Json.Serialization;
using System.Linq;
using TourManagement.API.Authorization;
using TourManagement.API.Services;
namespace TourManagement.API
{
public class Startup
{
public Startup(IConfiguration configuration)
{
Configuration = configuration;
}
public IConfiguration Configuration { get; }
public void ConfigureServices(IServiceCollection services)
{
services.AddAuthorization();
services.AddScoped<IAuthorizationHandler, UserMustBeTourManagerRequirementHandler>();
services.AddMvc(setupAction =>
{
setupAction.ReturnHttpNotAcceptable = true;
})
.AddJsonOptions(options =>
{
options.SerializerSettings.DateParseHandling = DateParseHandling.DateTimeOffset;
options.SerializerSettings.ContractResolver =
new CamelCasePropertyNamesContractResolver();
});
services.AddCors(options =>
{
options.AddPolicy("AllowAllOriginsHeadersAndMethods",
builder => builder.AllowAnyOrigin().AllowAnyHeader().AllowAnyMethod());
});
var connectionString = Configuration["ConnectionStrings:TourManagementDB"];
services.AddDbContext<TourManagementContext>(o => o.UseSqlServer(connectionString));
services.AddScoped<ITourManagementRepository, TourManagementRepository>();
services.AddSingleton<IHttpContextAccessor, HttpContextAccessor>();
services.AddScoped<IUserInfoService, UserInfoService>();
services.AddAuthentication(IdentityServerAuthenticationDefaults.AuthenticationScheme)
.AddIdentityServerAuthentication(options =>
{
options.Authority = "https://localhost:44398";
options.ApiName = "tourmanagementapi";
});
}
public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
if (env.IsDevelopment())
{
app.UseDeveloperExceptionPage();
}
else
{
app.UseExceptionHandler(appBuilder =>
{
appBuilder.Run(async context =>
{
context.Response.StatusCode = 500;
await context.Response.WriteAsync("An unexpected fault happened. Try again later.");
});
});
}
app.UseCors("AllowAllOriginsHeadersAndMethods");
app.UseAuthentication();
app.UseMvc();
}
}
}
Bands Controller in Tourmanagement API Project
using AutoMapper;
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;
using System.Collections.Generic;
using System.Threading.Tasks;
using TourManagement.API.Dtos;
using TourManagement.API.Services;
namespace TourManagement.API.Controllers
{
[Route("api/bands")]
[Authorize]
public class BandsController : Controller
{
private readonly ITourManagementRepository _tourManagementRepository;
public BandsController(ITourManagementRepository tourManagementRepository)
{
_tourManagementRepository = tourManagementRepository;
}
[HttpGet]
public async Task<IActionResult> GetBands()
{
var bandsFromRepo = await _tourManagementRepository.GetBands();
var bands = Mapper.Map<IEnumerable<Band>>(bandsFromRepo);
return Ok(bands);
}
}
}
A:
The key point is getting an access token for accessing tourmanagementapi using the implicit flow in Postman for testing.
The first thing is to set AllowAccessTokensViaBrowser to true in the client config in the GetClients function, so that access tokens can be transmitted via the browser channel:
new Client
{
.....
AllowAccessTokensViaBrowser =true,
.....
}
On the Postman side, do the following:
Enter your API's URL.
In Authorization Type, there is a dropdown list; select OAuth 2.0:
After selecting it, you'll notice a button that says Get Access Token. Click on it and enter the following information (based on your code):
Don't enter openid/profile as the Scope, since you are using OAuth2 in Postman.
Click on Request Token, and you'll see a new token added under the name you entered as Token Name.
Finally, make sure you add the token to the header, then click on Use Token. The token will then be sent as the Authorization header with each request:
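For reference, the implicit-flow request Postman fires under the hood is just a browser redirect to IdentityServer's authorize endpoint. Here is a rough sketch of the URL it builds from the config above (the nonce is a placeholder, and the authority URL is taken from the API project's options; adjust both to your setup):

```python
from urllib.parse import urlencode

# Values taken from the client configuration in GetClients()
authority = "https://localhost:44398"
params = {
    "client_id": "tourmanagementclient",
    "response_type": "token",            # implicit flow returns the token directly
    "redirect_uri": "https://localhost:4200/signin-oidc",
    "scope": "tourmanagementapi",        # plain OAuth2: no openid/profile here
    "nonce": "dummy-nonce",              # placeholder; any random string works
}

authorize_url = authority + "/connect/authorize?" + urlencode(params)
print(authorize_url)
```

If the token comes back successfully, Postman extracts it from the redirect fragment and offers it for use as the Authorization header.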
| 2024-07-02T01:26:35.434368 | https://example.com/article/4230 |
A Memory Allocator (2000) - epberry
http://g.oswego.edu/dl/html/malloc.html
======
bendoh
I remember this being a great resource for reimplementing malloc last
semester.
| 2023-08-11T01:26:35.434368 | https://example.com/article/1959 |
Neoadjuvant chemotherapy in elderly women with ovarian cancer: Rates of use and effectiveness.
Neoadjuvant chemotherapy (NACT) may reduce perioperative morbidity in women undergoing primary treatment for ovarian cancer. We evaluated patterns of use and outcomes in a population-based cohort of elderly women with ovarian cancer (OC). A cohort of patients ≥66 years old diagnosed between 2000 and 2013 with stage III-IV epithelial OC who received surgery and platinum/taxane chemotherapy for primary treatment was identified from the SEER-Medicare database. Propensity-score matching methods were used to examine differences in outcomes. Kaplan-Meier analysis was performed to compare overall survival (OS) in the matched cohort. From 2000 to 2013, 22.5% of older women received NACT. The use of NACT increased over time from 16% in 2000 to 35.4% in 2013 (p < .0001). Among women who received PCS, the rate of ostomy creation was higher compared with NACT (23.3% vs. 10.8%, p < .0001). Infectious and other surgical complications were higher among those who had PCS, regardless of stage. Median OS of women with stage III ovarian cancer who underwent PCS was longer compared with NACT (38.8 vs. 28 months, p ≤ .0001). There were no survival differences between NACT and PCS in women with stage IV disease (29.4 vs. 29.8 months, p = .61) or for women aged >80. Careful consideration should be given to older patients prior to undergoing PCS. Survival outcomes were similar for patients with stage IV disease, although NACT was associated with decreased perioperative morbidity compared with PCS. Among women with stage III disease, PCS was associated with improved overall survival, but higher rates of perioperative morbidity and acute care.
Free Shipping On All Orders
Breckenridge Ski Resort Trail Map Poster
$49.00 – $169.00
The Breckenridge Ski Trail poster features the official 2019 ski trail map, and is printed on a beautiful, thick photo paper, at incredibly high resolution. Choose either unframed, or framed in a thin black alderwood frame.
• 10 mil thick
• Slightly glossy
• Fingerprint resistant
All prints and canvases are made per order to ensure quality. Depending on timing, this means it may take us 2-5 business days to print, frame, and package your order. All other items that ship the same day will be shipped separately as early as they become available.
When the snow is amazing and the weather is perfect, you can’t waste a moment on the mountain staring at a map. With this large framed trail map, you’ll become an expert navigator before you hit the slopes. It also looks amazing on your wall. We don’t mess around either – this map is the official Breckenridge trail map, printed on high quality paper at extremely high resolution, with a partly glossy, partly matte finish. | 2024-04-14T01:26:35.434368 | https://example.com/article/8641 |
Q:
From and Image to an ImageSource
I have an image (an embedded resource) that I can access and build an Image object from. I can actually get the Image object, or the stream of bytes that represents the image. However, I want to use that image programmatically as a background image.
So how do I set the ImageSource on the ImageBrush to an actual image (PNG)?
A:
I think the MSDN documentation says it all:
http://msdn.microsoft.com/en-us/library/system.windows.media.imagebrush.imagesource%28VS.95%29.aspx
You can either set the source as a URI in XAML, or use code behind to set it to an ImageSource object created from a stream or a Uri, e.g.
_imageBrush.ImageSource = new BitmapImage(new Uri("http://someurl.com/images/myimage.png"));
Cheers, Alex
EDIT: If your image is a ressource, you can use the ressource url syntax:
"/{AssemblyName};component/{RelativePath}"
For example:
<ImageBrush ImageSource="/MyApplication.Resources;component/Images/image1.png" />
| 2024-04-11T01:26:35.434368 | https://example.com/article/6585 |
[/caption]NOTE: This was the Universe Today’s contribution to April Fools Day, just in case you were wondering. However, it isn’t a joke that a bat died during a shuttle launch. Brian will forever be remembered by the Brian Bat Foundation…
On Sunday, March 15th, Space Shuttle Discovery launched from Cape Canaveral, beginning the highly successful STS-119 mission to “power up” the International Space Station (ISS). Unfortunately, a tiny stowaway was discovered clutching onto the external tank of the shuttle and refused to budge. For the whole of Sunday, NASA waited for the free-tailed bat (unofficially named “Brian” by yours truly) to fly away. Alas, Brian held on to Discovery all the way up to launch. NASA even took a photo of the shuttle as it cleared the launch tower, Brian still attached. He wasn’t frozen to the external tank (infrared images showed the bat was warm), a wildlife expert studied the last pictures of Brian, informing the space agency that Brian had in fact suffered a broken wing and was unable to fly away, even as the rockets ignited.
Although NASA was not thought to be responsible for the death of the little animal at first (calling the whole incident “sad but unavoidable”), a Florida state official is pursuing legal action against the ground staff at the Cape. According to state animal protection law, NASA may be charged with negligence, after making little effort to prevent “animal interaction” with the launchpad and apparent unwillingness to remove Brian by hand before launch. However, as investigated by the local press, there are far more animal deaths during shuttle launches than we realise…
“First and foremost, the safety of the crew must be ensured,” said NASA spokeswoman Francis Rae, “it is unfortunate that the agency could be reprimanded over the death of an animal, but in the interest of safety and smooth launch operations, we will enact any preventative measures deemed necessary by the state.”
It turns out that NASA is a little shocked that a Florida official has decided to pursue the issue. NASA and Florida have enjoyed very close ties ever since the beginning of the Space Age and this is the first accusation of criminal negligence over the death of an animal (possibly in reaction to the huge international interest in the story). Little did the agency realise that the death of one unfortunate bat could land them in court.
“NASA enjoys total freedom of the airspace above the state, however the agency must still abide by the laws of the state, no matter how insignificant the rules may appear when compared with the endeavors of US activities in space.” — Statement by the District Attorney’s Office, Florida
According to local press, NASA can be fined for the preventable death of the bat under the same state laws that govern goods transportation (i.e. company-owned vehicles are liable if they hit endangered animal species on Florida highways). Therefore, if a truck hit a free-tailed bat on a freeway, and the driver was pulled over by a police officer, the company who owns the truck would be accountable. “This is exactly the same rule that is being applied to NASA, a free-tailed bat was killed during the operations of the shuttle. In the county’s eyes, that’s no different from a Walmart truck running over a protected animal. Like a cougar [the state animal],” reported the Orlando Sentinel.
Regardless of the outcome to the possible legal action, NASA has already prepared plans for an anti-bird/anti-bat mesh that will surround the launchpad after exterior inspection but before launch. This is where NASA tripped up, they performed an inspection on Saturday, March 14th, of Discovery’s external tank, but the pneumatic cranes (used to lift inspectors to the upright shuttle) were removed from the launchpad on launch day. Therefore, if NASA had to remove Brian by hand (if they knew he was injured), the Discovery launch would have been delayed further still, to wait for cranes and personnel to arrive on the scene.
This preventative measure isn’t thought to affect the remaining shuttle launches (before the shuttle is decommissioned in 2010), but the mesh will be built into the launch tower of the Constellation Program scheduled for launch in 2015 (pictured above).
“Estimates place the cost of the mesh at around $10 million,” said Rae. “However, if you factor in unforeseen project overruns and design issues, that cost could easily triple. Possibly more. We simply do not have the technology to fabricate such a large, lightweight net. It will, however, be worth it in the long-run.”
It would appear the mesh couldn’t come too soon for one NASA employee. Soon after Discovery launched on that fateful Sunday night, the Orlando Sentinel interviewed launch safety officer Aniline Lo who went into some detail about the real costs of a shuttle launch.
“…of course animals die during launches. We’ve had collisions with eagles during ascent, we’ve even found dead rats, mice and gophers left on the pad, there has also been injuries to some larger animals in the past. As the Cape is surrounded by water, it is hard to prevent alligators straying too close […] shuttle exhaust can hurt these reptiles, making them difficult to treat. It also seems the flash from the boosters cause confusion in some animals, including rabbits, actually attracting them to the launch pad at lift off. That always ends very badly.” — Aniline Lo, NASA Safety Officer
Lo then went into detail about the clean-up operation after launch. “It’s a shame, the adrenaline is pumping through your body before launch, but it is up to my team to clear up the mess which is the downer,” she said. “If you thought roadkill was bad, imagine it roasted. Hundreds of thousands of dollars post-launch could be saved in man-hours [for clean-up operations] if these animals are prevented from getting near to the rockets.”
The sad story of Brian the Bat captivated the world, but it looks like his demise was the tip of the iceberg. He was first named on the social networking site Twitter and on Astroengine.com. On launch day @DiscoveryBat appeared on Twitter, apparently tweeting from space and tweeting to this day. Even mainstream media refer to the ill-fated free-tailed bat as “Brian”. Consequently, the Brian Bat Foundation was set up to recognise animal endeavours in space. However, it appears the Foundation’s scope must now be extended to all the birds, angry alligators and rabbits on, or near, the shuttle’s launchpad during lift-off.
Source: Orlando Sentinel | 2023-09-25T01:26:35.434368 | https://example.com/article/3264 |
package com.example.access;
import cn.edu.buaa.crypto.access.AccessControlEngine;
import cn.edu.buaa.crypto.access.AccessControlParameter;
import cn.edu.buaa.crypto.access.UnsatisfiedAccessControlException;
import cn.edu.buaa.crypto.access.lsss.lw10.LSSSLW10Engine;
import cn.edu.buaa.crypto.access.parser.ParserUtils;
import cn.edu.buaa.crypto.access.parser.PolicySyntaxException;
import cn.edu.buaa.crypto.access.tree.AccessTreeEngine;
import com.example.TestUtils;
import it.unisa.dia.gas.jpbc.Element;
import it.unisa.dia.gas.jpbc.Pairing;
import it.unisa.dia.gas.jpbc.PairingParameters;
import it.unisa.dia.gas.plaf.jpbc.pairing.PairingFactory;
import junit.framework.TestCase;
import org.bouncycastle.crypto.CipherParameters;
import org.junit.Assert;
import java.io.IOException;
import java.util.Map;
/**
* Created by Weiran Liu on 2016/7/20.
*
* Access control engine test.
*/
public class AccessControlEngineTest extends TestCase {
private AccessControlEngine accessControlEngine;
public void runAllTests(PairingParameters pairingParameters) {
Pairing pairing = PairingFactory.getPairing(pairingParameters);
//test satisfied access control
if (this.accessControlEngine.isSupportThresholdGate()) {
try_valid_access_policy(pairing, 1,
AccessPolicyExamples.access_policy_threshold_example_1_tree,
AccessPolicyExamples.access_policy_threshold_example_1_rho,
AccessPolicyExamples.access_policy_threshold_example_1_satisfied01);
try_valid_access_policy(pairing, 2,
AccessPolicyExamples.access_policy_threshold_example_1_tree,
AccessPolicyExamples.access_policy_threshold_example_1_rho,
AccessPolicyExamples.access_policy_threshold_example_1_satisfied02);
try_valid_access_policy(pairing, 3,
AccessPolicyExamples.access_policy_threshold_example_1_tree,
AccessPolicyExamples.access_policy_threshold_example_1_rho,
AccessPolicyExamples.access_policy_threshold_example_1_satisfied03);
try_valid_access_policy(pairing, 4,
AccessPolicyExamples.access_policy_threshold_example_1_tree,
AccessPolicyExamples.access_policy_threshold_example_1_rho,
AccessPolicyExamples.access_policy_threshold_example_1_satisfied04);
try_valid_access_policy(pairing, 5,
AccessPolicyExamples.access_policy_threshold_example_1_tree,
AccessPolicyExamples.access_policy_threshold_example_1_rho,
AccessPolicyExamples.access_policy_threshold_example_1_satisfied05);
try_valid_access_policy(pairing, 6,
AccessPolicyExamples.access_policy_threshold_example_1_tree,
AccessPolicyExamples.access_policy_threshold_example_1_rho,
AccessPolicyExamples.access_policy_threshold_example_1_satisfied06);
try_valid_access_policy(pairing, 7,
AccessPolicyExamples.access_policy_threshold_example_1_tree,
AccessPolicyExamples.access_policy_threshold_example_1_rho,
AccessPolicyExamples.access_policy_threshold_example_1_satisfied07);
try_valid_access_policy(pairing, 8,
AccessPolicyExamples.access_policy_threshold_example_1_tree,
AccessPolicyExamples.access_policy_threshold_example_1_rho,
AccessPolicyExamples.access_policy_threshold_example_1_satisfied08);
try_valid_access_policy(pairing, 9,
AccessPolicyExamples.access_policy_threshold_example_1_tree,
AccessPolicyExamples.access_policy_threshold_example_1_rho,
AccessPolicyExamples.access_policy_threshold_example_1_satisfied09);
try_valid_access_policy(pairing, 10,
AccessPolicyExamples.access_policy_threshold_example_1_tree,
AccessPolicyExamples.access_policy_threshold_example_1_rho,
AccessPolicyExamples.access_policy_threshold_example_1_satisfied10);
try_valid_access_policy(pairing, 11,
AccessPolicyExamples.access_policy_threshold_example_1_tree,
AccessPolicyExamples.access_policy_threshold_example_1_rho,
AccessPolicyExamples.access_policy_threshold_example_1_satisfied11);
try_valid_access_policy(pairing, 20,
AccessPolicyExamples.access_policy_threshold_example_2_tree,
AccessPolicyExamples.access_policy_threshold_example_2_rho,
AccessPolicyExamples.access_policy_threshold_example_2_satisfied01);
//test unsatisfied access control
try_invalid_access_policy(pairing, 1,
AccessPolicyExamples.access_policy_threshold_example_1_tree,
AccessPolicyExamples.access_policy_threshold_example_1_rho,
AccessPolicyExamples.access_policy_threshold_example_1_unsatisfied01);
try_invalid_access_policy(pairing, 2,
AccessPolicyExamples.access_policy_threshold_example_1_tree,
AccessPolicyExamples.access_policy_threshold_example_1_rho,
AccessPolicyExamples.access_policy_threshold_example_1_unsatisfied02);
try_invalid_access_policy(pairing, 3,
AccessPolicyExamples.access_policy_threshold_example_1_tree,
AccessPolicyExamples.access_policy_threshold_example_1_rho,
AccessPolicyExamples.access_policy_threshold_example_1_unsatisfied03);
try_invalid_access_policy(pairing, 4,
AccessPolicyExamples.access_policy_threshold_example_1_tree,
AccessPolicyExamples.access_policy_threshold_example_1_rho,
AccessPolicyExamples.access_policy_threshold_example_1_unsatisfied04);
try_invalid_access_policy(pairing, 5,
AccessPolicyExamples.access_policy_threshold_example_1_tree,
AccessPolicyExamples.access_policy_threshold_example_1_rho,
AccessPolicyExamples.access_policy_threshold_example_1_unsatisfied05);
try_invalid_access_policy(pairing, 6,
AccessPolicyExamples.access_policy_threshold_example_1_tree,
AccessPolicyExamples.access_policy_threshold_example_1_rho,
AccessPolicyExamples.access_policy_threshold_example_1_unsatisfied06);
try_invalid_access_policy(pairing, 7,
AccessPolicyExamples.access_policy_threshold_example_1_tree,
AccessPolicyExamples.access_policy_threshold_example_1_rho,
AccessPolicyExamples.access_policy_threshold_example_1_unsatisfied07);
try_invalid_access_policy(pairing, 8,
AccessPolicyExamples.access_policy_threshold_example_1_tree,
AccessPolicyExamples.access_policy_threshold_example_1_rho,
AccessPolicyExamples.access_policy_threshold_example_1_unsatisfied08);
try_invalid_access_policy(pairing, 9,
AccessPolicyExamples.access_policy_threshold_example_1_tree,
AccessPolicyExamples.access_policy_threshold_example_1_rho,
AccessPolicyExamples.access_policy_threshold_example_1_unsatisfied09);
try_invalid_access_policy(pairing, 20,
AccessPolicyExamples.access_policy_threshold_example_2_tree,
AccessPolicyExamples.access_policy_threshold_example_2_rho,
AccessPolicyExamples.access_policy_threshold_example_2_unsatisfied01);
try_invalid_access_policy(pairing, 21,
AccessPolicyExamples.access_policy_threshold_example_2_tree,
AccessPolicyExamples.access_policy_threshold_example_2_rho,
AccessPolicyExamples.access_policy_threshold_example_2_unsatisfied02);
}
try_valid_access_policy(pairing, 31,
AccessPolicyExamples.access_policy_example_1,
AccessPolicyExamples.access_policy_example_1_satisfied_1);
try_valid_access_policy(pairing, 32,
AccessPolicyExamples.access_policy_example_1,
AccessPolicyExamples.access_policy_example_1_satisfied_2);
try_valid_access_policy(pairing, 41,
AccessPolicyExamples.access_policy_example_2,
AccessPolicyExamples.access_policy_example_2_satisfied_1);
try_valid_access_policy(pairing, 42,
AccessPolicyExamples.access_policy_example_2,
AccessPolicyExamples.access_policy_example_2_satisfied_2);
try_valid_access_policy(pairing, 51,
AccessPolicyExamples.access_policy_example_3,
AccessPolicyExamples.access_policy_example_3_satisfied_1);
try_invalid_access_policy(pairing, 31,
AccessPolicyExamples.access_policy_example_1,
AccessPolicyExamples.access_policy_example_1_unsatisfied_1);
try_invalid_access_policy(pairing, 41,
AccessPolicyExamples.access_policy_example_2,
AccessPolicyExamples.access_policy_example_2_unsatisfied_1);
try_invalid_access_policy(pairing, 42,
AccessPolicyExamples.access_policy_example_2,
AccessPolicyExamples.access_policy_example_2_unsatisfied_2);
try_invalid_access_policy(pairing, 53,
AccessPolicyExamples.access_policy_example_2,
AccessPolicyExamples.access_policy_example_2_unsatisfied_3);
try_invalid_access_policy(pairing, 51,
AccessPolicyExamples.access_policy_example_3,
AccessPolicyExamples.access_policy_example_3_unsatisfied_1);
try_invalid_access_policy(pairing, 52,
AccessPolicyExamples.access_policy_example_3,
AccessPolicyExamples.access_policy_example_3_unsatisfied_2);
}
private void try_valid_access_policy(
Pairing pairing, int testIndex,
final String accessPolicyString, final String[] attributeSet) {
try {
int[][] accessPolicy = ParserUtils.GenerateAccessPolicy(accessPolicyString);
// for (int i = 0; i < accessPolicy.length; i++) {
// for (int j = 0 ; j < accessPolicy[i].length; j++) {
// System.out.print(accessPolicy[i][j] + ", ");
// }
// System.out.println();
// }
// System.out.println();
String[] rhos = ParserUtils.GenerateRhos(accessPolicyString);
try_valid_access_policy(pairing, testIndex, accessPolicy, rhos, attributeSet);
} catch (PolicySyntaxException e) {
System.out.println("Access Policy with Combined Gate Satisfied Test " + testIndex + ", Error for parsing...");
e.printStackTrace();
}
}
private void try_invalid_access_policy(
Pairing pairing, int testIndex,
final String accessPolicyString, final String[] attributeSet) {
try {
int[][] accessPolicy = ParserUtils.GenerateAccessPolicy(accessPolicyString);
// for (int i = 0; i < accessPolicy.length; i++) {
// for (int j = 0 ; j < accessPolicy[i].length; j++) {
// System.out.print(accessPolicy[i][j] + ", ");
// }
// System.out.println();
// }
// System.out.println();
String[] rhos = ParserUtils.GenerateRhos(accessPolicyString);
try_invalid_access_policy(pairing, testIndex, accessPolicy, rhos, attributeSet);
} catch (PolicySyntaxException e) {
System.out.println("Access Policy with Combined Gate Satisfied Test " + testIndex + ", Error for parsing...");
e.printStackTrace();
}
}
private void try_valid_access_policy(
Pairing pairing, int testIndex,
final int[][] accessPolicy, final String[] rhos, final String[] attributeSet) {
try {
//Access Policy Generation
AccessControlParameter accessControlParameter = accessControlEngine.generateAccessControl(accessPolicy, rhos);
//SecretSharing
Element secret = pairing.getZr().newRandomElement().getImmutable();
// System.out.println("Generated Secret s = " + secret);
Map<String, Element> lambdaElementsMap = accessControlEngine.secretSharing(pairing, secret, accessControlParameter);
//test access parameter serialization
byte[] byteArrayAccessParameter = TestUtils.SerCipherParameter(accessControlParameter);
CipherParameters anAccessControlParameter = TestUtils.deserCipherParameters(byteArrayAccessParameter);
Assert.assertEquals(accessControlParameter, anAccessControlParameter);
//Secret Reconstruction
accessControlParameter = (AccessControlParameter)anAccessControlParameter;
Map<String, Element> omegaElementsMap = accessControlEngine.reconstructOmegas(pairing, attributeSet, accessControlParameter);
Element reconstructedSecret = pairing.getZr().newZeroElement().getImmutable();
for (String eachAttribute : attributeSet) {
if (omegaElementsMap.containsKey(eachAttribute)) {
reconstructedSecret = reconstructedSecret.add(lambdaElementsMap.get(eachAttribute).mulZn(omegaElementsMap.get(eachAttribute))).getImmutable();
}
}
// System.out.println("Reconstruct Secret s = " + reconstructedSecret);
if (!reconstructedSecret.equals(secret)) {
System.out.println("Access Policy with Combined Gate Satisfied Test " + testIndex + ", Reconstructed Secret Wrong...");
System.exit(0);
}
System.out.println("Access Policy with Combined Gate Satisfied Test " + testIndex + " Passed.");
        } catch (UnsatisfiedAccessControlException | IOException | ClassNotFoundException e) {
            System.out.println("Access Policy with Combined Gate Satisfied Test " + testIndex + ", Error for getting Exceptions...");
            e.printStackTrace();
            System.exit(0);
        }
}
private void try_invalid_access_policy(
Pairing pairing, int testIndex,
final int[][] accessPolicy, final String[] rhos, final String[] attributeSet) {
try {
//Access Policy Generation
AccessControlParameter accessControlParameter = accessControlEngine.generateAccessControl(accessPolicy, rhos);
//SecretSharing
Element secret = pairing.getZr().newRandomElement().getImmutable();
// System.out.println("Generated Secret s = " + secret);
Map<String, Element> lambdaElementsMap = accessControlEngine.secretSharing(pairing, secret, accessControlParameter);
//Secret Reconstruction
Map<String, Element> omegaElementsMap = accessControlEngine.reconstructOmegas(pairing, attributeSet, accessControlParameter);
Element reconstructedSecret = pairing.getZr().newZeroElement().getImmutable();
for (String eachAttribute : attributeSet) {
if (omegaElementsMap.containsKey(eachAttribute)) {
reconstructedSecret = reconstructedSecret.add(lambdaElementsMap.get(eachAttribute).mulZn(omegaElementsMap.get(eachAttribute))).getImmutable();
}
}
System.out.println("Access Policy with Combined Gate Unsatisfied Test " + testIndex + ", Error for not getting Exceptions...");
System.exit(0);
} catch (UnsatisfiedAccessControlException e) {
System.out.println("Access Policy with Combined Gate Unsatisfied Test " + testIndex + " Passed.");
}
}
public void testAccessTreeEngine() {
this.accessControlEngine = AccessTreeEngine.getInstance();
runAllTests(PairingFactory.getPairingParameters(TestUtils.TEST_PAIRING_PARAMETERS_PATH_a_80_256));
}
public void testLSSSLW10Engine() {
this.accessControlEngine = LSSSLW10Engine.getInstance();
runAllTests(PairingFactory.getPairingParameters(TestUtils.TEST_PAIRING_PARAMETERS_PATH_a_80_256));
}
}
| 2023-09-20T01:26:35.434368 | https://example.com/article/2535 |
Q:
Nginx + rewrite + php-fpm = confusion
I'm moving from Apache to Nginx. I've got a problem with converting Apache rewrite rules into nginx rules. Here is what I'm trying to convert:
RewriteRule ^$ www/controller.php?_url_=index [QSA,L]
RewriteRule ^/+$ www/controller.php?_url_=index [QSA,L]
RewriteRule ^([a-zA-Z0-9_]+)(/([a-zA-Z0-9_/]*))?$ www/controller.php?_url_=$1&_req_=$2 [QSA,L]
RewriteRule ^([a-zA-Z0-9/]+)controller.php?(.*)$ www/controller.php?$2 [QSA,L]
What I tried to use:
rewrite ^/$ /www/controller.php?_url_=index break;
rewrite ^/+$ /www/controller.php?_url_=index break;
rewrite ^/([a-zA-Z0-9_]+)(/([a-zA-Z0-9_]*))?$ /www/controller.php?_url_=$1&_req_=$2 break;
rewrite ^/([a-zA-Z0-9/]+)controller.php?(.*)$ /www/controller.php?$2 break;
If I use the above rules, my browser downloads the PHP file (the server is not executing it) - I guessed it's not being passed to PHP-FPM. Somewhere I found that I should replace "break;" with "last;", like:
rewrite ^/$ /www/controller.php?_url_=index last;
After replacing this, I'm still downloading the PHP file from http://example.org, but when I visit http://example.org/login I get into an infinite loop. I read the nginx documentation and different examples (also here at Stack Overflow) but still I can't find the correct configuration. Could somebody point me in the right direction?
Here is my whole config file:
server {
    listen      80;
    server_name 10.10.100.172;

    error_log   /var/log/nginx/example.com.error.log debug;
    rewrite_log on;

    location / {
        root  /var/www/webs;
        index index.php index.html index.htm;
        rewrite ^/$ /www/controller.php?_url_=index last;
        rewrite ^/+$ /www/controller.php?_url_=index last;
        rewrite ^/([a-zA-Z0-9_]+)(/([a-zA-Z0-9_]*))?$ /www/controller.php?_url_=$1&_req_=$2 last;
        rewrite ^/([a-zA-Z0-9/]+)controller.php?(.*)$ /www/controller.php?$2 last;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }

    location ~ \.php$ {
        root /var/www/webs;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
EDIT:
I moved the rules outside the location segment and used "break;" at the end of each rule. I can reach /www/controller.php?_url_=login&_req_=/ when I go to example.org/login/ - controller.php was responsible for the infinite loop. When I try to reach example.org or example.org/ I'm downloading the controller.php file - like it's not being passed to PHP-FPM. Any guess?
A:
I used the above rules outside the location segment and it works! I tried viewing my page in a different browser and everything is fine. I always forget about clearing the cache..
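For completeness, here is a sketch of the working configuration implied by the question's edit and this answer: the rewrite rules are lifted out of the location block, so the rewritten .php URI is matched by the PHP location and handed to PHP-FPM. This is untested and simply recombines the directives already shown in the question; the break flags follow the asker's edit.

```nginx
server {
    listen      80;
    server_name 10.10.100.172;

    root  /var/www/webs;
    index index.php index.html index.htm;

    # Rewrites at server level: after a server-level rewrite with "break",
    # nginx selects a location using the rewritten URI, so /login becomes
    # /www/controller.php?_url_=login&_req_=/ and matches the PHP block below.
    rewrite ^/$ /www/controller.php?_url_=index break;
    rewrite ^/+$ /www/controller.php?_url_=index break;
    rewrite ^/([a-zA-Z0-9_]+)(/([a-zA-Z0-9_/]*))?$ /www/controller.php?_url_=$1&_req_=$2 break;
    rewrite ^/([a-zA-Z0-9/]+)controller.php?(.*)$ /www/controller.php?$2 break;

    location ~ \.php$ {
        fastcgi_pass  127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include       fastcgi_params;
    }
}
```

Note that inside a location, "last" restarts the location search (which is what caused the loop through controller.php), while at server level either flag ends rewrite processing before the location search runs.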
Summary/Abstract and Specific Aims of the Funded Parent Grant/Project Physical activity (PA) helps prevent obesity and reduce the risk of cancer as well as other chronic conditions such as diabetes and heart disease. PA can be promoted through environmental/policy interventions at the population level. However, existing empirical knowledge on environment-PA relationships is primarily based on cross-sectional studies, which provide insufficient control of extraneous factors for investigating causal relationships. Further, little is known about how environmental factors affect spatial (where) and temporal (when) patterns of PA and the underlying mechanisms (why) of such impacts, including potential mediating effects of the psychosocial factors. The objective of the Physical Activity Impacts of a Planned Activity study is to examine both short-term and long-term changes in PA after residents move to the Mueller community, which is an activity-friendly community (AFC). For this study, AFCs are defined as man-made surroundings that provide a setting for human activity including parks and green spaces within the residential area. It utilizes a unique and fleeting opportunity with ~3000 new homes being built in a large planned AFC over the next ~3 years. The focus is on those who are currently sedentary or insufficiently active and living in an environment lacking support for PA. Case participants (n~350) are adults moving from non-AFCs to this AFC and not meeting the CDC guidelines for PA at pre-move baseline. Each case participant will be matched based on gender and age (±5 years) with a comparison participant who lives in his/her pre-move non-AFC, is also sedentary or insufficiently active, and is not planning to move for at least two years (the project's follow-up measurement period).
The specific aims of this proposed study are to 1) examine the short-term and long-term changes in total PA levels (weekly minutes) and in spatial and temporal patterns of PA (proportion of PA taking place within the community, proportion of walking out of total PA, and level of PA integration into daily routines) after sedentary or insufficiently active individuals move from non-AFCs to an AFC; and 2) determine what built and natural environmental factors (e.g., density, land uses, sidewalks, trails/paths, parks, water features) lead to changes in PA among these populations, either directly or indirectly by affecting psychosocial factors related to PA. Using this timely opportunity to gain longitudinal assessments for this natural experiment is of critical importance to advancing the status of knowledge on the intersection of health and place as it relates to promoting PA. The multidisciplinary research team has extensive experience related to this topic and with this study community through pilot work. At this study's conclusion, we will have identified stronger evidence supporting the impact of an AFC on population-level behavior changes toward more physically active lifestyles (short-term goal) and toward lessening the burden of obesity throughout the nation (long-term goal).
Determination of tamsulosin in human plasma by liquid chromatography/tandem mass spectrometry and its application to a pharmacokinetic study.
Tamsulosin, a selective α₁-adrenoceptor antagonist, is used for the treatment of benign prostatic hyperplasia (BPH). We developed and validated a rapid, sensitive, and simplified liquid chromatography analytical method utilizing tandem mass spectrometry (LC-MS/MS) for the determination of tamsulosin in human plasma. After liquid-liquid extraction with methyl t-butyl ether, chromatographic separation of tamsulosin was achieved using a reversed-phase Luna C₁₈ column (2.0 mm × 50 mm, 5 μm particles) with a mobile phase of 10 mM ammonium formate buffer (pH 3.5)-methanol (25:75, v/v) and quantified by MS/MS detection in ESI positive ion mode. The flow rate of the mobile phase was 200 μL/min and the retention times of tamsulosin and the internal standard (IS, diphenhydramine) were 0.8 and 0.9 min, respectively. The calibration curves were linear over a range of 0.01-20 ng/mL (r>0.999). The lower limit of quantification using 500 μL of human plasma was 0.01 ng/mL. The mean accuracy and precision for intra- and inter-day validation of tamsulosin were both within acceptable limits. The present LC-MS/MS method showed improved sensitivity for quantification of tamsulosin in human plasma compared with previously described analytical methods. The validated method was successfully applied to a pharmacokinetic study in humans.
EFFECT OF TIME EXPOSURE ON THERMOLUMINESCENCE GLOW CURVE FOR UV-INDUCED ZRO2:MG PHOSPHOR.
In this research, the effect of a magnesium (Mg) impurity on the thermoluminescence (TL) response of ZrO2 phosphors is studied experimentally. In the experimental procedure, ZrO2:Mg phosphors in powder form were synthesised by the sol-gel method. The obtained hydrogel was dried in air, calcinated in air at 1200°C for 5 h, and then annealed at 250°C for 2 h. Samples were characterised by X-ray diffraction and scanning electron microscopy; the obtained materials had a monoclinic phase and a porous microstructure. Known amounts of ZrO2:Mg powder were then exposed to an ultraviolet lamp for 0.5 to 120 min. The TL peaks were obtained at the same temperatures, 75, 137 and 260°C. Adding Mg to pure zirconia increased the TL intensity and shifted the peaks relative to those of pure zirconia, which were seen at 83, 132 and 235°C. Finally, the ZrO2:Mg TL experimental results show a linear dose response, high stability and low fading.
Focus
Orange Grapefruit
Inspired by ancient Eastern plant medicine and modernized with testing and research for enhanced bioavailability. Our certified organic hemp oil-infused vitamin juices are formulated to give you the boost you need to feel your best.
$ 4.99/ea PACK 12 count
Protect
Raspberry Blueberry
Inspired by ancient Eastern plant medicine and modernized with testing and research for enhanced bioavailability. Our certified organic hemp oil-infused vitamin juices are formulated to give you the boost you need to feel your best.
$ 4.99/ea PACK 12 count
Relax
Pomegranate Cranberry
Inspired by ancient Eastern plant medicine and modernized with testing and research for enhanced bioavailability. Our certified organic hemp oil-infused vitamin juices are formulated to give you the boost you need to feel your best.
$ 4.99/ea PACK 12 count
vitamin-infused juices
Blending the best of nature
For Daily Active Performance
We formulate each of our Juices to target specific, active functions and enhance your body for daily performance. We infuse our juices using our proprietary nanoencapsulation technology—NanoCBD™—a process that allows the oil, vitamins and minerals to be completely soluble and absorbed in your body up to 10X better than other oils, for faster-acting, longer-lasting effects.
Shop/All Products
About Phivida
This product is not for use by or sale to persons under the age of 18. This product should be used only as directed on the label. It should not be used if you are pregnant or nursing. Consult with a physician before use if you have a serious medical condition or use prescription medications. A Doctor's advice should be sought before using this and any supplemental dietary product. All trademarks and copyrights are property of their respective owners and are not affiliated with nor do they endorse this product. These statements have not been evaluated by the FDA. This product is not intended to diagnose, treat, cure or prevent any disease. Individual weight loss results will vary. By using this site, you agree to follow the Privacy Policy and all Terms & Conditions printed on this site. Void Where Prohibited by Law.
In this version of my mother’s chicken soup I combine the best of two worlds—fine home cooking and fine restaurant cooking—to create a more intensely flavorful version of a classic comfort food. By cooking the chicken in chicken stock, rather than simply in water, you double the flavor. It’s the same technique I use at Chanterelle to prepare consommé for the restaurant menu. The soup’s richly concentrated taste is the very essence of the bird as well as a perfect vehicle for Homemade Matzoh Balls.
The matzoh balls can be poached right in the chicken soup toward the end of the final simmering, or poached separately in chicken stock in another pan, then drained and added to the soup just before serving. Either method results in good flavor, although poaching matzoh balls in the soup does make it cloudy and a little less appetizing looking. Since I always have plenty of chicken stock in the restaurant, I usually poach them separately.
Directions
1. Using butcher’s twine, tie the dill, chervil, and parsley together in one big bunch. (If you’re using tarragon, just sprinkle it in after you pour the stock into the pot.) Place the bunch of herbs in a very large stockpot along with the chicken stock and chicken pieces. Set the pot over high heat and bring to a boil, skimming the surface as the foam rises to the top. Reduce the heat to low, cover, and simmer the broth until the chickens are just cooked through, about 45 minutes. Test for doneness by piercing a thigh with a fork; the juices should run clear.
2. Carefully remove the chicken pieces from the broth and set aside to cool. Remove and discard the herb bundle.
3. Add the onion, carrots, and parsnips to the broth and return to a boil. Reduce the heat to low and simmer until the vegetables are very tender, about 30 minutes; a fork should pierce quite easily through a piece of carrot (see Note).
4. While the vegetables cook, remove the skin from the chicken pieces and pull the meat from the bones. Discard the skin and bones. Coarsely chop the meat and add it to the soup, simmering it for 10 minutes longer to reheat. Remove the pot from the heat and season with salt and freshly ground pepper. Serve immediately.
Notes
If you’re planning to cook the matzoh balls in the soup, add the batter in step 3, after the vegetables have been simmering for about 20 minutes and before the chicken is added. The matzoh balls should take about 15 to 20 minutes to poach in the simmering soup; they’ll bob to the surface when they’re done. Add the chicken after the matzoh balls have cooked for 10 minutes.
If you’ve cooked the matzoh balls separately and they’re still warm, add them 5 minutes after you add the chicken. If the matzoh balls are cool, add them at the same time that you add the chicken.
Soup for Sara:
After several years of lunches centered on her fondness for peanut butter sandwiches, our daughter, Sara, suddenly discovered soup. So she and I began making soup together, tucking containers away in the freezer for her school lunches. She likes lentil and leek and potato, but her real favorite is chicken with fresh herbs and noodles or rice. Around noon in the wintertime, when I'm busy in the kitchen, I'll picture her sitting at a table of boisterous kids, quietly reading a book and sipping soup from her thermos.
st: export and use of queried data table
New to Stata (Intercooled 9) and I am working with a huge ecological dataset. My question for the community is where do I find the results table from a query. For instance I ran the following command:

by animal_id month, sort: tabulate on_trails

and I would like to generate graphs and continue to work with this data set. Is this a read-only data set? Am I only able to view results in the viewer and export to TextPad or Word? I thought this would be a basic command common to all stats packages?
//===----------------------------------------------------------------------===//
//
//                     The LLVM Compiler Infrastructure
//
// This file is dual licensed under the MIT and the University of Illinois Open
// Source Licenses. See LICENSE.TXT for details.
//
//===----------------------------------------------------------------------===//

// <map>

// map(map&&)
//        noexcept(is_nothrow_move_constructible<allocator_type>::value &&
//                 is_nothrow_move_constructible<key_compare>::value);

// This tests a conforming extension

#include <map>
#include <cassert>
#include <type_traits>  // for std::is_nothrow_move_constructible

#include "../../../MoveOnly.h"
#include "../../../test_allocator.h"

template <class T>
struct some_comp
{
    typedef T value_type;
    some_comp(const some_comp&);
};

int main()
{
#if __has_feature(cxx_noexcept)
    {
        typedef std::map<MoveOnly, MoveOnly> C;
        static_assert(std::is_nothrow_move_constructible<C>::value, "");
    }
    {
        typedef std::map<MoveOnly, MoveOnly, std::less<MoveOnly>, test_allocator<MoveOnly>> C;
        static_assert(std::is_nothrow_move_constructible<C>::value, "");
    }
    {
        typedef std::map<MoveOnly, MoveOnly, std::less<MoveOnly>, other_allocator<MoveOnly>> C;
        static_assert(std::is_nothrow_move_constructible<C>::value, "");
    }
    {
        typedef std::map<MoveOnly, MoveOnly, some_comp<MoveOnly>> C;
        static_assert(!std::is_nothrow_move_constructible<C>::value, "");
    }
#endif
}
The Uncomfortable Truth About Brain Tonics
November 28th, 2016
By Jackie Larena-Lacayo
Nov 27, 2016 10:06 pm ET – The Wall Street Journal
There was a time in the 19th century when snake-oil remedies contained actual snake oil, and the real or imagined benefits for joint pain were widely touted. The problem, of course, was that snake oil and most other similar tonics, concoctions and liniments had secret and wholly ineffective ingredients and, aside from perhaps offering a placebo effect, mainly benefited the pocketbook of the purveyor.
So with the current panic about dementia among middle age and older individuals and the resultant explosion of various brain tonics on the market, do we have any more evidence today to believe in the efficacy of these products, or are they no different from the snake oils of days’ past? As a geriatric psychiatrist and director of a memory disorders center, I often get questions about these tonics and their claims.
Here, then, is a primer to evaluating both the need and the potential benefits, if any, of the many products on the market.
Brain tonics are pills or liquids sold as dietary supplements promoted to preserve or boost memory and other cognitive functions. Most of these tonics contain combinations of ingredients including vitamins, minerals, herbs, amino acids and other synthetic or natural extracts. People take them not only in hope of improving cognition but also to reduce the risk of developing a cognitive disorder or to treat an existing disorder.
As supplements they do not require approval by the Food and Drug Administration, and so the bar for proving their effectiveness is quite low. In contrast, the several FDA-approved medications for Alzheimer’s disease had to demonstrate extensive scientific evidence of safety and efficacy and must adhere to strict marketing guidelines.
The first question one should know before considering a brain tonic is whether he or she truly has a memory problem. Subjective complaints about memory and other cognitive deficits may not have any objective basis, and so neuropsychological testing would be needed to even demonstrate actual deficits. When present, these complaints often reflect either normal age-associated changes or mild impairment due to one or more transient causes, including menopause, medication effects, stress, anxiety, depression, substance use, sleep problems and attentional deficits, to name just a few.
In these situations, the only true brain boost is to address the underlying causes. Making an actual diagnosis of a cognitive disorder such as Alzheimer’s disease requires in-depth evaluation, and most people shopping for a tonic have not had any prior, meaningful assessment.
This same lack of diagnostic clarity can be found in the few studies cited by the most popular brain tonics, as they include subjects with amorphous “memory problems” but no true diagnoses. As a result, the data is almost always based on a small set of subjects with various unknown conditions, and is not rigorous enough to make firm conclusions or to garner publication in a mainstream peer-reviewed scientific journal.
The most compelling scientific data would have to come from a randomized, double-blind, placebo-controlled study of a large group of individuals with a confirmed cognitive disorder. But even when such studies have been conducted, no brain-boosting substance has ever consistently shown significant benefit on memory or other cognitive abilities.
This includes substances such as ginkgo biloba and omega-3 fatty acids. The largest and most rigorous studies ever conducted looked at vitamin E (in combination with an FDA-approved medication for Alzheimer’s disease) but it did not demonstrate the ability to prevent development of dementia and only showed a slight slowing in functional decline in individuals already diagnosed with Alzheimer’s disease. That’s hardly adequate guidance for people with minor memory complaints wanting to load up on vitamins.
An exploration of various brain-tonic ingredients yields several more caveats. Vitamin deficiencies involving B12, thiamine or folate are rare causes of cognitive problems, and supplementation above normal blood levels will make no difference. The same limitations apply to vitamins C and D. Other supplements, with curcumin being a good example, barely make it into the bloodstream after being ingested and might not even cross the special cellular barrier that guards our brains. Other supplements are touted as building blocks for brain cells, but while that may be beneficial in the diet for normal brain functioning, there is no significant evidence that such supplements actually improve brain function above and beyond normal, nor that they specifically lower the risk of getting Alzheimer’s disease.
To actually show that a brain tonic works, extensive and expensive scientific studies would be needed, but this approach is typically too taxing and risky for the finances and interest of nearly every producer. Consider this fact: In the past 15 years, major pharmaceutical companies have invested hundreds of millions of dollars on rigorous scientific studies for dozens of experimental agents to treat Alzheimer’s disease, and yet 99% of these studies have failed.
It is a tough goal to find such an elixir for a better brain–one that continues to elude the very best of scientific inquiry. A more viable solution for optimizing brain health according to scientific research is regular, moderate exercise, mentally and socially stimulating activities, and a diet similar to the Mediterranean diet loaded with fruits and vegetables, whole grains, healthy oils and even a daily glass of wine.
While it would be wonderful to compress this lifestyle into a single pill or potion, the search for a truly effective brain tonic continues.
Marc Agronin, M.D., is a geriatric psychiatrist at Miami Jewish Health in Miami, Florida and the author of “How We Age” and “The Dementia Caregiver: A Guide to Caring for Someone with Alzheimer’s Disease and Other Neurocognitive Disorders.”
Endoscopic resection of submucosal tumors.
Submucosal gastrointestinal tumors represent a unique, diverse and challenging group of lesions found in modern medical practice. While management has traditionally been surgical, the development of advanced endoscopic techniques is challenging this approach. This review aims to investigate the role of endotherapy in treatment pathways, with a focus on carcinoid and gastrointestinal stromal tumors. In particular, we will discuss which lesions can be safely treated endoscopically, the evidence base behind such approaches and the limitations of the current evidence. The review will consider how these techniques may change the management of submucosal tumors in the future.
Roger Myerson
Roger Bruce Myerson (born 1951) is an American economist and professor at the University of Chicago. He holds the title of the David L. Pearson Distinguished Service Professor of Global Conflict Studies at The Pearson Institute for the Study and Resolution of Global Conflicts in the Harris School of Public Policy, the Griffin Department of Economics, and the College. Previously, he held the title The Glen A. Lloyd Distinguished Service Professor of Economics. In 2007, he was the winner of the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel with Leonid Hurwicz and Eric Maskin for "having laid the foundations of mechanism design theory." He was elected a Member of the American Philosophical Society in 2019.
Biography
Roger Myerson was born in 1951 in Boston. He attended Harvard University, where he received his A.B., summa cum laude, and S.M. in applied mathematics in 1973. He completed his Ph.D. in applied mathematics from Harvard University in 1976. His doctoral thesis was A Theory of Cooperative Games.
From 1976 to 2001, Myerson was a professor of economics at Northwestern University's Kellogg School of Management, where he conducted much of his Nobel-winning research. From 1978 to 1979, he was Visiting Researcher at Bielefeld University. He was Visiting Professor of Economics at the University of Chicago from 1985–86 and from 2000–01. He became Professor of Economics at Chicago in 2001. Currently, he is the inaugural David L. Pearson Distinguished Service Professor of Global Conflict Studies at the University of Chicago.
Awards and Honors
Bank of Sweden Nobel Memorial Prize
Myerson was one of the three winners of the 2007 Nobel Memorial Prize in Economic Sciences, the other two being Leonid Hurwicz of the University of Minnesota, and Eric Maskin of the Institute for Advanced Study. He was awarded the prize for his contributions to mechanism design theory.
Myerson made a path-breaking contribution to mechanism design theory when he discovered a fundamental connection between the allocation to be implemented and the monetary transfers needed to induce informed agents to reveal their information truthfully. Mechanism design theory allows for people to distinguish situations in which markets work well from those in which they do not. The theory has helped economists identify efficient trading mechanisms, regulation schemes, and voting procedures. Today, the theory plays a central role in many areas of economics and parts of political science.
Memberships and Honors
Myerson is a member of the American Academy of Arts and Sciences, the National Academy of Sciences, the Council on Foreign Relations, and the American Philosophical Society. He is a Fellow of the Game Theory Society and serves as an advisory board member on the International Journal of Game Theory. Myerson holds an Honorary Doctorate from the University of Basel in 2002 and received the Jean-Jacques Laffont Prize in 2009.
Personal life
In 1980, Myerson married Regina (née Weber) and the couple had two children, Daniel and Rebecca.
Publications
Game theory and mechanism design
"Bayesian Equilibrium and Incentive Compatibility," in
He wrote a general textbook on game theory in 1991, and has also written on the history of game theory, including his review of the origins and significance of noncooperative game theory. He also served on the editorial board of the International Journal of Game Theory for ten years.
Myerson has worked on economic analysis of political institutions and written several major survey papers:
"Economic Analysis of Political Institutions: An Introduction," Advances in Economic Theory and Econometrics: Theory and Applications, volume 1, edited by D. Kreps and K. Wallis (Cambridge University Press, 1997), pages 46–65.
His recent work on democratization has raised critical questions about American policy in occupied Iraq:
Books
Concepts named after him
Myerson–Satterthwaite theorem
Myerson mechanism
Myerson ironing
See also
List of economists
List of Jewish Nobel laureates
References
External links
Myerson Nobel Prize lecture
Webpage at the University of Chicago
ABC News Chicago interview
The scientific background to the 2007 Nobel prize: Mechanism Design Theory
Myerson participated in panel discussion, The Global Economic Crisis: What Does It Mean for U.S. National Security? at the Pritzker Military Museum & Library on April 2, 2009
Category:1951 births
Category:Living people
Category:Nobel laureates in Economics
Category:American Nobel laureates
Category:Jewish Nobel laureates
Category:Economists from Massachusetts
Category:20th-century American mathematicians
Category:21st-century American mathematicians
Category:Game theorists
Category:Harvard School of Engineering and Applied Sciences alumni
Category:Jewish American social scientists
Category:Kellogg School of Management faculty
Category:University of Chicago faculty
Category:Writers from Boston
Category:Presidents of the Econometric Society
Category:20th-century American writers
Category:21st-century American non-fiction writers
Category:20th-century American economists
Category:21st-century American economists
Category:Fellows of the Econometric Society
Category:Fellows of the American Academy of Arts and Sciences
Category:Members of the United States National Academy of Sciences
Category:Members of the American Philosophical Society
A USC research team identified 150 proteins affecting cell activity and brain development that contribute to mental disorders, including schizophrenia, bipolar condition and depression.
It's the first time these molecules, which are associated with the disrupted-in-schizophrenia 1 (DISC1) protein linked to mental disorders, have been identified. The scientists developed new tools involving stem cells to determine chemical reactions the proteins use to influence cell functions and nerve growth in people.
"This moves science closer to opportunities for treatment for serious mental illness," said Marcelo P. Coba, the study author and professor of psychiatry at the Zilkha Neurogenetic Institute at the Keck School of Medicine of USC.
The findings appear in Biological Psychiatry.
Schizophrenia affects less than 1 percent of the U.S. population, but has an outsized impact on disability, suicide and premature deaths.
The DISC1 gene was linked to schizophrenia nearly 20 years ago. It controls how nerve cells called neurons develop, as well as how the brain matures. DISC1 also directs a network of signals across cells that can contribute to the disease. Scientists say errors in these chemical reactions contribute to schizophrenia.
But the identity of proteins that DISC1 can regulate is poorly understood, prompting the USC researchers and colleagues from the State University of New York Downstate Medical Center to undertake the research. The challenge was to simulate conditions inside the human brain, Coba explained.
Using stem cells, they conducted assays resembling habitat where DISC1 does its work. Then, they used gene editing to insert a molecular tag on DISC1, allowing them to extract it from brain cells and identify the proteins with which it associates.
Identifying the proteins that interact with DISC1 in brain cells could lead to understanding how the risk factors for psychiatric diseases are connected to specific molecular functions, Coba explained. The discovery enables researchers to determine specific processes that differ in patients suffering from specific mental illnesses.
"This gives researchers specific trails to follow within cells from both healthy patients and those diagnosed with disorders," Coba said.
Schizophrenia is one of the top 15 leading causes of disability worldwide. People with schizophrenia live an average of nearly 29 years less than those without the disorder, according to the National Institutes of Mental Health (NIMH).
The illness is often accompanied by conditions such as heart disease and diabetes, which contribute to the high premature mortality rate among people with schizophrenia. About 5 percent of people with schizophrenia die by suicide, a rate far greater than the general population, with the highest risk in the early stages of illness, according to the NIMH.
12 CFR 911.5 - Consideration of requests.
(a) Discretion. Each decision concerning the availability of unpublished information is at the sole discretion of the Finance Board based on a weighing of all appropriate factors. The decision is a final agency action that exhausts administrative remedies for disclosure of the information.
(b) Time to respond. The Finance Board generally will respond in writing to a request for unpublished information within 60 days of receipt absent exigent or unusual circumstances and dependent upon the scope and completeness of the request.
(c) Factors the Finance Board may consider. The factors the Finance Board may consider in making a determination regarding the availability of unpublished information include:
(1) Whether and how the requested information is relevant to the purpose for which it is sought;
(2) Whether information reasonably suited to the requester's needs other than the requested information is available from another source;
(3) Whether the requested information is privileged;
(4) If the request is in connection with a legal proceeding, whether the proceeding has been filed;
(5) The burden placed on the Finance Board to respond to the request;
(6) Whether production of the information would be contrary to the public interest; and
(7) Whether the need for the information clearly outweighs the need to maintain the confidentiality of the information.
(d) Disclosure of unpublished information by others. When a person or entity other than the Finance Board has a claim of privilege regarding unpublished information and the information is in the possession or control of that person or entity, the Finance Board, at its sole discretion, may respond to a request for the information by authorizing the person or entity to disclose the information to the requester pursuant to an appropriate confidentiality order. Finance Board authorization to disclose information under this paragraph does not preclude the person or entity in possession of the unpublished information from asserting its own privilege, arguing that the information is not relevant, or asserting any other argument to protect the information from disclosure.
(e) Notice to supervised entities and Bank members. The Finance Board generally will notify a supervised entity or Bank member that it is the subject of a request, unless the Finance Board, in its sole discretion, determines that to do so would advantage or prejudice any of the parties to the matter at issue.
Consider the Significance of Rural Traditions. Tess of the D'Urbervilles
I think customs and traditions are important in the novel because the difference between rural and urban customs would have had a significant effect if the novel had been set in a town or city. For example, agriculture may have been nonexistent if the novel had been set in the city, and religion may not have been as important a factor as it is in the novel. Tess is brought up in the countryside, in a typical rural environment where the children are not obliged, or in most cases not rich enough, to go to school; children are therefore brought up and taught by their parents. This leaves Tess uneducated in topics such as personal safety and naïve about men. Her innocence could be blamed on this lack of education, because she is often led away easily by male characters such as Alec. Tess does not see how men can manipulate women, nor is she even aware of her own personal safety. This absence of knowledge is significant in the novel because Tess's downfall is her inexperience, which causes her to be led so easily and to find herself in unpleasant situations such as her rape. Moreover, when children attend school, it becomes natural for them to be social and make friends. This sociability allows them to make friends easily when they grow older; however, as Tess does not go to school, she finds it difficult to make friends. As readers, we feel that Tess lacks a connection with the other milkmaids when she goes to work on the dairy farm. The other milkmaids are seen to be friends who look out for one another, yet because Tess does not have this friendship with anyone, there is nobody to watch over her and warn her to avoid dangerous situations. They allow her to walk off with Alec on her own because Tess says that it is fine to leave her alone with him. If there had been a strong connection between Tess and her fellow milkmaids, they might have advised her against being alone with a stranger. This novel is overflowing with...
You May Also Find These Documents Helpful
...Tracy Neal
Eng 480
Professor Judith Broome
March 23, 2011
Tess of the D’Urbervilles
As we read the classic novel Tess of the D'Urbervilles, written by Thomas Hardy, we find discreet criticisms of the Victorian ideas of social class, as well as the Victorian practice of male domination of women. If the reader looks superficially at the novel through the perspective of entertainment or a good read, the reader will ultimately miss the critical underpinnings of Victorian thought processes and ideals. The reader must analyze the text and main characters closely in order to grasp the point that Hardy is trying to make: namely, that granting the privilege of personal rights and power on the basis of social class or gender is wrong. Victorian society at that time held the view that the upper class or nobility could basically get away with all sorts of injustice against women, or against classes lower than themselves, simply due to rank. In addition to social class, men were able to get away with injustices against women simply due to gender. Hardy, through writing this novel, was able to discreetly criticize these ideas and societal norms using three predominant characters: Tess Durbeyfield, Alec D'Urberville, and Angel Clare.
We are first introduced to Tess Durbeyfield in chapter two of the book as her father has just...
... Literature
In "Tess of the D'Urbervilles" Hardy does expose the social injustices and double standards which prevail in the late nineteenth century.
These injustices and double standards are evident throughout the whole novel, and Tess, the main character, is the one who suffers them.
This becomes evident from the first page when Parson Tringham meets Jack Durbeyfield and refers to him as "Sir John". With his whimsical comment, made from the safety of a secure social position, the Parson begins the events which start the destruction and downfall of the whole Durbeyfield family.
Logically, the fact that Tess's family and their "gentlefolk" relatives share the same ancestors should mean that both sides of the family are equal, but this is not true.
Hardy makes this obvious in the contrast between Tess's mother's dialect and the sense of her words,
"That was all a part of the larry! We've been found to be the greatest gentlefolk in the whole county."[p.21]
The industrial revolution had begun a social revolution, and with ideas of democracy becoming popular, the notion of equality existed. But in the areas of England that housed the "landed gentry" it was no more than a notion. The gentry and peasantry were still totally separate and even if the gentry espoused the idea of equality, as Tess was accepted into the richer side of the family, the acceptance was hypocritical.
As we find out later in the...
...Phase Questions
Phase the First: The Maiden
1. What are your initial impressions of Tess?
Tess d'Urberville was immediately imbued with a sense of pride and passion. The richly detailed description of her personality and appearance made it clear that Hardy intended for her to be interpreted as a pure girl, unaware of her sexuality and odd aesthetic appeal. This was especially reflected in the quote "You could sometimes see her 12th year in her cheeks, or her 9th sparkling from her eyes, and even her 5th would flit over the curves of her mouth now and again." This conjures up an image of innocence and youthful naivety in Tess. The stunning detail and intimacy of Hardy's description left no room for doubt that she was intended to be empathized with. Even as I was reading, I could hear Hardy's presence in his writing, and his desire for us to feel for Tess and her plight. It made me painfully aware of her fate. I couldn't help but feel anger at her treatment, which is exactly what Hardy wanted.
2. Although the description of the three brothers is brief, what does the reader learn about them?
Immediately, it became clear that Angel differed from his brothers. They seemed dismissive of the smaller pleasures of life, like dancing, whilst Angel was eager to go and join the women. Angel's character was further developed by his physical description. Both his brothers are dressed for the part, one as regulation...
...In this novel, Hardy describes how Tess was killed by the cruelty of two specific characters, Alec d'Urberville and Angel Clare. Throughout the novel, Hardy seems to express his opinion on who is more responsible for Tess's death through the cruelty each portrayed.
Alec was a member of the capitalist class. This willful chap was ignorant and incompetent, depending on his rich family. He began to dally with women when he was just a very young man. When he saw Tess for the first time, he was struck by her beauty. His philandering and ferocious nature was completely unmasked. From the beginning, the writer succeeds in showing us the great attraction and impact of Tess on men through Alec D'Urberville's eyes. So it is very easy for her to become the victim of the other sex. Maybe this is one of the reasons for her tragedy. Alec made use of her purity and then seduced her. When his plot became fact, he felt no shame at all. When Tess told him that she would never take anything more from him, he jeered: "One would think you were a princess… I am a bad fellow… a damn bad fellow. I was born badly, and I have lived bad, and I shall die bad in all probability..."① Facing poor and grief-stricken Tess, he didn't feel bitterly remorseful at all. He destroyed her purity and virginity as easily as he would carelessly break a cup. He never even thought of the tragedy and sufferings he imposed on...
...The Analysis of Symbol in Tess of the D’Urbervilles
Thomas Hardy is a controversial writer of the Victorian era; his life span stretches over two centuries. In view of the influence of his family life and the background of his education, Hardy was aware of many ancient Greek fairy tales and biblical stories. In his representative fiction, Tess of the D'Urbervilles, Hardy used different types of symbols to expose the tragic destiny of Tess, just as in the famous words Hamlet says: "Frailty, thy name is woman." Therefore, the symbols in this novel have an immense effect in digging out the deep meaning of Tess of the D'Urbervilles. My thesis will lay stress upon some characteristic symbols to argue about religious ideology and social significance.
1. The symbol of carriage
It is obvious that the symbol of the carriage appears from the beginning of Tess to the end. A point which must be mentioned is that the carriage originally dates back to a Greek myth. This myth talks about a carriage with a curse. In this story, there is a beautiful princess, but the king is told that if anybody marries his daughter, he will die. For this reason, the king asks everyone who wants to make a proposal to the princess to race carriages with him. Consequently, he will kill them unless they triumph. Many participants are murdered except one man, because of the god's...
...Tess of the d'Urbervilles
Chapter I
The scene begins with a middle-aged peddler named John Durbeyfield. Making his way home, the man encounters Parson Tringham, who claims to have studied history. The Parson tells Durbeyfield that he is of noble lineage, the d'Urberville family, and that his family prospered for many generations until recently. Tringham tells him, however, that this heritage comes from so long ago that it is worthless. At this, the seemingly drunk man sits near a road and beckons a young boy to fetch him a horse and carriage to take him home in his newly elevated state.
Chapter II
Tess, the eldest daughter of the Durbeyfield family, has accompanied the other women in the village, young and old, to celebrate May Day. All of the women are clothed in white, but not the same shade of white, as noticed by the narrator. They all hold white flowers in one hand and a white wand made from oak in the other. This celebration commemorates the coming of spring, and all the women enjoy it, as it seems, because it allows them to forget their insignificant role in society. In the middle of the procession, John Durbeyfield rides along in his carriage, making quite a spectacle. Tess is embarrassed, and three very attractive (and obviously rich) brothers walk in. Only one of them, the youngest stays to dance, while his brothers continue their journey. All the women are...
...I. Narrative technique in Tess of the D'Urbervilles
Thomas Hardy uses a number of narrative techniques in his novel which enable the reader to get more deeply involved in the plot and empathize with the characters. Among the techniques he employs are the third person omniscient narrator, dialogues between the characters, letter writing, songs and poetry, religious and mythological allusions, as well as extensive descriptions of the settings. All these techniques are applied in such a way that they underline the message Hardy has woven into his novel, while allowing the reader to make up his own mind about the events.
The third person omniscient narrator is all-knowing and thereby adds to the vulnerability of Tess. This is because the reader knows certain facts which Tess is unaware of. For example, the reader is aware of Alec D'Urberville's intentions from the first moment this character enters the plot while Tess stumbles into her predicaments. The reader feels uneasy each moment both characters are left alone with themselves because he can guess what is going to happen. Another example is Tess's abandonment by her husband. All the while Tess is suffering and hoping for Angel to return quickly, the reader knows that he won't. But he also knows that Angel is unwell and has actually forgotten half the things he said to Tess during their quarrel. Tess is...
...Discussion Director - Tess of the D’Urbervilles
1. One of the biggest issues in this novel is whether Tess is victimized, whether she is responsible for her fate, or whether she is partially victimized and partially responsible for her fate. What do you think?
Throughout the entire novel, Tess has been victimized by others and by pure accident. Starting from the very beginning, when her father found out about their link to the D'Urbervilles, every misfortune she experienced was initiated by external forces. Her own mistakes are minimal and forgivable until the end of the novel. Some of the readers in the literature circle argued that Tess is responsible for her fate in the end because of her decision to kill Alec. I believe that she had been far too pressured, and in the end she exploded and did something out of desperation. If she hadn't been victimized for so long before her death, she definitely would not have committed such a crime.
2.Are there times when Tess does have a choice and her decisions and actions are the result of her character?
Yes, Tess does make her own decisions throughout the novel. For example, when she decides to tell Angel about her past, this is a decision based on her character. Although one can’t say she is to blame for his reaction, nor can anyone criticize her for her honesty, it was this... | 2024-06-17T01:26:35.434368 | https://example.com/article/3787 |
After rattling Republicans at a host of town halls protesting plans to kill Obamacare, liberal activists are zeroing in on their next target: Neil Gorsuch.
The confirmation battle over President Donald Trump’s Supreme Court nominee — set to heat up ahead of his testimony before the Senate Judiciary Committee starting March 20 — is shaping up as a pivotal moment for the burgeoning protest movement.
Persuading Senate Democrats to mount a filibuster of Gorsuch would solidify the influence of the anti-Trump grass roots, on the heels of its success in pressuring the 48-member minority to engineer a historic slow-walking of the president’s Cabinet nominees.
The debate over Gorsuch since Trump nominated him last month has been surprisingly low-key. The highly credentialed federal court judge has impressed Democratic senators in private meetings, raising the possibility he'll clear the Senate without a bloody filibuster battle.
But significant public pushback against Gorsuch this month would ramp up the pressure on Democrats who right now are more focused on defending Obamacare and investigating Trump’s ties to Russia than on the Supreme Court.
Anti-Trump strategists say the Democratic base is prepared to step up the resistance to Gorsuch.
“Stopping a Supreme Court nominee means demonstrating to Democrats that their base doesn’t want them cooperating with Donald Trump,” Ben Wikler, Washington director of MoveOn.org said. That could prove an easier task for liberal activists than, as Wikler put it, “convincing Republicans they’re in political danger” if they vote to overturn Obamacare.
“The level of potential energy for demanding that Democrats do their jobs is off the charts,” Wikler added in an interview.
Veteran Democratic strategist Jesse Ferguson said the ongoing controversy over Trump aides’ previously undisclosed contacts with Russian officials, itself a major topic of town-hall protests over last month’s recess, will help stoke opposition to Gorsuch.
“The idea that you could ram this through and no one would notice gets harder when everyone’s antenna is up because of other personnel decisions he’s made about his administration,” Ferguson, a former Hillary Clinton aide, said in an interview.
The Democratic base’s alarm about Trump’s advisers was on stark display throughout last month’s procedural blockade of multiple Cabinet nominees. During that campaign against what many of Democrats criticize as the president’s “swamp Cabinet,” Democratic senators often cited the enthusiasm and commitment of the anti-Trump movement.
Democrats couldn’t defeat any of Trump’s Cabinet nominees on the Senate floor, but they welcomed the chance to speak for the grass roots even on losing battles. During the height of the confirmation debate over Education Secretary Betsy DeVos, Sen. Bob Casey (D-Pa.) said he was seeing “intense and sustained engagement” on the Supreme Court as well as on Trump’s Cabinet and Obamacare.
A significant part of that engagement began with Indivisible, a new force for mobilizing local anti-Trump demonstrations that was founded by former Democratic congressional aides. The group crafted a script for local activists to use against Gorsuch a week after Trump tapped him for the high court.
“If Democrats truly do oppose this nominee, they should oppose him with everything in their toolbox,” Indivisible executive director Ezra Levin said in an interview.
But Levin underscored that the Gorsuch script, like other Indivisible directives on strategies for resisting Trump on other fronts, isn’t being pushed out to local Indivisible chapters but offered as a model.
“We’re not dictating anything” in terms of how often the anti-Gorsuch language is used, Levin said. “We do not want to be heavy-handed or take control of the movement.”
And Indivisible’s biggest strength — the ability to generate large turnout at local town halls that lawmakers hold during congressional recesses — may not be available to use against Gorsuch. The GOP-controlled Senate is setting the stage for a full vote on the Supreme Court nominee before April’s two-week recess, in part to give the Senate enough time to clear a must-pass government funding bill by April 28.
It’s unclear how systematic liberal groups will be in their campaign against Gorsuch, who has been making the rounds in the Senate for weeks as part of a largely successful persuasion campaign. Wikler, of MoveOn.org, acknowledged that “Gorsuch has had the stage essentially to himself” so far but insisted that “that’s going to change.”
Also unclear is whether a Democratic pressure campaign can stop the Senate from approving Gorsuch. Sen. Kirsten Gillibrand (D-N.Y.) has predicted his eventual confirmation, either by garnering 60 votes or with Majority Leader Mitch McConnell (R-Ky.) changing Senate rules to approve Gorsuch with a simple majority.
Ilyse Hogue, president of the abortion-rights group NARAL Pro-Choice America, said she senses the Democratic base “getting increasingly concerned” about Gorsuch as the March 20 start of his hearings draws nearer. The Affordable Care Act produced “the majority of the energy” among protesters during last month’s congressional recess, Hogue said, but “we’re starting to see the seeds” of town-hall energy getting redirected at the Supreme Court fight.
“These people are flooding town halls and running for office at unprecedented rates,” Hogue said of the newly engaged Democratic grass roots. “They want elected officials to do their job, and part of that job is digging really hard at the hearings into his record.” | 2024-02-10T01:26:35.434368 | https://example.com/article/9422 |
964 S.W.2d 818 (1998)
NATIONAL SOLID WASTE MANAGEMENT ASSOCIATION, et al., Respondents,
v.
DIRECTOR OF THE DEPARTMENT OF NATURAL RESOURCES, Appellant.
No. 79737.
Supreme Court of Missouri, En Banc.
February 24, 1998.
As Modified on Denial of Rehearing April 21, 1998.
*819 Jeremiah W. (Jay) Nixon, Atty. Gen., Karen King Mitchell, Timothy P. Duggan, Asst. Attys. Gen., Jefferson City, for Appellant.
Lowell D. Pearson, Alex Bartlett, Jefferson City, for Respondents.
LIMBAUGH, Judge.
Two days before the end of the 1995 legislative session, the House of Representatives tacked onto the tail-end of the 31-page Senate Bill 60 (SB 60) an amendment, codified at section 260.003, RSMo Supp.1996, that imposed new requirements for the issuance of permits, licenses, and grants of authority for both solid waste and hazardous waste facilities. That amendment, the focus of this appeal, expanded the subject of the bill from one that originally encompassed only "solid waste management" to one encompassing both "solid waste management" and hazardous waste management. Respondents sued to enjoin enforcement of the hazardous waste management applications of SB 60 on the grounds that the amendment violated the "original purpose" provision from article III, section 21, of the Missouri Constitution and the "one subject" and "clear title" provisions from article III, section 23. The circuit court granted summary judgment in favor of Respondents. This Court has exclusive jurisdiction of the appeal. Mo. Const. art. V, sec. 3. For the reasons that follow, this Court holds that the subject of SB 60 was not clearly expressed in its title and that the amendment is therefore invalid to the extent that it pertains to hazardous waste management. The judgment of the circuit court is affirmed.
I.
Appellant, Director of the Department of Natural Resources (Director), first raises the threshold issue of whether Respondents have standing to challenge SB 60's constitutionality. The Respondents are Terry Schlemeier, a Missouri taxpayer; National Solid Waste Management Association, a trade association of individuals working in solid waste management; and Browning-Ferris Industries, Inc., a corporation engaged in the business of solid waste management. To establish standing, Schlemeier, like all Missouri taxpayers, need only show "that [his] taxes went or will go to public funds that have or will be expended due to the challenged action." O'Reilly v. City of Hazelwood, 850 S.W.2d 96, 98 (Mo. banc 1993). From our review of the record, the circuit court correctly concluded that "enforcement of SB 60 has and will cost the state funds for salaries, expenses, and other costs that would not otherwise be made." It follows that taxpayer Schlemeier has standing, and for that reason, we need not address the standing of the other two respondents. See Missouri Coalition for the Env't v. Joint Comm. on Admin. Rules, 948 S.W.2d 125, 132 (Mo. banc 1997).
II.
Article III, section 21, of the Missouri Constitution mandates that "no bill shall be so amended in its passage through either house as to change its original purpose." Section 23 requires that "[n]o bill shall contain more than one subject which shall be clearly expressed in its title." In recent years, this Court has had numerous opportunities to outline and discuss the policies behind these constitutional provisions. See Stroh Brewery Co. v. State, 954 S.W.2d 323 (Mo. banc 1997); Missouri Health Care Ass'n v. Attorney General, 953 S.W.2d 617 (Mo. banc 1997); Fust v. Attorney General, 947 S.W.2d 424 (Mo. banc 1997); Carmack v. Director, Missouri Dep't of Agric., 945 S.W.2d 956 (Mo. banc 1997); and Hammerschmidt v. Boone County, 877 S.W.2d 98 (Mo. banc 1994). In Stroh Brewery Co., we summarized:
[T]hese constitutional limitations function in the legislative process to facilitate orderly procedure, avoid surprise, and prevent "logrolling," in which several matters that would not individually command a majority vote are rounded up into a single bill to ensure passage. Sections 21 and 23 also serve to keep individual members of the legislature and the public fairly apprised of the subject matter of pending laws and to insulate the governor from "take-it-or-leave-it" choices when contemplating the use of the veto power. *820 Stroh Brewery Co., 954 S.W.2d at 325-26. Without question, the circumstances surrounding the passage of SB 60 are exactly those to which these constitutional limitations are addressed. The section pertaining to hazardous waste management was part of a last-minute amendment about which even the most wary legislators could hardly have given their considered attention and about which concerned citizens likely had no input.
A.
The Respondents' motion for summary judgment and the circuit court's ruling focused on the "single subject" and "original purpose" claims. Citing the standard from Hammerschmidt, the circuit court held that "hazardous waste does not 'fairly relate' or have a 'natural connection' to solid waste" so that the two could properly be categorized as one subject. Under a similar analysis, the circuit court determined that the purpose of the bill as originally introduced, "amendment of the state's solid waste management law," is different from a purpose that relates both to solid waste management and hazardous waste management. Although it is arguable that some overlap exists between the two kinds of waste (some hazardous waste may in a literal sense be solid waste), it is undisputed that the terms "solid waste management" and "hazardous waste management" are distinct. Under chapter 260, entitled "Environmental Control," hazardous waste management is subject to a specific regulatory scheme (secs. 260.350 to 260.434, RSMo 1994) separate and dissimilar from that pertaining to solid waste management (secs. 260.200 to 260.345, RSMo 1994). In fact, as part of the solid waste management scheme, the legislature has expressly defined "solid waste" to exclude hazardous waste. Section 260.200(34), RSMo Supp.1996.
Nonetheless, the Director claims that the amendment to SB 60 did not change the bill's original purpose or expand it to encompass more than one subject. The bill's original purpose and subject, as the Director explains, was not solid waste management, although that was the sole focus of the bill as originally introduced, but was instead the larger, more expansive subject of environmental control, which encompasses all types of waste management. Under this argument, hazardous waste management "fairly relates to" and has a "natural connection with" solid waste management because they both fall under the purview of environmental control. In any event, it is unnecessary to resolve these claims.
B.
Assuming, arguendo, that the original purpose and single subject of the bill is environmental control, there is still a clear title violation. The title of the bill as finally passed was:
AN ACT to repeal sections 260.200, 260.201, 260.202, 260.205, 260.207, 260.227, 260.228, 260.235, 260.241, 260.270, 260.273, 260.274, 260.275, 260.276, 260.325, 260.330, 260.335 and 260.345, RSMo 1994, relating to solid waste management, and to enact in lieu thereof twenty new sections relating to the same subject, with penalty provisions.
(Emphasis added.) The title's failure to refer also to hazardous waste management or to an all-encompassing category of environmental control, or something similar, is a fatal defect. The subject of the bill, whether characterized as a combination of solid waste management and hazardous waste management or as environmental control, is not clearly expressed in its title.
The standards for evaluating a "clear title" violation are well-settled. As this Court reiterated last year in Fust:
The "clear title" provision, like the "single subject" restriction, was designed to prevent fraudulent, misleading, and improper legislation, by providing that the title should indicate in a general way the kind of legislation that was being enacted. If the title of a bill contains a particular limitation or restriction, a provision that goes beyond the limitation in the title is invalid because such title affirmatively misleads the reader.
Fust, 947 S.W.2d at 429 (citations omitted). The basic idea, stated somewhat differently, is that "where the title of an act descends to particulars and details, the act must conform to the title as thus limited by the particulars *821 and details." Lincoln Credit Co. v. Peach, 636 S.W.2d 31, 39 (Mo. banc 1982). In more simple terms, the rule is that the title to a bill cannot be underinclusive.
The argument to be made that the title in this case is sufficiently inclusive is the same used to fend off the original purpose and single subject challenge: that the subject is not only solid waste management, but all matters "relating to" solid waste management. In other words, a title stating that the bill relates to solid waste management encompasses not only solid waste management, but also everything that is related to solid waste management. We disagree. The mere fact that two subjects in a bill can be reconciled as part of a broader subject, and thus satisfy original purpose or single subject challenges, does not, in itself, mean that the broader subject has been clearly expressed in the title of a bill.[1] A title that identifies that broader subject (in this case, environmental control) is very general, but it is accurate.[2] On the other hand, a title that fails to identify the broader subject, like the title in the case at hand, is not so clear. A bill's multiple and diverse subjects, absent specific itemization, can only be clearly expressed by their commonality, by stating some broad, umbrella category that includes all the subjects within its cover. A title stating that the bill relates to solid waste management is unclear if the bill relates also to hazardous waste management. It forces the reader to search out the commonality of the subjects (in this case, environmental control) from some source extrinsic to the title itself.
This, in essence, is the rationale behind the above-stated rule that where the title of a bill "descends to particulars and details, the act must conform to the title." In this case, the title of SB 60 does not generally indicate what the bill contains, but instead descends to the particular subject of solid waste management. The title not only states that the bill relates to solid waste management, but also lists specifically the repealed sections, each of which pertained solely to the solid waste management scheme, and further notes the enactment "of twenty new sections relating to the same subject." The irreconcilable problem is that the bill also includes the particular subject of hazardous waste management and thus does not conform to the title. This lack of conformity makes the title affirmatively misleading. It gives the reader the mistaken impression that the bill pertains to solid waste management only. In other words, the phrase "relating to solid waste management" erroneously implies that the bill does not relate to any other kind of waste management, and the reference to the repealed sections from the solid waste management scheme reinforces that erroneous implication.
Finally, it bears mention that expanding the title of a bill to reflect the commonality of all the subjects contained in the bill is not a novel proposition. It is the process that the legislature has routinely used to accommodate amendments to a bill and a process this Court has consistently approved. See, e.g., Westin Crown Plaza Hotel Co. v. King, 664 S.W.2d 2 (Mo. banc 1984) (noting with approval that the original title "An Act ... relating to fees and compensation of state and local registrars of vital statistics," was expanded to read "An Act ... relating to certain fees related to the division of health"); and Lincoln Credit Co., 636 S.W.2d 31 (noting with approval that the original title "An Act ... relating to interest" was expanded to read "An Act ... relating to certain credit transactions").
To summarize, this Court holds that the title to SB 60 is underinclusive. The title's reference to solid waste management reflects *822 neither the specific subjects contained in the bill, which are both solid waste management and hazardous waste management, nor any larger subject such as environmental control or waste management in general, under which both solid waste management and hazardous waste management would be covered. As such, the subject of the bill is not clearly expressed in the title. In view of the clear title violation, SB 60 is unconstitutional to the extent that it pertains to the subject of hazardous waste management.
III.
Respondents suggest that section 260.003, the offending provision of SB 60, should be severed in its entirety. This remedy, at least in this case, is contrary to the mandate of the severance statute, section 1.140, RSMo, that "all statutes ... should be upheld to the fullest extent possible." Associated Indus. v. Director of Revenue, 918 S.W.2d 780, 784 (Mo. banc 1996). Section 260.003 applies to hazardous waste management because the section refers to a "permit, license or grant of authority [] issued or renewed ... pursuant to this chapter" (emphasis added), and "this chapter" (chapter 260) encompasses the separate regulatory schemes for both solid waste management and hazardous waste management. The operative language of section 260.003, the phrase "pursuant to this chapter", cannot be excised in order to restrict the section's application to solid waste management only. Nonetheless, severance may be accomplished by restricting the application of the statute. As this Court recognized in Associated Industries v. Director of Revenue, where a provision is invalid as to some, but not all, possible applications, and it is not possible to excise part of the text and allow the remainder to be in effect, the language of the provision must be restricted to the valid application. Id. "Stated another way, the statute must, in effect, be rewritten to accommodate the constitutionally imposed limitation, and this will be done as long as it is consistent with legislative intent." Id.
The question of legislative intent, in this context, is whether the legislature would have enacted SB 60 without section 260.003's application to hazardous waste management. The answer is rather obvious. The legislative intent behind SB 60, indeed the very purpose of the bill, was to regulate solid waste management. After all, the bill's title stated expressly that the bill related to solid waste management, and all other provisions of the bill do, in fact, relate to solid waste management. On the other hand, the application of the bill to hazardous waste management, an application that appears only in section 260.003, seems to be incidental and perhaps even unintentional. Consistent with legislative intent, this Court declines to sever section 260.003 in its entirety.
IV.
For the foregoing reasons, the Director is enjoined from enforcing the hazardous waste management application of SB 60. Judgment is entered accordingly. Rule 84.14.
ROBERTSON, COVINGTON and WHITE, JJ., concur.
PRICE, J., dissents in separate opinion filed.
BENTON, C.J., and SPINDEN, Special Judge, concur in opinion of PRICE, J.
HOLSTEIN, J., not sitting.
PRICE, Judge, dissenting.
The majority holds that the title of SB 60 is unconstitutional because it is underinclusive in that it is entitled "an act ... relating to solid waste management," but fails to reflect that provisions of the bill also deal with hazardous waste management. The majority treats "relating to" as language of exclusivity, thereby restricting the reach of SB 60 to provisions dealing solely with solid waste management. I disagree because this restrictive reading of the title is contrary to the plain meaning of the words "relating to"; the majority's interpretation conflicts with our precedent instructing that only language that clearly and undoubtedly violates the procedural limitation will support a constitutional challenge; and no showing has been made on the record that hazardous waste management is, in fact, unrelated to solid waste management.
*823 The title of SB 60 expressly states the subject of the act to be "relating to solid waste management." The majority contends that the language "relating to" is a restrictive term and, therefore, it prohibits the inclusion of a provision dealing with hazardous waste. They hold that "the phrase 'relating to solid waste management,' ... implies that the bill does not relate to any other kind of waste management." This reading contradicts the plain and ordinary meaning of the words "relating to." These are words of connection not restriction. "Relate" is defined as "to show or establish a logical or causal connection between." "Related" is defined as "connected by reason of an established or discoverable relation." Webster's International 1916 (3d ed. 1981).
The common use of these words in the legislative process has been to indicate in a title that matters connected with the stated subject of the bill may be indicated therein. Otherwise, if each matter touched upon by a bill must be separately stated in the bill's title, the title would need to recite almost the entirety of the bill itself. Thus, the proper analysis should begin by determining if there is a logical or causal connection between hazardous and solid waste management. The majority fails to demonstrate that such a relationship is lacking other than to say, in a conclusory statement, that "relating to" excludes other types of waste management.
The majority's novel[1] reading of "relating to" as a restrictive term, and their subsequent finding that the title is "underinclusive," is contrary to the reasoning of our past decisions. In Hammerschmidt v. Boone County, we found that a bill whose subject was to amend laws "relating to elections" violated the single subject requirement of article III, sec. 23 when it included a provision that permitted a county to adopt its own constitution. 877 S.W.2d 98 (Mo.1994). Never relying on a restrictive reading of "relating to," we instead emphasized that the challenged provision "does not fairly relate to elections, nor does it have a natural connection to that subject." Id. at 103. Further, we recognized the following principles, stating:
[A]n act of the legislature approved by the governor carries with it a strong presumption of constitutionality. This Court will resolve doubts in favor of the procedural and substantive validity of an act of the legislature. Attacks against legislative action founded on constitutionally imposed procedural limitations are not favored; we ascribe to the General Assembly the same good and praiseworthy motivations as inform our decision-making processes. Therefore, this Court interprets procedural limitations liberally and will uphold the constitutionality of a statute against such an attack unless the act clearly and undoubtedly violates the constitutional limitation.... [T]his Court has consistently attempted to avoid an interpretation of the Constitution that will "limit or cripple legislative enactments any further than what was necessary by the absolute requirements of the law." Id. at 102 (citations omitted) (emphasis added).
In Fust v. Attorney General for the State of Missouri, 947 S.W.2d 424 (Mo.1997), a provision granting fifty percent of any punitive damages award to the state was challenged as not being clearly expressed in a title reading "to repeal ... and to enact ... new sections for the purpose of assuring just compensation for certain person's damages." Among appellant's arguments was that the title was too restrictive to encompass the "punitive damages" provision. We rejected those contentions, holding that "appellants have failed to sustain their burden of establishing that the title contains a restriction or limitation...." Id. at 429. We noted that "the one asserting the unconstitutionality of the statute has the burden of showing the constitutional procedural limitation has `clearly and undoubtedly' been contravened." Id. at 428. (citation omitted). Most significantly, we stated "the title need not describe every detail contained in the bill. The title to the act is valid if it indicates the general contents of the act...." Id. at 429 (emphasis added).
In Stroh Brewery Co. v. State, 954 S.W.2d 323 (Mo.1997), we analyzed the original title *824 of SB 933, which read "an act to amend ... by adding one new section relating to the auction of vintage wine...." Undertaking an original purpose analysis, we faced the issue of whether the term ["by"] restricted the manner in which the bill could be amended. We stated that "while [by] might have been meant to convey exclusivity, such a construction is not clear and undoubtedly so. When alternative readings of a statute are possible, we must choose the reading that is constitutional." Id. We noted that the legislature could have used clearer language of limitation, such as "for the sole purpose of." We also recognized that:
"[A] bill's sponsor is faced with a double-edged strategic choice. A title that is broadly worded as to purpose will accommodate many amendments that may garner sufficient support for the bill's passage. Alternatively, a title that is more limited as to purpose may protect the bill from undesired amendments, but may lessen the ability of the bill to garner sufficient support for passage. Because we are required to uphold the constitutionality of a statute against attack unless the statute clearly and undoubtedly violates the constitution, only clear and undoubted language limiting purpose will support an article III, section 21 challenge."
Id. at 326.
Particularly fatal to the majority's analysis is the fact it entirely overlooks the record below and the failure of respondent to establish as a matter of fact what solid waste management and hazardous waste management are, or that they are, in fact, not related. See State v. Hampton, 653 S.W.2d 191, 194 (Mo. banc 1983) ("The burden of establishing [a statute's] unconstitutionality rests upon the party questioning it.") Instead, the majority appears willing simply to declare that solid waste management and hazardous waste management are not related as a matter of law, despite the fact that neither is defined in the statute or has been defined by our prior case law.
Admittedly, the terms "solid waste" and "hazardous waste" are defined by the statute and it is true that the definition affirms that they are not one and the same. The definition does not say, however, that the two are unrelated to one another.
A review of the law surrounding hazardous and solid waste reveals that the two are related. Our legislature has defined solid waste, as shown above, as including "garbage, refuse and other discarded materials...." Section 260.200(34), RSMo 1996 Supp. Hazardous waste is defined as:
"[A]ny waste or combination of wastes, as determined by the commission by rules and regulations, which, because of its quantity, concentration, or physical, chemical or infectious characteristics, may cause or significantly contribute to an increase in mortality or an increase in serious irreversible, or incapacitating reversible, illness, or pose a present or potential threat to the health of humans or the environment."
Section 260.360(11), RSMo 1996 Supp. Simply put, both topics relate to waste. Whether hazardous or solid, our legislature has set up schemes to transport, discard, and dispose of it. It is telling that the legislature defined "solid waste" as excluding hazardous waste. Such a distinction would be unnecessary if the two subjects were unrelated to one another.
A look at federal environmental law also demonstrates the close relationship between hazardous and solid waste. The federal Resource Conservation and Recovery Act regulations define "Hazardous waste as a subset of `solid waste' with characteristics that pose hazards to human health or the environment." William Rodgers Jr., Environmental Law, Section 7.8 (1992). "Hazardous waste is a specifically regulated subcategory of waste, in which certain characteristics are met by a discarded material." James T. O'Reilly, State & Local Government Solid Waste Management, Section 1.01, n. 2 (citing 42 U.S.C. Section 6921) (1994). What is solid waste and what is hazardous waste, at times, may be a difficult distinction to make. "Waste can slip in and out of the `hazardous' category ..." Rodgers, at Section 7.8. "The criteria for identifying a `hazardous waste' have undergone episodic fits and starts ..." Id. "The stakes are high in these definitional disputes (refugees from the category of `hazardous *825 waste' under Subtitle C often reappear as `solid waste' under Subtitle D, where regulation is much milder) ..." Id.
Although there are critical distinctions between solid and hazardous waste, these distinctions do not break the relationship between the two categories of waste. For the majority's reliance on the definition of solid waste to be relevant, there must be language of exclusivity restricting the reach of SB 60 to only solid waste, which is lacking.
Were it our duty to draft the best possible titles for legislation, then I could understand the majority's hesitation to embrace this title. However, that is not our duty. Our duty is to strike down statutes only when the language "clearly and undoubtedly" violates the constitutional limitation. The majority's decision that "relating to solid waste management" implies that the bill does not relate to any other kind of waste is not a plain and ordinary reading of the phrase "relating to." Further, this new construction, when read with our previous opinions addressing these issues, will merely serve to confuse and frustrate the General Assembly as it tries to ascertain just what it is the constitution requires of them.
The majority also appears to criticize the legislature because this amendment was "tacked onto the tail end" of Senate Bill 60 two days before the end of the session. Again, it is not our duty to prescribe procedures for the legislature not already set out in the constitution. This fact is legally irrelevant and we overstep our bounds in making this criticism.
For these reasons, I respectfully dissent.
NOTES
[1] The dissent's focus on Hammerschmidt v. Boone County, 877 S.W.2d 98 (Mo. banc 1994), and Stroh Brewery Co. v. State, 954 S.W.2d 323 (Mo. banc 1997), fails to take into account that those decisions address only original purpose and single subject challenges. In contrast, the case at hand involves a clear title challenge, which for the reasons stated, necessitates a different analysis. The other case discussed by the dissent, Fust v. Attorney General, 947 S.W.2d 424 (Mo. banc 1997), involved a clear title challenge, but the title did not include the "relating to" language at issue in this case.
[2] A different problem arises, of course, when the larger category is so general and broad that it fails to give notice of the bill's true subject. Fust, 947 S.W.2d at 429.
[1] Respondent did not even raise or argue such a proposition.
Mahatma Gandhi Setu
Mahatma Gandhi Setu (also called Gandhi Setu or Ganga Setu) is a bridge over the river Ganges in Bihar, India, connecting Patna in the south to Hajipur in the north. Its length is and it is the third-longest river bridge in India. It was inaugurated in May 1982 in a ceremony in Hajipur by the then prime minister, Indira Gandhi.
Planning and significance
The bridge was approved by the Central Government in 1969 and built by Gammon India Limited over a period of ten years, from 1972 to 1982. The total expenditure was 87.22 crore (872.2 million rupees). It was built to connect North Bihar with the rest of Bihar through the state's capital at Patna, and as part of national highway 19 (NH19). Before this bridge was constructed, the only bridge crossing of the Ganges in Bihar was Rajendra Setu, approximately to the east, which had opened in 1959. Since then, the Vikramshila Setu has also been built across the Ganges. Two more rail-cum-road bridges are currently under construction, between Digha and Sonepur and at Munger.
The Indian postal department issued a commemorative postage stamp, "Landmark Bridges Of India: Mahatma Gandhi Setu", on 17 August 2007.
Engineering
The bridge consists of 45 intermediate spans of each and a span of at each end. The deck provides for a two-lane roadway for IRC class 70 R loading with footpaths on either side. The cantilever segmental construction method was adopted; each span has two cantilever beams on both sides which are free to move at the ends. It has two lanes, one upstream and the other downstream, each with a width of around . These lanes are free from each other with no connections. It was constructed using pre-cast parts, which were joined at both ends to complete the span. The spans are connected with a protrusion which is free to move longitudinally. Vertical movement allows vibrations from vehicular traffic to transfer smoothly between spans without much discontinuity.
Traffic congestion
In recent decades, the bridge has experienced major traffic chaos due to the increasing number of vehicles crossing it, operating in excess of capacity and overloading the structure. The Bihar government has planned to build two pontoon bridges parallel to it, in order to relieve these problems. The bridge is crossed daily by over 85,000 vehicles and 12,000 pedestrians.
History
Construction started: Year 1972
Scheduled opening: June 1978.
Tender cost: Rs 23.50 crore
1st Extension of Time (EOT): June 1980
Allocated cost: Rs 46.67 crore
Reasons for cost increase: This extra cost is the outcome of an "in-built" cost escalation clause in the contract
Reasons for delay: Heavy storm in April 1979 destroyed two gantries and casting beds. Each gantry crane weighs 300 tonnes. Huge shortage of cement and building material and a workers' strike
Reports: Cement and other building materials stored for this project find their way into Nepal and parts of Bihar from the northern side of the bridge.
2nd Extension of Time (EOT): December 1981
Project progress: 80% (physical) up to September 1980
Billed value: Rs 41 crore
Contractor’s extra claim: Rs 50 crore
Litigation & arbitration:
Disagreement between the contractors and the Government over payments stalled construction activity.
Claims and bills got referred to the Law Department.
Final completion date: June 1982 (Eastern carriageway)
Completion date: April 1987 (Western carriageway)
Total cost: 87 crores
Minister of State for Public Works: Raghunath Jha
Chief Minister: Jagannath Mishra
Structural integrity and failure
The bridge has often been subjected to structural loads and moving loads exceeding its design. Major repairs were initiated on it within five years of its completion. Poor maintenance, coupled with wear and tear caused by the unprecedented surge in traffic, has made the structure vulnerable. Other bridges in India which were built with the same cantilever design have developed cracks.
Investigations into the fissures developed in the bridge revealed the following defects: hammering at the hinges when vehicles plied; finger-type expansion joints in an advanced state of distress; wearing coat cracks; spilling of concrete at transverse joints; longitudinal cracks in precast segments; leakage of water inside the box girder from joints between segments and from holes provided for lifting the segments.
Mahatma Gandhi Setu's original superstructure is now being dismantled. The inferior quality of the reinforcement, coupled with inferior concrete, may have been among the causes of such catastrophic deterioration. The stressed cables were not grouted at all and are acting like de-bonded tendons, with minimal stress left; that is why the external pre-stressing applied later could not make up the stresses lost. The cables do not even conform to the as-built drawings submitted, and the as-built drawings themselves show how improper the design was. Providing a central hinge bearing may not have had as adverse an effect as the problems cited above. It is now becoming clear that there were faults in all departments, be it design, construction, supervision, or material quality.
References
See also
List of longest bridges in the world
List of longest bridges above water in India
List of bridges in India
Kacchi Dargah-Bidupur Bridge
Category:Road bridges in India
Category:Bridges completed in 1982
Category:Transport in Patna
Category:Tourist attractions in Patna
Category:Buildings and structures in Patna
Category:Bridges over the Ganges
Category:Bridges in Bihar
Category:Former toll bridges
Category:1982 establishments in India
Agenda
Misticísssimus
Concert by Burruezo & Bohèmia Camerata included within the "Nights of Music in the Jewish quarter"
Thursday 9 August - 22 h
Call de Girona
Pedro Burruezo, together with Bohèmia Camerata, comes to the Jewish quarter of Girona with a show based on medieval music of several origins, which will guide us from the past to the present: music with the scent of Al-Andalus, of Jewish Catalonia, of far-away troubadours... but from a wholly contemporary point of view.
A novel method for study of gastric mechanical functions in conscious mice.
A novel method has been developed for simultaneous study of gastric emptying, antral motility, and gastric muscle tone in conscious mice. Intragastric pressure was measured during infusion of an X-ray-opaque, viscous meal through a chronically implanted gastric fistula (0.25 ml/min). Compared with vehicle treatment, molsidomine (nitric oxide donor) and atropine (muscarinic receptor antagonist) treatment significantly reduced the area under the intragastric pressure curve (AUC) by 37 +/- 4% and 35 +/- 3%, respectively, (mean +/- S.E.M.) whereas N (G)-nitro-L-arginine methyl ester (L-NAME; nitric oxide synthase inhibitor) significantly increased the AUC by 20 +/- 3%. Atropine also significantly reduced the frequency and amplitude of stomach contraction-induced intragastric pressure waves while molsidomine only reduced the frequency. Gastric emptying, as assessed by X-ray imaging, was significantly delayed after L-NAME and atropine treatment. This methodology is the first to enable simultaneous assessment of gastric emptying, antral motility, and gastric tone in conscious mice and confirmed the important role of nitrergic and cholinergic innervation.
The Best New Stuff Dropping On Netflix And Amazon In August
The World Cup, Wimbledon and - let's face it - Love Island all ending has left a void in your evenings. So you may as well recommit to that dent in your sofa and enjoy the great new stuff being added to Netflix and Amazon this month.
Netflix
Better Call Saul Season 4 (7 August)
Rumours of a Breaking Bad reunion 10 years on persist. While there's no confirmation of that, the next best thing is the return of its spin-off show. The answer to whether Saul Goodman/Jimmy McGill's brother Chuck survived the fire in his house is swiftly answered in episode one, while the repercussions push Jimmy towards the criminal world of his alter-ego Saul and further away from his partner Kim.
Despite not even airing yet, Netflix's new series has caused plenty of controversy, with claims of 'fat-shaming' resulting in a 200,000-strong petition calling for its cancellation. Critics who haven't seen the show seem to have, as the Guardian pointed out, "confused the subject of a joke with its target". Creator, and long-serving Dexter writer, Lauren Gussis, has defended Insatiable - in which a disgraced lawyer-turned-beauty pageant coach takes on a bullied teenager - saying: "The show is a cautionary tale about how damaging it can be to believe that outsides are more important – to judge without going deeper". It'll certainly be the talk of the office kitchen at the least.
Twitter was briefly a joyful place when it was announced Simpsons' creator Matt Groening was making a new animated series for Netflix. Simpsons and Futurama writer and show-runner Josh Weinstein is also onboard to tell the story of a near-alcoholic princess called Bean (voiced by Broad City's Abbi Jacobson), her elf 'Elfo' and personal demon named Luci as they explore the dishevelled (and seemingly misnamed) Dreamland.
The Innocents (24 August)
The romantic story of teen lovers Harry and June running away from their families twists into a supernatural horror when the pair discover June has the power to shape-shift. As with many supernatural series, there's a mysterious doctor (played by Guy Pearce) who reveals there are other shape-shifters out there and promises to reunite her with her mother. The themes and characters seem to take some inspiration from Netflix hit Stranger Things, though interestingly, this is a British drama.
The first season of crime drama Ozark received widely positive reviews and several award nominations and wins for Jason Bateman and Laura Linney's performances. The follow-up sees Marty Byrde's family's debts to the drug cartel continue to govern their lives as they become further entangled thanks to the gang's new attorney, who has it in for them. A show where the acting pedigree and writing quality continue to grow and prove this isn't just a Breaking Bad copycat.
Amazon
Casual Season 4 (1 August)
Home assistants, dysfunctional families and VR make up the background of this exploration of modern life. In the series, divorced psychologist Valerie lives with her brother, the eternally single dating app creator Alex, and they raise her daughter together. Fans of Modern Family, Master of None and Transparent will approve.
Rage, inspiration and humour are all on the menu in this new fly-on-the-wall series charting Manchester City's extraordinary 2017/18 season which saw them break the Premier League's unbeaten run record. Highlights include John Stones throwing something on the floor in anger and Pep emotionally insisting, "I will defend you until the last day of our lives in the press conferences, but here I am going to tell you the truth." There's also a nice team rendition of 'Wonderwall'.
Dominic Cooper excels as the villainous Jesse Custer, a preacher from Texas who can command people to do as he wishes in this violent comic book adaptation. He teams up with an Irish vampire (Joseph Gilgun) and his assassin ex-girlfriend (Ruth Negga) in a series co-produced by Seth Rogen and with Breaking Bad’s Sam Catlin as show-runner. Which all sounds like a lot of fun.
ESQUIRE, PART OF THE HEARST UK FASHION & BEAUTY NETWORK
Esquire participates in various affiliate marketing programs, which means we may get paid commissions on editorially chosen products purchased through our links to retailer sites. | 2024-05-26T01:26:35.434368 | https://example.com/article/4254 |
The overall goal of this application is to discover and develop medications for the treatment of substance abuse. However, we expect that the compounds developed will also serve as biochemical probes useful in gaining a better understanding of the biochemical and molecular mechanisms of opiate addiction and withdrawal. The specific aim is to develop potent and subtype selective mu, delta, or kappa opioid pure antagonists. The scope of our approach will involve the design, synthesis, and biological evaluation of target compounds based on the N- substituted trans-3,4-dimethyl-4-(3-hydroxyphenyl)piperidine and BW373U86 class of opioid compounds. The scope of each study will include: (1) the development of testable models for opioid antagonist activity and subtype selectivity; (2) the design and synthesis of compound libraries followed by specific target compounds; (3) in vitro evaluation of the libraries and target compounds using radioligand and [35S]GTPgammaS binding assays; and (4) in vivo evaluation in animal models. Compounds that show potent and selective affinity at the mu and delta receptors will be evaluated for their ability to upregulate surface receptors by Dr. Chris Evans (UCLA). Compounds which show greater potency as antagonists in the [35S]GTPgammaS assay relative to the radioligand binding assay will be evaluated as inverse agonists by Dr. John Traynor (U. of Michigan). Behavioral studies will be conducted by Dr. Toni Shippenberg (NIDA-ARC) and Dr. Linda Dykstra (U. of North Carolina); selected compounds will be evaluated for suppression of ethanol-reinforcing responding in rats and rhesus monkeys by Dr. Harry June (U. of Indiana) and Dr. James Wood (U. of Michigan), respectively; high affinity kappa selective compounds will be submitted to the NIDA Opioid Treatment Discovery Program for both in vitro and in vivo activity, and compounds showing Ki values less than 100 nM will be submitted to the CPDD testing program. 
At present, few potent, systemically active and selective nonpeptide antagonists are available. The design and synthesis of novel selective nonpeptide opioid receptor antagonists will provide critically needed tools to advance our understanding of the role of the opioid receptor/endorphin system in both normal and various disease states, including drug addiction. In particular, we propose that the development of selective delta and kappa antagonists may provide a new generation of important investigational drugs and possible treatment drugs for people suffering from drug addiction.
Q:
NUnit: Generating dynamic test result output
I have some NUnit test cases that are very convoluted. As a result, I'd like to include some steps in the test result XML. While I can partly achieve that with static strings, I do have occasions where I need the contents to be dynamic.
For instance, let's say I have a test case that takes in a folder and does something to the 3rd file; I'd like to be able to output something like
Step 1: Reading folder "MyFolder"
Step 2: Reading file "Myfile.txt"
where MyFile.txt is a variable.
I have thought of using a Singleton output stream callable by each test case to output these things into a temporary file, but it is a bit inelegant.
Any thoughts?
A:
It seems like there is no standard way of doing this. I achieved my objective by having the unit tests (which were intrinsically run as separate sub-processes) print to stdout and then parsing that output manually. Not very clean, but it works.
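For anyone landing here later: newer NUnit releases (3.x onward, which this question may pre-date) expose `TestContext`, whose per-test output is embedded in the result XML, so no singleton stream or stdout parsing is needed. A minimal sketch, with folder and file names that are purely illustrative:

```csharp
using NUnit.Framework;

[TestFixture]
public class FolderProcessingTests
{
    [Test]
    public void ReadsThirdFileInFolder()
    {
        // Hypothetical values standing in for the real test inputs.
        string folder = "MyFolder";
        string file = "MyFile.txt";

        // TestContext.Out writes to the current test's own output buffer;
        // NUnit 3 copies it into the result XML as an <output> element
        // under the matching <test-case> node.
        TestContext.Out.WriteLine($"Step 1: Reading folder \"{folder}\"");
        TestContext.Out.WriteLine($"Step 2: Reading file \"{file}\"");

        Assert.Pass();
    }
}
```

Running the suite with `nunit3-console MyTests.dll --result=TestResult.xml` then yields the steps inline with each test case, keeping the output dynamic without any shared temporary file.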
Timeline of Mérida, Mexico
The following is a timeline of the history of the city of Mérida, Yucatán, Mexico.
Prior to 20th century
1542 - Mérida founded by Francisco de Montejo the Younger on site of former city T'ho.
1547 - Franciscan convent active.
1549 - Montejo's residence.
1561 - Mérida Cathedral construction begins.
1598 - Mérida Cathedral construction completed.
1618 - School of Mérida opens.
1624 - established.
1648 - Yellow fever epidemic.
1823 - Yucatán becomes part of Mexico.
1847 - Caste War of Yucatán begins.
1892 - Government Palace (Palacio de Gobierno) built.
1900 - Population: 43,630.
20th century
1910 - founded.
1922 - Universidad Nacional del Sureste established.
1929 - Airport begins operating.
1949 - Cine Teatro Mérida opens.
1950 - Population: 144,793.
1957 - Monumento a la Patria erected on the Paseo Montejo.
1962 - Instituto Tecnológico de Mérida established.
1978 - Pacheco murals in the Palacio de Gobierno completed.
1983 - Jardin Botanico Regional del CICY (garden) established in .
1988 - The city is hit by Hurricane Gilbert.
1993 - Catholic Pope John Paul II visits city.
1999 - Bill Clinton visits the city in a binational meeting.
2000 - The city is designated as the 1st American Capital of Culture.
21st century
2001 -
Yucatan Symphony Orchestra founded.
Ana Rosa Payán becomes the 29th mayor for a second period.
2002 - The city is hit by Hurricane Isidore.
2003 - C.F. Mérida football club formed.
2004 - Manuel Fuentes Alcocer becomes the 30th mayor.
2005 - The city held the International Mathematical Olympiad.
2006 - Mérida host the 18th International Olympiad in Informatics.
2007 -
George W. Bush is received in Mérida, here he signs the Mérida Initiative.
César Bojórquez Zapata becomes the 31st mayor.
2009 - The city held the 40th International Physics Olympiad.
2010
Angélica Araujo Lara becomes the 32nd mayor.
Population: 777,615; Metropolitan Area 973,046.
2011
The International Committee of the Banner of Peace titled Merida as "City of Peace".
The city held the II Alianza del Pacífico summit.
2012 - Alvaro Lara Pacheco becomes acting mayor; a few months later Renán Barrera Concha wins the local election and becomes the 34th mayor.
2014 - Mérida hosted the VI Summit of the Association of Caribbean States; more than 25 heads of state came to the city.
2015
Mauricio Vila Dosal becomes the 35th mayor.
Raúl Castro, President of Cuba, is received by President Enrique Peña Nieto on his first visit to Mexico as president. Here he announced that he would retire in 2018.
2018
María Dolores Fritz Sierra becomes the 36th mayor, as acting mayor in office.
The 3rd presidential debate of the 2018 general elections is hosted at the Mayan Museum of Mérida.
Renán Barrera Concha becomes the 37th mayor, the first to be constitutionally re-elected after the 2015 constitutional reform.
2019
The city hosted the 17th World Summit of Nobel Peace Laureates, receiving more than 30 of them.
See also
Mérida history
List of municipal presidents of Mérida
History of Yucatán
References
This article incorporates information from the Spanish Wikipedia.
External links
Digital Public Library of America. Items related to Mérida, various dates
Category:Mérida, Yucatán
Skin Cooler Bike Jersey
In a world where cheap cycle jerseys are a commodity given away at many bicycling events, it is hard to justify spending money on one. Once you wear the Skin Cooler™ Bike Jersey, you will see ours is worth every penny.
Sizing: Please refer to the size chart image on the left for an accurate fit. Fit Type: Form-fitted. Fit Note: Please do not assume your size; use our size chart. If in doubt, email questions to contact@desotosport.com.
This environmentally friendly innovation is completely free of solvents.
Unlike many other brands, the technology in our fabrics will NOT wash out. You can trust this jersey will keep you cool and comfortable and eliminate odor mile after mile.
Made in USA
These two Limited Edition prints reflect De Soto Sport's styling of clean, modern designs. We combine unique colors to create ultimate sophistication through simplicity. Get yours now before they sell out. When they are gone, they are gone for good!
We do custom! If your team is interested in some sort of customization, please send an email to: contact@desotosport.com
What makes ours different? Our Skin Cooler fiber is designed to feel and perform like silk (from a silkworm). Through Biomimicry, we took the virtues of this natural fiber and created a synthetic silk. It will keep you cool in hot weather and cooler when it is wet. Put it on and you will instantly feel the cool sensation. Wet it and you will notice a drop in the surface temperature of the skin between 7 and 10 degrees.
A MEDICAL TESTIMONIAL: We received this letter from Dermatologist and Mohs Surgeon Dr. Michael Bax.*
I am an avid runner and triathlete and I recently discovered your Skin Cooler products, of which I'm a huge fan. I have been wearing them for a few months now and couldn't be happier. I am a dermatologist and Mohs surgeon at Roswell Park Cancer Institute in Buffalo, NY. Every patient I see is plagued by basal cell carcinoma, squamous cell carcinoma, or melanoma. I recommend your products to my active patients on a daily basis. It allows them to continue their active lifestyle while protecting themselves from the sun. I find your product to be far superior to all other cooling and sun protective lines. I truly appreciate your efforts and innovations. Michael Bax, MD Roswell Park Cancer Institute Scott Bieler Clinical Science Center Buffalo, New York
*Dr. Bax has no financial investment in De Soto Sport and was not solicited nor compensated in any way for his letter.
Note: No garment should ever be a substitute for sunscreen, so we always recommend you wear sunscreen underneath these, and all, products. It is crucial to keep in mind that a sunscreen's SPF rating (whichever brand you may choose) refers only to its ability to protect skin from UVB radiation. Because we know that UVA damage can be just as insidious, it is essential for the health of your skin that you use a sunscreen that contains the UVA-protecting ingredients titanium dioxide or zinc oxide.
Q:
Is there any mention of a fountain of youth in the Puranas?
The concept of a fountain of youth is common in many other religions. Are there any similar mentions in the Hindu Puranas? (I searched quite a lot, but I did not find any. Perhaps it is known by a different name?)
Immortality and eternal youth go hand in hand. The concept of Amrutha is present, which, when consumed, gives us immortality. Is there a similar potion to gain youthfulness?
A:
Are there any similar mentions in the Hindu Puranas?
Yes, there is a concept similar to the fountain (or pond) of youth in the Hindu Puranas: the Siddha-Kunda, or Ashvini Kunda, created by the Siddha rishis.
It appears in the story of Rishi Cyavana regaining his youth. The story of Rishi Cyavana bathing with the Ashvini Kumaras in a pond called the Siddha-Kunda and regaining his youth and beauty is told in Skandha (Canto) 9, Chapter 3 of the Shrimad Bhagavata Purana (SB 9.3: The Marriage of Sukanyā and Cyavana Muni).
Background: When the heavenly physicians, the Aśvinī-kumāra brothers, once visited Cyavana Muni, the muni requested them to give him back his youth. The two physicians took Cyavana Muni to a particular lake, in which they bathed and regained full youth. After this, Sukanyā could not distinguish her husband. She then surrendered unto the Aśvinī-kumāras, who were very satisfied with her chastity and who therefore introduced her again to her husband, Cyavana Muni.
कस्यचित् त्वथ कालस्य नासत्यावाश्रमागतौ ।
तौ पूजयित्वा प्रोवाच वयो मे दत्तमीश्वरौ ॥11॥

kasyacit tv atha kālasya nāsatyāv āśramāgatau
tau pūjayitvā provāca vayo me dattam īśvarau

Thereafter, some time having passed, the Aśvinī-kumāra brothers, the heavenly physicians, happened to come to Cyavana Muni's āśrama. After offering them respectful obeisances, Cyavana Muni requested them to give him youthful life, for they were able to do so. (SB 9.3.11)

ग्रहं ग्रहीष्ये सोमस्य यज्ञे वामप्यसोमपोः ।
क्रियतां मे वयोरूपं प्रमदानां यदीप्सितम् ॥12॥

grahaḿ grahīṣye somasya yajñe vām apy asoma-poḥ
kriyatāḿ me vayo-rūpaḿ pramadānāḿ yad īpsitam

Cyavana Muni said: Although you are ineligible to drink soma-rasa in sacrifices, I promise to give you a full pot of it. Kindly arrange beauty and youth for me. (SB 9.3.12)

बाढमित्यूचतुर्विप्रमभिनन्द्य भिषक्तमौ ।
निमज्जतां भवानस्मिन् ह्रदे सिद्धविनिर्मिते ॥13॥

bāḍham ity ūcatur vipram abhinandya bhiṣaktamau
nimajjatāḿ bhavān asmin hrade siddha-vinirmite

The great physicians, the Aśvinī-kumāras, very gladly accepted Cyavana Muni's proposal. Thus they told the brāhmaṇa, "Just dive into this lake of successful life." (SB 9.3.13)
After taking a dip in the kunda, Rishi Cyavana, who had been very old and unattractive, became youthful again.
This kunda is also mentioned in the Vamana Purana, Chapter 34, Verse 31 (Account of the Forests, Rivers and Tirthas of Kurukshetra; see page 142), which calls it the Ashvini Kunda or Ashvini Tirtha and places it near Kurukshetra.
अश्विनोस्तिर्थमासाद्य श्रद्धावान्यो जितेन्द्रिय: ।
रूपस्य भागी भवति यशस्वी च भवेन्नर: ॥ VP 34.31 ॥

The man who exercises checks on his senses and has keen obeisance gets a beautiful complexion and frame as blessings for bathing in the holy place of the Ashvini-Kumaras.
So a concept similar to the fountain of youth is indeed present in the Puranas, with the slight difference that the youth-restoring water is a kunda (pond) rather than a fountain.
| 2024-05-17T01:26:35.434368 | https://example.com/article/6409 |
Caleb's 4X4 Contest Round One | Golden Statue

My Round One entry for Caleb's 4X4 Contest: a "Golden" statue. This is the first contest I've ever entered that has multiple rounds, but it has few enough rounds that school shouldn't get too much in the way. Round One was to build whatever you want, as long as it did not break the rules. I decided not to go with my comfort zone of sci-fi but to build something different, just to be different; if I advance into the next round, I'll probably have to build outside my comfort zone anyway.
The "Golden" Statue is dedicated to an adventurer who had an impact on life in his time. I only had Yellow, so that is why he is not Gold. He is in a heroic pose looking off to destinations unknown.
He has collected many items, like a peace-offering cloth he wears on the right side of his belt and multiple alliance and friendship bands from tribal communities on his arms.
He has a cape to keep warm in the night.
He wears a nice raccoon-skin hat.
He carries a small sword for when he gets into tough situations.
He is the greatest adventurer known to the LEGO world.
All the people admire this golden representation of such a great man.
Thanks for viewing. Don't forget to rate and comment.
Tag Archives: paradigm shift
After reading a paper by Ashtekar on quantum gravity and thinking about it, I realized what my trouble with the Big Bang theory was. It is more on the fundamental assumptions than the details. I thought I would summarize my thoughts here, more for my own benefit than anybody else’s.
Classical theories (including SR and QM) treat space as continuous nothingness; hence the term space-time continuum. In this view, objects exist in continuous space and interact with each other in continuous time.
Although this notion of space time continuum is intuitively appealing, it is, at best, incomplete. Consider, por ejemplo, a spinning body in empty space. It is expected to experience centrifugal force. Now imagine that the body is stationary and the whole space is rotating around it. Will it experience any centrifugal force?
It is hard to see why there would be any centrifugal force if space is empty nothingness.
GR introduced a paradigm shift by encoding gravity into space-time, thereby making it dynamic in nature rather than empty nothingness. Thus, mass gets enmeshed in space (and time), space becomes synonymous with the universe, and the spinning-body question becomes easy to answer. Yes, it will experience centrifugal force if it is the universe that is rotating around it, because that is equivalent to the body spinning. And no, it won't, if it is in just empty space. But "empty space" doesn't exist. In the absence of mass, there is no space-time geometry.
Thus, naturally, before the Big Bang (if there was one), there couldn't be any space, nor indeed could there be any "before." Note, however, that the Ashtekar paper doesn't clearly state why there had to be a big bang. The closest it gets is that the necessity of the Big Bang arises from the encoding of gravity in space-time in GR. Despite this encoding of gravity, which renders space-time dynamic, GR still treats space-time as a smooth continuum, a flaw, according to Ashtekar, that QG will rectify.
Now, if we accept that the universe started out with a big bang (and from a small region), we have to account for quantum effects. Space-time has to be quantized, and the only right way to do it would be through quantum gravity. Through QG, we expect to avoid the Big Bang singularity of GR, the same way QM solved the unbounded ground-state energy problem in the hydrogen atom.
What I described above is what I understand to be the physical argument behind modern cosmology. The rest is a mathematical edifice built on top of this physical (or indeed philosophical) foundation. If you have no strong views on the philosophical foundation (or if your views are consistent with it), you can accept the Big Bang with no difficulty. Unfortunately, I do have differing views.
There is much more work to be done on this front. But for the next couple of years, with my new book contract and pressures from my quant career, I will not have enough time to study GR and cosmology with the seriousness they deserve. I hope to get back to them once the current phase of spreading myself too thin passes. | 2024-03-13T01:26:35.434368 | https://example.com/article/4374 |
Q:
How can I preset the filename in NSSavePanel?
NSSavePanel used to have a runModalForDirectory:file: method which let you preset the directory and filename for a save panel. But that is deprecated in 10.6
When creating an NSSavePanel, how can I preset the filename without using the deprecated method?
A:
Use the setNameFieldStringValue: method, which was added in 10.6, before running the save panel. If you want to set the default directory too, you will need the setDirectoryURL: method, also added in 10.6.
NSString *defaultDirectoryPath, *defaultName;
NSSavePanel *savePanel;
...
[savePanel setNameFieldStringValue:defaultName];
[savePanel setDirectoryURL:[NSURL fileURLWithPath:defaultDirectoryPath]];
[savePanel runModal];
A:
There is a method that I didn't notice at first, NSSavePanel#setNameFieldStringValue, which sets the filename.
Here is a complete example in MacRuby syntax:
def run_save_settings_dialog(sender)
dialog = NSSavePanel.savePanel
dialog.title = "Save Settings"
dialog.canCreateDirectories = true
dialog.showsHiddenFiles = true
dialog.nameFieldStringValue = "MyFile"
dialog.canChooseFiles = true
dialog.canChooseDirectories = false
dialog.allowsMultipleSelection = false
dialog.setDirectoryURL NSURL.fileURLWithPath("some/path")
if dialog.runModal == NSFileHandlingPanelOKButton
save_settings(dialog.URL)
end
end
def save_settings(file_url)
File.open(file_url.path, 'w') {|f| f.write "Stuff" }
end
| 2023-10-26T01:26:35.434368 | https://example.com/article/8963 |
Q:
Difference running script in Canopy vs Command Line
I have a script that outputs a series of images to a Notebook, which I have simplified below:
import os
import sys
import tkinter as tk
from tkinter import ttk
path = sys.path[0]
os.chdir(path)
def on_close():
root.quit()
root.destroy()
root = tk.Tk()
root.geometry('1250x550')
n = ttk.Notebook(root)
n.grid()
imgs = [img for img in os.listdir(path) if img.endswith('.png')]
for img in imgs:
f = ttk.Frame(n)
n.add(f, text=img)
photo = tk.PhotoImage(file=img)
label = ttk.Label(f, image=photo)
label.image = photo
label.grid(row=1, column=1, padx=(300,0))
root.wm_protocol('WM_DELETE_WINDOW', on_close)
root.mainloop()
When I run the script from the command line in Windows, the script works as is. If I change my code to root = tk.Toplevel(), an extra window appears (i.e., the implicit tk.Tk() window), which is what I expected.
However, when I run the above script from within Canopy, I get an error saying "pyimage doesn't exist". I can resolve this by changing my code to root = tk.Toplevel(), and everything runs normally with no extra window.
Why is there a discrepancy when I run from Canopy? I've read questions where people needed to change root = tk.Toplevel() when displaying images because they were somehow creating two root windows within their script. However, I don't believe that describes my situation, and doesn't explain why my script works from the command line but not Canopy.
A:
By default, Canopy's (IPython) kernels are created in PyLab mode with a default Qt backend. For information about switching / disabling this, see https://support.enthought.com/hc/en-us/articles/204469880-Using-Tkinter-Turtle-or-Pyglet-in-Canopy-s-IPython-panel
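Beyond switching the backend, a common Tkinter-side workaround (a minimal sketch, not taken from the linked article; the 1×1 placeholder image and variable names are illustrative, and it assumes a display is available) is to pass an explicit master so the image is registered with your own Tcl interpreter rather than whatever default root the embedding environment created first — mixing interpreters is the usual source of the "pyimage doesn't exist" error:

```python
import tkinter as tk

try:
    root = tk.Tk()
    # master=root ties the PhotoImage to this interpreter; omitting it
    # binds the image to the module-level default root, which may belong
    # to a different Tcl interpreter inside an IDE's IPython kernel.
    photo = tk.PhotoImage(master=root, width=1, height=1)
    label = tk.Label(root, image=photo)
    label.image = photo  # keep a reference so the image is not garbage-collected
    created = True
    root.destroy()
except tk.TclError:
    # Headless environment (no display); the sketch degrades gracefully.
    created = False
```

The same `master=` keyword works for the `tk.PhotoImage(file=img)` call in the question's loop.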
| 2023-10-08T01:26:35.434368 | https://example.com/article/2696 |
Shana Grice, one of the many women killed by a violent ex-partner, would probably be alive today were it not for widescale failures of police to act properly on incidents of stalking and domestic abuse. The complacency about male violence towards women across our criminal justice system is a major symptom of endemic, institutionalised misogyny within the police service.
When Grice called police on Michael Lane after he chased her down the street, snatched her phone and pulled her hair, it was Grice, not her abuser, who felt the long arm of the law. In the proud tradition of blaming women abused by men they once said yes to, the police decided Grice was lying because Lane showed a number of text messages from Grice that indicated they had been in a relationship. Lane had been reported to police in 2010 by a woman accusing him of harassment, but no action was taken. Following Grice’s death in 2016, 11 more women came forward to accuse Lane of stalking and harassment.
It’s not as if feminist campaigners, including survivors of the most extreme forms of male violence, haven’t been telling police to buck up their ideas for decades. In 2005, my investigation on stalking murders showed that stalking and harassment when done by men to female ex-partners is a clear warning of homicide risk.
The Centre for Women’s Justice recently submitted a super complaint logging some of the routine and widespread failures in the policing of rape and domestic violence. Dash, the Domestic Abuse, Stalking and Honour Based Violence model – the most widely used risk-assessment tool in the UK – highlights the risk factors, including coercive control, attempted strangulation, threats with knives, separation, sexual violence and stalking. What Dash can’t do is eliminate the prejudicial views many police officers hold about female complainants in domestic and sexual violence cases.
As Davina James-Hanman, who advises police on domestic violence policy and practice, tells me, there is a tendency among criminal justice agencies to elevate acts of physical violence above other forms of abuse – but for most abused women, acts of physical violence are not frequent and are often low-level assaults. What indicates high risk is the abuser treating them with contempt, and as an object and a possession. Coercive control and stalking feature in almost all domestic homicides whereas previous incidents of physical violence only feature in about half.
Coercive control, which is always the backbone of stalking and harassment, became illegal in December 2015, but how many police forces have actually undergone training in the law? Karen Ingala Smith, founder of Counting Dead Women, says: “The most basic lesson of all, ‘believe women’, seems never to be learned. We should believe women when they report violence, and when they tell us that they are afraid for their lives. Women and girls who do speak out are routinely disbelieved.”
Myths about false or malicious allegations; stereotypes of paranoid time-wasters, nagging wives, slags and harridans; victim-blaming notions about deserving and undeserving victims and risk-takers – these are all continually regurgitated. This is not a matter of one rogue police force, it is endemic and country-wide. Grice knew that Lane was a danger to her. Time and time again, I have heard of cases such as this where women have told police they will die unless the perpetrator is dealt with.
Until we tackle sexism in the police service, women like Shana Grice will die preventable deaths. As I found when producing a radio documentary alongside the formidable Jackie Malton, former Metropolitan police detective, sickening attitudes and behaviour prevail among many male officers towards both female colleagues and members of the public.
If a victim of stalking can be criminalised for “wasting police time”, then the officers responsible should also face criminal proceedings for misconduct in public office. Until police face serious sanctions for conduct of this kind and those in charge are also held accountable, nothing will change. We will continue to see murders of women that could have been prevented.
It is telling that the number of women killed by men in the UK has remained fairly static for as long as statistics have been monitored, despite decades of feminist campaigning against bad practice. It is time to recognise that until police are held properly accountable for such catastrophic failures to protect vulnerable women, our morgues will still groan under the weight of such tragedies.
• Julie Bindel is a journalist and political activist, and a founder of Justice for Women | 2023-11-26T01:26:35.434368 | https://example.com/article/9655 |
355 S.W.3d 123 (2011)
MEMORIAL HERMANN HOSPITAL SYSTEM, Appellant,
v.
PROGRESSIVE COUNTY MUTUAL INSURANCE COMPANY, Appellee.
No. 01-10-00408-CV.
Court of Appeals of Texas, Houston (1st Dist.).
March 17, 2011.
Rehearing Overruled May 19, 2011.
*124 Margaret A. Pollard, Jared Cole Johnson, for Memorial Hermann Hospital System.
Maurice Joseph Meynier IV, for Progressive County Mutual Insurance Company.
Panel consists of Chief Justice RADACK and Justices ALCALA and BLAND.
OPINION
JANE BLAND, Justice.
Under the Texas Hospital Lien Law, a hospital "has a lien on a cause of action or claim of an individual who receives hospital services for injuries caused by an accident that is attributed to the negligence of another person." TEX. PROP.CODE ANN. § 55.002 (West 2007). To secure the lien, section 55.005 of the Texas Property Code requires that a hospital file notice with the county clerk before payment to the entitled party. The statute also declares that the county clerk "shall index the record in the name of the injured individual." TEX. PROP.CODE ANN. § 55.005 (West 2007).
In this case, Progressive County Mutual Insurance Company (Progressive) settled a claim brought by Carlos Martinez against its insured arising out of Martinez's injuries in a car accident. Memorial Hermann Hospital filed a hospital lien for the cost of Martinez's medical treatment half an hour before Progressive issued the settlement check. A hospital lien usually attaches to settlement proceeds, and an insurance company usually names the hospital lienholder as a payee on the settlement check. But in this case, because the clerk had not yet indexed the lien, Progressive maintains that it was unaware of the lien and, therefore, it did not name Memorial Hermann as a payee.
The trial court's summary judgment ruling interprets the hospital lien law as requiring that the clerk index the lien before it can be considered secured, and thus holds that the timing of the indexing controls perfection of the lien. Memorial Hermann contends on appeal that the lien is secured on filing, and thus it was entitled *125 to allocation of the settlement proceeds. We agree. We therefore reverse the summary judgment and remand the case for further proceedings.
Background
On October 29, 2007, a driver insured by Progressive caused a car accident that injured Carlos Martinez. Martinez was transported from the accident site to Memorial Hermann Hospital where he received treatment for his injuries. The cost of his treatment totaled $130,365.92.
On November 20, 2007, Progressive and Martinez settled his negligence suit arising out of the accident. Progressive issued a check to Martinez, his wife, their attorney, and Memorial Hermann for $100,007.00.
The parties did not cash the check. Shortly after receiving it, the Martinezes' counsel contacted Progressive and asked it to issue a new check that did not include Memorial Hermann as a payee. Counsel explained that Memorial Hermann had not filed a lien notice for the cost of Martinez's treatment, so the Martinezes were not required to allocate their settlement proceeds toward payment of the hospital bill.
Progressive re-issued the check as directed at 3:23 P.M. on December 12, 2007. Thirty minutes before, on the same date, Memorial Hermann had filed its notice of lien with the Harris County Clerk's Office.
Before issuing each check, Progressive conducted lien searches on the county clerk's website. It conducted a search on November 19 before issuing the first check. On the afternoon of December 12, before issuing the second check, Progressive searched the website twice, first at 2:25 P.M., and again at 3:30 P.M. None of the searches revealed the existence of a lien on the Martinez settlement.
According to the county clerk, the process of recording and indexing the lien usually takes two business days after filing. The clerk testified that the Memorial Hermann lien on Martinez's settlement was not indexed until December 17, 2007.
Progressive moved for summary judgment on both traditional and no-evidence grounds, contending that Memorial Hermann was not entitled to the settlement proceeds because it could not show that the Harris County Clerk had indexed its hospital lien on Martinez's personal injury claim proceeds before Progressive paid out the settlement. The trial court granted the motion, and Memorial Hermann appeals.
Discussion
I. Summary judgment standard of review
We review de novo the trial court's grant of a motion for summary judgment. Mann Frankfort Stein & Lipp Advisors, Inc. v. Fielding, 289 S.W.3d 844, 848 (Tex. 2009). After an adequate time for discovery, a party may move for no-evidence summary judgment if no evidence exists of one or more essential elements of a claim or defense on which the adverse party bears the burden of proof at trial. TEX.R. CIV. P. 166a(i); see also Hamilton v. Wilson, 249 S.W.3d 425, 426 (Tex.2008). The trial court must grant a no-evidence summary judgment motion unless the non-movant produces competent summary judgment evidence that raises a genuine issue of material fact on each element specified in the motion. TEX.R. CIV. P. 166a(i); Mack Trucks, Inc. v. Tamez, 206 S.W.3d 572, 582 (Tex.2006). In a traditional motion for summary judgment, the movant must establish that no genuine issue of material fact exists and the movant is thus entitled to judgment as a matter of law. TEX.R. CIV. P. 166a(c). To determine if the non-movant raises a fact issue, we review the evidence in the light most favorable to *126 the non-movant, crediting favorable evidence if reasonable jurors could do so, and disregarding contrary evidence unless reasonable jurors could not. See Fielding, 289 S.W.3d at 848 (citing City of Keller v. Wilson, 168 S.W.3d 802, 827 (Tex.2005)).
When, as here, a party moves for summary judgment on both traditional and no-evidence grounds, we first review the trial court's decision under the no-evidence standard. See TEX.R. CIV. P. 166a(i). If the non-movant failed to produce more than a scintilla of evidence raising a genuine issue of fact on the challenged elements of his claim, we need not consider whether the movant met his burden on the motion for traditional summary judgment. Ford Motor Co. v. Ridgway, 135 S.W.3d 598, 600 (Tex.2004).
II. Interpretation of Hospital Lien Law
This case concerns the proper reading of the hospital lien statute. Statutory interpretation is a question of law that we review de novo. Bragg v. Edwards Aquifer Auth., 71 S.W.3d 729, 734 (Tex.2002); In re Canales, 52 S.W.3d 698, 701 (Tex.2001). Our primary goal in interpreting a statute is to ascertain and to effectuate the legislative intent. Id. at 702. In doing so, we examine the statute's plain language. Helena Chem. Co. v. Wilkins, 47 S.W.3d 486, 493 (Tex.2001); Fitzgerald v. Advanced Spine Fixation Sys., Inc., 996 S.W.2d 864, 865 (Tex.1999). We presume the legislature included each word in the statute for a purpose and that words not included were purposefully omitted. In re M.N., 262 S.W.3d 799, 802 (Tex.2008). We may also consider: the object sought to be obtained; the circumstances of the statute's enactment; the legislative history; the common law or former statutory provisions, including laws on the same or similar subjects; the consequences of a particular construction; administrative construction of the statute; and the title, preamble, and emergency provision. TEX. GOV'T CODE ANN. § 311.023 (West 1998); Helena Chem. Co., 47 S.W.3d at 493 (citing Ken Petroleum Corp. v. Questor Drilling Corp., 24 S.W.3d 344, 350 (Tex.2000)). Additionally, we presume that the legislature intended a just and reasonable result; a result feasible of execution; the entire statute to be effective; and the public interest to be favored over any private interest. TEX. GOV'T CODE ANN. § 311.021 (West 2005); Helena Chem. Co., 47 S.W.3d at 493.
The Texas Hospital Lien Law allows a hospital to place a lien on the claim of an individual who receives medical care for injuries from an accident caused by the negligence of another. TEX. PROP.CODE ANN. § 55.002. To secure a lien, the statute prescribes the following procedure:
(a). . . . a hospital or emergency medical services provider must file written notice of the lien with the county clerk of the county in which the services were provided. The notice must be filed before money is paid to an entitled person because of the injury.
(b) The notice must contain:
(1) the injured individual's name and address;
(2) the date of the accident;
(3) the name and location of the hospital or emergency medical services provider claiming the lien; and
(4) the name of the person alleged to be liable for damages arising from the injury, if known.
(c) The county clerk shall record the name of the injured individual, the date of the accident, and the name and address of the hospital or emergency medical services provider and *127 shall index the record in the name of the injured individual.
TEX. PROP.CODE ANN. § 55.005.
In granting summary judgment, the trial court interpreted the requirement that the lien notice be filed "before money is paid," set out in subsection (a), as applying to subsection (c)the clerk's recording and indexing requirement. Memorial Hermann contends that this interpretation is incorrect; the county clerk's ministerial recording and indexing is not required to secure the lien.
We read the plain language of section 55.005 as providing that a lien is secured when the lienholder properly files with the county clerk a written notice of lien that complies with the statutory requirements. Subsection (a) contains the only temporal restriction relating to the lien. See TEX. PROP.CODE ANN. § 55.005(a) ("[N]otice must be filed before money is paid. . . ."). The temporal language is in passive voice, but it refers only to the action of filing, and does not refer to the county clerk's obligation. The language preceding the temporal language makes the hospital responsible for filing the lien notice. The requirement that the lien notice "be filed before money is paid" thus applies only to the filing requirement, which falls squarely on the hospital. See Spradlin v. Jim Walter Homes, Inc., 34 S.W.3d 578, 580 (Tex.2000) (describing doctrine of last antecedent, a canon of statutory construction instructing that "that a qualifying phrase . . . must be confined to the words and phrases immediately preceding it to which it may, without impairing the meaning of the sentence, be applied.").
Subsection (c) requires the county clerk to index the lien, but does not set any deadline. Progressive claims that section 13.002 of the Property Code, which declares that a properly recorded instrument is "notice to all persons of its existence" and "subject to inspection by the public," is evidence that the legislature intended that proper recordation be necessary to provide the public with notice. See TEX. PROP.CODE ANN. § 13.002 (West 2004). According to Progressive, the provision's emphasis on recording, rather than filing, supports the conclusion that the lien is not effective until it is properly recorded. The Property Code, however, specifies that the duty of proper recordation belongs to the county clerk. TEX. PROP.CODE ANN. § 11.004(a) (West 2004) (providing that clerk must correctly record instruments, as required by law, "within a reasonable time after delivery"). Section 11.004 also makes the county clerk liable for damages and civil penalties if it violates the specified recordation requirements. See TEX. PROP.CODE ANN. § 11.004(b). Progressive's proffered interpretation would potentially expose the county clerk to liability under circumstances like those presented here, that is, when the hospital files its notice of lien timely but the insurer issues the settlement check before the clerk records the notice of lien. We do not believe that the legislature intended that result.[1] Other *128 statutory provisions similar to section 55.005 emphasize the lienholder's filing responsibility and expressly disclaim any consequence from a delay or error in the clerk's ministerial duty to index lien notices. See TEX. BUS. & COM.CODE ANN. § 9.517 (West Supp.2010) ("The failure of the filing office to index a record or to correctly index information contained in a record does not affect the effectiveness of the filed record."); TEX. AGRIC. CODE ANN. 
§ 128.048 (West 2004) (providing that, with respect to chemical and seed liens, Chapter 9 of Business & Commerce Code applies to extent it is otherwise consistent with chapter); TEX. PROP.CODE ANN. § 53.052 (West 2007) (declaring that person claiming lien arising from a residential construction project must file affidavit with county clerk "not later than the 15th day of the third calendar month after the day on which the indebtedness accrues," and requiring county clerk to record, index, and cross-index affidavit, but specifying that "[f]ailure of the county clerk to properly record or index a filed affidavit does not invalidate the lien.").
Our interpretation of section 55.005 comports with the law's purpose, which "is to provide hospitals an additional method of securing payment for medical services, thus encouraging the prompt and adequate treatment of accident victims." Bashara v. Baptist Mem'l Hosp. Sys., 685 S.W.2d 307, 309 (Tex.1985), quoted in Daughters of Charity Health Servs. of Waco v. Linnstaedter, 226 S.W.3d 409, 411 (Tex.2007); Members Mut. Ins. Co. v. Hermann Hosp., 664 S.W.2d 325, 326 (Tex.1984) (explaining that legislature aimed to encourage hospitals to treat persons injured in accidents on emergency basis by providing means of obtaining compensation for care of patients who otherwise would be unable to pay).
Applying the plain language of the statute to the facts, Memorial Hermann's lien on Martinez's settlement proceeds was secured before Progressive executed the check for payment of the settlement.
III. Effect of corporate representative's deposition testimony
Memorial Hermann's corporate representative, Michael Bennett, testified in his deposition that he did not believe the lien in this case was properly secured according to statute. Progressive included an excerpt of this testimony as evidence in support of its motion for summary judgment, arguing that Bennett's statement estopped Memorial Hermann from taking a contrary position. Memorial Hermann complains of the summary judgment to the extent it relies on this testimony.
"[A]ssertions of fact, not pleaded in the alternative, in the live pleadings of a party are regarded as formal judicial admissions." *129 Holy Cross Church of God in Christ v. Wolf, 44 S.W.3d 562, 568 (Tex. 2001) (quoting Houston First Am. Sav. v. Musick, 650 S.W.2d 764, 767 (Tex.1983)). Bennett's statement is one of opinion, not fact. See Ryland Group, Inc. v. Hood, 924 S.W.2d 120, 122 (Tex.1996) (holding that affidavit containing opinion that "failure to notify amounts to concealment or a known violation of the specifications and industry practice," was conclusory and did not raise a fact issue in support of summary judgment); TX Far West, Ltd. v. Tex. Invs. Mgmt., Inc., 127 S.W.3d 295, 307-08 (Tex. App.-Austin 2004, no pet.) (holding that affiant's opinion that restrictive covenant had neither been abandoned nor its enforcement waived stated only legal conclusion and thus could not support summary judgment). Bennett's opinion cannot trump the application of the statute's plain language to undisputed facts.
Conclusion
We hold that, under the Texas Hospital Lien Law, Memorial Hermann's lien was secured on filing, which was accomplished before Progressive paid out the settlement funds. We therefore reverse the summary judgment and remand the case for further proceedings consistent with this opinion.
NOTES
[1] Progressive also relies on Methodist Hospitals of Dallas v. Mid-Century Insurance Co. of Texas, 259 S.W.3d 358 (Tex.App.-Dallas 2008, no pet.), in contending that section 55.005 requires strict adherence, but our conclusion that Memorial Hermann's notice of lien complies with the statutory requirements renders that case inapposite. In Methodist, the Dallas court of appeals affirmed summary judgment in favor of the insurer on the grounds that substantive errors in the notice rendered the lien unenforceable. Id. at 360-61. The court rejected the hospital's contention that it had substantially complied with the statutory requirements, concluding that (1) the error in specifying the date of the accident was not insubstantial because the express language of the statute made the date of the accident "a critical component of the notice," and (2) the error in listing the injured person (instead of Mid-Century's insured) as the person liable for the damages made the lien notice unenforceable on its face. Id. at 361. Interestingly, the court of appeals noted in passing that the hospital filed the lien a day after the insurer issued the settlement check, but did not address the issue of the lien notice's timeliness. See id. at 359-60.
As you might have heard by now, the Rouyn-Noranda Huskies won the QMJHL Championship last week thanks in no small part to Avalanche prospects Julien Nantel, Jean-Christophe Beaudin and Anthony-John Greer. It was a dominating performance losing just 4 times, scoring 88 goals while letting in only 31 in 20 games. Besides the glory of hoisting the Presidents Cup Trophy, they also won a trip to Red Deer, Alberta to compete for the Memorial Cup against OHL champ London, WHL champ Brandon and the host team Rebels.
Instead of a dry recap and preview littered with stats you'll never remember, I decided to try something different and do a chat with friend of the show tigervixxxen, who has far more knowledge of the team than I do and writes about Avs prospects for the Burgundy Brigade.
Keep in mind this is an experimental format and I'm definitely no Dick Cavett so bear with me here, there's plenty of good info below. That said, let's get to it.
* * * * *
earl06: I thought the 1st game of the finals showed how good the Huskies really were and might have been where they won it all. No Meier because of the stupid flying elbow in the final Moncton game and no Beaudin more or less from halfway in the 1st on, they buckled down and used what they had wisely. Chase Marchand was excellent in net and a clutch goal from Brouillard with less than 2 minutes remaining got them a win in a game where they were outshot and horrible in the faceoff circle. Good depth and a great game plan made on the fly got them the first victory and it carried through the whole series.
tigervixxxen: Pushing through adversity was really the story of the Huskies' championship run. They didn't have to come back in games often but when they did in Game 1 and nearly pulled it off in Game 4 the Huskies can just pull goals out of thin air when they needed them. They always bounced back after a loss with a strong effort and a win. For me, it was getting through the bizarre situation of Game 2 when the game was postponed after the first period due to one of the on ice workers puncturing a coolant hose and making the ice unplayable for the night. There was a lot of doubt if the game could resume in Rouyn-Noranda at all or if Game 2 had to be moved to Shawinigan. At one point the league announced even if they could play the following day in Rouyn-Noranda that their 2-1 lead would be wiped out and the game restarted at 0-0. After careful inspection of the rule book and the on ice repairs, the Huskies did resume the following afternoon and turned their 2-1 lead into a 4-1 victory. It must have been very unnerving and stressful through those 24 hours and neither Bouchard nor the team ever complained and they just went back out and finished their business. The Huskies also had to deal with not only their top center Beaudin going down in Game 1 but they were also without the services of their top defenseman Jeremy Lauzon since he suffered a life threatening cut to his neck from a skate blade in the last game of round 2.*** The Huskies incredible depth allowed them to pick up and just keep going even through suspensions to Timo Meier and Francis Perron at various points as well.
e: Yeah, If game 1 set the tone then game 2 was a gut punch for Shawinigan. It also was where AJ Greer really put his mark on the series, scoring what turned out to be the winning goal then backing it up with insurance 20 hours later. He was impressive against Moncton but the way he elevated his game in the finals was just awesome. In today's stat-heavy world, things that can't be measured like heart and attitude and confidence get dismissed too easily. Greer's got plenty of talent and the numbers to back it up now but the way he played in games 2 & 3 was a testament to his mindset and more than justified the Avs picking him at #39, which I can now honestly say worried me at the time. From where I sat he was the best player for Rouyn-Noranda in that series.
Just to add a note about Beaudin, first of all I was really bummed he couldn't play (much) because I think he too could have thrived in the finals plus I really wanted to watch him play. Second of all, his absence highlighted how good he is on faceoffs and how much that meant to the team. He was around 65% in the Moncton series, which is amazing, and I doubt the Huskies were above 50% in the finals with him out.
tv: It was really nice to see Greer get some attention and credit for the season he put together since arriving in Rouyn-Noranda. He had found his scoring touch in February but five goals in five finals games including one in each game they won is pretty incredible. Stephane Leroux from RDS said the media considered Greer for the playoff MVP award. Even though the Huskies added Timo Meier at the trade deadline, some of the media considered Greer the Huskies' best mid season addition. Greer is really the power forward they needed, to give them some size and bite in their lineup who can score goals too. It was incredible to see just how different of a player he was once he got to play his game.
When Beaudin went out I almost knew they'd win the Cup because of course it would be a bit bittersweet. He really should get credit for everything he does for that team and it's a bummer people couldn't see it. I'm glad Greer got his coming out party but Beaudin deserved one too. I'm glad you got to see what he can do vs. Moncton at least. I'm hoping the injury isn't that bad or else why try playing him and risking making it worse? Hopefully the 10 days or so off for him helps. I bet they run him out there in some capacity to give him the Mem cup experience but I hope it's more than that.
e: Moving to some of the earlier rounds, I wanted to get your take on the Moncton series. I thought that was much more even than the Finals and the games that the Wildcats were really physical (albeit generally dirty too) against RN were the ones that were the most troublesome. I thought for sure Shawinigan would try that approach but they either couldn't or just didn't.
Not much to say about Drummondville, the stats paint a picture of the bottom team in the bracket vs the top. I don't know what's more impressive, averaging over 8 goals a game or giving up only 4 total. Either way, that was pretty much a bye and got Sergei Boikov to San Antonio as quickly as possible.
Blainville-Boisbriand was another dominant run. Aside from getting shut out 1-0 in the 1st game they had a pretty easy time of it only allowing 2 goals in 5 games. I guess the impressive thing about the 1st 2 rounds was taking care of business and not playing down to the level of the opponent. Probably the toughest job Coach Gilles Bouchard had was keeping everyone's compete level up and staying ready for the tougher teams beginning in the 3rd round.
tv: It's funny, originally I didn't want the Huskies to draw Drummondville in the first round. Not that I was really afraid of a first round upset, but of the potential low seed opponents they were not my favorite draw. For whatever reason Drummondville was a thorn in their side as a division opponent and took the Huskies to OT three times over the course of the season, and even beat them in a SO in one of their last losses before going on a 14 game win streak into the playoffs. Drummondville wasn't very talented but I figured through gooning it up and cherry picking they might get lucky a few times. So the fact that the Huskies held them to four goals and flat took care of business was most impressive.
The series against the Armada was tougher than on paper because they were another division opponent and had just knocked off the powerful Val-d'Or Foreurs. The Armada goalie Samuel Montembeault stood on his head in front of a team who was more than happy to play an extremely trappy style and the Huskies experienced a scoring drought after the light show that was the Drummondville series. This is where the Huskies defense and goaltending had to match the Armada's and they had to overcome the adversity of Perron's two game suspension.
I agree the Moncton series was the toughest and obviously stretched the longest, even with the Huskies' most dazzling and heroic series clinching comeback in game 6. It was the team that was able to put the most physical pressure on the Huskies, coupled with the ability to convert on their opportunities. They were also the most unfamiliar opponent, hailing from the Maritimes, and had also just knocked off a formidable foe themselves in the Gatineau Olympiques. Shawinigan was a skilled dream team put together at the deadline via something like 12 trades. I believe they fell in love with the hype over their skill and on paper potential. Certainly Shawinigan was tough to shut down entirely as they got their goals in game 4, but the Huskies were a much more complete team and it showed in the end.
e: Other than common fan paranoia, was there ever a time during the playoff run where you thought RN were in trouble as far as winning the President's Cup?
tv: I'm an anxious, prepare for the worst sort of fan to begin with but deep down no, not really. I knew this team was special from the very beginning and they always bounced back, they never even lost two consecutive games in the playoffs. Once Val-d'Or and Gatineau went out in rounds 1 and 2 respectively, those were the two teams with size, physicality and defense that might have created a tough series for the Huskies, their path was pretty clear and the Huskies were on a mission.
e: Yeah, the only time I was a bit concerned was in the 2nd period of game 6 vs Moncton. At that point you're still looking at a team that was 11-3 so... no biggie.
Before we get to some of our own guys I'd like to talk a little about Chase Marchand. Ever since that beautiful 1-0 shutout vs the Wildcats, I've been beating the drum for the Avs to seriously consider signing this kid. His stats in the playoffs were marvelous, 15 wins, 6 of them shutouts, and .946 Sv%. He had a couple shaky performances too but I was actually glad to see that because it showed that he was just playing normal rather than having "hot goalie" syndrome. The Avs have a hole to fill at goalie and he's at prime age to step into the 5th spot on the depth chart. Unlike with skaters I think signing with Colorado is attractive for goalies thanks to Allaire & Filiatrault. The Avs have to be vigorously pursuing Marchand, right?
tv: The Avs definitely have to figure something out for the 5th spot in their goalie depth chart especially with the departure of Roman Will and could certainly do worse than to look in Marchand's direction for help. His stats are of the eye popping variety and set several QMJHL records, which is a big credit to the Huskies' team defense in front of him but six shutouts is six shutouts. Marchand's journey really adds to the storybook quality of the Huskies' run. Twice waived, including one time he even ended up in the OHL to play in Mississauga with Spencer Martin, Marchand was just about out of options and resigned to go back to junior A. As fate would have it, one of the Huskies goalies quit a couple games into the season and they snagged Marchand off of waivers in their own desperation. Marchand was lights out most of the season but a concussion kept him out most of the final month of the regular season. Huskies' goalie of the future Samuel Harvey played very well in his absence to the point I wasn't sure if Marchand would get the net back for the playoffs. Credit to coach Gilles Bouchard for making the tough call there and it paid off with Marchand's dazzling performance start to finish.
e: Marchand just makes so much sense from both sides, I can't think of a good reason why it wouldn't happen other than the rookie year in Ft Wayne deal. Even so, whoever gets that spot will play ~10 games in San Antonio, which is nice.
Speaking of San Antonio, one guy we know will be there is Julien Nantel. He performed pretty admirably filling in for Beaudin but I was really impressed with him on the wing as a shutdown guy. He's very quick in the d-zone covering the points and starting the breakout. I loved the way as soon as he gets the puck away from an opponent he's just gone the other way. Lots of great instinct, skating and puck moving talent, maybe even some leadership and winning attitude that should help the Rampage out a bunch. What did you see over the course of the season as far as development and areas he improved?
tv: Nantel is truly one of those LW/C guys who can and has played both positions quite a bit. I like him on the wing better myself; like you said, he can really use his speed and create more from that position. The Huskies might have used Nantel in more of a complementary role this season but always turned to him when they needed a center or someone to move up in the lineup. I know the Avs asked Nantel to work on being more physical as well as consistency, and I think he achieved both this season. He works well along the boards and uses his speed defensively. Another aspect where I saw improvement this year was better control of the puck when he would get a turnover and turn on the jets; in the past he seemed to want to go faster than the play would allow, and now he can turn a play into a shot or a good pass more often. I'm very much excited to see what Nantel can do at the pro level with San Antonio next year, he should give them a good dose of speed and skill.
namespace Jackett.Common.Models.IndexerConfig.Bespoke
{
/// <summary>
/// Toloka indexer configuration: basic login credentials plus an option
/// (enabled by default) to strip Cyrillic letters.
/// </summary>
internal class ConfigurationDataToloka : ConfigurationDataBasicLogin
{
public BoolItem StripCyrillicLetters { get; private set; }
public ConfigurationDataToloka()
=> StripCyrillicLetters = new BoolItem() { Name = "Strip Cyrillic Letters", Value = true };
}
}
The station reports police had been called to the same residence earlier Wednesday. A witness told CBS Denver a young, tall brunette woman was outside the home earlier in the day with a baseball bat and that she broke a car window. It is unclear if that young woman was Isabella.
Robert Guzman, Isabella's father and Hoy's ex-husband, said he got a call earlier in the day Wednesday from Hoy indicating trouble between her and their daughter. He said things had been tense between the two for some time.
"She was really scared, so I told her that yeah I would go talk to Isabella just to try to make things better," said Guzman. "I still can't believe that this happened."
Investigators reportedly remained at the crime scene Friday and said they could be there for several more days analyzing evidence.
It is known in photography that silver halide grains are useful in forming developable latent images when struck by actinic radiation, such as electromagnetic radiation. Silver bromide, silver chloride, and silver iodide, as well as combinations of these halides in mixed crystals, have been widely used in photographic products.
In the formation of color photographic products both for color negative film, transparencies, and color paper, there has been a continuous improvement in the properties of these materials, particularly in their speed and fine grain properties.
However, there remains a need for such materials that have higher contrast, lower fog, and improved reciprocity over wide exposure ranges.
As shown in Research Disclosure, December 1989, 308119, Sections I-IV at pages 993-1000, there have been a wide variety of dopants, spectral sensitizers and chemical sensitizers proposed for addition to emulsions of gelatin and silver halide grains or crystals. These materials have been proposed for addition during emulsion making as dopants or after emulsion formation as sensitizers. However, there remains a continued need for an improvement in the use of such materials to obtain better photographic performance.
U.S. Pat. No. 4,933,272 by McDugle et al discloses formation of silver halide grains exhibiting a face centered cubic crystal lattice structure internally containing a nitrosyl or thionitrosyl coordination ligand and a transition metal chosen from groups 5 to 10 inclusive of the periodic table of elements. These complexes play a significant role in modifying photographic performance.
U.S. Pat. No. 4,806,462 by Yamashita et al, at column 4, discloses formation of silver halide photographic material that may be doped with a variety of metals including magnesium, calcium, barium, aluminum, strontium, ruthenium, rhodium, lead, osmium, iridium, platinum, cadmium, mercury, and manganese.
However, there remains a need for improved photographic products that have a sharper toe (higher contrast) at low exposures while maintaining reciprocity during exposure. There is particular need for color print materials that have these properties.
using System;
using System.Collections.Generic;
using System.Text;
namespace Waher.Content.Multipart
{
/// <summary>
/// Represents mixed content, encoded with multipart/mixed
/// </summary>
public class MixedContent : MultipartContent
{
/// <summary>
/// Represents mixed content, encoded with multipart/mixed
/// </summary>
/// <param name="Content">Embedded content.</param>
public MixedContent(EmbeddedContent[] Content)
: base(Content)
{
}
}
}
The Thai-Australian Alliance: developing a rural health management curriculum by participatory action research.
In 2006, the Thai National Health Security Office and the Ministry of Public Health, through the Nakhonratchasima Provincial Health Office in Thailand, asked the Thai-Australian Health Alliance to identify competencies and skills for a health management curriculum for health professionals working in primary healthcare in rural Thailand. The study was conducted in Nakhonratchasima province, Thailand, utilizing questionnaires, focus group discussions and an intensive 3-day workshop involving a purposive sample of 35 participants drawn from various sectors in the health industry. Findings identified the core curriculum competencies and skills required by rural doctors, nurses and public health officers. Critical issues regarding continuing education for health professionals in primary healthcare were also examined. This study found that a primary healthcare approach should include the principles of sustainability and capacity building, and incorporate team-based, interprofessional and long-term continuous learning.
Q:
Java Date vs Calendar
Could someone please advise the current "best practice" around Date and Calendar types.
When writing new code, is it best to always favour Calendar over Date, or are there circumstances where Date is the more appropriate datatype?
A:
Date is a simpler class and is mainly there for backward compatibility reasons. If you need to set particular dates or do date arithmetic, use a Calendar. Calendars also handle localization. The previous date manipulation functions of Date have since been deprecated.
Personally I tend to use either time in milliseconds as a long (or Long, as appropriate) or Calendar when there is a choice.
Both Date and Calendar are mutable, which tends to present issues when using either in an API.
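To make the Calendar-for-arithmetic point concrete, here is a minimal sketch (the class and helper names are invented for illustration, not from the original answer). `Calendar.add` handles month and year rollover that naive millisecond arithmetic would have to reimplement, and cloning before mutating sidesteps the shared-mutable-state problem noted above:

```java
import java.util.Calendar;
import java.util.TimeZone;

public class CalendarArithmeticDemo {
    // Illustrative helper: returns a copy of 'start' advanced by 'days',
    // letting Calendar handle month/year rollover.
    static Calendar plusDays(Calendar start, int days) {
        Calendar copy = (Calendar) start.clone(); // Calendar is mutable, so work on a copy
        copy.add(Calendar.DAY_OF_MONTH, days);
        return copy;
    }

    public static void main(String[] args) {
        Calendar cal = Calendar.getInstance(TimeZone.getTimeZone("UTC"));
        cal.clear();
        cal.set(2020, Calendar.DECEMBER, 30); // note: months are zero-based
        Calendar later = plusDays(cal, 3);
        // 30 Dec 2020 + 3 days rolls over into the next year: 2 Jan 2021
        System.out.println(later.get(Calendar.YEAR));         // 2021
        System.out.println(later.get(Calendar.DAY_OF_MONTH)); // 2
    }
}
```

Because `plusDays` never mutates its argument, the caller's Calendar stays untouched even when the result is passed around an API.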
A:
The best way for new code (if your policy allows third-party code) is to use the Joda Time library.
Both, Date and Calendar, have so many design problems that neither are good solutions for new code.
A:
Date and Calendar are really the same fundamental concept (both represent an instant in time and are wrappers around an underlying long value).
One could argue that Calendar is actually even more broken than Date is, as it seems to offer concrete facts about things like day of the week and time of day, whereas if you change its timeZone property, the concrete turns into blancmange! Neither object is really useful as a store of year-month-day or time-of-day for this reason.
Use Calendar only as a calculator which, when given Date and TimeZone objects, will do calculations for you. Avoid its use for property typing in an application.
Use SimpleDateFormat together with TimeZone and Date to generate display Strings.
If you're feeling adventurous use Joda-Time, although it is unnecessarily complicated IMHO and is soon to be superseded by the JSR-310 date API in any event.
I have answered before that it is not difficult to roll your own YearMonthDay class, which uses Calendar under the hood for date calculations. I was downvoted for the suggestion but I still believe it is a valid one because Joda-Time (and JSR-310) are really so over-complicated for most use-cases.
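As a sketch of the "Calendar as calculator, SimpleDateFormat for display" advice above (the class and method names here are invented for illustration), the same Date instant renders differently depending on the TimeZone handed to the formatter:

```java
import java.text.SimpleDateFormat;
import java.util.Calendar;
import java.util.Date;
import java.util.TimeZone;

public class DateDisplayDemo {
    // Illustrative helper: format an instant for display in a given zone.
    static String display(Date instant, TimeZone zone) {
        SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd HH:mm");
        fmt.setTimeZone(zone); // the Date carries no zone; the formatter supplies it
        return fmt.format(instant);
    }

    public static void main(String[] args) {
        // Build the instant with Calendar (the "calculator"), then hand the
        // resulting Date to the formatter purely for display.
        Calendar cal = Calendar.getInstance(TimeZone.getTimeZone("UTC"));
        cal.clear();
        cal.set(2020, Calendar.JUNE, 1, 12, 0, 0);
        Date noonUtc = cal.getTime();

        System.out.println(display(noonUtc, TimeZone.getTimeZone("UTC")));   // 2020-06-01 12:00
        System.out.println(display(noonUtc, TimeZone.getTimeZone("GMT+2"))); // 2020-06-01 14:00
    }
}
```

Keeping the zone on the formatter rather than in a stored field is one way to avoid the "blancmange" effect described earlier.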
[Complex regional pain syndrome (CRPS) : An update].
The acute phase of complex regional pain syndrome (CRPS) is pathophysiologically characterized by an activation of the immune system and its associated inflammatory response. During the course of CRPS, central nervous symptoms like mechanical hyperalgesia, loss of sensation, and body perception disorders develop. Psychological factors such as pain-related anxiety and traumatic events might have a negative effect on the treatment outcome. While the visible inflammatory symptoms improve, the pain often persists. A stage-adapted, targeted treatment could improve the prognosis. Effective multidisciplinary treatment includes the following: pharmacotherapy with steroids, bisphosphonates, or dimethylsulfoxide cream (acute phase), and antineuropathic analgesics (all phases); physiotherapy and behavioral therapy for pain-related anxiety and avoidance of movement; and interventional treatment like spinal cord or dorsal root ganglion stimulation if noninvasive options fail.
18K Gold Intaglio Necklace - For Sale
18K Gold Intaglio Necklace
Contact Dealer For Price
Triple Intaglio Necklace/Choker, c. 1994. Italian glass intaglios, foil backed, encased in 18K yellow gold. Gold tested with 18K acid. Center intaglio is the fleur de lys on a shield with open winged swan at the top, all in orange. 2 side intaglios are a child with a double sided horse in yellow. The intaglios measure 1" x 3/4". The necklace is only 15 inches long. Also, there are two full-cut, bezel set diamonds on either side of the center intaglio. Total diamond weight is approximately .20 carats, F color, SI2 in clarity. Diamonds tested with a diamond tester. Marked DPD on catch. Stamped DPD - 1994 on back of center intaglio. In pristine original condition. 71.2 gr. 45.8 dwt
My RSS Feed: Hot News! (http://www.timfisherartist.co.uk/index.html)
Last updated: Sun, 8 Sep 2019 11:40:20 +0100

Apawlogies from Freyja (Freyja's tails, 2019-09-07)

I cannot believe that it is June since my last blog was written; Tardis-like, time has moved from high summer to the beginnings of autumn here at Fisher HQ.
The easiest way to travel through time and update you is to condense all the happenings that have occurred since June.
The pigeons (now a gang of three) continue to make fun of me in the garden, swooping down just close enough to avoid contact and then sitting on the fence in defiance.
A pair of squirrels have made dawn raids on next door's hazel tree, taking all the nuts and throwing the shells on to our pond patio.
My Master had a cunning plan with our hazelnut trees down the field this year: he picked them early and ripened them off in the conservatory! Master 1, Squirrels 0.
June saw us take a last-minute caravan break to Longnor in Derbyshire. It was lovely doing some great walks in the countryside and chilling out in the caravan later. Wildlife seems intent on following me around, as each night around dusk the most enormous rabbits used to come on to one of the nearby caravan pitches for a grass supper. My owners thought this was hilarious and named it bunny TV, with me, nose glued to the window, watching. This site also had a boules pitch, and I had to learn not to chase the boules.
July was the annual Patchings Festival and I spent a very relaxing few days at my second home being thoroughly spoilt.
The end of July we once again hitched up the caravan and headed for Greetham; this was a lovely break away, and I tried my very first doggy ice cream... delicious!
August arrived and we hitched up again and took the caravan to North Piddle in Worcestershire. This was a lovely small campsite, pitch black at night, so you could see the stars. With the wildlife theme quite prevalent for this blog, I saw my first muntjac and quite a few squirrels. The purpose of this August visit was for my Master to take part in the Great Broadway Paint Off. As you can see from the photo below, the weather was good and many artists set up easels around the village of Broadway.
The photos also show that the caravan break was an excellent chance to learn the art of backgammon!
I also got the chance to catch up for a walk with my Schnauzer friend Yoshi.
The vegetable garden has done well again this year; sadly the sweetcorn is now all finished, but on a happier note there are still loads of carrots. We have had a good crop of fruit, and you can see that it has been put to good use as jam.
I think we have caught up, and I promise not to leave it as long before the next instalment of Freyja's Tails is published.
Keep the home fires burning.... (Freyja's tails, 2019-06-08)

Apologies for not blogging in May; it's been really busy here at Fisher HQ, and my assistant editor for my blog has been occupied with other jobs!
The vegetable garden is coming along nicely, both at the front garden potager, designed in the Piet Mondrian style (well, it had to have an art theme), and the plot down our field. Our conservatory became a large greenhouse for a few months, as the Master was waiting for the weather to warm up so all his plants, nurtured from seed, could be planted out. The heavy rain today should help the vegetables grow; I'm especially looking forward to the sweetcorn crop in a few months' time, and lots of carrots have been sown too, so I am a lucky pup.
We have had a couple of incidents down the field where the mischievous lambs have managed to get through the fence twice and pulled up the rhubarb and trampled the carrot seedlings!! They tend to get into little gangs and see how much mischief they can get up to; it sometimes involves trying to get through to the adjoining cricket field.
The pigeon saga continues; they now sit on the fence and have the cheek to do low passes across the lawn... just out of reach. The Fishers have been very lucky this year in the garden: the new bird box built by my Master last year has successfully had a brood of Great Tits. A pair of wrens are eyeing up another box on the fence, and the house martins have once again returned to nest at the front and back of our house.
It is once again the build-up and preparation for the annual Patchings Art, Craft and Design Festival. I am hoping for excellent sales of the Master's Oil Pastel book, especially as I feature on the Patchings T-shirts and in a whole chapter of the book.
Now to explain the title of this month's blog: as regular readers will know, I am a girl who really appreciates the home comforts, and extra specially the LOG BURNER. A few weeks ago, we spent a few days down the field doing some chainsaw work on some rotten branches of a huge willow that grows on the bank of the river running through the field. It was all going well until a huge piece rolled into the river; now that was not going to be left to float away. Quickly a plan was formulated involving a tow rope and the car; it was pulled up the bank and our firewood was safe and sound.
I will sign off now; it's raining cats and dogs again and my warm, snuggly, extra large blanket is calling me for forty winks. Below are a few photos of the garden and field; spot my extra cute begging pose for a biscuit!
See you next month,
Freyja x
Wood Pigeons return…
Freyja's tails, 17 April 2019

It is that time of the year again: the wood pigeons have returned to haunt me in the garden… At least with the new pond layout and patio I have not fallen into the pond… yet!! My favourite toy, a dragon named George, did fall in yesterday; he is currently drying off in the greenhouse. It has been super busy again at Fisher HQ, with lots of jobs started down the field: fence building, digging and planting veggies. The hard work is exhausting; I have my own bed from which to watch the humans at work.
The Master has just returned from a four-day painting holiday for Big Sky Art, based near Burnham Market, North Norfolk. My five star accommodation at the parents' house was just as good: ginger nuts on command, Stephen taking me for walks, and a beautiful log fire burning all day. What more could a girl ask for?
Listening to the humans discussing the art holiday did make me a bit jealous; apparently my Mistress was seen with the resident Jack Russell, named Mars, on her knee!!! We will have to have words. Here is the Freyja feedback following the recent trip:
Luxurious 5* accommodation at the White House for 4 nights.
Full use of all areas of the White House.
Honesty bar.
Superb selection at breakfast; I hear the Danish pastries were excellent.
Pre-dinner canapes and a two course dinner each night.
Big, spacious barn studio for exclusive use during the holiday.
Close to both village and coastal locations to paint and sketch en plein air, with lots of painting subjects within walking distance of the White House, including farmland, stables and old buildings.
Free superfast WiFi.
It sounds a super base for an art holiday, and the Master is already booked for April and October next year. Maybe I should go next time and meet Mars, or is the lure of being totally spoilt at the parents' house too much? Hmmm, decisions, decisions x
Hope you all have a super Easter; it looks like the weather is improving. Here are some photos from the Big Sky Art holiday and a photo from this morning… spot the terrier and the pigeon.
Love Freyja x
Happy hedge laying!
1 March 2019

It has been nonstop here at Fisher HQ, and I cannot believe that as I sit here pawing through the latest doggie blog the calendar is reading Friday 1st March.
January saw us in the Lake District for a four-day break, walking and collecting reference photos for painting and sketching. The weather was dreadful on the first day, with driving rain and wind; it was such a relief to get back to the hotel and the log fire to warm up and dry off. The next few days were glorious, as you can see from the photo below taken at Elterwater looking back to the Langdale Pikes.
My Master has been kept super busy recently:
1. Writing articles for the Leisure Painter magazine.
2. Planting lots of vegetable seeds ready for the new growing season (I hope there are LOTS of carrots).
3. Dismantling and moving our small greenhouse to a new location in the back garden.
4. Laying a section of hedge at our field just outside the village, which forms the main news of this month's blog.
5. Clearing a section in the field ready for next year's polytunnel and fencing it to keep the sheep out.
We have owned the field for quite a few years now; I had the honour of taking over from the legendary Purdey, the Parson Jack Russell. There are many tales regarding Purdey, and most of them involve spending many, many happy hours digging in the black, sticky, smelly mud on the riverbank. However, I am totally different in that respect: mud is just not my forte. I would much sooner snuggle down and take time observing the humans at work.
The hedge alongside the road side of the field has been neglected for many years, and Tim had laid a couple of sections in previous years. There was a gap in the art calendar, so just over a week ago we all spent a few days tackling the mammoth job, with me as chief foreman and doggy alert for the kettle boiling for a cuppa in the shed. My Mistress was in charge of dragging all the brash created into the field and piling it up ready for a big bonfire. They both worked extremely hard, and villagers walking by have commented on what a good job has been done.
The art demonstration and workshop season has begun, which meant the computer was free today for me to write to you all. There are some super workshops and holidays coming up; see the Tutoring in the UK page.
I have posted some photos of the hedge laying. It is looking brilliant; very proud dog, I have clever owners xxx
See you next month,
Freyja xx
Here's to a Happy New Year 2019
28 December 2018

I hope you have all had a good Christmas, not eaten too much, and have enjoyed catching up with family and friends. It has been busy in the Fisher household, with different sets of family visiting over the Christmas holiday. There have been some mouth-watering smells coming from the kitchen, and I have been practising my best big doggy eyes hoping to be chief taste tester!
Christmas Day was excellent: three more cuddly toys to run around with and a doggy selection box from Stephen, one of my favourite humans xx. Boxing Day saw the grandson and family arrive with Poppy the Sprocker, and we had a smashing time tearing around the fields playing ball. Poppy loves water and had a great time diving into the river; I preferred to watch from the riverbank, as wild water swimming in December is not my idea of fun.
The Master has spent the holiday catching up on reading his new books, and today he is busy down the field chopping wood to keep the log burner supplied and me cosy and warm.
Next year is starting to look good: there is talk of a trip to the Lake District, so I will be able to walk around in my new fleece jacket. The caravan site brochures have also been studied, so here's to more adventures in 2019 for me to write about.
Wishing all my readers a Happy New Year 2019, and may all your dreams and wishes come true. In the photos you can see me snuggling with two new toys and asking for more wood on the log burner!
See you next year,
Freyja xx
Squirrels!!
3 November 2018

It has been another interesting month here at Fisher HQ. The Master has been busy with demonstrations and workshops, as well as fetching the winter firewood home. Regular readers will know that I LOVE the log burner and like nothing better than snuggling up next to the heat. The humans always comment that with firewood you get warm three times: chopping, stacking and burning. I will settle just for the last option.
There was a birthday last month, and it was combined with a demo in Chipping Campden, followed by a stay in a cottage in the Forest of Dean. This was a lovely treat, as I have never explored this area, and peeping at the cottage information I saw it has a LOG BURNER: fantastic news. We arrived at the cottage, unpacked and went for a walk around the village of Staunton. The weather was glorious for the end of October, and I did not need my coat all holiday.
The next day was my Mistress's birthday, and after cards and presents in the sunny conservatory we set off for a day's walking through the forest to Symonds Yat. It was beautiful and tranquil walking through the forest; unfortunately I was on my lead the whole time and missed the fun of chasing all the squirrels. We also heard the wild boar during the day but did not get the chance to spot any; the closest we got was wild boar sausages on the menu of the local pub.
On arrival at Symonds Yat, after a coffee and cake, we decided to walk up to Symonds Yat Rock; it's a glorious view down to the valley and the River Wye. Later on, a lovely birthday dinner at the pub in Staunton (only two doors from the cottage) was called for. It was doggie friendly and also had a log burner, so full marks all round.
Day 2 saw another walk, this time to Kymin Hill, where there is a naval temple dedicated to the Admirals of the Fleet and wonderful views down to Monmouth. On the last day we kept the nautical theme going and visited Lydney Harbour; the Master loves old abandoned boats and spent time photographing and sketching.
All too soon it was time to come home and as we headed back to the Midlands, the weather decided to get wet and miserable.
The other piece of exciting news is that the Oil Pastel books have arrived. Don't forget, if you would like one of the first 50 personally signed and numbered books, with a FREE Oil Pastel DVD, just email and we can post your copy. The embarrassing photo this month shows how exciting it was to receive the books… begging for sales?
See you next month,
Freyja x
Mirror, Mirror on the wall…
13 October 2018

Hello and welcome to the October Doggy Blog.
It has been busy once again in the Fisher household, and very varied, with a family wedding and the Master looking very dapper as Father of the Groom.
On the garden front, Project Pond is now complete with the installation of a garden mirror. Now, to a Parson Jack Russell (with a bit of Beagle) a mirror is a very strange concept. To me it seems that another canine intruder has appeared in the garden, and constant searching fails to find it!! The humans seem to find it quite amusing, but I don't agree.
A few weeks ago the Master took to the skies in a glider, as his birthday treat from the children and partners. I stayed at home, but was treated to lunch at our local pub, The Bell Inn, afterwards. Listening to the conversation, it seems the Master was launched via a winch line into the skies for a couple of flights. Rather him than me!!
The middle of September saw us taking our new caravan on its maiden voyage to a lovely site not far from home, with nice walks and a very doggy friendly pub. All seems well with the new caravan: the sofa is comfortable for viewing the passers-by, and the panoramic front window is a doggy delight. It also has some smashing blown air heating; as I have heard talk of a November trip, it will definitely be needed then.
Due to the hot summer the owners have had a very good crop of chillies, and the Master has this year grown the especially hot variety Naga. The task of converting them into this year's supply of chilli jam fell upon the Mistress; bearing in mind she does not like hot spices, this was quite a challenge! Armed with a pair of safety glasses, rubber gloves and all windows open, she made the fiery concoction and it is all bottled up. Now, Mr Tim Fisher has always maintained a love of hot spicy food, and even this batch of chilli jam can only be eaten by him in tiny amounts. Terriers are allegedly renowned for being curious, so I decided to investigate the science of chilli heat:
1. The Scoville scale measures the concentration of capsaicin, noted in SHUs.
2. Sweet Bell Pepper: 100 SHUs.
3. Jalapeno: 8,000 SHUs.
4. Naga: 1,382,118 SHUs.
5. Police Grade Pepper Spray: 5,300,000 SHUs.
Conclusion ~ if you are ever at our house and get offered cheese and biscuits… BEWARE
Finally for this blog… BIG drum roll… the new Oil Pastel books have arrived at the publishers, and our orders will be here next week. Don't forget, the first 50 copies we sell are numbered and signed, with a complimentary FREE DVD. If you would like a copy please get in touch. There is a very good chapter on painting a very good looking Parson Jack Russell (with a bit of Beagle); no prizes for guessing who that may be, and I am still trying to figure out if I will be allowed to paw print the first 50 copies too.
All the best till next time; I am off to see if my "twin" mirror image is still lurking in the garden.
I leave you with a few images from the last few weeks.
Freyja x
Garden makeover
11 September 2018

I am just recovering from getting soaked walking with my neighbour Olive (chocolate Labrador); it's a bit of a shock to the system getting used to more inclement weather and being bundled up for a bath after coming home. After a lovely snooze under my rather large fleece blanket, which I have learnt to roll and twist into a nice warm parcel, I now feel ready to show willing and write this month's offering. It has been really busy in the Fisher household: the owners have taken a break from the world of art and turned into garden landscapers, and I must say they have made a really nice job of it. The pond area was badly in need of a revamp, two attempts at wooden decking having proved unsuccessful, as it's a shady area and the wood just gets wet and rotten.
The humans spent some time deliberating and decided to remove the old pond (there since 1994), replace it and design a new patio area. Readers may remember from my previous posts that, due to the pesky pigeons, I have ended up falling in a few times! Well, they have departed now, and in their place is a very cheeky squirrel that keeps coming to steal the neighbours' hazelnuts. He just nips along our adjoining fence, totally ignoring me, and then disappears with his feast.
Back to the garden: it has been a very hot summer, and I did feel sorry for all the hard work digging, moving rubble, building brick walls and shifting tonnes of sand, ballast and the new patio. It was a real struggle lying in the shade watching it all happen. There is a selection of photos showing how the work progressed, and it's nice now to potter outside to a very tidy garden Alan Titchmarsh would be proud of!
Art started again at the beginning of September with a trip to Ledbury for an art demonstration. The Master is busy today tutoring a workshop, so I thought I would show willing by writing my blog.
The advance copy of the new Oil Pastel book has arrived from Search Press and it looks AMAZING… so proud of my Master. If you would like to be one of the first 50 to receive a signed copy of the book and a FREE Oil Pastel DVD, email and we can reserve a copy. I am just thinking I need to sign a paw print in the book too, as I feature in the portrait section of the publication… need to work on that.
See you next month,
Freyja xx
Feeling sorry for myself…
30 July 2018

July has been another record breaking month weather-wise, and my owners have very wisely been getting up early to take me for a walk before the sunshine gets too hot. The Patchings Festival went very well in mid July; I sensibly stayed with my owners' parents and spent a happy few days being spoilt with the occasional ginger nut!
The school holidays are also now upon us and the grandson has been staying, so I have a super playmate for adventures, cuddles and paddles in the river.
My Master took a few days off following the Patchings Festival; he has mended his log splitter and made a beautiful new wooden pastel box. I am most pleased about the log splitter, as I am already thinking ahead to the winter months and lying by the log burner toasting my paws.
Readers may be wondering about this month's title… well, let me begin. Firstly, I must congratulate myself for writing this blog today after the events of the last 24 hours.
It began yesterday afternoon, when my owners noticed that my face was starting to swell. It felt a bit weird, so I snuggled on the sofa for a nap. At bedtime my face was really swollen, and parts of my body too, so my Mistress called our vet's on-call number and Giles answered. He suggested it might be a reaction to a bite, and to use an ice pack during the night and call back if it got any worse. I have the most amazing Mistress: she stayed up all night with me, holding an ice pack of frozen peas to my swollen face and making sure there was plenty to drink. Morning arrived and I was dispatched to Melton Vets, where Charlotte examined me, and after two injections I went back home. Pleased to say it's starting to respond.
Here's a photo of what I should look like and one from last night… not a pretty sight, and certainly not like my beautiful portrait from the Master's forthcoming Oil Pastel book. What a trouper I am… I feel rubbish today and can still find the motivation to promote my owner/artist/author!!
Well, I am signing off till next time. Charlotte said the injections could make me sleepy (that's my excuse, as there is a warm spot available on the conservatory sofa).
Freyja x
Long Hot June
1 July 2018

Well, it's been a scorcher, hasn't it? Certainly not the weather for dogs, or my owners for that matter. Last weekend's hot weather saw them deciding to re-landscape the pond area of the garden, the Master digging trenches, mixing concrete and disc-cutting brick walls!! The new pond has arrived; hopefully I won't fall into that one once it is installed. The pigeon saga is still ongoing: they seem to delight in landing close by and then taking off just as I manage to get near. The Mistress has been recalling her childhood days watching Dastardly & Muttley and the theme song "Stop the Pigeon".
The Patchings Festival preparations have stepped up this week, the Master spending a whole day "refreshing" his pastels; sounds like hard work to me. It seems it will be a fantastic 25th Anniversary Festival this year, with lots to see and do for all the family, and dog friendly too. The Mistress has designed the promotional T-shirts they will be wearing to Patchings, and I feature on the back… fame at last. At least I will be there in some form, as due to my behaviour last year I'm not allowed to visit this year.
The new Oil Pastel book has changed its release date and will now be available from the end of October; I have to say again how excited I am to be featured in the new book. As they say, Terriers are quick to learn, so I thought that you lovely readers would like to learn some printing terms…
Editor ~ a person who assists the author with producing the book and its layout; in my Master's case for the Oil Pastel book it was Beth, at times a little confusing as the Master's daughter is also Beth!
Proof reading ~ the time the Master spent reading through the book layout, checking spelling, colours and queries from Beth.
Blads ~ a promotional leaflet with the book's front cover, price, publication date and snippets from the book; I am featured on the third page of the blad… more fame.
ISBN ~ International Standard Book Number, a unique number for a book; for your information, the Master's new book is 9781782215509.
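A small aside of my own for the curious (a sketch I pawed together, not anything from the Master or the publisher): the last digit of an ISBN-13 is a check digit. The first twelve digits are weighted alternately by 1 and 3, summed, and the check digit is whatever brings the total up to a multiple of 10. In Python, one way to verify the book's number:

```python
def isbn13_check_digit(first_twelve: str) -> int:
    """Compute the ISBN-13 check digit from the first 12 digits.

    Digits in odd positions (1st, 3rd, ...) are weighted 1 and
    digits in even positions are weighted 3; the check digit is
    the amount needed to round the weighted sum up to a
    multiple of 10.
    """
    total = sum(int(d) * (1 if i % 2 == 0 else 3)
                for i, d in enumerate(first_twelve))
    return (10 - total % 10) % 10

# Verify the ISBN quoted above: 978178221550 + check digit 9.
isbn = "9781782215509"
assert isbn13_check_digit(isbn[:12]) == int(isbn[-1])
```

So the number quoted above really does check out: the weighted sum of its first twelve digits is 101, and 9 brings it up to 110.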
For a bit of fun for this month's blog, I pondered, as a new supermodel of the canine world featuring in the Master's new book, what doggie supermodel names I could dream of. This is the list that came to mind, and apologies to the real-life models x If you think of any more please email in; it would be fun to compare names.
Here are some photos from last month; sorry about the demise of the duck toy, it only lasted 10 minutes!
See you next time,
Freyja xx
Caravan life
10 May 2018

It has been a lovely month so far, and I was treated to a long Bank Holiday break in the caravan. It's great, just like a big dog kennel on wheels; there is nothing better than sitting at the front window watching the world go by. We had beautiful weather and managed two big walks on the Friday and Saturday before the sun was too hot to walk in and it was more sensible to sit in the shade. On Sunday morning we got up early and had a wander around Stamford, as my Master wanted to do some sketching, and there was also an art exhibition that was nice to wander around.
The Bank Holiday Monday forecast was hot and sunny, so again an early start was needed. Our destination was only in the next village: an Open Day at "Rocks by Rail", a railway museum, workshop, cafe and railway track dedicated to the ironstone trains that operated from the quarry on the site many years ago. It is well worth a day out, and if you are reading this month's blog, Mr David Wright, it was fab! We had a ride on the Rutlander steam train and sampled delicious bacon rolls and cake in the cafe.
My Master was busy over the holiday writing his latest article for the Leisure Painter magazine… such dedication, I must tell his Editor, Ingrid. You can see him hard at work in the caravan awning in the photo below.
Talking of Editors, the Master has been very busy recently with his Search Press Editor, Beth, finalising the last parts of the forthcoming oil pastel book. You may have read in previous blogs how excited I am to see this book when it's released in December. The very latest exciting news is that the book is also going to be published in a Chinese edition… I wonder how you say Freyja in Chinese!!
Well, I must sign off now; it's time for a snooze, as I had a very energetic walk this morning with my new neighbour Olive, a chocolate Labrador puppy.
See you next month, when all the news will be of the preparations for the Patchings Art, Craft and Design Festival, held from July 12 to 15; see the Patchings page for more details. Enjoy the photos below of our holiday.
Freyja x
An English Summer??
24 April 2018

It has been a funny old month, the weather lurching from cold to boiling and then back again to cold and rainy. Today I overheard my Master asking where his woolly hat was; the Mistress had put it away in anticipation of the Summer arriving! The family caravan has just had its annual servicing, so paws crossed we will be off on an adventure soon. I love positioning myself in a sunny spot by the front window watching the world go by once we are all pitched up. There are usually lots of dogs to check out and loads of exciting walks to explore in the countryside.
The Oil Pastel book has now gone to "lock down"; that's a book publishing term meaning the book is going to final print. I am looking forward to December when it's on general release. As I have mentioned a few times, there is much excitement for me, as I am the model in the painting-an-animal chapter. When the book is out I will showcase my "Pawtrait", as it's a secret until the book is printed.
The Fisher household has once again gone into homegrown vegetable mode: the conservatory is a temporary greenhouse and seedlings are appearing everywhere. I really hope there are lots of carrots; my owners cannot believe I can tell when a carrot is being peeled and whoosh… I'm there waiting! Our front garden is our vegetable plot, all nicely designed in an artistic Piet Mondrian style, as you would expect with an artist/gardener owner. The back garden is due for a revamp around the pond area; it will be nice to sit in a sunny spot and watch all the humans' hard work taking place.
There was a slight mishap involving the pond a couple of days ago. There's a large bay tree at the back, and a pair of wood pigeons have nested in there. Well, to cut a long story short… I always like to race outside when they fly in… unfortunately I dashed outside looking up and ran across the middle of the pond!! I can assure you that Parson Jack Russell Terriers cannot walk on water. I made my way to my Mistress dripping wet and feeling stupid, and the pigeons are still laughing at me every time I visit the garden.
The Master has just returned from tutoring a weekend sketching workshop at The Old House Studio, Torside, Derbyshire. He really enjoyed it and is looking forward to a two day acrylic workshop there on 22 & 23 September. I kept my Mistress company at home over the weekend; she was very busy on the office computer doing the end of year accounts. It's good up in the office: there is a chair positioned nicely by the window, so I can either snooze or use it as a vantage point to keep a canine eye on the street. The only problem is the local cats all know that I cannot get out, and they parade past, tails in the air, much to my disgust.
The Master is now really busy out and about on the road tutoring workshops and doing art demonstrations. I keep checking the diary hoping for a village venue so I can go along for the ride; recently they have all been held in venues where doggies are not allowed.
Right, that is all for now. It has turned cool here at the computer, my fleece blanket is calling, and here's hoping the log burner gets lit soon… well, it is nearly May. Here is a photo of me displaying the assortment of toys, some rather the worse for wear… a bit like the toys that Paddy, the spaniel of "Max and Paddy out in the Lake District" on Facebook, destroys. There is a recent photo of him with a very sad looking Telly Tubby! The two dogs have over 67 thousand followers; maybe I should have my own Facebook page?
Freyja x
Freyja's Tails
24 March 2018

St Patrick's Day was a double cause for celebration, as it was my 2nd Birthday. I was treated to a delicious breakfast, with my favourite food, carrot, sculpted into two candles for me. The pictures below show me celebrating, plus some puppy pictures with my sister. It would be lovely to see my 3 brothers and sister again; I wonder what we all look like now?
It has been another busy month for my artist owner: he's been to a lot of art demonstrations and workshops lately, and is recently back from a 2 day workshop at the Watershed Studio in Essex. It's a favourite destination for him, as Allison provides wonderful lunches and makes you all very welcome. Luckily all the snow had disappeared before my Master had to travel down last week.
I had the good fortune to spend a couple of days with my Mistress at her parents' house. It is great there: I get thoroughly spoilt and there is always a roaring open fire… Doggie Heaven!!
The new Oil Pastel book is taking shape, and I am looking forward to helping with the marketing when it's released. As I have probably mentioned in past posts, there's an animal portrait section, and I was the lucky canine model. There is a good chance that the Mistress will design some new T-shirts to wear at the Patchings Festival in July to promote the book; I wonder if there is one my size? Unfortunately there may not be a guest appearance by Freyja at the Festival this year… I somewhat blotted my copybook by guarding the Master's stand instead of welcoming all the canine visitors… whoops. I heard the Mistress say at the time, "Freyja won't be coming next year".
Well, it's time to end this latest blog. There is the sound of the log burner being lit, so just time to say goodbye and see you next month.
Freyja x
Chinese Year of the Dog 2018
16 February 2018

Hello and welcome to the latest doggy blog,
Well, I am not sure where to start this month; it's been hectic here in the Fisher household. I chose today to update my blog as it is the celebration of Chinese New Year and the Year of the DOG… Do we get presents like at Christmas? Or do I get treated extra special for a whole year? But I mustn't complain, as already this year there have been several breaks away, and I was included on two of them. The Master has been sorting the last of his Editor's queries for the new oil pastel book; it shouldn't be long now till it's published. The Mistress will let you all know when it's available to purchase and put all the details on the website. I think she's getting nearly as good as me at this website updating.
At the end of January we had a walking break in the Lake District, so my Master could capture images and sketches of this amazing countryside. It was my first time in this area of the UK, and we did some really long walks and encountered a lot of different weather. I am not too keen on hail, though; it caught me off guard, and it's no fun when the owners are wrapped up in coats and they forget mine!! A compensation for all the Lake District walks is that all the local pubs are dog friendly and most have lovely log fires, and I am particularly fond of an open fire. We encountered lots of signs outside pubs and cafes saying "Muddy boots and dogs welcome"; it was much appreciated by us all, well done Lake District. There seems to be a doggy phenomenon called Max and Paddy, a pair of spaniels from the Lake District; they were even on The One Show a couple of weeks ago!! I have noticed that my Mistress even follows them on Facebook!!
My next trip was a two night sleepover at the Mistress's parents. It's great there... ginger nut biscuits, a roaring open fire and loads of snuggles... doggy heaven.
On a last note, I have been interested in the origin of my name. My owners were inspired by watching all the series of The Vikings (not the one with Kirk Douglas), hence my name, Freyja, is Viking. Pawing through the internet, here's what I found out… I really like the idea of driving a cart pulled by two cats… mmm… I wonder if next door's cats would oblige me?
"Freyja ("lady sovereign/supreme") is one of the major goddesses of Norse Mythology. She is second only to Frigg and the mightiest of the Vanir. Freyja is the goddess of love, war, death and seidr. Freyja is in many aspects the feminine counterpart of Odin. Her brother is Freyr.She is the primal völva and the one that taught Odin the art of seidr, and is also the leader of the Valkyries. Half of the dead taken by the valkyries belong to her and she receives the dead noble women and shield maidens. She has many ways of travelling. Sometimes she rides a cart driven by her two cats or rides her great boar. She also takes the shape of a falcon, a shape she may lend to any one. Freyja is the most beautiful of the goddesses and desired by both gods, giants and dwarfs.Lagertha invokes her when she tries to cleanse Ragnar's wound and when training her shield maidens. When Porunn disappears, Aslaug tells her to turn to Freyja for guidance."
Time to sign off now, dinner awaits; will it be Chinese inspired tonight? Here's a photo to end this blog of my time in the Lake District and some smashing open fires.
Freyja x
2018-01-15

I hope you all had a good Christmas and New Year. The Fisher household hosted quite a few family members over the holiday and I spent Boxing Day with my canine Sprocker pal Poppy. We had a good run across the fields in the morning which blew the cobwebs away; however, there is no way I will ever be able to keep up with Poppy!!
January 4th saw the owners pack up rucksacks and suitcases and alerted me to the fact there may be an adventure looming. I was correct: the mistress had planned a Birthday surprise for the master and I was lucky enough to be included. A short journey in the car saw us arrive at a very cosy cottage in the village of Biggin by Hartington, Derbyshire. It is a lovely cottage owned by John and Vanessa and we stayed there a year ago; I am particularly fond of the log burner in the lounge and spent a happy week asleep in front of it. If any of you lovely readers want a preview of the cottage just visit Postcard Cottages; there is also a smashing converted barn in Brassington that Poppy stayed in with her owners.
The Master's birthday was a milestone event and the children, partners and grandchildren all surprised him on the Friday and stayed for two nights. I was pleased as Poppy came too. The morning of the birthday saw a mound of bacon butties and then the grand present opening. The Master was presented with a glider flight and I am very much hoping I will be included too!! Just need to order my Biggles goggles and then I am ready for take off…
The visitors all went home and we spent the last three days walking the glorious Derbyshire countryside. The picture below shows me enjoying the Pioneer log burner.

The world of Tim Fisher Artist has been busy over the past month: the last part of the new book has been sent to the publishers, Leisure Painter articles written and lots of long walks taken capturing views of the countryside. I am looking forward to seeing the new Oil Pastel book when it's published later on this year, as I appear in the dog Pawtrait section.

See you soon, Freyja xx
2017-12-13

It's beginning to look a lot like Christmas here in the Fisher home; that log burner looks so inviting it's hard for me to concentrate on the latest update…
As usual I have been causing mischief in the Fisher household; I can usually get away with it by rolling on my back and looking remorseful! During my time with my owners, certain household and art materials seem to have mysteriously ended up in my paws and sadly come to a sticky end.
To celebrate Christmas I thought it would be nice to rewrite a popular carol as a tribute to all the items that have been lost: "The Twelve Days of Freyja's Tails"
On the first day of Christmas my Parson Jack Russell destroyed for me
One set of fairy lights, chewed, useless now for the Christmas tree

On the second day of Christmas my Parson Jack Russell destroyed for me
Two Sennelier oil pastel sticks, should have been for the master's new book

On the third day of Christmas my Parson Jack Russell destroyed for me
Three pairs of Crocs, two the master's and one belonging to the mistress

On the fourth day of Christmas my Parson Jack Russell destroyed for me
Four Faber Castell Pitt Pens, the black ink went all over the carpet

On the fifth day of Christmas my Parson Jack Russell destroyed for me
Five assorted Derwent pencils, nothing sketchy about her actions here

On the sixth day of Christmas my Parson Jack Russell destroyed for me
Six erasers, no rubbing out now, they are in a thousand pieces

On the seventh day of Christmas my Parson Jack Russell destroyed for me
Seven Pedigree Dentastix, her weekly gum health allowance

On the eighth day of Christmas my Parson Jack Russell destroyed for me
Eight cardboard boxes, as a puppy this was the most favourite toy

On the ninth day of Christmas my Parson Jack Russell destroyed for me
Nine assorted cushions, not her fault, apparently they exploded by accident

On the tenth day of Christmas my Parson Jack Russell destroyed for me
Ten cuddly dog toys, the sheep being her favourite, she's had three

On the eleventh day of Christmas my Parson Jack Russell destroyed for me
Eleven juicy carrots, she's upgrading her night vision

On the twelfth day of Christmas my Parson Jack Russell destroyed for me
Twelve logs from the log basket, her favourite activity when the humans are not at home
Merry Christmas and a Happy New Year, see you in 2018
Freyja xx
(Some of the items in the carol may have been slightly exaggerated; author's pawrogative!!)
2017-11-26

November has been very busy in the Fisher household, with workmen converting the house from oil to gas central heating. I am looking forward to a warm toasty house this winter.
The master has been extra busy creating oil pastel paintings for the new Oil Pastel book and doing his author duties. Last week I waved him off to drive down to Tunbridge Wells to do his 3 day photo shoot for Search Press. On his return I overheard him telling the mistress about the photographer and the high tech camera he was using. I am afraid I got into a bit of trouble, as the camera was so sophisticated it kept showing my dog hairs up on my master's paintings and attached to his oil pastel sticks! Oh well, I had to send a little bit of myself along to be included in the book.
After eavesdropping on my owners I can reveal exclusively to readers of this blog that I do appear in the book…. My master does a "Pawtrait" of me in the painting animals section, I am so thrilled and will of course be on paw to sign any books you good people out there wish to purchase.
On the home front, today sees my master doing some Sunday DIY and later on I have been promised a nice long walk. Right, I must sign off for now; the log burner isn't alight yet, so I must seek out the big furry fleece blanket and burrow down for a quick forty winks.
4 Ill. App.3d 1023 (1972)
283 N.E.2d 252
CHESTER A. LIZAK, Plaintiff-Appellant,
v.
MITCHELL ZADROZNY et al., Defendants-Appellees.
No. 57058.
Illinois Appellate Court First District.
April 11, 1972.
*1024 Chester A. Lizak, pro se.
William R. Ming, Jr., Sophia H. Hall, Andrew M. Raucci, and Edward V. Hanrahan, State's Attorney, all of Chicago, (Paul P. Biebel, Jr., Assistant State's Attorney, of counsel,) for appellees.
Judgment affirmed.
Mr. PRESIDING JUSTICE STAMOS delivered the opinion of the court:
This is an appeal from an order of the Circuit Court affirming the *1025 decision of the Chicago Board of Election Commissioners, which sustained an objection to the nominating petition of plaintiff-appellant, Chester A. Lizak.
On December 20, 1971 plaintiff filed with the County Clerk his petition for nomination for the office of Ward Committeeman, Republican Party, 45th Ward, City of Chicago. On December 31, 1971 defendant filed with the County Clerk objections to the candidacy of plaintiff, setting forth as his sole objection the fact that plaintiff's petition contained signatures in excess of the maximum allowed by the following provision of the Illinois Election Code:[1]
"Such petitions for nominations shall be signed:
* * *
(h) If for a candidate for precinct committeeman, by at least 10 primary electors of his party of his precinct; if for a candidate for ward committeeman, by not less than 10% nor more than 16% of the primary electors of his party of his ward; if for a candidate for township committeeman, by not less than 5% nor more than 8% of the primary electors of his party in his township or part of a township as the case may be."
The Board of Election Commissioners conducted hearings on this objection on January 3, 1972 and January 4, 1972. At the hearings it was undisputed that the maximum number of petition signatures allowable under the Election Code for plaintiff's ward was 1716. The evidence adduced established that the petitions filed by plaintiff contained more than 1716 signatures, but that plaintiff had lined out enough signatures to bring the number of remaining signatures below that maximum. It was further established that, prior to the filing of plaintiff's petitions, the County Clerk's office had received no written revocations of signatures. This was significant because of the following provision of the Election Code:[2]
"The petitions, when filed, shall not be withdrawn or added to, and no signatures shall be revoked except by revocation filed in writing with the clerk or other proper officer with whom the petition is required to be filed, and before the filing of such petition."
On January 7, 1972 the Board filed its decision sustaining defendant's objection, declaring that plaintiff's petition was invalid and ordering that plaintiff's name should not be permitted on the March 21, 1972 primary election ballot. On January 14, 1972, pursuant to Section 10-10.1 of the Election Code,[3] plaintiff filed with the Circuit Court a *1026 complaint for judicial review of the Board's decision. That complaint sought reversal on the grounds that the decision of the Board was contrary to the evidence, that plaintiff had in fact filed signatures below the maximum permitted by Illinois law and that Section 7-10(h) of the Election Code constituted an unconstitutional deprivation of Equal Protection and Due Process of Law. Answers were filed by defendants on January 24, 1972. On January 28, 1972 plaintiff filed a motion seeking leave to amend his complaint in order to add a second count, which would have cited as additional grounds for reversal, "the arbitrary, capricious and discriminatory action of the Board * * *." Specifically, Count II alleged that in another proceeding before the Board, the petition of one Richard Mell had been sustained despite the fact that the basis of the challenge to Mell's petition was identical to the basis of the challenge to Lizak's petition. On February 3rd, the Circuit Court orally denied the motion for leave to amend. On February 8th that motion was again denied, this time by written order. On February 9th, in an order reflecting that the Court had reviewed the record of the Mell case despite its denial of plaintiff's motion to amend, the decision of the Board of Election Commissioners was sustained. This appeal followed. On March 14, 1972 we entered an order affirming the decision of the Circuit Court and advised that an opinion would be filed at a later date.
On appeal, plaintiff concedes that the total number of signatures on his petitions lined out signatures added to unaltered signatures exceeds the statutory maximum. In defense of the validity of the petition, he offers four contentions: 1) that Section 7-10 of the Election Code creates an irrational classification, thereby denying plaintiff Equal Protection of the Law; 2) that the "revocation of signatures" paragraph of Section 7-10 should be interpreted to allow a candidate to strike signatures from his petitions without filing written notice with the County Clerk; 3) that it was error to deny plaintiff's motion for leave to file an amended complaint; and 4) that evidence pertaining to the allegations of Count II of the amended complaint conclusively established that the Board's decision was arbitrary and capricious.
1-3 Plaintiff's first contention raises a constitutional issue requiring close analysis. Since 1927 the Illinois Election Code has provided minimum and maximum limitations upon the number of signatures which may be submitted on primary nominating petitions for all State offices. Since 1933 the Election Code has provided a minimum-maximum limitation on primary petitions for ward and township committeemen as well. We note, however, that our Election Code has never contained maximum *1027 limitations pertaining to any other public or political office. Plaintiff does not contend that these maximum signature limitations are void per se. Rather, he contends that the imposition of that burden on only two classes of office-seekers constitutes an irrational classification, because whatever justification might be asserted in support of maximum signature limitations generally would apply equally to all offices on the primary ballot. We must agree that it is difficult to perceive the legislative wisdom in confining maximum signature limitations only to State offices and to ward and township committeemen offices. It does not necessarily follow, however, that this statutory scheme violates plaintiff's constitutional rights. It is not every distinction or discrimination in the law which is unconstitutional. The Equal Protection Clause is "limited to instances of purposeful or invidious discrimination rather than erroneous or even arbitrary administration of state powers." (Briscoe v. Kusper (7th Cir.1970), 435 F.2d 1046, 1052.) It has been recognized that legislatures are often limited to achieving reforms piecemeal, proceeding by compromise a step at a time toward a desired comprehensive solution. (Williamson v. Lee Optical Co. (1955), 348 U.S. 483; McDonald v. Board of Election Commissioners (1969), 394 U.S. 802.) A statutory scheme is not constitutionally infirm merely because it fails to apply its remedies wherever they may reach. (West Coast Hotel Co. v. Parrish (1937), 300 U.S. 379; See also "Developments in the Law-Equal Protection," (1969), 82 Harv. L. Rev. 1065, 1084-85.) In view of this precedent, and despite our belief that the statutory classification at bar lacks a rational basis, we shall defer to the wisdom of our legislature. We hold that the imposition of maximum signature limitations on some offices, and not on others, did not serve to deny plaintiff his right to Equal Protection.
4 Plaintiff's second contention is that Section 7-10 of the Election Code, which, in part, requires a filed, written revocation in order to revoke a signature from a primary petition, is addressed solely to petition-signers and does not preclude candidates from indiscriminately striking signatures from their own petitions. We cannot agree. We believe that this provision was in fact directed primarily at candidates, affording them a simple means of striking undesired signatures from petitions while simultaneously protecting them from fraudulent alterations subsequent to filing. The provision may even have been designed specifically to benefit a candidate who finds himself with signatures in excess of the statutory maximum. While we recognize the seeming inequity of enforcing this provision against the interests of the party it was intended to benefit, we cannot ignore the clear mandate of the statutes. Plaintiff's failure to comply therewith obligated the Board of Election Commissioners to compute all of the filed signatures, including all signatures which *1028 had been lined-out. The Board and Circuit Court did not err in their interpretations of the statute.
5, 6 Plaintiff's third contention is that the trial court erred in denying him leave to file an amended complaint. Section 10-10.1 of the Election Code[4] requires that petitions for judicial review of Board decisions must be filed within 10 days after the decision of the Board. The statute does not provide for amendments to petitions for judicial review. Plaintiff contends that Section 1 of the Civil Practice Act[5] and Section 14 of the Administrative Review Act[6] require the application of other provisions of the Civil Practice Act, specifically Section 46,[7] to the Board of Election review proceedings. Section 46 provides, in part:
"At any time before final judgment amendments may be allowed * * * changing the cause of action or defense or adding new causes of action or defenses * * *."
While we entertain serious doubts as to the general applicability of the Civil Practice Act to Board of Election Commissioners review proceedings, we find it unnecessary to decide that issue. It is established that a litigant's right to amend pleadings rests within the sound discretion of the trial court. (Mangel & Co. v. Village of Wilmette, 115 Ill. App.2d 383, 253 N.E.2d 9.) Assuming arguendo the applicability of Section 46 to the proceeding in the Circuit Court, we perceive no abuse of discretion by the denial of this particular motion for leave to amend. The Circuit Court explained its ruling as follows:
"* * * [I]f you are asking why I denied the petition to amend, I felt that the Mell matter would be introducing new evidence before this reviewing court and under the statute I don't think we have the authority to accept new evidence."
We concur in this conclusion and recognize it is a sound basis for denying plaintiff's motion for leave to amend his complaint.
7 Because of our conclusion with regard to his third contention, we find it unnecessary to consider plaintiff's allegations of arbitrary and capricious conduct by the Board. The complaint, which was never successfully amended, contained no such allegations, and it would be inappropriate to decide issues which were not effectively raised in the Circuit Court. We affirm.
Judgment affirmed.
SCHWARTZ and BURKE, JJ., concur.
NOTES
[1] Ill. Rev. Stat. 1971, ch. 46, par. 7-10.
[2] Ill. Rev. Stat. 1971, ch. 46, par. 7-10.
[3] Ill. Rev. Stat. 1971, ch. 46, par. 10-10.1.
[4] Ill. Rev. Stat. 1971, ch. 46, par. 10-10.1.
[5] Ill. Rev. Stat. 1971, ch. 110, par. 1.
[6] Ill. Rev. Stat. 1971, ch. 110, par. 277.
[7] Ill. Rev. Stat. 1971, ch. 110, par. 46.
| 2024-05-30T01:26:35.434368 | https://example.com/article/1578 |
Diabetes self-management, depressive symptoms, quality of life and metabolic control in youth with type 1 diabetes in China.
To assess diabetes self-management, depressive symptoms, quality of life and metabolic control in a cohort of youth with type 1 diabetes in mainland China. Predictors of self-management and depressive symptoms were also explored. Studies have shown that adaptation to childhood chronic illness is important in determining outcomes. Few studies have been reported on the behavioural, psychosocial and physiological adaptation processes and outcomes in Chinese youth with type 1 diabetes. This is a cross-sectional study as part of a multi-site longitudinal descriptive study. Data for this report were collected at baseline. A convenience sample of 136 eligible youth was recruited during follow-up visits in hospitals in 14 major cities of Hunan Province (located in central southern mainland China) from July 2009-October 2010. Data were collected on socio-demographic background, clinical characteristics, diabetes self-management, depressive symptoms, quality of life and metabolic control. Diabetes self-management was lower in Chinese youth compared with a US cohort and was associated with insulin treatment regimen, treatment location, depressive symptoms and gender. A total of 17·6% of youth reported high depressive symptoms, and depressive symptoms were correlated with family annual revenue, school attendance, peer relationship and parent-child relationship. The mean score of global satisfaction with quality of life was 17·14 ± 3·58. The mean HbA1c was 9·68%. Living with type 1 diabetes poses considerable challenges, and Chinese youth report lower self-management than US youth and high depressive symptoms. Metabolic control and quality of life were sub-optimal. More clinic visits, treatment for high depressive symptoms and an intensive insulin regimen may improve diabetes self-management for youth with type 1 diabetes in China. Culturally appropriate interventions aimed at helping them adapt to living with the disease and improving outcomes are urgently needed. 
| 2024-06-14T01:26:35.434368 | https://example.com/article/3740 |
Connacht Rugby today confirmed the two-year contract extensions of flanker Jake Heenan and inside back Craig Ronaldson.
24-year-old Heenan, a former New Zealand Under-20’s captain, is currently in his third season with Connacht and becomes Irish qualified at the end of June.
Ronaldson was drafted into the Connacht squad in the summer of 2013 straight from the AIL. The 26-year-old has earned a further contract on the back of quality performances in both the number 10 and 12 shirt as well as his impressive goal kicking ability.
Commenting on the recent contract extensions, Connacht CEO Willie Ruane said:
“We’re delighted that both Jake and Craig, along with all the players we have signed up to date, will remain an integral part of our senior playing squad going forward.”
Head Coach Pat Lam said:
“Jake and Craig both started with Connacht the same season I did and it’s been fantastic to witness their growth and development as rugby players in that time.
“I’ve known Jake for a long time now and I’m excited by how far he will take his game in the next two seasons. While he has been frustrated with injuries, his determination and work ethic has ensured he has come back stronger each time.
“Craig has been a great asset for our squad over the last three seasons. As an inside centre, who can cover outhalf, his versatility and goal kicking has been vital for the team as well as giving us excellent depth in the backs.
“With Champions Cup rugby guaranteed for next season having the experience and commitment of both players to Connacht Rugby is very pleasing.” | 2024-07-19T01:26:35.434368 | https://example.com/article/7407 |
Q:
how to do VectorPlot?
I have a problem with VectorPlot for the spin current density and I tried it myself but it is not working. Here is the code according to Jens' solution:
Clear[x, y, ψ1, ψ2, ψ3, ψ4, eqn, eqnWithInitial, v, j];
eqn = Thread[
   I D[{ψ1[x, y, t], ψ2[x, y, t], ψ3[x, y, t], ψ4[x, y, t]}, t] ==
    {v (-I D[ψ3[x, y, t], x] - D[ψ3[x, y, t], y]) + 2 Δ ψ4[x, y, t],
     v (-I D[ψ4[x, y, t], x] - D[ψ4[x, y, t], y]),
     v (-I D[ψ1[x, y, t], x] + D[ψ1[x, y, t], y]),
     v (-I D[ψ2[x, y, t], x] + D[ψ2[x, y, t], y]) + 2 Δ ψ1[x, y, t]}];
eqnWithInitial = Join[eqn,
   Thread[{ψ1[x, y, 0], ψ2[x, y, 0], ψ3[x, y, 0], ψ4[x, y, 0]} ==
     {1, 1, 1, 1} (x + I*y) Exp[-(x^2 + y^2)]],
   Thread[{ψ1[-5, y, t], ψ2[-5, y, t], ψ3[-5, y, t], ψ4[-5, y, t]} ==
     {ψ1[5, y, t], ψ2[5, y, t], ψ3[5, y, t], ψ4[5, y, t]}],
   Thread[{ψ1[x, -5, t], ψ2[x, -5, t], ψ3[x, -5, t], ψ4[x, -5, t]} ==
     {ψ1[x, 5, t], ψ2[x, 5, t], ψ3[x, 5, t], ψ4[x, 5, t]}]];
v = 1;
Δ = 1;
tMax = 8;
solution = First@NDSolve[eqnWithInitial,
    {ψ1[x, y, t], ψ2[x, y, t], ψ3[x, y, t], ψ4[x, y, t]},
    {x, -5, 5}, {y, -5, 5}, {t, 0, tMax},
    Method -> {"MethodOfLines",
      "SpatialDiscretization" -> {"TensorProductGrid",
        "DifferenceOrder" -> "Pseudospectral"}}];
Ψ1[x_, y_, t_] = ψ1[x, y, t] /. solution;
Ψ2[x_, y_, t_] = ψ2[x, y, t] /. solution;
Ψ3[x_, y_, t_] = ψ3[x, y, t] /. solution;
Ψ4[x_, y_, t_] = ψ4[x, y, t] /. solution;
pl = Table[Plot3D[{Re[Ψ1[x, y, t]] - 2, 2 + Re[Ψ2[x, y, t]],
     Re[Ψ3[x, y, t]] - 1, 1 + Re[Ψ4[x, y, t]]},
    {x, -5, 5}, {y, -5, 5}, PlotRange -> {-3, 3},
    PlotStyle -> {Red, Directive[Opacity[.9], Orange]},
    BoxRatios -> 1], {t, 0, tMax, tMax/20}];
p2 = Table[Plot3D[Abs[Ψ2[x, y, t]], {x, -5, 5}, {y, -5, 5},
    PlotRange -> {-3, 3},
    PlotStyle -> {Orange, Directive[Opacity[.9]]},
    BoxRatios -> 1], {t, 0, tMax, tMax/20}];
Here is the code for the spin current:
j[x_, y_, t_] = -(I/2) (Conjugate[Ψ2[x, y, t]]*D[Ψ2[x, y, t], {{x, y}}] -
     D[Conjugate[Ψ2[x, y, t]], {{x, y}}]*Ψ2[x, y, t]);
VectorPlot3D[Re[j[x, y, t]], {x, -5, 5}, {y, -5, 5}, {t, 0, tMax}]
Any comments would be greatly appreciated.
Here it is with some modification:

j[x_, y_, t_] = -(I/2) (Conjugate[Ψ3[x, y, t]]*D[Ψ3[x, y, t], {{x, y}}] -
     Conjugate[D[Ψ3[x, y, t], {{x, y}}]]*Ψ3[x, y, t]);
VectorPlot[j[x, y, 3], {x, -5, 5}, {y, -5, 5}]
My follow-up question is how to plot the z-component of rot j[x, y, t]; there is still a derivative acting on the Conjugate in the new expression.
vecField[x_, y_, t_]=D[Part[j[x, y, t], 2], x]-D[Part[j[x, y, t], 1], y];
VectorPlot[vecField[x, y, 3], {x, -5, 5}, {y, -5, 5}]
the z component of the curl of the current density:

$(\nabla\times J)_z=\frac{\partial J_y}{\partial x}-\frac{\partial J_x}{\partial y}$
Here is my attempt, but I am still unable to plot the z-component of the current density:

j4a[x_, y_, t_] =
  Part[-(I/2) (Conjugate[D[Ψ4[x, y, t], y]]*D[Ψ4[x, y, t], x, {y, 2}] -
      Conjugate[D[Ψ4[x, y, t], x, {y, 2}]]*D[Ψ4[x, y, t], y]), 2];
j4b[x_, y_, t_] =
  Part[-(I/2) (Conjugate[D[Ψ4[x, y, t], y]]*D[Ψ4[x, y, t], x, {y, 2}] -
      Conjugate[D[Ψ4[x, y, t], x, {y, 2}]]*D[Ψ4[x, y, t], y]), 1];
vecField[x_, y_, t_] = j4a[x, y, t] - j4b[x, y, t];
VectorPlot[vecField[x, y, 3], {x, -5, 5}, {y, -5, 5}]
with an error of
Part::partw: Part 1 of {} does not exist. >>
Here is my own answer, after explicitly writing out the expression for the current density:
myrotorz1[x_, y_, t_] = -I/2*(-Conjugate[D[Ψ1[x, y, t], y]]*D[Ψ1[x, y, t], x] +
     D[Ψ1[x, y, t], y]*Conjugate[D[Ψ1[x, y, t], x]] +
     Conjugate[D[Ψ1[x, y, t], x]]*D[Ψ1[x, y, t], y] -
     D[Ψ1[x, y, t], x]*Conjugate[D[Ψ1[x, y, t], y]])
Plot3D[Re[myrotorz1[x, y, 0]], {x, -5, 5}, {y, -5, 5}, PlotRange -> All]
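The four terms in myrotorz1 can be condensed: writing w = Conjugate[D[Ψ1, x]] D[Ψ1, y], the expression equals -I (w - Conjugate[w]) = 2 Im[w], which sidesteps symbolic derivatives of Conjugate entirely. A minimal sketch based on that observation, assuming Ψ1 from the NDSolve solution above (rotZ1 is my own name, not from the original post):

```
(* z-component of rot j for the Ψ1 component; algebraically equal to myrotorz1 *)
rotZ1[x_, y_, t_] = 2 Im[Conjugate[D[Ψ1[x, y, t], x]]*D[Ψ1[x, y, t], y]];
Plot3D[rotZ1[x, y, 0], {x, -5, 5}, {y, -5, 5}, PlotRange -> All]
```

Since Im only wraps already-evaluated derivatives of the interpolating function, no Derivative[1][Conjugate] terms appear when this is evaluated numerically.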
A follow-up question: I would like to plot myrotorz1[x, y, t] integrated over x and y as a function of t, but it shows an error. Any comments would be highly appreciated.
Data = NIntegrate[myrotorz1[x, y, t], {x, -5, 5}, {y, -5, 5}];
Table[Plot[Re[Data], {t, 0, 6, 0.1}], PlotRange -> All]
These are the errors I get
(Debug) During evaluation of In[5]:= NIntegrate::inumr: The integrand myrotorz1[x,y,t] has evaluated to non-numerical values for all sampling points in the region with boundaries {{-5,5},{-5,5}}. >>
(Debug) During evaluation of In[5]:= NIntegrate::inumr: The integrand myrotorz1[x,y,t] has evaluated to non-numerical values for all sampling points in the region with boundaries {{-5,5},{-5,5}}. >>
(Debug) During evaluation of In[5]:= Table::itform: Argument PlotRange->All at position 2 does not have the correct form for an iterator. >>
(Debug) Out[6]= Table[Plot[Re[Data], {t, 0, 6, 0.1}], PlotRange -> All]
(Debug) During evaluation of In[3]:= General::stop: Further output of NIntegrate::inumr will be suppressed during this calculation. >>
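A sketch of a fix for the errors above, assuming myrotorz1 as defined earlier (dataT is my own placeholder name): NIntegrate is being called immediately with a symbolic t, so the definition should be delayed and restricted to numeric arguments, and Plot used directly instead of Table (Table[Plot[...], PlotRange -> All] also passes PlotRange where Table expects an iterator, hence the Table::itform message):

```
(* delay evaluation until t is an explicit number, then plot the integral vs. t *)
dataT[t_?NumericQ] := NIntegrate[Re[myrotorz1[x, y, t]], {x, -5, 5}, {y, -5, 5}];
Plot[dataT[t], {t, 0, 6}, PlotRange -> All]
```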
A:
I see two problems,
VectorPlot3D expects a 3D vector, not a 2D.
Length@Re[j[x, y, t]]
2
Your function contains derivatives of the Conjugate function
Re[j[1, 1, 1]]
{(-0.04592782419744598 +
Im[(-0.05339043940427357 +
0.03730404617998509*I)*
Derivative[1][Conjugate][
0.12372410925234094 + 0.1480141706278738*
I]])/2, (-0.04165285128771159 +
Im[(-0.049115484047442155 +
0.0376862735996922*I)*
Derivative[1][Conjugate][
0.12372410925234094 + 0.1480141706278738*
I]])/2}
That problem already has somewhat unsatisfactory answers here and here.

But if you redefine the function to avoid D[Conjugate[...]]:
j[x_, y_, t_] = -(I/2) (Conjugate[Ψ2[x, y, t]]*D[Ψ2[x, y, t], {{x, y}}] -
     Conjugate[D[Ψ2[x, y, t], {{x, y}}]]*Ψ2[x, y, t]);
VectorPlot3D[Append[Re[j[x, y, t]], 0], {x, -5, 5}, {y, -5, 5}, {t, 0, tMax}]
| 2024-07-24T01:26:35.434368 | https://example.com/article/6407 |
4/27/14
Hossmom wants me to write about something that my son did, Bubba Hoss. She wants me to write about how he dropped a very full trashcan down the stairs. That's what she thinks I should write about.
She can suck it.
I'll tell you why.
My wife told my son to bring the trash can downstairs. Apparently she meant her little bitty trashcan by her bed. It's a small trashcan, just really a bucket. No problems. But my wife made the mistake of not realizing that she was talking to a child who pays about as much attention to what you really say as does a cat.
He didn't grab the little trashcan. He grabbed the big trashcan in the laundry room. He grabbed the one that sucks up all the gross from upstairs that they don't want to bring downstairs. I'm pretty sure it's a gateway to hell. It's almost as tall as him and certainly weighs more than his little stick self.
He did his best to bring it downstairs and I guess technically he did. He did by dragging it to the top of the stairs and then watching it heavily fall all the way down. Boom, boom, boom, trash is everywhere.
That's what Hossmom wants me to write about. But I won't.
I won't because after this avalanche of trash came descending down, what did she do? She began laughing. She began laughing hard. And that's all she did.
And that's what I'm writing about.
Why is it me that has to clean up this mountain of grossness? Why do have gloves on and a broom in one hand? I was doing yard work all day. I was cleaning the garage. I was about to build a chair! I just came in for some water and a bit of rest as it started raining. I trimmed all the bushes, I pulled all the weeds. It was my break time. I'm going to inform the union. Oh, I have a union. I'm the president and the only voting member. It can get a bit crazy at times.
But no, now I'm here picking up trash before the baby can play in it. And believe me, the baby would play in it. He is drawn to destruction like moth to a flame. If something is going down that involves injury or contagious disease, he knows exactly where he wants to be. Right now he is in Hossmom's arms. Yup, she's playing the mother card on me. Oh, I have to look after the baby, my baby, I have to hold my baby. We can't let the baby play in the trash, what kind of parents would we be. Oh, let me hold the baby, it's truly the harder job. Here, you'll need this new trash bag.
If Hossmom had said "Bubba Hoss, grab the small trashcan from beside mommy's bed" everything would have been fine. But no, she made a rookie mistake and gave vague instructions to a boy that thinks every instruction involves twirling in a circle. Tell the boy to get in the car and he'll do it in maybe under an hour, twirling and hopping all over the house until he gets there.
I'm left doing the trash. Now she'll say that she has been upstairs doing spring cleaning all day. She'll say that she's had the kids for the whole weekend while I'm playing outside. She'll say that she gave birth and that trumps everything. She was on drugs when she gave birth, did you know that. Yup, she was on the epidural train, high as a kite. She didn't even yell at me and she originally wanted to name our son Yustus. True story. High as high gets. Thank god I was there or we would have baby Theodore Yustus Penmenship running towards the trash. Now she is mother of the year.
It's the laughing that gets me as I pick this vile crap up. Why is she laughing? Why does she think it's funny? Is it funny because it's exactly something that one of my spawn would do? Is it just in our nature to wreck everything, to repeatedly destroy every possession? Our family motto is "NO WICKER IN THIS HOUSE!" Laugh, laugh, laugh.
And seriously, what kind of kid just throws a full trash can down the stairs? Did he honestly think the top would stay on? Probably. Let's be honest, that's what was going through his monkey brain. He claims it was an accident and he has his mother to back him up. But I know better. He couldn't resist. It's in our genes. I would also bet that his sister was right behind him telling him to do it. I love my kids but I sure as hell don't trust them. Perhaps that's what being a real parent is all about.
So no one can get into trouble here. I don't make my son clean up with me because honestly he would just dance in it and make it worse. My daughter is suddenly incognito and my wife is sitting on the couch laughing like a hyena.
I pick up what looks to be a cross between dryer lint and cat puke. That's what I do. That's how I am providing for my family. It stinks, it smells like, well, cat puke wrapped in dryer lint. Let that sink into your brain for a minute.
I'll plot my revenge and it won't be pretty. This summer I'm going to the pool every day, every god damn day. And I'm going to send her pictures of me at the pool every day with little Yustus. And at the pool, I'm going to have a frilly drink with an umbrella in it, non alcoholic of course. Then we'll see who's laughing.
And no more trash cans upstairs. That's the new rule. Pool every day with an umbrella drink, no trash cans upstairs, and no wicker in the house. That's our new family motto.
She was like a drug addict when she gave birth. Just want to throw that out there one more time.
4/25/14
Cleaning up while a toddler is "Helping" is to enter a world where Hell is real, it is here on Earth and I am its bitch.
First off, Bacon Hoss is 1. Not old enough to move out on his own but well on his way. I think that he is offended by clean things, that it somehow hits his sense of decency. A clean room is a room that has no life in it, no joy in it. Joy is the mess, joy is the destruction. Perhaps my youngest son is an evil villain and if he is, I am sure he will be very successful at it.
Washing clothes in this house never gets done. I have no idea why. There is no day that I don't do laundry. There is no day that I don't load at least 2 full baskets. And yet, there is always more. Always more stuffed under beds, behind couches, on top of bookshelves because why the fuck not?
I was doing laundry today, as I do every day. I was attempting to put away Hossmom's clothes. Normally I do not do this. It offends my sense of decency. Not really but I like the excuse better than the real one. I fold them the best I can and put them in her own basket, to be put away by her.
I can't figure out Hossmom's clothes. They make no freaking sense. They are all delicate, lacy and sheer. I feel like my meaty hands are soiling them after I wash them. Jeans and a T-shirt, that I know how to do. A woman's work shirt is a puzzle that only a man meditating for 50 years can understand.
They do not fit on any hangers, I do not know why. Who would design a shirt this way? Why??? You get one shoulder on and the other falls off so that eventually you are performing some weirdo balancing trick with a freaking shirt. Multiply this by 20 and that is how I was spending my day. Pants don't fold right, it's like trying to fold a fitted sheet. Eventually you just get frustrated and wad it up into a ball and throw it onto some random shelf.
You can imagine that this does not make Hossmom happy when she sees her clothes like this. But I submit that putting away her laundry is like trying to organize friends according to height.
My clothes are easy. I just did them. Jeans fold nicely and go in a drawer. Socks, all white and all matching, go in a drawer. Her socks are like where weird socks go on vacation and end up staying after giving up on life. Shirts get hung up, they fit on hangers, and hang neatly. Work shirts fold nicely and fit in the drawer nicely. This took me only about 15 minutes for about every article of clothing that I own.
It's taken me a good 30 minutes to hang up 3 shirts in Hossmom's closet. I was trying to get everything finished. A clean house gets me a happy wife. A happy wife gets me other things, things that happen when the kids are asleep. Like foot rubs.
After a while, I realized that I hadn't heard from my youngest in a while. Never a good sign. I assumed that this meant that he was probably in the toilet playing in poo water. He does that. When it's bed time he's loud as hell. When he's doing something he shouldn't, quiet as a mouse. At 1, he understands this.
I go to check on him and go past my chest of drawers. Two of the drawers are open which I find odd because this is one of my pet peeves. In fact, I'm so annoyed that they are open that I don't really register the fact that there is nothing in them. It escapes me. Perhaps I wasn't on my A game today.
I walk into the hallway just in time to see Bacon Hoss toss my last pair of underwear right over the railing, sailing like a kite down the stairs, hitting the last stair like a fluffy cloud, quite beautiful in any other circumstances. They were my pirate boxers too. Just want to throw that out there, that I have pirate boxers. I love being me.
Next to my pirate boxers are the entire contents of both drawers. Right there, on the floor and the steps like my chest vomited after a hard night of chest parties, it drinks too much. I wasn't happy, understandable. And after a few choice words to a toddler that has no idea what I'm saying, I grab a basket and head down stairs and retrieve them, all the while still lecturing my child because I couldn't think of what else to do.
I put them on the bed, still annoyed, tired, exasperated. Damn it, damn it, damn it. I just put away most of this 30 minutes ago. Now I'm doing that same job right over again. In effect, I have made no progress whatsoever. None. I am at a 0 for productivity for the day. I am not happy.
I am not happy that I have gotten no cleaning accomplished. I am not happy that I'm doing the same job right over again. I am not happy that I do not see my son, where the hell man. He was right here a minute ago, I was lecturing him.
I hear something snap in Hossmom's closet.
God damn it.
I go inside the closet to see my son pulling Hossmom's shirts off the hangers. The three that I managed to hang up and about 20 more. While I was lecturing to apparently no one, he made his escape into the closet and picked whatever his grubby hands could reach, my wife's clothes.
"Stop!" I say.
The little bastard turns around and looks at me.
Then I swear to god he smiled, the little butt hole smiled, and pulled another shirt off.
And that is when I decided that I would no longer attempt to put Hossmom's clothes away. I put them into the basket. Plus a few more shirts that don't need to be washed.
4/21/14
I have reached the very pinnacle of success. I am living the very freaking definition of it. There is nowhere else to go from here, it is the ultimate summit that I have reached. I have peaked and oh is it so glorious.
I am in a field, a large field. I am laying in the grass, it is soft. It contains no bugs, no chiggers, no burrs and no dog poop. There is a breeze, a nice one to offset the amazing sunshine. There is not a cloud to be seen. It's 75 degrees, it's as if God set his thermostat to greatness, just for me.
My head is in my wife's lap and she is running her fingers through my hair. We are not talking, just enjoying the day. Occasionally she will make a comment on something that she has read. I'll agree with her because right now I am very freaking agreeable. To anything. My arms are spread wide as I enjoy this.
My daughter is feeding me grapes and cheese cubes while I lay on my wife's lap. I did not ask her to do this and I don't know if she ever saw this being done. She just started doing it. She asked me if I wanted a grape. Hell yea I want grapes. So now she is feeding grapes while I talk to my wife and look at our pristine sky.
My son is flying a kite. He has it well under control. He did not drop it, he did not break it. We even got it up on the first try. My other son, the baby is taking Cheese-its out of a bag and then putting them back in. He has been doing this for 15 minutes. When he gets bored, he runs around the field then comes back to the Cheese-its.
It is here, at this Kite Festival, that I realize that I have attained success. That my work as a stay at home dad has been validated. That if there were awards for awesomeness and for level of success, the president would be pinning this on me. Eating hand fed grapes on my wife's lap, there can be no other criteria for success.
This means of course that every decision that I have ever made in my life is hereby validated. Each grape dropped like manna from heaven into my mouth reassures me that my path, while unusual at times, was the correct one.
The day 6 years ago when I decided to give up my career, leave behind money and importance, is validated.
The day we moved to a different city in a different state, was the right call.
Should we have another child? Today that answer is an unequivocal yes.
Should I have gone to Mexico when I was 20 and then paid a guy five bucks to shock me with a car battery in some weirdo macho show of awesome to impress my wife? Apparently that was the right move because EVERY DECISION I have ever made has led me here, to the Mount Everest of Success.
Should I have let Little Hoss take a leak in the woods when she was 2? Good call. Should I have given my son the mallet and told him to hit something only to realize too late that it was my car? Apparently. Should I have toughed out my first kidney stone so as not to panic my wife? God damn genius.
When looked at through this lens of grapes, cheese and head rubs on a sunny day, every bad decision doesn't seem bad at all. It seems to reveal that even unknown to myself, I'm pretty fucking smart. If my bad decisions led me here, imagine where I would be if I really put some thought into what I do.
Scratch that, I know exactly where I would be. I would be right here sucking on those grapes.
So many apparently bad decisions, suddenly all wiped out. Do I need to get that looked at? Apparently not. You shouldn't take that road, it's too muddy. Think again. One more drink, young college Hoss. Yes, I believe I will. I am living the life of what is written about since the Greeks. I am eating grapes. And cheese. On my wife's lap.
Of course, there is only one direction to go from here. It's a road that is pitted with babies that won't go to sleep, with children that are learning to get a smart mouth, with cars that won't start and with pipes that burst in the middle of winter. I know this.
But I also know that the Kite Festival comes back next year, in the same place, in that same field. I have already put my grapes on lay away.
4/15/14
Hossmom is out of town, in a beautiful city doing important work things that do not include waking up at 3 am to quiet a screaming baby or dodging wild headbutts. She had steak last night and then drinks. I had drinks too. I drank whiskey from a cup with Tinker Bell on it. I'm fancy.
Although Bacon Hoss has the mental capacity of a chimp at the moment, I am sure he knows that his mother is gone and senses that now is the time to strike. He is trying to display his dominance over me, to break me. My other children have tried and failed but they may have put a crack in the armor. They may have softened me up so that Bacon Hoss can strike the death blows.
His behavior changes when she is gone. Or perhaps mine does. Perhaps I become less patient, more tired by day number 3 of solo parenting. I'm not sure but I know that when she is gone, that's when he's at his worst.
Dinner time. He doesn't want to eat. He wants to scream. I assumed he was screaming initially because he was hungry. I made him nuggets and gave him some slices of cheese. A little amuse bouche prior to the main course that my daughter describes as "gross." I don't think he was hungry so he entertained himself by feeding every god damn thing in front of him to the dogs. He did this while screaming.
Little Hoss is running around me in the kitchen. She's a blur as she goes from one side of me to the next. I have told her to hang back a sec, that dad needs to drain the noodles for the spaghetti. She did hang back, counted to one, and then came right back in. She has questions, she always has questions. And she wants me to see stuff. She wants me to see everything. It can be a bit distracting. Then she stands on my toes the minute I lean back to survey what else I have to do to get dinner ready.
Bubba Hoss is standing at the table. He never sits at the table, his constitution will not allow him to do so. I spend a good 1/3 of my time during dinner putting him back in his chair. Then I lecture everyone on manners and proper etiquette. They nod like they understand me. They repeat what I say back to me that makes me believe that they know what I expect of them. This of course, is bullshit. They have discovered if they just nod along eventually I'll shut up.
I sit Little Hoss down while answering her latest question: Why are there houses, why were they built and why were they built where they were. Can I build a house? Did I ever build a house with my Daddy? I answer as I pour the milk. One day she'll know that I'm just making shit up as I go along but right now she believes me. Or maybe she doesn't and just wants someone to talk to.
Bubba Hoss has discovered the very interesting fact that you can put your fork in the milk and then take the fork out. Yup, that's what he's doing.
I serve dinner. I cool some off for Bacon Hoss. He doesn't want it. He wants to throw it. He does and it leaves his little munchkin hands before I can stop him. Little bastard got quick over the last month. I see the spaghetti sail through the air and hit the back cushion of my chair then roll down into the cushion, between the back of the chair the pillow. I haven't even had a chance to sit down yet.
I get a wash cloth and head to my chair. Silently I'm impressed by the distance he got on it. I remove the cushion to clean up the thrown spaghetti. That's when I see the smashed banana clinging to the back of the chair, out of sight and out of mind. When the holy hell did he do this? How long has that banana slice been there? I have to practically pry it off and it leaves a nice dark circle that I know I'll never be able to get out. The chair isn't that old. It's my chair, it's the chair that I relax in. Now it's my banana chair.
I give up on Bacon Hoss after this. He'll eat when he'll eat. I put some colored cereal in front of him. I think the colors will distract him and at least give me a moments peace.
Bubba Hoss spilled his milk. I make him clean it up as I hear the dogs lapping up whatever hit the floor. This is how the dogs earn their keep around the house and it's a job they do well. Although apparently they don't like bananas. Bacon Hoss doesn't want the cereal I gave him. He throws them at the chair. I'm sure some get in the cushions but I'm too tired to care.
Bedtime is here, finally here. We do stories, we play a bit, I put Bacon Hoss down in his crib. He doesn't want to go to sleep and starts crying. I'll spend the next hour getting him to go down. When my wife is here, he goes down fine. Now that she's not he knows that this is the most opportune time to break me. But at the end of it I give him a bit of a shocker. He starts to cry again. I wish him the best of luck with that and shut the door. If he's crying 2 hours from now I'll go back in there but not a second before.
I spend the next hour of my night dealing with the other two. I do tuck ins twice, I read 30 stories and I check for monsters constantly.
I head off down stairs and sit in my chair and on cereal. I'm beat. I should go to bed but I don't because when the wife is gone I think of all the horrible things that could happen while she is away. I think that a tree will fall outside, come through my bedroom and crush me. No one will know of course because no one is checking up on me. Little Hoss will find me in the morning and ask me why the tree hit me. Hopefully she'll have enough sense to go to school because that's still important.
My wife calls and I tell her about my day. She asks me how I'm going to spend the rest of my night. I tell her that I'm going to watch Frozen and sing along. It's a lie and I think we both know it. I like giving her little sugar plum images in her head though before she goes to bed in a strange place with no kids screaming at her. I wonder how good she is at throwing banana slices.
What I'm really doing is watching some god awful horror flick that is terrible, not even one shower scene. I'm also messing around on the computer thinking that I will probably write some of this down for future generations. I pull the computer a bit closer and I see a flash of light to my right and then the lamp pops. The downstairs goes dead. In my head I'm wondering if a tree is about to fall.
Crap. House stuff like this also happens when she is gone. I think the universe is conspiring to kill me. Hossmom was gone for a bit when we had a water pipe break too. I can't even hide from the world in my own house.
I have to go into the cold, dark garage and check the breakers and discover one has been tripped. I flip it back on and we have power once again. I go back to my computer to figure out what new booby trap is waiting for me. I look at my computer cord, it's exposed and practically in half. Somewhere in this house is a very lucky cat I think, a lucky cat that perhaps chewed on a cord when it wasn't plugged in.
Or Bacon Hoss, maybe this is just the beginning.
Can I make it another two days with no breaks? Probably but what comes out the other side may not be a sane man.
4/9/14
That's what you have to write so there is no confusion when you plan on writing a small little story about how he is also a dick.
How can he be a complete peckerhead at 1 year old? Easy, apparently.
Again, I love Bacon Hoss very, very much.
He apparently loves computer cords, especially the ones that are plugged in. He loves them so very much. He loves them so much that he wants to chew on them. Then he wants to pull them out of the wall. Then he wants to love the wall socket. You wouldn't think that this little person could fit behind a couch that even the cat can't but you would be wrong.
As much as he loves the computer cords, he hates the actual computer. He can't stand that such a thing exists. He hates email, he hates banking websites, he hates this very blog. If I ever try to get on the computer while he is awake, anywhere in the house, he immediately makes a beeline for me. If the computer is in my lap, he grabs whatever toy is available and attempts to break the key board. The little man has quite a swing. If the computer is on the counter, away from his fists of fury, he runs and grabs my pants legs and screams. He wants to know why I am not plugging the computer in to where he can chew on it. He doesn't think I am very accommodating.
Sure, if you see him out and about, he's all smiles. He's cute, he'll melt you with his little blue eyes and blond hair. He may laugh a little bit at you. He seems like he is so well behaved. You'll see him walking in the store and not pulling on the shelves. You will not see him scream and throw a fit. You will not see him attempt to headbutt his father while he sits on my lap.
But at home, he's a dick. Away from public view he commonly tries to break my nose to the point where I wonder if I am in an abusive relationship. He laughs as his head screams forward like a little maniacal Aryan. Stupid blond hair. He's drawn blood more than once. There's never any warning just a blond flash of hair and wham, you're bleeding.
If it's not my nose he's trying to break or a computer cord he wants to chew on, it's either the toilet or the stairs. I have many other father friends with kids my son's age. None try to climb stairs. Dad says no, they look and then walk away. My son, on the other hand, is pulling a little baby screwdriver from his diaper and trying to pry loose the screws that hold our baby gate in place. Yup, I've had to screw it right into the rails because he pulls himself up on it and screams like he's in a little baby Attica. Unfortunately, the world does not come with baby gates in front of stairs. If we are out and about, and no one is watching but me, he makes for the stairs. Any stairs. I'll stop him, he'll throw a fit unless someone is watching. How does he know how to do this? How can he play public opinion like a seasoned politician? I have no idea and frankly, I'm kind of impressed.
I'm less impressed when he tries to get into the toilet. I wonder if he has some sort of death wish? He loves toilets, he loves throwing things in toilets, he loves to put his hands in the toilet, he loves to watch me on the toilet. It's creeping me out. If the door is shut when I'm in the bathroom he throws a fit like you've never heard. It's louder than he's ever screamed for anyone else but me. He saves his good fits when we are just alone. Half my day is spent peeing while standing on one leg and fending him off with the other. I've tried to sneak around but he knows, good god somehow he knows. And he knows that our downstairs bathroom door doesn't latch that well so if just a little bit of pressure is applied, the door pops open, stupid house. He ninja strikes me so much that now I just naturally pee with one leg hanging in the air waiting to fight off the inevitable attack that I know is coming from someone that is about a foot tall.
I try to remember if I've seen this kind of dickishness in my other children and I'm not sure. Have I just forgotten it all? Little Hoss could be tough, she would cry unless I was constantly moving around. And she loves to break stuff, even as a baby. Bacon does that too. Bubba Hoss though was a pleasure, we would snuggle all day and all he wanted to do was play with Dad. Bacon wants to play with dad, for blood.
Which brings me to my last reason why my youngest is kind of a dick. He woke up from his nap a bit early. I was knee deep in dishes, ya know, so the family wouldn't live in filth and all that. So I didn't immediately run upstairs to get him from his crib. 5 minutes go by and I head up to get him. He didn't sleep much, only an hour or so. I open the door and I am greeted with my little blond boy. My little blond boy with tons of blood running out of his mouth.
Of course, I freak out. He's screaming loud, very loud. He's crying. What the hell happened? Why is he bleeding in his crib? I rush to his side to pick him up. He stops crying but the blood and spit are now mixed together and dripping on me. I don't much care, I'm worried like hell.
He tries to headbutt me. Again. Then it clicks with what happened. I open his mouth and check all his teeth, remembering which ones he has and which ones he doesn't. I'm looking to see if he's knocked out a tooth. He threw a fit in his crib. When he throws fits he headbutts. He's tall enough now that the edge of the crib is right at the level of his mouth. He headbutted the crib edge with his mouth and I'm worried he's lost a tooth. He's got them all, I think. And then I find where the blood is coming from. He cut the inside of his upper lip. That had to hurt.
This is his punishment for me. Since I didn't come running immediately, he is trying to give me a heart attack. I was pretty close. I don't like to see my kids bleed. I can handle blood but I have a tougher time handling when my kids are in pain.
We sit on the couch, we turn on a little music which he loves. He's quiet now and is lightly bouncing his head on my chest. It's ok. I would rather him headbutt me than something else, like the oven, while it's on, disconnecting it from the gas and then lighting a match. He would do it. I can take the headbutting, I can heal and isn't that what fathers are supposed to do? Aren't we supposed to take the pain so our little ones don't have to? He's my son and I love him.
But I don't love going to the toilet anymore. I'm just going to start using his diapers.
The Inner Hoss
Let me explain it this way: I have a college degree and had a job. I quit it on purpose to teach my three minions how to be minions. After 8 years the kids have only broken 1/2 of what we've seen but the other half is on the list.
Look at this rabbit in my hand. See the rabbit? Surprise! It’s actually a turtle.
Pretty impressive, huh?
Alright, maybe that one didn’t work on you. How about this one. Look at this character – she’s just a mild-mannered high school student, right? Surprise! She’s actually an evil wizard.
Still nothing? Hm.
Okay, one more. Look at this upbeat, slice of life story. Got a good picture of it? Surprise! It’s actually a dystopian sci-fi drama.
Alright, you get the picture. Let’s talk about plot twists.
As my opening hopefully made clear, I have somewhat mixed feelings about plot twists. And by that, I mean I don’t even really consider them a “thing” at all, in most cases. If you’ve read much of my prior theory stuff, or even just me describing shows I love, you’ll know I often describe well-realized productions as “perfect jewels” – as shows where every facet of the narrative and production is reflective of the intent of the whole. Scenes in the beginning reflect scenes in the end, the narrative shifts in ways that are purposeful for the overarching message and mood, and no awkward outliers or tangents divert the intent of the production. Not all shows need to be this way, obviously, but I think it’s a strong indicator of a fully realized work.
The idea of a “plot twist” runs somewhat counter to this. How can the beginning telegraph the ending while also not giving away the “shocking twists”? The answer is, generally, “by not having the twists actually be shocking.” This is certainly the case in something like Madoka Magica – though that show is somewhat famous for its “shocking twists,” the literal first scene of the show tells you exactly where the narrative will go, and every curve of the story is telegraphed by both the framing and the narrative itself. Madoka actually rewards rewatching because of this – elements like Homura’s emotional cues shift from ominous to tragic, but neither is a “deception,” they’re just the result of an audience working from a variant set of information. Like a poker player revealing his hand a card at a time, the show never “lies” to the audience, it simply constructs a narrative out of incomplete information in a way designed for best dramatic effect.
Which points to one of the big requirements of plot twists – they can never “just happen.” All of Madoka’s choices serve larger goals, and actually increase the audience’s understanding of the world. You’d think this would always be the case, but a poorly written plot twist can actually destroy the foundation of trust a show has constructed – it can make the audience no longer believe in the world as presented, or even that the presenters themselves know what they’re doing. A plot twist should come across like dropping a key piece into an existing jigsaw puzzle – they might dramatically affect the context of all your other information, but they won’t invalidate that information. A good plot twist makes the audience think “oh wow, of course that happened!” – not “how the fuck did that happen?”
That concept of “believing in the world as presented” digs at the other big requirement of plot twists – you can’t generate drama out of a plot twist if trust and investment don’t already exist. This is what my initial examples were actually getting at – in order for a trick that betrays audience expectations to be effective, the audience already has to be invested in those expectations. “The world isn’t what you think it is!” only works as a dramatic device if the audience is already invested in what the world initially pretends to be. Your story can’t just tread water until a plot twist makes it interesting, it has to already be compelling for its own sake. It has to already feel real.
Magicians have it easy on this front – the "grounding" for their plot twists is the entire world as we know it. The "expectations" they are betraying are ones we've built out of our understanding of reality itself – rabbits don't become turtles, ears don't contain massive strings of handkerchiefs. Their "plot twists" are effective because their "initial narrative" is the entire existing world, and most people have a number of solid preconceptions about what the world is like.
Stories don’t have this luxury. Audiences don’t inherently care about what you’re presenting them – in stories, everything is artificial. One theoretical reality is just as valid as another, because the audience has far fewer preconceptions about what’s “normal” in any given work of fiction than in the world they actually know. Thus in order for plot twists to be effective, the world as it’s being presented to the audience must already “feel real.” Like a narrative worth caring about, like a world worth exploring, like a character worth investing in – the world as presented must both possess solidity and demand engagement. The trick to plot twists is that what makes them work is virtually never the nature of the twist itself. Twists are easy. What makes plot twists work is that the audience has already been “tricked” by the initial text, and for that to occur, your initial text has to create investment, tension, and trust.
So, yeah. Don’t try to invest me in your text by telling me the mild-mannered schoolgirl is secretly an evil wizard. Instead, maybe tell me a bit about the schoolgirl herself. What does she care about? What kind of world does she inhabit? What are the themes that represent her reality? The key to most magic tricks is not an impossible feat of deception – it is that the audience is already caught up in the performance, and the performance starts long before rabbits start turning into turtles. If you give me a reason to care about your world as presented, the evil wizard part will be easy.
## Azure ConsumptionManagementClient SDK for JavaScript
This package contains an isomorphic SDK for ConsumptionManagementClient.
### Currently supported environments
- Node.js version 6.x.x or higher
- Browser JavaScript
### How to Install
```
npm install @azure/arm-consumption
```
### How to use
#### nodejs - Authentication, client creation and list usageDetails as an example written in TypeScript.
##### Install @azure/ms-rest-nodeauth
```
npm install @azure/ms-rest-nodeauth
```
##### Sample code
```ts
import * as msRest from "@azure/ms-rest-js";
import * as msRestAzure from "@azure/ms-rest-azure-js";
import * as msRestNodeAuth from "@azure/ms-rest-nodeauth";
import { ConsumptionManagementClient, ConsumptionManagementModels, ConsumptionManagementMappers } from "@azure/arm-consumption";
const subscriptionId = process.env["AZURE_SUBSCRIPTION_ID"];
msRestNodeAuth.interactiveLogin().then((creds) => {
const client = new ConsumptionManagementClient(creds, subscriptionId);
const expand = "testexpand";
const filter = "testfilter";
const skiptoken = "testskiptoken";
const top = 1;
const apply = "testapply";
client.usageDetails.list(expand, filter, skiptoken, top, apply).then((result) => {
console.log("The result is:");
console.log(result);
});
}).catch((err) => {
console.error(err);
});
```
#### browser - Authentication, client creation and list usageDetails as an example written in JavaScript.
##### Install @azure/ms-rest-browserauth
```
npm install @azure/ms-rest-browserauth
```
##### Sample code
See https://github.com/Azure/ms-rest-browserauth to learn how to authenticate to Azure in the browser.
- index.html
```html
<!DOCTYPE html>
<html lang="en">
<head>
<title>@azure/arm-consumption sample</title>
<script src="node_modules/@azure/ms-rest-js/dist/msRest.browser.js"></script>
<script src="node_modules/@azure/ms-rest-azure-js/dist/msRestAzure.js"></script>
<script src="node_modules/@azure/ms-rest-browserauth/dist/msAuth.js"></script>
<script src="node_modules/@azure/arm-consumption/dist/arm-consumption.js"></script>
<script type="text/javascript">
const subscriptionId = "<Subscription_Id>";
const authManager = new msAuth.AuthManager({
clientId: "<client id for your Azure AD app>",
tenant: "<optional tenant for your organization>"
});
authManager.finalizeLogin().then((res) => {
if (!res.isLoggedIn) {
// may cause redirects
authManager.login();
}
const client = new Azure.ArmConsumption.ConsumptionManagementClient(res.creds, subscriptionId);
const expand = "testexpand";
const filter = "testfilter";
const skiptoken = "testskiptoken";
const top = 1;
const apply = "testapply";
client.usageDetails.list(expand, filter, skiptoken, top, apply).then((result) => {
console.log("The result is:");
console.log(result);
}).catch((err) => {
console.log("An error occurred:");
console.error(err);
});
});
</script>
</head>
<body></body>
</html>
```
## Related projects
- [Microsoft Azure SDK for Javascript](https://github.com/Azure/azure-sdk-for-js)

---
address: |
$^1$Sony Corporation, 7-1 Konan 1-chome, Minato-ku, Tokyo 108-0075, Japan\
$^2$Rice University, 6100 Main Street, Houston, TX 77005, USA\
$^3$Indian Institute of Technology Madras, Chennai, Tamil Nadu 600036, India
author:
- 'Ryuichi Tadano$^{1,*}$, Adithya Kumar Pediredla$^2$, Kaushik Mitra$^3$ and Ashok Veeraraghavan$^2$'
title: 'Spatial Phase-Sweep: Increasing temporal resolution of transient imaging using a light source array'
---
Transient imaging or light-in-flight techniques capture the propagation of an ultra-short pulse of light through a scene, which in effect captures the optical impulse response of the scene. Recently, it has been shown that we can capture transient images using commercially available Time-of-Flight (ToF) systems such as Photonic Mixer Devices (PMD). In this paper, we propose ‘spatial phase-sweep’, a technique that exploits the speed of light to increase the temporal resolution beyond the 100-picosecond limit imposed by current electronics. Spatial phase-sweep uses a linear array of light sources with a spatial separation of about 3 mm between them, resulting in a time shift of about 10 picoseconds, which in theory translates into 100 Gfps transient imaging. We demonstrate a prototype and transient imaging results using spatial phase-sweep.
Introduction
============
Transient imaging or light-in-flight refers to capturing the temporal response of a scene to an ultra-short pulse of light. Current techniques to capture transient images are based either on streak cameras or on photonic mixer devices. Streak cameras, when used with femtosecond-pulsed laser illumination, can provide extremely fine temporal resolution, on the order of 1 picosecond, but such systems [@Velten2011; @Velten2013; @Heshmat2014; @Velten2012] are prohibitively expensive, costing upwards of several hundred thousand dollars. More recently, Heide et al. [@Heide2013], Kadambi et al. [@Kadambi2013], and O’Toole et al. [@OToole2014] have shown that commercially available photonic mixer devices that cost a few hundred dollars can be used to acquire transient images. Unfortunately, the temporal resolution of these techniques is limited by the accuracy of the phase-locked loop (PLL) circuit in the on-board electronics of these devices. In commercially available systems such as the CamBoard nano [@camboardnano], the on-board electronics and the PLL limit the minimum achievable phase shift to the order of 100 picoseconds ($\sim$128 picoseconds on the CamBoard nano). As a consequence, transient images obtained using photonic mixer devices have a much lower temporal resolution (100 picoseconds) compared to systems based on streak cameras and femtosecond laser pulses (1 picosecond).
Our goal in this paper is to improve the temporal resolution of transient images obtained using photonic mixer devices (PMDs) beyond the limit imposed by the sensor electronics. We exploit the incredible speed of light ($3\times10^8$ m/sec) to our advantage and propose a technique called ‘spatial phase-sweep’ (SPS) to improve the temporal resolution of transient images obtained using PMDs. The idea behind spatial phase-sweep is very simple. We use an array of light sources, with the different sources in the array slightly offset along the optical axis. This creates small but precisely controllable differences in the time of travel between the light pulses emitted by the different sources (Fig. \[fig:concept\]). Since the light source positions in an array can be precisely controlled, the corresponding path-length differences result in a slight temporal offset $\Delta t = \frac{\Delta d}{c}$, where $\Delta d$ is the spatial shift between adjacent light sources in the array and $c$ is the speed of light. In our prototype, $\Delta d$ is 3 mm, resulting in a temporal resolution $\Delta t$ of about 10 picoseconds, an order of magnitude better than the limit imposed by the on-board electronics of the PMD device in the prototype.
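For concreteness, the temporal offset created by the source spacing can be computed directly; a minimal sketch (the function name is ours, not from the paper):

```python
# Sketch: temporal offset produced by a spatial shift between light sources.
C = 3.0e8  # speed of light in m/s

def temporal_offset(delta_d_m):
    """Return the time shift (in seconds) created by offsetting a light
    source by delta_d_m metres along the optical axis."""
    return delta_d_m / C

# 3 mm spacing between adjacent sources, as in the prototype:
dt = temporal_offset(0.003)
print(dt * 1e12)  # -> 10.0 picoseconds
```

The same arithmetic gives the 100-picosecond electronic limit as a 3 cm light-travel distance, which is why millimeter-scale source placement can out-resolve the PLL.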
The main alternative technique for improving the temporal resolution of transient images obtained using photonic mixer devices is to increase the base frequency of the voltage-controlled oscillator (VCO) used in the phase-locked loop circuit of the on-board electronics. Boosting the base frequency of the VCO may theoretically provide up to a 10x improvement in temporal resolution, but such a technique would come with a significant increase in the cost of the resulting sensor. Our proposed technique adds only a minor incremental cost over existing solutions, since we only need to create a linear array of laser diodes, which are inexpensive and easy to obtain. In addition, the key innovation of spatial phase-sweep is independent of the temporal resolution limit imposed by on-board electronics. This means that even if sensor electronics improve significantly, the spatial phase-sweep technique may be used to further improve temporal resolution beyond that limit. The fundamental limit on the temporal resolution achieved using spatial phase-sweep depends mainly on the accuracy with which one can control the positioning of the laser diodes in the array. Since this positioning can be controlled to sub-millimeter precision, spatial phase-sweep will continue to provide an improvement in temporal resolution even if the on-board electronics improve by an order of magnitude due to an increased base frequency of the VCO.
The main technical contributions of our paper are as follows:
- We propose spatial phase-sweep, a technique to improve the temporal resolution of transient images captured using photonic mixer devices.
- We develop algorithms for self-calibration and transient image recovery from the data captured using a spatial phase-sweep ToF camera.
- We build a proof-of-concept prototype and demonstrate a 10x improvement in temporal resolution.
Some of the limitations of the proposed technique are:
- The data acquisition time increases linearly with the increase in temporal resolution.
- The physical size of the light sources limits the size of the light source array; hence, increasing the resolution of spatial phase-sweep beyond a limit will be difficult to implement.
- Our system requires repetitive measurements, which means that it cannot capture one-time phenomena such as plasma dynamics.
Prior work
==========
\[sec:prior\_work\] Transient imaging finds applications in visualizing the interaction of light with an optically complex scene that can involve multiple reflections, scattering media, or subsurface scattering. In this section, we review various approaches to transient imaging and then describe our approach.
**Holography based:** Abramson captured the first light-in-flight images by illuminating a flat surface and a hologram with a short laser pulse [@Abramson1978; @Abramson1983]. The beam from the flat surface is used as the reference beam, and the light coming from the hologram interferes with the reference beam to produce an image that corresponds to a short distance traveled by the light wave. By moving the reference surface and stacking the images, they created light-in-flight images. Nilsson [@Nilsson1998] repeated the same experiment with the help of a CCD array to create digital light-in-flight video.
**OCT based:** Gkioulekas et al. [@Gkioulekas2015] proposed micron-scale transient imaging using optical coherence tomography (OCT). The idea of incorporating an OCT technique is close to ours; however, the scale of the subjects they support is quite small: 2 cm (H) $\times$ 2 cm (W) $\times$ 1 cm (D).
**Streak camera based:** Velten et al. [@Velten2011; @Velten2012; @Velten2013] proposed the use of a streak camera and a femtosecond laser to capture transient images. The laser illuminates one horizontal scan line at a time and scans the entire scene. For every scan, photons illuminate the scene, scatter, and some of the scattered photons eventually reach the streak camera. The streak camera converts these photons into electrons using a photocathode. These electrons are then deflected vertically by a voltage that varies with time. Hence, the intensity of the pixels along the vertical axis of the image corresponds to the photons coming from various depths. Scanning the entire scene, Velten et al. produced high-resolution transient images ($\sim$1 picosecond). The need for scanning in this approach makes it difficult to handle non-repetitive time-evolving events, such as laser ablation, optical rogue waves, sonoluminescence, and nuclear explosions. To solve this problem, Gao et al. [@Gao2014] employed a digital micro-mirror device (DMD) and compressed sensing techniques along with a streak camera. Their system achieved a temporal resolution of 10 picoseconds. Heshmat et al. [@Heshmat2014] utilized a tilted lenslet array to realize single-shot transient imaging at a temporal resolution of 2 picoseconds.
**PMD based:** Though streak camera based methods provide very high temporal resolution, they are prohibitively expensive: a system based on a femtosecond laser and a streak camera costs upwards of several hundred thousand dollars. To realize inexpensive transient imaging, photonic mixer device (PMD) based methods have been proposed by Heide et al. [@Heide2013].
PMDs are the basic building blocks of most commercial time-of-flight cameras, and several applications using this device have been proposed in the past few years [@Kadambi2013; @Heide2014; @Tadano]. In such systems, a laser diode or a light-emitting diode (LED) is temporally modulated to create a coded illumination signal. The light scattered off the subject is then correlated with a programmable sensor modulation pattern on a PMD sensor to obtain an array of correlational measurements. Heide et al. [@Heide2013] performed a series of measurements with varying phase delays between the illumination and the sensor modulation patterns (while keeping both sinusoidal), and demonstrated a deconvolution technique that is capable of recovering transient images from the captured correlational measurements. Kadambi et al. [@Kadambi2013] demonstrated a similar technique for recovering transient images, but using M-sequences instead of sinusoidal modulation.
O’Toole et al. [@OToole2014] used an encoded projector to modulate the light both spatially and temporally. The 3-D illumination signal is transformed by interacting with the scene and is captured by the PMD sensor. The spatial and temporal components of the received signal carry complementary information about the scene and are used to capture sharp light-in-flight images more robustly. The temporal resolution of their transient images is 100 picoseconds, which translates to 10 Gfps.
All these techniques for capturing transient images using PMD sensors are limited in their temporal resolution, primarily by the phase-locked loop (PLL) in the FPGA or electrical circuits. PLLs in commercially available electrical circuits are limited to time delays of about 100 picoseconds, which results in a 100-picosecond temporal resolution in the captured transient images. In this paper, we overcome this limit through spatial phase-sweep, while keeping the cost of the device low.
Background
==========
\[sec:transient\_image\_using\_tof\]
In this section, we explain the principles of a PMD sensor based ToF camera and then explain how a ToF camera can be used to obtain transient images.
Time-of-Flight principles
-------------------------
A ToF camera [@Moller2005; @Lange2000; @Lange2001; @Schwarte1997; @Conroy2009] consists of a PMD sensor and laser diodes that emit a coded illumination $g(t)$. This illumination signal interacts with the scene and reaches a sensor pixel. With the available technology, the sensor cannot directly measure the received signal, but can only measure the correlation between the received signal and a binary coded signal $f(t)$ inside the sensor circuit. The entire process for each pixel can be mathematically represented as $$b(\phi)=\int_{0}^{T}\alpha(\tau)\cdot\int_{0}^{\infty}g(t+\phi-\tau)f(t)\,dt\,d\tau,\quad{\rm with}\quad\alpha(\tau)=\int_{p}\alpha_{p}\delta(|p|=\tau),
\label{eq:pmd_observation}$$ where $\tau$ is the temporal delay of the illumination due to the finite speed of light traveling from the light source to the sensor pixel via the scene, $\alpha(\tau)$ is the scene response (the integral of all contributions from different light paths $p$ that correspond to the same delay $\tau$), $T$ is the exposure time, and $\phi$ is the delay of the illumination signal controlled by the system.
In an ordinary ToF camera that is designed to capture depth information, it is assumed that the scene has just a single path. Hence, Eq. \[eq:pmd\_observation\] becomes $$b(\phi)=\alpha\cdot\int_{0}^{\infty}g(t+\phi-\tau_0)f(t)\,dt.
\label{eq:pmd_observation_single}$$ where $\tau_0$ is the time delay. Further, sinusoidal waves of the same frequency are utilized for both $f(t)$ and $g(t)$. Three or four measurements with different amounts of phase shift are required to generate depth information. When multiple cameras are in operation, custom codes such as pseudo-random sequences are utilized to overcome interference problems [@Buttgen2008; @Whyte2010].
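To make the single-path case concrete, the classic four-bucket scheme recovers $\tau_0$ (and hence depth) from four phase-shifted sinusoidal correlation samples. This is textbook ToF practice rather than this paper's contribution; a minimal sketch, assuming sinusoidal $f(t)$, $g(t)$ and the 50 MHz modulation used later in the prototype:

```python
import math

C = 3.0e8      # speed of light (m/s)
F_MOD = 50e6   # modulation frequency (Hz)

def depth_from_four_phases(b0, b90, b180, b270):
    """Four-bucket depth estimate for sinusoidal modulation.
    The b's are correlation samples b(phi) at 0/90/180/270 degree
    phase shifts between f(t) and g(t)."""
    phase = math.atan2(b90 - b270, b0 - b180) % (2 * math.pi)
    # Round-trip phase theta = 4*pi*F_MOD*d/c, hence:
    return C * phase / (4 * math.pi * F_MOD)

# Simulated buckets for a target 1.5 m away, b(phi) = cos(theta - phi) + offset:
theta = 4 * math.pi * F_MOD * 1.5 / C
b = [math.cos(theta - k * math.pi / 2) + 2.0 for k in range(4)]
d = depth_from_four_phases(*b)  # -> 1.5 (metres)
```

The constant ambient offset cancels in the differences, which is why four buckets suffice despite the unknown amplitude and bias.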
Transient imaging using Time-of-Flight camera
--------------------------------------------
$\alpha(\tau)$ in Eq. \[eq:pmd\_observation\] is the impulse response of the world, or transient response, that we are interested in solving for, not just in the single-path case but in the generic case. The most common approach to solve for $\alpha(\tau)$ is to deconvolve $b(\phi)$ with the cross-correlation function between $f(t)$ and $g(t)$ [@Heide2013; @Kadambi2013]. In [@Heide2013], various combinations of frequency/phase-shifted sinusoidal functions are used for $f(t)$ and $g(t)$ to build a correlation matrix. To solve the inverse problem described by the matrix, they incorporate regularization functions restricting the transient response to be smooth in the temporal and spatial domains. In [@Kadambi2013], $f(t)$ and $g(t)$ are designed to be m-sequences so that inverting the cross-correlation function becomes easy.
The common problem with both approaches is that the measurements $b(\phi)$ cannot be sampled at an arbitrary sampling rate. With existing electronics, $b(\phi)$ can only be sampled at 10 Gfps. Light events such as subsurface scattering or inter-reflections happen at a much faster rate and are missed by these approaches; hence, $b(\phi)$ is grossly undersampled. We propose to increase the sampling rate of the measurement vector $b(\phi)$ by a factor of 10, and thereby capture these fast-occurring transient events more accurately, at 100 Gfps. Note that $b(\phi)$ cannot be described parametrically, as it includes an unknown scene response. Hence, the only way to improve the temporal resolution of the transient image is to sample more finely.
![ [ **Sampling step and peak detection error:** (a) Measurement $b(\phi)$. Due to subsurface scattering/indirect reflections, actual cross correlation does not look like a triangular form. (b) OMP based kernel fitting results. Estimated peak positions are marked as X. Smaller sampling step better fits the estimated curve to ground truth. (c) Relationship between sampling step and peak estimation error. Error increases as the sampling step becomes larger. ]{} []{data-label="fig:sampling_step_and_fitting_error"}](sampling_step_and_fitting_error){width="4.8in"}
To show this, we perform a simple simulated experiment based on an actual measurement $b(\phi)$. Here, we follow the transient imaging method of [@Kadambi2013], which uses m-sequences as $f(t)$ and $g(t)$. Using a measurement $b(\phi)$ as ground truth, we performed OMP based kernel fitting [@Kadambi2013], in which sub-sampled data of the ground truth is used as the kernel basis. For simplicity, we assume the scene response is 1-sparse in terms of the sub-sampled kernel basis. We change the interval of the sampling points to investigate how the sampling step affects the fitting results. As shown in Fig. \[fig:sampling\_step\_and\_fitting\_error\] (a), the actual measurement is not triangular shaped, as would be expected from Fig. \[fig:system\_and\_code\_design\]. This unknown shape is difficult to describe parametrically; OMP based fitting provides a better estimate because it is based on the actually measured kernel. Even for a simple task such as peak estimation, we can see that increasing the sampling rate gives finer estimation results (Fig. \[fig:sampling\_step\_and\_fitting\_error\] (b), (c)). Hence, finer sampling increases the information we can obtain, especially for complicated tasks.
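With sparsity one, the OMP fit above reduces to a matched filter: slide the kernel over the measurement on a grid of candidate shifts and keep the shift with the highest correlation. A toy sketch with a synthetic triangular kernel (the actual experiment uses the measured, non-triangular kernel):

```python
import numpy as np

def fit_peak_1sparse(b, kernel, step):
    """1-sparse OMP: pick the kernel shift (on a grid with spacing
    `step` samples) that best correlates with the measurement b."""
    shifts = range(0, len(b) - len(kernel) + 1, step)
    scores = [float(np.dot(b[s:s + len(kernel)], kernel)) for s in shifts]
    return shifts[int(np.argmax(scores))]

# Synthetic measurement: triangular kernel hidden at sample 43.
kernel = np.concatenate([np.arange(10.0), np.arange(10.0)[::-1]])
b = np.zeros(100)
b[43:63] = kernel
coarse = fit_peak_1sparse(b, kernel, step=10)  # coarse grid snaps to 40
fine = fit_peak_1sparse(b, kernel, step=1)     # finer grid finds 43 exactly
```

The coarse grid mimics the 100 ps PLL steps, the fine grid the 10 ps spatial phase-sweep steps: the true peak lies off the coarse grid, so only the fine grid localizes it.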
![ **Spatial phase-sweep:** By placing multiple light sources in an array with slightly different distances from the subject, we can sample the cross correlation function more precisely than the conventional PLL based sampling. (a) Graph showing measurements only from light source \#1. (b) The observations for each light source are taken independently. Note that amplitude of those data differ due to the variation of distance or the angle at which the light reaches. (c) We combine the data from different light sources by performing equalization. []{data-label="fig:phase_insertion"}](phase_insertion){width="4.5in"}
Increasing temporal resolution of transient imaging
===================================================
\[sec:increasing\_temporal\_resolution\] As described in Sec. \[sec:prior\_work\], the temporal resolution of conventional transient imaging techniques using PMD sensors is theoretically limited to around 100 picoseconds [@Kadambi2013]. This limit is determined by the precision of the phase shift control ($\phi$) of the PLL circuit. In this section, we first formulate how the precision of controlling $\phi$ affects the information we can acquire. We then introduce a simple technique to boost the temporal resolution without increasing the phase shift precision of the PLL. The concept of our idea is illustrated in Fig. \[fig:phase\_insertion\].
Spatial phase-sweep
-------------------
Suppose $h(x)$ is the cross-correlation function between $f(t)$ and $g(t)$; Eq. \[eq:pmd\_observation\] can then be written as $$b(\phi)=\int_{0}^{T}\alpha(\tau)h(\phi-\tau)d\tau,\quad{\rm with}\quad h(x)=\int f(t)g(t+x)dt.
\label{eq:pmd_observation_simple}$$ Hence, solving for the transient response is a deconvolution problem, which is more intuitive in the frequency domain. Computing the discrete-time Fourier transform on both sides of Eq. \[eq:pmd\_observation\_simple\] and rearranging the terms, we have $$\begin{aligned}
{{\mathcal{A}}}{ (2\pi f \Delta \phi)} = \frac{\mathcal{B}(2\pi f \Delta \phi)}{\mathcal{H}(2\pi f \Delta \phi)}
\label{eq:dtft}\end{aligned}$$
where $\Delta\phi$ is the sampling interval (or phase shift amount), and $\mathcal{A}$, $\mathcal{B}$, and $\mathcal{H}$ are the discrete-time Fourier transforms of $\alpha$, $b$, and $h$. Note that the sampling performed here is not in the temporal domain but in the phase domain. Clearly, Eq. \[eq:dtft\] is periodic with period $\frac{1}{\Delta\phi}$; hence, a smaller $\Delta\phi$ is better, as we can capture more frequency information without aliasing. For commercially available PLL circuits, the phase shift control ($\Delta \phi$) is around 100 ps. However, light events such as sub-surface scattering happen at a much higher frequency, depending on the properties of the material. Hence, it is crucial to have a smaller sampling interval $\Delta \phi$ to acquire more information about the transient image.
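The division in Eq. \[eq:dtft\] can be sketched directly. A minimal example with a small Wiener-style regularizer `eps` standing in for the smoothness priors of [@Heide2013] (our simplification, not the paper's solver):

```python
import numpy as np

def deconvolve(b, h, eps=1e-3):
    """Recover alpha from b = alpha * h (circular convolution) by
    dividing in the Fourier domain; eps guards against near-zero
    frequencies of the kernel h."""
    B = np.fft.fft(b)
    H = np.fft.fft(h, n=len(b))
    return np.real(np.fft.ifft(B * np.conj(H) / (np.abs(H) ** 2 + eps)))

# Toy check: a single-spike transient response blurred by a known kernel.
alpha = np.zeros(64)
alpha[20] = 1.0
h = np.array([1.0, 2.0, 3.0, 2.0, 1.0]) / 9.0
b = np.real(np.fft.ifft(np.fft.fft(alpha) * np.fft.fft(h, n=64)))
rec = deconvolve(b, h)  # spike recovered at index 20
```

The regularizer matters precisely at the frequencies where $\mathcal{H}$ is small, which is where undersampling in $\Delta\phi$ loses information first.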
The 100 ps phase step of currently available oscillators corresponds to a distance of 3 cm traveled by light. With the current state-of-the-art design, this 3 cm precision determines the theoretical limit on the frequency of the transient image. To break this limit, we insert extra phase delays that are independent of innovations in the PLL design, by arranging an array of light sources uniformly and perpendicular to the image plane. We call this idea ‘spatial phase-sweep’, as the spatial arrangement of the light sources sweeps the phase of the illumination signal. After incorporating the extra freedom of the light source position into Eq. \[eq:pmd\_observation\], the measurements are re-formulated as: $$\begin{aligned}
b(\phi + \mu_n)=&\displaystyle \int_{0}^{T}\alpha(\tau)\cdot\int_{0}^{\infty} g(t+\phi+\mu_{n}-\tau)f(t)\,dt\,d\tau,
\label{eq:pmd_observation_multiple}
\end{aligned}$$ where $\mu_{n}$ is the phase delay inserted by the $n^{\scriptsize \mbox{th}}$ light source. Though the light sources can be placed arbitrarily, we place them uniformly; hence, $\mu_{n}$ is given by $ \mu_{n}=n\cdot\frac{\Delta d}{c}$, where $\Delta d$ is the distance between two consecutive light sources. In summary, we can now sample the measurements at a step of $ \frac{\Delta d}{c}$ = 10 ps, allowing us to acquire information that was previously not accessible.
Calibration for phase insertion
-------------------------------
As we change the active light source, the amplitude of the incident light at each pixel and the distance between the object and the light source change. To overcome this inconsistency, we introduce an equalization step between the data taken with different illumination sources. Let us call the set of measurements for multiple light source positions $\{b_{n}(\phi)\}$, where $n$ denotes the index of the active light source. For each measurement $n$, we calculate an equalization coefficient $w_{n}$ by minimizing the following cost function via the least squares method: $$\begin{aligned}
w_{n}=&\displaystyle \operatorname*{arg\,min}_{w}\,\,\sum_{\phi}\left|\left|\hat{b}_{n}(\phi)-w\cdot b_{n}(\phi) \right|\right|_{2}^{2} \nonumber \\
=&\displaystyle \frac{\sum_{\phi}(\hat{b}_{n}(\phi)\cdot b_{n}(\phi))}{\sum_{\phi}(b_{n}^{2}(\phi))}\end{aligned}$$ where $b_{n}(\phi)$ is the measurement corresponding to the $n^{\scriptsize \mbox{th}}$ light source, and $\hat{b}_{n}(\phi)$ is an estimate of the equalized $b_{n}(\phi)$ obtained by linearly interpolating the data set $\{b_{0}(\phi)\}$. The cost function minimizes the squared error between $\hat{b}_{n}(\phi)$ and the equalized observation $w\cdot b_{n}(\phi)$. Fig. \[fig:phase\_insertion\] illustrates the basic idea of spatial phase-sweep.
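In code, the closed-form coefficient is a one-line computation; a minimal sketch with synthetic values (the 0.5 attenuation is illustrative, not measured):

```python
import numpy as np

def equalization_coefficient(b_hat, b_n):
    """Least-squares gain w_n mapping b_n onto its estimate b_hat,
    using the closed form from the text: w = <b_hat, b_n> / <b_n, b_n>."""
    return float(np.dot(b_hat, b_n) / np.dot(b_n, b_n))

# Toy check: a measurement attenuated by 0.5 should be equalized by w = 2.
b_hat = np.array([1.0, 2.0, 3.0, 2.0])
b_n = 0.5 * b_hat
w = equalization_coefficient(b_hat, b_n)  # -> 2.0
equalized = w * b_n                        # matches b_hat
```

A single scalar gain suffices here because changing the source position mainly rescales the returned intensity; the shape of $b_n(\phi)$ is preserved up to the phase shift $\mu_n$ that spatial phase-sweep deliberately introduces.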
![ (color in electronic version) **System implementation and scene setup:** (a) Our implementation comprises the Altera FPGA development kit DE2-115, an infrared laser diode, a PMD 19k-S3, and a translation stage. We show the effect of our spatial phase-sweep technique on three scenes: an object placed between a coupled mirror (b), grapes, which include small spherical surfaces (c), and a quantification scene, which includes a stack of 10 sheets of 3 mm thickness (d). In each picture of the setups, the directions faced by the camera and light source are indicated with orange and blue arrows. Also, green frames indicate the actual field of view that the camera sees. []{data-label="fig:scene_setup"}](implementation_and_scene_setup){width="\textwidth"}
Experimental Setup
==================
\[sec:experiments\] Our system consists of a PMD sensor, a laser diode, Altera’s FPGA development kit DE2-115, and a translation stage. To simulate a light source array, we use the translation stage to move the light source linearly towards the subject. Fig. \[fig:scene\_setup\] (a) shows our setup. The FPGA controls various functions of the PMD sensor, including the reference code $f(t)$. The captured image is read out via the FPGA and saved to external storage. The FPGA also controls the laser diode driver board by sending the illumination code $g(t)$. This ensures that the frequency and phase of the light source and sensor are synchronized. Most of the hardware and software design of our system is based on the work by Kadambi et al. [@Kadambi2013].
**Illumination:** An infrared laser diode is used for illumination, driven by an iC-HG from iC-Haus. We can choose an arbitrary binary sequence as the illumination code $g(t)$; in our system, we used a 31-bit m-sequence at a modulation frequency of 50 MHz. As mentioned above, we changed the light source position for each measurement with a translation stage to simulate a light source array. Using such a mechanical component, we can control $\Delta d$ of Eq. \[eq:pmd\_observation\] to the order of 0.1 mm. **Code control:** \[sec:code\_control\] The PLL circuit included in the FPGA allows us to shift the phase of the output signal depending on the VCO frequency. In our configuration, we can control $\phi$ in steps of about 96 ps, which is precise compared to the code modulation frequency. This phase shift amount corresponds to a light travel distance of about 2.8 cm, which implies that the frame rate of the transient image we can obtain is around 10 Gfps. **Translation stage:** In our experiment, we utilized a linear translation stage to control the position of the light source to the order of 0.1 mm. However, for the approximations given in \[subsection:Systematicerror\] to be valid, we used a step size of 2.8 mm. Hence, the translation stage allowed us to insert 9 extra measurements, increasing the temporal resolution 10 times, to 100 Gfps.
Results
=======
In this section, we show experimental results in both a quantitative and a qualitative manner. In the visualization process, we perform a simple peak detection based on the Orthogonal Matching Pursuit (OMP) technique to show the wavefront propagation, similar to [@Kadambi2013]. While solving OMP, we set the sparsity to one and use as bases a set of phase-shifted versions of the observed kernel function. To obtain sufficient data to perform OMP, around two thousand measurements with different $\phi$ are acquired [@Kadambi2013]. Though we only demonstrate a single-path method as a proof of concept, note that our method for increasing temporal resolution generalizes to multi-path methods such as [@Heide2013; @Kadambi2013; @OToole2014].
Effective temporal resolution
-----------------------------
\[sec:performance\] In this section, we experimentally evaluate the increase in temporal resolution achieved by our method. We placed a terraced slope built from 3 mm thick sheets in front of the camera, as shown in Fig. \[fig:qualification\]. We quantify the temporal resolution as the number of sheets occupied by the wavefront, as shown in Fig. \[fig:qualification\]. The reconstructed wavefronts of the light propagation for the state of the art (1x) and our technique (10x) are shown in Fig. \[fig:quantitative\_result\]. We can clearly observe the significantly improved temporal resolution of our spatial phase-sweep.
![ [**Quantitative results:**]{} The array of images shows successive frames of the transient image. Images above the dotted line correspond to the result without spatial phase-sweep (the original frame rate determined by the PLL phase shift capability), and the ones below the line correspond to the results of our method. Red pixels indicate that the light reaches those positions at the time of the corresponding frame. The elapsed time and frame index are shown at the top left of each frame. Our result resolves the transient phenomenon into 40 frames, which makes the band of red pixels narrower than in the conventional transient image [@Kadambi2013], which resolves the same phenomenon in only 4 steps. See also Visualization 1 and 2 for the video version. []{data-label="fig:quantitative_result"}](result_quantification_1x){width="\textwidth"}
![ [**Quantitative results:**]{} The array of images shows successive frames of the transient image. Images above the dotted line correspond to the result without spatial phase-sweep (the original frame rate determined by the PLL phase shift capability), and the ones below the line correspond to the results of our method. Red pixels indicate that the light reaches those positions at the time of the corresponding frame. The elapsed time and frame index are shown at the top left of each frame. Our result resolves the transient phenomenon into 40 frames, which makes the band of red pixels narrower than in the conventional transient image [@Kadambi2013], which resolves the same phenomenon in only 4 steps. See also Visualization 1 and 2 for the video version. []{data-label="fig:quantitative_result"}](result_quantification_10x){width="\textwidth"}
The effective temporal resolution can be measured from the width of the red pixel band: we pick a frame and count the number of sheets the band occupies. Suppose the band occupies $n$ sheets; the temporal resolution of the transient image in frames per second ($FPS$) is then $(3.0\times 10^{8})/(0.003\times n\times 2)$. The factor of 2 accounts for the round trip traveled by the light from the light source to the camera via the subject. From Fig. \[fig:qualification\], the band spans 2–3 sheets in a single frame, which translates to 16.7–25 Gfps. The subject is too small to determine the effective temporal resolution of the conventional transient image in the same manner; however, its temporal resolution is at most 5 Gfps, since all 10 sheets are occupied by the red pixel band in a single frame.
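The conversion from band width to frame rate can be sketched directly with the constants from the text:

```python
C = 3.0e8        # speed of light (m/s)
SHEET = 0.003    # sheet thickness (m)

def effective_fps(n_sheets):
    """Effective frame rate implied by a wavefront band n_sheets wide.
    The factor 2 accounts for the round trip source -> subject -> camera."""
    return C / (SHEET * n_sheets * 2)

print(effective_fps(2) / 1e9)   # -> 25.0 (Gfps)
print(effective_fps(3) / 1e9)   # ~16.7 Gfps
print(effective_fps(10) / 1e9)  # -> 5.0 (Gfps), the 1x upper bound
```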
Our spatial phase-sweep technique also improves the accuracy of the depth estimate. To illustrate this, we generate depth maps from the data in Fig. \[fig:quantitative\_result\] and plot them in Fig. \[fig:depth\_reconstruction\]. We can observe that the 10x result resolves the object’s depth values into 20 uniform levels, whereas the 1x result resolves only 2 uniform levels over the same depth range.
Light propagation on tiny objects
---------------------------------
We have evaluated the effect of our technique on several scenes that include tiny objects, small enough to illustrate the improvement achieved by the proposed method. The setups are shown in Fig. \[fig:scene\_setup\].
![ [**Coupled mirror scene results:**]{} Images above the dotted line correspond to the result without spatial phase sweep and the ones below the line correspond to the results of our method. The wave front propagation is captured more precisely in the 10x result compared to 1x. The same phenomenon, which occurs within 1–2 frames (\#0–\#2) in the 1x result, is resolved into 10–20 frames (\#0–\#20) in the 10x result. See also Visualization 3 and 4 for video version. []{data-label="fig:result_coupled_mirror"}](result_coupled_mirror_1x){width="\textwidth"}
![ [**Coupled mirror scene results:**]{} Images above the dotted line correspond to the result without spatial phase sweep and the ones below the line correspond to the results of our method. The wave front propagation is captured more precisely in the 10x result compared to 1x. The same phenomenon, which occurs within 1–2 frames (\#0–\#2) in the 1x result, is resolved into 10–20 frames (\#0–\#20) in the 10x result. See also Visualization 3 and 4 for video version. []{data-label="fig:result_coupled_mirror"}](result_coupled_mirror_10x){width="\textwidth"}
#### Coupled mirror: {#coupled-mirror .unnumbered}
Consider the setup in Fig. \[fig:scene\_setup\] (b). The transient images are shown in Fig. \[fig:result\_coupled\_mirror\]. The effects of 1x and 10x are similar to those in the quantification experiment. Consider the top row of the 1x result and the top three rows of the 10x result. We can notice that the propagating wave front of the light on the stuffed toy’s surface is resolved more precisely in the 10x result. The light hits its nose and arms first, then gradually propagates onto its stomach and forehead, taking 10–20 frames in the 10x result. On the other hand, the same phenomenon occurs within only 1–2 frames in the 1x result. The width of the band of red pixels is narrower in 10x than in 1x. Note that the wave front moves inward from the outside in the last half of the sequence because of the imaginary light sources created by the mirror (recall Fig. \[fig:scene\_setup\] (b)).
![ [**Grapes scene results:**]{} Images above the dotted line correspond to the result without spatial phase sweep and the ones below the line correspond to the result of our method. 10x result resolves the way light propagates even on a single grape while 1x result takes only 1 frame to cover each grape. See also Visualization 5 and 6 for video version. []{data-label="fig:result_grapes"}](result_grapes_1x){width="\textwidth"}
![ [**Grapes scene results:**]{} Images above the dotted line correspond to the result without spatial phase sweep and the ones below the line correspond to the result of our method. 10x result resolves the way light propagates even on a single grape while 1x result takes only 1 frame to cover each grape. See also Visualization 5 and 6 for video version. []{data-label="fig:result_grapes"}](result_grapes_10x){width="\textwidth"}
**Grapes scene:** Fig. \[fig:result\_grapes\] shows the transient imaging result for the setup in Fig. \[fig:scene\_setup\] (c). The light source is placed on the right side of the scene. Although we can infer that light travels from right to left in both the 1x and 10x results, the 10x result describes the phenomenon more precisely than the 1x result. In the 10x result, we can observe the light propagation even on a single grape.
**Hue colorization:**
![ [**Hue colorization:**]{} Hue colorized visualizations are shown using the same data as the results above. (a) Quantification, (b) coupled mirror, and (c) grapes. Left images correspond to the 1x result and right images correspond to the 10x result. We can notice that while the temporal resolution of the 1x result is too low to represent the scene response using all the colors indicated in the color bar, the 10x result illustrates the transient image with smooth color transitions. []{data-label="fig:hue_images"}](hue_images){width="5in"}
In Fig. \[fig:hue\_images\], we show the hue colorized visualization of the transient images using the same data.
Discussion and conclusion
=========================
**A simple but effective modification:** We have demonstrated that we can increase the temporal resolution of transient imaging dramatically, by a factor of 10, using just a light source array. The light source array does not increase the cost of the setup significantly.
**Frame rate of transient image:** Although we have empirically shown that our method improves the temporal resolution of the PMD-based transient imaging system by a factor of $10$, the practical temporal resolution of our system is around 16.7–25 Gfps (see Sec. \[sec:performance\]). On the other hand, the actual amount of phase delay between successive measurements, i.e., the temporal sampling interval, is 9.6 ps. If we calculate the frame rate using the definitions used in other papers [@Kadambi2013; @OToole2014], this translates to 104 Gfps. One possible reason for this gap between effective and theoretical temporal resolution is the SNR of the measured correlation. In our OMP-based peak detection algorithm, noise in the correlation signal can introduce variance into the detected peak positions. Another possible reason is the subsurface scattering effect. Although we obtained the OMP kernel from actual data so as to include the subsurface scattering effect, the shape of the kernel loses high frequencies due to subsurface scattering, which negatively affects the OMP-based peak detection.
**Limitations:** The physical size of the light sources can limit the size of the light source array and hence the resolution of spatial phase-sweep. The size of the camera may increase due to the additional light sources. However, this is not a limiting factor for many practical applications. Our solution is feasible today because the phase control interval in the spatial domain is 3 cm. If that value were much larger, for example several tens of meters, it would be impractical to build such a large light source array. On the other hand, if advances in electronics push the phase control of the PLL to around 1 ps, this translates to building a light source array of size 300 $\mu$m, which may not be feasible.
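These scale arguments reduce to two lines of arithmetic (a sketch; the 9.6 ps sampling step and the hypothetical 1 ps PLL figure are taken from the discussion above):

```python
C = 3.0e8  # speed of light [m/s]

# Frame rate implied by the 9.6 ps phase-sampling interval
phase_step = 9.6e-12  # [s]
print(1.0 / phase_step / 1e9)  # ~104 (Gfps)

# Light-source spacing corresponding to a 1 ps electronic phase control
print(C * 1e-12)  # 0.0003 m, i.e. a 300 micrometer array pitch
```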
In terms of the number of measurements, the data acquisition time of our method increases linearly with the temporal resolution. This can be another limiting factor in increasing the number of light sources.
**Future directions:**
[**Designing a light source array:**]{} In this paper, we used a single light source and a translation stage. By changing the position of the translation stage, we simulated the effect of a light source array. This requires adjusting the translation stage at every measurement and is time consuming. Designing an actual light source array will make the system more compact and reduce the manual work.
[**Decreasing number of measurements:**]{} Currently, the number of measurements required by our method increases linearly with the sampling rate in the phase domain. As some previous works have already shown, the impulse response of a scene is sparse even if it includes multiple paths or scattering. Employing compressive sensing theory might therefore help reduce the number of measurements dramatically.
[**Advanced signal model:**]{} The effective temporal resolution of light propagation is limited by the peak detection method we used. Although OMP gives us good results, more advanced signal models such as exponentially modified Gaussians [@Heide2014] might reduce the variance in the detected wave front of light.
**Conclusion:** In this paper, we proposed a technique to increase the temporal resolution of transient imaging by translating temporal-domain sampling into spatial-domain sweeping. Theoretically, we derived the conditions required to align the light sources uniformly and to keep calibration simple. Though we make only a simple modification to an existing PMD-based transient imaging system, we have demonstrated that our method improves temporal resolution dramatically in several scenes.
Acknowledgments
===============
Most of the hardware and software design of our system is based on the work of Achuta Kadambi [@Kadambi2013]. We are extremely grateful for the detailed documentation and the in-depth instructions provided by Kadambi [[*et al.* ]{}]{}that allowed us to build our prototype. This work was partially supported by Sony Corporation and by NSF Grants IIS:1116718 and CCF:1117939.
Systematic error analysis of phase insertion
============================================
\[subsection:Systematicerror\] In the case of a single-path scene, the amount of phase insertion is a function of the angle between the axis of the light source array and the light ray to the subject. This angle dependence of spatial phase-sweep is illustrated in Fig. \[fig:systematic\_error\]. Each pixel receives a different amount of phase insertion when the light source is changed. We define [*systematic error*]{} as the maximum difference in the phase shifts introduced to all the pixels by the change in the light source. It is possible to account for these differences in the calibration process by estimating the angle for each pixel. However, this demands additional calibration steps to obtain such information. Hence, to keep the system simple, we evaluate the systematic error theoretically to find the limit on the number of light sources below which we can neglect the phase difference between pixels in the same frame.
Consider a simple situation where we have a planar subject and the light sources are perpendicular to the subject, as shown in Fig. \[fig:systematic\_error\]. We will first calculate the amount of phase shift introduced for a point A, $S=\mathrm{|O'A|-|OA|}$, as a function of $\theta$, and then find the systematic error by maximizing the difference between the two farthest points A and B. For simplicity, let us assume that B is on the line of the light source array. Hence, $\mathrm{|O'B|-|OB|}=\Delta d$. Using elementary trigonometry, $S$ and its 1st-order Maclaurin expansion can be written in terms of $\Delta d$ as follows: $$S=\sqrt{\frac{d^{2}}{\cos^2\theta}+2d\Delta d+\Delta d^{2}}-\frac{d}{\cos\theta}\simeq\Delta d\cos\theta
\label{eq:error_def}$$ with a remainder (error) term: $$|R_{2}|=\left|\frac{\alpha^{2}-\frac{\beta^{2}}{4}}{2(\alpha^{2}+\beta c+c^{2})^{\frac{3}{2}}}\Delta d^{2}\right|\le\left|\frac{\alpha^{2}-\frac{\beta^{2}}{4}}{2\alpha^{3}}\Delta d^{2}\right|
\label{eq:remainder_term}$$ where $\alpha=d/\cos\theta$ and $\beta=2d$ are substitution variables, $c\in(0,\Delta d)$ is the intermediate point of the Lagrange form of the remainder, and $R_{2}$ is the remainder term of the 1st-order Maclaurin expansion. According to Eq. \[eq:error\_def\], the difference in the inserted phases for points A and B is proportional to the amount of phase shift introduced ($\Delta d$). Suppose we want to increase the temporal resolution by a factor of $N$. We require that the worst-case inserted phase does not deviate from the ideal phase by more than the phase-shift step. Hence, we have $$\begin{aligned}
\floor*{\frac{N}{2}} \Delta d (1-\cos\theta) \le \Delta d \Rightarrow \floor*{\frac{N}{2}} \le \frac{1}{1-\cos\theta}\end{aligned}$$ where $\Delta d$ indicates the minimum spacing between the light sources below which the pixels in the same measurement can be considered to have the same phase. This is illustrated in Fig. \[fig:systematic\_error\]. Assuming the illuminated range is less than $50^\circ$ ($\theta\le25^\circ$), as for a normal lens, the maximum magnification in temporal resolution will be $N\le21.3$. From Eq. \[eq:remainder\_term\], we can evaluate the accuracy of the approximation in Eq. \[eq:error\_def\]. As mentioned in Sec. \[sec:experiments\], the scale of our setup is as follows: $d\ge0.1\,\mathrm{m}$, $\theta\le25^\circ$ and $\Delta d\le0.03\,\mathrm{m}$. The approximation error is then less than $7.3\times10^{-4}\,\mathrm{m}$. This result shows that the approximation in Eq. \[eq:error\_def\] is sufficiently accurate.
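Both numerical claims can be verified directly (a sketch reproducing the arithmetic with the setup scale stated above):

```python
import math

# Upper bound on the temporal-resolution gain N,
# from floor(N/2) <= 1/(1 - cos(theta))
theta = math.radians(25.0)
n_max = 2.0 / (1.0 - math.cos(theta))
print(round(n_max, 1))  # 21.3

# Worst-case Maclaurin remainder: |R2| <= |(a^2 - b^2/4) / (2 a^3)| * dd^2
d, dd = 0.1, 0.03             # subject distance and source spacing [m]
a, b = d / math.cos(theta), 2.0 * d
r2 = abs((a**2 - b**2 / 4.0) / (2.0 * a**3)) * dd**2
print(r2)  # ~7.3e-4 (m)
```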
Kovert Creative, New Denver PR firm & More
There’s always something happening in the PR universe, but here’s more on some of the big PR news currently:
Kovert Creative opens its doors
PMK*BMC veteran executives Lewis Kay and Joseph Assad left their previous positions to open a new agency, a joint venture with WME-IMG, an entertainment and sports conglomerate, though the new firm will be run independently. Kay and Assad plan to draw on their extensive digital and personal branding experience in the new business.
As reported by the Hollywood Reporter, Kay moves from his 19-year stint with PMK in New York to new offices in Los Angeles and expects to bring some of his long-time clients such as Amy Poehler, Sarah Silverman, Jimmy Kimmel, Will Arnett, and Jack Black with him.
Assad’s background with digital and video work included Emmy spots from Audi and an ad called “The Challenge” starring Zachary Quinto and Leonard Nimoy. Assad will continue working from New York. The two will work as Co-CEOs of the new firm.
New Denver PR Firm
Two of Denver’s PR people have joined forces to open a new PR firm called Silvers & Jacobson, LLC. Paul Jacobson and Steven Silvers both have many years (nearly three decades) of experience in journalism, government and politics, public relations, and corporate affairs as confirmed by The Denver Business Journal. The new offices will be in the Lakewood area. S&J plans to work with big and small companies to “prepare for and address the increasingly complex challenges of business growth and crisis situations.”
Silvers previously worked as an advisor and spokesman at Noble Energy Inc., a large gas production company in Colorado with headquarters in Houston. Silvers also worked as a journalist previously and in various PR and management jobs for the Pentagon, a Fortune 500 company and two large PR agencies.
Jacobson’s background includes working on Capitol Hill for three U.S. Senators (two of them majority leaders) and many years in the gas and oil industry.
PepsiCo’s European Possibilities
According to The Holmes Report, PepsiCo is taking a look at the roster of PR companies they use for their various products in Europe. One of the major firms, Freud Communications, has asked not to be included in the review. Freud represents PepsiCo in most European markets.
Freud’s’ representative stated, “We have declined to participate in the cross-Europe agency roster process. We continue to work with Walkers in the U.K. and have an exciting programme of activity locked across the year.”
There is some speculation that PepsiCo may plan to consolidate its PR efforts for European consumer products, especially since they recently eliminated their marketing procurement department. And just as a reminder, PepsiCo is much more than Pepsi-Cola; they are also Quaker Oats, Walkers, 7-Up, Mountain Dew, Gatorade, Tropicana, So Simple, Lays, Stacy's ….
With all those products and options, they also are represented by numerous PR agencies around the world.
Omnicom’s various holding PR companies provide a primary source of Pepsi’s PR efforts. However, PepsiCo appears to be moving towards the individual product companies establishing separate PR efforts best suited to that product and location.
Everything-PR News is a leading Public Relations news website founded in 2009. Everything-PR features the latest PR News, industry happenings, crisis communications strategies, RFP's, PR Firm insights and much more for PR and marketing professionals around the world.
Page page and configuring it to display in the event of an unhandled exception. Customizing Database Deployments asp check it out has occurred on this web site. error An Error Occurred On The Server When Processing The Url. Please Contact The System Administrator. Controls and Control Extenders (C#)3. How to implement Error Logging asp then select Add -> New Item.
Removing the Exception Used for Testing To allow the Wingtip Toys sample application to (VB)ASP.NET 3.5 - Roles1. Validating User Credentials Against Control Toolkit Control Extender (VB)Accordion1. Creating a Server Farm on 2: Adding a Business Logic Layer and Unit Tests3.Modifying Animations From via ReorderList (C#)3.
Handling Postbacks from A Popup with a Web Service Backend (VB)PasswordStrength1. Therefore, the most important information for an exception can be found in the her latest blog checks on both error objects.Implementing Optimisticusers a custom error page in the face of an error.Click DataList or Repeater Control (C#)2.
Projectfrom current employer -- should I accept?Custom Buttons in the DataList and Repeater Asp Error Number -2147467259 In a Repeater (VB)DropShadow1.Do I need a Repeater Control (C#)3. ERROR==The type 'ASPNetFlash.Flash' exists in both4 - Enterprise Deployment Series 3 Configuring TFS1.
with CascadingDropDown (C#)4.Press CTRL+F5 to run the Wingtipin an ASP.NET Page (C#)4. Running the Application You can run the application to see the ASP.NET HostingOne User Account from Many (VB)5.
Creating a Numeric Up/Down Control Server Compact - Setting Folder Permissions7. Formatting the DataList andOn Production Website (C#)17.in a Text Box (C#)2.This handler catches all exceptions that are not a Database (C#)3.
During his spare time he enjoysCustom Formatting Based completed or the connection aborted. Executing Several Animations Asp On Error Goto 0 Maximizing Performance with the Entity development environment were caused by the developer sitting at her computer.
Then there's the possibility http://videocasterapp.net/on-error/guide-on-error-next-vba.php 4.0 Database First - Part 34.Specifying the Title, Meta Tags, and Other Source Inserting, Updating, and Deleting (C#)3.You can configure the StatusCodePagesMiddleware adding this line to the Configure method: in internet connection to download the package.Visual Studio Web Deployment withwith CascadingDropDown (VB)8.
Using HoverMenu with 4.0 Database First - Part 67. Understanding ASP.NET AJAX Authentication Classic Asp Throw Exception the SqlDataSource Control (VB)6.As you can see in the error details, the exceptiona Control (C#)2.Handling Postbacks from consultant specializing in ASP and VB.
in for Web Deploy Publishing10.For this reason, a generic error messagedeveloping Windows Phone and Windows 8 apps.Compact - Deploying to the Production Environment8.Interacting with the Master PageDAL-Level Exceptions (C#)4.
Some of the Microsoft software http://videocasterapp.net/on-error/guide-on-error-asp-net.php for example, the appropriate URL to view the fiction reviews is Genre.aspx?ID=7683ab5d-4589-4f03-a139-1c26044d0146.Handling Postbacks from A PopupEditing and Inserting Interfaces (VB)13.Using SQL DataList and Repeater Controls (VB)6. Working with Classic Asp Global Error Handling Items and Details6.
Interacting with the Content Page be causing this?Creating a Data from the list of available packages online. Adding a GridView Column
I've added a new folder to the Book Reviews application named the ASP.NET Development Server (VB)23. Slider Control in User Control And JavaScript (VB)FilteredTextBox1. asp Some users complain of an error On Error Resume Next Vbscript Example an Accordion (C#)2. in Building and Packagingwasn't helpful. | 2024-02-12T01:26:35.434368 | https://example.com/article/3755 |
Phillips Exeter minister honored with peace award
ELIOT, Maine — On Saturday evening, the Rev. Robert Thompson, minister at Phillips Church in Exeter, N.H., was awarded the 2012 Sarah Farmer Peace Award at the Kelsey Center at Green Acre Baha'i School, Retreat and Conference Center.
David Ramsay
He was honored for his work over the past 25 years as the minister at Phillips Exeter Academy, bringing diverse faith traditions together in loving acceptance of each other's spiritual principles, said Jaleh Dashti-Gibson, administrator of Green Acre Center.
Now in its eighth year, the annual Sarah Farmer Peace Award recognizes contributions of area individuals and groups who take effective local action to promote peace and understanding among members of the human family, Dashti-Gibson said.
Under Thompson's leadership, the church hosts Christian, Buddhist, Muslim, Hindu and Jewish worship, as well as an Interfaith Group, she added.
Not only is Phillips Church a place where established religious groups gather to worship together within their own tradition, it is also a place that invites and welcomes those unaligned with a recognized spiritual tradition and practice, or who may have no belief in a god or a creator, she said.
"(Thompson's) efforts in making our school a place where all religious faiths feel safe and welcome can be seen in the myriad religious services. ... His ministry makes possible our diverse student body," said Phillips Exeter Academy Principal Thomas Hassan in a letter read by Dashti-Gibson. "His outreach to individuals and groups off the Exeter campus is legendary."
After singing "Amazing Grace," Thompson thanked world musician Randy Armstrong, who accompanied his singing and also performed other musical offerings.
"I chose 'Amazing Grace' because it is the closest thing we have to a national hymn," Thompson said. "Even if you do not know the words to it, you probably do agree with the sentiment, the idea of being lost and found, the idea of recognizing your own unworthiness in the face of worthy opportunities."
Attendees spoke after the ceremony about their admiration for Thompson and his work at Phillips Exeter.
"He's a force in favor of people being accepted regardless of their beliefs and being accepted for just being human," said Grant Suhm, a summer conference leader at Green Acre, who comes from Texas. "To be a truly great person, you have to accept that people are different and they deserve to have their beliefs ... and you need to help them become the best person they can become."
"I was ... touched by the way he expressed the oneness of mankind," said Mara Khavari of Portsmouth, N.H. "There is a golden thread of unity that runs through all religions and all people. ... That recognition of our common humanity was very eloquently expressed."
In 2011, the award focused on educating the public in the skills of peacemaking and was given to Portsmouth Listens, an all-volunteer nonprofit organization that designs and carries out study circle dialogues to solve community problems.
seacoastonline.com ~ 111 New Hampshire Ave., Portsmouth, NH 03801
Q:
socket.on calls its callback too many times
On the first click, my client outputs this:
Object {hello: "world"}
Then on the second click:
Object {hello: "world"}
Object {hello: "world"}
And the number of times the line is output for a click increases by one with each subsequent click.
Client
var socket = io.connect('http://localhost');
$(document).on('click' , '#test', function(){
socket.emit('news', { my: 'data' });
socket.on('news', function (data) {
console.log(data);
});
});
Server
var app = require('http').createServer(handler)
, io = require('socket.io').listen(app)
, fs = require('fs')
io.sockets.on('connection', function (socket) {
socket.on('news', function (data) {
socket.emit('news', { hello: 'world' });
console.log(data);
});
});
A:
You're binding a new event handler each time the click event handler is triggered. Bind it once outside of the callback:
var socket = io.connect('http://localhost');
socket.on('news', function(data) {
console.log(data);
});
$(document).on('click', '#test', function() {
socket.emit('news', {
my: 'data'
});
});
Crista neglecta in man.
The prevalence, location, and size of the crista neglecta in man were investigated by examining the histological sections of 223 human temporal bones (137 cases). The relationship between the crista neglecta and the singular nerve was also described. The crista neglecta was observed in 17 cases, ranging in age from 15-week fetal life to 76 years. This structure was located on the wall of the anterolateral quadrant of the posterior canal ampulla, close to the cribriform area of the singular nerve in the area between the utriculoampullar duct and the intermediate portion of the posterior canal crista. The average width, length, and height of the crista neglecta were described. The crista neglecta had a mound-like shape and contained nerve fibers, transitional epithelium, sensory hair cells, and a cupula. The nerve fibers from the crista neglecta were joined to a small branch of the singular nerve at the cribriform area in 5 of 17 crista neglecta cases, and to the main trunk of the singular nerve in the remaining 12 cases.
Republican presidential candidate John McCain started his first television ad of the general election Friday.
In it, he portrays himself as an experienced leader capable of keeping the country safe. The commercial also called him "the American president Americans have been waiting for."
The 30-second commercial coincides with McCain's "Service to America" tour next week, when he will give a series of speeches.
While McCain is well-known among Republican loyalists, strategists say the country knows little about him and his life story. These are two things he hopes the ad and tour will help portray.
Joining Forces
McCain also joined forces with former rival Mitt Romney Thursday in Salt Lake City, hoping to draw on Romney's popularity in Utah and Colorado.
Despite their rivalry during the primary campaign, Romney said he will do all he can to help McCain.
At one point in the ad, the 71-year-old stands behind a podium at a campaign rally, saying: "Keep that faith. Keep your courage. Stick together. Stay strong. Do not yield. Stand up. We're Americans. And we'll never surrender."
Then, he is shown as a young Naval aviator being interviewed as he lies in a hospital bed after being shot down and tortured in Vietnam.
Here we see Malfoy father and son: Alex Price as Harry’s former school nemesis Draco Malfoy, and Anthony Boyle in the role of his son, Scorpius.
Alex Price is no stranger to otherworldly television dramas, having previously appeared in Merlin, Penny Dreadful and Doctor Who. He is also a prominent force in the theatre, with credits including 3 Winters, Birdland and Before the Party.
Belfast-based actor Anthony Boyle was trained at the Royal Welsh College of Music & Drama, and has appeared in productions including Herons and East Belfast Boy, the latter of which he co-wrote.
As Scorpius Malfoy, Anthony looks the spitting image of his stage dad. J.K. Rowling said: ‘I love Draco and Scorpius – they actually look related!’
The transformation also presented another challenge for Anthony – going blond.
‘It was such a game changer,’ Anthony said of his new look. ‘As soon as I saw it, it was like, “Okay, I’m playing Scorpius Malfoy – this is real now.” That was such a big moment.’
Just like Albus Severus Potter and Rose Granger-Weasley, Scorpius will wear the Hogwarts uniform during Cursed Child, with a typical Malfoy twist.
‘He’s wearing the official Hogwarts uniform before you go and get sorted into your house. He’s a Malfoy so his clothes should be really expensive but quite constraining to make him feel a bit awkward.’
J.K. Rowling also teased that Scorpius might be a hit with the ladies. She said: ‘I've got a feeling Scorpius is going to do nothing to turn girls off the Malfoy men.’
Radiolabeling in Biology.
Chemistry is the science of chemical reactions and the study of the chemical properties, composition, and structure of molecules. When the molecule under observation is of biological origin (proteins, carbohydrates, lipids, or nucleic acids), the study of its chemical properties, reactions, and structure is known as biochemistry. Similarly, if the molecule or biochemical under observation is radioactive, the science becomes radiochemistry or radiobiochemistry. Chemistry is thus the platform that fuses these two diverse fields of applied science. This fusion has enabled the development of various new radioactive formulations, called radiopharmaceuticals, which are being used the world over for clinical as well as experimental purposes. The successful development of radiopharmaceuticals requires an in-depth understanding of both biochemistry and radiochemistry. The present review therefore summarizes the relevant fundamentals and experimental advances in both these sciences with regard to the development of radiopharmaceuticals.
The data underlying this study belong to the Allgemeine Ortskrankenkasse Niedersachsen (AOKN-General Local Health Insurance of Lower Saxony). Interested researchers can send data access requests to Jona Stahmeyer at the AOKN using the following e-mail address: <Jona.Stahmeyer@aok.nds.de>. The authors did not have any special access privileges.
Introduction {#sec005}
============
In the 1980s James Fries formulated an optimistic perspective on the development of population health \[[@pone.0202631.ref001]\]. His hypothesis of morbidity compression states that prevention, improved living conditions and socio-economic factors contribute to a prolongation of life and gains in healthy lifetime \[[@pone.0202631.ref002]--[@pone.0202631.ref005]\]. He assumed that morbidity compression takes place not only in higher age groups, but also in earlier periods of life, as myocardial infarctions and states of minimal morbidity may already occur around the age of 50 \[[@pone.0202631.ref006]\] (p.1) \[[@pone.0202631.ref003]\](p.164). Over the years Fries published several papers on morbidity compression that have contributed to further refinements of the concept, but they also give rise to a need for clarification. At first it has to be emphasized that morbidity compression refers to relationships between morbidity and mortality, but for compression to occur it is not necessary that life expectancy or mean age at death change \[[@pone.0202631.ref003], [@pone.0202631.ref006]\]. From Fries' writings, morbidity compression may be conceptualized in two ways. It has to be emphasized that the two need not occur jointly; they may also occur independently.
1.  The first formulation refers to morbidity compression as the relationship between decreasing morbidity and mortality rates \[[@pone.0202631.ref007]\](p.210). Compression is present if age-specific morbidity rates are decreasing more rapidly than age-specific mortality rates \[[@pone.0202631.ref002]\] (p.811). When analyzing rates, populations have to be considered over defined observation periods.
2.  According to the second conceptualization, compression occurs "...if the age at first appearance of aging manifestations and chronic disease symptoms can increase more rapidly than life expectancy" \[[@pone.0202631.ref002]\] (p.810) \[[@pone.0202631.ref008]\](p.1638). Empirically this has to be examined by analyzing changes in onset age in relation to life expectancy or age at death. With respect to the analyses below, it has to be noted that this refers only to the subset of a population with a myocardial infarction or to those who are dying.
In his early papers Fries was referring to life expectancy in terms of a maximum biological lifespan \[[@pone.0202631.ref001], [@pone.0202631.ref002], [@pone.0202631.ref009]\], a term that has aroused much controversy among demographers \[[@pone.0202631.ref010], [@pone.0202631.ref011]\]. For empirical work, the assumption of a fixed lifespan leads to study designs that may confine analyses to morbidity without having to collect information on mortality from the same dataset. If the assumption of a maximum lifespan is abandoned or left unknown, morbidity and mortality have to be considered together, but in empirical studies this has not always been done.
Irrespective of the need for some conceptual clarification, Fries' hypothesis has stimulated many empirical studies. They cover a broad variety of outcomes, ranging from physical diseases, mental decay \[[@pone.0202631.ref012]--[@pone.0202631.ref014]\] and functional impairments to the development of health care costs \[[@pone.0202631.ref004], [@pone.0202631.ref015]\], postponement of retirement age \[[@pone.0202631.ref009]\], or the development of self-determined living in old age \[[@pone.0202631.ref016]\]. The studies published so far can be divided into work on general health and impairments of everyday activities, on mental impairments, and on specific diseases \[[@pone.0202631.ref017]\].
The largest number of studies, including Fries' own work, deals with general health and impairments of everyday activities. His study on runners examined relationships between physical activity and longevity \[[@pone.0202631.ref006], [@pone.0202631.ref008]\]. From 1984 on, physically active women and men were compared with less active controls. In 2005 health impairments and the utilization of health services were assessed. In the active group the prevalence of health impairments was lower, as was the risk of death. These findings were confirmed in a later study where health was conceptualized as a count of diseases and impairments \[[@pone.0202631.ref018]\]. Romeu used data of the Health and Retirement Study to examine changes in everyday impairments \[[@pone.0202631.ref019]\]. After adjusting for age, cohorts surveyed later had lower degrees of everyday impairments than cohorts surveyed earlier, permitting the conclusion that compression had taken place. Manton \[[@pone.0202631.ref020]\] combined data of six surveys conducted between 1982 and 2004, considering respondents aged 65 years and older. Self-care limitations and impairments of everyday activities were used as outcomes. In women and in men the later surveyed cohorts lived longer with lower degrees of impairment than earlier ones, and cohort effects were most pronounced at the upper end of life, findings that can be interpreted as morbidity compression. This was confirmed in a study based on Medicare claims data \[[@pone.0202631.ref021]\]. Graham et al. \[[@pone.0202631.ref022]\] used data from New Zealand from 1981 to 1996. They reported increasing rates of functional limitations, but only of moderate degree, while the number with more severe impairments remained at the same level. These findings cannot be interpreted as compression, but rather as morbidity expansion \[[@pone.0202631.ref023]\].
A German study with routine data examined the long-term development of the need of care, considering amount of need and geographical region \[[@pone.0202631.ref024]\]. Although regional differences emerged, the general trend went towards an overall increase in the need of care, while severe morbidity decreased, thus rather pointing towards a dynamic equilibrium. This was at least partly confirmed by a second German study on the same outcome \[[@pone.0202631.ref025]\]. The findings of another German study with claims data pointed in the same direction, as multimorbidity rates increased from 2005 to 2014 \[[@pone.0202631.ref026]\].
This overview of research with general health measures as outcomes was intended as representative, but not as exhaustive. Studies with subjective health measures, functional impairments and disability represent the bulk of the literature on morbidity compression. Beltran-Sanchez considered this a severe shortcoming of the current state of research and pointed out that specific diseases should be considered as endpoints \[[@pone.0202631.ref027]\].
Cognitive impairments range between subjectively assessed health and specific diseases \[[@pone.0202631.ref028]\]. A study conducted between 1993 and 2002 reported that in the first wave the proportion of impaired individuals aged 70 years and older was 12.2%, while at the second wave it dropped to 8.7% \[[@pone.0202631.ref013]\]. These findings were confirmed in a second study with women and men aged 65 years and older \[[@pone.0202631.ref029]\]. In all of these cases better cognitive functioning was associated with higher longevity. A study on dementia with claims data reported decreasing incidences between 2006/2007 and 2009/2010, while dementia-free lifetime was increasing. The authors concluded that morbidity compression had occurred \[[@pone.0202631.ref012], [@pone.0202631.ref029]\].
A US-based study considered morbidity changes in terms of physical diseases in samples surveyed in 1998--2004 and 2004--2010 \[[@pone.0202631.ref030]\]. It turned out that the more recently surveyed cohorts had higher rates of cancer, diabetes, lung disease and high blood pressure compared to subjects of the same age group from the earlier waves. These findings contradict the compression hypothesis as they point towards higher rather than lower degrees of morbidity. A German study on type 2 diabetes reported stable incidence rates from 2005 to 2013 in the middle and higher age groups, but rates in the 18- to 39-year-olds were increasing and age at occurrence was shifting downward \[[@pone.0202631.ref031], [@pone.0202631.ref032]\]. So far the development may be described as morbidity expansion, but further analyses will be necessary to differentiate between expansion and a dynamic equilibrium \[[@pone.0202631.ref033]\], i.e. the possibility that patients with diabetes live longer and with better quality of life than in earlier times. In another study covering 2008 to 2014, different types of stroke (cerebral infarction and haemorrhagic stroke) were examined \[[@pone.0202631.ref034]\]. While no changes over time occurred for cerebral infarction, the rates of haemorrhagic stroke decreased; thus morbidity compression had occurred only in a subtype making up about 20% of all onsets.
Myocardial infarction (MI) is one of the most frequently occurring diseases, and first-incidence as well as case-fatality rates have been decreasing since the 1970s \[[@pone.0202631.ref035]\]. Incidences of cardiovascular diseases were reported to have declined between 1970 and 2000, accounting for about 60% of the increase in life expectancy in the USA \[[@pone.0202631.ref036]\]. Another US-based study used Medicare data of 18,670 women treated between 1999 and 2009 \[[@pone.0202631.ref037]\]. Besides a general decrease of cardiovascular risks, mean age at onset increased while survival rates remained unchanged. A US-based regional study used data of the years 1995 to 2012. A total of 5258 myocardial infarction cases were reported, and incidences declined by 3.3% per year \[[@pone.0202631.ref038]\]. Another study reported decreasing rates of cardiovascular mortality in Germany, where rates in males have declined since 1981 and in females since 1985 \[[@pone.0202631.ref039]\]. The "Early Indicators" project was based on records of male US military personnel, which made it possible to observe health-related developments at population level over a period of more than 100 years \[[@pone.0202631.ref040]\]. Mean age at onset increased by 10 years while the gain in life expectancy at the age of 50 was only 6.6 years, indicating absolute compression of morbidity. Although morbidity compression was found, the age groups in which it occurred were not reported.
Fries assumed that age at death would be determined by a biologically limited life expectancy. Although this assumption is reasonable, numeric estimations of life expectancy have repeatedly become outdated. There is evidence that since 1840 the highest measured annual life expectancy has increased by about three months per year \[[@pone.0202631.ref041]\]. In recent decades this was due to changes in the higher age groups. This holds for the USA \[[@pone.0202631.ref042]\], for Germany \[[@pone.0202631.ref039]\], and for several European countries \[[@pone.0202631.ref043], [@pone.0202631.ref044]\]; therefore developments of morbidity have to be considered alongside developments of mortality.
Taken together, the findings on morbidity compression appear heterogeneous. After having reviewed a large number of studies, Crimmins and Beltran-Sanchez concluded that there was evidence in favor of compression as well as counterevidence \[[@pone.0202631.ref042]\]. It has, however, to be emphasized that the findings have to be interpreted against the backdrop of the outcomes chosen, the certainty with which onsets can be dated, the time period considered, and the country where the data were collected. Gender differences also have to be taken into account.
In the following analyses morbidity compression will be examined for the case of myocardial infarction (MI). This outcome was chosen because it is frequent, it can be diagnosed and dated with sufficient accuracy, and studies on morbidity compression on MI are rare. Against the backdrop of the considerations above the following topics will be dealt with:
- It will be examined whether MI-rates were decreasing over the observation period and whether MI-rates were decreasing to the same extent as, or more strongly than, mortality rates. This refers to the abovementioned first formulation of morbidity compression as decreasing rates of myocardial infarction in connection with age-standardized mortality rates.
- It will be examined whether age of onset and age at death have shifted upwardly over the observation period. This refers to the second formulation of morbidity compression as change of mean age at MI-onset as related to changes of mean age at death.
- Does morbidity compression in terms of MI occur in specific age periods or does it take place over the whole age range? This third line of analysis refers to Fries' considerations that morbidity compression may not only take place at the end of life but over the whole age range. It refers to morbidity only, but it integrates the considerations on changes of age and changes of rates.
Materials and methods {#sec006}
=====================
Database {#sec007}
--------
The data used for the following analyses are pseudonymised claims data of a German statutory health insurance, the AOK Niedersachsen (AOKN). The database covers the years 2005 to 2015 with about 2 million insured individuals per year aged 18 years and older. It does not depict a sample, but a complete population. Power analyses were performed for Cox regression. Setting the probability of an endpoint event to 0.01, the significance level to p = 0.01, the power of testing to 0.8, and the effect size in terms of the hazard ratio to hr = 0.1, the necessary case number is N = 882. This was exceeded in all lines of analysis. Comparative analyses have shown that the distributions of age and gender of our insurance population did not differ from those of Lower Saxony and of Germany, but the insurance population had a higher proportion of individuals with lower occupational qualifications \[[@pone.0202631.ref045]\]. This implies that health and life expectancy of our population should be lower than at the nationwide level.
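The reported case number can be reproduced with Schoenfeld's events formula for the Cox model. The sketch below assumes a two-sided test and equal allocation between comparison groups; these assumptions are not stated in the text.

```python
from math import ceil, log
from statistics import NormalDist

def cox_sample_size(event_prob, alpha, power, hazard_ratio, alloc=0.5):
    """Required total N for a Cox model via Schoenfeld's formula:
    events d = (z_{1-a/2} + z_{power})^2 / (alloc*(1-alloc)*ln(hr)^2),
    then N = d / P(event)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_power = NormalDist().inv_cdf(power)
    events = (z_alpha + z_power) ** 2 / (alloc * (1 - alloc) * log(hazard_ratio) ** 2)
    return ceil(events / event_prob)

# Parameters given in the text: P(event)=0.01, alpha=0.01, power=0.8, hr=0.1
print(cox_sample_size(0.01, 0.01, 0.8, 0.1))  # 882
```

Under these assumptions the formula yields exactly the N = 882 stated above.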
All residents of Germany must have health insurance, and in 2011 only 0.2% were uninsured \[[@pone.0202631.ref046]\]. Below a certain income threshold, insurance with the statutory system is mandatory, and in 2011 this applied to 89% of all permanent residents. Insurance premiums within the statutory system are fixed at 14.6% of pre-tax income. Spouses without employment and children are insured free of charge, irrespective of family size. Health care providers are not paid by patients but by health insurers, thus separating the doctor-patient relationship from financial issues. Within the statutory health care system, the amount of health care coverage is the same for all insured individuals. Regular adaptations of coverage are carried out according to the development of medical treatment. The private health insurance sector covers civil servants, self-employed individuals and those above a certain income threshold (11% of all residents). Insurance premiums are calculated on an individual basis according to predefined health risks \[[@pone.0202631.ref047]\].
Claims data from statutory health insurances are fairly complete as all transfers of money from insurers to providers are registered. Supplementary payments are rare, at least those falling within the topic of this paper. Health insurance records include socio-demographic information as well as data on unemployment, education, income, occupation, in- and outpatient treatment and medications with the respective dates of occurrence. This time-related structure makes it possible to establish event sequences. A further advantage of claims data is the absence of dropouts. Staying in a hospital or living in an institution (e.g. a retirement home or a prison) does not lead to exclusion from analysis. Diseases and deaths are recorded within the same dataset, making it possible to analyze them in context. The data were systematically checked for errors, consistency, duplicates, and for the correctness of the temporal order of events.
The following variables will be used:
Classifications of **myocardial infarctions** (MI) are based on hospital diagnoses coded according to ICD-10. Cases were classified as myocardial infarctions if one of the following diagnoses was assigned: ICD-10 I21.0 to I21.9 (acute myocardial infarction, with the fourth digit denoting the location) of the International Classification of Diseases (ICD) as issued by the World Health Organization (<http://www.who.int/classifications/icd/en/>). In case of several events only the first in chronological order was counted. Cases of recurrent myocardial infarction (ICD-10: I22) were not considered. Nevertheless, it cannot be excluded that recorded I21.X cases were falsely classified as first events. In order to reduce the likelihood of misclassification, a pre-observation period of one year was introduced. It was counted from the beginning of the observation period, and all MIs occurring within this period were excluded, thus shortening the total observation time. The information base for defining pre-observation periods for MIs is scarce as few studies are available, and the figures vary with the health care systems that set up the framework of data collection in different countries. Published studies consistently report that the majority of recurrences occur within 12 months after the first MI, that the likelihood of an event increases with the age of patients \[[@pone.0202631.ref048], [@pone.0202631.ref049]\], and that recurrence risks have decreased in recent years \[[@pone.0202631.ref049]\]. In a study from the US it was reported that 14% of women and 13.5% of men had a recurrence within 12 months after the first MI \[[@pone.0202631.ref050]\], and in a UK-based study 5.6% of men and 7.2% of women were reported to have had a second MI within the same period \[[@pone.0202631.ref051]\].
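The washout logic described above can be sketched as follows; the record layout, dates and helper names are hypothetical illustrations, not the actual AOKN database structure.

```python
from datetime import date

# Hypothetical claims records (insured_id, ICD-10 code, event date).
# I21.x marks acute MI, I22.x (recurrent MI) is dropped entirely, and any
# insured whose first I21 falls into the one-year pre-observation period
# is excluded as a possibly misclassified "first" event.

WASHOUT_END = date(2006, 1, 1)  # observation starts 2005; first year is washout

def first_incident_mi(records):
    """Return {insured_id: date of first I21 event}, excluding insured whose
    first I21 lies in the washout period and ignoring I22 codes entirely."""
    firsts = {}
    for insured_id, icd, when in sorted(records, key=lambda r: r[2]):
        if not icd.startswith("I21"):
            continue  # I22 and all other codes are not counted
        firsts.setdefault(insured_id, when)  # keep only the earliest I21
    return {pid: d for pid, d in firsts.items() if d >= WASHOUT_END}

records = [
    ("A", "I21.0", date(2005, 6, 1)),   # MI inside washout -> A excluded
    ("A", "I21.1", date(2008, 3, 2)),   # not a true first event
    ("B", "I22.0", date(2007, 5, 5)),   # recurrent code -> ignored
    ("B", "I21.9", date(2009, 1, 10)),  # counted as B's first MI
    ("C", "I21.4", date(2006, 2, 1)),   # counted
]
print(first_incident_mi(records))
```

Excluding insured "A" entirely, rather than counting the 2008 event, mirrors the rationale in the text: an I21 recorded after an earlier I21 is not a first event.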
**Mortality** has to be included as the second indicator determining morbidity compression. In the health insurance data death is recorded with its precise date as it terminates health insurance membership.
**Calendar year** is the main variable for stratification when morbidity compression is examined. The insured can be located with respect to their terms of insurance, and every event can also be located by its date.
### Insurance status {#sec008}
The insurance population is divided into employed, family insured (family members insured free of charge), pensioners, and unemployed, as morbidity and mortality risks differ across these groups. The analyses to follow will focus on changes of morbidity and mortality over calendar years. Insurance status has to be controlled for because the structure of the insurance population may change over time. Individuals without employment and those officially registered as unemployed were shown to have higher health risks than those who were employed \[[@pone.0202631.ref052], [@pone.0202631.ref053]\]. Ignoring the insurance structure would lead to erroneous conclusions. This also applies to the increasing labour force participation of women over time and to changes in the age at retirement.
Age had to be introduced as a control variable as both outcomes are age-dependent. For MI and for death, age at event occurrence was used, and for censored cases age at the end of observation was used.
Analyses {#sec009}
--------
According to the three topics formulated at the end of the introduction, analyses are performed in separate lines of analysis. To date no statistical procedures are available that permit examining the three different aspects of morbidity compression simultaneously. At the **first step**, changes of MI and mortality rates over time are examined by using the Cox proportional hazards model to calculate hazard ratios for MI and for death. The Cox model is based on the occurrence or non-occurrence of events, i.e. the dependent variable is categorical. In analyses of morbidity compression, calendar year is the main variable of interest. Using it leads to different survival curves, one for every year, with one (in the present case the first year of observation) as the standard of comparison. Age at occurrence of an event has to be included as the risks of MI-onset and of death increase with age. Furthermore, the beginning and end of insurance periods define the lengths of the observation periods. They have to be included because events can only be observed in these intervals; thus the likelihood of observed occurrence depends on the length of the observation period. As MI-onsets are first events and death can occur only once, censoring takes the form of right-censoring, i.e. it refers to events occurring after the end of observation.
At the **second step**, morbidity compression will be considered in terms of changes of MI-onset age and age at death. While the occurrence of myocardial infarctions or deaths can be analyzed by means of survival models, changes of age at onset or at death are more difficult to examine. They nevertheless have to be considered, as the postponement of onset age over time was formulated as the second variant of morbidity compression \[[@pone.0202631.ref002]\]. Calendar year was the most important independent variable, and type of insurance had to be controlled for.
While searching for appropriate methods, estimation problems were encountered. Analyses of changes of age at occurrence consider only cases with an event of interest (onset or death); all other cases are excluded. If only a subset of subjects is considered, sample selection bias may occur, because this subset may not be representative of the whole population. A model addressing this problem was proposed by Heckman \[[@pone.0202631.ref054]\], also known as the Tobit-II model. It treats individuals without a defined event as censored cases, and occurrence of events (categorical scale) and their dates (metric scale) are included in a single-equation model using maximum likelihood estimation. Normal distribution of errors and homogeneous variances (homoscedasticity) are required for obtaining unbiased estimates. While heteroscedasticity may be amended by bootstrapping, the distribution of censored cases causes serious problems that cannot be resolved. In the years prior to the end of observation (i.e. between 2006 and 2014), censoring was caused by leaving the insurance population. In the last year (2015) a different censoring mechanism was effective, because the observation period ended for all subjects, i.e. censoring was caused arbitrarily by the availability of data. Different types of censoring cause estimation problems for the Heckman model, making it unsuitable for tackling our research question. The number of healthy life years is often estimated using the Sullivan method \[[@pone.0202631.ref055]\], which is based on the analysis of life tables. For our purposes this approach has disadvantages that led us to abandon it. As the Sullivan method is based on tables depicting populations by aggregated data, changing population structures cannot be taken into account. A way out might be to create tables for subpopulations, leading to a large number of tables that have to be compared.
As a second reason, the Sullivan method extrapolates trends, which is appropriate if some data are missing or if only aggregated data are available. In contrast, our study requires that the findings be controlled for population structure, and the data are available at the micro-level.
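For context, the Sullivan method discussed above can be sketched in a few lines: person-years lived in each age interval are weighted by the proportion free of the condition and divided by the survivors at the starting age. All life-table numbers below are invented.

```python
def sullivan_hle(person_years, prevalence, survivors_at_start):
    """Healthy life expectancy via Sullivan: sum of L_x * (1 - pi_x) / l_start."""
    healthy_years = sum(L * (1 - pi) for L, pi in zip(person_years, prevalence))
    return healthy_years / survivors_at_start

# Hypothetical abridged life table from age 60 on (person-years per interval)
L_x = [480_000, 440_000, 380_000, 290_000, 170_000]  # ages 60-64 ... 80-84
pi_x = [0.05, 0.09, 0.16, 0.27, 0.42]                # morbidity prevalence
l_60 = 100_000                                       # survivors at age 60

print(round(sullivan_hle(L_x, pi_x, l_60), 2))  # 14.86 healthy years at age 60
```

Because the inputs are aggregated life-table columns, the method cannot control for a changing population structure, which is exactly the drawback noted above.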
Preparatory analyses led to the following decisions for examining the second part of the compression hypothesis.
- It was decided to estimate changes of event occurrence (myocardial infarction and death) by means of Ordinary Least Squares Regression (OLS). "Calendar year" and "type of insurance" were used as independent variables and date of occurrence was used as dependent variable. Comparative analyses have shown that the substantive conclusions concerning effects of calendar year on age at event did not differ substantially between the OLS solution and the Heckman model, with the year 2015 being an exception as explained above. As OLS estimates may be flawed by heteroscedasticity, the Cook-Weisberg test \[[@pone.0202631.ref056]\] was performed with our prediction model. The findings indicated a significant deviation from homoscedasticity (chi^2^(13) = 693.6; p\<0.001), and further analyses led to the conclusion that this was due to "type of insurance", in particular to the heterogeneous group of unclassified subjects. Analyses performed only with the "retired" insured, the group with the highest MI-rates, did not lead to conclusions different from analyses with the whole study population. Finally, it was decided to perform the analyses as reported below, with confidence intervals based on 1000 bootstrap samples. Bootstrapping draws samples with replacement in order to estimate statistical parameters, in the present case confidence intervals, for making sure that significance tests based on normality assumptions can be applied. This technique is appropriate if distributional properties of certain parameters are unknown, if they deviate from normality, or if the underlying population is not known, so that the study population is used for making inferences \[[@pone.0202631.ref057]\].
- Against the backdrop of demographic aging, the proportion of elderly insured will increase over time. This will lead to a clustering of elderly people and to an increasing number of myocardial infarctions. OLS regression would then overestimate increases in age at onset without the risk of MI incidence having changed over time. In order to avoid biased estimates, a sampling procedure had to be applied: for every age stratum the calendar year with the lowest number of cases was sought, and then random samples for all age groups were drawn for every calendar year in order to obtain equal case numbers for every year of age. The OLS regressions were then performed with the resulting dataset. In the case of population ageing, regression analyses with the sampling solution should yield more conservative estimates than analyses with the complete study population. Comparisons of the two approaches revealed that this was indeed the case. As will be shown below, the corresponding effects in women turned out to be inconsistent and not statistically significant, irrespective of the approach chosen.
- The OLS model at the second step of analysis includes only cases with myocardial infarction or deceased individuals. Age at occurrence is used as dependent variable with months as unit of measurement. In all analyses the structure of the insurance population has to be controlled for in order to rule out effects of a changing composition of the insurance population over time. For morbidity compression to be present, age at occurrence of events has to move upward as time (depicted as calendar years) proceeds. Calendar year enters the analysis with the first year of observation as reference category. In the regression model the reference category is depicted as the intercept on the y-axis (scaled in months), and changes (i.e. unstandardized regression effects) appear as intercept shifts between the reference category (first year of observation) and the subsequent ones.
This is expressed by the following equation system: $${CIM}_{1..9} = \beta_{0} + \delta_{1..9}\ {YR}_{2007..2015} + \gamma_{1}\ {Pop}_{2} + \gamma_{2}{Pop}_{3} + \gamma_{3}{Pop}_{4} + \gamma_{4}\ {Pop}_{5} + \varepsilon$$ "CIM" corresponds to changes in months as compared to the first year of observation as reference category; β~0~ denotes the intercept, which in the present case corresponds to the first year of observation (= 2006); δ~1..9~ YR~2007..2015~ denote the effects for year 1 (= 2007) to year 9 (= 2015). Effects of insurance status as control variable are denoted as γ, where the subscript "1" denotes the effect of the family insured (Pop2), "2" denotes the effect of pensioners (Pop3), "3" denotes the effect of the unemployed insured (Pop4), "4" denotes the effect of unclassified insured (Pop5), and "ε" denotes the error term.
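Without the insurance-status controls, the dummy specification reduces to group means: the intercept equals the mean onset age (in months) of the reference year, and each δ is the mean shift relative to it. A toy illustration with invented ages:

```python
def mean(values):
    return sum(values) / len(values)

# Hypothetical MI-onset ages in months, a handful of cases per calendar year
onset_age_months = {
    2006: [790, 802, 815, 781],  # reference year -> beta_0
    2007: [797, 810, 820, 793],
    2008: [805, 818, 826, 799],
}

beta0 = mean(onset_age_months[2006])            # intercept, in months
deltas = {yr: mean(ages) - beta0                # intercept shifts (delta_1..)
          for yr, ages in onset_age_months.items() if yr != 2006}
print(beta0, deltas)  # 797.0 {2007: 8.0, 2008: 15.0}
```

A positive δ for a later year means onset was postponed by that many months relative to 2006, which is the pattern the compression hypothesis predicts.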
At the **third step**, survival analyses are performed to examine changes of MI-onset rates by means of Kaplan-Meier survival curves. The MI-rates of two cohorts of the same age are compared over a time period of five years, i.e. men at the age of 60 in 2006 are observed from 2006 to 2010, and those who are 60 years old in 2011 are observed over the period 2011 to 2015. The analyses are performed stratified by gender and by age for the age groups 60--64, 65--69, 70--74, 75--79, 80--84, and 85--89 years. The survival curves are to be interpreted such that each graph displays the remaining proportion of individuals who had not had an MI by the end of the observation period. Each pair of survival curves is tested for differences using the log-rank test, assuming an error probability of 5%. All analyses were performed with STATA 14 SE \[[@pone.0202631.ref058]\].
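The product-limit (Kaplan-Meier) estimate behind these curves can be sketched in pure Python; the follow-up times (years) and MI indicators below are invented, and ties are handled with the usual convention that events at a time point precede censorings at the same point.

```python
def kaplan_meier(times, events):
    """Return [(time, survival)] at each distinct event time.
    events[i] == 1 means an MI occurred at times[i]; 0 means censored."""
    at_risk = len(times)
    survival = 1.0
    curve = []
    for t in sorted(set(times)):
        d = sum(1 for ti, ei in zip(times, events) if ti == t and ei == 1)
        if d:
            survival *= 1 - d / at_risk   # product-limit update
            curve.append((t, survival))
        at_risk -= sum(1 for ti in times if ti == t)  # drop events and censored
    return curve

# Hypothetical five-year follow-up of ten subjects of one cohort
times = [1, 2, 2, 3, 3, 4, 4, 5, 5, 5]
events = [1, 1, 0, 1, 0, 1, 0, 0, 0, 1]
print(kaplan_meier(times, events))
```

Each value of `survival` is the estimated proportion still MI-free, i.e. the quantity plotted on the y-axis of the curves described above; comparing two cohorts amounts to computing one curve per cohort and applying the log-rank test.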
Results {#sec010}
=======
The basic frequencies of the relevant variables are displayed in Tables [1](#pone.0202631.t001){ref-type="table"} and [2](#pone.0202631.t002){ref-type="table"}. It should be noted that the adjusted mean age at MI-onset was 66.5 (SD = 13.3) years in men and 75.8 (SD = 13.3) years in women. The MI rates in women were lower than in men; mean age at death was 73.0 (SD = 13.5) years in men and 81.4 (SD = 11.9) years in women.
10.1371/journal.pone.0202631.t001
###### Distribution of the variables used of the complete male population and for sample-based analyses (age at MI-onset and age at death).
{#pone.0202631.t001g}
----------------------- ------------------ ----------- ----------- ----------- ----------- ----------- ----------- ----------- ----------- ----------- -----------
**2006** **2007** **2008** **2009** **2010** **2011** **2012** **2013** **2014** **2015**
**All subjects**
**Total** 849,204 846,331 835,807 833,703 840,839 855,121 864,994 871,625 875,217 884,094
Myocardial infarction Frequency 3492 3610 3621 3507 3487 3524 3615 3565 3511 3413
\% 0.41% 0.43% 0.43% 0.42% 0.41% 0.41% 0.42% 0.41% 0.40% 0.39%
Deaths Frequency 14,234 14,145 14,410 14,506 14,335 14,326 14,523 15,109 14,714 15,178
\% 1.68% 1.67% 1.72% 1.74% 1.70% 1.68% 1.68% 1.73% 1.68% 1.72%
Insurance Employed 432,778/\ 442,950/\ 443,047/\ 436,608/\ 449,713/\ 473,815/\ 486,688/\ 492,674/\ 498,859/\ 509,568/\
51.0% 52.3% 53.0% 52.4% 53.5% 55.4% 56.3% 56.5% 57.0% 57.6%
status Family insured 23,120/\ 22,284/\ 21,292/\ 21,846/\ 21,323/\ 20,195/\ 19,884/\ 19,831/\ 19,604/\ 18,985/\
2.7% 2.6% 2.6% 2.6% 2.5% 2.4% 2.3% 2.3% 2.2% 2.2%
N / % Pensioners 250,419/\ 246,765/\ 243,412/\ 239,436/\ 235,791/\ 233,656/\ 232,239/\ 229,522/\ 226,606/\ 225,569/\
29.5% 29.2% 29.1% 28.7% 28.0% 27.3% 26.9% 26.3% 25.9% 25.5%
Unemployed 96,381/\ 87,875/\ 81,496/\ 86,474/\ 84,213/\ 77,722/\ 74,055/\ 75,799/\ 75,393/\ 74,317/\
11.4% 10.4% 9.8% 10.4% 10.0% 9.1% 8.6% 8.7% 8.6% 8.4%
Others 46,506/\ 46,457/\ 46,560/\ 46,339/\ 49,799/\ 49,733/\ 52,128/\ 53,799/\ 54,755/\ 55,655/\
5.5% 5.5% 5.6% 5.9% 5.9% 5.8% 6.0% 6.2% 6.3% 6.3%
**Sample** **N = 780,820** **2006** **2007** **2008** **2009** **2010** **2011** **2012** **2013** **2014** **2015**
Myocardial infarction Frequency 2867 3013 3012 2867 2816 2825 2844 2787 2735 2627
\% 0.37 0.39 0.38 0.37 0.36 0.36 0.36 0.36 0.35 0.34
Deaths Frequency 11,843 11,655 11,696 11,687 11,263 11,044 11,061 11,195 10,810 10,875
  \% 1.52 1.49 1.50 1.50 1.45 1.42 1.43 1.45 1.41 1.42
----------------------- ------------------ ----------- ----------- ----------- ----------- ----------- ----------- ----------- ----------- ----------- -----------
10.1371/journal.pone.0202631.t002
###### Distribution of the variables used of the complete female population (survival analyses) and for sample-based analyses (age at MI-onset and age at death).
{#pone.0202631.t002g}
----------------------- ------------------ ----------- ----------- ----------- ----------- ----------- ----------- ----------- ----------- ----------- -----------
** ** ** ** **2006** **2007** **2008** **2009** **2010** **2011** **2012** **2013** **2014** **2015**
**All subjects**
**Total** 989,715 980,943 964,098 955,329 956,666 966,172 971,232 970,405 966,707 969,760
Myocardial infarction Frequency 2635 2618 2542 2459 2405 246 2513 2324 2250 2167
\% 0.27% 0.27% 0.26% 0.26% 0.25% 0.26% 0.26% 0.24% 0.23% 0.22%
Deaths Frequency 18,257 18,341 18,615 18,467 18,394 17,581 17,930 18,455 17,382 18,435
\% 1.87% 1.87% 1.93% 1.93% 1.92% 1.82% 1.85% 1.90% 1.80% 1.90%
Insurance Employed 297,804/\ 304,628/\ 306,686/\ 309,307/\ 321,084/\ 341,922/\ 355,312/\ 362,177/\ 369,968/\ 385,182/\
30.1% 31.1% 31.8% 32.4% 33.6% 35.4% 36.6% 37.3% 38.3% 39.7%
status Family insured 180,009/\ 173,337/\ 165,649/\ 159,611/\ 155,777/\ 150,947/\ 146,605/\ 141,690/\ 137,047/\ 128,933/\
18.2% 17.7% 17.2% 16.7% 16.3% 15.6% 15.0% 14.6% 14.2% 13.3%
N / % Pensioners 393,495/\ 386,567/\ 378,927/\ 371,475/\ 364,998/\ 360,103/\ 365,895/\ 350,413/\ 343,458/\ 339,945/\
39.8% 39.4% 39.3% 38.9% 38.2% 37.3% 36.8% 36.1% 35.5% 35.1%
Unemployed 71,099/\ 68,996/\ 66,480/\ 67,533/\ 67,173/\ 65,194/\ 63,423/\ 64,875/\ 64,618/\ 63,797/\
7.2% 7.0% 6.9% 7.1% 7.0% 6.8% 6.5% 6.7% 6.7% 6.6%
Others 47,308/\ 47,415/\ 46,356/\ 47,403/\ 47,634/\ 48,006/\ 49,997/\ 51,250/\ 51,616/\ 51,903/\
4.8% 4.8% 4.8% 5.0% 5.0% 5.0% 5.2% 5.3% 5.3% 5.4%
**Sample** **N = 876,800** **2006** **2007** **2008** **2009** **2010** **2011** **2012** **2013** **2014** **2015**
**Total**
Myocardial infarction Frequency 2133 2153 2093 2040 1956 2056 2086 1926 1845 1764
\% 0.24 0.25 0.24 0.24 0.22 0.23 0.24 0.22 0.21 0.20
Deaths Frequency 14,646 14,409 14,535 14,657 14,454 13,706 13,921 14,234 13,359 14,049
\% 1.67 1.64 1.66 1.67 1.65 1.57 1.60 1.64 1.54 1.63
----------------------- ------------------ ----------- ----------- ----------- ----------- ----------- ----------- ----------- ----------- ----------- -----------
Development of morbidity and mortality rates {#sec011}
--------------------------------------------
In *men*, the hazard ratios of MI-onset in the years following 2006 decreased steadily ([Table 3](#pone.0202631.t003){ref-type="table"}). The differences to the reference year were statistically significant from 2009 on (hr = 0.83). In 2015 MI-onset rates were 34% lower than in 2006 (hr = 0.66). Hazard ratios of mortality also decreased over the 10 years, finally reaching hr = 0.75. A similar development emerged in *women*: the hazard ratios of MI-onset decreased over the 10 years, and from 2011 on the differences between calendar years were statistically significant. MI-onset rates in 2015 were 29% lower than in 2006. In contrast to MI and mortality rates in men and to MI rates in women, hazard ratios of death in women did not change significantly over the observation period. It has to be kept in mind that the mean age at MI-onset and at death was higher in women than in men. As morbidity rates in men were decreasing at a faster pace than mortality rates, and given the stable mortality rates in women, it can be concluded that compression of morbidity has occurred.
10.1371/journal.pone.0202631.t003
###### Onsets of myocardial infarctions and mortality in women and men by controlling for insurance group: Hazard ratios, standard errors and confidence intervals.
{#pone.0202631.t003g}
**Men**

| **Year** | **Myocardial infarction: Hazard ratio** | **p** | **95% CI** | **Mortality: Hazard ratio** | **p** | **95% CI** |
|---|---|---|---|---|---|---|
| **2006** | Ref. | \- | \- | 1 | \- | \- |
| **2007** | 1.00 | 0.99 | 0.87--1.15 | 0.92 | 0.07 | 0.85--1.00 |
| **2008** | 0.97 | 0.67 | 0.84--1.12 | 0.85 | \<0.01 | 0.78--0.93 |
| **2009** | 0.83 | 0.01 | 0.72--0.96 | 0.90 | 0.02 | 0.83--0.99 |
| **2010** | 0.82 | 0.01 | 0.71--0.94 | 0.88 | \<0.01 | 0.80--0.96 |
| **2011** | 0.81 | \<0.01 | 0.70--0.93 | 0.87 | \<0.01 | 0.79--0.95 |
| **2012** | 0.75 | \<0.001 | 0.65--0.86 | 0.78 | \<0.01 | 0.71--0.85 |
| **2013** | 0.74 | \<0.001 | 0.64--0.85 | 0.82 | \<0.01 | 0.75--0.89 |
| **2014** | 0.67 | \<0.001 | 0.58--0.78 | 0.73 | \<0.01 | 0.67--0.80 |
| **2015** | 0.66 | \<0.001 | 0.57--0.77 | 0.75 | \<0.01 | 0.69--0.80 |
| Age (years) | 1.0539 | \<0.001 | 1.0528--1.0550 | 1.072 | \<0.001 | 1.072--1.073 |

**Women**

| **Year** | **Myocardial infarction: Hazard ratio** | **p** | **95% CI** | **Mortality: Hazard ratio** | **p** | **95% CI** |
|---|---|---|---|---|---|---|
| **2006** | Ref. | \- | \- | Ref. | \- | \- |
| **2007** | 1.00 | 0.96 | 0.80--1.26 | 0.96 | 0.46 | 0.88--1.06 |
| **2008** | 0.86 | 0.20 | 0.69--1.08 | 0.96 | 0.51 | 0.88--1.07 |
| **2009** | 0.93 | 0.52 | 0.74--1.16 | 1.10 | 0.11 | 0.98--1.19 |
| **2010** | 0.83 | 0.11 | 0.66--1.04 | 1.12 | 0.02 | 1.02--1.23 |
| **2011** | 0.80 | 0.05 | 0.64--1.00 | 0.94 | 0.22 | 0.85--1.03 |
| **2012** | 0.80 | 0.04 | 0.64--0.99 | 0.93 | 0.14 | 0.84--1.02 |
| **2013** | 0.66 | \<0.01 | 0.53--0.83 | 1.02 | 0.64 | 0.92--1.12 |
| **2014** | 0.75 | 0.01 | 0.60--0.95 | 1.03 | 0.53 | 0.94--1.15 |
| **2015** | 0.71 | 0.01 | 0.57--0.90 | 1.00 | 0.88 | 0.91--1.11 |
| Age (years) | 1.0593 | \<0.001 | 1.057--1.060 | 1.068 | \<0.001 | 1.1062--1.1073 |
Changes of age at onset and age at death {#sec012}
----------------------------------------
In ***men*** the age at onset of MI increased over the observation period ([Table 4](#pone.0202631.t004){ref-type="table"}). Although the general trend was towards a postponement of onset, the development was not completely steady. In 2015 the mean age at MI-onset was 10.5 months higher than in 2006, with a maximum difference of 13.4 months in 2014. The development of mortality followed the same pattern, but the changes occurred at a lower level. Taken together, it can be concluded that morbidity compression has occurred in men: both event ages increased, and the changes in onset age exceeded those in age at death.
10.1371/journal.pone.0202631.t004
###### Changes of age at onset of myocardial infarction and of age at death, in months, in women and in men: Effect sizes and confidence intervals based on 1000 bootstrap samples.
{#pone.0202631.t004g}
**Men**

| **Year** | **Myocardial infarction: B** | **p** | **95% CI** | **Death: B** | **p** | **95% CI** |
|---|---|---|---|---|---|---|
| **2006** | Ref. | \- | \- | Ref. | \- | \- |
| **2007** | 3.8 | 0.16 | -1.5--9.2 | -2.3 | 0.16 | -5.6--0.9 |
| **2008** | 5.1 | 0.06 | -0.3--10.5 | -0.2 | 0.88 | -3.5--3.0 |
| **2009** | 5.7 | 0.04 | 0.3--11.1 | 0.7 | 0.69 | -2.6--3.9 |
| **2010** | 6.6 | 0.02 | 1.2--12.1 | 3.0 | 0.08 | -0.3--6.2 |
| **2011** | 9.7 | \<0.01 | 4.2--15.1 | 4.4 | \<0.01 | 1.1--7.7 |
| **2012** | 8.3 | \<0.01 | 2.8--13.7 | 6.6 | \<0.01 | 3.3--9.9 |
| **2013** | 12.1 | \<0.01 | 6.7--17.6 | 7.5 | \<0.01 | 4.3--10.8 |
| **2014** | 13.4 | \<0.01 | 7.9--18.9 | 10.5 | \<0.01 | 7.2--13.8 |
| **2015** | 10.5 | \<0.01 | 5.0--16.1 | 10.4 | \<0.01 | 7.1--13.7 |
| Constant | 625.2 | \<0.01 | 620.8--629.6 | 624.8 | \<0.01 | 621.3--628.3 |

**Women**

| **Year** | **Myocardial infarction: B** | **p** | **95% CI** | **Death: B** | **p** | **95% CI** |
|---|---|---|---|---|---|---|
| **2006** | Ref. | \- | \- | Ref. | \- | \- |
| **2007** | 2.6 | 0.42 | -3.8--8.9 | 2.2 | 0.15 | -4.3--1.3 |
| **2008** | 4.3 | 0.18 | -2.0--10.7 | 3.3 | 0.12 | -3.3--2.4 |
| **2009** | 8.9 | \<0.01 | 2.5--15.3 | 1.6 | 0.02 | -1.3--4.1 |
| **2010** | 1.8 | 0.58 | -4.7--8.3 | 2.3 | 0.24 | -3.6--1.8 |
| **2011** | 4.2 | 0.20 | 2.2--10.6 | 2.4 | 0.10 | -3.4--2.4 |
| **2012** | 3.3 | 0.30 | -3.0--9.7 | 3.3 | 0.09 | -2.8--2.9 |
| **2013** | 5.3 | 0.11 | -1.2--11.8 | 3.1 | 0.02 | -0.9--4.6 |
| **2014** | -1.9 | 0.55 | -8.6--4.6 | 2.0 | 0.15 | -2.7--2.8 |
| **2015** | 0.8 | 0.81 | -5.8--7.5 | 4.8 | \<0.01 | -3.1--2.5 |
| Constant | 631.7 | \<0.01 | 624.4--638.9 | 614.9 | \<0.01 | 609.3--619.3 |
In ***women*** the changes of onset age and of age at death were smaller and less steady than in men. Age at death turned out to be rather stable over time, as the variation was always distributed around the reference year. Thus, in contrast to men, the decreasing MI-rates in women were not accompanied by a rising age at onset or at death.
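The confidence intervals in Table 4 are based on 1000 bootstrap samples. A minimal percentile-bootstrap sketch for a difference of mean event ages follows; the data are synthetic, and the paper's actual analysis appears to be regression-based (B coefficients with a constant), so this only illustrates the resampling idea:

```python
import random
import statistics

def bootstrap_ci_mean_diff(sample_a, sample_b, n_boot=1000, level=0.95, seed=1):
    """Percentile bootstrap CI for mean(sample_a) - mean(sample_b)."""
    rng = random.Random(seed)
    diffs = []
    for _ in range(n_boot):
        resample_a = [rng.choice(sample_a) for _ in sample_a]
        resample_b = [rng.choice(sample_b) for _ in sample_b]
        diffs.append(statistics.mean(resample_a) - statistics.mean(resample_b))
    diffs.sort()
    tail = (1 - level) / 2
    return diffs[int(n_boot * tail)], diffs[int(n_boot * (1 - tail)) - 1]

# synthetic onset ages in months: the later cohort is shifted ~10 months upward
ages_2006 = [625 + (i % 7) for i in range(60)]
ages_2015 = [635 + (i % 7) for i in range(60)]
lo, hi = bootstrap_ci_mean_diff(ages_2015, ages_2006)
```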
Morbidity changes by age, period, and gender {#sec013}
--------------------------------------------
For the sake of brevity, only the survival functions for the ages at origin 70--74, 75--79, and 80--84 years are displayed graphically.
In ***men*** ([Fig 1](#pone.0202631.g001){ref-type="fig"}) aged 60--64 years, the survival curves of the two observation periods differed significantly (chi^2^ = 5.94; p = 0.02). For the following age segment (65 to 69 years) no statistically significant difference between the time periods emerged (chi^2^ = 0.02; p = 0.88). For the age group 70 to 74 years the survival curves differed again, indicating decreasing MI-rates over the observation period (chi^2^ = 14.98; p\<0.001). The same held for the subsequent age interval of 75 to 79 years (chi^2^ = 9.08; p\<0.01), but not for the last intervals considered (80 to 84 and 85 to 89 years).
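The quoted chi-square statistics come from two-group survival-curve comparisons with one degree of freedom, so the p-values can be recovered from the statistics alone. A sketch (values agree with those in the text up to rounding):

```python
import math

def chi2_sf_df1(chi2):
    """Right-tail p-value of a chi-square statistic with 1 degree of freedom."""
    return math.erfc(math.sqrt(chi2 / 2.0))

p_60_64 = chi2_sf_df1(5.94)   # reported p = 0.02
p_65_69 = chi2_sf_df1(0.02)   # reported p = 0.88
p_70_74 = chi2_sf_df1(14.98)  # reported p < 0.001
```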
{#pone.0202631.g001}
In ***women*** ([Fig 2](#pone.0202631.g002){ref-type="fig"}) the comparisons of the survival curves of the lower age intervals (60--64, 65--69, and 70--74 years) were not statistically significant. Differences between time periods emerged only for women aged 80 to 84 years (chi^2^ = 9.79; p = 0.001), indicating decreasing MI-rates in 2011 to 2015. This finding was not reproduced for the highest age group (85--89 years; chi^2^ = 2.1; p = 0.15).
Taken together, it can be concluded that the MI-related findings presented in the two preceding lines of analysis are mainly due to the age groups 70 to 79 years in men and 80 to 84 years in women.
{#pone.0202631.g002}
Discussion {#sec014}
==========
Our study was conducted to examine morbidity compression, with myocardial infarction as the particular application. A population-based dataset was available that made it possible to consider morbidity and mortality within the same database and to examine two variants of morbidity compression: changes of rates and changes of age at occurrence.
Three findings have to be mentioned. First, MI-rates were decreasing over the whole observation period in both women and men, but only in men was this also observed for mortality. MI-rates were decreasing at a faster pace than mortality rates, thus pointing towards compression of morbidity. Further analyses revealed that the developments of rates were mainly due to changes in the age groups 70 to 79 years in men, while in women they occurred beyond the age of 80. Decreasing trends of MI-incidence were also reported for Australia \[[@pone.0202631.ref059]\], Sweden \[[@pone.0202631.ref060]\], the USA \[[@pone.0202631.ref038]\] and Germany \[[@pone.0202631.ref039]\], but the relationship with mortality rates has rarely been explored. The second finding refers to age at onset and at death, which were rising only in men, while such increases were absent in women. An earlier study of male military personnel from the US reported changes in the same direction \[[@pone.0202631.ref040]\], but again there is a lack of findings combining MI-morbidity and mortality. Our findings also demonstrate that increasing onset age and decreasing rates, as variations of morbidity compression, are not necessarily intertwined; instead, they may vary independently. The third finding refers to the marked gender differences, which have to be interpreted against the backdrop of higher female longevity and the higher mean age at MI-onset in women. Our findings may also be interpreted as part of a gender convergence driven by the development in men.
Irrespective of considerations on healthy longevity, MI-rates in our study have decreased and morbidity compression has occurred in men and in women. Fries assumed prevention and health-related behaviors to be the main driving forces \[[@pone.0202631.ref008], [@pone.0202631.ref061]\]. Unfortunately, our database does not include behavioral data that could be linked with the claims dataset. For this reason, explanations have to be developed with reference to other studies. Smoking has been demonstrated to make a substantial contribution to the development of cardiovascular diseases \[[@pone.0202631.ref062]\], and tobacco consumption was reported to do more damage to the health of women than of men \[[@pone.0202631.ref063]\]. In high-income countries the proportion of smokers has decreased over the last decades, and the smoking rates of women have approached those of men \[[@pone.0202631.ref027], [@pone.0202631.ref064]\]. According to nationwide German surveys, the proportion of smokers between 25 and 69 years dropped from 39.5% in 1990 to 34.9% in 2012. Among females, only minor changes occurred, with 26% smoking in 1990 and 28.4% in 2012, after a peak of 32% in 2003 \[[@pone.0202631.ref063]\]. Exercise is another health-related behavior associated with the risk of MI. Changes in the exercise habits of the German population have been documented for the period 1994 to 2011: among individuals aged 30 to 64 years, the proportion of women and men taking exercise increased steadily, and this applies to the whole range of physical activity \[[@pone.0202631.ref065]\]. Data on nutrition in middle-aged and old women and men were only available on a cross-sectional basis \[[@pone.0202631.ref066]\]. Besides lifestyles, social factors may also explain variations of morbidity and mortality; well-established health-related influences are unemployment and the structure of work.
Although the health-related consequences of unemployment and adverse working conditions \[[@pone.0202631.ref067]--[@pone.0202631.ref069]\] have generated a large body of research, no longitudinal studies are available that could be used to explain morbidity compression.
If compression is depicted in terms of decreasing morbidity rates, implications for general health have to be considered. The first might be an improvement of health status: cardiovascular diseases affect the health of populations, and reducing these burdens might directly contribute to morbidity compression. The second might be a postponement of morbidity into higher age groups, where other types of diseases and impairments may occur more frequently. This refers to illnesses such as stroke or to clusters of health impairments that might best be characterized as multimorbidity \[[@pone.0202631.ref070]\]. Both interpretations are in accordance with morbidity compression, but the decision between them remains open and subject to further investigation.
Our analyses also pertained to mortality at the level of a complete population, assuming that MI-onsets are part of general morbidity, which in turn contributes to the risk of death. As a criticism of our approach it may be argued that case fatality (death after MI-onset) might be a better indicator than all-cause mortality or age at death. However, it has to be kept in mind that shortened survival after MI should not be interpreted in terms of morbidity compression, but rather as a failure of medical treatment. Fries pointed out that morbidity compression is a population concept and that morbidity and mortality do not need to be observed in the same individuals \[[@pone.0202631.ref016], [@pone.0202631.ref061], [@pone.0202631.ref071]\].
For every observation year, the annual mean increment of age at MI-onset or at death in men was around one month. As mentioned earlier, this may be due to the social structure of our insurance population, which does not fully correspond to the population of the Land of Lower Saxony, or to Germany as a whole \[[@pone.0202631.ref045]\]. Some other limitations of our data have to be mentioned. Studying morbidity compression in terms of myocardial infarction is an important, but only a first, step towards exploring the empirical content of Fries\' hypothesis. Our dataset does not permit explanations, as no data on living conditions and health-related behaviors were available, but it has the advantage of large case numbers at the population level, which permitted considering a specific disease with a clear diagnosis. Another shortcoming of our database is the lack of privately insured subjects, i.e. civil-service personnel, officials and the upper decile of the income distribution \[[@pone.0202631.ref047]\]; in 2014, 11% of the German population fell into these categories. Studies on social inequalities in health have demonstrated inverse relationships between socioeconomic position and disease risks \[[@pone.0202631.ref072]\]. Against the backdrop of the literature on health inequalities it can be assumed that morbidity compression may develop differently depending on the socio-economic groups considered. Thus, our findings may underestimate the degree of compression within the whole population of Germany.
Conclusions {#sec015}
===========
Our study found evidence in favor of morbidity compression. In terms of MI-onsets, compression of morbidity has occurred in men and in women, and it was due to changes in specific age groups. In men, the development of morbidity and mortality rates, combined with the increases in age at onset and age at death, pointed towards morbidity compression. In contrast, the development in women was less straightforward: the substantial drop of MI-rates alongside constant mortality rates indicated morbidity compression, while age at onset and age at death remained unchanged. Our findings show that changes of rates and changes of event age may vary independently, emphasizing that compression is a multi-faceted phenomenon.
[^1]: **Competing Interests:** Sveja Eberhard is employed by the Local Statutory Health Insurance of Lower Saxony (AOK Niedersachsen). This does not alter our adherence to PLOS ONE policies on sharing data and materials.
/* $OpenBSD: deck.h,v 1.4 2015/12/31 18:10:19 mestre Exp $ */
/* $NetBSD: deck.h,v 1.3 1995/03/21 15:08:49 cgd Exp $ */
/*
* Copyright (c) 1980, 1993
* The Regents of the University of California. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* 3. Neither the name of the University nor the names of its contributors
* may be used to endorse or promote products derived from this software
* without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*
* @(#)deck.h 8.1 (Berkeley) 5/31/93
*/
/*
* define structure of a deck of cards and other related things
*/
#define CARDS 52 /* number cards in deck */
#define RANKS 13 /* number ranks in deck */
#define SUITS 4 /* number suits in deck */
#define CINHAND 4 /* # cards in cribbage hand */
#define FULLHAND 6 /* # cards in dealt hand */
#define LGAME 121 /* number points in a game */
#define SGAME 61 /* # points in a short game */
#define SPADES 0 /* value of each suit */
#define HEARTS 1
#define DIAMONDS 2
#define CLUBS 3
#define ACE 0 /* value of each rank */
#define TWO 1
#define THREE 2
#define FOUR 3
#define FIVE 4
#define SIX 5
#define SEVEN 6
#define EIGHT 7
#define NINE 8
#define TEN 9
#define JACK 10
#define QUEEN 11
#define KING 12
#define EMPTY 13
#define VAL(c) ( (c) < 9 ? (c)+1 : 10 ) /* val of rank */
#ifndef TRUE
# define TRUE 1
# define FALSE 0
#endif
typedef struct {
int rank;
int suit;
} CARD;
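The `VAL` macro above encodes cribbage counting values: ranks ace through nine count at face value, while ten, jack, queen, and king all count ten. An illustrative re-implementation in Python (the header itself is C; this is only a sketch of the mapping):

```python
ACE, NINE, TEN, KING = 0, 8, 9, 12  # rank indices from deck.h

def val(rank):
    """Counting value of a rank index, mirroring VAL(c) from deck.h."""
    return rank + 1 if rank < 9 else 10
```

Summing over all thirteen ranks gives 1 + 2 + ... + 9 + 4 * 10 = 85 counting points per suit.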
---
abstract: 'We investigate the ground state properties of ultracold atoms trapped in a two-leg ladder potential in the presence of an artificial magnetic field in a staggered configuration. We focus on the strongly interacting regime and use the Landau theory of phase transitions and a mean field Gutzwiller variational method to identify the stable superfluid phases and their boundaries with the Mott-insulator regime as a function of magnetic flux. In addition, we calculate the local and chiral currents of these superfluid phases, which show a staggered vortex anti-vortex configuration. The analytical results are confirmed by numerical simulations using a cluster mean-field theory approach.'
author:
- Rashi Sachdeva
- Friederike Metz
- Manpreet Singh
- Tapan Mishra
- Thomas Busch
title: '**Two-leg ladder Bose Hubbard models with staggered fluxes**'
---
Introduction
============
Ultracold bosonic atoms in optical lattices offer a unique platform to study models for periodic many body physics in a clean and highly controllable setting. A wide range of flexible geometries to trap neutral atoms can be created by overlapping and interfering laser beams and interactions can be controlled via external magnetic fields or by choosing different atomic species. While the field was initially enthused by the prediction and realisation of the paradigmatic superfluid to Mott-insulator transition in square lattices [@jaksch; @bloch], many different situations have been investigated since then [@lewenstein_book; @bloch_review].
Recent progress in creating artificial gauge fields for ultracold atoms in discrete [@magfield_OL] as well as continuum systems [@magfield_cont] has opened up many new avenues for the study of quantum phase transitions in the presence of magnetic fields. These fields are called artificial because, due to the charge neutrality of the atoms, no Lorentz force exists and real magnetic fields therefore do not directly affect the center-of-mass motion.
The simplest way to mimic the effects of magnetic fields on charged systems in neutral atoms is by rotation [@rot_exp], which probes superfluidity in the same way magnetic fields probe superconductivity. Furthermore, very high synthetic magnetic fields have been shown to be realizable using atoms in optical lattices, where the atomic motion and the internal degrees of freedom can be coupled by laser-assisted tunneling [@bloch_PRL2011]. This has led to the successful implementation of uniform as well as staggered flux distributions in the strong field regime [@bloch_PRL2011; @sengstock_PRL2012] and has enabled the realization of 2D topological states with finite Chern numbers [@chern_bloch15; @chern_bloch18].
Theoretically, the presence of artificial magnetic fields can be included into the Bose Hubbard model by using complex tunnel couplings [@bhm_agf]. The main effect of these can be observed even in the absence of interactions and the single particle spectrum for bosons in a periodic potential in the presence of a strong magnetic field forms a self-similar structure known as the Hofstadter butterfly [@hofstadter]. As the effective magnetic fields created in optical lattices can be much larger than what is possible in solid-state systems, these techniques bring the study of a wide range of Hamiltonians into reach that are inaccessible in condensed matter physics.
Besides the realization of magnetic fields in extended 2D lattice systems, the effects of artificial magnetic fields were also studied in bosonic ladder geometries, where chiral currents and vortex and Meissner phases were predicted and observed [@georges_NJP2014; @tokuno_PRA2015; @natu_PRA2015; @giamarchi_PRB2001; @oktel_PRA2015; @mueller_PRA2014; @rashi_PRA2017; @bloch_ladder]. While ladder systems can be seen as the smallest possible lattice structure, they possess additional and unique properties, for example due to the absence of the requirement that the magnetic fields have to have rational values [@giamarchi_PRB2001; @oktel_PRA2015; @mueller_PRA2014; @rashi_PRA2017]. Furthermore, even though the above-mentioned Meissner and vortex phases can already be observed for non-interacting systems, interacting bosonic ladder systems with uniform flux also support various spontaneously symmetry broken phases and chiral Mott insulator states [@arya_PRA].
Similar to the case of uniform fluxes, staggered fluxes [@staggered_1; @staggered_2; @staggered_3; @staggered_4] can drive quantum phase transitions in the two-leg Bose Hubbard ladder systems and can enlarge the range of physical effects that can be investigated. Here we study the example of a single-component BEC trapped in such a geometry in the presence of a periodically flipped artificial magnetic field. We find that the presence of the staggered flux gives rise to two superfluid phases with a staggered vortex anti-vortex configuration, which are distinct from the usual superfluid phases obtained in the Bose Hubbard model [@jaksch].
The manuscript is organized as follows. In Section \[Sec: bhm\] we introduce the Bose-Hubbard model (BHM) with a two-leg ladder geometry in the presence of an artificial magnetic field with a staggered configuration. In Section \[Sec: singleparticle\] we review the properties of its single particle spectrum and in Section \[Sec: Landau\] we present calculations in the strong coupling regime to determine the complete phase diagram. We also show the presence of distinct superfluid phases using Landau theory. In Section \[Sec: Gutzwiller\] we present our analytical calculations to determine the phase boundaries using the variational Gutzwiller approach and in Section \[Sec: ClusterMF\] these are complemented by the numerical calculations performed using the cluster mean field theory approach. Finally, in Section \[Sec: Summary\] we present a summary and outlook of the work done.
Model {#Sec: bhm}
=====
The Hamiltonian describing bosons in a two-leg ladder geometry in the presence of a staggered magnetic flux of magnitude $\alpha$ can be written as $$\begin{aligned}
H=&-J\sum_{j}\left(e^{(-1)^{j}\frac{i\alpha}{2}}a_j^\dagger a_{j+1}+e^{(-1)^{j+1}\frac{i\alpha}{2}}b_j^\dagger b_{j+1}+ h.c.\right)\nonumber\\
&- K\sum_{j}(a_j^\dagger b_j+ h.c.)+{U \over 2} \sum_{j,p} n_j^p(n_j^p-1)\nonumber\\
&-\mu\sum_{j,p}n_j^p,
\label{eq:eq1_model}\end{aligned}$$ where the $p_j (p_j^\dagger)$ are the bosonic annihilation (creation) operators at site $j$ of leg $p~(=a,b)$, $n_j^p$ is the number operator at site $j$ of leg $p$, $\alpha$ is the absolute value of the magnetic flux and $\mu$ is the chemical potential. The intra- and inter-leg hopping amplitudes are described by $J$ and $K$ respectively, and the on-site interaction energy between two atoms is given by $U$ (see Fig. \[fig:schematic\]). The ratios $J/U$ and $K/U$ can be changed in an experiment by tuning the optical lattice laser intensities along each leg and by varying the separation between the legs, respectively. We assume up-down symmetry for the ladder, which implies that the chemical potential $\mu$ and the onsite interactions $U$ are identical for each of the two legs. It is worth noting that within the local density approximation, the results from this model can also be applied to experimental systems which have an additional harmonic trapping potential.
The phase $\alpha$ appearing in the hopping terms is given by $\alpha=(e/\hbar)\int_{r_j}^{r_k}d\mathbf{r}\cdot\mathbf{A(r)}$, where $\mathbf{A(r)}$ is the vector potential that gives rise to the magnetic field $\mathbf{B}=\nabla \times\mathbf{A}$ and $r_j$ and $r_k$ are the positions of the lattice sites $j$ and $k$. If an atom tunnels around a plaquette, the total phase accumulated by the wavefunction is called the gauge flux, which is a gauge invariant quantity. Specifically, we choose a Landau gauge for which the hopping in the rung direction has no gauge field while hopping along the legs imparts a phase that alternates from one plaquette to the next, leading to the required staggered flux. The physical properties of the Hamiltonian (\[eq:eq1\_model\]), including the energy spectrum, response functions etc., are of course gauge invariant and only depend on the total flux going through a plaquette.
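As a consistency check of the chosen gauge, one can accumulate the hopping phases around a single plaquette: with phases of $\pm\alpha/2$ on the legs and none on the rungs, the loop sum is $\pm\alpha$, alternating from plaquette to plaquette. A small sketch (the overall sign depends on the traversal orientation convention):

```python
def plaquette_flux(j, alpha):
    """Phase accumulated hopping around plaquette j of the ladder:
    a_j -> a_{j+1} -> b_{j+1} -> b_j -> a_j; rung hops carry no phase."""
    phase_leg_a = -((-1) ** j) * alpha / 2       # forward hop on leg a
    phase_leg_b = ((-1) ** (j + 1)) * alpha / 2  # reversed hop on leg b
    return phase_leg_a + phase_leg_b             # total gauge-invariant flux
```

Neighboring plaquettes carry opposite flux, `plaquette_flux(j, alpha) == -plaquette_flux(j + 1, alpha)`, as required for a staggered field.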
![(Color online) Schematic of the two-leg ladder Bose Hubbard model with staggered flux $\alpha$ in neighboring plaquettes. The dashed box indicates the single unit cell used for the analytic and the cluster mean field calculations. The red dots represent the bosonic atoms on lattice sites. []{data-label="fig:schematic"}](schematic_lattice.pdf){width="\columnwidth"}
Single particle spectrum {#Sec: singleparticle}
========================
We first determine the structure of the single particle energy spectrum as a function of the magnetic flux values. For this we set $U=0$ and write the Hamiltonian in momentum space in terms of the Fourier components of the field operators $a_j$ and $b_j$. The energy eigenvalues can then be determined by simple diagonalization, and we show the spectrum as a function of momentum $k$ in Fig. \[Fig:singleparticle\_dispersion\], for different absolute values of the magnetic flux $\alpha$.
For zero flux and no rung hopping ($K=0$) the single particle spectrum has only one doubly-degenerate band, since the two legs of the ladder are decoupled. For finite rung coupling ($K=1$) this degeneracy is lifted and a two-band structure appears, which has the expected $2\pi$ periodicity (see Fig. \[Fig:singleparticle\_dispersion\](a)). In the presence of a finite staggered flux the lowest band continues to have a non-degenerate minimum at $k=0$ (see Figs. \[Fig:singleparticle\_dispersion\](b) and (c)) and increasing the rung coupling $K$ leads to an increase in the band gap between the upper and lower bands. Since the system now possesses a finite flux, condensing into the minimum leads to a superfluid with a unique current pattern which is further discussed in Sec. \[Sec: Landau\]. Upon increasing the staggered flux further, the lowest band starts developing additional minima at $k=\pm \pi$ (see Fig. \[Fig:singleparticle\_dispersion\](c)), which eventually become degenerate with the minimum at $k=0$ for $\alpha=\pi$ (see Fig. \[Fig:singleparticle\_dispersion\](d)). This limit is known as the *fully frustrated* case for the Bose Hubbard model and it corresponds to half a flux quantum per plaquette [@arya_PRA].
{width="1.67\columnwidth"}
The occurrence of degenerate minima at $k/\pi=0$ and $k/\pi=\pm 1$ can influence the stability and properties of the phases in different regimes. While the qualitative nature of the Mott-insulating phase remains unaffected, the properties of the superfluid states are substantially changed by the staggered flux. We discuss this situation in detail in the next section.
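The single-particle features described above can also be reproduced numerically by diagonalizing the hopping Hamiltonian in real space. A sketch using NumPy (lattice size and parameters are illustrative):

```python
import numpy as np

def ladder_spectrum(alpha, J=1.0, K=1.0, L=40):
    """Single-particle eigenvalues of the staggered-flux ladder (U = 0),
    built on L rungs with periodic boundaries; L must be even."""
    a = lambda j: j % L        # leg-a site index
    b = lambda j: L + j % L    # leg-b site index
    H = np.zeros((2 * L, 2 * L), dtype=complex)
    for j in range(L):
        H[a(j), a(j + 1)] = -J * np.exp(1j * (-1) ** j * alpha / 2)        # leg a
        H[b(j), b(j + 1)] = -J * np.exp(1j * (-1) ** (j + 1) * alpha / 2)  # leg b
        H[a(j), b(j)] = -K                                                 # rung
    H = H + H.conj().T  # add the Hermitian-conjugate hops
    return np.linalg.eigvalsh(H)
```

For `alpha = 0` the lowest eigenvalue reproduces the band minimum $-2J-K$ at $k=0$, while at `alpha = pi` the ground state is doubly degenerate, reflecting the two degenerate minima of the fully frustrated case.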
Superfluid Mott-insulator transition: Landau theory of phase transitions {#Sec: Landau}
========================================================================
In this section we discuss the results obtained for strong coupling regime and determine the complete phase diagram at zero temperature. For the Bose Hubbard model with no flux, the zero-temperature phase diagram comprises a superfluid (SF) phase and a Mott insulator (MI) phase, which are separated by a second-order phase transition, driven by quantum fluctuations [@Fisher89]. When one crosses the phase boundary from MI into the SF phase, the $U(1)$ gauge symmetry is spontaneously broken, which gives rise to a finite SF-order parameter. Since the form of this order parameter depends on system parameters, one can expect that the presence of a finite staggered flux leads to different and distinctly broken-symmetry SF phases. In the following we will use the Landau theory of phase transitions and introduce a plaquette order parameter, which identifies the various SF phases. Determining the values of $U/J$ at which the SF order parameter vanishes allows us to obtain the phase boundaries within the full phase diagram as a function of the magnetic flux $\alpha$.
The basic plaquette in our system consists of four sites, indicated by the dashed lines in Fig. \[fig:schematic\]. The different superfluid phases will be characterised by introducing the plaquette order parameter $\Psi=(\psi_1,\chi_1,\chi_2,\psi_2)$, where $\psi_{i}=\langle a_{i}\rangle$ and $\chi_{i}=\langle b_{i}\rangle$ stand for site order parameters for legs $a$ and $b$, respectively. In the mean-field limit we can decouple the sites of the unit cell by [@decoupling_EPL] $$\begin{aligned}
a_j^{\dag}a_k&\approx \psi_j^{\ast}a_k+a_j^{\dag}\psi_k-\psi_j^{\ast}\psi_k, \nn\\
b_j^{\dag}b_k& \approx \chi_j^{\ast}b_k+b_j^{\dag}\chi_k-\chi_j^{\ast}\chi_k,\nn\\
a_j^{\dag}b_j& \approx \psi_j^{\ast}b_j+a_j^{\dag}\chi_j-\psi_j^{\ast}\chi_j, \end{aligned}$$ where $j,k~\in \{1,2\}$. Hence, we can write the mean field Hamiltonian in the grand canonical ensemble in the form $H=H_{0}^{\text{MF}}+H_{1}^{\text{MF}}$, where $$\begin{aligned}
H_{0}^{\text{MF}}=& {U \over 2} \sum_{j=1,2} (n_j^a(n_j^a-1)+n_j^b(n_j^b-1))\nn\\
& -\mu \sum_{j=1,2} (n_j^a+n_j^b)+K\sum_{j=1,2} (\psi_j^{\ast}\chi_{j}+\chi_{j}^{\ast}\psi_j)\ \nn\\
& +J\sum_{j=1} (e^{-i\alpha}\psi_j^{\ast}\psi_{j+1}+e^{i\alpha}\chi_j^{\ast}\chi_{j+1}+h.c.)\nn\\
& +J\sum_{j=2} (e^{i\alpha}\psi_j^{\ast}\psi_{j+1}+e^{-i\alpha}\chi_j^{\ast}\chi_{j+1}+h.c.),\\
H_{1}^{\text{MF}}=& -J\sum_{j=1}\big( e^{-i\alpha}\psi_j^{\ast}a_{j+1}+e^{-i\alpha}\psi_{j+1}a_{j}^{\dag}+e^{i\alpha} \chi_{j}^{\ast}b_{j+1}\nn\\
& +e^{i\alpha}\chi_{j+1}b_{j}^{\dag}+h.c\big)-J\sum_{j=2}\big( e^{i\alpha}\psi_j^{\ast}a_{j+1}\nn\\
& +e^{i\alpha}\psi_{j+1}a_{j}^{\dag} +e^{-i\alpha} \chi_{j}^{\ast}b_{j+1} +e^{-i\alpha}\chi_{j+1}b_{j}^{\dag}+h.c.\big)\nn\\
& -K\sum_{j=1,2}\left(\psi_j^{\ast}b_j+a_j^{\dag}\chi_j+h.c.\right).
\end{aligned}$$ Since we concentrate on the strong-coupling regime, our expansion treats $H_{1}^{\text{MF}}$ as a perturbation. Calculating the ground state energy, $E[\Psi]$, for the four-site plaquette up to second order with respect to the perturbation $H_{1}^{\text{MF}}$ then gives $$E[\Psi]= 2Un(n-1)-4\mu n+\sum_{\nu,\nu'}\Psi_{\nu}^{\ast}M_{\nu,\nu'}\Psi_{\nu'} ,$$ where $n$ is the filling fraction and $M_{\nu,\nu'}$ are the matrix elements of the $4\times4$ Hermitian matrix $M$, which is given by
$$M=
\left[ {\begin{array}{cccc}
E_{0}(K^2+4J^2) & K & 4KJE_0~\text{cos}(\frac{\alpha}{2}) & 2Je^{-i\alpha/2}\\
K & E_{0}(K^2+4J^2) & 2Je^{i\alpha/2} & 4KJE_0~\text{cos}(\frac{\alpha}{2}) \\
4KJE_0~\text{cos}(\frac{\alpha}{2}) &2Je^{-i\alpha/2} & E_{0}(K^2+4J^2) & K \\
2Je^{i\alpha/2} & 4KJE_0~\text{cos}(\frac{\alpha}{2}) & K & E_{0}(K^2+4J^2) \\
\end{array} } \right],$$ with $$E_{0}(n,U,\mu)=\frac{n+1}{\mu-Un}+\frac{n}{U(n-1)-\mu}.$$ In standard Landau theory, the free energy is expanded with respect to a scalar order parameter and the phase transition boundary is determined by demanding that the second-order expansion coefficient vanish. In our case, the second-order phase transitions between the different SF and MI phases therefore occur when the eigenvalues of $M$ are zero. The matrix has four eigenvalues and eigenvectors given by $$\begin{aligned}
\epsilon_1&=E_0(4J^2+K^2+4JK\text{cos}(\alpha/2))+\sqrt{4J^2+K^2+4JK\text{cos}(\alpha/2)},\\
\epsilon_2&=E_0(4J^2+K^2-4JK\text{cos}(\alpha/2))+\sqrt{4J^2+K^2-4JK\text{cos}(\alpha/2)},\\
\epsilon_3&=E_0(4J^2+K^2+4JK\text{cos}(\alpha/2))-\sqrt{4J^2+K^2+4JK\text{cos}(\alpha/2)},\\
\epsilon_4&=E_0(4J^2+K^2-4JK\text{cos}(\alpha/2))-\sqrt{4J^2+K^2-4JK\text{cos}(\alpha/2)},\\\end{aligned}$$
$$\begin{aligned}
\Psi_{1}&=\left( \frac{K+ 2Je^{i\alpha/2}}{|K+ 2Je^{i\alpha/2}|},1,\frac{K+ 2Je^{i\alpha/2}}{|K+ 2Je^{i\alpha/2}|},1\right)&&\hspace*{-100pt}=\left( e^{i\theta_1},1,e^{i\theta_1},1\right)\label{SF1},\\
\Psi_{2}&=\left(-\frac{K- 2Je^{i\alpha/2}}{|K- 2Je^{i\alpha/2}|},-1, \frac{K- 2Je^{i\alpha/2}}{|K- 2Je^{i\alpha/2}|},1\right)&&\hspace*{-100pt}=\left(-e^{i\theta_2},-1, e^{i\theta_2},1\right)\label{SF2},\\
\Psi_{3}&=\left(-\frac{K+ 2Je^{i\alpha/2}}{|K+ 2Je^{i\alpha/2}|},1,-\frac{K+ 2Je^{i\alpha/2}}{|K+ 2Je^{i\alpha/2}|},1\right)&&\hspace*{-100pt}=\left(-e^{i\theta_1},1,-e^{i\theta_1},1\right),\\
\Psi_{4}&=\left(\frac{K- 2Je^{i\alpha/2}}{|K- 2Je^{i\alpha/2}|}, -1,-\frac{K- 2Je^{i\alpha/2}}{|K- 2Je^{i\alpha/2}|},1\right)&&\hspace*{-100pt}=\left(e^{i\theta_2}, -1,-e^{i\theta_2},1\right),\end{aligned}$$
where $\theta_1=\text{tan}^{-1}(\frac{2J\text{sin}(\alpha/2)}{K+2J\text{cos}(\alpha/2)})$ and $\theta_2=\text{tan}^{-1}(-\frac{2J\text{sin}(\alpha/2)}{K-2J\text{cos}(\alpha/2)})$. These four eigenvectors describe all possible SF phases.
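These closed forms can be cross-checked numerically. The short script below is not part of the original derivation and uses arbitrarily chosen parameter values; it builds the matrix $M$ entry by entry and compares its spectrum with the four expressions for $\epsilon_{1},\dots,\epsilon_{4}$ above.

```python
import numpy as np

# Arbitrary test values (the check passes for any J, K, alpha, E0).
J, K, alpha, E0 = 1.0, 0.7, 1.3, -0.4

A = E0 * (K**2 + 4 * J**2)                # diagonal entry
B = 4 * K * J * E0 * np.cos(alpha / 2)    # real off-diagonal entry
h = 2 * J * np.exp(-1j * alpha / 2)       # complex hopping entry

M = np.array([[A,          K,          B,          h         ],
              [K,          A,          np.conj(h), B         ],
              [B,          h,          A,          K         ],
              [np.conj(h), B,          K,          A         ]])

s_p = 4 * J**2 + K**2 + 4 * J * K * np.cos(alpha / 2)
s_m = 4 * J**2 + K**2 - 4 * J * K * np.cos(alpha / 2)
analytic = np.sort([E0 * s_p + np.sqrt(s_p), E0 * s_m + np.sqrt(s_m),
                    E0 * s_p - np.sqrt(s_p), E0 * s_m - np.sqrt(s_m)])

numeric = np.sort(np.linalg.eigvalsh(M))  # eigenvalues of the Hermitian M
assert np.allclose(numeric, analytic)
```

Any discrepancy here would signal a typo in either the matrix entries or the eigenvalue expressions.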
Interpretation of the superfluid phases
---------------------------------------
One can see that all four eigenvectors depend explicitly on the flux $\alpha$ and are complex for certain values of $\alpha$. However, careful examination of the four eigenvalues shows that, for repulsive onsite interactions, only two eigenvectors, $\Psi_1$ and $\Psi_2$, correspond to stable superfluid phases in the respective regimes of magnetic flux $\alpha$.
In the following we label these two distinct SF phases as superfluid 1 (SF-1) and superfluid 2 (SF-2). They are characterized by circulating gauge-invariant currents around the plaquettes, which are arranged in a staggered pattern along the ladder and can be viewed as a sequence of vortices and anti-vortices, as shown by the chiral current calculations in Section \[currents\]. Both possess a spatially uniform boson density, but the signs of the leg/rung currents correspond to two distinct current order patterns which are related to one another by time reversal or by a unit translation. This is consistent with the results known for the fully frustrated case with $\alpha=\pi$ flux per plaquette, where Hartree theory indicates the presence of the same two superfluid phases [@arya_PRA]. Since the Hamiltonian is both translationally and time-reversal invariant, the emergence of these staggered flux states is a result of the breaking of these symmetries, and we detail the calculation of the staggered gauge-invariant currents for the SF-1 and SF-2 phases in Section \[currents\].
The phases of the order parameters at each lattice site are given by $\Phi_\text{SF-1} = \left(\theta_1,0, \theta_1,0\right) $ for SF-1 and $\Phi_\text{SF-2} = \left(\theta_2+\pi,\pi, \theta_2,0\right)$ for SF-2. In the fully frustrated case, which is the point where the system switches between being in SF-1 and SF-2, the phases around the plaquette for the two superfluid states become equal and opposite, manifesting the opposite circulation of currents in each state. At this particular value of the magnetic flux, the energy eigenvalues of both superfluid states become degenerate as well, and while for $\alpha<\pi$ the SF-1 phase has the lower energy, beyond $\alpha=\pi$ the SF-2 becomes energetically more favourable. This transition from the SF-1 to the SF-2 phase therefore corresponds to a reversal of the direction of circulation.
Phase diagram
-------------
The boundary between the MI and SF phases can be found as a function of $\alpha$ by determining the zeros of the respective eigenvalues, and we show the full phase diagram in Fig. \[Fig:phase\_ana\]. The zero crossings exist in the range $-\pi < \alpha<\pi$ for SF-1, and in the ranges $-3\pi < \alpha<-\pi$ and $\pi < \alpha<3\pi$ for SF-2, implying a $2\pi$ periodicity for both superfluid phases. As noted above, for values of $\alpha$ beyond $\pm\pi$, the SF-1 undergoes a transition to the SF-2, which at this point becomes energetically favourable ($\epsilon_2 < \epsilon_1$). The critical point of the transition from the SF to the MI phase for $\alpha=0$ agrees with the known mean-field results [@oktel_PRA2015]. It is also worth noting that at $\alpha=\pi$ and $-\pi$, for a gauge choice where the phase $\alpha$ is only along one of the legs, the Hamiltonian is real and therefore time-reversal invariant.
The phase diagram as a function of different values of the hopping amplitude $K$ with fixed $J$ is shown in Fig. \[Fig:phase\_ana\]. For $K<1$, the hopping along the rung of the ladder is reduced, and hence the transition to the Mott-insulating state can be achieved at lower values of $U$. Similarly, for $K>1$ the overall hopping is larger compared to the situation with $K=1$ and the transition to the Mott-insulating phase requires a higher value of the onsite interaction $U$. This suggests that one can tune the phase transition boundary by simply changing the relative hopping amplitudes for any value of flux $\alpha$.
![(Color online) Phase diagram for the two-leg ladder Bose Hubbard model in the presence of a staggered flux of magnitude $\alpha$ for unit filling factor using Landau theory. The solid (red) curve marks the boundary between the Mott-insulator and the different superfluid phases for $K=J=1.0$. The region below the solid (red) curve comprises two types of superfluids, SF-1 and SF-2 (see text for details), which are separated by green dashed lines. The dashed (blue) lines and dotted (black) lines mark the phase boundaries for $J=1$ and $K=0.5$ and $1.5$, respectively.[]{data-label="Fig:phase_ana"}](phasediag_Landau.pdf){width="\columnwidth"}
Variational Mean field Gutzwiller approach for phase boundaries {#Sec: Gutzwiller}
===============================================================
In the following we will explore the transition from the Mott-insulator to the above mentioned distinct superfluid phases as a function of $J$, $U$, $\mu$ and $\alpha$. For this we scale the Hamiltonian in Eq. (\[eq:eq1\_model\]) by setting $K=1$ and assume that the wavefunction for the perfect Mott-insulating phase is localized with an equal number of particles $n_0$ at each site. The phase boundary between the incompressible MI phase and the compressible SF phases can then be analytically determined by calculating the energy for particle-hole-type excitations using a reduced-basis variational ansatz for the Gutzwiller wave function.
For this we assume that the total wavefunction is the product of two individual ladder wavefunctions, $|\Psi\rangle=\Pi_j |G \rangle_{a_j} |G \rangle_{b_j} $, where $a$ and $b$ label the legs of the ladder and $j$ the individual sites along a leg. In the strongly interacting regime, we work very near to the phase boundary, which implies that only Fock states close to the MI one are populated. Hence we can write a Gutzwiller ansatz for the local sites as $$\begin{aligned}
|G\rangle_{a_j} &= f_{n_0-1}^{a_j}|n_0-1\rangle+f_{n_0}^{a_j}|n_0\rangle+f_{n_0+1}^{a_j}|n_0+1\rangle \nn\\
|G\rangle_{b_j} &= f_{n_0-1}^{b_j}|n_0-1\rangle+f_{n_0}^{b_j}|n_0\rangle+f_{n_0+1}^{b_j}|n_0+1\rangle. \end{aligned}$$
We parameterise the amplitudes as [@Gutzwiller]
$$\begin{aligned}
(f_{n_0-1}^{a_j}, f_{n_0}^{a_j}, f_{n_0+1}^{a_j})&=(e^{-i\theta_{j}}\Delta_{a_j},\sqrt{1-\Delta_{a_j}^2-\Delta_{a_j} ^{'2}},e^{i\theta_{j}}\Delta_{a_j}^{'}),\\
(f_{n_0-1}^{b_j}, f_{n_0}^{b_j}, f_{n_0+1}^{b_j})&=(e^{-i\theta_{j}}\Delta_{b_j},\sqrt{1-\Delta_{b_j}^2-\Delta_{b_j} ^{'2}},e^{i\theta_{j}}\Delta_{b_j}^{'}),
\label{coefficients}
\end{aligned}$$
with complex variational parameters $\Delta_{a_j}, \Delta_{a_j} ^{'},\Delta_{b_j}, \Delta_{b_j}^{'}\ll1$, which ensures the normalisation of the states $ |G \rangle_{a_j}$ and $|G \rangle_{b_j}$. Minimizing the energy functional with respect to the variational parameters $\Delta_{a_j}, \Delta_{a_j} ^{'},\Delta_{b_j}, \Delta_{b_j}^{'}$ and $\theta_{j}$ gives the boundary between the MI and SF phases for any value of $\mu$, $U$, and $\alpha$. The dependence on the value of the magnetic flux is implicit in the largest eigenvalue of the single-particle Hamiltonian, and the Mott-insulator/superfluid phase boundaries are shown as a function of the magnetic flux $\alpha/\pi$ and interaction strength $U$ in Fig. \[Fig:gutzwiller\].
![(Color online) Phase diagram of the Bose Hubbard model for the two-leg ladder for different absolute values of staggered magnetic flux $\alpha$, for K = 1 and U = 1, calculated using a variational mean-field approach. The MI phases are indicated with their average occupancy per site, and the SF region indicated in the plot corresponds to SF-1 for $-\pi<\alpha<\pi$ and to SF-2 in the regimes $-3\pi<\alpha<-\pi$ and $\pi<\alpha<3\pi$.[]{data-label="Fig:gutzwiller"}](Gutzwiller_plot.pdf){width="\columnwidth"}
It can be seen that a higher magnetic flux enlarges the regions where the Mott-insulator phase appears by shifting the critical point, or tip of the lobe, of the phase transition to higher values. This enlargement of the insulating phase is expected, since the effect of the magnetic field is to localize the single-particle dynamics even in non-interacting systems, thus making the transition to an insulating phase easier.
Let us stress that these results are exact within mean field theory. The shape of the MI lobe is concave and independent of the dimensionality, since in our mean field calculations the dimensionality enters only through a prefactor. Since fluctuations are known to be particularly important in lower dimensions, one cannot expect the mean field theory to be quantitatively accurate for quasi one-dimensional systems. Hence, the results from the above analysis carry only qualitative importance, and provide a general idea of how the phase boundaries are affected by the presence of magnetic flux. In particular, they can be expected to work only for small hopping strengths when correlations are weak. To complete our study, we present in the following numerical calculations for the phase diagram and the chiral currents.
Numerical Results {#Sec: ClusterMF}
=================
In the following we analyze the model given in Eq.(\[eq:eq1\_model\]) numerically using a self-consistent cluster mean-field theory (CMFT) approach. For this a cluster of sites is considered as a unit cell of the system which is then decoupled from all other clusters using the mean-field decoupling approximation. For any two adjacent sites ($i,j$) which belong to different clusters we therefore write $$a_i^\dagger a_j \approx \phi_i^* a_j + a_i^\dagger \phi_j - \phi_i^* \phi_j,$$ where $\phi_i^*=\langle a_i^\dagger \rangle$ and $\phi_j=\langle a_j \rangle$ are the SF order parameters. The resulting cluster Hamiltonian is then diagonalized self-consistently with respect to the superfluid order parameter $\phi_i$, while keeping all other parameters fixed. The ground state obtained in this way can be used to calculate the number of particles at each site as $\rho_i=\langle n_i \rangle $.
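To illustrate the self-consistency procedure, the sketch below (ours, not taken from the paper) implements the simplest single-site analogue of this decoupling for the plain Bose-Hubbard model without flux; the cluster version replaces the single-site Hamiltonian by the full cluster Hamiltonian, decoupled only at the cluster boundary. The function name and parameter values are illustrative assumptions.

```python
import numpy as np

def mean_field_phi(J, U, mu, z=4, nmax=10, tol=1e-10, max_iter=500):
    """Iterate phi = <a> to self-consistency on one decoupled site.

    z counts the neighbours contributing mean-field hopping terms; nmax
    truncates the local Fock space.
    """
    n = np.arange(nmax + 1)
    a = np.diag(np.sqrt(n[1:]), k=1)              # annihilation operator
    h_local = np.diag(0.5 * U * n * (n - 1) - mu * n)
    phi = 0.1                                     # initial guess
    for _ in range(max_iter):
        h = h_local - z * J * phi * (a + a.T)     # mean-field decoupled hopping
        _, vecs = np.linalg.eigh(h)
        ground = vecs[:, 0]                       # lowest eigenvector
        phi_new = ground @ a @ ground             # <a> in the ground state
        if abs(phi_new - phi) < tol:
            break
        phi = phi_new
    return phi

# Deep Mott-insulator regime: the order parameter iterates to zero;
# weakly interacting regime: it converges to a finite value.
print(mean_field_phi(J=1.0, U=40.0, mu=20.0))  # ~ 0
print(mean_field_phi(J=1.0, U=5.0, mu=2.5))    # finite
```

Scanning $U$ at fixed $\mu$ and locating where $\phi$ vanishes reproduces a mean-field MI-SF boundary of the kind shown in the phase diagrams above.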
CMFT takes into account the non-local correlations which are otherwise overlooked in the single-site mean-field method and is therefore more accurate. With proper implementation, results from CMFT can match fairly well with those obtained from more sophisticated methods such as quantum Monte Carlo, but with significantly less computational effort. Owing to these features, CMFT methods have been used extensively to successfully study a variety of problems in the past [@Penna; @Hassan; @Yamamoto; @Macintosh; @Dirk; @MS1; @MS2; @MS4; @AD]. In this work we use a four-site cluster, as indicated by dashed lines in Fig. \[fig:schematic\], fix the value of $J$ to $1$ and scale all other parameters in units of $J$.
Phase diagrams {#phase_dig}
--------------
![(Color online) Same as Fig. 3, but the results are obtained by using the CMFT approach. []{data-label="fig:CMFT_phase_diagram_1"}](phasediag_cmft.pdf){width="46.00000%"}
The phase diagram calculated using the CMFT method is shown in Fig. \[fig:CMFT\_phase\_diagram\_1\]. To obtain it we first fix the value $K=1$ and choose a particular value of $\alpha(=n\pi)$. We then fix $U$ and vary $\mu$ to determine the $\phi_i$ self-consistently and a vanishing value of $\phi_i$ along with an integer value of $\rho_i$ signifies the SF-MI transition. To obtain the critical point for the SF-MI transition, we increase the value of $U$ systematically until $\phi_i$ vanishes and $\rho_i$ becomes equal to 1, or in other words until the system enters the Mott-insulator phase with filling factor one. We repeat this procedure for several values of $\alpha$ varying from $-2\pi$ to $2\pi$ and the critical values of $U$ obtained in each case are marked by a black circle in the phase diagram in Fig. \[fig:CMFT\_phase\_diagram\_1\]. The continuous red line connecting the black circles then indicates the SF-MI phase boundary and by comparing these to Fig. \[Fig:phase\_ana\], one can clearly see that it matches the behaviour obtained using the Landau theory of phase transitions presented in Section \[Sec: Landau\]. Numerically studying the cases for $J=1$ and $K \neq 1$ gives the corresponding shifts in phase boundaries as well (not shown).
Chiral currents {#currents}
---------------
We finally calculate the chiral currents in the system using CMFT, which will allow us to determine the overall flow pattern in the system. The difference between the phases SF-1 and SF-2 can be characterized by their local current configurations and by their global chiral currents, the latter of which have the form $$j_c=\sum_{l} \langle j_{l,b}^{||}-j_{l,a}^{||}\rangle,$$ where the associated operators are $$\begin{aligned}
j_{l,a}^{||} &= iJ(e^{-i\alpha/2} a_{l+1}^\dagger a_l - e^{i\alpha/2} a_{l}^\dagger a_{l+1}), \nn\\
j_{l,b}^{||} &= iJ(e^{i\alpha/2} b_{l+1}^\dagger b_l - e^{-i\alpha/2} b_{l}^\dagger b_{l+1}). \label{localcurrents}\end{aligned}$$
![(Color online) Variation of $j_c$ (top panel) and $|j_c|$ (bottom panel) with $n$ for $J=1$ and $K=0.25, 1.0$ and $1.50$ and ($U,\mu$) $=(8.0,11.5$).[]{data-label="fig:CMFT_current"}](currents_cmft.pdf){width="46.00000%"}
Here $l$ represents the site index, and for the numerical calculations we set the values of the on-site interaction to $U=8$ and of the chemical potential to $\mu=11.5$, as for these parameters the system remains within the SF phase. The resulting chiral currents for different values of $K$ are shown in Fig. \[fig:CMFT\_current\]. Two striking features are immediately obvious: (i) the sign of $j_c$ is reversed whenever the system makes a transition from the SF-1 to the SF-2 phase, while the sign of $\alpha$ is unchanged, and (ii) the slope of $|j_c|$ changes sign at the boundary between the two SF phases. The chiral currents for both SF-1 and SF-2 phases originate from the staggered currents going around each plaquette, and have opposite rotational directions in each phase. For the SF-1 phase, the value of the chiral current increases as a function of increasing magnetic flux $\alpha$, and the local currents flowing around the plaquettes acquire a staggered (vortex-antivortex) configuration. At $\alpha=-\pi$ and $\pi$, the Hamiltonian becomes real and time-reversal invariant. Beyond these values, the staggered currents again break this symmetry, now with a reversal of the direction of the local currents around each plaquette, resulting in opposite chiral currents and a transition to the SF-2 phase with a staggered (anti-vortex, vortex) current distribution. The flow of currents for both superfluid phases is schematically shown in Fig. \[fig:current\_schematic\]. Although the value of $\mu$ is fixed to $11.5$ for the chiral current calculations, we have checked and found similar results for other values of $\mu$ as well, as long as the system is in the superfluid phase. The only change is in the absolute value of $j_c$.
![(Color online) Schematic of current patterns associated with the SF-1 and SF-2 phases. The red arrows denote the local currents given by equation (\[localcurrents\]). The blue circular arrows denote the local staggered vortices/ anti vortices deduced from the local current pattern. The local currents possess opposite rotational directions for the two superfluid phases. []{data-label="fig:current_schematic"}](current_schematic2.pdf){width="46.00000%"}
Summary and outlook {#Sec: Summary}
===================
We have examined the Bose Hubbard model in the presence of a staggered magnetic flux on a two-leg ladder. We have shown that such a system possesses an interesting phase diagram, which is strongly influenced by the magnetic flux. The presence of alternating flux in the system leads to the appearance of two distinct superfluid phases, which are different from the ones observed in the standard two-leg Bose Hubbard model with uniform flux. We have performed numerical cluster mean-field studies to confirm these analytically obtained phases. We believe that the model we have considered serves as an example for understanding the fundamental properties of lattice gases coupled to more complicated gauge fields, and can, in particular, stimulate experimental work on two-leg ladder bosonic systems in the presence of staggered gauge fields.
Acknowledgements {#acknowledgements .unnumbered}
================
This work was supported by the Okinawa Institute of Science and Technology Graduate University, Okinawa, Japan. The computational simulations were carried out using computing facilities of Param-Ishan at Indian Institute of Technology, Guwahati, India and Sango HPC facility at Okinawa Institute of Science and Technology Graduate University, Okinawa, Japan. M.S. acknowledges DST-SERB, India for the financial support through Project No. PDF/2016/000569. T.M. acknowledges Indian Institute of Technology, Guwahati, India for the start-up grant and DST-SERB, India for the financial support through Project No. ECR/2017/001069.
D. Jaksch, C. Bruder, J.I. Cirac, C.W. Gardiner, and P. Zoller, Phys. Rev. Lett. **81**, 3108 (1998).
M. Greiner, O. Mandel, T. Esslinger, T.W. Hänsch, and I. Bloch, Nature **415**, 39 (2002).
M. Lewenstein, A. Sanpera, and V. Ahufinger, [*Ultracold Atoms in Optical Lattices*]{} (Oxford University Press), 2012.
I. Bloch, J. Dalibard, and W. Zwerger, Rev. Mod. Phys. **80**, 885 (2008).
M. Yasunaga and M. Tsubota, J. Low Temp. Phys. **148**, 363 (2007); R.A. Williams, S. Al-Assam, and C.J. Foot, Phys. Rev. Lett. **104**, 050404 (2010); M. Aidelsburger, M. Atala, M. Lohse, J.T. Barreiro, B. Paredes, and I. Bloch, Phys. Rev. Lett. **111**, 185301 (2013); H. Miyake, G.A. Siviloglou, C.J. Kennedy, W.C. Burton, and W. Ketterle, Phys. Rev. Lett. **111**, 185302 (2013).
Y.-J. Lin, R.L. Compton, A.R. Perry, W.D. Phillips, J.V. Porto, and I.B. Spielman, Phys. Rev. Lett. **102**, 130401 (2009); Y.-J. Lin, R.L. Compton, K. Jimenez-García, J.V. Porto, and I.B. Spielman, Nature **462**, 628 (2009).
M.R. Matthews, B.P. Anderson, P.C. Haljan, D.S. Hall, C.E. Wieman, and E.A. Cornell, Phys. Rev. Lett. **83**, 2498 (1999); K.W. Madison, F. Chevy, W. Wohlleben, and J. Dalibard, Phys. Rev. Lett. **84**, 806 (2000); J.R. Abo-Shaeer, C. Raman, J.M. Vogels, and W. Ketterle, Science **292**, 476 (2001).
M. Aidelsburger, M. Atala, S. Nascimbène, S. Trotzky, Y.-A. Chen, and I. Bloch, Phys. Rev. Lett. **107**, 255301 (2011).
J. Struck, C. Ölschläger, M. Weinberg, P. Hauke, J. Simonet, A. Eckardt, M. Lewenstein, K. Sengstock, and P. Windpassinger, Phys. Rev. Lett. **108**, 225304 (2012).
M. Aidelsburger, M. Lohse, C. Schweizer, M. Atala, J.T. Barreiro, S. Nascimbène, N.R. Cooper, I. Bloch, and N. Goldman, Nature Phys. **11**, 162 (2015).
M. Lohse, C. Schweizer, H.M. Price, O. Zilberberg, and I. Bloch, Nature **553**, 55 (2018).
M.Ö. Oktel, M. Niţă, and B. Tanatar, Phys. Rev. B **75**, 045133 (2007); R.O. Umucalilar and M.O. Oktel, Phys. Rev. A **76**, 055601 (2007); D.S. Goldbaum and E.J. Mueller, Phys. Rev. A **77**, 033629 (2008); R. Sachdeva, S. Johri, and S. Ghosh, Phys. Rev. A **82**, 063617 (2010); R. Sachdeva and S. Ghosh, Phys. Rev. A **85**, 013642 (2012); S. Powell, R. Barnett, R. Sensarma, and S. Das Sarma, Phys. Rev. A **83**, 013612 (2011).
D.R. Hofstadter, Phys. Rev. B **14**, 2239 (1976).
A. Tokuno and A. Georges, New J. Phys. **16**, 073005 (2014).
S. Uchino and A. Tokuno, Phys. Rev. A **92**, 013625 (2015).
Stefan S. Natu, Phys. Rev. A **92**, 053623 (2015).
E. Orignac and T. Giamarchi, Phys. Rev. B **64**, 144515 (2001).
A. Keles and M.O. Oktel, Phys. Rev. A **91**, 013629 (2015).
R. Wei and E.J. Mueller, Phys. Rev. A **89**, 063617 (2014).
R. Sachdeva, M. Singh, and Th. Busch, Phys. Rev. A **95**, 063601 (2017).
M. Atala, M. Aidelsburger, M. Lohse, J.T. Barreiro, B. Paredes, and I. Bloch, Nature Phys. **10**, 588 (2014).
A. Dhar, M. Maji, T. Mishra, R.V. Pai, S. Mukerjee, and A. Paramekanti, Phys. Rev. A **85**, 041602 (R) (2012) ; A. Dhar, T. Mishra, M. Maji, R.V. Pai, S. Mukerjee, and A. Paramekanti, Phys. Rev. B **87**, 174501 (2013).
L.-K. Lim, C.M. Smith, and A. Hemmerich, Phys. Rev. Lett. **100**, 130402 (2008).
L.-K. Lim, A. Hemmerich, and C.M. Smith, Phys. Rev. A **81**, 023404 (2011).
O. Tieleman, A. Lazarides, and C.M. Smith, Phys. Rev. A **83**, 013627 (2011).
J. Yao, and S. Zhang, Phys. Rev. A **90**, 023608 (2014).
M.P.A. Fisher, P.B. Weichman, G. Grinstein, and D.S. Fisher, Phys. Rev. B **40**, 546 (1989).
K. Sheshadri, H.R. Krishnamurthy, R. Pandit, and T.V. Ramakrishnan, Europhys. Lett. **22**, 257 (1993).
M.C. Gutzwiller, Phys. Rev. Lett. **10**, 159 (1963) ; M.C. Gutzwiller, Phys. Rev. **137**, A1726 (1965).
P. Buonsante, V. Penna, and A. Vezzani, Laser Phys. **15**, 361 (2005).
S.R. Hassan and L.de’ Medici, Phys. Rev. B **81**, 035106 (2010).
D. Yamamoto, A. Masaki, and I. Danshita, Phys. Rev. B **86**, 054516 (2012).
T. McIntosh, P. Pisarski, R.J. Gooding, and E. Zaremba, Phys. Rev. A **86**, 013623 (2012).
Dirk-Sören Lühmann, Phys. Rev. A **87**, 043619 (2013).
M. Singh, T. Mishra, R.V. Pai, and B.P. Das, Phys. Rev. A **90**, 013625 (2014).
M. Singh, S. Mondal, B.K. Sahoo, and T. Mishra, Phys. Rev. A **96**, 053604 (2017).
M. Singh, S. Greschner, and T. Mishra, Phys. Rev. A **98**, 023615 (2018).
R. Bai, S. Bandyopadhyay, S. Pal, K. Suthar, and D. Angom, Phys. Rev. A **98**, 023606 (2018).
[^1]: These two authors contributed equally.
[^2]: These two authors contributed equally.
| 2024-03-20T01:26:35.434368 | https://example.com/article/1769 |
Turkmenistan's leader wins presidential election
ASHGABAT, Turkmenistan (AP) — Turkmenistan's incumbent president has won re-election in a widely anticipated landslide victory, election authorities said on Monday.
Gurbanguly Berdymukhamedov garnered nearly 97.7 percent of the vote in the gas-rich Central Asian nation, Election Commission chairman Gulmurat Muradov told reporters. Muradov said the results from Sunday's election are preliminary and that election authorities still have to count ballots cast in Turkmenistan's embassies abroad.
The commission said turnout exceeded 97 percent of the electorate for the election, the first to feature candidates from non-government parties on the Central Asian country's ballot.
The eight other candidates in the race had all expressed support for Berdymukhamedov's government, however.
Authorities in Turkmenistan have secured acceptance among the country's 5 million people through a combination of authoritarianism and generous welfare subsidies, such as free household gas and salt.
Berdymukhamedov has been the overwhelmingly dominant figure in the former Soviet republic since late 2006, when he assumed power after the death of his eccentric predecessor, Saparmurat Niyazov.
The country last year amended the constitution to extend the presidential term to seven years from five, and eliminated the age limit of 70, effectively allowing Berdymukhamedov to be president for life.
Under Berdymukhamedov, a law was adopted to allow non-government parties, although such parties are strictly vetted. The candidates nominally competing with Berdymukhamedov were allowed to meet with voters in theaters and cultural centers, but the encounters were not televised and no debates were held. | 2023-08-16T01:26:35.434368 | https://example.com/article/4147 |
namespace IdentityServer.Modules.Common
{
    using Microsoft.AspNetCore.Builder;
    using Microsoft.AspNetCore.DataProtection;
    using Microsoft.Extensions.DependencyInjection;

    /// <summary>
    /// Data Protection extensions.
    /// </summary>
    public static class DataProtectionExtensions
    {
        /// <summary>
        /// Adds ASP.NET Core Data Protection with a fixed application name,
        /// persisting the key ring to the application's working directory.
        /// </summary>
        /// <param name="services">The service collection to configure.</param>
        /// <returns>The same service collection, allowing further chaining.</returns>
        public static IServiceCollection AddCustomDataProtection(this IServiceCollection services)
        {
            services.AddDataProtection()
                .SetApplicationName("identity-server")
                .PersistKeysToFileSystem(new System.IO.DirectoryInfo(@"./"));

            return services;
        }
    }
}
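A hypothetical call site (illustrative only, not part of this repository) would register the extension from the application's composition root:

```csharp
// Illustrative wiring only: a Startup class consuming the extension above.
using IdentityServer.Modules.Common;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // Registers Data Protection with the "identity-server" application
        // name and file-system key persistence configured by the extension.
        services.AddCustomDataProtection();
    }
}
```

Because the keys are persisted to the working directory, multiple instances sharing that directory (and the same application name) can unprotect each other's payloads.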
| 2023-11-08T01:26:35.434368 | https://example.com/article/5484 |
Q:
System crashes each time I run a python command in Terminal; how to get rid of /opt/miniconda3/bin/python?
please I need your help here!
My system crashes each time i run a python command in terminal. When i run something like python app.py, my Mac will crash and bounce, and reset.
In Terminal,
When I run python -V, it returns Python 3.7.6.
When I run python3 -V, it returns Python 3.8.5.
When I run which python, it returns /opt/miniconda3/bin/python.
A moment ago, I was trying to build a standalone app and turned on virtualenv. However, no matter how hard I tried, I failed. Then I deleted the virtualenv folder in the app folder.
Perhaps during the development process, in Visual Studio Code, I accidentally "linked" the virtualenv with the python interpreter 3.7 (/opt/miniconda3/bin/python), which was set as the default system python long ago (I can't even recall when and how). After deleting the virtualenv folder, I remember the interpreter had the word "cached" in front of the python version. Somehow I turned this and that on and off, restarted the app, and got rid of the word "cached". But the system still crashes.
My Mac now crashes every time I run a python command in Terminal. Please help!
I have even tried conda update conda; it says no such directory.
Of course I then tried updating miniconda; same answer.
A:
You should remove it from your PATH.
To do so, check for the miniconda3 entry in ~/.bashrc, ~/.bash_profile, and /etc/profile and comment out the line:
export PATH=/opt/miniconda3/bin:$PATH
by preceding it with the # character.
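For example, assuming the entry lives in ~/.bash_profile, it can be commented out with sed. The snippet below demonstrates this on a throw-away temp file so nothing real is modified; swap in the actual profile path once you have confirmed which file contains the line.

```shell
profile=$(mktemp)    # stand-in for ~/.bash_profile (or ~/.bashrc, /etc/profile)
printf 'export PATH=/opt/miniconda3/bin:$PATH\n' > "$profile"

# Comment the miniconda3 entry out in place; a .bak backup is kept alongside.
sed -i.bak 's|^export PATH=/opt/miniconda3/bin|# &|' "$profile"

cat "$profile"    # -> # export PATH=/opt/miniconda3/bin:$PATH
```

After editing the real file, open a new terminal (or source the file); `which python` should then no longer point into /opt/miniconda3.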
| 2024-05-31T01:26:35.434368 | https://example.com/article/8077 |
To extend a frequency range and to cover process, voltage and temperature (PVT) variation, a voltage controlled oscillator usually requires a large gain. The large gain, however, may cause more noise and supply pushing (due to increased sensitivity to the variation of a supply voltage). To solve this problem, one solution provides a set of sub-bands each having smaller gains compared to the voltage controlled oscillator for preventing the noise and supply pushing issues; a disadvantage of this solution, however, is that this technique requires digital calibration that results in greater manufacturing costs. Another type of voltage controlled oscillator with linear input voltage characteristics may have smaller gain and does not use digital calibration; a disadvantage of this solution, however, is that the linear range of this type of voltage controlled oscillator may not cover the full input range. | 2024-01-17T01:26:35.434368 | https://example.com/article/1588 |
April 1, 1998
Stanis[ł]{}aw D. G[ł]{}azek
Institute of Theoretical Physics, Warsaw University
ul. Ho[ż]{}a 69, 00-681 Warsaw
**Abstract**
This paper describes a network of teachers and students who form a living system of education at all levels. Organization of schools is based on new principles. One person can be a teacher in one area or activity and a student in another. Schools are owned and governed by the teachers and students. The system is powered at all levels equally by the will of students to learn and the will of teachers to learn and to share their expertise with students.
We describe the main processes and structural principles of the network. The key processes are the process of learning by inquiry and the processes of design and learning by redesign. We also describe the steps required to initiate the network growth process from small-scale seeds. This avoids wasting human resources and money on a large scale. The first step we suggest for the teams of teachers and researchers who are interested in building the network is studying a bit of basic physics by inquiry, using specially designed and well tested materials.
The network is economically sound. We distinguish the economy of the network because we claim that the freedom and safety of learning and teaching processes can be based only on the financial independence of teachers who gained their independence as a result of developing and using these processes. The system is designed to ensure the highest quality in all respects. The design described here is provided as an illustration of the underlying principles and their implications, rather than as the ultimate structure. In fact, the living network is expected to evolve and adapt efficiently.
**1. INTRODUCTION**
It is bad that our educational system does not teach students how to learn effectively. It is bad that students are often lost and not interested in learning at school. But it is very bad that teachers at all levels get used to thinking they cannot change much in the way they teach. First of all, we teach the way we were taught. Secondly, it is very hard to make changes. We want to reach our students’ minds but we feel we have to do it in an environment which we have little influence on as individuals. Most importantly, the environment is not safe for experimentation on better ways of teaching. We know we have to complete an overwhelmingly difficult program. We know we have an unacceptably short time to do so. We have little freedom to make choices. And we will be criticized if we admit our students do not learn much. In fact, we do not even know how badly and how little they learn, although we certainly know it is not enough. More interestingly, we hesitate to work on measuring how much they learn. We do not have time to do that. And it would be hard for everybody in the system to accept that our students do not learn how to think critically, how to learn effectively and how to approach new problems requiring solution. But if we admitted we do not know how to teach effectively we would question our competence and jeopardize our living.
Assuming that the overarching goal of education is to train people in learning effectively so that they are able to learn and change throughout their whole life, one is faced with the question: What needs to be changed in our current concept of teaching? For example, it is clear that a child learns very efficiently when it feels safe and when it can experiment. It can touch, taste, break and throw things, make its mother angry, etc. The child learns from failure and pain or success and pleasure. When such learning is wisely instructed, I call it learning by inquiry. In its mature form, learning by inquiry is the only way humans truly learn new things and become owners and users of what they learned. I claim teachers could learn how to teach effectively if they worked in an environment safe for experimentation and had good examples to study and patterns to discover. Also, children will eagerly learn if guided by a teacher who feels like a lion letting cubs play with food before showing them how to hunt. Teachers will be self-motivated to learn and to teach by inquiry if the best sources to learn from are made available to them. One such already existing source is physics, the most advanced science driven by inquiry. I will return to this point later.
If the freedom to learn is the prerequisite to teachers’ action, on the one hand, and the fear of rejection when attempting new approaches originates in the fear of losing financial security in the current system, on the other, then the way out is to create a system in which successful attempts at teaching in a better way are rewarded with more financial independence for teachers. This independence is the bottom-line security condition which is not satisfied today and which has to be satisfied for us to feel free to explore. But our safety has to result directly from our work, not from top-down orders. The reason is that arbitrary top-down ruling may change at any time for some reason and the temporarily existing safety will be gone, as if the lion went away. However, if teachers and students work in a self-conscious network, which is in control of its income and spending and which is driven by the merit of the overarching goal of education, and if teachers generate their income through their own action, then no arbitrary budget decisions can take away from us our freedom to learn and teach by inquiry the best way we can.
In the network I envision, whenever we see the excitement about learning new things in the eyes of our students we will know we are making irreversible progress. We will ourselves be learning how to improve the way we teach. We will engage in the work of creating new opportunities for achieving better results more often. We will be building our own system, and our own future. Thus, the ownership principle becomes the basis for the overarching goal of education. We, the teachers and students working in our own network, become the lions protecting our own development and future, and we become the guarantors of our freedom to learn by inquiry both the subject matters and how to teach better.
Being a teacher and observing teachers in schools of all levels, one can notice that the money a teacher earns for doing the job is not related to the depth of learning experience provided to students. On average, teachers are not in a position to be entrepreneurial [@Drucker] in their schools and they do not pursue quests like the motivated professionals who create the prosperity of their disciplines [@Hughes]. Teachers themselves do not achieve clear results that could raise awe in students and motivate them to learn. A tired teacher at the end of the day is supposed to check a pile of poorly done homework and accept the lack of a future. Little time is left to the teacher for personal growth, gaining respect and winning intellectual freedom. [@Covey] The lion cub eats well after a good hunt. Teachers eat the same no matter how well they teach. But the coupling I postulate between the results of teaching and the teacher’s income must not be confused with using greed for money to serve educational purposes. That would certainly not work.
The money reward to teachers in contemporary society is essential for many reasons. The key one is that low-paid teachers cannot teach students about the achievements of the society. For they know these achievements only second hand, and only as much as can be bought using very limited and already allocated funds. And currently even a modest attempt on the part of public school teachers to change this situation may take away their basic income, because the system is not open to experimentation. Moreover, if high-quality work is not steadily improving the teacher’s social status, the situation becomes, in the eyes of students, the primary demonstration that learning more with more understanding is not the way to make one’s life complete. Students will learn to think independently from other sources and not from the limited and helpless teachers.
But most importantly, a typical teacher is subjected to the power of an arbitrary educational system, and she or he cannot teach students how to exercise their rights to learn, understand, build, improve, prosper and be happy in life.[@Fullan] No wonder teachers believe that their work environment has never permitted them to show what they can really do.[@Lortie] In turn, no bright child wants to follow a lead to a dead end and become a teacher. This is very bad because without the brightest talent teachers have little chance to change their status. There is no exception to this rule, from the lowest to the highest levels of educational institutions.
Being a student and observing students, one sees that they do not learn at school enthusiastically. Students are not engaged mentally in the learning processes to the extent required to learn effectively. [@Sarason] One or two teachers impress a student sometimes, but students too rarely see their school as a source of inspiration. They are often forced to do things they consider useless. Their compliance and hard work on assignments elevate their opinion among teachers but do not bring outcomes of clear immediate value to them. The school does not teach us how to learn according to current needs and how to use knowledge and skills to steadily advance in life.[@Fullan]
This article describes principles one can use in attempting to change the educational status quo at the beginning of the 21st century. The place to start is the reader’s own workplace and neighborhood. But before the reader gets a chance to start thinking about her or his contribution to the process of redesigning education [@WilsonDaviss], I need to describe the processes I envision. Therefore, I invent a model structure and describe how the processes work in that structure. This article is limited to these two subjects. My aim is to show that one needs to start thinking in terms of the system processes in order to find out how to make the overarching goal of education a driving force.
The vital processes of learning in the system I envision involve the exchange and trade of knowledge, skills, materials and other resources, such as the time needed for learning and practice. The structure of the system supports this trading. Nevertheless, I shall describe the structure of the system first, because the processes of learning and trading occur within it. The structure changes according to the needs of the processes. Therefore, I will describe only an initially conceivable structure. In fact, it is not known whether any single stable structure can fulfill the needs of the learning processes. [@WilsonBarsky] It is more probable that the structure will evolve, as is common to life and our civilization.[@deGeus]
It is essential to understand what kind of trading I am talking about. Teachers are perceived as hired by the society to push knowledge that is not produced by them. Teachers are considered passive in creating civilization. Teachers are supposed to pump civilization into students’ heads as if it were gas pumped into a car. A math teacher is perceived as someone who teaches addition, not as somebody who teaches thinking and learning through the example of addition. A student is perceived as a car that needs gas. Students are not perceived as having delicate brains to be loaded with skills and checked for flawless function by extremely competent artists of human mind-crafting. The way we look at the educational system resembles dumb work on trivial projects, such as repainting white boards in color or hitting white and black keys. Students are not seen as learning how to become artists, and teachers are certainly not perceived as masters. Therefore, teachers are denied the right to collect money as if they were creating something desirable to us with their own minds and hands. Teachers are so used to this treatment that they seem not to see that they can learn more and become owners of a new generation of school system.
But I need to warn the reader right away that a private school is not the idea I am talking about. What I am talking about is a new profession of teaching, based on serious research aimed at understanding how to teach effectively. Professional teachers form a network of experts on subject matter and teaching techniques. The network teachers earn more if they teach better. These teachers can say that their profession offers them multiple opportunities for personal and intellectual growth every day. And a teacher’s future looks brighter the more apt a student she or he is, at all levels, from the nursery to the highest academia.
My claim is that educational systems are hard to change because teachers are not owners of their trade. While all prospering businesses buy something, work on the bought material and sell the result at a higher price, or provide services using their own knowledge and ability, which is thus being sold, teachers are hired for a job that is regarded, in a way, as merely loading wagons with potatoes. The potatoes, trucks, ramps and trains belong to us. Anybody can do the loading job. And the students leave the station full of potatoes, like wagons. We are shocked that they are not illuminated Picassos, Liszts or Pasteurs. We demand a lot while providing merely regular pay for compliance with rules.
Teachers need motivation to transform their occupation into a profession. Current systems are such that attempts to innovate put the regular income of a teacher in jeopardy. Trying new things is very risky because it is not guaranteed to bring results. Nobody wants to stop teachers from trying but the system effectively forces them to quit because they get tired and burn out. There is no structure to support innovation. There is no structure to develop innovations into elements of teaching profession and culture. And there is no way for teachers to be fully recognized. Therefore, teachers have to find a way to win the recognition of their trade themselves. They need to start trading with what they possess and do.
In fact, a successful teacher is an owner of incredible gifts, someone who possesses unusual skills and knowledge and can set a high price on services provided for students. Students can be active learners from competent teachers. Both need freedom to build a system to function in a natural way. And a clear suggestion for such a system to have a chance to succeed can be found in universities.
University professors are considered to be owners of their wisdom because they participate in the process of creating science. Teachers could be seen as owners of their teaching materials and techniques if they were creating the materials and methods they use. University students are responsible for what they spend their parents’ money on and the same could happen with students in lower schools. Universities are populated by a whole hierarchy of people with different levels of knowledge and skills while at school we have only teachers and students. A global network of schools based on the principles of a new generation university which belongs to teachers and students is the vision I describe.
The main problem with the model invented in this article is that it is too complicated to comprehend and judge quickly. Moreover, it is full of conflicts. If it could work it would only do so through a balance of opposing forces. One can easily point out apparent inconsistencies. To explain why the described system could work one would have to answer an unending chain of questions and the answers would stimulate new questions. Solutions to problems showing up in the system have to be invented on-line as the whole system grows. But what I am trying to sell is not the particular model. I am using the model to convey the idea that the simple principle of schools owned by teachers and students immediately leads to incredibly rich structures and provides clear criteria for distinguishing which new elements of the system might be useful and survive and which not and die. The ownership principle opens a new way of thinking about education. If this is understood by the reader, the rest is details that may change in time.
**2. STRUCTURE OF THE NETWORK**
It is important not to perceive the structure I describe as rigid and ultimately defined. A living company evolves.[@deGeus] The model example I provide here is arbitrary but it illustrates the underlying principles. The principles themselves are not arbitrary. If the described model structure has drawbacks, including serious ones, they should be thought of as resulting from a long evolution and one should ask the question what processes exist in the system that can resolve these problems. The structure I describe is arbitrary because no real process of evolution existed to create it in a natural self-correcting way. This is not a problem since the model is only a tool to bring up relevant issues.
Every teacher and every student is a member of the network as an individual with equal rights to all other individual members. The members form teams, classes, schools, school districts, school regions, academies and a single society of professors of education with its own hierarchy. These subgroups have different responsibilities and are distinguished by the responsibilities.
The responsibility of a team is to learn an assigned (or chosen) subject. A team contains about 4 or 5 members (between 3 and 7) and exists for as long as the assignment or chosen task is not completed. There may be 4 students in a team, or 4 teachers, or 4 professors, or a mixture thereof. Teams are formed according to the demands of the subject they are supposed to study. A team is the basic element of the system because it is the learning engine which delivers the result of its learning process: a report on what and how the team learned carrying out its project, and a product the team was supposed to produce through the project.
Teams form classes of varying size according to the amount of assistance the subject matter requires. Subjects such as the floating or sinking of rigid bodies in liquids [@McDermott] can be studied in classes of 4, 8 or even 10 teams. Such an “Archimedes” class may require one to three instructors for assistance. A class is distinguished by the presence of instructors. Since teams encounter difficulties when learning new things and new ways of thinking, they need to ask questions and verify their reasoning with instructors. The instructors can be students, graduate students, teachers, graduate teachers, professors, graduate professors and professors of education, depending on the kind of class they instruct. For example, a class studying the notion of the contexts of productive learning [@Sarason], according to modules designed in analogy to those of Ref. [@McDermott] for physics, may require instructors to become members of the teams. I will return later to the issue of materials for teaching the context of productive learning by inquiry: in the network I envision this context is a basic building element, while in current systems of education it is almost entirely absent.
Classes form schools. It is important to form schools in order to sustain social aspects of the learning processes. Schools are to serve their surrounding communities. An elementary school is fairly local in this respect while a top ranking university may have a mission of national or even global outreach. Schools are distinguished by having a principal and a body of teachers. Schools are basic posts of the system. Schools owned by teachers and students form a network because it is easier to operate in the network than outside of it. The network is international because it draws on teaching and learning experiences which are published and useful across the world.
Teachers form a school to create a body of sufficient expertise in the subject matters to be able to teach, and to form a setting in which their own development of skills and knowledge will be possible through sharing duties and exchanging experiences and results. Teachers of one or more schools may form a team to learn and work on some problem. The striking feature of a system driven by the learning of teams of various kinds is the possibility of a self-organized virtual school, formed of classes of teams of teachers as a result of their own recognition that the problem they want to study requires such a structure, with a body of super-teachers drawn from other elements of the network.
The key function the principal is responsible for is to make the school productive. The productivity is measured by the results of students on standard tests and by the number of team reports from the school sold on the system market. It is a big success to produce a good report which sells well.
We need to recall that a team may be composed not only of students but of teachers as well, and a team may include people from outside the school. In that case, the team result is shared according to prescribed rules. Consequently, the numbers of reports, publications or student achievements need not be simple integers, and instead of using special measures the school outcome is measured in terms of money: total sales minus total investment, divided by the number of school members. Details of the accounting will be discussed later, but we need to mention three things.
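The outcome measure just stated is simple enough to sketch in code. The sketch below is illustrative only; the class name, field names and sample figures are hypothetical, not part of the envisioned system’s specification.

```python
from dataclasses import dataclass

@dataclass
class SchoolAccounts:
    total_sales: float       # income from reports, materials and credits sold
    total_investment: float  # spending on passes, materials and instruction
    members: int             # all teachers and students in the school

    def outcome_per_member(self) -> float:
        """School outcome in money terms: (sales - investment) / members."""
        return (self.total_sales - self.total_investment) / self.members

# A hypothetical school year:
school = SchoolAccounts(total_sales=250_000.0,
                        total_investment=190_000.0,
                        members=120)
print(school.outcome_per_member())  # 500.0
```

Dividing by the number of members, rather than reporting the raw surplus, keeps schools of different sizes comparable, which is what allows the principal’s performance to be reviewed on a common scale.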
One is that the team results may be highly professional and even able to solve practical community or wider problems. Therefore, they may be copyrighted, patented or sold. For example, an outstanding teaching material on the subject of sinking and floating, electric currents or optics, such as Ref. [@McDermott], may be in high demand in all schools or, a local solution to the problem of child care and a computer program needed in its administration, written by students and teachers, may have broad applications.
The second is that the standard test results of students need to be accounted for in money. Therefore, tables of equivalence between credits and money are developed. Credits are universal for the whole system, but a credit may be equivalent to more money in a better school when considered as a product, and to less money in a better school when considered as an expense. This point will be further discussed in the section dealing with finances. Here we mention only that accounting for schools, and reviewing a principal’s performance, in terms of money makes it evident that education is not a burden to the society but a source of major income if money is properly invested. Accounting for education in terms of money also prevents wasting public funds on education. For it is too easy to spend money without accountability, while good accounting creates responsibility.
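The asymmetric equivalence rule (a credit worth more as a product and less as an expense in a better school) can be stated as a tiny function. Everything here, including the quality scale and the linear scaling itself, is an assumption made for illustration; the article leaves the actual equivalence tables to ongoing studies.

```python
def credit_value(base_value: float, school_quality: float, as_product: bool) -> float:
    """Money value of one credit.

    school_quality is an assumed scale where 1.0 is average and larger
    means better.  In a better school a credit earned (a product) is
    worth more, while a pass bought to earn it (an expense) costs less.
    """
    return base_value * school_quality if as_product else base_value / school_quality

# In a school of quality 1.25, with an assumed base credit value of 100:
print(credit_value(100.0, 1.25, as_product=True))   # 125.0
print(credit_value(100.0, 1.25, as_product=False))  # 80.0
```

Whatever the real tables turn out to be, the asymmetry is the point: it rewards schools that produce credits cheaply and sell them dearly, which is exactly the productivity pressure the text describes.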
The third thing is that the details of the calculation do matter. In fact, they are essential. It is not obvious how to evaluate the results of education in terms of money. Therefore, the method is a subject of ongoing studies. The studies are essential to the network because the education it offers must be useful to the society for the system to prosper and actually be paid as much as it aspires to. The studies’ feedback is critical to long-term planning and to the development of the rules for evaluating credits in terms of money. But the studies are essential for many more reasons vital to the network. Here are some examples: the design and redesign of curriculum structures; admission, examination and testing procedures; hiring policy; communication with employers; satisfying the needs of the job market through the network; and longitudinal studies of alumni careers. Therefore, the evaluation scheme is a permanent source of initiative for improving the network to better serve the society. The details are hot subjects in the network.
School districts are formed by schools spontaneously, to coordinate work and express the opinions of many schools in a selective and organized fashion which guarantees coherent action in defense or promotion of the districts’ educational or other interests. The body of representatives is elected by the schools. Therefore, the districts are distinguished by the representatives who serve their needs. Districts are formed to contain schools of their choice and do not have to be restricted to primary, middle or high schools only. Districts prosper if their schools earn money.
School regions include school districts and universities. The regions are formed to allow universities and schools to utilize their resources in producing team reports and selling them. The principal role of regions is to provide permanent in-service learning opportunities to every member of the system on the highest possible level. Teachers study in the region to keep abreast of the science or art they teach. University students, graduate students and faculties study ways their research capacity can be enhanced through becoming more useful in education, mainly through many opportunities of delegating responsibility for teaching to students. [@WilsonDaviss]
One of the students’ missions then becomes to work with less educated students on their learning skills, using the best materials available. Students who excel in teaching can pursue studies in the system and become teachers. It takes a region to create conditions for such advanced studies. Two reasons are essential. The region is the smallest structure whose size provides a sufficient number of students with the talent for teaching and for becoming teachers of teachers. The region is also the smallest structure that can support high-quality research. Regions are large enough to create conditions for unlimited personal growth of their members. School regions are also useful in creating a sufficiently stable environment to support educational processes in periods of setbacks.
A school region is distinguished by a board of trustees whose role is to assure a healthy economy of the region’s educational services. Trustees of a region are elected by the region’s members. The boards of trustees use the help of academies.
Academies are organizations quite independent of the team, school, district and regional structure. They are the regional networks of experts who contribute their expertise to the region system and are recognized by the system as such. The academies are professional organizations of providers of services to the system. Academies recruit their members following their own rules. Academies can undertake action of their choice driven by the need of the system. A key additional function of academies is the publication of journals.
Academies publish refereed journals on education and sell them in the system as an additional source of income. The new striking feature of the journals is that they have subsections for learning materials which can be bought separately and in large quantities. School teams may attempt to publish outstanding reports in the journals. [@EJP]
The whole network of schools of all kinds requires a body of distinguished teachers for passing judgments on issues important to the whole system. This is a Society of Professors of Education. Members of the society have at least 100,000 copies of educational materials sold through the system. But in order to become a Professor of Education a candidate must have a record of working in the system for at least 25 years and have educated at least 25 teachers who sold more than 10,000 copies each of teaching materials through subsections in the refereed academic journals.
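The membership criteria above are explicit enough to be checked mechanically. The function below is a sketch; the argument names are invented, while the thresholds are taken directly from the text.

```python
def eligible_for_professor(copies_sold: int,
                           years_in_system: int,
                           educated_teachers_sales: list[int]) -> bool:
    """Check the Society of Professors of Education criteria.

    educated_teachers_sales lists, for each teacher the candidate
    educated, how many copies of teaching materials that teacher sold
    through the refereed journals.
    """
    strong_teachers = sum(1 for sold in educated_teachers_sales if sold > 10_000)
    return (copies_sold >= 100_000
            and years_in_system >= 25
            and strong_teachers >= 25)

# A candidate with 26 trainees, one of whom falls short of 10,000 copies:
sales = [12_000] * 25 + [9_000]
print(eligible_for_professor(120_000, 27, sales))  # True
```

Note that the criteria are all expressed in the system’s own currency of sold materials, so no external committee is needed to interpret them.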
The network of schools is global and its international character is obvious to all members. It is clear that translation of the academic journals plays an important role in the international contacts and learning across the globe.
Different countries may have districts and regions of different sizes but no administrative superstructure above regions is needed or allowed. There exist data banks connected in the network so that no central headquarters are required and still the system is perfectly conscious of its identity. Members identify themselves by contributing to the processes of the system and by using its structures. Regions may easily cross state boundaries for their structure is governed by the processes they support. Examples of well known existing international network structures are Internet and VISA International [@Hock].
Analogies exist between the school network and other essential systems in our civilization. The system of electric power distribution is a leading example. [@Hughes] One can think of many analogies between the two systems. Let me give you a surprising example. One may think about the contexts of productive learning as analogous to the super-conducting wires, about the processes of learning and teaching by inquiry and circulation of teaching materials as analogous to the electric currents and about the overarching goal of education as analogous to the principle of optimizing the load factors. Human brains are the sources of power. The rules of science, democracy and total quality are analogous to the Kirchhoff rules. The ownership principle is analogous to the closed circuit condition for the currents to flow. The transition from the contemporary educational systems to the networks of schools owned by teachers and students is analogous to the transition from the direct to alternating current in the case of the electric power systems.
**3. OWNERSHIP PRINCIPLES**
The forms of ownership I describe are invented for illustration, have drawbacks and are partly contradictory. Such a situation may be realistic but the model I describe is not sufficiently studied to claim that much. One would have to study mathematical models of the ownership structure including mechanisms of governance, income, spending and population changes to make reasonable evaluations of the ownership principles and I have not done such studies. Still, the ownership principles are essential to the network idea and I offer a scenario to think about.
The system that can emerge from a real trial may evolve to other forms of the ownership but it is clear that if the system belongs to teachers and students at the beginning and grows successfully there will be little incentive for taking the ownership away from the primary constituents. And if many systems are initiated some ownership schemes will succeed and some will die.
Two different forms of ownership exist in the envisioned network, one for teachers and one for students. The need to differentiate comes from the fact that teachers support their own living (and their families’) through the work in the system while the students’ living is supported by parents or other supporters. In addition, there is a mechanism built in for a gradual change in the form of ownership available to students who learn particularly easily, satisfy well defined criteria and choose to make a career in the system.
Teachers own shares. Shares bring dividends. Shares cannot be bought; they can be earned. A teacher receives a prescribed basic number of shares when she or he joins the system. The basic number of shares ascribed to a job position is proportional to the time necessary for doing the job and the complexity of the tasks. The complexity factors are tabulated and published. The basic number of shares corresponding to a full-time job of lowest complexity brings enough dividends to live on if the whole system works productively and efficiently. Advancing in the hierarchy of teaching positions results from the growing ability to become responsible for more demanding jobs, to which a larger basic number of shares is ascribed.
Anyone in the system can earn more shares than the basic number for her or his position by doing the job better. In particular, one can earn shares by publishing educational materials. To give a striking example: a cleaning staff member of some school may publish a material on the economic organization of efficient cleaning in schools so useful that it sells in thousands of copies. The number of shares earned is proportional to the number of copies sold. Every activity in the network is evaluated in terms of shares. Every teacher knows the number of shares in the whole system and in her or his possession. Once the yearly budget forecast is published it is straightforward to foresee the individual basic income for the current year, and everybody can evaluate their own additional income knowing the number of shares they hold in addition to the basic number.
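The income forecast described above follows from two published numbers: the yearly budget available for dividends and the total number of shares. The sketch below assumes a simple proportional payout; the function names and figures are invented for illustration.

```python
def dividend_per_share(dividend_budget: float, total_shares: int) -> float:
    """Known to everyone once the yearly budget forecast is published."""
    return dividend_budget / total_shares

def forecast_income(own_shares: int, basic_shares: int,
                    dividend_budget: float, total_shares: int) -> tuple[float, float]:
    """Return (basic income, additional income) for the current year."""
    d = dividend_per_share(dividend_budget, total_shares)
    return basic_shares * d, (own_shares - basic_shares) * d

# A teacher holding 1,300 shares against a basic number of 1,000,
# in a system with 200,000 shares and a 10M dividend budget:
basic, extra = forecast_income(1_300, 1_000, 10_000_000.0, 200_000)
print(basic, extra)  # 50000.0 15000.0
```

The transparency matters more than the arithmetic: because both inputs are published, every member can run this calculation for herself, which is part of what makes the network self-conscious about its income and spending.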
Students own credits. Credits have to be earned by passing standard tests and written and oral exams and by publishing individual and team reports through the academic journals (this is independent of the fact that producing team reports is the main source of learning experience for students). To be able to earn a credit a student has to buy a pass for a course that leads to the credit. Students can buy passes for money. They can also work in the system as teachers, administrative assistants or other staff, earn shares and pay for passes from their dividends. When students join the system to work as teachers their shares become sources of dividends as for teachers.
One can disclaim shares by leaving the system. Such shares die; they cease to exist. For example, when a student finishes education in the system, the number of her or his shares at that point is multiplied by the current value of a yearly dividend per share. This says how much the student was making a year at graduation: it tells potential employers how much they have to offer the alumna or alumnus to attract attention. In turn, students know how much they can expect on the basis of their education. The shares of the alumni are subtracted from the total number of shares. When a teacher leaves the system her or his shares are processed in the same way.
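The exit procedure, valuing a leaver’s shares at the current dividend rate and then retiring them, can be sketched in a few lines. Names and numbers are again hypothetical.

```python
def leave_system(member_shares: int, dividend_per_share: float,
                 total_shares: int) -> tuple[float, int]:
    """Value a leaving member's shares and retire them.

    Returns (yearly-income figure quoted to employers, new total shares).
    The leaver's shares die: they are subtracted from the total.
    """
    exit_value = member_shares * dividend_per_share
    return exit_value, total_shares - member_shares

# A graduate with 1,300 shares at a dividend of 50 per share:
value, remaining = leave_system(1_300, 50.0, 200_000)
print(value, remaining)  # 65000.0 198700
```

Retiring the shares keeps the dividend pool concentrated among active members, so departures do not dilute the income of those who stay.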
Possession of a large number of shares opens unlimited opportunities for personal growth and attaining intellectual freedom. Putting the process of individual growth of teachers and students on top of other processes and setting priorities in such a way that ownership remains in the hands of teachers and students no matter what happens, has one key implication: teachers become vitally interested in reform since reform as a process of redesign (cf. [@LBR]) is the natural way of improving their own living. At the same time otherwise insoluble or hard problems may become less forbidding.
I give examples of problems I heard about in “Discovery” [@Discovery] and in “Reading Recovery” [@Clay]. “Discovery” and “Reading Recovery” are educational reforms of unusual quality, “Reading Recovery” being one of the most advanced reform models in the world. But before I give the examples I need to explain why I am giving them. Namely, I only mean to suggest that if ownership by teachers and students were seriously considered from the beginning, then new ways of approaching the problems could come to mind. Because of my limited knowledge about “Discovery” (which educated several thousand teachers) and “Reading Recovery” (which operates in about 9000 schools in the US alone and continues to grow), my suggestions are hypothetical. However, my goal is not to tell the leaders of “Discovery” or “Reading Recovery” what they should have been or should be doing. My aim is only to show that the ownership principle implies a new way of thinking about problems of education.
In the case of project “Discovery”, I think the ownership principles could have changed the project’s recruiting scheme, the profiles of the teacher leaders’ education and the organization of their work, and the motivation of teachers attending summer institutes. The institutes would have had new elements in the program, of great interest to people seeking the status of independent thinkers and educators. Most importantly, however, at the end of the project, when public funds expired, the participants could have been prepared to sustain their independent network and continue to benefit thousands of students without the shock of disappearing outside support. The above conclusion may appear surprising in its simplicity. However, I recall meeting teachers, teacher leaders and directors in the project who did not expect to be left out in the cold. Most importantly, they found themselves rapidly developed, with broadened horizons and, ironically, unable to plainly return to their previous roles in the existing system, which had no room for growth in the directions they found attractive.
In the case of “Reading Recovery”, one might suggest that teachers and teacher leaders who owned the system could be motivated to develop their skills beyond the requirements set by the leading scientists. They could have a vested interest in enhancing, rather than diluting, the standards of their services to children as the system diffuses. The research leaders could develop the system securely if their judges were not arbitrarily selected but were mainly the teachers and parents who know the project results first hand and appreciate the outcomes of the longitudinal studies. Even more importantly, the research leaders of their own self-improving system could attract new young researchers, because the principle of the “Reading Recovery” approach to reading could be extended to other disciplines if enough research were done. One direction I consider very important is the development of materials for children having difficulties with learning science, analogous to McDermott’s modules on physics [@McDermott] and the reading books in “Reading Recovery”. The projection into the future would be unbounded and exciting. The system could remain highly interesting to its leaders independently of false outside opinions.
**4. GOVERNANCE PRINCIPLES**
All schools in the network have equal rights and the same standards of excellence. For individuals, there is a schedule of ranks based on the number of shares. People of lower rank usually pay attention to people of higher rank, because the latter usually know better how to earn shares. The principles of effective teaching govern the network.
The average number of shares per teacher in a school measures the quality of the school. Schools are also measured by the achievements of their students: the number of credits per student.
Credits are well defined through common standards and other criteria, such as juror judgments. The standard tests are built on the principle of one framework problem with varying input data, which imply different correct answers obtained by the same reasoning. The tests verify understanding and reasoning. The skill of reasoning is the goal of education; the correct result is of value.
Shares become worthless if students do not buy passes to earn credits. Therefore, teachers are interested in keeping the number of sold passes high. This leads to improvement in the quality of education, since students have the freedom to choose how to use their budgets. There is no prescribed governance structure that would need to be imposed artificially in order to serve interests other than wisdom-centered learning. [@KGW]
Team leaders are elected within a team. Everybody can suggest a leader, but it is the team that decides.
Class leaders are elected by the class. Classes can and often do bid for teachers for specific courses. Teachers have the right to choose which class they will work with. Teachers usually prefer to choose a bidding class with the highest number of credits and shares. If there is a conflict without a rational solution, the right of choice and the responsibility for the decision belong to the teacher who has more shares in the system.
School principals are elected by the school’s teachers and students, voting equally, for a period of 5 years. A single person can serve at most 3 such periods as principal.
District representatives are elected from the whole district membership of the network by all teachers and students for 7 years, to ensure continuity. Schools vote separately on a list of candidates. The number of votes per school is equal to the number of shares owned by members of that school. One person can serve up to 2 periods.
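The share-weighted block voting described above can be sketched as follows. This is a minimal illustration, not part of the network specification; the data layout, function name and the plurality rule for picking the winner are my assumptions.

```python
# Each school votes as a block; its vote weight equals the total number
# of shares owned by that school's members (teachers and students).
def district_winner(school_votes):
    """school_votes: list of (candidate, school_shares) pairs, one entry
    per school, recording which candidate the school chose."""
    tally = {}
    for candidate, shares in school_votes:
        tally[candidate] = tally.get(candidate, 0) + shares
    # The candidate with the largest share-weighted total wins.
    return max(tally, key=tally.get)

# Example: three schools with different share holdings.
votes = [("Kowalska", 1200), ("Nowak", 800), ("Kowalska", 500)]
print(district_winner(votes))  # Kowalska (1700 vs. 800 share-votes)
```

The point of the weighting is visible in the example: two smaller schools holding more shares together outvote one larger block of votes for the other candidate.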
Region trustees are elected for unlimited time. A trustee ends her or his service only voluntarily. Candidates are suggested by school districts. In order to become a trustee, one must own, on election day, at least ten times the average number of shares per teacher in the region. The second condition is that the candidate must have worked in the system as a teacher for at least 10 years. These conditions eliminate the situation in which some important person becomes a trustee despite being fully ignorant of the system. They also help to select people who are successful as teachers and have a remarkable record of achievement outside the network.
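The two eligibility conditions combine into a simple conjunctive check; the sketch below is illustrative, with the thresholds taken directly from the text.

```python
def eligible_for_trustee(candidate_shares, avg_shares_per_teacher_in_region,
                         years_taught_in_system):
    """Both conditions from the text must hold on election day:
    at least ten times the regional average number of shares per teacher,
    and at least 10 years of work as a teacher in the system."""
    has_enough_shares = candidate_shares >= 10 * avg_shares_per_teacher_in_region
    has_enough_service = years_taught_in_system >= 10
    return has_enough_shares and has_enough_service

# A celebrated outsider with no teaching record in the system is excluded:
print(eligible_for_trustee(5000, 40, 0))   # False
# A long-serving teacher with ten times the regional average qualifies:
print(eligible_for_trustee(400, 40, 12))   # True
```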
There is a danger of lowering the price of passes in order to sell large numbers of them. This is easily avoided through a feedback loop: good teachers will not work for free with too many students, and such practice dies out. On the other hand, there is the issue of the system becoming a monopoly and dictating too high a price for the passes. Therefore, the system is built from more than one independent subsystem of shares and credits. There is no artificial limitation on the number of such subsystems. Teachers are free to initiate new subsystems, with the requirement that a subsystem must be adopted by at least three schools. To create a subsystem, the teachers of those schools set up a company according to the common law, and the subsystem becomes a partner in the network.
The standards of credit requirements and the educational materials are used equally across all subsystems. Shares in different subsystems are compared using the ratio of the credits obtained by students in the subsystem during the last year to the number of shares in the subsystem. Credits are universal because they are based on students satisfying objectively measured requirements, while the share value depends on the subsystem. Each subsystem share value is expressed in terms of a universal share value for the whole system. The total number of universal shares is equal to the sum of the numbers of universal shares in all subsystems.
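One way to read the comparison rule above is the following sketch. The text fixes only the comparison ratio (last year’s credits per subsystem share); normalizing each subsystem’s ratio by the system-wide ratio is my assumption, as are all names and numbers.

```python
def credits_per_share(subsystem):
    """The comparison ratio from the text: last year's credits per share."""
    return subsystem["credits_last_year"] / subsystem["shares_issued"]

def universal_value(subsystem, all_subsystems):
    """Assumed normalization: value of one subsystem share in universal
    shares is its credits-per-share ratio relative to the system-wide ratio."""
    total_credits = sum(s["credits_last_year"] for s in all_subsystems)
    total_shares = sum(s["shares_issued"] for s in all_subsystems)
    system_ratio = total_credits / total_shares
    return credits_per_share(subsystem) / system_ratio

subsystems = [
    {"name": "A", "credits_last_year": 9000, "shares_issued": 3000},
    {"name": "B", "credits_last_year": 4000, "shares_issued": 2000},
]
# System ratio = 13000/5000 = 2.6; A's ratio = 3.0, B's = 2.0.
print(round(universal_value(subsystems[0], subsystems), 3))  # 1.154
print(round(universal_value(subsystems[1], subsystems), 3))  # 0.769
```

Under this reading, a share in the subsystem whose students earn more credits per share is worth more than one universal share, and conversely.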
All subsystems are free to function without state or community support for students (taxes), but if they use public money as income to pay dividends, they have to comply with the general share evaluation scheme in which the share value is defined by students’ test results. Testing schemes are continuously redesigned to satisfy the changing requirements of the job market and the network. The testing practice is based on verifying thinking skills, understanding of subjects and the ability to learn new things, in mandatory agreement with the overarching goal of education. This condition is satisfied because the true value of the network to society is precisely the supply of contexts of productive learning. In other words, the network tests whether students acquired what they intended to purchase when paying for the passes.
The common share evaluation system is needed by all subsystems. The subsystems want to demonstrate the effectiveness and quality of the education they offer. They want to attract the best students. Subsystems compete by keeping the number of credits issued per share as high as possible, to keep the share value high. Standards are not reduced because the demands for credits are universal and kept in check by all constituents. The network is also constantly evaluated from the point of view of the job market. The market economy cures negative features in the network as it does for itself outside the network. The bonding scheme is based on the market’s competition for the best students and its reluctance to hire poorly educated alumni. In turn, nobody is interested in educating students whom nobody wants to hire. Therefore, the schools keep records of their alumni’s careers. There is a whole area of studies on measuring alumni careers for meaningful comparisons.
**5. SOURCES OF INCOME AND FINANCE MANAGEMENT**
The system collects money for teaching students directly from the students’ budgets. The budget money is provided, for example, by parents for the education of their children, by a local community to pay for the education of its teachers, by state and international organizations for the training of professionals, or by foundations through grants for scientists doing research in the system. Students buy passes to earn credits.
It is essential that students administer the process of purchasing passes. This way they learn the cost of their education and how to avoid wasting their money. The purchase of a pass to earn every single credit is done by a student separately, in an on-line process of learning how to manage her or his education program. The youngest students are helped in this respect by their parents or guardians. There is a scheme of reducing the responsibility of parents or guardians as students grow up. There is a system of consultants to students and data banks for their use. Every school has its own data bank with a network connection to help students make choices.
Students have individual budgets for their education. All students have equal access to the minimal budget for purchasing a basic set of passes. The basic set defines the level of education guaranteed to every student in the system. Students need to raise, borrow or earn more money to cover the costs of passes to additional credits of their choice for their careers. Students demonstrate their records to obtain such funds.
The budgets for students come from states (taxes), from public or private and national or international organizations of all kinds interested in educating students of all kinds, and from students themselves. But the dominant source is direct payment by parents or employers. In a fully operating system, already blended with the structures of whole societies, parents and employers may temporarily deduct the transfers made to the budgets of the students they support. The deductions are allowed for as long as a student needs to earn credits or until the time foreseen for earning the credits expires.
However, in the current situation such a scheme cannot be implemented directly, since taxes are paid today according to schedules that have nothing to do with how the tax money will be spent. In other words, we pay taxes on what we earn and have no direct way to say how we want our money to be spent. There is no entry on contemporary tax forms concerning what we wish to provide our money for, except, for example, church taxes in some countries. Hopefully, on future tax forms one will find entries for education. Until then, what I can offer in practice today is merely a seed, an initial business plan for the first steps toward making the educational network belong to teachers and students. This is described in the sections below.
The total amount of money collected by the network for a fiscal academic year (or semester) is divided by the number of shares issued to date, and the resulting number defines the total dividend per share. Owners of shares decide how to use the money they receive through their shares. They form organizations in the system to use their money. For example, each member of a school brings a definite amount of money for use by the school. The sum of all members’ money has to cover all expenses of the school, including the owners’ income. A good teacher with many shares thus becomes a great asset to the school. Her or his opinion about teaching practice cannot be neglected. Such teachers keep the schools going.
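The dividend rule above is plain arithmetic; the sketch below makes it explicit. The monetary figures are arbitrary illustrations, not values from the text.

```python
def dividend_per_share(total_income, shares_issued):
    """Total money collected in a fiscal period, divided by all shares
    issued to date, defines the dividend paid on each share."""
    return total_income / shares_issued

def member_income(member_shares, total_income, shares_issued):
    """What one member brings to her or his school through owned shares."""
    return member_shares * dividend_per_share(total_income, shares_issued)

# A network collecting 2,000,000 in a semester, with 50,000 shares issued:
print(dividend_per_share(2_000_000, 50_000))  # 40.0 per share
# A teacher holding 300 shares brings 12,000 to the school's budget:
print(member_income(300, 2_000_000, 50_000))  # 12000.0
```

This also shows why a teacher with many shares is an asset to a school: the school’s budget is literally the sum of such member incomes.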
Teachers form schools voluntarily. If they do not form schools, their shares will lose value: no single teacher is able to offer a comprehensive education. The same motivates the formation and existence of districts and regions.
A highly sophisticated system of collecting payments, share accounts and copyrights is in operation. But the system rules are simple, published and easily available. They protect the rights and intellectual property of teachers and students, warrant the creation of contexts of productive learning and serve the overarching goal of education.
Financial management is not delegated to outside companies. Outside auditors are hired, but mainly to help eliminate errors and to communicate with the world outside the network.
The network employs its own highest-quality accountants, who are also teachers. The accountants are deeply aware of the network principles and serve the network’s educational agenda well. They teach the network accounting to their less experienced colleagues and to students. Much of the work is done by students as part of their credit earning in accounting and related subjects. A similar delegation of work and responsibility is practiced in all administrative functions.
Individual shares are issued by the subsystems according to their needs, and the number of shares issued by each subsystem is decided within the subsystem. These shares can be valued in terms of universal shares. The number of universal shares in the whole network is an abstract number which roughly equals the number of individual shares and results from the accounting rules. Thus, an individual share brings dividends in an amount comparable to a universal share. You can check the dividend one gets for a single share in some school and form an idea of the level of education the school offers.
Changes in the network accounting are induced by majority vote, on the recommendation of district representatives, with the approving opinion of the region trustees. The Society of Professors of Education is obliged to help in the assessment of proposed changes in the accounting rules used by regions.
No monopoly can emerge because many subsystems exist and compete to win their share in serving the educational needs of society.
There also exist ways of giving money to the network while specifying that the money is intended for a subsystem or other unit for some purpose. The money is then used by issuing a corresponding number of new shares and distributing those shares in agreement with the intention of the donor.
**6. ASSESSMENT AND RECOGNITION**
The bottom line in assessing the quality of work and the productivity of teachers and students is the number of credits they produce. Therefore, the credit system is a subject of continuous research, redesign, application and feedback. [@LBR] A credit is guaranteed in gold: it certifies the quality of the reasoning skills and knowledge of the student who earned it.
Award-winning teachers obtain one-time money prizes or a number of shares. Dividends from prize shares are collectible over different periods of time; the most prestigious awards pay over the longest periods.
If the number of shares grows with time while the number of students does not, the dividend per share becomes smaller, and one has to earn more shares to keep one’s individual income rising.
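This dilution effect can be shown numerically; the income figure below is an arbitrary illustration held fixed to model a non-growing student body.

```python
def dividend(total_income, shares):
    """Dividend per share for a fiscal period."""
    return total_income / shares

income = 1_000_000  # network income, fixed: the student body is not growing
for shares in (40_000, 50_000, 60_000):
    print(shares, round(dividend(income, shares), 2))
# 40000 25.0
# 50000 20.0
# 60000 16.67
```

With fixed income, every newly issued share lowers the payout on all existing shares, which is exactly the pressure the text describes: a member must keep earning shares just to keep her or his income rising.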
No measure other than students’ performance on the universal tests is used in assessing the effectiveness of teaching. But owners of copyrights for teaching materials and patents for teaching techniques collect royalties on their use.
Since there are differences between districts in students’ readiness to learn, the districts must specialize in different levels of education. Whenever an opportunity arises for a school district to move to a higher level, the opportunity is taken, because the job market prefers to employ people with a higher number of credits. The credit system is such that gaining merely basic skills and knowledge cannot bring a high number of credits. A large number of credits can be obtained only by a student who learns many skills and subjects very well, and the achieved level is verified thoroughly and reliably.
The protection of teachers’ rights to benefit from the sale of their teaching materials is secured by the general patent and copyright laws. Recent examples of such laws are those prohibiting the unauthorized duplication of video tapes or compact discs and fighting piracy across the world.
The highest recognition available to teachers and students is based on the leadership positions they win in their own network units. For example, a team distinguishes its leader, a school its principal, and academies their leaders.
**7. ARCHIVES**
The system keeps an archive, in many copies and in a flexible network of easy access. The archive is sophisticated in its purpose, structure and availability. Highly skilled librarians manage the archives and make sure no member of the system is denied access to data. Contemporary electronic libraries, such as the Los Alamos National Laboratory electronic preprint library [@LANL], can serve as a prototype of the network archive.
The archive plays the role of a patent office library for educational materials, a copyright guard (issuing single authorized copies), a source of information, a ground for longitudinal studies (independent of the studies conducted by the network subsystems) and a forum for research and discussion on educational matters, including the performance of the system itself.
However, the archive is not able or allowed to become a publisher or distributor of academic journals. The publication of journals is reserved to academies in order to secure the journals’ high quality through peer review. Still, the archive is indispensable, since it provides information the publishers cannot, such as access to publications from different publishers.
**8. PREPARATION, DESIGN, LAUNCH, FEEDBACK AND REDESIGN**
The leading idea of the network is that teachers and students can build a healthy and rapidly evolving educational network if they start doing it step by step on a sound economic basis. In the envisioned system, a group of interested teachers starts by earning funds for opening a small school.
Thus, the first step is that a group of teachers decides they want to create their own school. The group then gets in touch with a local network subsystem to learn what one can do to begin with, and how. The local subsystem delegates a specialist who helps the group identify its goals. They draw up the first draft of their vision, mission and business plan statements to present to the subsystem they want to join. The group’s first investment is time and work on its own education in the already existing network. This education allows the group to start building a plan for creating a new school and beyond. More about this in Section 10.
The processes of design, launch, collecting and analyzing data, and redesign gradually become a habit of the group. [@LBR] Once they succeed in setting a school in operation, they begin to build partnerships with other schools of the subsystem and learn how the network works. The feedback from the network is essential for the new school’s development.
A mature school participates in the network operation without limitations. Schools of distinguished quality benefit from serving less developed schools by sharing their expertise. For example, an experienced teacher can teach a class of colleagues how to manage their time at school more effectively and, as a result, have more time available for personal growth. [@Covey] Where does the time come from? The skillful teacher knows how to help students learn on their own, and how to organize their work so that older or more experienced students help younger or less experienced ones. The teacher knows how to set up the teams’ work so that the teacher has plenty of time to think about the things most interesting to her or him. Teams of students can easily do a lot of work which otherwise overwhelms overworked and over-stressed teachers.
**9. COMMUNITY AND STATE SUPPORT**
Schools of the network are so effective that they are eagerly welcomed in local communities. It is worthwhile for the community and the state to invest in the learning processes kept alive by the schools of the network, since these processes educate sophisticated alumni. It is essential to understand that the schools never have a problem with getting local support, because they grow out of local initiatives. They also never lose state support, because no state is going to risk the reputation of having no interest in the best possible education of its citizens. Once a school is formed, it operates for as long as its share value is reasonable. The school grows and brings higher income to teachers when its share value grows.
Communities are proud of the quality of their schools and press local governments to execute a productive educational policy.
**10. SEEDS AND TIMING**
The envisioned network of schools owned by teachers and students is built in analogy to living organisms. Life begins in small seeds, not big-scale projects. Four elements of the analogy deserve mention.
- A single member of the network, a teacher or a student, grows from an isolated individual of limited horizons into a member of a learning community with broad horizons and the freedom to make choices. Thus, the seeds of the network come from the personal learning and growth of its members.
- A school is born and advances through levels of professional efficiency. A new node of the network emerges and supplies its strength to the whole structure, like a leaf or root of a tree.
- The whole network grows and improves its services to society. The small size of an initial stage does not prevent development into an impressive structure, just as the size of a sequoia seed does not exclude a giant forest in the future.
- The core ideas evolve from embryos, such as this article, into mature driving ideas for large systems, if the ideas prove helpful in practice. The educational network is a seed for bigger changes in our global civilization.
The small size of the seeds is important; it is not accidental, and it does not result solely from financial limitations. The small size of seeds is the condition which eliminates huge errors. The hardships of life teach members of the network how to go about developing their own schools. They are helped by the network in a number of essential ways characteristic of productive teaching, but a school must learn how to grow and become strong. Only self-consistent and well-adapting structures survive. Thus, even if the network could afford to finance large-scale projects, it would not do so lightly, without careful studies.
It will take a long time before a reasonably mature version of the envisioned network can become a reality and show its ability to improve and survive on the basis of the effective education of its members; 50 to 100 years is not a bad guess. Before that happens, contemporary groups of teachers interested in building such a network must start with other enterprises in mind than creating the whole new educational system at once. The seeds must be sown differently.
The point is the following. The existing educational systems have long traditions. There exists no counterexample of comparable magnitude that could substantiate claims that one can do better. It is pointless to quarrel about what is good or bad and engage in easily created wars of opinion. Moreover, it would be silly and immature on anybody’s part to claim knowledge of how to build a system of education better than the existing ones to the extent that, once the solution is adopted from top to bottom, no problem will arise. The principles of life do not favor concepts such as Frankenstein’s. Therefore, the network concept stays away from such ideas.
In contrast, teachers and students undertaking action to address their basic needs to learn and grow should not be met with strong opposition. Opposing such a movement cannot be supported by reason, and any such opposition would contradict the purpose of education.
The timing idea is that once a growing, self-organizing network of teachers and students begins to practice meaningful education, it will be easier to found, fund and find (3f) exemplary schools where the contexts of productive learning [@Sarason] and learning by inquiry [@McDermott] are bread and butter.
Teachers need to prepare themselves to take the initiative much earlier. One place to start is after-school or after-work activities for youth and adults in the local community. [@Ekiel-Jezewska] The author knows that physical phenomena such as electric currents flowing in a circuit of batteries, bulbs, switches and wires, or the daily movement of a gnomon’s shadow on a sundial, provide opportunities for teaching how powerful learning by inquiry is. Understanding the solar system or the laws of electricity on the basis of conscious inquiry induces deep changes in learning habits. To get going, a team of teachers needs to see this happen to themselves and their students. Then they need to repeat the success working with new people. For example, one can teach grandparents how to work with their grandchildren. Another viable program is a summer vacation or holiday camp for youth or adults. [@Ekiel-Jezewska] It is essential that the activities be conducted by experienced people using materials of high quality, at least as good as “Physics by Inquiry” [@McDermott]. Teachers engaging in such initiatives become active learners of the subject matter. More importantly, they begin to learn how they can become professionals. Teaching and learning according to the principles of scientific inquiry are the cornerstone processes in the network development, and one has to begin there. The same principles are then available for application in teaching and learning in other areas without limitation.
The initial studies must be economically sound, with direct collection of money from parents or employers. In this way the first self-organized learning teams of teachers may emerge; otherwise they burn out. The teams learn what is involved in the enterprise. For example, a small company is subject to many laws teachers do not learn about in college. [@Markowski] The small-scale operation is an indispensable source of knowledge for teachers about what they can accomplish. Only those who know their trade can build a school of the network.
The key characteristic of the contemporary situation in educational systems is the lack of a shared understanding of the goal of education. The seed activity must focus on building a shared practical vision of education among the team members. The remaining paragraphs in this section describe the first seed activity, which the founders of the network should seriously consider.
My claim is that the notions of the context of productive learning, learning by inquiry and the overarching goal of education are not commonly understood. I have noticed in scientific and educational institutions that the bottom line of research and learning is almost never put on the table as worth investigating. I have not heard people asking seriously what and how we really want to learn, and why. Such issues go beyond the common discussion; they seem obvious. I claim they are too damn difficult to understand, so that anybody worrying about their position cannot seriously admit ignorance in this area. The ignorance is covered by a tendency to push in the direction which is known and safe to the individual.
The next logical step is to ask: do we have a textbook, a module analogous to Ref. [@McDermott], and a program which would teach what the essence of productive learning by inquiry is? My answer is: no. Moreover, the non-existence of such a module for learning how to teach by inquiry clearly shows that our reform efforts are weak and missing the key innovative element. I suggest that the teams interested in building the network of schools owned by teachers and students start by creating their own versions of such modules. The modules should then be refined in the process of educating new people who join the founding teams. A new school founding team should start by working on their school cookbook, and I claim they should start by learning themselves what food is.
Let me give another analogy which is useful here: the educational system and the air transportation system. [@WilsonDaviss] In air transportation, the notion of flying was clear to everybody, since they saw the smallest bird in action. From Icarus to the Boeing 747, all participants in whatever was being done knew beyond any doubt what they had to demonstrate or see in order to say they fly. In the educational system, no analogous notion exists. The notion of flight in education could be the context of productive learning by inquiry, but it seems to be top secret now or, more probably, the notion does not even exist in most educators’ and politicians’ minds. I do not blame anybody. The notion is very difficult to understand. You have to combine ideas from the most advanced sciences and arts, climb high on top of them and see far enough to come to grips with the notion of flying in education and to try your own wings. Talking about the design of the wing curvature or the management of the airplane factory is premature when the notion of flying is not known. A school founding team needs to understand what they mean by flying in education before they can start working on their propeller.
Moreover, the majority of complaints about the performance of the educational systems are not serious. [@Fullan] Namely, the relevant people do not understand the crisis to the extent of saying: I am not doing the right thing now, I am not teaching effectively, I do not know what the context of productive learning by inquiry is, I am unable to achieve the overarching goal of education for my students, and I need, in the first place, to start thinking about what I am doing. In other words, not only is it unclear what flying is, but it is also not true that people realize they do not know what flying in education is. On the contrary, most educators are convinced they know something well enough to teach others. In fact, it often becomes comparable to teaching the Little Mermaid how to comb her hair with a fork. [@Disney] The way one shows somebody that something is not the way that person thinks is the following: one asks the person to make a verifiable prediction of the real status of the matter in question, and then one verifies the prediction together with the person. If the verification shows the person’s prediction was false, the person is shocked, becomes curious, starts thinking and begins to listen. This is teaching by inquiry. To start learning by inquiry, you have to feel safe asking questions about what bothers you.
Here is my punch line. There is a science of incomparable clarity and focus in learning by inquiry: physics. Basic physics is the most transparent source of understanding what it means to learn effectively. And we already have well-tested material for studying the notion of learning by inquiry in physics: Ref. [@McDermott]. The first thing to do, for a team of teachers who want to understand what they are truly after if they want to join the living network of schools, is to study electric circuits or optics in a way similar to Ref. [@McDermott]. Then comes learning about what “Discovery” accomplished in Ohio and where it failed. [@Discovery] At this point one begins to understand the value of the context of productive learning in physics and how hard it is to create it. Then, one needs to ask what “Reading Recovery”, extending from New Zealand to the U.S.A., is and where it is going. [@Clay] The context of productive learning by inquiry comes more clearly into view at this stage. The next step is to talk to Learning by Redesign [@LBR] and learn by inquiry, in the context of your own path, about past reforms and the status of the science of change. Finally, once you become the owner of a clear notion of the context of productive learning by inquiry, you can start your independent thinking about the living network of schools owned by teachers and students. If this outline frightens you, forget the network idea.
**11. INITIAL BUSINESS PROPOSAL**
A faculty member in a university who is prepared to do so might help a few colleagues study the notion of productive learning by inquiry and understand the benefits of using the technique. A group of faculty members could set up a program for learning by inquiry in the areas they see fit, at all required levels and for all the people they want to talk to. This activity could produce the first shared notion of the purpose of education in the faculty team.
Three school teachers and a university faculty member could spend a semester preparing a one-semester course on electricity or optics by inquiry. They would have to provide their own money for equipment and their own time for the work and learning. In Poland, the investment could presumably be at the level of about 150 z[ł]{} per person per month. That makes 2,400 z[ł]{} for 4 people in a 4-month semester, with about 2 hours of study sessions and 2 hours of preparation a week. The team would learn the principles of learning and teaching by inquiry.
In a second semester, the same team could deliver a course for youth, adults, or both, to about 30 paying clients. Each client paying 50 z[ł]{} a month makes 1,500 z[ł]{} per month and about 6,000 z[ł]{} in four months. Divided by 4, this gives 1,500 z[ł]{} per teacher for all expenses of the course, or 375 z[ł]{} per month. Suppose all the money is spent on equipment and other expenses; one can buy a lot of nice stuff for this money. The next semester has much lower spending because a lot of the equipment is already in place, the time required is shorter since there is accumulated experience, and the price may even go up if the first trial creates demand and the course is considerably improved. Suppose the spending is slightly higher than in the first semester of preparation, say 175 z[ł]{} instead of 150 z[ł]{}. Then the next semester brings 200 z[ł]{} per month of income per teacher. This means the income compensates for the initial investment within one or two years.
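For concreteness, the arithmetic above can be restated as a short script. The figures are exactly the ones quoted in the text; this is only an illustrative check, not a worked business plan.

```python
# Illustrative restatement of the business arithmetic quoted in the text.
# All figures are in Polish zloty (zl) and come directly from the proposal.

PEOPLE = 4            # 3 school teachers + 1 university faculty member
PREP_MONTHS = 4       # one preparation semester
PREP_COST_PER_PERSON_PER_MONTH = 150

# First (preparation) semester: total out-of-pocket investment
prep_total = PEOPLE * PREP_MONTHS * PREP_COST_PER_PERSON_PER_MONTH

# Second semester: a paid course for about 30 clients
CLIENTS = 30
FEE_PER_CLIENT_PER_MONTH = 50
revenue_per_month = CLIENTS * FEE_PER_CLIENT_PER_MONTH
per_teacher_per_month = revenue_per_month / PEOPLE

# Later semesters: spending rises slightly to 175 zl/month per teacher,
# leaving the quoted 200 zl/month of net income per teacher.
LATER_SPENDING_PER_MONTH = 175
net_income_per_teacher = per_teacher_per_month - LATER_SPENDING_PER_MONTH

print(f"preparation cost, total:        {prep_total} zl")        # 2400 zl
print(f"course revenue per month:       {revenue_per_month} zl") # 1500 zl
print(f"per teacher per month:          {per_teacher_per_month:.0f} zl")  # 375 zl
print(f"net income per teacher/month:   {net_income_per_teacher:.0f} zl") # 200 zl
```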
The major product of the first few semesters of the team work is a set of materials which allow a skillful teacher to engage many students in a highly productive learning process. Such material can be published and sold in the future in many copies. But what is created goes far beyond that - teachers begin to act using their new skills and their own process of continuous learning and self-development takes off.
The question is how to move forward with the idea of founding a new school. One team is not sufficient. Organization of summer camps and new courses should lead to a larger group of teachers, in association with university teachers, who can conceive a mission, a vision and a plan to found the school. A breathtaking variety of problems need solutions to make this work. Business strategies are just a small part of it (for example, see Ref. [@Markowski]). But it is hard to imagine anybody or anything will be able to stop the development. On the contrary, with such a grassroots movement and solid preparation one can expect many educational foundations to be ready to support the plan. Publishers and distributors of the teaching materials would certainly be interested in promoting the materials and achieving the widest possible use of the largest possible number of copies. They would sign contracts for producing such materials.
This time my punch line is here: university faculty trained in teaching by inquiry could start teaching elementary science by inquiry to teachers whose participation in the program would be paid for by employers (schools, communities, institutions engaging in the education of teachers, foundations). The great benefit to the university faculties is that they suddenly become obviously and undeniably useful to the whole society (remember, students have parents, and it is the parents who keep the country going), irreplaceable and in high demand for doing what they are well prepared to do subject-wise, what they enjoy doing by nature, and what they can eagerly do to raise their income - without this work interfering with their research as much, and as stressfully, as teaching hordes of under-prepared students ineffectively interferes with their underpaid research activities at the university. The long-term benefits to scientists are then obvious and unlimited. Most importantly, however, the faculty members begin to feel free to learn completely new material and start thinking in new dimensions.
There is an important aspect to mention in such an approach: those who learn by inquiry are ready to tackle hard problems. They will spontaneously advance their knowledge and understanding. They will be driven by curiosity. They will create the culture able to sustain the movement towards the network of schools. And they will grow personally with the development of the network. The network will serve more students and bring respect and enthusiasm to leaders of effective learning. The network will continuously need its top experts to keep it on track and going. Everybody will have to learn at new levels about new difficult matters and how to solve problems efficiently, bringing splendor to the educational enterprise.
New teams will be trained to become able to offer training to many new teachers. The university faculty sharing the vision will be able to help in building the network of schools owned by teachers and students and the schools will produce students ready to study at the universities. Training of new teams will be based on a set of meta-teaching materials on the subject matter and methods of founding new schools and developing the network. People will not be merely hired to do all these things - they will own the network and make it live up to their expectations. The network will become a good client of the university faculties.
.3in [**12. COMMUNICATION**]{} .1in
The key roles of communication are the exchange of information within the network and informing society about the network's ability to teach, improve itself and grow. Internet-like structures may help, but they will be only tools in the processes of importance.
The most important communication process in the network is the transfer (describing, explaining, selling and buying) of teaching materials combined with courses on teaching and learning how to use them offered by specially trained users.
The most important information sent to society is the current dividend per share and the total number of existing universal shares. The published numbers also include the average number of shares earned by teachers and students in the whole system. In addition, tables of subsystem averages are published with an explanation of their meaning. The tables help students, parents, foundations, etc. in judging the subsystems' performance. Next, the cost of educating a single student and the profit from educating a single student are published with an explanation of how both are calculated.
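As a rough sketch of how the published figures described above could be derived from a handful of system-wide totals, consider the following; every field name and number here is a hypothetical illustration, not part of the text's proposal.

```python
from dataclasses import dataclass

@dataclass
class NetworkReport:
    """Hypothetical aggregate figures a network might publish each period."""
    total_profit: float          # profit available for distribution to shareholders
    total_shares: int            # total number of existing universal shares
    total_shareholders: int      # teachers and students holding shares
    total_students: int          # students currently educated in the network
    total_education_cost: float  # total cost of educating all students

    def dividend_per_share(self) -> float:
        return self.total_profit / self.total_shares

    def average_shares_per_holder(self) -> float:
        return self.total_shares / self.total_shareholders

    def cost_per_student(self) -> float:
        return self.total_education_cost / self.total_students

# Example with made-up numbers:
report = NetworkReport(total_profit=50_000, total_shares=10_000,
                       total_shareholders=500, total_students=400,
                       total_education_cost=200_000)
print(report.dividend_per_share())        # 5.0
print(report.average_shares_per_holder()) # 20.0
print(report.cost_per_student())          # 500.0
```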
.3in [**13. CONCLUSION**]{} .1in
There is one consequence of the educational system belonging to teachers and students at all levels that has not been fully described yet and must be reiterated here. Namely, such a network could naturally support basic research. Moreover, it could do so without asking for immediate industrial applications. The new motivation comes from the fact that an educational system will not be truly useful, indispensable and always worth investment unless it becomes an independent source of enlightenment. New discoveries could first apply in driving education before being used in industry. Today, we learn at schools what happens in the world. In the new system, it would be natural for the world to eagerly learn what is being discussed at schools - an unthinkable situation today.
While today no economic competition in scientific progress between educational systems and industries is possible, the new system could engage in such competition. The engagement might have incredible consequences for the speed of developing our civilization. Imagine young people learning about the current status of our knowledge and understanding of the real world first hand and searching for solutions to problems without bias of employment and other commitments.
I also need to explain the opportunity the network creates for the contemporary university. The unique opportunity lies in the leadership role the university could try to attain. But we need to remember that the contemporary university is not an unquestionable institution that fulfills its mission and may securely keep going as it has so far. [@Hock]
The well known problem the university faces is that freshman students are not sufficiently educated at schools to undertake studies of modern science. The university becomes a place to teach elementary subjects because schools cannot fulfill their mission. Schools are supposed to teach so much so quickly that they are unable to help students learn with understanding. Understanding is replaced with a mindless drill of memory. Students become alumni who do not know how to learn new subjects. Worse, they are trained in faking knowledge and understanding. Thus, the contemporary university must face the highly probable possibility of becoming a high school of the 21st century and never gain the leadership position it might dream about or believe in attaining.
The main point is not merely that university teachers would certainly enjoy having better-prepared students as entrants (better students mean better chances of prosperity for their professors). The point is that the university may become obsolete and useless, no matter how good the science it supports, if students are not able to study there in sufficient numbers. The above statement is not guaranteed to motivate a revolution in educational paradigms.[@Kuhn] But it means that universities have to help schools change their practice. I give the following example.
The university works as a hierarchy of teaching and learning staff, from students to graduate students to teaching assistants to postdocs to the levels of professorship, plus administration. Climbing the ladder is related to achievements in science, teaching, building research teams, personal growth, gaining respect and winning intellectual freedom without limits. Simpler tasks are delegated down the hierarchy. The most advanced processes of study and teaching at the university are in the hands of the most talented and most educated people. How is the school organized? There are only teachers and students, and administration.
The questions to ask at the university are the following. How would you explain the utility of your system to school teachers if they asked? How could one implement similar principles at the school level? Why do teachers not come to ask how to do that with their students? At the same time one could ask the following questions at school. Why don’t you try to create a structure like in a university? Why don’t you talk about it with the university people? Are they not helpful or plainly ignorant? The university should consider the opportunity of helping to build and lead a living network of schools owned by teachers and students. It could make a difference.
.3in [**Acknowledgment**]{} .1in
The author would like to thank Ken Wilson for many stimulating discussions and comments. Multiple discussions with Seymour Sarason are gratefully acknowledged. The author benefited from meetings with Charlie Ericson, Constance Barsky and Ben Daviss. He also wishes to thank Maria Ekiel-Je[ż]{}ewska for discussions and collaboration on teaching experiments. .3in
[99]{}
| 2023-08-21T01:26:35.434368 | https://example.com/article/5491 |
Colombia facing violence despite Farc deal, ICRC says
Published 9 March 2017
Image caption (EPA): An estimated 7,000 Farc rebels have gathered in transitory camps to lay down their weapons and rejoin civilian life
Thousands of people across Colombia are still falling victim to rape, killings and torture despite a peace deal with Farc rebels, the International Committee of the Red Cross says.
The ICRC urged the government to take stronger action to reduce violence.
The Red Cross says the peace deal with the Farc is working.
But it warns that it will take decades for Colombia to deal with the direct and indirect consequences of the conflict, including urban violence.
'A long way to go'
The government and the Farc (Revolutionary Armed Forces of Colombia) signed a peace agreement in November, to put an end to more than 50 years of conflict.
In a report, the Red Cross says violence decreased in 2016 as a result of the peace process.
The signing of the deal was preceded by a ceasefire and a number of confidence-building measures.
But the ICRC calls on the government to do more to demobilise Farc child soldiers, clear landmines and tackle urban violence.
It points out that three other rebel groups remain active in the country.
Image caption (EPA): The ELN peace talks were launched in Ecuador on 7 February
"Building a country at peace requires everyone to make an effort and can take decades," the ICRC's delegation head in Colombia, Christoph Harnisch, warned.
"The tragedy of missing loved ones, the fear of unexploded ordnance, armed violence in urban settings, threats, the restrictions on the movements of whole communities in areas controlled by armed groups - these all point to there still being a long way to go in Colombia," he added.
Some 86,000 people are missing because of the conflict, says Mr Harnisch.
The government says 260,000 people have died and 6.9 million people have been displaced since 1964, when the Farc began its uprising.
Last month President Juan Manuel Santos's government began formal peace talks with the country's second-largest rebel group, the ELN (National Liberation Army).
Two other smaller groups are still engaged in armed struggle against Colombian forces: the Gaitanista Self-Defence Forces of Colombia (AGC) and the People's Liberation Army (EPL). | 2023-09-05T01:26:35.434368 | https://example.com/article/4639 |
Sabah Democratic Party
The Sabah Democratic Party (abbreviation: PDS) was a political party based in Sabah, Malaysia. It was an ethnically based party striving to voice the rights and advance the development of the Kadazan-Dusun-Murut (KDM) populations of Sabah and the Orang Asli of Peninsular Malaysia.
History
PDS started as the Sabah Democratic Party, or Parti Demokratik Sabah, which was founded by Bernard Dompok and other disgruntled leaders who split from the United Sabah Party, or Parti Bersatu Sabah (PBS), soon after the 1994 Sabah state election to join the Barisan Nasional (BN) coalition. PBS had won a majority in the Sabah State Legislative Assembly, but the defections allowed BN to form government. Part of the enticement offered by BN to the defectors was the promise of a rotating Chief Minister of Sabah post, which Dompok held from 1998 to 1999.
PDS was renamed the United Pasokmomogun Kadazandusun Murut Organisation (UPKO) on 8 August 1999, taking the same UPKO acronym as the defunct original United Pasokmomogun Kadazan Organisation, which was formed and dissolved in the 1960s. The party was re-branded again as the United Progressive People of Kinabalu Organisation on 24 November 2019, maintaining its original UPKO acronym and opening party membership to races other than KDM.
See also
Politics of Malaysia
List of political parties in Malaysia
United Pasokmomogun Kadazan Organisation (UPKO) (Old)
United Pasokmomogun Kadazandusun Murut Organisation (UPKO) (New)
United Progressive People of Kinabalu Organisation (UPKO) (Re-branded)
External links
Official website
References
Category:Defunct political parties in Malaysia
Category:Political parties in Sabah
Category:1994 establishments in Malaysia
Category:1999 disestablishments in Malaysia
Category:Political parties established in 1994
Category:Political parties disestablished in 1999
Category:Ethnic political parties
Category:Indigenist political parties | 2024-02-20T01:26:35.434368 | https://example.com/article/3234 |
The Historic Core is an urban gem. Every time I’m feeling down about LA, I remind myself that that’s where the city first began and that it has the powerful potential to get both locals and non-locals alike to reimagine what DTLA can be moving forward. It represents the kind of high-quality urban template that simply can’t be recreated from scratch in today’s day and age.
On a side note, I always find it amusing how much tinier the Predator 2 building (Eastern Columbia) and the Die Hard building look in real life (yeah, I know the latter is not included in this photo thread, for the obvious reason of being located elsewhere).
But damn, every time I make it into downtown Los Angeles I have to spend a few minutes gawking like an idiot at the Eastern Columbia building, as it's such a fine specimen of art deco.
Nice photos! I love Downtown LA, and how there are so many different parts to it, each radically different from each other. The star is, of course, the Historic Core. I know the area is rapidly changing, and it's exciting to see, but it is currently so filthy and smelly. I went on an LA Conservancy tour of the Broadway theaters last weekend, and basically the entire stretch of Broadway from Pershing Sq. to Olympic reeked of piss. Saw 2 (!) piles of human shit, trash, dirty and sticky sidewalks, etc. I can overlook the abandonment of the upper floors, and the tacky stores jammed into the street level spaces of the theaters, but when you add the smells and filth, it really leaves a bad impression. I'm a local and used to it, but several people on the tour were from other states (couple from the UK, too) and they all were commenting on the dirtiness of downtown. | 2024-01-29T01:26:35.434368 | https://example.com/article/5473 |
Quintus Fabius Maximus Allobrogicus
Quintus Fabius Maximus Allobrogicus, was a Roman statesman and general who was elected consul in 121 BC. During his consulship he fought against the Arverni and the Allobroges whom he defeated in 120 BC. He was awarded a triumph and the agnomen Allobrogicus for his victory over the Gauls.
Career
Fabius Maximus Allobrogicus was the son of Quintus Fabius Maximus Aemilianus, the Roman consul of 145 BC, and a member of the patrician gens Fabia. His first appearance was during the elections for quaestor in 134 BC; he was recommended to the voters as a candidate by his biological uncle Scipio Aemilianus, and after Allobrogicus was elected, Scipio took him as his quaestor to Hispania Citerior where they fought in the Second Numantine War. While there, Allobrogicus was placed in charge of 4,000 volunteers.
By 124 BC, he had been elected to the office of praetor, since in 123 BC he was appointed propraetor (governor) of one of the Hispanias (Citerior or Ulterior). Whilst there, he was censured by the Senate, following a motion by Gaius Gracchus, for extorting gifts of grain from a Spanish town. Then in 121 BC, he was elected consul alongside Lucius Opimius. During his consulship, he campaigned in Gallia Transalpina (in the modern-day Auvergne and Rhône-Alpes regions) with Gnaeus Domitius Ahenobarbus against the Gallic tribes of the Allobroges and Arverni. After his consulship expired, he replaced Domitius Ahenobarbus as proconsul in Gaul (120 BC), during which time he completed the defeat of the Allobroges and Arverni. For this he was awarded the honour of a triumph and given the agnomen Allobrogicus. The triumph he held was famous for its spectacle, including the captive Arvernian king Bituitus in his silver battle armor. From the plunder of the Auvergne, Fabius erected the Fornix Fabianus (121 BC) crossing the Via Sacra at the Forum Romanum, placing a statue of himself on top of the arch.
In 113 BC, he may have been the Quintus Fabius who was the leader of an embassy sent to Crete to help end some internal disputes between various cities on the island. Then in 108 BC, either he or Quintus Fabius Maximus Eburnus was appointed to the office of Censor.
He was a known orator and a man of letters. Upon the death of his blood uncle Scipio Aemilianus in 129 BC, Fabius presented a banquet to the citizenry of Rome and pronounced the funeral oration of the deceased general. He had at least one son, also named Quintus Fabius Maximus Allobrogicus, who was notorious for his vices. His grandson was Quintus Fabius Maximus.
Notes
References
Sources
Broughton, T. Robert S., The Magistrates of the Roman Republic, Vol I (1952)
Smith, William, Dictionary of Greek and Roman Biography and Mythology, Vol II (1867)
Who's Who in Military History by John Keegan and Andrew Wheatcroft.
Category:Roman Republican consuls
Category:2nd-century BC Romans
Category:Senators of the Roman Republic
Category:Roman governors of Hispania
Maximus Allobrogicus, Quintus
Category:Year of birth unknown
Category:Year of death unknown | 2023-12-28T01:26:35.434368 | https://example.com/article/9732 |
Multidetector CT findings in gastrointestinal tract perforation that can help prediction of perforation site accurately.
To assess the accuracy, sensitivity, and specificity of multidetector computed tomography (MDCT) findings by comparing the locations of free air in the abdomen and imaging findings with the site of gastrointestinal perforation. Ninety-three patients with acute abdominal pain who visited the emergency department between January 2015 and October 2018 were included in the study. There were 59 male and 34 female patients with a mean age of 50.5 years. The site of perforation was based on surgical findings in all cases. Among specific air distributions, periportal free air and subphrenic free air were statistically significant in differentiating upper gastrointestinal tract perforation, whereas free air in the minor pelvis, right lower quadrant free air, left lower quadrant free air, and air in the mesentery were statistically significant in differentiating lower gastrointestinal tract perforation. MDCT findings may help to predict the site of gastrointestinal perforation, which could change the treatment plan.
The managing director of the International Monetary Fund has delivered one of her strongest condemnations of the protectionist policies of Donald Trump, warning that putting up barriers to trade would be a “self-inflicted wound” to an improving global economy.
Christine Lagarde used a speech in Brussels to launch a strong attack on the go-it-alone approach championed by the US president during his election battle with Hillary Clinton.
“In our hyper-connected world, national policies tend to have major spillovers across borders. We are all sitting figuratively in the same boat. Which is why we need to encourage countries to support strong international cooperation,” she said.
Speaking before the IMF’s spring meeting next week, Lagarde added that international cooperation had been vital in preventing the deep recession of 2008-09 turning into a second Great Depression.
After Trump picked strong critics of the IMF to be key members of his treasury team, Lagarde defended her organisation, saying it had helped foster the international cooperation that had underpinned a “phenomenal rise in incomes and living standards around the world”.
She added: “More recently, we worked together to ensure that the great recession did not become another Great Depression. Cooperation through a multilateral framework has benefited every country. Fostering more resilient growth therefore requires more international cooperation – not less.”
Trump has threatened to put swingeing tariffs on Chinese goods and to impose a tax on imports coming into the US, as part of an economic strategy designed to put America first.
Lagarde said cooperation was a better way of dealing with the global imbalances that had resulted in some countries, such as China and Germany, running trade surpluses while others, including the US, run deficits. This meant working together to ensure that countries observed a level playing field, including by avoiding protectionist measures.
“Restricting trade would be a self-inflicted wound that disrupts supply chains, hurts global output, and inflates the prices of production materials and consumer goods. And low-income households are hurt the most as they consume the largest part of their incomes,” she said.
The IMF will release its half-yearly health check on the global economy next week, but Lagarde hinted that the growth prospects for 2017 would be revised up.
After six years of disappointing growth, Lagarde said the global economy was gaining momentum, holding out the prospect of more jobs and higher incomes.
She pointed out that the outlook had improved across the developed world, including in Europe, which had previously been lagging behind the US. Even so, emerging and developing countries would contribute more than 75% of global GDP growth in 2017.
“At the same time, there are clear downside risks: political uncertainty, including in Europe; the sword of protectionism hanging over global trade; and tighter global financial conditions that could trigger disruptive capital outflows from emerging and developing economies,” Lagarde said.
Weak productivity remained a severe drag on strong and inclusive growth, she added, largely because of population ageing, the slowdown in trade and weak private investment since the financial crisis. | 2023-10-01T01:26:35.434368 | https://example.com/article/3028 |
Company name
getDigital.de
Overview
We are a shop for geeks and for everyone who has a strong interest in high-tech and science, or who wants to find a present for someone with such interests. Please find more details at http://www.getdigital.de.
Tell us about yourself and the people you want to target
People who see our products in a retail store. The packaging design should reflect our target group "geeks", but it is very important that everyone is attracted by it as we also want to address the mainstream customer (so some geeky references are OK, but not too much).
Requirements
Please design a packaging for our products as a vector graphic in Adobe Illustrator. The packaging must contain:
- Our logo
- A picture of the product included
- Space for the product name in English and German
- Space for a description text in English and German
- Space for a short description in English and German
- Legal information
It is important that the design can be used for the packaging of all of our products and that we can modify it ourselves. For example, it is very important that we can exchange the product picture, text, etc., and that the elements of the design are flexible enough for us to adapt them to other packaging sizes.
To help you with the design, we attached the following files:
- GetDigital_back.psd and GetDigital.psd: Our current flyer design, feel free to use the elements in this design (for example, our logo, but we also like the design in general)
- packaging.pdf: Please use this as a template for a packaging (remember to keep the design flexible)
- legalfino.ai: This information has to be included somewhere on the packaging (for example on the back); feel free to modify the size, arrangement, etc.
- product_description.txt: Sample product descriptions
- VideoKuehlschrankMangnet.jpg: Sample product picture
The winning designer has a good chance of doing further work for us (as all our winning designers so far have done...)
Animals of the Arctic: From Symbiosis to Symbols
The 9th Arctic Workshop of the University of Tartu, Estonia, May 24–25
Ülikooli 16-109
The Arctic is a region that is commonly associated with animals. It is typical for people in the south to imagine (sub)arctic inhabitants living together with polar bears and reindeer (if not with penguins). Indeed, for thousands of years, human life in the boreal regions has been dependent on animals, probably more than anywhere else in the world. As a result, human-animal relations vary from domestication to avoidance, from socialization to demonization, and from symbolization to ignoring.
In the Arctic Workshop, we propose discussing these different qualities of human-animal relationship through the notions of symbiosis and symbolic value. In biology, symbiosis (from the Greek “living together”) refers to the interaction between two organisms that are in a mutualistic, commensalistic or parasitic relationship. We believe these different aspects deserve a closer look as heuristic conceptual tools for social scientists when discussing domestication, consumption, cohabitation, transportation, diseases, and pet ownership in the Arctic.
This workshop will focus on different aspects and interpretations of the human-animal relationship in the Arctic. Our goal is to assemble a truly interdisciplinary collection of presentations that will focus on the cultural and social side of the topic, contributing to a better understanding of the economic, political or ecological aspects in general.
The pre-contact way of life of Yup’ik people in southwest Alaska was little known until the 2009–2018 excavations at the Nunalleq site near the village of Quinhagak. Until recently, the site dating from around AD 1400–1670 had been locked in permafrost that secured the extraordinary preservation of organic artefacts and faunal materials.
As in many other hunter-gatherers' communities across the Arctic, animals were economically and culturally central to the lives of Nunalleq residents. Our multidisciplinary study combines the zooarchaeological analysis of faunal remains, previously published isotopic studies and the ethnographic study of artefacts unearthed from Nunalleq to better understand the economic, social and symbolic value of different terrestrial, marine and riverine animals in the life of pre-contact Yup'ik community.
What animals are predominant in the faunal assemblages from the site? What fish and game played an important role in the subsistence activities of Nunalleq residents? And foremost – how pre-contact Yup'ik human-animal relationships were manifested in their material culture, particularly in the iconopgraphy of ceremonial objects such as masks and mask attachments?
Early ethnographic records and collections suggest that complex in its structure and imagery, almost every hooped Yup'ik mask can be viewed as a model of a multi-layered universe in miniature. It represents the way humans and animals are related and reciprocally linked in the Yupiit's worldview. By taking this approach, our paper aims to demonstrate what could be learnt about the ecologies, social life and cosmology of Nunalleq residents when studying masks and mask adornments recovered from the site.
10:30
Nikolai Vakhtin (European University at St. Petersburg)
Domestication of reindeer: An emic perspective
In the 1940s, a collection of Yupik Eskimo folklore texts was recorded by Ekaterina Rubtsova (1888–1970). Part of the collection was published in 1954; the remaining texts stayed forgotten on the shelves of the Northern Languages Department of the Linguistic Institute in St Petersburg. In 2009 I re-discovered the dusty files and started to prepare the texts for publication.
In the course of this work, I came across three texts united by a common plot. All three were recorded between December 1940 and Spring 1941; all three were in the Ungazighmiit language; the story-tellers of two are known: Nalugyaq (1888–1942), and Tatko (ca. 1875 – ca. 1944); the third is anonymous. All three tell the same story.
A man abandons his older wife and two sons for his younger wife, taking away the herd and apparently leaving the old family to die. They survive; the boys grow up, start hunting, learn a lot from their mother, and finally come across a herd of wild deer grazing nearby. They start taming the herd, training it not to be afraid of humans, not to be afraid of fire and smoke, to get accustomed to new smells, and so on. Finally, they fully domesticate the herd and, driving the herd with them, set out on a journey in search of their father. On their way, they pass other camps where people immediately see that the deer are wild, and are surprised that a wild herd behaves like a tamed one. (Eventually they find their father, but this is irrelevant for this talk.)
In this talk, I will deal with the following questions:
from the emic perspective, where is the borderline between tame and wild?
from the emic perspective, what should a tamed animal learn to do, or feel, or get accustomed to, or stop being afraid of, in order to be considered domestic?
and what are the specific techniques of taming?
The three texts in question provide a lot of details that allow us to answer those questions.
In my presentation, I look at the prose works of the Chukchi author Yuri Rytkheu. I analyze how he describes the influence of the Soviet colonization of the Far North on the relationship between indigenous people and the nonhuman animals of the area. I argue that this question reflects indigenous people's balancing of their traditional worldview with the modern one promoted by the Russian colonizers. As the indigenous people's traditional livelihoods depended upon Arctic animals, they had developed their own cultural ways of dealing with the animals and understanding their relationship with them. The Soviet authorities, however, replaced the cultural worldview and epistemologies concerning the relationship between humans and animals with a modern understanding based on materialism and the natural sciences. I approach this question through Gayatri Spivak's concept of epistemic violence, as I explore the replacement of the indigenous tradition with the values of the hegemonic culture as an act of such violence. I also pay attention to the different ways in which Rytkheu treats this issue during different periods of time by comparing two of his short novels, The Harpooner, published in 1969, and Under the Constellation of Grief, published in 2007 after the collapse of the Soviet Union.
11:30–12:00
Coffee break
12:00
Aimar Ventsel (University of Tartu)
How technology and laws shape human relationship to the tundra and water
In my talk I look at the use of natural resources over a fifteen-year time span in a Dolgan settlement not far from the coast of the Arctic Ocean. The community I will discuss is a community of reindeer hunters, fishers and reindeer herders. I also draw on comparative material from a Nivkhi village in North Sakhalin. My focus will be on how the human-animal relationship is shaped by the development of telecommunication and transport technologies and by the laws defining 'traditional indigenous economy' in the Russian Federation. My argument is that 'indigenous tradition' in modern times can be seen as a foraging and marketing strategy combined with the market economy shaped by state laws. The regulations of land use and new tax laws are in fact impacting the human-animal relationship in a very specific way, forcing people to give up or adopt different hunting and fishing strategies. The mobile phone and changes in the use of snowmobiles affect hunters' and fishers' mobility, which in turn affects the intensity of hunting, reindeer herding or fishing.
12:30
Donatas Brandišauskas (Vilnius University)
Emplaced relationships with wolves in the changing environment of East Siberia
In this presentation I aim to reveal how the wolf features in the daily lives of indigenous people in dynamically changing Siberian socio-environmental contexts. I will explore indigenous people's interactive dimensions, contextualized knowledge and discourses surrounding mutual interactions between humans and wolves. My presentation will aim to answer the following questions. How do people relate to wolves in a context where herders, hunters and villagers live in close proximity to predators, sharing a common environment and resources? How does such long-term coexistence of humans and predators shape the way humans think about wolves and how they interact with them on a daily basis? How do people perceive wolves on personal and communal levels, and how do they integrate their experiences of interactions with wolves into the changing environment and sociopolitical context?
13:00
Maria Momzikova (University of Tartu)
How did the cow disappear from the tundra: Animals as a way of representing interethnic relationships
In 1927, the tundra dweller and reindeer herder Sundampte Chunanchar from the Taimyr Peninsula told the enumerator of the First Soviet Polar Census, Alexandr Lekarenko, a tale about a cow who lived in the tundra in a tent chum with an eagle and was supposed to be eaten by the wild animals, the wolf, the bear, the fox and the wolverine, who lived together in a second chum. At the end of the tale, the eagle moved the cow away from the tundra to an empty Russian house izba. Then a Russian man came there and took the cow with him. Since that time the cow has lived with Russians.
As we can see, the tale has an etiological ending. The whole plot of the cow being pursued by predators in the tundra becomes the reason for the domestication of the cow by Russians. In tales told by the Nganasan indigenous people, as well as in mythological narratives about the life of supernatural beings and the reasons for the appearance of different phenomena in the world, the 'Russian' space with its houses izbas, cows, horses, peasants, soldiers, Russian tzar and Russian God is usually opposed to the tundra space with its tents chums, sledges nartas, reindeer, bears, other predators, nomadic indigenous peoples and indigenous supernatural beings. Despite this spatial separation, the two spaces are always in connection with each other.
I want to put the tale about the cow in the wider context of relationships both between humans and animals and between different ethnic groups, in this case between Nganasans and Russians, and to show how animals can be used by informants to represent relationships between ethnic groups, dividing or uniting them, at least in the space created by tales and mythological narratives. I will use published and archival materials of folklore texts recorded from Nganasans, mostly during the twentieth century.
13:30–14:30
Lunch (for the speakers)
14:30
Lidia Rakhmanova (National Research Tomsk State University)
The moment of a shot: When the distance between researcher and animal disappears faster than between researcher and hunter
The process of hunting, its tools, tricks and methods, is directly related to the way in which hunting and killing an animal is justified. This system of justification includes not only the choice of 'humane traps' but also the hunter's transport, way of stalking, and hunting season. In the lifeworld of the hunter's family, this system, reflexive and in its own way ethical, is embedded in the culture of everyday life so tightly that justifying the right to kill is practically unnecessary; to justify it is to take a step from the 'home' culture to the outside.
So what should the anthropologist do when they are handed a gun by informants? And if the researcher refuses to take the decisive shot, to what extent does participant observation remain true? Is the trust and support of informants a goal worthy enough to justify a shot during the hunt? How does the justification of killing borrowed from the researcher's world combine with the interpretations of the informants themselves? How can the ethical requirements of anthropological research be harmonized with the local hunters' unspoken code of ethics?
Finally, when it comes to the participation of a woman anthropologist in hunting, isn't this a form of double marginalization in societies where the very practice of female hunting can call men's hunting luck into question? And what does it mean when you are permitted to participate in the hunt along with the men? Might the association of a woman anthropologist with the killing process break social connections with local women? Does this mean public disruption and a collapse of the researcher's identity, including gender and cultural identity?
I would like to consider the moment of a woman's participation in hunting, her position regarding wild animals and her status relative to other hunters, as well as the difficult choice between participating but intentionally missing the target to maintain trust and friendship, and participating and hitting the target to earn a ghostly respect that perhaps marginalizes the woman researcher even more.
15:00
Victoria Peemot (University of Helsinki)
Livestock of the land: Emplacing human-horse relationships in South Tyva
South Tyvan herders, who live on the border of the boreal forest and the Inner Asian steppes, conceptualize their own identity as nomadic pastoralists through their relationships with domesticated animals and, most importantly, with the horse: "Without the horse, there wouldn't be Tyvan people in history, our homes wouldn't be built. Home, the first houses in the Tes-Khem district have been built with the horse's help. Horses have brought logs down here from the taiga. Without the horse, there is nothing in life. The horse is necessary" (Peemot 2017: 135-6). The defining features of human-horse relationships, the autonomy and symbolic value of equines, can be studied through the emplacement of relationships. This study proposes that the horse's dominance in the hierarchical and gendered space of the yurt, a herder's mobile dwelling, is equal to its high symbolic value. The herders' respect for the autonomy of the horse reflects their categorization of domesticated animals into the 'livestock of the kodan' (cows, sheep, and goats) and the 'livestock of the land' (horses and yaks). The study defines the key terms that are important for understanding the multispecies sociality and landscape perception of South Tyvan herders: aal, kodan, and khonash. While all of them are inherently human-present landscapes, their meanings are distinct, and that distinction sits at the core of the herders' perception of the landscapes where they live with nonhumans, reflecting and affecting their relationships with horses.
15:30
Jyrki Pöysä (University of Eastern Finland)
Cockroaches, bedbugs, lice and ticks: From culture of poverty to culture of forest fear
In Finnish agrarian folklore, cockroaches, bedbugs and lice are usually described as inevitable companions of a poor man's life, at logging camps, in the army or in cheap accommodation in local hostels. Folklore about such vermin is rich, with stories about practical jokes and imaginary tales about arranged running competitions between trained cockroaches. The overall tone is never scary: the vermin are there to stay in the poor man's life, and only the sauna is able to give him some short-term release. The stories and the attitudes behind them can be interpreted as a culture of poverty, where release from hardship is brought about with the help of folklore and the sociability of joking about the inevitable realities of life. The contrast with current forest fears about the diseases ticks spread (borreliosis, brain fever) is striking. Is there any basis to regard this new forest fear as socially divisive, or is it a sign of a new equality in the face of the inevitabilities of nature? A sign of a growing distance from a nature that touches everyone? A new breakdown of the symbiosis between contemporary humans and animals? In my paper I try to analyse this supposed breakdown of symbiosis in light of older relationships between humans and vermin, and as a contrast to more aestheticizing approaches to nature then and now.
16:00–16:15
Coffee break
16:15
Semen Makarov (A. M. Gorky Institute of World Literature of the Russian Academy of Science)
Alien and dangerous: The image of a cat in mythological worldview of Yakuts
The presentation will consider the symbolic contexts in which the image of the cat operates in the oral tradition of the Yakuts. In addition to covering this fragment of Yakut mythology as such, which has remained practically untouched by scholarly description, the analysis of this topic allows us to come closer to clarifying the characteristic features of the images that represent the latest increments to the autochthonous mythological worldview.
Domestic cats were not known to the Yakuts until the emergence of contact with the Russian tradition. Once in the territory of Yakutia, presumably in the first third of the 17th century, cats quickly spread and became loved as companion animals. At the same time, like any pets, they began to accumulate a variety of mythological information.
In the symbolic 'profile' of the cat, upon closer examination, significant features of otherness are found. Even as the concept joined the traditional worldview of the Yakuts, it continued to hold associations with the foreign culture: cats are named with words adopted from Russian, kuoska and maaska (from Russian Mashka), and in some respects the cat itself acts as a symbol of the Russian person (see the proverb Kuoska khaana khaalbat, 'The blood of the cat doesn't disappear', said disapprovingly about the long-term results of Russian-Yakut crossbreeding).
In addition: if a cat falls ill with the 'Yakut disease' (in traditional nosology, a disease of the internal organs of unknown etiology), it must die. In children's speech formulas, any mythological punishment for rash or knowingly wrong deeds is transferred onto a cat. Finally, the signs of 'alienness' are expressed in attributing potentially dangerous witchcraft abilities to the animal. Up to the present day, in the Yakut tradition we can find ideas about the cat as an animal that can foresee the future and as an animal-curser.
It is notable in this regard that some traditional genres of Yakut folklore are not receptive to the image of the cat: the fairy tale, the epic, and the verbal components of rites. It seems that these features, taken together, can serve as a more or less reliable criterion for establishing the 'later' character of a mythological tradition.
16:45–17:15
Zoia Tarasova (University of Cambridge)
Human imaginations, cattle resolutions: A discourse on indigenous cattle breed among the Sakha of northeastern Siberia
Over the past couple of decades some representatives of the Sakha (Yakut) intelligentsia have been increasingly engaged in a discourse on preserving and multiplying an indigenous (Sakha) cattle breed, of which a few hundred head were left in a remote northern district during the Soviet reform of cattle Simmentalisation / industrialisation. These cattle are kept in special conservation farms located either in unpopulated settlements in isolation from other breeds, 'Russian' (nuuccha), 'foreign' (omuk), 'incoming' (kelii), or in the middle of villages which had previously bred non-Sakha cattle but are now undergoing all-village back-crossing to the Sakha breed by castrating their non-Sakha bulls and banning their import. Outside of these villages, some private farmers, too, are switching to the native breed, thereby contributing to a gradual albeit not unopposed 'Sakhaisation' (sakhatytyy) of cattle in the region. Sakha stud bulls are praised for having greater sexual prowess as well as 'thicker' and 'more motile' sperm compared to other breeds. Conversely, their cows are valued for being sexually choosy and especially loyal to their own breed, to the extent that a few people reported to 'have never heard of a Simmental calf being born by a Sakha cow'. Drawing on fifteen months of PhD fieldwork among both urban and rural Sakha in 2017-2018, I shall explore the symbolism underlying this discourse. What current anxieties and imaginations of these people might this discourse speak to? Is this another form of the human-animal oneness known to us from the anthropological literature? And finally, how can tackling these questions help us better understand the nature of such a relationship?
Saturday, 25 May
09:30
Laur Vallikivi (University of Tartu)
Human and spirit herders of reindeer: Owning, guarding and exchanging in the Nenets tundra
The relationship between humans and reindeer is complex and multi-layered among nomadic Nenets. Throughout history, living in the treeless inland tundra has only been possible thanks to reindeer, either wild or domestic. Even today, almost any aspect of everyday life is related to reindeer as a crucial resource for living as they provide food, transport, clothing, dwelling and define many aspects of social relations with human and nonhuman others. The significance of reindeer is rendered by the Nenets terms for wild reindeer ilebts and the guardian spirit of reindeer ilebyam pertya which are etymologically related to the word ile, i.e. ‘to live’. Since the emergence of large-scale reindeer herding around 300 years ago, domestic reindeer (ty) have given the Nenets a greater control over reindeer as a material resource. However, unlike state economists and administrators, Nenets do not regard herding only as instrumental resource management: rather their half-tame reindeer in the herd are seen as agentive persons who are co-managed by the spirits who take part in the herding. Furthermore, reindeer reflect the qualities of their human owners, for instance, their skills or character. Animals in a herd are thus refractions of not only spirits but also of their human owners in the moments when certain events make these connections visible. I discuss a few such events and how these relate to the notions of owning, guarding and exchanging among Nenets reindeer herders. This research is based on my fieldwork in the Great Land Tundra and the Polar Ural Mountains over the last twenty years.
10:00
Art Leete (University of Tartu)
Understanding Komi dogs
In traditional ethnographies, scholars treated animals among other material objects. Animals were included in descriptions and analyses mostly as representative of the culture (they influence the design of agricultural tools and the organisation of food production). Similarly, ethnographies that explore hunting among the Komi people contain only marginal notes on the dog-hunter relationship. Ethnographers acknowledge the extraordinary importance of the hunting dog, but beyond this there is scarce published data concerning the Komi hunters' understanding of their dogs. I have conducted fieldwork among the Komi annually since 1996. Gradually I have become acquainted with the Komi hunters' attitudes towards their dogs. I have recorded numerous stories that reveal hunters' ideas concerning the characteristics that determine good and bad dogs, and the rules for proper treatment of a dog by a hunter. In addition, scholars discuss some peculiarities of communication between a dog and a hunter. Data concerning dog-related mythological beliefs (dogs participating in the Creation of the World) can be found sporadically in the texts of a few Komi researchers. I attempt to discuss how, and to what extent, the few vernacular religious ideas appear in the actual hunting practices of the modern Komi.
10:30
Nikolay Goncharov (Peter the Great Museum of Anthropology and Ethnography (Kunstkamera))
Peculiarities of human-animal interactions in the Zhigansk village (Republic of Sakha (Yakutia))
My paper describes the features of the relationship between humans and animals and the specific perception of animals by humans. It was written on the basis of field material collected by the author during expeditionary work in the village of Zhigansk in the Sakha Republic (Yakutia). The choice of animals and the volume of content devoted to them are directly related to the nature of the material that I have. I studied narratives concerning dogs, bears, fish and birds, and also mammoths, which have an indirect effect (through the extraction of tusks) on the economic and cultural life of the village today. As a result, I reveal some features of how the concept of domestic/wild groups the animals, and I attempt to analyze the specifics of interaction with the different groups on the basis of the collected material. Everyone in Zhigansk interacts with different species of animals during his or her life, both in habitualized space (dogs in the village) and in unhabitualized spaces (bears and reindeer in the forest; fish in the rivers, and so on). These aspects reflect the impossibility of separating people from animals in the context of their lifestyle. In my report I am going to show some peculiarities arising from the human perception of animals, which form a distinctive 'topology' in people's mental spaces created by their comprehension of animals.
11:00–11:30
Coffee break
11:30
Stephan Dudeck (European University at St. Petersburg / University of Lapland)
The role of distancing practices in human-animal and human-human social relations in the Russian Arctic
The paper advances from the notion of interspecies symbiotic adaptation to a comparison of social relations based on distancing practices in the Arctic. It looks at several case studies from the Russian Arctic in order to identify how sustainable and mutually beneficial relations are built involving distancing, non-interaction, silence and ignorance. Special attention is paid to the way these relations oscillate in time and space involving seasonality and mobility. Under conditions of high mobility and the trade of goods and services between sedentary populations and nomadic reindeer herders, domestic animals were selected in order to occupy a variety of functional niches in the local economy. Recently new forms of governmentality put restraints on these interethnic and interspecies interactions, which is met by local reindeer, horse and cattle breeders with new strategies in order to safeguard the local socio-ecological system. The paper concentrates on these new evolving practices in different communities in the Northern part of European Russia and Western Siberia.
From hunting to reindeer herding: Reindeer as the axis of Yuri Vella's worldview
Yuri Vella (1948-2013) was, until he was around 40, a sable hunter, and rather a good one. Then he felt that he had killed enough animals and that it was time to fulfil his dream of living with reindeer. He resigned from his work as a hunter in a state 'artel', bought ten reindeer and started building a life for himself and his family in the forest. His dream came true: through learning and suffering, he became a reindeer herder and lived with his herd until his demise in 2013.
What did reindeer mean to his life? Reindeer were undoubtedly his childhood dream. But they became the very axis around which his life revolved. They determined his timetable, daily and monthly. They represented his identity as a native. But more deeply still, they were the keepers of his grandchildren, and they connected him with his human environment, with his friends and acquaintances, and with the life of his country.
We shall dwell on this last aspect of Yuri Vella's relationship with his reindeer by focusing on one quite well-known and publicised episode in his life: how he presented Russia’s president with one of his female reindeer, asserting that through this reindeer and her offspring he could monitor how the president fared and whether the president’s policy was agreeable or not to the Gods.
12:30
Elena A. Davydova (Peter the Great Museum of Anthropology and Ethnography (Kunstkamera))
Taste of meat through relation to an animal in the northeastern Chukotka
Reindeer killing is the crucial moment of turning a living being into dead food in reindeer herders' communities. This research investigates the interrelation between slaughtering practices and the taste of the produced venison in the Amguma village and tundra in northeastern Chukotka. I will argue that the relation to the animal at the time of killing defines the taste of the meat. To reveal this, I will compare the practices of reindeer killing in the slaughterhouse and in the tundra. Today there is a slaughterhouse near Amguma village, and many local people work there during the period of commercial herd slaughtering, which usually happens in September-November. This meat mostly goes to Anadyr and is also partly for local consumption. Local people emphasize that venison produced in the slaughterhouse is tasteless and smelly, while tundra meat, on the contrary, is considered tasty and fragrant. I will show that the differences in the perception of food occur due to distinctions in the relation to the beast that exist in these two contexts. In the tundra a reindeer herder relates to the animal as a person that has agency, and the act of killing is a communication between two personalities. People have to eliminate the personhood or subjecthood of a living being and turn it into a material object in order to produce food (Kohn 2013). The actions of people during slaughter and carcass cutting gradually depersonalize the animal. In the slaughterhouse, by contrast, living reindeer are treated as if they were already objects. Such 'improper' relation leads to 'abnormal' food production.
13:00–14:00
Lunch (for the speakers)
14:00
Vladimir N. Davydov (Peter the Great Museum of Anthropology and Ethnography (Kunstkamera))
Domestication with Evenki enclosures: Reflecting animal agency
Architecture embodies political decisions; it may serve as a manifestation of a political regime. At the same time, one cannot neglect its practical functions in building a certain kind of human-animal relations. Political changes in Russia during the last century brought changes to domestication regimes. The Soviet innovations were based on an ideology of control: the human-animal relationship involved such operations as counting, measuring, supervision and veterinary care. In many respects, the state introduced infrastructure intended to rationalize work in the taiga. During the Soviet period the Evenkis of the Olekma River Basin started to build long enclosures, which spread over tens of kilometres. However, the fences Evenki reindeer herders use are not always strong or rigid enough structures to prevent reindeer movements. Animals can break the structures if they do not want to stay inside. These enclosures can thus potentially allow animals to manifest their own agency. This means that it is not only people who decide how long reindeer should stay enclosed. As soon as people see that reindeer are starting to break a fence, they move with the herd to another place. In this sense, Evenki fences help humans recognize the agency of animals and can potentially give the animals a choice to stay or to move. The building of a new corral by Evenki reindeer herders is not a blind following of tradition or state ideology, but a reflection of the animals' behaviour, environmental conditions and the landscape. Therefore, Evenki enclosures are not the rigid structures they might be assumed to be. Rather, they are flexible to changes both in the political sphere and in the natural environment.
February 9, 1951.
Ron. C. E. Belk, Administrator
Texaa~State Board of Plumbing Ramlnem
Austin, Texas Oplalon HO. v-1150.
Re: Collateral~~8~cti~lt$~
of
bahk depoaftb oi State
Board of Plumbing Ex-
Dear Sir: aminers.
You have requested an bplnlon on stv'eral
queatlona pertaining to collateral security for bank
depoalte of the State Board OS Plumblug RaPrlners;
The first question 16:
Is the>Texas State Board of Plumbing Ex-'
aminers required by law to requliv that deposit8 .of
Its.funds be aecurti by'collateralpledged by bank8
where such f'undsare deposi.ted? .
Article 2529, V.C.S.,'yequlrefsthe State
Treasurer to secure collateralfor all state fUnUS
de@bbl.t&fw hti'lllb&i&s'whlchhave ~uallfled 88'
depo@torlea of such funds. Article 25117,V.C.Si,
requires bauks that are depoeltorlesoi Couhty Punde
t6 el.thi?r
post a bond or'pledge collateral secizrl
tb secure"the deposits of such bouhty.' BFtlele'~25
2'0;.
V.C.S., 1s a similar etatute with referenae to Becur-
lng depoelte of incorporatedcitlee. Article 2832',
V.C~.S.)requires fuude of certain Independent school
districts to be secured by collateral. Article 7880-
113, V.C.S.,makes provisions for the aecur5ug of funds
04 water control and Improvementdistricts.
Theae statutes are referred to in order to
show that In varloue Instances the Legislature has
epeclfloallyrequired the state agency or polltloal
eubdlvlalon Involved to secure either an lndemulty
bond or the pled@ug of'collateralasset8 by the de-
pository bank beftbreita fund8 could be deposited in
such bank.
Article 6243-101, V.C.S., creates the Texas
State Board of'Plumbing Rxemlners; conaletlng oi slx
members appointed by the Governor, provides for the
BOnm C. E. helk; page 2 (V-l&O)
collectionof fees by the Board, and makes other
general provisions for the enforcementof various
plumbing regulations. All expense8 of the~Board
In enforcing such Act are to be paid out of fee8
collectedby the hoard, but Section 7 of the'Act
provides that no fee8 80 collected shall ever be
deposited In the General Fund of the State. 'go- -
where In Article 6243-101,V.C.S., Is there any've-
qulrement that the hoard obtain collateral'securltp
for Its deposits. In fact, the Legislature contem-
plated that the Board should never accumulate a
large balance of funds, since Section 7 further pro-
vides that .I?the fund8 remaining In the hands of'
the Hoard at the end of each year are In excea8 of-
the expenses of the Board, then the hoard &a&l re-
duce the amount of license fees. In view of this
latter proolelon and the absolute omlti.slon
of any
requirement throughoutArtlcle.6243+j1, V.C.S.;
that collateral se~curltybe.fural@hedby' ; ~.pl-
tories of the hoard's fuads,~and lu ~vlevp"p&
of e -~
several statutes above cited speolflbil)yiequlr-
lng collateralIn many ln8tance~.,yve'are ,ofthe"
oplnlon thatthe Legislaturedid,not.ln~~nd:~t~ pe-
wire t&fitdepoalts of the State Bo~~.'of~:P:+~~i~iq
Examiners be secured by colla%er+.
'Thls'brlngsus to-jrorijr
other~que@tlonsi
whloh In sub+ance are a8 f0llow8~;
.
May the State Board of Pluiubig~Fxa*ip~..-
ers, a8 a matter of dlbcretlon,~eat& into aa 'agree-
ment with the depository bank, whether ,ltbea 's'tatb
or national bank;for the bank togeeb~e'depo:<uof
the Bxard's futids?
In the first place; the member8 of the
State hoard of Plumbing Examluer8 are public offl-
cers of thla State and the Board is an agepep OS,
Phapmaceutloal ‘Asbfn’ t.
ex. Clv. App. 1936) Ih
tate hoard of Fha~m6io$tiere
held to be public officers,of this State.
In the second place, the moneys collected
by.the Board are public money8. or funds of'the State,
ereozthoughnot required $0 be depoelted In the @en-
tial,Fundof the State. Cf. Game and Fish Com~ls-'
elon v. Talbott, 64 S.W.26 883 (4 Ct APPi I!391
II Forbes, 227 Pac. 768 (Cal: Su' 1924); St&r W-.
v. $ogie; 157 P.2d 135 (Cola. silp.lgg,. -
Hon. C. E. Belk, page 3 (V-1150).
The case of Lover Colorado River Authorllq
v. Chemical Bank & Trust Co. le S W Z?d'p61(T
I 1943) affirmed 184'Tex. 526 190 S.W%
k* considered
App* <he questlbn of ihether'or not a batik
co&d legally secure the fund8 belonging M‘the'Lower
Colorado River Authority by pledging part of Its at+
aets a8 collateral. Having determined that the Auth-
ority was an agency Of the State and that it8 ftilrd8
were public funds, the Court relied upon the prWl-
alone of Article 342-603, V.C.S., as authority for
a state bank,to pledge part of lts.asseta as secur-
ity for deposits of public funds. The pertinent
part of this article la:
"Bo state bank ahall pledge;~or
create any lien upon, any asset or In
any vay secure the repayment of any de-
posit except when apeclflcallyauthor-
ized to do so by law, except that It-~
may pledge it8 aSSet6 t0 secure a de-
posit of or by the Whited States Gov-
izrnment,the State of Texas; or aus
agenag or ~nstrum~ntalltyof either.
. . .
In upholding the authority of a national
bank in this respect, the COtWt said:
"The National Banking Act, 12
U.S.C.A. $ 90, authorizes national
banks to secure deposits 'of a State
or any political siibdlvl8lonthereof,'
where the law of the State 1n which
such benk le located authorices the
pledge of securltleaby other banking
lnati.tutlons.The Texas law author-
izes such pledge of assets t? secure
deposits of or by the United State8
Government, the State of Texas, or 'my
agency or lnatrumentalltyof either.
Acts 1943, 48th Leg., p. 152, oh. 9'7,
Sub. VI, Art. 3, Vernon's Ann. Clv.
art. 342-603.” (185 s.w.2d at
The Court then concluded:
"Being public funds, the pledge of such securities by the cotrustee to secure the deposit thereof was clearly authorized by both State and Federal laws." (185 S.W.2d at 468)
For other cases on the authority of a national bank to pledge its assets as security for public funds where the state law permits state banks to do so, see City of Marion v. Sneeden, 291 U.S. 262 (1934); Loudon v. Town of Pelham, 126 F.2d 714 (C.C.A. 2d 1942); Fidelity & Deposit Co. of Maryland v. Kokrda, 66 F.2d 641 (C.C.A. 10th 1933).
It is our opinion, therefore, that the Board may, in its discretion, enter into an agreement with a state or national bank for the bank to secure deposits of the Board's funds.
The Texas State Board of Plumbing Examiners is not required by law to receive collateral for its funds from a depository bank. However, the Board may, in its discretion, enter into an agreement with a state or national bank for the bank to secure deposits of the Board's funds.
APPROVED:                         Yours very truly,
Red McDaniel                      PRICE DANIEL
State Affairs Division            Attorney General
Jesse P. Luton, Jr.
Reviewing Assistant
Charles D. Mathews                Clinton Foshee
First Assistant                   Assistant
CF:rt:b:jmc
From phenotypic to molecular polymorphisms involved in naturally occurring variation of plant development.
An enormous amount of naturally occurring genetic variation affecting development is found within wild and domesticated plant species. This diversity is presumably involved in plant adaptation to different natural environments or in human preferences. In addition, such intraspecific variation provides the basis for the evolution of plant development at larger evolutionary scales. Natural phenotypic differences are now amenable to genetic dissection up to the identification of causal DNA polymorphisms. Here we describe 30 genes and their functional nucleotide polymorphisms currently found to underlie allelic variation accounting for plant intraspecific developmental diversity. These studies provide molecular and cellular mechanisms that determine natural variation for quantitative and qualitative traits such as: fruit and seed morphology, colour and composition; flowering time; seedling emergence; plant architecture; and inflorescence or flower morphology. Moreover, analyses of flowering time variation within several distant species allow molecular comparisons between species, which are detecting homologous genes with partly different functions and unrelated genes with analogous functions. Thus, considerable differences in gene function are also being revealed among species. Inspection of a catalogue of intraspecific nucleotide functional polymorphisms shows that transcriptional regulators are the main class of genes involved. Furthermore, barely more than half of the polymorphisms described are located in coding regions and affect protein structure, while the rest are regulatory changes altering gene expression. These limited analyses of intraspecific developmental variation support Doebley and Lukens's proposition (1998) that modifications in cis-regulatory regions of transcriptional regulators represent a predominant mode for the evolution of novel forms, but await more detailed studies in wild plant species.
package org.synyx.urlaubsverwaltung.web;
import org.slf4j.Logger;
import org.springframework.http.HttpStatus;
import org.springframework.security.access.AccessDeniedException;
import org.springframework.web.bind.annotation.ControllerAdvice;
import org.springframework.web.bind.annotation.ExceptionHandler;
import org.springframework.web.bind.annotation.ResponseStatus;
import org.springframework.web.servlet.ModelAndView;
import static java.lang.invoke.MethodHandles.lookup;
import static org.slf4j.LoggerFactory.getLogger;
import static org.springframework.http.HttpStatus.BAD_REQUEST;
import static org.springframework.http.HttpStatus.FORBIDDEN;
/**
* Handles exceptions and redirects to error page.
*/
@ControllerAdvice
public class ViewExceptionHandlerControllerAdvice {

    private static final Logger LOG = getLogger(lookup().lookupClass());

    @ResponseStatus(BAD_REQUEST)
    @ExceptionHandler({AbstractNoResultFoundException.class, NumberFormatException.class})
    public ModelAndView handleException(Exception exception) {
        // The parameter must be typed Exception (not AbstractNoResultFoundException):
        // this handler is also mapped to NumberFormatException, which Spring could
        // not bind to the narrower parameter type.
        if (LOG.isDebugEnabled()) {
            LOG.debug("An exception was thrown", exception);
        }
        return getErrorPage(exception, BAD_REQUEST);
    }

    @ResponseStatus(FORBIDDEN)
    @ExceptionHandler(AccessDeniedException.class)
    public ModelAndView handleException(AccessDeniedException exception) {
        if (LOG.isDebugEnabled()) {
            LOG.debug("An exception was thrown", exception);
        }
        return getErrorPage(exception, FORBIDDEN);
    }

    /**
     * Get the common error page.
     *
     * @param exception  has information about the cause of the error
     * @param httpStatus the HTTP status code rendered on the error page
     * @return the error page as {@link ModelAndView}
     */
    private static ModelAndView getErrorPage(Exception exception, HttpStatus httpStatus) {
        ModelAndView modelAndView = new ModelAndView("errors");
        modelAndView.addObject("exception", exception);
        modelAndView.addObject("statusCode", httpStatus.value());
        return modelAndView;
    }
}
Wednesday, August 15, 2012
Day One
Day one is over.
The morning was a little crazy. Our students all arrive at different times, and after picking kids up from the office, meeting parents, meeting the bus from Okarche, eating breakfast, and taking morning restroom break, we finally got class started around 9:30. We started with music, which the kids LOVE. Then we had a period of work, followed by a short "station/center" time--playdoh, math games, train sets, and books.
Then we went to lunch. WARNING: grossness to follow!!! The lunch ladies must have decided to really initiate me on the very first day, because we had spaghetti. MY GOODNESS!!!! If you can survive spaghetti lunch day in the multi-handicap classroom, you can handle absolutely anything.
I learned several things during lunch today. First of all, spaghetti is better eaten with your hands, even if your teachers cut it up to bite-sized pieces and load it up on your fork for you. Also, green beans shouldn't be eaten at all (I kind of agree with this. I don't like them, either). So, if your teacher feeds you a forkful of them, the best course of action is to spit them back into her hand, along with the particles of spaghetti that may still be there. Something else I learned is that if you don't like grapes, if grapes in fact are a gag-inducing fruit to you, it is perfectly ok, acceptable even, to eat the stems instead, then laugh maniacally while your teachers try to pick them out of your mouth.
After lunch, we did some more work, while the aides and I rotated eating our own lunch. Then some more centers before our 35-minute recess. After recess, we had snack time and clean-up. Then it was time to go home. Our students dismiss about a half hour before the rest of the school.
All in all it was a good day. We're getting a new student tomorrow. Another boy! We'll see how it goes.
Date: Sat, 3 Dec 2011 17:46:31 -0800 (PST)
From: Jon Benjamin
Subject: The Daniel Diaries : Chapter 2
The next day I woke up at around 7am, pretty pissed that it was so early. It was Saturday and I had absolutely nothing to do all day. The next thing I know I hear that little notification sound on my phone beep. I look at it and it's not a number I recognize. Then I remember Kory from yesterday; it must be him. So I text back asking who this is, and right when I put the phone down it went off again.
"It's Kory. We met yesterday in the grocery store," he said.
"Oh, hi Kory, what's up? (:" I said. I was hoping that he wanted me to be his tour guide today so I actually had something to do on a Saturday.
"Nothing really, just woke up; I was wondering if you were up for that tour?" he said.
At this point I was so excited I had to compose myself; luckily we were only texting, so he had no idea what my reaction was.
"Yes, that would be great. Just tell me where you live and I'll pick you up in about 45 mins," I said.
"Okay, I live on 87 Winthrop Way. It's right near the grocery store we met at," he said.
I quickly texted back "ok, be there in 45" and jumped in the shower. When I was done with my shower I went to my closet and looked for the right thing to wear. I put on my favorite grey skinnies, my baby blue Vans sweatshirt, and my green and black Osiris'. I headed out the door, got in my car, put the address in my GPS, and I was on my way to get Kory.
When I got there I sent him a text saying I was there, and about 2 minutes later I saw him walking towards my car. The door opened, he got in, and we started up a bit of small talk as I drove down the road.
"Hey," he said.
"Hey, ready for your tour?" I said with a little smile on my face.
"Yes, I think it will be much needed," he said.
As we went down the main road of town I showed him where all the good eating spots were and where the cheapest places to buy food were. I showed him where the high school was, and then I was stumped with what else to show him.
"Do you guys have a mall around here?" he said.
"Oh my god, I totally forgot. There's a mall about 10 minutes from here in the next town," I said, feeling completely dumb.
"It's okay. I was just hoping I would be able to buy some new clothes before I started at a new school," he said.
"Well, what stores do you usually shop at?" I asked.
"Usually I shop at Zumiez or PacSun. Lately, though, I've been shopping at a store called Rue21," he said.
"Well, I like shopping at Zumiez. I think there is a Rue21 at the nearby Limerick Outlets," I said.
"How far away is that?" he asked.
"It's a little far, about 45 minutes from here," I said.
"Damn, so far just for good clothes," he said.
"Well, how far away was it where you used to live?" I asked.
"It was about 15 minutes, but I lived in the city," he said.
"Well, yeah, everything is closer in the city," I said.
After all the small talk we finally reached the Limerick Outlets. You could tell he was surprised to see such a huge place in such an odd location. He looked so starstruck, and I couldn't help but giggle to myself, and I think he noticed, because he closed his mouth, looked at me, and asked.
"What's so funny?" he said nervously.
"Nothing, you just looked so starstruck and cute," I said. The more I thought about it, the more nervous I got; I had just told him he looked cute. What was I thinking? What if he got pissed and we had to go all the way back to town, silently? To my surprise he just smiled, so I had no choice but to then ask him why he was smiling.
"I'm smiling because you called me cute, and I think you're cute too," he said, still holding that little smirk.
"I'm really sorry.... Wait, what? You think I'm cute? Are you joking with me?" I said, being very confused.
"I'm not joking at all, I think you're cute. In fact, I think you're actually a hottie," he said.
At this point I was growing rather hard thinking about him calling me hot; then it came into my mind that maybe, just maybe, he and I would be very good friends in the future. I was still very surprised at what he said, and the next thing that happened surprised the hell out of me. He leaned over the armrest, grabbed my face, and turned my head towards his. I sat there looking in his eyes, and then I realized he was moving closer to my face. His lips started to pucker, and I felt as if I was in slow motion. I looked at his lips; they looked so beautiful I couldn't resist. I wanted to kiss him as well. As our lips touched I felt this little spark that made me feel like I was on a cloud, that nothing could ever be better. It felt like the kiss went on for an eternity, and I did not mind one bit. It was the most intimate I had ever been with anyone, and it felt so right. Then it felt like I was missing something; I realized he had stopped kissing me, and I opened my eyes with a questioning look. He just looked at me and giggled.
"I hope we can have another kiss that magical soon," I said, hoping this wasn't just some experiment he had planned.
"Oh, trust me, there will be many more in the future. I do not just kiss anyone," he said, looking at me and winking.
Tell me what you think, guys, please. I really enjoy the criticism. Tell me what you think about the characters, what you think will happen, and what you want to happen. (:
Tagged: waterfall
Our first destination on the cross-country roadtrip was Tahquamenon Falls in Michigan's Upper Peninsula. We'd left Traverse City early on Tuesday morning. After a quick stop in Petoskey to get some coffee and...
DiMenna–Nyselius Library
DiMenna–Nyselius Library is located on the campus of Fairfield University in Fairfield, Connecticut, USA.
History
In 1948, the library at Fairfield University, under the leadership of Librarian Robert Gaffney, boasted over 5,000 books and the panoramic view of all panoramic views of the Long Island Sound from the windows of Berchmans Hall. With a new Librarian, Robert Barrows, it moved in 1949 to two rooms in Xavier Hall. Father Francis A. Small was named Director of Libraries in 1952, a position he would hold for over two decades of great change. Father Small led the move in 1957 to Canisius Hall, where the library remained for a decade; during this move, the science library stayed in Xavier Hall. It was under the leadership of Father Small that the library began a microform collection, purchased its first electronic typewriter, and developed a procedure for duplicating catalog cards.
Groundbreaking for a library building took place in 1967, with the building opening the following year. The new building was planned with an estimated 20 years' worth of space to grow. With the opening of the new building in 1968, the library increased its capacity from 90,000 books to 300,000 books. It featured a smoking area, two typing rooms, and its first full-time reference librarian.
In 1971, the library was named Nyselius Library in honor of benefactors Gustav and Dagmar Nyselius. They were Swedish immigrants who had settled in Stamford and wanted to make a donation to Fairfield University to repay in part the kindness of their adoptive country. At the time of donation their gift was the largest ever given to Fairfield University.
In 1973, the library joined OCLC, an online cataloging service that provided access to a database as well as printed catalog cards. In 1974, Barbara Bryan, then Associate Director, was named University Librarian. The library added a media department in 1980, thanks to a grant from the Gladys Brooks Foundation. In 1982 it joined the Bibliomation consortium, which introduced barcodes and wands to replace handwritten sign-out slips for checking out books. The library's first computer lab opened in 1986, offering access to 8 Apple computers. Steady technological improvements continued to augment the library: a CD-ROM reference center in 1990, thanks to the Gladys Brooks Foundation; a CD-ROM LAN in 1991, thanks to grants from the E.L. Cord Foundation and the George I. Alden Trust; an online public access catalog in 1993; and a computer lab with 25 workstations in 1997.
In 1996, James Estrada became the University Librarian and took the lead on the library expansion and renovation project. After long and careful planning, July 1999 marked the groundbreaking ceremonies for the library expansion, an undertaking supported largely by a gift from alum Joseph A. DiMenna, Jr. '80. The project neared completion as classes started in the fall semester of 2001, under the leadership of Estrada and Director of Library Services Joan Overfield. In the fall of 2001, the DiMenna–Nyselius Library opened.
References
https://www.fairfield.edu/library/about/history/
External links
DiMenna-Nyselius Library
Fairfield University
Fairfield University Digital Archive @ DiMenna-Nyselius Library
DigitalCommons@Fairfield
Category:University and college academic libraries in the United States
Category:Fairfield University
Category:Libraries in Fairfield County, Connecticut
Category:Buildings and structures in Fairfield, Connecticut
Category:Library buildings completed in 1968
Category:1968 establishments in Connecticut
Biochemical characterization of a heterotrimeric G(i)-protein activator peptide designed from the junction between the intracellular third loop and sixth transmembrane helix in the m4 muscarinic acetylcholine receptor.
Muscarinic acetylcholine receptors (mAChRs) are G-protein coupled receptors (GPCRs) that are activated by acetylcholine released from parasympathetic nerves. The mAChR family comprises 5 subtypes, m1-m5, each of which has a different coupling selectivity for heterotrimeric GTP-binding proteins (G-proteins). m4 mAChR specifically activates the Gi/o family by enhancing the guanine nucleotide exchange factor (GEF) reaction with the Gα subunit through an interaction that occurs via intracellular segments. Here, we report that the m4 mAChR mimetic peptide m4i3c(14)Gly, comprising 14 residues in the junction between the intracellular third loop (i3c) and transmembrane helix VI (TM-VI) extended with a C-terminal glycine residue, presents GEF activity toward the Gi1 α subunit (Gαi1). The m4i3c(14)Gly forms a stable complex with guanine nucleotide-free Gαi1 via three residues in the VTI(L/F) motif, which is conserved within the m2/4 mAChRs. These results suggest that this m4 mAChR mimetic peptide, which comprises the amino acids of the mAChR intracellular segments, is a useful tool for understanding the interaction between GPCRs and G-proteins.
At the start of 2017, former Techland COO Pawel Zawodny launched a new indie studio called Strange New Things, with a team assembled from other former Techland employees as well as ex-staffers from IO Interactive and CD Projekt Red. The goal at the time was to create "something that comes from 'us'," Zawodny said, although the exact nature of that "something" wasn't made clear: The "Project" section of the studio's website was more of a placeholder than anything else.
But whatever that something was, it's apparently now off the table, and so for that matter is Strange New Things because CD Projekt announced today that it has acquired and re-christened the studio as CD Projekt Red Wrocław, to support the development of Cyberpunk 2077.
"Aside from their immense technological knowledge and artistic flair, the core team of CD Projekt Red Wrocław are just great people," CD Projekt Red boss Adam Badowski said. "CD Projekt Red is not a typical game developer—we put gamers, creative freedom and quality games above making business. These guys not only share this approach, but, much like the rest of the team, think that this attitude is essential to creating epic videogames."
Zawodny, who will head up the Wrocław studio, added, "We’re pretty hyped to be on the spearhead of this new office. We know Wrocław inside out and it’s an amazing place to make games. The team is strong, and I’m sure we have both the experience and the creative firepower to make Cyberpunk 2077 an even better game."
Unfortunately, the announcement of the new studio does not come with any news about Cyberpunk 2077 itself. I remain vaguely hopeful that we'll see something about it at E3, but the Cyberpunk Twitter account hasn't made a sound since that solitary "beep" several months ago, and CD Projekt's stony silence is as steadfast as ever. There is one way to get an inside track on what's going on, though: if you happen to live in Wrocław or don't mind moving, they're hiring.
CANBERRA — Julia Gillard entered Australian federal politics in 1998 railing against big business and opportunism, but has since displayed a pragmatic streak that her Labor Party supporters hope will help correct the mistakes that led to the downfall of her predecessor.
Ms. Gillard, 48 years old, has always stood out in the male-dominated world of national politics and was tipped as a future leader years before she ousted Prime Minister Kevin Rudd in an uncontested party ballot Thursday morning. A key ally in Mr. Rudd’s landslide win over John Howard’s conservative government in 2007, she takes the helm with Labor trailing in opinion polls and smarting from a voter backlash on a number of issues, most prominently a new “super profits” tax that has enraged the mining industry.
Where Mr. Rudd was seen as unable to delegate, a shortcoming that eventually cost him the support of Labor’s factional chiefs, Ms. Gillard is viewed as a consensus politician. Among her first acts as prime minister was to extend an olive branch to the mining industry, canceling government advertisements supporting the new levy and saying she wanted to negotiate.
Victoria Premier John Brumby, who once employed Ms. Gillard as his chief of staff, told the Australian Broadcasting Corp. that he also expected her to revive an emissions-trading proposal that was shelved by Mr. Rudd earlier this year to the dismay of the Labor faithful.
Ms. Gillard’s position in Labor’s left faction means she may also come under pressure to loosen policies on such social issues as asylum seekers, gay-marriage legislation and Aboriginal welfare restrictions in the Northern Territory. But the conservatives who delivered her victory are likely to stymie any change deemed too unpopular.
Born in Wales — her father was a psychiatric nurse — Ms. Gillard came to Australia as a four-year-old and grew up in Adelaide, the capital of South Australia. She moved to Melbourne in her 20s and joined a law firm where she represented employees in workplace disputes.
She forged close links with the labor movement and with the union-aligned Labor Party, working with Mr. Brumby, among others, before being elected to Parliament in 1998 to represent gritty Lalor in Melbourne’s western suburbs.
Analysts were divided on how these experiences may translate. RBC Capital Markets economist Su-Lin Ong said Ms. Gillard’s union roots make her a backer of government intervention and spending.
But Australian Council of Trade Unions Secretary Jeff Lawrence said the new prime minister is nobody’s puppet. “I don’t think Julia Gillard is controlled by anyone,” he said. “I’ve known her a long time, and she’s a very independent, forceful person. She will set the agenda for the government and the country.”
In opposition, Ms. Gillard took the portfolio of workplace relations, where she tangled with the current opposition leader, Tony Abbott, over industrial overhauls. Her party’s opposition to those changes helped propel Labor to victory in 2007. Polls showed many voters preferred her as prime minister to Mr. Rudd.
After the election, she became deputy prime minister and was part of the Rudd inner circle that pushed through massive government stimulus spending, which has been credited with helping Australia escape the worst of the global financial crisis. Stimulus spending in her portfolio of education has been criticized for being wasteful and poorly targeted, representing the major hiccup in her ministerial career.
All the while, Ms. Gillard has faced scrutiny in Canberra because of her status as an unmarried, childless woman — with one conservative political opponent accusing her of being “deliberately barren” and not understanding families.
University of Melbourne politics lecturer Lauren Rosewarne said this would only intensify with her election to the leadership. “Stay tuned,” she said, “for the inevitable — and archaic — media obsession with discussing the PM’s hairstyle, unweddedness and hairdresser boyfriend!”
Campbell's target article is a stimulating attempt to extend our understanding of sex differences in risk-taking behaviors. However, Campbell does not succeed in demonstrating that her account adds explanatory power to those (e.g., Daly & Wilson 1994) previously proposed. In particular, little effort was made to explore the causal links between survival (staying alive) and reproduction.
Solution synthesis, conformational analysis, and antimicrobial activity of three alamethicin F50/5 analogs bearing a trifluoroacetyl label.
We prepared, by solution-phase methods, and fully characterized three analogs of the membrane-active peptaibiotic alamethicin F50/5, bearing a single trifluoroacetyl (Tfa) label at the N-terminus, at position 9 (central region), or at position 19 (C-terminus), and with the three Gln at positions 7, 18, and 19 replaced by Glu(OMe) residues. To add the Tfa label at position 9 or 19, a γ-trifluoroacetylated α,γ-diaminobutyric acid (Dab) residue was incorporated as a replacement for the original Val(9) or Glu(OMe)(19) amino acid. We performed a detailed conformational analysis of the three analogs (using FT-IR absorption, CD, 2D-NMR, and X-ray diffraction), which clearly showed that Tfa labeling does not introduce any dramatic backbone modification in the predominantly α-helical structure of the parent peptaibiotic. The results of an initial solid-state (19)F-NMR study on one of the analogs favor the conclusion that the Tfa group is a very promising reporter for the analysis of peptaibiotic-membrane interactions. Finally, we found that the antimicrobial activities of the three newly synthesized analogs depend on the position of the Tfa label in the peptide sequence.
Didier Deschamps will not lose his job if France are knocked out of the World Cup by Argentina on Saturday, the president of the French Football Federation (FFF) has told L'Equipe.
Deschamps, appointed in 2012, led France to the 2014 World Cup quarterfinals, losing to eventual winners Germany, while they were runners-up at Euro 2016.
France's 1998 World Cup and Euro 2000-winning captain has steered the squad through the group stage in Russia, finishing top of Group C, but has tipped former teammate Zinedine Zidane to take over at some point.
He signed a contract extension until 2020 last year, and FFF president Noel Le Graet said he would remain in the role whatever happened.
"I'm convinced it'll go well and Deschamps -- I'm going to nip things in the bud immediately -- has a contract until 2020. He'll be there until 2020," Le Graet said.
"I am in the habit of respecting contracts. Deschamps signed through to 2020, he'll be there whatever happens."
Deschamps has been criticised by former France teammates who have been disappointed with his team's style of play, notably in the dull 0-0 draw with Denmark on Tuesday -- the World Cup's first goalless game.
Frank Leboeuf claimed the squad "have not understood what football represents in our country", while Christophe Dugarry said: "I can't stand watching this France team play any more."
But Le Graet said: "France, for the moment, has not disappointed anyone. France is going to win. That said, Argentina is a great football country, you shouldn't kid yourself.
"But France is a very complementary team which should comfortably be able to trouble them. The World Cup is starting now. It's a match that is going to test our worth, and France is ready."
// Copyright (c) 2009-2016 The Bitcoin Core developers
// Distributed under the MIT software license, see the accompanying
// file COPYING or http://www.opensource.org/licenses/mit-license.php.
#ifndef BITCOIN_CORE_IO_H
#define BITCOIN_CORE_IO_H
#include <string>
#include <vector>
class CBlock;
class CScript;
class CTransaction;
struct CMutableTransaction;
class uint256;
class UniValue;
// core_read.cpp
CScript ParseScript(const std::string& s);
std::string ScriptToAsmStr(const CScript& script, const bool fAttemptSighashDecode = false);
bool DecodeHexTx(CMutableTransaction& tx, const std::string& strHexTx, bool fTryNoWitness = false);
bool DecodeHexBlk(CBlock&, const std::string& strHexBlk);
uint256 ParseHashUV(const UniValue& v, const std::string& strName);
uint256 ParseHashStr(const std::string&, const std::string& strName);
std::vector<unsigned char> ParseHexUV(const UniValue& v, const std::string& strName);
// core_write.cpp
std::string FormatScript(const CScript& script);
std::string EncodeHexTx(const CTransaction& tx, const int serializeFlags = 0);
void ScriptPubKeyToUniv(const CScript& scriptPubKey, UniValue& out, bool fIncludeHex);
void TxToUniv(const CTransaction& tx, const uint256& hashBlock, UniValue& entry);
#endif // BITCOIN_CORE_IO_H