Survey material - Legitimacy and Fallibility in Child Welfare Services


The four-country research project 'Legitimacy and Fallibility in Child Welfare Services' has developed and launched a number of surveys about decision making in the child welfare system in recent years (2014-2016). Surveys have been sent to different levels within the child welfare systems in Norway, Finland, England and the U.S. (California), including a representative sample of the general population, child welfare social workers and, most recently, court level child protection decision makers. In addition, an experimental survey was distributed to the general population in Austria, Estonia, Ireland, Norway and Spain in 2016. Below you will find information about the development of the surveys, the samples, recruiting, permits, and the structure and format of the surveys, as well as some other methodological reflections. You will also find supplementary documentation for some of the research articles that have been published from the project.

Population surveys

POPULATION SURVEY 1 (2014)

Funding, sample and translation

The aim of the population survey was to ask a representative sample of the population in Norway, England, Finland and California (USA) about their opinions concerning primarily 1) trust in the child welfare system, 2) evaluations of pursuing adoption versus foster home placement, and 3) assessments regarding a vignette about two children possibly being neglected. The population study was funded by the Norwegian Research Council, and data were collected from a representative sample of the population in England, Finland, Norway and California (U.S.) (N=4003 respondents, approx. 1000 in each country).

Four researchers developed the survey questions and vignette in English, which were then translated into Norwegian, Finnish and American English. The translations were checked by independent experts and tested on small samples of professionals and citizens.

Privacy

For this type of survey research we do not have to report the project to the Norwegian Data Protection Office for Research, since the respondents are anonymous, and the data collection was conducted by an external firm, Norstat. 

Representative samples

To secure representativeness, two approaches are used. For all surveys, the data collection firm secures country representativeness by establishing regionally representative quotas or groups. Data collection proceedings are adjusted based on the responses received: if there are too few interviews in some groups, extra interviews are collected for those groups, so that the result is a representative sample. For Norway, the survey material is also weighted by the data collection company. With a known distribution of the population on three dimensions, sex, age and geography, survey responses are weighted so that population surveys are nationally representative. The reason for using weights is to make the sample distribution as similar as possible to the real distribution in the given population. The weights are represented by a single variable in the datasets named “weight”, and the figures will naturally vary for each respondent. The closer the number in the “weight” column is to 1, the better, as this indicates that the distribution in the survey sample is approximately equal to the distribution in the population. For instance, men in the age group 18 to 29 years in Northern Norway have a weight of 1.67, which is considered to be good. Weight values over 5 indicate a lack of responses in that group, and in this example, extra interviews would be needed in that particular age group.
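
To illustrate how such a weight variable is typically applied in analysis, here is a minimal sketch in Python/pandas. It assumes a data frame with the “weight” column described above and a hypothetical binary attitude item named trust_cws; the data and the variable name are illustrative, not taken from the actual dataset.

```python
import pandas as pd

# Hypothetical example data: one row per respondent, with the survey
# weight and an illustrative binary attitude item (1 = high trust).
df = pd.DataFrame({
    "trust_cws": [1, 0, 1, 1, 0],
    "weight":    [1.67, 0.85, 1.10, 0.95, 1.43],
})

# Unweighted estimate: every respondent counts equally.
unweighted = df["trust_cws"].mean()

# Weighted estimate: respondents from under-represented groups
# (weight > 1) count for more, over-represented groups for less.
weighted = (df["trust_cws"] * df["weight"]).sum() / df["weight"].sum()

print(f"Unweighted: {unweighted:.3f}, weighted: {weighted:.3f}")
```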

Distribution

The population survey was distributed in all four countries during the period May 15th-22nd, 2014. It was organized and distributed through the data collection company Norstat. In Finland, Norstat arranged an ‘ad hoc’ questionnaire for its representative panel, and in England and the U.S., Norstat programmed the survey and partnered with Research Now for distribution. In Norway, a weekly omnibus was run, in which a representative panel was asked, based on quotas and weighting, for a representative sample of the population. An omnibus survey works differently than most surveys, since it covers a range of topics of interest. It has either multiple sponsors, or is designed to generate data useful to a broad segment of the social science community rather than to answer one particular research question (Bachman & Schutt, 2014). There are both strengths and weaknesses in using an omnibus to gather survey data. On the positive side, it is a cost- and time-effective way to reach a large number of people one might not otherwise have been able to reach, and the data material can easily become a large-scale representative sample that can be used for generalizable results. The weaknesses are related to the fact that an independent company runs the survey: the company is in charge of the structure and format of the survey itself and caters to several different clients at the same time. Thus, one cannot ask complicated questions or a great number of questions. It can also be the case that other topics in the omnibus interfere with the “vibe” of the questions the researcher has included. For the background questions, we used standard formulations provided by Norstat, who controlled the data collection process.

Comparative challenges

There were two variables for which it was challenging to find a comparable basis in the four countries. Firstly, political orientation, asked in the form “If there were to be a general election tomorrow, for which party would you vote?” We conferred with survey templates from the ESS, the ISSP and Norstat, and with original voter ballots from the California election in 2012, in order to adequately include the parties that would be on the ballots in each country, and in the “most common” order. Using common, locally embedded conceptions of the political scale from left, center and right, and of how the parties roughly are placed and place themselves, we placed the parties in each country on a political left-to-right scale variable. For the analysis, political orientation was then coded as left-wing (1), centrist (2), and right-wing (3), and was also dummy coded, with left-wing coded 1, and centrist and right-wing coded 0. This variable was challenging to construct as a general measure for the four countries, so bearing the different political climates and national variations in mind, it must be applied with this awareness.

Secondly, for the income variable, similar difficulties were encountered, with financial systems, currencies and fiscal policies varying across the four countries. The national samples were asked about their annual income in their local currencies. The results show that the median household income levels differ, with Norway on top, followed in descending order by California, Finland and the UK (SSB, 2014; Eurostat, 2013, 2012; US Census Bureau, 2012). For the analysis we have coded income in three categories on one variable: low income (1), average income (2), and high income (3). The split into the three categories was based on each country's median income, so that the variable is a relative, country-embedded measurement, and not based on a universal and equal amount for all countries. The income variable was also dummy coded, with high income coded as 1, and average income and low income coded as 0.
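
As an illustration of the recoding described for these two variables, a minimal pandas sketch follows. The party map, income figures and category cut points are invented assumptions (the text above specifies only that the split was based on each country's median income), and all column names are hypothetical.

```python
import pandas as pd

# Hypothetical mapping from party choice to the left-center-right scale;
# each country would have its own, locally grounded map.
party_to_scale = {"SV": 1, "Ap": 1, "Sp": 2, "V": 2, "KrF": 2, "H": 3, "FrP": 3}

df = pd.DataFrame({
    "party":  ["Ap", "H", "Sp", "FrP"],
    "income": [310_000, 820_000, 450_000, 530_000],  # annual, local currency
})

# Political orientation: left-wing (1), centrist (2), right-wing (3) ...
df["pol_orientation"] = df["party"].map(party_to_scale)
# ... and the dummy: left-wing = 1, centrist and right-wing = 0.
df["left_dummy"] = (df["pol_orientation"] == 1).astype(int)

# Income in three relative categories around the national median
# (the cut points here are invented for the example).
median_income = 500_000
df["income_cat"] = pd.cut(
    df["income"],
    bins=[0, 0.75 * median_income, 1.25 * median_income, float("inf")],
    labels=[1, 2, 3],  # 1 = low, 2 = average, 3 = high
)
# Dummy: high income = 1, average and low income = 0.
df["high_income_dummy"] = (df["income_cat"] == 3).astype(int)
```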

 

POPULATION SURVEY 2 (2016)

Funding and sample

The aim of the population survey was to ask a representative sample of the population in Austria, Norway and Spain about their attitudes towards 1) the use of corporal punishment, and 2) reporting corporal punishment to the CWS. The respondents were asked to assess these issues based on an experimental vignette about a boy who had been exposed to mild corporal punishment by his parents. The population study was funded by the Norwegian Research Council, and data were collected by the data collection firm Norstat from a representative sample of the population in Austria (N=1000), Norway (N=1002), and Spain (N=1000) (total N=3002 respondents).

Developments, translations and privacy

Researchers from the research team of 'Legitimacy and Fallibility in Child Welfare Services' developed the survey questions and vignette in Norwegian. These were thereafter translated into English, Spanish and German by a professional translation company, Amesto, and subsequently reviewed and validated by local researchers in the respective countries. These reviewers were native speakers in our network with knowledge and competence in the relevant field and in local conditions and terminology. After consulting these external sources, the vignettes were revised. The Spanish vignette and associated questions were also translated back into Norwegian to make sure that the content of the vignette had remained unchanged through the translation processes. The English version of the vignette, used for reference purposes, has also been checked by native English speakers, both UK and US. In a final step, the vignette and the associated questions were sent for validation to the same external sources, before being sent to the data collection company Norstat.

Two additional background questions regarding the respondent’s religious affiliation and migration background were translated by the authors. In the case of Austria, these questions were added after the first consultation with external sources, and received a second review from a local researcher. These questions can be perceived as sensitive, so local survey companies and researchers were consulted in order to avoid (local/national) reluctance to respond to the survey.

The research project is reported to the Norwegian Data Protection Office for Research.

Weighting of survey samples

To secure representativeness, the survey material is weighted by the data collection company. With a known distribution of the population on three dimensions, sex, age and geography, survey responses are weighted so that population surveys are nationally representative. The reason for using weights is to make the sample distribution as similar as possible to the real distribution in the given population. The weights are represented by a single variable in the datasets named “weight”, and the figures will naturally vary for each respondent. The closer the number in the “weight” column is to 1, the better, as this indicates that the distribution in the survey sample is approximately equal to the distribution in the population. For instance, men in the age group 18 to 29 years in Northern Norway have a weight of 1.67, which is considered to be good. Weight values over 5 indicate a lack of responses in that group, and in this example, extra interviews would be needed in that particular age group.

Distribution

The population survey was distributed during the period May 31 – July 13, 2016. It was organized and distributed through the data collection company Norstat.

In Austria and Spain, Norstat arranged an ‘ad hoc’ questionnaire for its representative panel. In Norway, a weekly omnibus was run, in which a representative panel was asked, based on quotas and weighting, for a representative sample of the population. An omnibus survey works differently than most surveys, since it covers a range of topics of interest. It has either multiple sponsors, or is designed to generate data useful to a broad segment of the social science community rather than to answer one particular research question (Bachman & Schutt, 2014). There are both strengths and weaknesses in using an omnibus to gather survey data. On the positive side, it is a cost- and time-effective way to reach a large number of people one might not otherwise have been able to reach, and the data material can easily become a large-scale representative sample that can be used for generalizable results. The weaknesses are related to the fact that an independent company runs the survey: the company is in charge of the structure and format of the survey itself and caters to several different clients at the same time. Thus, one cannot ask complicated questions or a great number of questions. It can also be the case that other topics in the omnibus interfere with the “vibe” of the questions the researcher has included. For the background questions (other than those specifically mentioned above), we used standard formulations provided by Norstat, who controlled the data collection process.

Comparative challenges

There were two variables for which it was particularly challenging to find a comparable basis in the three countries. Firstly, for the income variable, difficulties were encountered because financial systems, currencies and fiscal policies vary across the three countries. The national samples were asked about their monthly income in their local currencies. The results show that the median household income levels differ, with Norway on top, followed in descending order by Austria and Spain. For the analysis, we have coded income in three categories on one variable: low income (1), average income (2), and high income (3). The split into the three categories was based on each country's top and bottom 25% monthly income, so that the variable is a relative, country-embedded measurement, and not based on a universal and equal amount for all countries. To secure accuracy, we consulted the national statistics bureau in Norway on how to best define high, average and low income, and how to make the split of the income variable. We have compared this with the median income in the population, to make sure that this is an adequate way to categorize groups against the median. Local researchers studied income levels, medians and definitions for statistical usage, and conferred with survey templates from the ESS and the ISSP (Statistik Austria, 2016; SSB, 2014; EFF Bank of Spain Survey of 2011). The income variable was also dummy coded, with high income coded as 1, and average income and low income coded as 0.
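
Unlike the median-based split used in the 2014 survey, this split is quartile-based. A minimal sketch of such a coding, again with invented data, might look as follows.

```python
import pandas as pd

# Hypothetical monthly incomes in local currency for one country sample.
income = pd.Series([1400, 2100, 2600, 3200, 5400, 1900, 4800, 2900])

# Bottom 25% = low (1), middle 50% = average (2), top 25% = high (3).
q25, q75 = income.quantile([0.25, 0.75])
income_cat = pd.cut(
    income,
    bins=[-float("inf"), q25, q75, float("inf")],
    labels=[1, 2, 3],
)

# Dummy coding: high income = 1, average and low income = 0.
high_income_dummy = (income_cat == 3).astype(int)
```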

Secondly, due to different school systems and somewhat different wording of the question in each country, operationalizing the education variable entailed some difficulties in constructing categories for intuitive comparison. We engaged in numerous joint cross-country discussions on this matter, and, conferring with the data collection company while looking to the formal structures in the respective countries, we coded the variable as a dichotomous variable with the values 1=Higher education and 0=No higher education.

The migration status variable also received special attention, in order to ensure that the definitions used are applicable within the specific countries and also comparable between countries. The three countries defined migration status using the same set of conditions for statistical categorization, so this variable was dummy coded as 1=migrant and 0=non-migrant. The migrant category included both foreign-born persons with two foreign-born parents and people born in the given country with two foreign-born parents.
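
Expressed as a boolean condition, this definition reduces to requiring two foreign-born parents regardless of the respondent's own birthplace. The sketch below makes the two branches explicit; the column names are hypothetical.

```python
import pandas as pd

df = pd.DataFrame({
    "foreign_born":           [True, False, False, True],
    "n_foreign_born_parents": [2, 2, 0, 1],
})

# Migrant (1): foreign-born with two foreign-born parents, OR born in the
# country with two foreign-born parents; everyone else is non-migrant (0).
two_fb_parents = df["n_foreign_born_parents"] == 2
df["migrant"] = (
    (df["foreign_born"] & two_fb_parents)
    | (~df["foreign_born"] & two_fb_parents)
).astype(int)
# Note that the two branches together simplify to two_fb_parents alone.
```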

 

POPULATION SURVEY 2 - PART 2 (2016)

*The description of Population survey 2 - part 2 was updated 17.09.2019*

Funding and sample

The aim of the population survey was to ask a representative sample of the population in Austria, Estonia, Ireland, Norway, and Spain about their attitudes towards 1) the use of corporal punishment, and 2) reporting corporal punishment to the CWS. The respondents were asked to respond to a vignette about a boy who had been exposed to mild corporal punishment by his parents. The migration status of the family in the vignette was experimentally varied. The population study was funded by the Norwegian Research Council, and data were collected from a representative sample of the population in Austria (N=1000), Estonia (N=1069), Ireland (N=1000), Norway (N=1002), and Spain (N=1000) (total N=5071 respondents).

Development, translation and privacy

The research team "Legitimacy and fallibility" developed the survey questions and vignette in Norwegian, which then were translated to English, and from this to Austrian (German), Estonian, and Spanish. We also did an adaption of the English version into Irish English. For this we used the Amesto, the official translation services provider for UiB for the initial translation. This translation was then reviewed and validated by local researchers in the respective countries with specific knowledge within the field. For this type of survey research, we do not have to report the project to the Norwegian Data Protection Office for Research. We receive anonymous data from an external firm. 

Weighting of survey samples

To secure representativeness, the survey material is weighted by the data collection company. With a known distribution of the population on three dimensions, sex, age and geography, survey responses are weighted so that population surveys are nationally representative. The weights are represented by a single variable in the datasets named “weight”, and the figures will naturally vary for each respondent. When the “weight” is 1, the distribution in the survey sample is approximately equal to the distribution in the population. For instance, men in the age group 18 to 29 years in Northern Norway have a weight of 1.67, which is considered to be good. Weight values over 5 indicate a lack of responses in that group, and in this example, extra interviews would be needed in that particular age group.

Distribution

The population survey was distributed in two sequences during a four-month period in 2017. For Austria, Norway and Spain, the survey was distributed during the period May 31 – July 13, and for Estonia and Ireland during the period September 13 – September 28. Data collection was organized and distributed through the data collection company Norstat.

In Austria, Estonia and Spain, Norstat arranged an ‘ad hoc’ questionnaire for its representative panel. In Norway, a weekly omnibus was run, in which a representative panel was asked, based on quotas and weighting, for a representative sample of the population. In Ireland, Norstat programmed the survey and partnered with Research Now for distribution.

An omnibus survey works differently than most surveys, since it covers a range of topics of interest. It has either multiple sponsors, or is designed to generate data useful to a broad segment of the social science community rather than to answer one particular research question (Bachman & Schutt, 2014). There are both strengths and weaknesses in using an omnibus to gather survey data. On the positive side, an omnibus is a cost- and time-effective way to reach a large number of people who might not otherwise have been reached, and the data material can easily become a large-scale representative sample that can be used for generalizable results. The weaknesses are related to the fact that an independent company runs the survey: the company is in charge of the structure and format of the survey itself and caters to several different clients at the same time. Thus, other topics in the omnibus may interfere with the “vibe” of the questions the researcher has included. Furthermore, one cannot ask complicated questions or a great number of questions. For the background questions, we used the standard formulations provided by Norstat.

Comparative challenges

There were four variables for which it was especially challenging to find a comparable basis in the five countries:

Firstly, for the income variable, the same difficulties were encountered, with financial systems, currencies and fiscal policies varying across the five countries. The national samples were asked about their monthly income in their local currencies. The results from our survey show that the median household income levels for our respondents differ, with Norway on top, followed in descending order by Austria, Ireland, Spain and lastly Estonia. For Austria, Norway and Spain, we have coded income in three categories on one variable: low income (1), average income (2), and high income (3). The split into the three categories was based on each country's top and bottom 25% monthly income, so that the variable is a relative, country-embedded measurement, and not based on a universal and equal amount for all countries. To secure accuracy, we consulted the national statistics bureau in Norway on how to best define high, average and low income, and how to make the split of the income variable. We have compared this with the median income in the population, to make sure that this is an adequate way to categorize groups against the median. Local researchers studied income levels, medians and definitions for statistical usage, and conferred with survey templates from the ESS and the ISSP (Statistik Austria, 2016; SSB, 2014; EFF Bank of Spain Survey of 2011). The income variable was also dummy coded, with high income coded as 1, and average income and low income coded as 0. For Ireland and Estonia, another approach had to be applied due to a lack of sufficient information for splitting income into a top and bottom 25%. Nonetheless, the coding for these two countries also relied on statistical reports and central tendencies, and income was categorized into three categories based on the median income level in the country.

Secondly, due to different school systems and somewhat different wording of the question in each country, operationalizing the education variable entailed some difficulties in constructing categories for intuitive comparison. We engaged in numerous joint cross-country discussions on this matter, and, conferring with the data collection company while looking to the formal structures in the respective countries, we coded the variable as a dichotomous variable with the values 1=Higher education and 0=No higher education (‘higher education’ means tertiary education, i.e. everything beyond high school).

Thirdly, city size was considered to a certain degree to be country-relative, as the countries in question differ both in size and in population density. This was solved by conferring with the local researchers, looking to previously applied categorizations, and studying the relative sizes of the countries’ respective cities.

Fourthly, the variable with information regarding immigration background for the Estonian sample did not correspond sufficiently with the other samples where this information was gathered, so we did not include it in the regression analysis.

Social worker survey

Funding and sample

This study was funded by the Norwegian Research Council as part of our research project on decision-making in child protection in England, Finland, Norway and California in the US. The study includes a total sample of 1027 child protection workers from the four countries, who completed an online survey with questions about how they would involve children and parents in preparations for a care order, how time is spent on various parts of the care order process, how independent experts are used and evaluated, and their opinions of the quality of decisions they make throughout a care order process.

National adaptation

The survey was constructed in British English, and translated into Norwegian and Finnish. When translating the questions, attention was given to the terms relevant to the Norwegian and Finnish systems. Sometimes there was a need to change the original term to fit the national context (e.g. in the U.S.: care order proceedings – dependency court proceedings). These changes were always discussed in the research team. The surveys were tested with several social workers to ensure that the questions were understandable and relevant.

Survey tools

The survey for the child welfare workers was distributed using the online survey tool Questback. In Norway, the survey was distributed via the respondents’ e-mail addresses, and in the three other countries via a non-traceable auto-generated link. The answers were anonymous.

Feedback

At the end of the survey, we asked the respondents if they had any comments on the survey itself, in an open-ended answer box, and only a very few commented. The English and U.S. social workers hardly commented on the survey at all. The comments from the Norwegian and Finnish social workers concerned the case of Alex (it was too simple), the questions on time use (which can be difficult to know) and authorization in particular, and the answer alternatives, which could be too limiting. The conclusion is, however, that we have no reason to believe that the workers who answered the survey in general had any problems with the questions, the answer alternatives, or the vignette.

 

NORWAY

The sample and survey dates

In Norway there are approximately 4017 people employed as child welfare workers (SSB, 2011). They are considered the front-line decision makers, and in Norway they are employed at child welfare offices in municipalities of varying sizes and have different educational backgrounds (see Skivenes 2011, 2014, for details on the Norwegian child welfare system). Child welfare workers can be difficult to study because they have significant work pressures with tight deadlines, which squeeze time for research. To recruit study participants, we approached the workers' union Fellesorganisasjonen (FO), which allowed the researchers to e-mail all child protection members directly. These members were mostly educated as "sosionom" or "barnevernspedagog"; these professions make up about 80 % of the total pool of municipally employed child welfare workers (SSB, 2009). Around 90 % of the child welfare workers organized through FO are women. The Norwegian survey was launched Tuesday afternoon, Feb 18th 2014, via e-mail to 1513 FO members, and we closed the survey March 11th, 2014. We sent out three survey reminders, two in February and one in early March. We received responses from 23 workers saying that they were not in our target group (i.e. not working in the municipal child welfare service agencies). Thus, the survey was effectively distributed to 1490 workers, but we do not know whether all of these were front-line child welfare workers. The response rate is about 35 %, and is thus a conservative estimate. In the survey, the first question aimed to identify our target group, i.e. child welfare workers with specific work experience, and we have thus controlled for the inclusion of the target group answering our questions. In Norway we have answers from 454 child protection workers. To secure a similar approach to child welfare workers across countries, we developed the survey questions and the vignette in English. When we had reached agreement on the actual phrasing of the key terms, we translated the survey back and forth between US English, Norwegian and Finnish.

Response rate

The share of municipally employed child welfare workers who are organized in FO is around 37 per cent of all child welfare workers (waiting for confirmation on this from FO). 81 % of the 454 respondents, 370 people, stated that they were currently employed at a municipal child welfare agency, working with care orders and care order preparations and investigations; these were our primary target group.

Privacy

The survey was reported to the Norwegian Data Protection Office for Research and was given the project number 36845, titled Beslutningspraksis i barnevernet (Child protection decisions). The Privacy Ombudsman considered that the treatment of data met the requirements of the Personal Data Act, and that the processing of personal data could proceed.

The data material

As with any data gathering method, using survey questionnaires comes with pros and cons. Conducting a survey using Questback allows access to a large population of child protection workers, and is a quick and inexpensive way to obtain a great deal of information, and thus possibly to make generalizations based on the findings. The format is very simple, and allows for many questions to be incorporated and distributed. However, not everyone uses e-mail, or has the patience for, or liking of, online surveys. Our survey population consists of workers who are organized in the union FO, and thus we by definition have a biased sample. There will also always be a distance between the researcher and the respondents, and one cannot verify that the questions were interpreted as intended.

 

FINLAND

Distribution

In Finland, the trade union for professionals working in social welfare, ‘Talentia’, sent the link to the survey to its members working in public social welfare. As trade union participation is high in Finland and this is the main trade union for social workers, this was a very practical way to reach a geographically broad sample of social workers. The link was sent to all members who, according to the registered information, worked in municipal social welfare in either permanent or temporary posts as social workers or social work managers.

Representativity

However, as social workers working with child protection do not necessarily constitute a distinct group among social workers in public social welfare (estimated at 3300 social workers), and there is no estimate of the number of social workers in child protection, it is not possible to give a fair estimate of the response rate in Finland. In addition, this way of gathering data does not guarantee that the respondents were really working in child protection. The survey took place during a very difficult period for child protection in Finland, and it is therefore a considerable achievement that so many responses were given. Social workers sent messages to the researcher to say how important it is to study care-order decision-making. In the end, 340 Finnish social workers responded to the survey, and among these there were 209 respondents (61%) with experience in care order proceedings.

Ethics

There was no ethical review of this sub-project in Finland as the ethics review system in Finland does not require studies of this kind to have an ethical review.

 

ENGLAND

Distribution and recruiting process

In England, the survey for social workers was initially distributed via two representative bodies for social workers: the British Association of Social Workers (BASW) and the College of Social Work (TCSW). Uptake was slow, and a more successful approach was a ‘snowball sample’ of social workers known to the School of Social Work at the University of East Anglia. BASW is a long-established professional association for social workers; TCSW is a more recent body, only formally established in 2012. It was set up to promote the professionalism and professional standing of social work. Social workers are not required to be members of either body, and both organisations include social workers from other service sectors as well as children’s services; BASW also includes social workers from the other countries of the UK. Membership of BASW is about 15,000 (September 2013: BASW annual report 2014). Membership of TCSW was about 6,000 in May 2013 (TCSW membership survey, 2013).

The survey was first advertised to members of BASW, via their weekly email newsletter, at the beginning of March 2014. By the end of March, after one reminder, there were just 18 responses. An incentive was then introduced: ten shopping vouchers worth £50 each, to be drawn from all respondents. The questionnaire was advertised again, with the incentive, via BASW, the Community Care website (a very popular, national free-access website for social workers), and TCSW. By the middle of May 2014, there were 66 responses.

At the beginning of June 2014, another approach was used: sending an email to social workers known to the School of Social Work at the University of East Anglia, with a request to complete the questionnaire and/or forward the message to any colleagues the recipient thought might be interested. This time the incentive was a £10 shopping voucher for the first fifty to complete the questionnaire. This produced the best response, with 68 questionnaires completed in two days. The final number of questionnaires completed in England was 132, of whom 103 said they were currently responsible for preparing cases for care proceedings.

Representativity

Government statistics indicate that there are almost 25,000 child and family social workers in England (DfE, 2014: Children's Social Work Workforce: Key numbers as at September 2013), but these do not all work in teams which do court work. Given the methods of recruitment, it is not known how many social workers received an invitation to complete the questionnaire, and it is therefore not possible to calculate a response rate for England.

Ethics

Ethical approval for the English part of the social worker survey was given by the Research Ethics Committee of the School of Social Work at the University of East Anglia.

 

USA (California)

Funding and recruiting

Child welfare managers in each of ten California counties surrounding the San Francisco Bay were invited to allow their staff to participate in the study.  All agreed to allow their staff’s inclusion in the study.  In all but one of these counties a research manager or an executive assistant to the child welfare manager distributed the link to the on-line survey, along with a brief note of introduction from Professor Berrick.  In one of the counties, the e-mail addresses for the nine eligible staff members were sent directly to Professor Berrick who contacted them herself. 

Requests for participation were sent once per week over the course of four weeks. 

Potential respondents in all but two counties were informed that they would receive a $20.00 gift card as thanks for participation, should they choose to reveal their name and contact information.  In two counties, the workers’ union allowed the survey to be distributed only if the gift card incentive was removed. 

The sample and survey dates

A total of 260 child welfare workers were invited to participate. These staff included only those working at the “front end” of the child protection system in the selected counties. The units in which staff worked were variously called Emergency Response, Dependency Investigations, and Court Investigations. These are the staff who conduct assessments and investigations following referrals for child maltreatment. They also include staff who collect evidence and information to support decision making that might result in recommendations to the juvenile court.

Sample recruitment began in some counties on April 15, 2014.  Various counties joined the project at different times and thus recruitment continued until June 30, 2014. 

101 child welfare workers responded to the on-line survey (a 38 % response rate). After removing those who indicated that they did not have responsibilities relating to child removal, the final sample included 90 staff.

Privacy

Prior to distributing the survey, a protocol was submitted to the Committee for the Protection of Human Subjects at U.C. Berkeley (Protocol ID# 2013-02-5264) and was approved on April 8, 2014. 

Survey tools

The researchers developed the questions for the survey.  Considerable time was spent discussing the meaning of each question in each country-context.  Questions were then translated into American English and back-translated into the languages of the other countries represented in the study.

 

Court level survey

Funding and sample

The survey to the court level decision makers was also funded by the Norwegian Research Council as part of the research project on child welfare systems and decision making across the four aforementioned countries.

National adaptation

The survey was constructed in British English, and translated to Norwegian and Finnish. When translating the questions, attention was given to the terms relevant to the Norwegian and Finnish systems. Sometimes there was a need to change the original term to meet the national contexts (e.g. in Finland: independent expert – multiprofessional teams). These changes were always discussed in the research team. The surveys were tested with several judges so that the questions were understandable and relevant.

Survey tools

The survey for the judges was distributed using the online survey tool Questback. In Norway, the survey was distributed via the respondents’ e-mail addresses, and in the three other countries via a non-traceable auto-generated link. The answers were anonymous.

 

NORWAY

The system

The first legal decision-making body within the Norwegian child welfare system is the County Social Welfare Board, “Fylkesnemnda”. The “court” session is normally set with three members: the County Board leader, an expert member and a lay member. The votes of these members are equally weighted; however, the Board leader is the administrative head of the case. Through the Central Unit for the County Boards, and a fruitful dialogue with employee Monica Johnsen, we gained access to the entire population of decision makers in the County Boards nationwide within the three pools of judges. We received lists with full names, phone numbers and e-mail addresses. The survey was sent out to 74 County Board leaders on December 2nd, 2014, and 55 %, 41 leaders, responded. The survey was sent out to 437 expert members (435 via e-mail and 2 via mail) on December 12th, 2014, of whom 247 responded, giving a response rate of 56.5 %. The survey was distributed to a total of 2451 lay members (2450 via e-mail and 1 via mail) on December 16th, 2014, and returned by 1348, giving a response rate of 55 %. We sent out weekly reminder e-mails: the Board leaders received six reminders, the expert members five and the lay members four, since the survey was sent out to the different pools at weekly intervals and the sample sizes varied.

Privacy

The survey was reported to the Norwegian Data Protection Office for Research and was given the project number 36845, titled Beslutningspraksis i barnevernet (Child protection decisions). The Privacy Ombudsman considered that the treatment of data met the requirements of the Personal Data Act, and that the processing of personal data could proceed.

Specifying the sample

The Board members who did not have an e-mail address listed were contacted by phone. The respondents could either specify their e-mail address so that they could receive the survey via e-mail, or have the survey distributed by phone or mail. By phone we obtained 83 e-mail addresses and 3 postal addresses, and we received 1 survey filled out by hand. Of the lay members, many had not yet participated in care order cases, as new members were elected at the end of 2014. Some had prolonged their engagement from the 2011-2014 period, and some were newly appointed. Several lay members contacted us and said that they did not feel competent enough to participate in the survey, as they might not have participated in a case yet. We responded that our first question charted experience, and that there was a vignette and some background questions they could answer. They were also able to “unsubscribe” from the survey if they did not want to participate. Some responded that we should sign them off, and some completed the survey. These were not deleted from the sample, but simply “unsubscribed”, counting as non-responses. For the expert members, some of those listed were engaged by the County Board in Oslo and Akershus, which also functions as the Norwegian Infection Control Board. We assume that most of these members contacted us in order to be deleted from our sample; two people did, and we deleted them. We cannot guarantee that everyone this applied to did the same, but as our expert member respondents state, 98 % of them have taken part in a care order case within the last 5 years, so this does not appear to be a significant problem. Some of the expert members also contacted us to say that they did not feel experienced enough to participate in the survey, since they had perhaps only taken part in a case or two. These were also not deleted from the sample, but simply “unsubscribed”, counting as non-responses. The only reasons for someone to be deleted from either sample were that they were on long-term sick leave or another form of leave of absence, had quit their engagement, or were retired; in sum, that they were not active County Board members at the time of the survey distribution. We also placed here the people we were unable to get hold of: people whose phone numbers did not work, who did not answer the phone, or who did not have an e-mail address listed and did not want to participate when we called. This constituted 2 Board leaders, 17 expert members and 124 lay members.

 

FINLAND

The system

Decisions about care orders in Finland are made either by the leading child welfare authority in the municipality (‘voluntary care orders’) or by the administrative courts (‘involuntary care orders’). If a custodian or a child who is 12 years of age or older opposes the proposal for a care order, the decision is made by the administrative court. Annually, this means that approximately one quarter of the care order decisions are made by the administrative courts.

There are six administrative courts in Finland. Decisions on care orders are made by a panel that consists of two judges and one expert member.

Privacy

The survey was operated by the University of Bergen and the privacy issues are presented in the Norwegian section. In Finland, the administrative courts were contacted to get their permission to distribute the survey.

Sample

The regional administrative courts were each contacted in January 2015 to discuss their interest in joining the study and how to access the decision-makers with the survey. Some courts gave the e-mail addresses of the judges and expert members involved in care order decision-making, whereas other courts promised to forward the invitation to join the research to the relevant people. Where e-mail addresses were given, each person was contacted by e-mail with a letter explaining the study and including the link to the survey. Otherwise, a similar letter and link were sent to the contact person in the court, who shared the material with the relevant people. A reminder about the study and survey was sent to each administrative court in late February. As a result, 65 surveys were returned: 28 from judges, 30 from expert members, and seven from assistant judges.

The administrative courts employ approximately 115 judges, who make decisions on all issues brought to the administrative courts. They may divide their tasks so that some of them work more with child protection issues than others, and they may also rotate these tasks during the year. Therefore, it is not possible to estimate how many judges were working with care order decisions at the time of the survey. The survey was answered by 28 judges. The number of expert members is approximately 66, including some people who are only “stand-by”. The response rate (30 surveys) was good. Seven assistant judges answered.

 

ENGLAND

Privacy

In England, ethical approval for the judges’ questionnaire was given by the Research Ethics Committee of the School of Social Work at the University of East Anglia. But in order to undertake research with judges, permission has to be given by the senior judge of the relevant courts, which in this case is the head of the family courts in England, known as the President of the Family Division. There is an application procedure to follow, and the guidelines emphasise that judicial resources are limited and it may not be possible for all research applications to be granted.

Distribution and recruiting process

We planned to distribute our questionnaire in autumn 2014, which fell during a period of great change in the child and family courts in England. New targets and guidelines to reduce the duration of care order proceedings had recently been introduced, and the number of applications had increased. In our research application, we therefore had to be realistic about the request we were making.

We highlighted the benefits of international comparisons for opening up new perspectives and possibilities, questioning one’s preconceptions and enabling one to look at one’s own system in a new light. We explained that the questions were the same as (or equivalent to) those used in an earlier questionnaire for social workers. We piloted the questionnaire with one judge, who advised us that it was relatively straightforward to complete, and not too time-consuming.

The system

Care cases may be heard at different levels of court. Some are heard by magistrates, also known as ‘lay judges’. They are not qualified lawyers, but volunteers who receive training for their role. They would usually sit for a few days per month. They hear cases as a panel of three, and are advised by a legal adviser, who is a qualified lawyer. Cases may also be heard by district judges, circuit judges and High Court judges. Different areas make more or less use of magistrates to hear care order cases.

Although there are national figures on the number of magistrates and judges (see Judicial Statistics 2015), it is not clear how many are authorised to hear care order cases, nor how many of those who are authorised actually do so (see Eekelaar and Maclean, 2013, Family Justice). There were 106 High Court judges; 640 circuit judges; 679 district judges and 115 deputy district judges; 1031 recorders working as part-time judges (this is often a route to becoming a circuit judge); and just under 20,000 magistrates (but again, there is no clear account of how many do family work; it would usually be a specialism for experienced magistrates).

The family court system is organised in 44 areas in England, each headed by a ‘designated family judge’ (DFJ). We asked for permission to send an email to each DFJ with an explanation of the study and a link to the questionnaire, asking them to select one professional judge and one lay judge (magistrate) in their area, or two professional judges if more appropriate, and ask them to complete it. This would have given us a maximum return of 88 questionnaires.

The sample and survey dates

Permission was granted by the President of the Family Division in November 2014, and the questionnaire was sent to the DFJs by the President’s PA in December 2014. This elicited responses from eleven judges and nine lay judges. Follow-up emails were sent in January and February 2015, but with limited success. In March 2015, one of the leading family judges in the High Court, Mrs Justice Pauffley, emailed the DFJs with a reminder and encouragement to respond to the request. This led to a surge in responses. The final total was 54 returns, from 35 judges and 19 lay judges.

Therefore our return rate, in terms of what we asked for, was 61%; but as a proportion of all judges and magistrates doing this work we cannot be certain, although we do know it is far smaller.

 

USA (CALIFORNIA)

The system

In California, the Juvenile Dependency Courts operate as a branch of the Superior Court, organized at the county level. These courts hear cases relating to children or youth who are not safe living in the home of their parent(s); they also hear cases relating to juveniles who have been accused of breaking the law (www.courts.ca.gov). Cases are heard by a juvenile court judge, commissioner, or referee. All counties have a Presiding Judge (PJ), who is typically voted into office by the electorate for a six-year term. Depending on the size of the county's population, the caseload may demand additional court personnel, in which case the PJ is responsible for assigning commissioners or referees, who serve as “at will” employees of the county (personal communication, Don Will, March 2016).

The sample

The link to the on-line survey was distributed along with a letter from the lead California researcher and a note of encouragement from a prominent juvenile court judge. Since the judge had access to an e-mail listserve including all of the juvenile court judges, referees, and other court personnel, the solicitation went out to all members of the listserve (n=190). We cannot determine how many of the recipients were juvenile court judges rather than other court personnel, and therefore cannot determine a response rate. It may be important to understand that there are 58 counties in the state of California, each with at least one juvenile court judge; in some counties (particularly Los Angeles) there are several juvenile court judges, so the number of eligible respondents was greater than 58 but likely less than 100.

The survey was distributed via e-mail in early April, 2015.  Potential respondents were sent reminders every week for four weeks. 

Privacy

The study was reviewed by the institutional review board at the University of California, Berkeley and received approval (study number 2014-07-6491). 

Instrument

One juvenile court judge widely known for her significant leadership in the state was engaged to review the draft survey instrument and overall design.  The judge suggested revisions to the language of the survey to make each question more precise and relevant to the experiences of juvenile court judges. 

The final survey largely mirrored that distributed in each of the other three countries, though the “Jon and Mira” vignette was ultimately deleted.  In the California context, it would be highly inappropriate for a juvenile court judge to offer advice to a local principal – as described in the vignette.  As such, the context of the question was considered irrelevant to respondents.

Issues with sample size

With 1636 participants (41 judges, 247 expert members and 1348 lay members), the Norwegian sample of court decision-makers is considerably larger than the samples from England (N=54), Finland (N=65) and the USA (N=39), which represents a potential validity problem for the weighting of the data. The size of the expert member sample is not considered a problem for the validity of the analysis in the same manner as that of the lay members. We have therefore isolated the sample of lay members from the other decision-makers, conducted separate analyses, and then included these results in the comparative analysis and the presentation of findings.

 

References

Bachman, R. & R. K. Schutt (2014): The Practice of Research in Criminology and Criminal Justice, Sage Publications.

Eurostat (2012, 2013): Mean and median income by household type (source: SILC), available at: http://appsso.eurostat.ec.europa.eu/nui/show.do?dataset=ilc_di04&lang=en

Gilbert, N., N. Parton & M. Skivenes (Eds.) (2011): Child Protection Systems - International Trends and Emerging Orientations, New York: Oxford University Press.

Skivenes, M., Barn, R., Kriz, K. & T. Pösö (Eds.) (2015): Child Welfare Systems and Migrant Children – A Cross Country Study of Policies and Practice, New York, Oxford University Press.

SSB (2009): Barnevernspersonell, available at: http://www.ssb.no/barnevernp

SSB (2011): Barnevern, available at: http://www.ssb.no/sosiale-forhold-og-kriminalitet/statistikker/barneverng

U.S. Census Bureau (2011): Income statistics, available at: http://www.census.gov/hhes/www/income/data/statistics/index.html

 

Appendixes to articles

«Are migrant children discriminated against? An experimental analysis of attitudes towards corporal punishment in Austria, Norway and Spain» Helland, H., Križ, K., Sánchez-Cabezudo, S.S. & Skivenes, M.

See appendix to this article below under «attachments».

«A cross-country comparison of child welfare systems and workers’ responses to children appearing to be at risk or in need of help» Berrick, J., Dickens, J., Pösö, T. & Skivenes, M. 

See appendix to this article below under «attachments».

«Children's involvement in care order decision-making: A cross-country analysis» Berrick, J. D., Dickens, J., Pösö, T. & Skivenes, M. (2015). International Journal of Child Abuse & Neglect, 9, 128-141.

See appendix to this article below under «attachments».

«International perspectives on child-responsive courts: Is the voice of the child included?»

Berrick, J., Skivenes, M., Pösö, T. & Dickens, J.

See appendix to this article below under «attachments».

«The best interests of the child in child protection»

Skivenes, Marit & Sørsdal, Line.

See appendix to this article below under «attachments».

«Children’s and parents’ involvement in care order proceedings: a cross-national comparison of judicial decision makers’ views and experiences»

Berrick, J., Dickens, J., Pösö, T. & Skivenes, M.

See appendix to this article below under «attachments».